source_id (int64, 1-74.7M) | question (string, lengths 0-40.2k) | response (string, lengths 0-111k) | metadata (dict)
---|---|---|---|
712,425 | I have installed pulseUI VPN on Ubuntu 22.04. After setting the vpn address, I receive a dialogue box that "Chromium Embedded Browser (CEF) is not installed on this machine. [...] Do you want to download and install Chromium Embedded Browser?" I accept, and a progress dialogue box pops up that says 'Installing Chromium Embedded Browser'. However, this appears to hang and the progress bar never fills after several hours. | While creating the post I found the instructions for manually installing CEF . I did (with root permissions): /opt/pulsesecure/bin$ ./setup_cef.sh install And verified with /opt/pulsesecure/bin$ ./setup_cef.sh check_installed | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/712425",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/536375/"
]
} |
712,454 | I found a similar question here , but it is not exactly what I want. Let's suppose I have a text file like this: [...]age: 10country: United Statescity: New Yorkname: Johnage: 27country: Canadacity: Torontoname: Robertage: 32country: Mexicocity: Guadalajaraname: Pedro[...] I want to match the line starting with "name: Robert" and print the 3 previous lines along with the matched result, getting only these lines: age: 27country: Canadacity: Torontoname: Robert How can I do this in the terminal? | Using grep $ grep -B3 '^name: Robert$' input_fileage: 27country: Canadacity: Torontoname: Robert | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/712454",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/495239/"
]
} |
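Not every grep implementation has the non-standard `-B` option; a minimal awk alternative that keeps a rolling buffer of the last three lines would be (a sketch, assuming the exact line format from the question):

```
awk '/^name: Robert$/ { for (i = 3; i >= 1; i--) if (buf[i] != "") print buf[i]; print; next }
     { buf[3] = buf[2]; buf[2] = buf[1]; buf[1] = $0 }' input_file
```

The buffer trick generalizes to any context size, at the cost of skipping genuinely empty context lines in this simple form.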
712,458 | On a centos 7 machine, I'd like to run a python server alongside an apache server. I figured the easiest way would be to configure apache as a reverse proxy. This is my VirtualHost configuration: <VirtualHost *:443> DocumentRoot /home/username/mydomain/src ServerName mydomain.com ErrorLog logs/mydomain-error_log CustomLog logs/mydomain-access_log common DirectoryIndex index.php <Directory /home/username/mydomain/src> Options -Indexes +FollowSymLinks AllowOverride None Require all granted AddOutputFilterByType DEFLATE text/html text/plain text/xml </Directory> ProxyPreserveHost On ProxyPass /mediaproxy http://127.0.0.1:9001/mediaproxy ProxyPassReverse /mediaproxy http://127.0.0.1:9001/mediaproxy LogLevel alert rewrite:trace6 RewriteEngine on RewriteCond %{REQUEST_FILENAME} !-d RewriteCond %{REQUEST_FILENAME} !-f RewriteRule ^/api/media/(.*) /data/$1 [L] RewriteRule ^/api/v1/* /api/v1/index.php [L] RewriteRule ^/assets/(.*) /site/v1/content/assets/$1 [L] RewriteRule ^/css/(.*) /site/v1/content/css/$1 [L] RewriteRule ^/js/(.*) /site/v1/content/js/$1 [L] RewriteRule ^/fonts/(.*) /site/v1/content/fonts/$1 [L] RewriteRule ^/* /index.php [L] # problematic rule // lets encrypt entries Now, my problem is that rewrite rules takes precedence over ProxyPass. That ism when I visit mydomain.com/mediaproxy/somepage, it serves the content at /index.php , specified with RewriteRule ^/* /index.php [L] . Reverse proxy works correctly if I remove the problematic rule. Unfortunately I need to keep it. How do I tell apache to use ProxyPass rule first, and use RewriteRule only if there is no match? | Using grep $ grep -B3 '^name: Robert$' input_fileage: 27country: Canadacity: Torontoname: Robert | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/712458",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/213796/"
]
} |
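For the ProxyPass/RewriteRule precedence question itself, a common remedy is to keep mod_rewrite away from the proxied prefix; the snippets below are a sketch and would need to be placed before the catch-all rule in the same vhost:

```
# pass /mediaproxy through untouched so ProxyPass can handle it
RewriteRule ^/mediaproxy - [L]

# ...or guard the catch-all so it never swallows the proxied path
RewriteCond %{REQUEST_URI} !^/mediaproxy
RewriteRule ^/* /index.php [L]
```

Another option along the same lines is to let mod_rewrite do the proxying for that prefix itself with the `[P]` flag instead of relying on ProxyPass ordering.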
712,477 | I have Afile made by the equivalent of cat a.gif b.7z > Afile . How may I split Afile into the original a.gif and b.7z files? | You'll have to figure out where the gif ends and where the 7z starts. If you don't know the original size of the gif file, you can try and spot the start of the 7z file which should start with the 7z characters. If you're lucky: grep -boa 7z Afile (assuming the GNU implementation of grep or compatible for its non-standard -b ( b yte offset), -o ( o utput matched portion only) -a ( a ll files including non-text ones)) will return only one: <offset>:7z Line where <offset> will be the offset in the file where the 7z file starts. Then, you can extract them with: tail -c +<offset+1> Afile > b.7zhead -c <offset> Afile > a.gif For instance, if grep returns 1234:7z , run tail -c +1235 Afile > b.7z and head -c 1234 > a.gif . If grep returns more than one, one of them will be the start of the 7z file whilst the other ones will just be the gif or 7z files happening to contain the 0x37 0x7a (the values of the 7 and z character in the ASCII set) byte sequence. To determine which is the right one, you can pipe the output of tail -c for each of them to file - which should return something like 7-zip archive data for the right one. Or even try to list its contents with bsdtar tf - for instance. tail -c +<offset+1> Afile | file -tail -c +<offset+1> Afile | bsdtar tf - The binwalk utility can be used to automate that process, as it tries to find file format signatures inside files (typically used to extract information from firmware images): $ binwalk AfileDECIMAL HEXADECIMAL DESCRIPTION--------------------------------------------------------------------------------0 0x0 GIF image data, version "89a", 584 x 1378570 0x217A 7-zip archive data, version 0.4 Ideally, as noted by @Henrik in comments, you'd want to look inside the gif part metadata for the information as to where the end of the GIF data is. I checked ImageMagick's identify , GNU extract , perl's Image::Info and exiftool , common tools that report information out of images and neither of them reported that information unfortunately. It's likely possible to do it by hand by studying the GIF image format specification , another approach could be to hook into image viewers or converters to see where they stop reading the file when trying to parse the file. I find that giftopnm from the venerable netpbm software lets me do that. In zsh: zmodload zsh/system{ giftopnm > /dev/null head -c $(( systell(0) )) < Afile > a.gif cat > b.7z} < Afile Works in my test as giftopnm leaves the position within stdin just after the end of the gif file after converting to pnm (which we discard here). That assumes the gif didn't already have extra information after the end of the data, which looks like it's not uncommon. See for instance libreoffice's gallery/htmlexpo/bludown.gif which has 212 seemingly random bytes after the end of the useful data. That cut.gif in openjdk seems to have 949 extra bytes (almost 80% of the size of the file!), including some Sun Microsystems copyright notice (not cleaned by mat2 ) | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/712477",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/536180/"
]
} |
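The `grep -boa 7z` search can false-positive on any stray "7z" byte pair; matching the full six-byte 7-zip signature (37 7A BC AF 27 1C) narrows it down. A sketch assuming GNU grep with PCRE support:

```
offset=$(grep -boaP '7z\xbc\xaf\x27\x1c' Afile | head -n1 | cut -d: -f1)
head -c "$offset"          Afile > a.gif
tail -c +"$((offset + 1))" Afile > b.7z
```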
712,511 | I am dealing with an embedded system which has some memory that is accessible by a file descriptor (I have no idea what am I saying, so please correct me if I am wrong). This memory is 32 kB and I want to fill it with 0x00 to 0xFFFFFFFF. I know this for text files: exec {fh} >> ./eeprom;for i in {0..32767}; do echo $i >& $fh; done; $fh>&-; This will write ASCII characters 0 to 977. And if I do a hexdump eeprop | head I get: 0000000 0a30 0a31 0a32 0a33 0a34 0a35 0a36 0a370000010 0a38 0a39 3031 310a 0a31 3231 310a 0a330000020 3431 310a 0a35 3631 310a 0a37 3831 310a0000030 0a39 3032 320a 0a31 3232 320a 0a33 34320000040 320a 0a35 3632 320a 0a37 3832 320a 0a390000050 3033 330a 0a31 3233 330a 0a33 3433 330a0000060 0a35 3633 330a 0a37 3833 330a 0a39 30340000070 340a 0a31 3234 340a 0a33 3434 340a 0a350000080 3634 340a 0a37 3834 340a 0a39 3035 350a0000090 0a31 3235 350a 0a33 3435 350a 0a35 3635 How can I fill each address with its uint32 , not the ASCII representation? | perl -e 'print pack "L*", 0..0x7fff' > file Would write them in the local system's endianness. Use: perl -e 'print pack "L>*", 0..0x7fff'perl -e 'print pack "L<*", 0..0x7fff' To force big-endian or little-endian respectively regardless of the native endianness of the local system. See perldoc -f pack for details. With bash builtins specifically, you can write arbitrary byte values with: printf '\123' # 123 in octalprintf '\xff' # ff in hexadecimal So you could do it by writing each byte of the uint32 numbers by hand with something like: for ((i = 0; i <= 32767; i++)); do printf -v format '\\x%x' \ "$(( i & 0xff ))" \ "$(( (i >> 8) & 0xff ))" \ "$(( (i >> 16) & 0xff ))" \ "$(( (i >> 24) & 0xff ))" printf "$format"done (here in little-endian). In any case, note that 32767 is 0x7fff, not 0xFFFFFFFF . uint32 numbers 0 to 32767 take up 128KiB, not 32kb. 0 to 0xFFFFFFFF would take up 16GiB. To write those 16GiB in perl , you'd need to change the code to: perl -e 'print pack "L", $_ for 0..0xffffffff' As otherwise it would try (and likely fail) to allocate those 16GiB in memory. On my system, I find perl writes the output at around 30MiB/s, while bash writes it at around 250KiB/s (so would take hours to complete). To write 32kb (32000 bits, 4000 bytes, 1000 uint32 numbers) worth of uint32 numbers, you'd use the 0..999 range. 0..8191 for 32KiB. Or you could write 0..16383 as uint16 numbers by replacing L (unsigned long) with S (unsigned short). | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/712511",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/330762/"
]
} |
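Where perl is not available, python3 can emit the same little-endian stream (a sketch mirroring the `pack "L<*"` example; 8192 values of 4 bytes each make up the 32 KiB mentioned at the end of the answer):

```
python3 -c 'import struct, sys; sys.stdout.buffer.write(b"".join(struct.pack("<I", i) for i in range(8192)))' > eeprom
```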
712,525 | In Bash, how do you programmatically get the job id of a job started with & ? It is possible to start a job in the background with & , and then interact with it using its job id with Bash builtins like fg , bg , kill , etc. For instance, if I start a job like yes > /dev/null & I can then kill it with the following command (assuming this job gets job id 1): kill %1 When creating a new job with & , how do you programmatically get the job id of the newly created job? I realize you can get the process id (not the job id) with $! , but I am specifically wondering about how you can get the job id. | The command jobs prints the currently running background jobs along with their ID: $ for i in {1..3}; do yes > /dev/null & done[1] 3472564[2] 3472565[3] 3472566$ jobs[1] Running yes > /dev/null &[2]- Running yes > /dev/null &[3]+ Running yes > /dev/null & So, to get the id of the last job launched that is still running, since it will be marked with a + , you could do (with GNU grep ): $ jobs | grep -oP '\d(?=]\+)'3 Or, more portably: $ jobs | sed -n 's/^\[\([0-9]*\)\]+.*/\1/p' However, note that if you suspend one of the jobs, then that will take the + . So you might want to just take the last line: $ jobs | tail -n1 | cut -d' ' -f1 | tr -d '][+'3 | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/712525",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/80019/"
]
} |
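If the goal is only to act on the job that was just started, bash's `%+` (or `%%`) job designator already names the current job, so no parsing is needed; capturing the number itself relies on the bash-specific behaviour that `jobs` inside `$(...)` still reports the parent shell's job table:

```
$ yes > /dev/null &
$ kill %+                                             # current job, normally the last one started

$ yes > /dev/null &
$ jid=$(jobs | tail -n1 | cut -d' ' -f1 | tr -d '][+')
$ kill "%$jid"
```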
712,556 | I am using docker's official docker image and want to install python3.9.6 Running this installs python 3.10.x apk add --update --no-cache python3 && ln -sf python3 /usr/bin/python and I can't figure out how to specify exact version of python I want to install. Please don't suggest using docker images with python preinstalled | The command jobs prints the currently running background jobs along with their ID: $ for i in {1..3}; do yes > /dev/null & done[1] 3472564[2] 3472565[3] 3472566$ jobs[1] Running yes > /dev/null &[2]- Running yes > /dev/null &[3]+ Running yes > /dev/null & So, to get the id of the last job launched that is still running, since it will be marked with a + , you could do (with GNU grep ): $ jobs | grep -oP '\d(?=]\+)'3 Or, more portably: $ jobs | sed -n 's/^\[\([0-9]*\)\]+.*/\1/p' However, note that if you suspend one of the jobs, then that will take the + . So you might want to just take the last line: $ jobs | tail -n1 | cut -d' ' -f1 | tr -d '][+'3 | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/712556",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/284397/"
]
} |
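For the Alpine question, each release branch ships a single python3 package, so a hedged approach is either to pull the package from a branch that carried the 3.9 series (Alpine 3.14 did) or to pin a version constraint; the exact version available and the repository URL below are assumptions to verify against the Alpine package index, and an exact 3.9.6 patch level may simply not be packaged:

```
# take python3 from the v3.14 branch, which shipped Python 3.9.x
apk add --no-cache python3 --repository=http://dl-cdn.alpinelinux.org/alpine/v3.14/main

# or constrain the version (fails if the enabled branches don't carry a 3.9.x build)
apk add --no-cache 'python3=~3.9'
```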
712,588 | When reading over Google's shell style guide , I found the following line that confused me a little: "Executables should have no extension (strongly preferred) or a .sh extension. Libraries must have a .sh extension and should not be executable." link At least when it comes to bash scripts, what exactly is the difference between an executable and a library of bash scripts? Are the scripts in the library not also executables? | At least when it comes to bash scripts, what exactly is the difference between an executable and a library of bash scripts? I haven't heard the phrase "library" used much for shell scripts, and if I would, I'd suggest that the fact that the program needs to be split into libraries implies that it'd be better off implemented in some better programming language. That said, I'd expect a library in shell programming would be similar to a library in other programming languages: something that provides functions and subroutines for other programs to use, but doesn't in itself do anything in particular with those tools. E.g. a library might provide the tools for compressing/uncompressing some data, but it would take a program using that library to implement a command line tool for compressing/uncompressing files named on the command line. In practice, a program implemented in the shell would be just executed normally, like any program, while a shell library would only contain function definitions and would be sourced from another shell script, pulling those definitions in. E.g. of the two files below, I might call main a program, and functions.sh a library. main could made executable and executed as ./main . (Or placed in $PATH , but then functions.sh also needs to be in $PATH so that source will find it.) Note that while you could run e.g. bash functions.sh , it wouldn't do anything. main : #!/bin/bashsource functions.shsay hello world functions.sh : say() { printf "%s\n" "$*"} The point about executable programs not having an extension probably has to do with not exposing the implementation. The users of a program don't need to know how it's implemented, and in case a shell script is reimplemented in Perl, it's less confusing if it's not called foo.sh , and less work if the name doesn't need to be changed (requiring modifications to all users of the program). A library of shell functions can only be used from a shell script though, so exposing the implementation there does make sense. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/712588",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/241691/"
]
} |
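A related idiom lets a single file serve both roles, acting as a library when sourced and running a demo when executed directly (a bash-specific sketch using BASH_SOURCE):

```
#!/bin/bash
say() {
    printf '%s\n' "$*"
}

# run a demo only when executed directly, not when sourced as a library
if [[ "${BASH_SOURCE[0]}" == "$0" ]]; then
    say "running standalone"
fi
```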
712,603 | I think my VPS is hacked - there are multiple instances of curl process running and pointing to a Jpg file of some IP address. How to stop that process from getting created over and over? There are multiple such processes running simultaneously, deleting one of them doesn't delete the other. I have tried deleting all but they get recreated. | At least when it comes to bash scripts, what exactly is the difference between an executable and a library of bash scripts? I haven't heard the phrase "library" used much for shell scripts, and if I would, I'd suggest that the fact that the program needs to be split into libraries implies that it'd be better off implemented in some better programming language. That said, I'd expect a library in shell programming would be similar to a library in other programming languages: something that provides functions and subroutines for other programs to use, but doesn't in itself do anything in particular with those tools. E.g. a library might provide the tools for compressing/uncompressing some data, but it would take a program using that library to implement a command line tool for compressing/uncompressing files named on the command line. In practice, a program implemented in the shell would be just executed normally, like any program, while a shell library would only contain function definitions and would be sourced from another shell script, pulling those definitions in. E.g. of the two files below, I might call main a program, and functions.sh a library. main could made executable and executed as ./main . (Or placed in $PATH , but then functions.sh also needs to be in $PATH so that source will find it.) Note that while you could run e.g. bash functions.sh , it wouldn't do anything. main : #!/bin/bashsource functions.shsay hello world functions.sh : say() { printf "%s\n" "$*"} The point about executable programs not having an extension probably has to do with not exposing the implementation. The users of a program don't need to know how it's implemented, and in case a shell script is reimplemented in Perl, it's less confusing if it's not called foo.sh , and less work if the name doesn't need to be changed (requiring modifications to all users of the program). A library of shell functions can only be used from a shell script though, so exposing the implementation there does make sense. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/712603",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/31726/"
]
} |
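For the recurring-process question, killing the curl instances achieves nothing until whatever respawns them is found; a few hedged first steps, run as root (the paths and the <pid> placeholder are illustrative):

```
# see which parent keeps launching them
ps -eo pid,ppid,user,lstart,cmd | grep -F curl | grep -v grep

# check every crontab, cron drop-in and systemd timer that could respawn it
for u in $(cut -d: -f1 /etc/passwd); do crontab -l -u "$u" 2>/dev/null | sed "s/^/$u: /"; done
ls /etc/cron.* /var/spool/cron 2>/dev/null
systemctl list-timers --all

# inspect the binary and working directory of a live instance
ls -l /proc/<pid>/exe /proc/<pid>/cwd
```

Once a persistence mechanism is confirmed, the safer course for a compromised VPS is usually a rebuild rather than piecemeal cleanup.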
712,655 | Why does the POSIX standard reserve the -W option for vendor extensions of the system utilities? I do not understand why the letter ‘W’ is used. ‘V’ (for v endor) could make more sense. Maybe this question should be moved to Retrocomputing SE. | This provision was added between Single Unix v2 (1997) and Single Unix v3 (2001). It wasn't done in a vacuum: it had to take into account both the previous specifications and existing practice. If a letter was already specified for some commands, the existing commands would have to be grandfathered in and wouldn't be able to follow this guideline. If a letter was already used by popular programs not specified by POSIX or by popular implementations of POSIX programs, this would have made it harder to specify those utilities later, and harder for users to remember options with similar meanings but different letters for different commands. Looking at the documented options in SUSv2: grep -h -Po '(?<=^<dt><b>-)[[:alnum:]]' /usr/share/doc/susv2/susv2/xcu/*.html | sort | uniq -c we can see that all the lowercase letters are taken by at least one utility, and most uppercase letters as well. The free letters are -B , -J , -K , -Y and -Z . -V is taken only for two commands: command , where it's a variant of -v (added — I don't know by who originally, possibly one of the Unix specification working groups or ksh — because the original definition of -v wasn't quite satisfactory). dis , where it's an option to print the version of the utility. POSIX could have chosen -V for vendor, but it would have meant that command would not have followed the guidelines. This would have been annoying since command was created for the sake of portability (both for its behavior of avoiding differences between shell builtins and external utilities, and for its function similar to type but without the output formatting variability). In addition, dis was far from the only program out there to use -V for “version” (most of these weren't codified by POSIX because they weren't part of the base system: you don't need a “print version” option for a utility that's part of the base system, you just use the version number of the base system). So -V would have had too many exceptions, both inside POSIX and out, to be a good choice. -W was only taken by cc . cc implementations tended to differ quite a lot between vendors (in particular, with respect to which C dialect it expected), which led to it being removed from future versions of the standard (replaced by c89 , c99 , etc.). Since the next version of the standard no longer had cc , giving -W a new meaning didn't exclude any standard utility. As far as I know, it wasn't a particularly common choice of option letter in non-POSIX utilities, so it was up for grabs. Why -W and not another of the uppercase letters that wasn't used at all? I don't know for sure, it could have been arbitrary, but it didn't come out of the blue. The -W option was codified for cc with an argument that itself had to have a certain structure allowing multiplexing: it had to start with a character specifying what “subdomain” (compilation phase) the option applies to, followed by “subdomain-specific” options. Since POSIX.1-2001 only leaves one letter for implementation-specific options, this letter would have to be multiplexed in order to allow more than one implementation-specific behavior change. 
So the -W option of cc was an inspiration for how the implementation-specific -W could be used — not necessarily the exact syntax, but the basic principle of taking an argument with a prefix indicating a “sub-option” of some sort. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/712655",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/194408/"
]
} |
712,749 | Suppose I have a path like this under some directory (e.g. home): ~/mydir/subdir/file1 ,and I want to make a copy of file1 : We could do this from the home directory: cp ~/mydir/subdir/file1 ~/mydir/subdir/file2 Is there a shortcut that means "in the same path as the source" so I don't have to repeat the mydir/subdir portion again, just like . means the current directory? Or another way to achieve the same result? I know I can cd ~/mydir/subdir first, but I'm looking for a one line solution that would allow me to stay in the same directory. One option would be to use a variable, but I'm hoping there's a more elegant way when typing lots of commands. Had a look at man cp and searched online but couldn't find such an option. If the answer is a definite "no" that's fine too, just thought I'd check if there's a time saving trick others use. Thanks! | using bash (1), and assuming path without space or new line (2), I would use cp ~/mydir/subdir1/{file1,file2} this would be expanded by bash to (along with ~ expansion) cp ~/mydir/subdir1/file1 ~/mydir/subdir1/file2 as always you can test using echo echo cp ~/mydir/subdir1/{file1,file2} As per comment : (1) this may work on other shell, I didn't have time to test them all. (2) for "funny" names, either use tab completion (so bash will put proper escape), or use quotes (but not arround braces) : mv ./"some dir"/{"some name","other id"} | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/712749",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/212177/"
]
} |
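The variable route mentioned in the question is also a one-liner and works in any POSIX shell, at the cost of naming the directory once:

```
d=~/mydir/subdir; cp "$d/file1" "$d/file2"
```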
712,756 | I've used the ethernet interface rarely in the past on my debian 10. Last week I've updated my debian to debian 11. During the installation it failed to connect via ethernet. I've tried 3 different cable some of them are in daily use for my tv. So a faulty cable can be ruled out. It was quite a challenge to get through the installation as I need some non-open firmwire for my wlan interface. So now I would like to figure out if it is a setup issue or my hardware (ethernet) is broken. I'm not at all a specialist in interfaces / hardware related stuff. So would be great if someone could tell me what's the most likely case. Running a simple sudo lshw -class network -shortH/W path Device Class Description============================================================/0/100/1c.6/0 wlp3s0 network Wireless 8265 / 8275/0/100/1f.6 enp0s31f6 network Ethernet Connection (4) I219-V looks to me that the interface is wokring correctly, no? Does this mean the hardware has most likely a loose contact / is broken? ip link show1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000 link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:002: enp0s31f6: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc pfifo_fast state DOWN mode DEFAULT group default qlen 1000 link/ether 8c:16:45:32:c8:b8 brd ff:ff:ff:ff:ff:ff3: wlp3s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DORMANT group default qlen 1000 link/ether 00:21:6b:ff:ac:d5 brd ff:ff:ff:ff:ff:ff Edit /usr/sbin/ethtool enp0s31f6Settings for enp0s31f6: Supported ports: [ TP ] Supported link modes: 10baseT/Half 10baseT/Full 100baseT/Half 100baseT/Full 1000baseT/Full Supported pause frame use: No Supports auto-negotiation: Yes Supported FEC modes: Not reported Advertised link modes: 10baseT/Half 10baseT/Full 100baseT/Half 100baseT/Full 1000baseT/Full Advertised pause frame use: No Advertised auto-negotiation: Yes Advertised FEC modes: Not reported Speed: Unknown! Duplex: Unknown! (255) Auto-negotiation: on Port: Twisted Pair PHYAD: 2 Transceiver: internal MDI-X: Unknown (auto) Supports Wake-on: pumbg Wake-on: g Current message level: 0x00000007 (7) drv probe link Link detected: no Edit 2 After sample setting sudo /usr/sbin/ethtool -s enp0s31f6 speed 100 duplex full[sudo] password for nicolas: (srv) nicolas@debian:~/phd/src$ sudo /usr/sbin/ethtool enp0s31f6Settings for enp0s31f6: Supported ports: [ TP ] Supported link modes: 10baseT/Half 10baseT/Full 100baseT/Half 100baseT/Full 1000baseT/Full Supported pause frame use: No Supports auto-negotiation: Yes Supported FEC modes: Not reported Advertised link modes: 100baseT/Full Advertised pause frame use: No Advertised auto-negotiation: Yes Advertised FEC modes: Not reported Speed: Unknown! Duplex: Unknown! (255) Auto-negotiation: on Port: Twisted Pair PHYAD: 2 Transceiver: internal MDI-X: Unknown (auto) Supports Wake-on: pumbg Wake-on: g Current message level: 0x00000007 (7) drv probe link Link detected: no | using bash (1), and assuming path without space or new line (2), I would use cp ~/mydir/subdir1/{file1,file2} this would be expanded by bash to (along with ~ expansion) cp ~/mydir/subdir1/file1 ~/mydir/subdir1/file2 as always you can test using echo echo cp ~/mydir/subdir1/{file1,file2} As per comment : (1) this may work on other shell, I didn't have time to test them all. (2) for "funny" names, either use tab completion (so bash will put proper escape), or use quotes (but not arround braces) : mv ./"some dir"/{"some name","other id"} | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/712756",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/18180/"
]
} |
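For the NO-CARRIER question, "Link detected: no" with otherwise sane ethtool output usually means the driver is loaded but no electrical link is negotiated; before blaming the NIC, it is worth watching the driver while re-plugging a known-good cable (commands assume the I219-V's e1000e driver):

```
# follow kernel/driver messages while unplugging and replugging the cable
sudo dmesg -wT | grep -i -e e1000e -e enp0s31f6

# in another terminal, restore autonegotiation and bounce the interface
sudo ethtool -s enp0s31f6 autoneg on
sudo ip link set enp0s31f6 down && sudo ip link set enp0s31f6 up
```

If no "Link is Up" message ever appears across several cables and switch ports, a dead PHY or port becomes the likely explanation.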
712,788 | I would like the sed equivalent to: grep -Eo ' regex ' . I might, hypothetically, want to do further work with the output. This step might be just the first part of somethingthat will be more elaborate by adding ; s… to a longer sed expression. To be clearer, I want to be able to isolate each string matching a given regular expression in an input stream. For proof-of-concept purposes,each such string should be output as a separate line with no context(i.e., no surrounding text from the input). So an input line with multiple (non-overlapping) matchesshould result in multiple output lines;an input line with no matches should result in no output. Example: Regular expression: [a-zA-Z]{3}[0-9]{4} (i.e., three letters followed by four digits) Input: FGH1234 and CAS4057MAX2345 Output: FGH1234CAS4057MAX2345 | Update to fix behaviour for zero-length regex matches: sed 't match;s/REGEX/\n&\n/g;D;:match;/^\n/!P;s/\n//;D' file Globally substitute matches with <newline><matched part><newline> . Then print them by creating a loop P;s/\n//;D back to t match and so on until all matched parts have been printed. /^\n/!P is used instead of just P so that only non-empty matches are printed (like GNU grep -o does). A similar approach using awk could be: regex='REGEX' awk 'BEGIN {FS="\n"} gsub(ENVIRON["regex"], FS "&" FS) {for (i=2;i<NF;i+=2) if ($i!="") print $i}' file Original attempt: note that these commands behave badly when given a regex that matches an empty string (such as .* ) - empty lines will be printed in an endless loop. With a single invocation of sed : sed 't matchs/[[:alpha:]]\{3\}[[:digit:]]\{4\}/\&\/;D;:matchP;D' file POSIX sed syntax is used: the regex is a basic regular expression, \ -escaped newlines are used in the replacement string of s/// , and newlines are used rather than ; after the branch labels. Some versions of sed (such as GNU sed) can accept the script all on one line: sed 't match;s/[[:alpha:]]\{3\}[[:digit:]]\{4\}/\n&\n/;D;:match;P;D' file The substitution isolates the first match by adding newlines before and after the matching portion. The conditional branch t match at the start of the script will only be followed after a successful substitution is made. :match is where the matching portion is printed. D is used so that the line containing the match is removed from the pattern space and the remainder used as input for the next cycle, allowing further matches to be found. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/712788",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/327114/"
]
} |
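As a quick check against the question's own example data (GNU sed syntax):

```
$ printf 'FGH1234 and CAS4057\nMAX2345\n' |
    sed 't match;s/[[:alpha:]]\{3\}[[:digit:]]\{4\}/\n&\n/;D;:match;P;D'
FGH1234
CAS4057
MAX2345
```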
712,830 | I am investigating the behavior of a binary on Oracle Linux 9 (XFS filesystem). This binary, when called by a process, creates a directory under /tmp and copies some files to it. This directory gets a randomized name each time the process runs (a keyword + a GUID). Immediately after, it deletes the directory. I want to access the files contained in this directory before it is deleted, but the whole process ends too fast for any of my commands. Is there any way I could "intercept" and copy this directory before it is deleted? | You could always run the application under: gdb --args /path/to/your/your-program and its args Then add breakpoints on unlink() , unlinkat() , rmdir() functions or syscalls: catch syscall unlinkcatch syscall unlinkatcatch syscall rmdirrun Then each time a breakpoint is reached, check that it's about deleting files in that directory and inspect the files in there or copy them elsewhere. Enter cont in gdb to resume execution (until the next breakpoint). Example with rm -rf : $ gdb -q --args rm -rf /tmp/tmp.HudBncQ4NiReading symbols from rm...Reading symbols from /usr/lib/debug/.build-id/f6/7ac1d7304650a51950992d074f98ec88fe2f49.debug...(gdb) catch syscall unlinkCatchpoint 1 (syscall 'unlink' [87])(gdb) catch syscall unlinkatCatchpoint 2 (syscall 'unlinkat' [263])(gdb) catch syscall rmdirCatchpoint 3 (syscall 'rmdir' [84])(gdb) runStarting program: /bin/rm -rf /tmp/tmp.HudBncQ4NiCatchpoint 2 (call to syscall unlinkat), 0x00007ffff7eb6fa7 in __GI_unlinkat () at ../sysdeps/unix/syscall-template.S:120120 ../sysdeps/unix/syscall-template.S: No such file or directory.(gdb) info registersrax 0xffffffffffffffda -38rbx 0x555555569830 93824992319536rcx 0x7ffff7eb6fa7 140737352789927rdx 0x0 0rsi 0x555555569938 93824992319800rdi 0x4 4rbp 0x555555568440 0x555555568440rsp 0x7fffffffda48 0x7fffffffda48r8 0x3 3r9 0x0 0r10 0xfffffffffffffa9c -1380r11 0x206 518r12 0x0 0r13 0x7fffffffdc30 140737488346160r14 0x0 0r15 0x555555569830 93824992319536rip 0x7ffff7eb6fa7 0x7ffff7eb6fa7 <__GI_unlinkat+7>eflags 0x206 [ PF IF ]cs 0x33 51ss 0x2b 43ds 0x0 0es 0x0 0fs 0x0 0gs 0x0 0(gdb) x/s $rsi0x555555569938: "test"(gdb) info procprocess 7524cmdline = '/bin/rm -rf /tmp/tmp.HudBncQ4Ni'cwd = '/export/home/stephane'exe = '/bin/rm'(gdb) !readlink /proc/7524/fd/4/tmp/tmp.HudBncQ4Ni(gdb) !find /tmp/tmp.HudBncQ4Ni -ls 1875981 4 drwx------ 2 stephane stephane 4096 Aug 8 09:30 /tmp/tmp.HudBncQ4Ni 1835128 4 -rw-r--r-- 1 stephane stephane 5 Aug 8 09:30 /tmp/tmp.HudBncQ4Ni/test Here, the breakpoint was on the unlinkat() system call for the test entry inside /tmp/tmp.HudBncQ4Ni on a x86_64 Linux system where the first two arguments of the syscall are in the rdi and rsi registers. strace can inject signals to a process when a syscall is called ( strace -e inject=unlink,unlinkat,rmdir:signal=STOP to suspend for instance), but AFAICT it always does it after the syscall returns, so once the file has already been removed. You can however delay the entry so you can suspend by hand with Ctrl + Z for instance: $ strace -e inject=unlink,unlinkat,rmdir:delay_enter=5s -e unlink,unlinkat,rmdir rm -rf /tmp/tmp.HudBncQ4Niunlinkat(4, "test", 0^Zzsh: suspended strace -e inject=unlink,unlinkat,rmdir:delay_enter=10s -e rm -rf Or, as suggested by @PhilippWendler, you can use: strace -e inject=unlink,unlinkat,rmdir:retval=0 -e unlink,unlinkat,rmdir ... or: strace -e inject=unlink,unlinkat,rmdir:error=EACCES -e unlink,unlinkat,rmdir ... 
To hijack the syscalls and pretend they succeed (with retval=0 ) or fail (with EACCES here meaning Permission denied ) without actually calling them. Both gdb and strace can attach to an already running process with --pid <the-process-id> / -p <the-process-id> respectively. They can also be told to follow forks and execs and trace the children as well so you can attach to the parent and watch for or hijack unlinks in the children (see -f in strace and the follow-* settings in gdb ) | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/712830",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/536771/"
]
} |
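If attaching a debugger is too heavy-handed, inotify can sometimes win the race by copying the directory the moment it appears; this is a sketch using inotify-tools, the /tmp/keyword prefix is an assumption, and a very fast create/delete cycle can still outrun the copy:

```
mkdir -p /root/captured
inotifywait -m -e create --format '%w%f' /tmp |
while IFS= read -r path; do
    case $path in
        (/tmp/keyword*) cp -a "$path" /root/captured/ ;;
    esac
done
```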
712,838 | I need forward packets from one server ("as a proxy") to another with keeping the original IP address of clients.Topology is:X.X.X.X - public IP1 ("proxy server")Y.Y.Y.Y - public IP2 (host for virtualization)172.16.0.2 - private IP (virutal machine with web server) some public IP ---> X.X.X.X ---> Y.Y.Y.Y ---> 172.16.0.2 (web server) On systems with X.X.X.X and Y.Y.Y.Y forwarding is enabled. Traffic from some public IP via X.X.X.X is on X.X.X.X routed to Y.Y.Y.Y but it never gets there. I captured it with tcpdump.If I use masquerade on "proxy server" it works OK, but the original ip address is not preserved.If I use DNAT for port 80 only form Y.Y.Y.Y to 172.16.0.2 - iptables -t nat -A PREROUTING -d Y.Y.Y.Y/32 -i venet0 -p tcp -m tcp --dport 80 -j DNAT --to-destination 172.16.0.2:80 and I try Y.Y.Y.Y:80 it works ok. Problem is the most probably on "proxy server" with public IP X.X.X.X. "Proxy server" host IP X.X.X.X It has only one interface with connect to the internet - eth0. iptables rule: (forwarding in filter table is allowed) iptables -t nat -A PREROUTING -i eth0 -p tcp -m tcp --dport 443 -j DNAT --to-destination Y.Y.Y.Y:443 Route table:It uses main table with: default via X.X.X.1 dev eth0 onlink Host IP Y.Y.Y.Y It has only one interface with connect to the internet - venet0.For VM is used Qemu and interface br0. iptables rule: (forwarding in filter table is allowed) iptables -t nat -A PREROUTING -d Y.Y.Y.Y/32 -i venet0 -p tcp -m tcp --dport 443 -j DNAT --to-destination 172.16.0.2:443iptables -t nat -A POSTROUTING -s 172.16.0.0/24 ! -o br0 -j MASQUERADE Route table:It uses main table with: default via 255.255.255.254 dev venet0 Host IP 172.16.0.2 It has only one interface with connect to the internet - ens6Route table:It uses main table with: default via 172.16.0.1 dev ens6 proto static iptables rules according to gapsf answer: X.X.X.X iptables rules: iptables -t nat -A PREROUTING -i eth0 -p tcp -m tcp --dport 443 -j DNAT --to-destination Y.Y.Y.Yiptables -t nat -A POSTROUTING -s Y.Y.Y.Y/32 -o eth0 -p tcp -m tcp --sport 443 -j SNAT --to-source X.X.X.Xiptables -A FORWARD -d Y.Y.Y.Y/32 -p tcp -m tcp --dport 443 -j ACCEPTiptables -A FORWARD -s Y.Y.Y.Y/32 -p tcp -m tcp --sport 443 -j ACCEPT Y.Y.Y.Y iptables rules: iptables -t nat -A PREROUTING -i venet0 -p tcp -m tcp --dport 443 -j DNAT --to-destination 172.16.0.2:443iptables -t nat -A POSTROUTING -s 172.16.0.2/32 -o venet0 -p tcp -m tcp --sport 443 -j SNAT --to-source Y.Y.Y.Yiptables -A FORWARD -d 172.16.0.2/32 -i venet0 -o br0 -p tcp -m tcp --dport 443 -j ACCEPTiptables -A FORWARD -s 172.16.0.2/32 -i br0 -o venet0 -p tcp -m tcp --sport 443 -j ACCEPT 172.16.0.2 iptables rules: iptables -A INPUT -i ens6 -p tcp -m multiport --dports 80,443 -j ACCEPTiptables -A OUTPUT -o ens6 -p tcp -m tcp --sport 443 -j ACCEPT Could you help me please - where is the problem? | You could always run the application under: gdb --args /path/to/your/your-program and its args Then add breakpoints on unlink() , unlinkat() , rmdir() functions or syscalls: catch syscall unlinkcatch syscall unlinkatcatch syscall rmdirrun Then each time a breakpoint is reached, check that it's about deleting files in that directory and inspect the files in there or copy them elsewhere. Enter cont in gdb to resume execution (until the next breakpoint). 
Example with rm -rf : $ gdb -q --args rm -rf /tmp/tmp.HudBncQ4NiReading symbols from rm...Reading symbols from /usr/lib/debug/.build-id/f6/7ac1d7304650a51950992d074f98ec88fe2f49.debug...(gdb) catch syscall unlinkCatchpoint 1 (syscall 'unlink' [87])(gdb) catch syscall unlinkatCatchpoint 2 (syscall 'unlinkat' [263])(gdb) catch syscall rmdirCatchpoint 3 (syscall 'rmdir' [84])(gdb) runStarting program: /bin/rm -rf /tmp/tmp.HudBncQ4NiCatchpoint 2 (call to syscall unlinkat), 0x00007ffff7eb6fa7 in __GI_unlinkat () at ../sysdeps/unix/syscall-template.S:120120 ../sysdeps/unix/syscall-template.S: No such file or directory.(gdb) info registersrax 0xffffffffffffffda -38rbx 0x555555569830 93824992319536rcx 0x7ffff7eb6fa7 140737352789927rdx 0x0 0rsi 0x555555569938 93824992319800rdi 0x4 4rbp 0x555555568440 0x555555568440rsp 0x7fffffffda48 0x7fffffffda48r8 0x3 3r9 0x0 0r10 0xfffffffffffffa9c -1380r11 0x206 518r12 0x0 0r13 0x7fffffffdc30 140737488346160r14 0x0 0r15 0x555555569830 93824992319536rip 0x7ffff7eb6fa7 0x7ffff7eb6fa7 <__GI_unlinkat+7>eflags 0x206 [ PF IF ]cs 0x33 51ss 0x2b 43ds 0x0 0es 0x0 0fs 0x0 0gs 0x0 0(gdb) x/s $rsi0x555555569938: "test"(gdb) info procprocess 7524cmdline = '/bin/rm -rf /tmp/tmp.HudBncQ4Ni'cwd = '/export/home/stephane'exe = '/bin/rm'(gdb) !readlink /proc/7524/fd/4/tmp/tmp.HudBncQ4Ni(gdb) !find /tmp/tmp.HudBncQ4Ni -ls 1875981 4 drwx------ 2 stephane stephane 4096 Aug 8 09:30 /tmp/tmp.HudBncQ4Ni 1835128 4 -rw-r--r-- 1 stephane stephane 5 Aug 8 09:30 /tmp/tmp.HudBncQ4Ni/test Here, the breakpoint was on the unlinkat() system call for the test entry inside /tmp/tmp.HudBncQ4Ni on a x86_64 Linux system where the first two arguments of the syscall are in the rdi and rsi registers. strace can inject signals to a process when a syscall is called ( strace -e inject=unlink,unlinkat,rmdir:signal=STOP to suspend for instance), but AFAICT it always does it after the syscall returns, so once the file has already been removed. You can however delay the entry so you can suspend by hand with Ctrl + Z for instance: $ strace -e inject=unlink,unlinkat,rmdir:delay_enter=5s -e unlink,unlinkat,rmdir rm -rf /tmp/tmp.HudBncQ4Niunlinkat(4, "test", 0^Zzsh: suspended strace -e inject=unlink,unlinkat,rmdir:delay_enter=10s -e rm -rf Or, as suggested by @PhilippWendler, you can use: strace -e inject=unlink,unlinkat,rmdir:retval=0 -e unlink,unlinkat,rmdir ... or: strace -e inject=unlink,unlinkat,rmdir:error=EACCES -e unlink,unlinkat,rmdir ... To hijack the syscalls and pretend they succeed (with retval=0 ) or fail (with EACCES here meaning Permission denied ) without actually calling them. Both gdb and strace can attach to an already running process with --pid <the-process-id> / -p <the-process-id> respectively. They can also be told to follow forks and execs and trace the children as well so you can attach to the parent and watch for or hijack unlinks in the children (see -f in strace and the follow-* settings in gdb ) | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/712838",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/52636/"
]
} |
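For the source-IP-preserving forwarding question, the usual stumbling block is the return path: if X.X.X.X DNATs to Y.Y.Y.Y without SNAT, replies from Y leave via Y's own uplink and the client discards them because it expects them from X.X.X.X. A common remedy is to carry the traffic over a tunnel and policy-route replies back through it; the sketch below assumes a WireGuard link wg0 with X at 10.8.0.1 and Y at 10.8.0.2 (all names and addresses are illustrative, and Y's existing DNAT rule would then need to match the tunnel address rather than Y.Y.Y.Y):

```
# on X.X.X.X: DNAT to Y's tunnel address so the traffic enters Y via wg0
iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 443 -j DNAT --to-destination 10.8.0.2

# on Y.Y.Y.Y: remember connections that arrived over the tunnel ...
iptables -t mangle -A PREROUTING -i wg0 -j CONNMARK --set-mark 0x1
# ... restore the mark on reply packets coming back from the VM ...
iptables -t mangle -A PREROUTING -i br0 -j CONNMARK --restore-mark
# ... and send marked replies back through the tunnel instead of the default route
ip rule add fwmark 0x1 table 100
ip route add default via 10.8.0.1 dev wg0 table 100
```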
712,942 | Using, for example, the tools on a Debian or Debian-derived system, a regex like N* , which could match the empty string, could result in a match in sed: $ echo 'Hello' | sed 's/N*/ xx&xx /g' xxxx H xxxx e xxxx l xxxx l xxxx o xxxx Which is the correct result of an empty match (thus the xxxx strings, no character in the middle xx&xx ) before each string character (6 times in Hello . The trailing newline doesn't count, it is not matched). And, if any character (or group of characters) would match it would appear between the xx and xx : $ echo 'Hello' | sed 's/e*/ xx&xx /g' xxxx H xxexx l xxxx l xxxx o xxxx However, the same regex in grep would not match the empty string: $ echo 'Hello' | grep -o 'N*' But will print only non-empty matches: $ echo 'Hello' | grep -o 'e*'e Is there an additional internal rule in grep to avoid empty regex matches? | grep -o is documented in grep --help as -o, --only-matching show only nonempty parts of lines that match and in the manual as Print only the matched (non-empty) parts of matching lines, with each such part on a separate output line. So yes, there is an additional rule in grep -o : matches are only output if they are non-empty. In echo 'Hello' | grep -o 'N*' , the regular expression matches (as can be seen by looking at the return code, or with echo 'Hello' | grep 'N*' ), but because the matches are empty, nothing is output. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/712942",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/529738/"
]
} |
712,996 | Why are some relative file paths displayed in the form of ./file , instead of just file ? For example, when I do: find . I get this output: ./file1./file2./file3 What is the practical purpose, other than making the path more confusing? It's not like it is preventing me from some accident. Both are relative paths, and cat ./file1 works same as cat file1 . Is this behavior coming from find command, or is it some system-wide c library? OK, I understand why using ./file for -exec construct is necessary (to make sure I have ... | xargs rm ./-i , and not ... | xargs rm -i ). But in what situation would missing ./ break anything when using -print statement? I am trying to construct any statement that breaks something: touch -- -b -d -f -ifind -printf '%P\n' | sort-b-d-f-i Everything works fine. Just out of curiosity, how could I construct a -print statement that would demonstrate this issue? | This behaviour comes from find , and is specified by POSIX : Each path operand shall be evaluated unaltered as it was provided, including all trailing <slash> characters; all pathnames for other files encountered in the hierarchy shall consist of the concatenation of the current path operand, a <slash> if the current path operand did not end in one, and the filename relative to the path operand. The default action, -print , outputs the full pathname to standard out. find outputs the paths of files it finds starting from the path(s) given on its command line. find . asks find to look for files under . and its subdirectories, and it presents the results starting with ./ ; find foo would do the same but starting with foo , and it would produce results starting with foo/ . I don’t think find does this specifically to prevent problems with un-prefixed file names; rather, it does this for consistency — regardless of the path provided as argument, the output of -print always starts with that path. With the GNU implementation of find , you can strip the initial path off the start of the printed file by using -printf '%P\n' in place of -print . For instance with find foo/bar -name file -printf '%P\n' or find . -name file -printf '%P\n' , you'd get dir/file instead of foo/bar/dir/file or ./dir/file for those files. More generally, having ./ as a prefix can help prevent errors, e.g. if you have files with names starting with dashes; for example if you have a file named -f , rm -f won’t delete it, but rm ./-f will. When running commands with a shell or with exec*p() standard C functions (and their equivalent in other languages), when the command name doesn't contain a / , the path of command is looked in $PATH instead of being interpreted as a relative path (the file in the current working directory). Same applies for the argument to the . / source special builtins of several shells (including POSIX compliant sh implementations). Using ./cmd in that case instead of cmd , which is another way to specify the same relative path, but with a / in it is how you typically invoke a command stored in the current working directory. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/712996",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/155832/"
]
} |
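To answer the "construct a statement that breaks" part: the ./ prefix matters as soon as the printed names are handed to another command, because a stripped name that starts with a dash is parsed as an option (hypothetical file name below):

```
$ touch ./-l
$ find . -printf '%P\n' | xargs ls     # ls receives "-l" and treats it as an option
$ find . -print | xargs ls             # ls receives "./-l", a plain filename
```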
713,038 | I would like to replace straight single and double quotes with curly quotes ( ‘ ’ , “ ” ).How can I do this with a shell command? | This behaviour comes from find , and is specified by POSIX : Each path operand shall be evaluated unaltered as it was provided, including all trailing <slash> characters; all pathnames for other files encountered in the hierarchy shall consist of the concatenation of the current path operand, a <slash> if the current path operand did not end in one, and the filename relative to the path operand. The default action, -print , outputs the full pathname to standard out. find outputs the paths of files it finds starting from the path(s) given on its command line. find . asks find to look for files under . and its subdirectories, and it presents the results starting with ./ ; find foo would do the same but starting with foo , and it would produce results starting with foo/ . I don’t think find does this specifically to prevent problems with un-prefixed file names; rather, it does this for consistency — regardless of the path provided as argument, the output of -print always starts with that path. With the GNU implementation of find , you can strip the initial path off the start of the printed file by using -printf '%P\n' in place of -print . For instance with find foo/bar -name file -printf '%P\n' or find . -name file -printf '%P\n' , you'd get dir/file instead of foo/bar/dir/file or ./dir/file for those files. More generally, having ./ as a prefix can help prevent errors, e.g. if you have files with names starting with dashes; for example if you have a file named -f , rm -f won’t delete it, but rm ./-f will. When running commands with a shell or with exec*p() standard C functions (and their equivalent in other languages), when the command name doesn't contain a / , the path of command is looked in $PATH instead of being interpreted as a relative path (the file in the current working directory). Same applies for the argument to the . / source special builtins of several shells (including POSIX compliant sh implementations). Using ./cmd in that case instead of cmd , which is another way to specify the same relative path, but with a / in it is how you typically invoke a command stored in the current working directory. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/713038",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/513913/"
]
} |
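For the quote-conversion question, a starting point with GNU sed in a UTF-8 locale is to convert paired double quotes and lone apostrophes; this naive sketch will misfire on unbalanced or nested quotes, and paired single quotes would need an analogous ‘…’ rule:

```
sed -e 's/"\([^"]*\)"/“\1”/g' -e "s/'/’/g" file
```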
713,059 | Is there a key difference between sudo -e and sudo vim . I have set up the sudoers file so that vim is my default editor. Is there a key difference between the two?Plus, should I switch from vim to rvim ? I tried it but I had some problems with my config file | The big difference is who is editing what file. With sudo vim (assuming successful authentication), the root user invokes vim and edits the file in place (with root's environment and vim swap files parallel to the file being edited). With sudo -e or sudoedit the user who invoked sudo edits a temporary copy of the file owned by themselves with their own environment (including things like ~/.vimrc ). Once the user saves the output, the content of the temporary file is copied back into the original file that the user didn't have the permissions to edit. This method also has a couple checks that prevent editing under a few circumstances: the user is trying to edit a symbolic link the user is trying to edit a file using a path containing a symbolic link the user has write permissions on the directory containing the file Why those specific rules are strictly enforced, I do not know (some sort of security issues I'd assume). P.S. Users are also disallowed with sudo's edit mode from editing files that are device special files (block devices, serial devices, etc.). EDIT: Another consequence of not running vim as root, is that the user cannot use vim's shell capabilities this way to run arbitrary commands as root. This allows giving the user access to edit certain files via sudoers rules, while not handing over the keys to the kingdom. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/713059",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/531949/"
]
} |
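The sudoedit mode pairs naturally with a sudoers rule that grants editing of specific files only, and with SUDO_EDITOR to pick the editor per user (a sketch; the username and path are examples):

```
# /etc/sudoers.d/webconf   (edit with visudo -f /etc/sudoers.d/webconf)
alice ALL=(root) sudoedit /etc/nginx/nginx.conf

# in alice's shell profile
export SUDO_EDITOR=vim
```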
713,108 | I used shred to wipe my external hard disk: sudo shred -vz /dev/sdb I should also add that the disk had 5 bad sectors. I want to verify the disk has been zeroed, per https://superuser.com/questions/1510233/is-there-a-faster-way-to-verify-that-a-drive-has-been-fully-zeroed I'm not that familiar with dd , but I believe that these show it's been zeroed: sudo dd if=/dev/sdb status=progress | hexdump0000000 0000 0000 0000 0000 0000 0000 0000 0000*5000916670976 bytes (5.0 TB, 4.5 TiB) copied, 45754 s, 109 MB/s9767541167+0 records in9767541167+0 records out5000981077504 bytes (5.0 TB, 4.5 TiB) copied, 45756.7 s, 109 MB/s48c61b35e00 sudo dd if=/dev/sdb status=progress | od | head5000952267264 bytes (5.0 TB, 4.5 TiB) copied, 45739 s, 109 MB/s9767541167+0 records in9767541167+0 records out5000981077504 bytes (5.0 TB, 4.5 TiB) copied, 45741.1 s, 109 MB/s0000000 000000 000000 000000 000000 000000 000000 000000 000000*110614154657000 But using a simple cmp shows an exception: sudo cmp /dev/zero /dev/sdbcmp: EOF on /dev/sdb after byte 5000981077504, in line 1 Has the disk been zeroed? | Has the disk been zeroed? Yes. The output of your dd command shows that it has written 5000981077504 bytes. Your cmp command says that it's reached EOF (end of file) after 5000981077504 bytes, which is the same. Be aware that this only works well with hard drives. For solid-state devices, features such as wear leveling and overprovisioning space may result in some data not being erased. Furthermore, your drive must not have any damaged sectors, as they will not be erased. Note that cmp will not be very efficient for this task. You would be better off with badblocks : badblocks -svt 0x00 /dev/sdb From badblocks(8) , the -t option can be used to verify a pattern on the disk. If you do not specify -w (write) or -n (non-destructive write), then it will assume the pattern is already present: -t test_pattern Specify a test pattern to be read (and written) to disk blocks. The test_pattern may either be a numeric value between 0 and ULONG_MAX-1 inclusive, or the word "random", which specifies that the block should be filled with a random bit pattern. For read/write (-w) and non-destructive (-n) modes, one or more test patterns may be specified by specifying the -t option for each test pattern desired. For read-only mode only a single pattern may be specified and it may not be "random". Read-only testing with a pattern assumes that the specified pattern has previously been written to the disk - if not, large numbers of blocks will fail verification. If multiple patterns are specified then all blocks will be tested with one pattern before proceeding to the next pattern. Also, using dd with the default block size (512) is not very efficient either. You can drastically speed it up by specifying bs=256k . This causes it to transfer data in chunks of 262,144 bytes rather than 512, which reduces the number of context switches that need to occur. Depending on the system, you can speed it up even more by using iflag=direct , which bypasses the page cache. This can improve read performance on block devices in some situations. Although you didn't ask, it should be pointed out that shred overwrites a target using three passes by default. This is unnecessary. The myth that multiple overwrites is necessary on hard disks comes from an old recommendation by Peter Gutmann. On ancient MFM and RLL hard drives, specific overwrite patterns were require to avoid theoretical data remanence issues. 
In order to ensure that all types of disks could be overwritten, he recommended using 35 patterns so that at least one of them would be right for your disk. On modern hard drives using modern data encoding techniques such as EPRML and NPML, there is no need to use multiple patterns. According to Gutmann himself: In fact performing the full 35-pass overwrite is pointless for any drive since it targets a blend of scenarios involving all types of (normally-used) encoding technology, which covers everything back to 30+-year-old MFM methods (if you don't understand that statement, re-read the paper). If you're using a drive which uses encoding technology X, you only need to perform the passes specific to X, and you never need to perform all 35 passes. In your position, I would recommend something along this line instead: dd if=/dev/urandom of=/dev/sdb bs=256k oflag=direct conv=fsync When it finishes, just make sure it has written enough bytes after it says "no space left on device". You can also use ATA Secure Erase which initiates firmware-level data erasure. I would not use it on its own because you would be relying on the firmware authors to have implemented the standard securely. Instead, use it in addition to the above in order to make sure dd didn't miss anything (such as bad sectors and the HPA). ATA Secure Erase can be managed by the command hdparm : hdparm --user-master u --security-set-pass yadayada /dev/sdbhdparm --user-master u --security-erase yadayada /dev/sdb Note that this doesn't work on all devices. Your external drive may not support it. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/713108",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/265166/"
]
} |
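One further cheap cross-check after the dd/cmp runs is that the byte count they report matches the device's advertised size:

```
sudo blockdev --getsize64 /dev/sdb    # should print 5000981077504 for this drive
```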
713,149 | Some colleauge suggested that I added a cron job for doing some things, but executing crontab -e and adding the following line: 0 1 * * * . /path/to/some/file.bash To make sure this thing was working, I changed it to 0 1 * * * . /path/to/some/file.bash && date >> /some/log so that I could check that /some/log has indeed none more line per day. This did not happen though. For the purpose of debugging, I've just added these three lines to crontab -e , * * * * * echo "uffa" >> /home/me/without-user * * * * * me echo "uffa" >> /home/me/with-user * * * * * . echo "uffa" >> /home/me/with-any-user where me is my user name, which resulted in all three files being created, but only the first one growing one line per minute, as you can see from this check I'm making after 8 minutes: $ for f in ~/with*; do echo ''; echo $f; cat $f; done/home/emdeange/with-any-user/home/emdeange/without-useruffauffauffauffauffauffauffauffa/home/emdeange/with-user What's happening? Are the second and third line using a wrong syntax by having one more entry on the line? If so, then why does the file get created anyway? I've just verified that running a nonsense command like jflkdasjflaksd > someFile does create an empty someFile , which tells me that the two lines * * * * * me echo "uffa" >> /home/me/with-user * * * * * . echo "uffa" >> /home/me/with-any-user are just wrong, and the files are created before the error even takes place, because of how shell command line processing works. However those are the lines that work for somebody else. What is happening? | Ok, first, there's two slightly different crontab formats. The one used for per-user crontabs, and another for the system crontabs ( /etc/crontab and files in /etc/cron.d/ ). Personal crontabs have five fields for the time and date, and the rest of the line for the command. System crontabs have five fields for the time and date, a sixth one for the user to run the command , and the rest of the line for the command. For totals of 6 and 7, if you like, though the last "field" with the command is a bit differently defined than the others. Personal crontabs don't have the username field, since it's implicit from the crontabs who the owner is, and regular users aren't allowed to run programs under the identity of others anyway. (As noted in comments, the root user's personal crontab is also just a personal crontab like that of any other user. It does not have the username field, even though root is a bit special in other ways. So not only is /etc/crontab a different file from the one you get with crontab -e as root, it also has a different format.) Then there's the . . It tells the shell to read the script named as an argument, and to run it in the current shell (Some shells know source as an alias for . ). Any function definitions and variable assignments would be visible after that, unlike when running a script as a separate program. The line 0 1 * * * . /path/to/some/file.bash tells the shell (that cron started) to run .../file.bash in the same shell. I'm not sure why they'd recommend doing that instead of just running the command directly without the dot. There's a possible slight optimization in not having to initialize a new shell, but the downside is that the script has to be runnable in the shell that cron starts. That wouldn't work if cron starts e.g. a plain sh, but the script is for zsh, or for Python for that matter. If that line was in a global crontab, it'd mean to run /path/to/some/file.bash as the user . . That's likely not how it was meant. 
I'd suggest just this for simplicity (after making the script executable and adding a proper hashbang line, if not already done): 0 1 * * * /path/to/some/file.bash Then, if the . /some/script && date >> logfile didn't work, the first thing to look is if the script exits with an error. You used the && operator there, and it tells the shell to only run the right-hand command if the left-hand one exits successfully. You could do . /some/script; date >> logfile to run it unconditionally. Or heck, you could try . /some/script; printf "run at %s, exit status %d\n" "$(date)" "$?" >> logfile to save the exit status too. As for these: * * * * * echo "uffa" >> /home/me/without-user* * * * * me echo "uffa" >> /home/me/with-user* * * * * . echo "uffa" >> /home/me/with-any-user In a personal crontab, the first tells the shell to run echo , the second to run a command called me , and the third to source a script called echo . All contain a redirection, and redirections are processed by the shell before the command starts, so your file is created in all cases. (They have to be, since the shell can't know if the command is runnable before it tries to, and if it succeeds, control passes to that command, so the shell can't do anything about the redirections any more.) The two later ones probably give error messages, which you should get in email if your cron is set up properly. However those are the lines that work for somebody else. What is happening? As mentioned above, . /path/to/some/script tries to run the given script in the shell, it'll fail for a binary command, so . echo ... is not likely to work. 0 1 * * * username echo ... would work in a global crontab, but likely not in a personal one. 0 1 * * * . whatever isn't likely to work in a global one, as . probably isn't a valid username. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/713149",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/164309/"
]
} |
713,401 | I have strings which contain both escaped and unescaped forward slashes. I'm looking for a sed substitution to escape only the unescaped slashes , but it seems like negative lookbehinds are not supported. Example: input: "https:\/\/github.com\/foo\/bar\/pull\/2934) is live at https://baz/test.com"desired output: "https:\/\/github.com\/foo\/bar\/pull\/2934) is live at https:\/\/baz\/test.com" | sed uses POSIX basic regular expressions by default, which does not include lookahead and other zero-width assertions usually found in Perl-compatible regular expression languages. Instead, simply unescape the escaped slashes, and then escape all slashes in the modified string: sed -e 's@\\/@/@g' -e 's@/@\\/@g' This first changes all instances of \/ into / , and then all / into \/ . The @ is an alternative delimiter for the substitution command to avoid the leaning toothpick syndrome (you could use almost any other character). Example: $ echo '"https:\/\/github.com\/foo\/bar\/pull\/2934) is live at https://baz/test.com"' | sed -e 's@\\/@/@g' -e 's@/@\\/@g'"https:\/\/github.com\/foo\/bar\/pull\/2934) is live at https:\/\/baz\/test.com" If the line of text is stored in a string in the bash shell, you could do something similar there: $ string='"https:\/\/github.com\/foo\/bar\/pull\/2934) is live at https://baz/test.com"'$ string=${string//\\\///} # leaning toothpick warning!$ string=${string//\//\\/}$ printf '%s\n' "$string""https:\/\/github.com\/foo\/bar\/pull\/2934) is live at https:\/\/baz\/test.com" The above uses the ${variable//pattern/replacement} variable substitution which replaces all matches of pattern in $variable with replacement . | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/713401",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/331025/"
]
} |
713,527 | I regularly need to transfer millions of small files (small images, txt, json) with an average of 5-50k per file between servers or to AWS S3. Is there a faster way to merge them into a single file to optimize transfer speed other than zip/tar -cf? | Something similar to tar cz * | ssh <host> "tar xzf -" ? Seriously, what is wrong with tar ? This command does not create any intermediate files. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/713527",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/460059/"
]
} |
713,570 | I have this in my .zshrc file export MANLESS=""export LESS="--RAW-CONTROL-CHARS"export LESS_TERMCAP_mb=$(tput bold; tput setaf 5)export LESS_TERMCAP_md=$(tput bold; tput setaf 1)export LESS_TERMCAP_so=$(tput setaf 1; tput setab 3)export LESS_TERMCAP_se=$(tput rmso; tput sgr0)export LESS_TERMCAP_us=$(tput setaf 3)export LESS_TERMCAP_ue=$(tput sgr0) When I use man as: man zshexpn , I have nicely colored sections: When I use man as: man zshexpn | less '+/PROCESS SUBSTITUTION' , on same part of the manual page, it is no longer colored: How can I get colored output in second case, same as first case? | I’m assuming you’re using man on a mainstream Linux distribution. man there (and on other systems) defaults to removing formatting if its output isn’t a terminal; since you’re manually piping to less , that’s what’s happening here. You can override this by setting MAN_KEEP_FORMATTING to a non-empty value: MAN_KEEP_FORMATTING=1 man zshexpn | less '+/PROCESS SUBSTITUTION' If you want this behaviour to be the default, export MAN_KEEP_FORMATTING along with your other settings; bear in mind that this will affect all man invocations, which will have side-effects when the output doesn’t end up being processed by a terminal ( e.g. if you want to grep the output). | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/713570",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/155832/"
]
} |
713,632 | In the Gnome tweak tool, there is a Appearance section. In there, there SHOULD be a Applications dropdown menu. However, (and IDK whether this a Gnome 42 thing) there is no Applications menu. Please help, and I am willing to try different alternatives to GNOME Tweak Tool. Oh and I use Arch. (seriously, no joke) Here is a screenshot: | I’m assuming you’re using man on a mainstream Linux distribution. man there (and on other systems) defaults to removing formatting if its output isn’t a terminal; since you’re manually piping to less , that’s what’s happening here. You can override this by setting MAN_KEEP_FORMATTING to a non-empty value: MAN_KEEP_FORMATTING=1 man zshexpn | less '+/PROCESS SUBSTITUTION' If you want this behaviour to be the default, export MAN_KEEP_FORMATTING along with your other settings; bear in mind that this will affect all man invocations, which will have side-effects when the output doesn’t end up being processed by a terminal ( e.g. if you want to grep the output). | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/713632",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/537618/"
]
} |
713,668 | My nslookup output: nslookup richardrublev.xyz Server: 127.0.0.53 Address: 127.0.0.53#53 Non-authoritative answer: *** Can't find richardrublev.xyz: No answer On Advanced DNS, I set the CNAME. I checked the external IP 3 times. How can I inspect this? | I’m assuming you’re using man on a mainstream Linux distribution. man there (and on other systems) defaults to removing formatting if its output isn’t a terminal; since you’re manually piping to less , that’s what’s happening here. You can override this by setting MAN_KEEP_FORMATTING to a non-empty value: MAN_KEEP_FORMATTING=1 man zshexpn | less '+/PROCESS SUBSTITUTION' If you want this behaviour to be the default, export MAN_KEEP_FORMATTING along with your other settings; bear in mind that this will affect all man invocations, which will have side-effects when the output doesn’t end up being processed by a terminal ( e.g. if you want to grep the output). | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/713668",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/162292/"
]
} |
713,786 | I have a .tsv file (values separated by tabs) with four values. So each line should have only three tabs and some text around each tab like this: value value2 value3 value4 But it looks like some lines are broken (there are more than three tabs). I need to find out these lines. I came up with the following grep pattern. grep -v "^[^\t]+\t[^\t]+\t[^\t]+\t[^\t]+$" My thinking: first ^ matches the beginning, [^\t]+ matches one or more "no tab" characters, \t matches a single tab character, $ matches the end. And then I just put it in the right order the correct number of times. That should match the correct lines. So I inverted it with the -v option to get the wrong lines. But with the -v option it matches any line in the file and also some random text I tried that doesn't have any tabs inside. What is my mistake please? EDIT: I am using Debian and bash. | As you already saw, \t isn't special for Basic Regular Expressions, and grep uses BRE by default. GNU grep , the default on Linux, has -P for Perl Compatible Regular Expressions which lets you use \t for tab characters. However, what you want is much easier to do with awk . Just set the input field separator to a tab ( -F '\t' ) and then print any lines whose number of fields ( NF ) is not 4: awk -F'\t' 'NF!=4' file That will print all lines in file with more or fewer than four fields. To limit it to only more than four fields, use: awk -F'\t' 'NF>4' file | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/713786",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/102791/"
]
} |
713,936 | I'm trying to check the status of AWS AMI and execute some commands if the status is available . Below is my small script to achieve that. #!/usr/bin/env bashREGION="us-east-1"US_EAST_AMI="ami-0130c3a072f3832ff"while :do AMI_STATE=$(aws ec2 describe-images --region "$REGION" --image-ids $US_EAST_AMI | jq -r .Images[].State) [ "$AMI_STATE" == "available" ] && echo "Now the AMI is available in the $REGION region" && break sleep 10done The above script works fine if the first call was a success. But I'm expecting something for the below scenarios If the value of the $AMI_STATE is equal to "available" (currently working), "failed" it should break the loop If the value of the $AMI_STATE is equal to "pending" , the loop should continue until it meets the expected value. | You want to run the loop while the value of AMI_STATE is equal to pending … so write just that. while AMI_STATE=$(aws ec2 describe-images --region "$REGION" --image-ids $US_EAST_AMI | jq -r .Images[].State) && [ "$AMI_STATE" = "pending" ]do sleep 10donecase $AMI_STATE in "") echo "Something went wrong: unable to retrieve AMI state";; available) echo "Now the AMI is available in the $REGION region";; failed) echo "The AMI has failed";; *) echo "AMI in weird state: $AMI_STATE";;esac | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/713936",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/145056/"
]
} |
714,125 | sort -o seems superfluous. What is the point of using it when we can use sort > ? Is it sometimes impossible to use shell redirection? | Sort a file in-place: sort -o file file Using sort file >file would start by truncating the file called file to zero size, then calling sort with that empty file, resulting in an empty output file no matter what the original file's contents was. Also, in situations where commands or lists of options are automatically generated by e.g. scripts, adding -o somefile to the end of the options would override any previously set output file, which allows controlling the output file location by way of appending options. sort_opt=( some list of options )if [ ... something ... ]; then # We don't need to go through and delete any old use of "-o" # because this later option would override it. sort_opt+=( -o somefile.out )fisort "${sort_opt[@]}" "$thefile" There might also be instances where the sort binary executable is called directly, without a shell to do any redirection to any file. Note that -o is a standard option whereas --output is a GNU extension. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/714125",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/85900/"
]
} |
714,227 | When I use mapfile with parallel to create an array inside a function the array isn't created properly. Why is this? Array creation not in function mapfile -t arr < <(parallel -j 0 echo ::: {1..5}) declare -p arrdeclare -a arr=([0]="1" [1]="2" [2]="3" [3]="4" [4]="5") Same thing but inside a function mapRay() { mapfile -t "$1" < <(parallel -j 0 "$2" ::: "$3"); }mapRay arr echo {1..2}declare -p arrdeclare -a arr=([0]="1") | Why is this? $ cat un714227.shmapRay(){ mapfile -t "$1" < <(parallel -j 0 "$2" ::: "$3"); }mapRay arr echo {1..2}$ bash -x ./un714227.sh++ mapRay arr echo 1 2++ mapfile -t arr+++ parallel -j 0 echo ::: 1 As you see, mapRay is invoked with $1=arr $2=echo $3=1 $4=2 and parallel -j0 "$2" ::: "$3" runs echo with argument 1 only, ignoring the 2 . The array correctly contains the output of the parallel command; it is the input to the parallel command that wasn't what you apparently wanted. You probably want something like "${@:3}" to get all arguments after the first 2. Alternatively, a classic way to handle special (sometimes optional) then homogenous but varying args is to handle the special args and shift them out, then handle the rest : mapRay(){ local var="$1" cmd="$2" shift 2 mapfile -t "$var" < <(parallel -j0 "$cmd" ::: "$@")} | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/714227",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/474559/"
]
} |
714,371 | I would like to selectively replace a command-line argument that is being passed to automatically format it for the downstream command being executed. The argument will have spaces and that is the point of contention. I'm presently doing this: set -- $(echo $* | sed -e "s/$_ARG/--description=\"$_ID - $_SUMMARY\"/") The new argument, --description="$_ID - $_SUMMARY" gets split. I run a downstream command: <cmd> "$@" I may have any number of arguments, but a sample use case is: FROM activity --description='handle null' TO: activity --description='$SOME_VARIABLE - handle null' Ultimately, when I run the downstream command even with "$@" it is already split there, so it doesn't work as I intend. It ends up like activity --description=value - handle null --description=value , - , handle , and null then are considered separate arguments. | There are a few issues in your code. One of them is using $* unquoted, which will cause the shell to split the original arguments into words on whatever characters are in $IFS (space, tab, newline, by default) and apply filename globbing on the generated words. Quoting $* as "$*" is also not quite what you want if you ever want to support multiple arguments containing spaces, tabs or newlines as this would be a single string. Switching to using "$@" would not help as echo would just produce a each argument with spaces in-between for sed to read. echo may do special processing of any string containing backslash sequences like \n and \t , depending on the shell and its current settings. In some shells, echo -n may not output -n (there may be other problematic strings too, like -e ). Using sed to modify the arguments would possibly work on a single argument if you're happy treating it as text (arguments could potentially be multi-line strings), but in this case you are applying some editing script on all arguments at once, which may misfire. What splits the resulting string though, is the non-quoting of the command substitution used with set . This re-splits the result from sed and applies filename globbing on the result again. You will need to parse the command line options that you intend to modify.In short, loop over the arguments, and modify the ones you want to modify. The following sh script adds the string hello - at the start of the option-argument of each instance of the --description long option. If the long option is immediately followed by a space, as in --description "my thing" , then this is rewritten with a = , as if the script had been called with --description="my thing" , before this is modified into the final --description="hello - my thing" . #!/bin/shSOME_VARIABLE=helloskip=falsefor arg do if "$skip"; then skip=false continue fi # Re-write separate option-argument with "=". # This consumes an extra argument, so need to skip # next iteration of the loop. case $arg in --description) arg=--description=$2 shift skip=true esac # Add the value "$SOME_VARIABLE - " to the start of the # option-argument of the --description long option. case $arg in --description=*) arg=--description="$SOME_VARIABLE - ${arg#--description=}" esac # Put the (possibly modified) argument back at the end # of the list of arguments and shift off the first item. set -- "$@" "$arg" shiftdone# Print out the list of arguments as strings within "<...>":printf '<%s>\n' "$@" ${arg#--description=} removes the prefix string --description= from the value of $arg , leaving the original option-argument string. 
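If that parameter expansion is unfamiliar, here is a tiny sketch of it in isolation (the value is made up):
$ arg='--description=my thing'
$ printf '%s\n' "${arg#--description=}"
my thing
The # form strips the shortest match of the given pattern from the front of the variable's value, so only the option-argument is left.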
Example run: $ sh ./script -a -b --description="my thing" -c -d --description "your thing" -e<-a><-b><--description=hello - my thing><-c><-d><--description=hello - your thing><-e> The code may be simplified significantly if you always will be expecting to have the long option and its option-argument delimited by a = character: #!/bin/shSOME_VARIABLE=hellofor arg do # Add the value "$SOME_VARIABLE - " to the start of the # option-argument of the --description long option. case $arg in --description=*) arg=--description="$SOME_VARIABLE - ${arg#--description=}" esac # Put the (possibly modified) argument back at the end # of the list of arguments and shift off the first item. set -- "$@" "$arg" shiftdoneprintf '<%s>\n' "$@" Test run using same arguments as above (the second instance of --description will not be modified as it does not match the pattern --description=* ): $ sh ./script -a -b --description="my thing" -c -d --description "your thing" -e<-a><-b><--description=hello - my thing><-c><-d><--description><your thing><-e> A bash variant of the shorter second script from above, using shell pattern matching with [[ ... ]] in place of case ... esac , and using an array to hold the possibly modified arguments during the course of the loop: #!/bin/bashSOME_VARIABLE=helloargs=()for arg do if [[ $arg == --description=* ]]; then arg=--description="$SOME_VARIABLE - ${arg#--description=}" fi args+=( "$arg" )doneset -- "${args[@]}"printf '<%s>\n' "$@" | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/714371",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/50680/"
]
} |
714,463 | WSL allows the user to use any distribution of their choice, and Ubuntu is installed by default. I don't really understand the relevance of the distribution in the WSL context. My understanding is that Linux distribution refers to the skin of the OS. A UI layer on top of the OS core. But when using WSL you just use the command line, or perhaps run a single independent GUI application. So what is the relevance of distribution in this context, and what difference does it make which distribution you choose? | "My understanding is that Linux distribution refers to the skin of the OS. A UI layer on top of the OS core." That's not correct. Linux distributions differ in many fundamental ways which go beyond the look & feel of the GUI: versions of the kernel, plus various customizations (want the latest kernel and cutting-edge software? Use Fedora; want stability? Use RHEL); support for different hardware devices; package managers (RHEL has rpm , yum , and dnf ; Debian has dpkg and apt ; ArchLinux has pacman ; and so on); software packages and tools (Debian, for instance, has only free and open source software in its main repo); and repositories, with different timelines for updates and bugfixes. However, if your main use is just writing some Bash script, your experience will be more or less the same regardless of the distro. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/714463",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/538512/"
]
} |
714,505 | I have a CH341a Programmer and when I plug it into a usb port everything seems to be working except it doesn't get assigned to a Device Path (eg /dev/ttyUSB0). Does anyone have any clue as to why this might be happening, or how to resolve this issue? Here are some of the things I've done to troubleshoot. lsusb Bus 002 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hubBus 001 Device 004: ID 1a86:5512 QinHeng Electronics CH341 in EPP/MEM/I2C mode, EPP/I2C adapterBus 001 Device 003: ID 0e0f:0002 VMware, Inc. Virtual USB HubBus 001 Device 002: ID 0e0f:0003 VMware, Inc. Virtual MouseBus 001 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub dmesg [347.965641] usb 1-2.1: new full-speed USB device number 4 using uhci_hcd[348.196659] usb 1-2.1: New USB device found, idVendor=1a86, idProduct=5512[348.196661] usb 1-2.1: New USB device strings: Mfr=0, Product=2, SerialNumber=0[348.196662] usb 1-2.1: Product: USB UART-LPT You can see that the device is being recognized as connected, but no device path assignment. I've connected this thing to 4 different devices and they all behave in the same way with the same output to lsusb and dmesg. This example output is from an Ubuntu VM, but the others were Linux Mint running on a Thinkpad P15s, Zorin running on an old Dell Latitude, and Kali on a Raspi 4. Oh, I've also tried installing drivers from https://github.com/juliagoda/CH341SER and I've uninstalled the BRLTTY software from all test devices (which actually cleared up the issue for my Arduino Nano, but not this device). All updates have been installed and every system is UTD as of the time of this posting. Any and all help is much appreciated. Thanks. | Your device isn't being assigned a serial device path because it's not a serial port. From your lsusb output, we see: Bus 001 Device 004: ID 1a86:5512 QinHeng Electronics CH341 in EPP/MEM/I2C mode, EPP/I2C adapter The key part is in EPP/MEM/I2C mode . The device is not configured as a UART; if it were, we would see: Bus 001 Device 004: ID 1a86:5523 QinHeng Electronics CH341 in serial mode, usb to serial port converter No amount of driver installation is going to make the device in its current configuration show up as a USB serial port. The issue is entirely in how the device itself is configured. If you have a bare board, you can configure it yourself. According to the data sheet , the selection between UART and SPI/I2C mode is configured via the SCL and SDA pins (see section 5.3, "Function configuration"). If you have a consumer product that's meant to be a UART-to-USB device, I would return it for a replacement. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/714505",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/153418/"
]
} |
714,726 | I moved /root to /home/root and changed the appropriate entry in /etc/passwd in my Linux system quite some time ago and everything's worked until recently when I discovered that at least the firejail application hardcodes the root home directory and stops working otherwise under some circumstances. I presume it will be patched eventually but still. I wonder if the root user home directory in Linux must be in /root , or it's still movable just like home directories of other users. Is there anything in POSIX which standardizes this? What about other Unixes? | POSIX doesn’t have much to say about the administrative user; when privileges are discussed, they are discussed in terms of process privileges (since that’s what really matters in POSIX-style systems). It acknowledges the existence of the root user but doesn’t define any requirements on its home directory. The FHS explicitly marks /root as optional , saying The root account's home directory may be determined by developer or local preference, but this is the recommended default location. It’s worth considering that root’s home is somewhat special, in that it makes life easier if it is accessible and on a volume with some available space when root needs to log in, or if it doesn’t block unmount operations on anything other than the root volume; this is why it is traditionally on the root volume, and not on user home directory volume(s) when the latter are separate from / . But that’s just a practical consideration, not a requirement in any widely-acknowledged standard I’m aware of. The flip side of the coin is that many operating environments no longer have a root home directory at all (and not just in containers). | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/714726",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/260833/"
]
} |
714,742 | I'm trying to identify an embedded Linux distribution. Here are the commands I have typed so far: $ uname -aLinux LIN-SRV-EMB01 3.10.105 #25556 SMP Sat Aug 28 02:14:22 CST 2021 x86_64 GNU/Linux synology_bromolow_rs3412rpxs$ lsb_release-sh: lsb_release: command not found$ ls /usr/lib/os-releasels: cannot access /usr/lib/os-release: No such file or directory$ cat /proc/versionLinux version 3.10.105 (root@build1) (gcc version 4.9.3 20150311 (prerelease) (crosstool-NG 1.20.0) ) #25556 SMP Sat Aug 28 02:14:22 CST 2021$ cat /proc/cmdlineroot=/dev/md0 netif_seq=2130 ahci=0 SataPortMap=34443 DiskIdxMap=03060e0a00 SataLedSpecial=1 ihd_num=0 netif_num=4 syno_hw_version=RS3412rpxs macs=001132109b1e,001132109b1f,001132109b20,001132109b21 sn=LDKKN90098$ dmesg | grep "Linux version"[ 0.000000] Linux version 3.10.105 (root@build1) (gcc version 4.9.3 20150311 (prerelease) (crosstool-NG 1.20.0) ) #25556 SMP Sat Aug 28 02:14:22 CST 2021[ 342.396803] Loading modules backported from Linux version v3.18.1-0-g39ca484$ python -m platformLinux-3.10.105-x86_64-with-glibc2.2.5$ which python2 && python2 -c "import platform;print platform.linux_distribution()[0]"/bin/python2$ which python3 && python3 -c "import distro;print(distro.name())"$ more /etc/issue /etc/*release /etc/*version /boot/config*more: stat of /etc/issue failed: No such file or directorymore: stat of /etc/*release failed: No such file or directorymore: stat of /etc/*version failed: No such file or directorymore: stat of /boot/config* failed: No such file or directory$ zcat /proc/config.gz /usr/src/linux/config.gz | moregzip: /proc/config.gz: No such file or directorygzip: /usr/src/linux/config.gz: No such file or directory$ which dpkg apt apt-get rpm urpmi yum dnf zypper/bin/dpkg$ df -h /Filesystem Size Used Avail Use% Mounted on/dev/md0 2.3G 1.1G 1.1G 50% /$ sudo parted /dev/md0 printPassword:Model: Linux Software RAID Array (md)Disk /dev/md0: 2550MBSector size (logical/physical): 512B/512BPartition Table: loopDisk Flags:Number Start End Size File system Flags 1 0.00B 2550MB 2550MB ext4$ sudo mdadm -Q /dev/md0/dev/md0: 2.37GiB raid1 10 devices, 0 spares. Use mdadm --detail for more detail.$ which lsblk lscsci lshw lspci dmidecode/bin/lspci/sbin/dmidecode EDIT0: Tried two more commands : $ strings $(ps -p 1 -o cmd= | cut -d" " -f1) | egrep -i "ubuntu|debian|centos|redhat" -o | sort -u-sh: strings: command not found[remoteserver] $ ssh embedded-linux 'cat $(ps -p 1 -o cmd= | cut -d" " -f1)' | strings | egrep -i "ubuntu|debian|centos|redhat" -o | sort -uubuntu EDIT1: Tried three more commands : $ which initctl && initctl --version/sbin/initctlinitctl (upstart 1.13.2)Copyright (C) 2006-2014 Canonical Ltd., 2011 Scott James RemnantThis is free software; see the source for copying conditions. There is NO warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.$ which systemctl && systemctl --version$ cat /sys/class/dmi/id/product_nameTo be filled by O.E.M.$ EDIT2: Tried one more command (specific to Synology) : $ grep productversion /etc/VERSIONproductversion="6.2.4" EDIT3: Just in case one wants to identify the hardware : $ uname -u # Specific to Synology ?synology_bromolow_rs3412rpxs$ sudo dmidecode -t system | grep Product Product Name: To be filled by O.E.M.$$ cat /sys/devices/virtual/dmi/id/product_nameTo be filled by O.E.M.$ EDIT4 : On another Synology, I get : $ uname -usynology_broadwell_rs3618xs I guess it's based on Ubuntu+upstart. What other commands can I use to look a little deeper? 
| The uname -a output identifies this as a Synology device. Such devices run Synology DiskStation Manager . This is Linux-based, but it is not managed like a typical Linux system running a “traditional” Linux distribution. It has its own package manager, synopkg , for which third-party packages are made available by SynoCommunity . The DiskStation CLI guide describes a few administration tools available in DSM. If you’re interested in automating administrative tasks on such devices, you might find Synology’s Central Management System useful. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/714742",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/135038/"
]
} |
714,915 | you can call sed multiple times without using multiple pipes by delimiting the cmds with ; (thanks guys…) is there a way to use this for multiple awk -F cmds? Using sed multiple pipes echo "'text';" | \sed s"#';##"g | \ sed s"#'##"g text Using sed with ; as delimter echo "'text';" | \sed " \ s#';##g; \ s#'##g \"text Edit: So you can join multiple awk cmds using ; . But can't do so for multiple awk -F cmds The question is about stringing multiple awk -F cmds which is still unanswered. Background # '/x/ gives the href of the actual videos# awk -F '/x/' '{print$2}’ # because the /x/ is unique to the video urls# after this the video links appear# but I have to get rid of stuff # on the right of them so I do # awk —F 'title' '{print$1}' # this returns all the video links # but they have a double quotes # and a semi colon on the end.curl -s \ https://site.com/plist/page={0..50} | \grep '/x/' | \awk -F '/x/' '{print$2}' | \awk -F 'title' '{print$1}' | \sed ' \ s#";##g; \ s#"##g \' So now I have a bunch of video links and do further processing to get the video download links, I then use mapfile to get the download links into an array and use parallel to download them. I shortened a lot of the stuff I actually do in that code example. Edit: So it can’t be done. Thanks a lot to that user. This user commented about using sed for one of my specific cases which would remove the need for awk -F but I have at least 20 other cases. But it gives me something to think about, the reasons i was doing, awk -F is it because it got me the stuff I needed without knowing any sed regex. Anyway thanks all I wanted to know if it could be done and it can’t so I’m satisfied. Thanks To @StèphaneChazelas, their comment solved my problem. | Update: The question was altered substantially after this answer was posted, so the original answer - while still true - does not help much in solving the actual problem of the OP. It would seem that you try to process curl output of the form Ignore thishttp://some.url.involving/x/'video-link-1';title...http://some.url.involving/x/'video-link-2';title...Ignore that etc., where you want to only process lines where /x/ appears, and extract the part in between ' ... ' The easiest way is to simply use one field separator, the ' : curl -s https://site.com/plist/page={0..50} | awk -F"'" '/\/x\//{print $2}' This will in addition only consider lines that contain the /x/ pattern. So, for the above example, the output would be video-link-1video-link-2 If you want to do it by splitting at changing field separators , you can of course change the internal FS variable mid-way as indicated in the answer by Stéphane Chazelas . However, in that case I would rather use the fact that a multi-character field separator, whether set via -F as option parameter or via assignment of FS inside the awk program, is treated as a full regular expression. That means you can use an "or"-type alternative as field separator to cover both cases in one (but you should then also include the single quote and semicolon to avoid further post-processing needs): curl -s https://site.com/plist/page={0..50} | awk -F'/x/\047|\047;title' '/\/x\//{print $2}' This will set the field separator to be either /x/' or ';title . It will only consider lines that contain the /x/ pattern. On these lines, it will print the second field, which is the information you wanted (and already stripped of the ' and ; ). 
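As a rough illustration, here is that alternation separator applied to a single made-up line of that shape (the URL and title are invented):
$ echo "http://site.com/x/'video-link-1';title=something" | awk -F'/x/\047|\047;title' '/\/x\//{print $2}'
video-link-1
The line is split at /x/' and at ';title , so the second field is exactly the quoted link in between.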
The single quotes are expressed as ASCII code \047 to avoid having the "single quote inside single quotes" problem (I will just assume your operating system is an ASCII-based system, not EBCDIC ). Another approach often encountered is to "replace the entire line by only the interesting part", as in curl -s https://site.com/plist/page={0..50} | awk '/\/x\//{print gensub(/.*\/x\/\047([^\047]+).*/,"\\1","1")}' This will again only consider lines where the pattern /x/ appears, replace the entire line by the content between single-quotes that follow this pattern, and print the modified line to extract only that part. The same is possible with a single sed call, albeit representation of a single-quote via ASCII code doesn't work here, so it is a little more involved. Assuming GNU sed with the -E option for ERE: curl -s https://site.com/plist/page={0..50} | sed -n -E 's|.*\/x\/'\''([^'\'']+).*|\1|p' This will suppress output by default -n , perform the substitution just like in the awk case, and print (the trailing p ) only if a substitution was made , which implies that the /x/' video-link ';title pattern was found. Original answer below Frame challenge: Is it necessary? In awk , you can repeat any modifying commands within the same program as often as you want, as in echo "'text';" | awk '{gsub(/\047;/,""); gsub(/\047/,"")} 1' or echo "'text';" | awk '{gsub(/\047;/,"")} {gsub(/\047/,"")} 1' (using \047 to express single-quotes inside the single-quoted program). And you can also write it up in an easy-to-read way, say echo "'text';" | awk '{gsub(/\047;/,"")}; {gsub(/\047/,"")}; 1' or as a dedicated program: echo "'text';" | awk -f multi-substitute.awk with multi-substitute.awk looking like #!/usr/bin/awk -f{gsub(/\047;/,"")}{gsub(/\047/,"")}1 | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/714915",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/474559/"
]
} |
714,940 | To find all the files that have the word foo in it, and also the word bar in it, we can use grep -Ilri foo . | xargs grep -i bar (which is case insensitive and exclude binary files)... however, if the file path is like /Users/myusername/Text Files then it won't work, because now the xargs portion becomes grep -i bar /Users/myusername/Text Files but in reality it needs to be grep -i bar "/Users/myusername/Text Files" or grep -i bar /Users/myusername/Text\ Files How can it be made to work? (it is on macOS Monterey). | If your grep supports GNU's -Z / --null option to delimit file names with null bytes (FreeBSD's only supports the long --null variant), and your xargs supports the -0 option (also --null with some) to use the null byte as input item separator, you can use those to handle file names safely: grep -Ilri --null foo . | xargs -0 grep -i bar If your grep and xargs don’t support these non-standard options (but then again, neither -r nor -I are standard either), you can use find : find . -type f -exec grep -Iiq foo {} \; -exec grep -iq bar {} \; -print This looks for files starting in the current directory, and for each file it finds, runs grep to determine whether it contains “foo”, and if it does, runs grep again to determine whether it contains “bar”, and if it does, prints its name. This isn’t efficient but it works safely. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/714940",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/19342/"
]
} |
715,015 | In bash script, I have a program like this for i in {1..1000}do foo idone Where I call the function foo 1000 times with parameter i If I want to make it run in multi-process, but not all at once, what should I do? So if I have for i in {1..1000}do foo i &done It would start all 1000 processes at once, which is not what I want. Is there a way to make sure that there is always 100 process running? If some processes are finished, start some new ones, until all 1000 iterations are done. Alternatively, I could wait till all 100 are finished and run another 100. | With zsh instead of bash : autoload -Uz zargszargs -P100 -I{} -- {1..1000} -- foo {} But if you have GNU xargs , you can also do (in zsh , ksh93 or bash ): xargs -I{} -P100 -a <(echo {1..1000}) foo {} foo has to be a standalone command though. It won't work with a shell function or builtin. Note that zsh 's zargs runs one batch after the other: starts 100 jobs, waits for all of them to return and only then starts that next batch of 100. While GNU xargs , will try to keep up to 100 running: start 100 and then start another one every time one finishes. To get that xargs behaviour, in zsh you could start and manage your pool of jobs in a SIGCHLD trap which is triggered whenever a background process returns: ( todo=( {1..1000} ) max=100 TRAPCHLD() { while (( $#jobstates < max && $#todo )); do foo $todo[1] & shift 1 todo done } : start & while (( $#todo )) wait) Here, we need to run it in a subshell to get a fresh job list though. SIGCHLDs are blocked while the TRAPCHLD trap is being run so the trap should not re-enter itself which should avoid race conditions or the need to protect from concurrent access to the $todo list. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/715015",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/368784/"
]
} |
715,130 | This is what I tried: $ mailx -s "test email" cloudCc: "again and agina".EOT Or, $ mailx -s "test email" cloudCc: "this is the first email"<< EOT But after pressing Enter nothing happens.Why? | There is nothing magical about the thee-letter-string EOT . You have probably seen it used as a delimiter in here-document redirections in shell scripts from time to time. Almost any word may be used as a delimiter for a here-document redirection, although it's customary to use a short-ish descriptive word written in all upper-case letters; so you could, for example, send a message with mailx by giving the utility the message on its standard input stream like so: mailx -s 'test message' myself <<'END_MESSAGE'This is the message.Possibly on many lines.END_MESSAGE This would use mailx non-interactively to send an email consisting of two lines of text to the user myself . The body of the message is quoted , i.e., the shell won't try to expand variables etc., in it, due to the quoting of the initial delimiter ( 'END_MESSAGE' ). However, from seeing the two commands in the question, you appear to want to use mailx interactively to type a message into the utility. If you have had the dot option set in your ~/.mailrc file ( set dot ), then typing a single dot on a line by itself as you did in the first part of your question would end the message body and cause the email to be sent: $ cat ~/.mailrcset dot $ mailx -s 'test message' myselfCc:This is the message.Possibly on many lines.. Typing the lone dot and pressing Enter causes the message to be sent. If you don't have the dot option set or if you have the nodot option set in ~/.mailrc , the message body is instead ended using Ctrl+D on an otherwise empty line. Pressing Ctrl+D sends (commits, submits) the current line to the program waiting for input, and if the current line is empty , this will signal the end of input. This is true not just for mailx but for most programs reading interactive input from their standard input stream. Using . on an empty line is also how you signal the end of user input in the ed editor when finishing entering text after issuing the i , a , or c command to insert, append or change the text in the current editing buffer. It wouldn't surprise me if mailx inherited this custom from ed . | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/715130",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/162292/"
]
} |
715,154 | I have two files. File1 contains some sentences, and File2 contains the line numbers I want to keep in File1 . For example, File1 : He is a boy.She is a cook.Okay.She went to school.She is pretty. File2 : 14 Output: He is a boy.She went to school. Is there a way I could do that using sed , grep , or awk ? I don't want to manually write the line number as here . | We could transform the list of numbers into a sequence of sed commands and run them as a sed editing script in a single sed invocation: sed 's/$/p/' lines.list | sed -n -f /dev/stdin file.txt Here, the first sed creates a sed script consisting of commands such as 1p , 4p etc., by simply inserting p at the end of each line. This script is then sent to the second sed after the pipe, which reads it with -f /dev/stdin and applies it with the text file as input. This would require reading each file only once. Using awk , read the line numbers into an associative array as keys, then, while reading the other file, see if the current line number is one of the ones that was previously made a key in the array: awk 'FNR == NR { lines[$0]; next } (FNR in lines)' lines.list file.txt In awk , the special variables NR and FNR are the total number of records (lines) read so far, and the total number of records (lines) read in the current file, respectively. If NR is equal to FNR , we're reading from the first input file, and we create an array entry using the current line, $0 , as the key (no value is given), and immediately skip to the next line of input. If we're not reading from the current line, we test with FNR in lines to see whether FNR , the line number in the current file, is a key in the array called lines . If it is, the current line will be printed. Without heavy support from other tools, the grep utility is not really made for performing this type of task. It extracts lines from text files whose contents match (or do not match) a given pattern. The pattern is therefore supposed to match the line, not the line number. The following is just for fun and should not be considered a suggestion for how to actually solve this issue. You can insert line numbers with grep using grep -n '.*' file.txt This inserts line numbers at the start of all lines in the file, directly followed by : and the original contents of the line. We may then, as with the sed solution, modify the pattern file to make it match a selection of those specific numbers: sed 's/.*/^&:/' lines.list This would output regular expressions such as ^1: and ^4: , each matching a particular line number at the start of a line. We may then get grep to use these expressions (here with the help of a process substitution). Finally, we remove the temporary line numbers using cut : grep -n '.*' file.txt | grep -f <(sed 's/.*/^&:/' lines.list) | cut -d : -f 2- ... but this is too contrived to even be considered a reasonable solution. Each of the above solutions will always display the selected lines in the order in which they occur in the text file. If you want to lines outputted in the order they occur in the line number file, then you may instead use ed (or awk , see further down): sed 's/$/p/' lines.list | ed -s file.txt Again, we create an editing script from our line number file by simply adding p at the end of each line. This script is then passed as the command input to the ed editor, which applies the commands, in order, to the text file. Testing: $ cat lines.list41 $ sed 's/$/p/' lines.list | ed -s file.txtShe went to school.He is a boy. 
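For reference, running the first awk approach on the sample files from the question (assuming they are saved as file.txt and lines.list) prints the selected lines in file order:
$ awk 'FNR == NR { lines[$0]; next } (FNR in lines)' lines.list file.txt
He is a boy.
She went to school.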
Note that ed reads the whole file into memory, just like the following equivalent awk program does: awk 'NR == FNR { lines[FNR] = $0; next } { print lines[$0] }' file.txt lines.list Note that the input files are switched in comparison to the previous awk solution. This allows us to first read the text file into the lines array, line by line, and then select lines randomly out of that while reading the file with line numbers. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/715154",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/486264/"
]
} |
715,283 | Just playing with regular expressions to learn. Why does it match the other files: errsort, pytest.py, etc.? On the second line adding a question mark on the end matched two more files. Tried grep basic expressions too. Thanks! $ ls -x | egrep -i "(aa)(dd)?(cc)?(dd)?((bb(ccdd)?(bb)?)?)|(ccdd)"aa aaaa aabb aabbaa aabbbb aabbccdd aabbccddcc aabbddbbaaccaa aaccdd aaddaa aaddccddccdd aaddee errsort pytest.py TEST$ ls -x | egrep -i "(aa)(dd)?(cc)?(dd)?((bb(ccdd)?(bb)?)?)|(ccdd)?"aa aaaa aabb aabbaa aabbbb aabbccdd aabbccddcc aabbddbbaaccaa aaccdd aaddaa aaddccddccdd aaddee errsort pytest.py TESTtest.sh vimtest$ bash --versionGNU bash, version 5.1.16(1)-release (x86_64-pc-linux-gnu)$ grep --versiongrep (GNU grep) 3.7 $ ls | egrep -i "(aa)(dd)?(cc)?(dd)?((bb(ccdd)?(bb)?)?)|(ccdd)" aa aaaa aabb aabbaa aabbbb aabbccdd aabbccdd cc aabb ddbb aaccaa aaccdd aaddaa aaddccddccdd aadd ee$ ls -x | egrep -i "(aa)(dd)?(cc)?(dd)?((bb(ccdd)?(bb)?)?)|(ccdd)?" aa aaaa aabb aabbaa aabbbb aabbccdd aabbccdd cc aabb ddbb aaccaa aaccdd aaddaa aaddccddccdd aadd ee errsort pytest.py TESTtest.sh vimtest$ ls | egrep -i "(aa)(dd)?(cc)?(dd)?((bb(ccdd)?(bb)?)?)|(ccdd)?" aa aaaa aabb aabbaa aabbbb aabbccdd aabbccdd cc aabb ddbb aaccaa aaccdd aaddaa aaddccddccdd aadd eeerrsortpytest.pyTESTtest.shvimtest$ $ ls | egrep -io "(aa)(dd)?(cc)?(dd)?((bb(ccdd)?(bb)?)?)|(ccdd)?" aa aa aa aabb aabb aa aabbbb aabbccdd aabbccdd aabb aacc aa aaccdd aadd aa aaddccdd ccdd aadd This doesn't give you the color, but the bold is red on my screen to indicate the pattern match i think. The files on the bottom don't have any red. Removing -x did affect output on the first one.The -o helps a lot! | An optional match like (ccdd)? is exactly that: optional. It could happen that the matched text is empty , but still, that is a match, a line that has a match. $ ls -xaa aaaa aabb aabbaa aabbbb aabbccdd aabbccddcc aabbddbb aaccaa aaccdd aaddaa aaddccddccdd aaddeeerrsort pytest.py TEST test.sh vimtest$ ls -x | cataa aaaa aabb aabbaa aabbbb aabbccdd aabbccddcc aabbddbbaaccaa aaccdd aaddaa aaddccddccdd aaddee errsort pytest.py TESTtest.sh vimtest It is important to understand that when the ls output is piped, it changes.Now there are three lines to match with the grep. As the first and second lines match on the (several) aa , both are printed. $ ls -x | egrep -i "(aa)(dd)?(cc)?(dd)?((bb(ccdd)?(bb)?)?)|(ccdd)"aa aaaa aabb aabbaa aabbbb aabbccdd aabbccddcc aabbddbbaaccaa aaccdd aaddaa aaddccddccdd aaddee errsort pytest.py TEST$ ls -x | egrep -i "(aa)(dd)?(cc)?(dd)?((bb(ccdd)?(bb)?)?)|(ccdd)?"aa aaaa aabb aabbaa aabbbb aabbccdd aabbccddcc aabbddbbaaccaa aaccdd aaddaa aaddccddccdd aaddee errsort pytest.py TESTtest.sh vimtest As shown, the third line gets a match on the empty string due to the (ccdd)? . I am assuming the format of your list of files is incorrect. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/715283",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/539324/"
]
} |
715,653 | When I do a sudo systemctl status elasticsearch.service or a journalctl -xe on Debian 11, I have this output: many lines end with > characters and I don't know how to handle these lines to see their remaining characters, or how to disable this feature and have the commands produce normal output, dumping the whole content of these lines and not only the characters that can fit horizontally. | You can see the full text using the right arrow key of your keyboard. If I'm not wrong, when you use some commands like journalctl options... , systemctl options... these page their output through the less command. This happens when the output lines are bigger than the width of your terminal. If you want to avoid this behavior you can use: systemctl status --no-pager elasticsearch.service or journalctl -xe --no-pager Or if the command doesn't have some option like --no-pager you can try piping the output to the cat command: systemctl status elasticsearch.service | cat | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/715653",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/350549/"
]
} |
715,751 | From the manpage man limits.h : {WORD_BIT} Number of bits in an object of type int. Minimum Acceptable Value: 32 However if I run a simple program: #include <limits.h>#include <stdio.h>int main() { printf("%d\n", WORD_BIT); return 0;} However when trying to compile with gcc ( gcc file.c -o file ) I get the following: error: ‘WORD_BIT’ undeclared (first use in this function) How come this is not defined on my system, and where else can I find this information (in a C program)? My system: Fedora 36 (Silverblue) gcc version 12.2.1 20220819 (Red Hat 12.2.1-1) (GCC) ldd (GNU libc) 2.35 | I’m not sure this is documented, but with the GNU C library you need to set _GNU_SOURCE to get WORD_BIT : $ gcc -include limits.h -E - <<<"WORD_BIT" | tail -n1WORD_BIT$ gcc -D_GNU_SOURCE -include limits.h -E - <<<"WORD_BIT" | tail -n132 You should really use sysconf : #include <unistd.h>#include <stdio.h>int main(int argc, char **argv) { printf("%ld\n", sysconf(_SC_WORD_BIT));} or, as recommended in the GNU C library documentation : #include <limits.h>#include <unistd.h>#include <stdio.h>int main(int argc, char **argv) {#ifdef WORD_BIT printf("%d\n", WORD_BIT);#else printf("%ld\n", sysconf(_SC_WORD_BIT));#endif} (As for why this appears in man limits.h , this man page is the POSIX reference for limits.h , and doesn’t necessarily document the C library on your system exactly. You can see this by looking at the section of the man page — the “P” suffix indicates that it’s POSIX documentation.) You could use sizeof(int) * CHAR_BIT instead; CHAR_BIT is always defined. What’s more, POSIX extends the C standard and specifies that CHAR_BIT is 8, so if you assume POSIX, you could use sizeof(int) * 8 . | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/715751",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/4098/"
]
} |
715,831 | I'm reading the GNU docs , and I see the following sentence as part of the definition of PS4 : The first character of the expanded value is replicated multiple times, as necessary, to indicate multiple levels of indirection. The default is ‘+ ’. I interpret this to mean that I will see a + symbol for each level of abstraction in my code. I wanted to reproduce this behavior in my shell, so I wrote the following code: #!/usr/bin/env bashfunction baz() { echo "inside baz"}function foo() { echo "inside foo"; baz;}function bar() { echo "inside bar"; foo;}set -x;bar; Since bar calls foo , and foo calls baz , I had expected up to 3 levels of indirection, and therefore I expected to see something like the following (or similar) as output: + bar+ echo 'inside bar'inside bar++ foo++ echo 'inside foo'inside foo+++ baz+++ echo 'inside baz'inside baz However, that's not what I see. Instead, I see: + bar+ echo 'inside bar'inside bar+ foo+ echo 'inside foo'inside foo+ baz+ echo 'inside baz'inside baz Am I mis-understanding what "level of indirection" means in this context, or am I just failing to reproduce said levels of indirection correctly? | The indirection is not related to functions as you have observed, but rather relates to eval , as this example shows: #!/usr/bin/env bashset -xecho 1eval "echo 2"eval 'eval "echo 3"' this should emit something like + echo 11+ eval 'echo 2'++ echo 22+ eval 'eval "echo 3"'++ eval 'echo 3'+++ echo 33 And, as muru points out, also sourced files: $ cat codeecho helpsource code$ bash -x code Hopefully your control + c is ready and warmed up... | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/715831",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/142469/"
]
} |
715,899 | Say I have a C program main.c that statically links to libmine.a . Statically linking to a library causes library functions to be embedded into the main executable at compile time. If libmine.a were to feature functions that weren't used by main.c , would the compiler (e.g. GCC) discard these functions? This question is inspired by the "common messaging" that using static libraries make executables larger, so I'm curious if the compiler at least strips away unused code from an archive file. | By default, linkers handle object files as a whole. In your example, the executable will end up containing the code from main.c ( main.o ), and any object files from libmine.a (which is an archive of object files) required to provide all the functions used by main.c (transitively). So the linker won’t necessarily include all of libmine.a , but the granularity it can use isn’t functions (by default), it’s object files (strictly speaking, sections). The reason for this is that when a given .c file is compiled to an object file, information from the source code is lost; in particular, the end of a function isn’t stored, only its start, and since multiple functions can be combined, it’s very difficult to determine from an object file what can actually be removed if a function is unused. It is however possible for compilers and linkers to do better than this if they have access to the extra information needed. For example, the LightspeedC programming environment on ’80s Macs could use projects as libraries, and since it had the full source code in such cases, it would only include functions that were actually needed. On more modern systems, the compiler can be told to produce object files which allow the linker to handle functions separately. With GCC, build your .o files with the -ffunction-sections -fdata-sections options enabled, and link the final program with the --gc-sections option. This does have an impact, notably by preventing certain categories of optimisation; see discard unused functions in GCC for details. Another option you can use with modern compilers and linkers is link-time optimisation; enable this with -flto . When optimisation is enabled ( e.g. -O2 when compiling the object files), the linker will not include unused functions in the resulting binary. This works even without -ffunction-sections -fdata-sections . | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/715899",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/287718/"
]
} |
716,060 | I'm able to auto detect RAM in GB as below and round off to the nearest integer: printf "%.f\n" $(grep MemTotal /proc/meminfo | awk '$3=="kB"{$2=$2/1024^2;$3="GB";} 1' | awk '{print $2}') Output: 4 I multiply by 2 to determine the required swap as 8GB ans=`expr $(printf "%.f\n" $(grep MemTotal /proc/meminfo | awk '$3=="kB"{$2=$2/1024^2;$3="GB";} 1' | awk '{print $2}')) \* 2`echo "$ans"G Output: 8G With the below commands I try to create 8GB swap memory. echo "Creating $ans GB swap memory"sudo dd if=/dev/zero of=/swapfile bs="$ans"G count=1048576sudo chmod 600 /swapfilesudo mkswap /swapfilesudo swapon /swapfilesudo swapon --show However, I get the below error: Creating 8 GB swap memorydd: memory exhausted by input buffer of size 8589934592 bytes (8.0 GiB)mkswap: error: swap area needs to be at least 40 KiBswapon: /swapfile: read swap header failed. Can you please suggest and help me auto-create swap memory which Ideally should be double of that of the RAM. System details: root@DKERP:~# uname -aLinux DKERP 5.4.0-124-generic #140-Ubuntu SMP Thu Aug 4 02:23:37 UTC 2022 x86_64 x86_64 x86_64 GNU/Linuxroot@DKERP:~# free -g -h -t total used free shared buff/cache availableMem: 3.8Gi 1.0Gi 207Mi 54Mi 2.6Gi 2.5GiSwap: 0B 0B 0BTotal: 3.8Gi 1.0Gi 207Mi | The reason why your dd command didn't work is because you set dd's block size to 8 GB. i.e. you told it to read and write 8 GiB at a time, which would require a RAM buffer of 8 GB. As Marcus said, 8 GiB is more RAM than you have, so a buffer of that size isn't going to work. And ~ 8 billion megabytes (8 GiB x 1M = 8 peta bytes, 9,007,199,254,740,992 bytes) is way more disk space than you have too....it's way more than most high-end storage clusters in the world would have. It would work if you used reasonable values for both bs and count. For example, 1 MiB x 8K = 8 GiB: dd if=/dev/zero of=/swapfile bs=1048576 count=8192 or dd if=/dev/zero of=/swapfile bs=1M count=8K | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/716060",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/392596/"
]
} |
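Putting the whole sequence together, a sketch that sizes the file to roughly twice the detected RAM (path and rounding are assumptions, not taken from the question):

ram_kb=$(awk '/^MemTotal/ {print $2}' /proc/meminfo)
swap_mb=$(( ram_kb * 2 / 1024 ))        # twice RAM, expressed in MiB
sudo dd if=/dev/zero of=/swapfile bs=1M count="$swap_mb" status=progress
sudo chmod 600 /swapfile
sudo mkswap /swapfile
sudo swapon /swapfile
swapon --show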
716,088 | Background I am running a server with RHEL 8 in a closed network environment. I can upload files to the server, but I can't use yum update on the server.I want to update a specific package to the latest version. To update the package, I have to update the packages required from the updated package. Problem If you can enumerate all the URLs of .rpm files required to update the package, I can download the .rpm files from the other environment and upload them to the server. I know you can use repoquery to retrieve a dependency tree and the URL of the packages in the tree, but there is no RHEL environment other than the one in the closed network. Question Is there any way to enumerate all the URLs of .rpm files required to update a specific package without a RHEL environment? | The reason why your dd command didn't work is because you set dd's block size to 8 GB. i.e. you told it to read and write 8 GiB at a time, which would require a RAM buffer of 8 GB. As Marcus said, 8 GiB is more RAM than you have, so a buffer of that size isn't going to work. And ~ 8 billion megabytes (8 GiB x 1M = 8 peta bytes, 9,007,199,254,740,992 bytes) is way more disk space than you have too....it's way more than most high-end storage clusters in the world would have. It would work if you used reasonable values for both bs and count. For example, 1 MiB x 8K = 8 GiB: dd if=/dev/zero of=/swapfile bs=1048576 count=8192 or dd if=/dev/zero of=/swapfile bs=1M count=8K | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/716088",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/451425/"
]
} |
716,092 | I know we can export named PDF destinations (hypertargets of external links, like we can reference them in LaTeX with \href{myfile.pdf#page.42}{link text} ) in Linux with the command line pdfinfo -dests . But how can we add/create such named destination links, in a ready made PDF file we receive from somewhere, with a command line tool? | The reason why your dd command didn't work is because you set dd's block size to 8 GB. i.e. you told it to read and write 8 GiB at a time, which would require a RAM buffer of 8 GB. As Marcus said, 8 GiB is more RAM than you have, so a buffer of that size isn't going to work. And ~ 8 billion megabytes (8 GiB x 1M = 8 peta bytes, 9,007,199,254,740,992 bytes) is way more disk space than you have too....it's way more than most high-end storage clusters in the world would have. It would work if you used reasonable values for both bs and count. For example, 1 MiB x 8K = 8 GiB: dd if=/dev/zero of=/swapfile bs=1048576 count=8192 or dd if=/dev/zero of=/swapfile bs=1M count=8K | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/716092",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/104277/"
]
} |
716,458 | I have a big json file, I have put two sections of that file below. [ { "description": null, "icmp-options": null, "is-stateless": false, "protocol": "17", "source": "1.0.0.0/8", "source-type": "CIDR_BLOCK", "tcp-options": null, "udp-options": { "destination-port-range": null, "source-port-range": { "max": 1433, "min": 521 } } }, { "description": null, "icmp-options": null, "is-stateless": false, "protocol": "17", "source": "1.0.0.0/8", "source-type": "CIDR_BLOCK", "tcp-options": null, "udp-options": { "destination-port-range": null, "source-port-range": { "max": 1899, "min": 1435 } } }] I want to change the destination-port-range value as below "destination-port-range": { "max": 100, "min": 90 }, As the json file is very big, can someone help me how can this be done using jq or any other method? | Using the JSON processing tool jq to add the min and max values taken from the command line: jq --argjson min 90 --argjson max 100 \ 'map(."udp-options"."destination-port-range" = $ARGS.named)' file By using the --argjson options like this, we create an internal variable $ARGS , whose named key will be the object {"min":90,"max":100} . I'm using --argjson instead of --arg as the latter would import the values as strings . The expression ."udp-options"."destination-port-range" = $ARGS.named assigns this object to the destinaton-port-range sub-object of udp-options , and we use map() to apply this to all array elements in the input. The key names have to be quoted as they contain dashes. The result, given the data in the question, will be the equivalent of this: [ { "description": null, "icmp-options": null, "is-stateless": false, "protocol": "17", "source": "1.0.0.0/8", "source-type": "CIDR_BLOCK", "tcp-options": null, "udp-options": { "destination-port-range": { "max": 100, "min": 90 }, "source-port-range": { "max": 1433, "min": 521 } } }, { "description": null, "icmp-options": null, "is-stateless": false, "protocol": "17", "source": "1.0.0.0/8", "source-type": "CIDR_BLOCK", "tcp-options": null, "udp-options": { "destination-port-range": { "max": 100, "min": 90 }, "source-port-range": { "max": 1899, "min": 1435 } } }] Would you only want to update the value if there is no pre-existing non-null (or non-false) value, use the following expression instead: map(."udp-options"."destination-port-range" |= (. // $ARGS.named)) This updates the value to the same as the current value unless it is false or null , in which case the new data is used. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/716458",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/525521/"
]
} |
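Since jq has no in-place option, a common pattern is to write to a temporary file and move it over the original (file names assumed); sponge from moreutils works the same way:

jq --argjson min 90 --argjson max 100 \
  'map(."udp-options"."destination-port-range" = $ARGS.named)' file > file.tmp && mv file.tmp file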
716,471 | I need to copy one very large file (3TB) on the same machine from one external drive to another. This might take (because of low bandwidth) many days. So I want to be prepared when I have to interrupt the copying and resume it after, say, a restart.From what I've read I can use rsync --append for this (with rsync version>3). Two questions about the --append flag here: Do I use rsync --append for all invocations? (For the first invocation when no interrupted copy on the destination drive yet exists and for the subsequent invocations when there is an interrupted copy at the destination.) Does rsync --append resume for the subsequent invocations the copying process without reading all the already copied data? (In other words: Does rsync mimic a dd -style seek-and-read operation ?) | Do I use rsync --append for all invocations? Yes, you would use it each time (the first time there is nothing to append, so it's a no-op; the second and subsequent times it's actioned). But do not use --append at all unless you can guarantee that the source is unchanged from the previous run (if any), because it turns off the checking of what has previously been copied. Does rsync --append resume for the subsequent invocations… without reading all the already copied data? Yes, but without rsync --partial would probably have first deleted the target file. The correct invocation would be something like this: rsync -a -vi --append --inplace --partial --progress /path/to/source/ /path/to/target You could remove --progress if you didn't want to see a progress indicator, and -vi if you are less bothered about a more informational result (you'll still get told if it succeeds or fails). You may see -P used in other situations: this is the same as --partial --progress and can be used for that here too. --append to continue after a restart without checking previously transferred data --partial to keep partially transferred files --inplace to force the update to be in-place If you are in any doubt at all that the source might have changed since the first attempt at rsync , use the (much) slower --append-verify instead of --append . Or better still, remove the --append flag entirely and let rsync delete the target and start copying it again. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/716471",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/45940/"
]
} |
716,493 | Trying to read nth line from file and split in to array based on delimiter HEAD_START=4IFS='|' read -r -a headers < sed "${HEAD_START}q;d" "/FILE_UPLOADS/Checklist-Relationship (4).txt" The above gives "sed: cannot open [No such file or directory]" But when i run just sed "${HEAD_START}q;d" "/FILE_UPLOADS/Checklist-Relationship (4).txt" in the prompt it works fine | read -r -a headers < sed ... is trying to open a file named sed for reading. In bash, to run sed as a command and make its output available on the standard input stream, you can use a process substitution : IFS='|' read -r -a headers < <(sed "${HEAD_START}q;d" "/FILE_UPLOADS/Checklist-Relationship (4).txt") | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/716493",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/442726/"
]
} |
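An alternative sketch that avoids the process substitution, using a bash here-string with a command substitution (same file path as in the question):

IFS='|' read -r -a headers <<< "$(sed "${HEAD_START}q;d" "/FILE_UPLOADS/Checklist-Relationship (4).txt")"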
716,497 | I'm using a CentOS and I want to write a shell script. So I have a file with a date: > > cat VM1_EOMAP_TIME.log07 Sep 2022 16:30> And I want to get the minutes since current time and the date in the file My idea is:("EPOCH from current time" - "EPOCH from the date of that file") / 60 But I can't get the "EPOCH from current time" > cat VM1_EOMAP_TIME.log07 Sep 2022 16:30 > date --date='07 Sep 2022 16:30' +%s1662568200 > date --date=$(cat VM1_EOMAP_TIME.log) +%s date: extra operand ‘2022’Try 'date --help' for more information.> date --date=`cat VM1_EOMAP_TIME.log` +%sdate: extra operand ‘2022’Try 'date --help' for more information.> TTT="07 Sep 2022 16:30"> echo $TTT07 Sep 2022 16:30> date --date=$TTT +%sdate: extra operand ‘2022’Try 'date --help' for more information. Why I get "date: extra operand ‘2022’" error? | read -r -a headers < sed ... is trying to open a file named sed for reading. In bash, to run sed as a command and make its output available on the standard input stream, you can use a process substitution : IFS='|' read -r -a headers < <(sed "${HEAD_START}q;d" "/FILE_UPLOADS/Checklist-Relationship (4).txt") | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/716497",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/540542/"
]
} |
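On the error in this question: the unquoted expansion is split into several words (--date=07, Sep, 2022, 16:30), which is why date complains about an extra operand. Double-quoting the variable or the command substitution avoids the splitting:

date --date="$TTT" +%s
date --date="$(cat VM1_EOMAP_TIME.log)" +%s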
716,653 | I take a date at nanosecond precision: $ start=$(date '+%s.%N') ...then print it: $ echo ${start}1662664850.030126174 So far so good. But look what I get when I printf with some arbitrarily huge precision: 1662664850.0301261739805340766906738281250000000000000000000000000 Q1. Did the date command actually populate the start variable with that much information, or are those digits just garbage? Here's the second part of the question. Let's say I want to do some time math. Create an "end" timestamp: $ end=$(date '+%s.%N')$ echo ${end}1662665413.471669572$ printf "%.55f\n" ${end}1662665413.4716695720562711358070373535156250000000000000000000000 Now I get the result I expect using bc: $ echo $(bc <<< ${end}-${start})563.441543398 But check out what I get when I use python or perl: $ echo $(python -c "print(${end} - ${start})")563.441543579$ echo $(perl -e "print(${end} - ${start})")563.441543579102 At a certain point, the number goes off the rails: bc 563.441543 398 python 563.441543 579 perl 563.441543 579 102 Q2. The numbers are different, but not in a way that you'd expect if it was due to rounding. What gives? System info: Linux 3.10.0-1160.71.1.el7.x86_64 #1 SMP Wed Jun 15 08:55:08 UTC 2022 Command info: date (GNU coreutils) 8.22 bc 1.06.95 Python 2.7.5 perl 5, version 16, subversion 3 (v5.16.3) built for x86_64-linux-thread-multi | Yes, some imprecision is to be expected using floats, that is an unavoidable result of using a limited set of bits, or bytes to express a real number. Or also called a number with several fractional digits. For example, with a shorter version of the number you used: ➤ printf '%.50f\n' 50.03012617450.03012617399999999862059141264580830466002225875854 A1 Q1. Did the date command actually populate the start variable with that much information, or are those digits just garbage? A1.1. No, date did not populate the value with that much information. A1.2. Garbage? Well, depends on who you ask. But, for me, yes they are pretty much garbage. A2 Q2. The numbers are different, but not in a way that you'd expect if it was due to rounding. What gives? That is entirely the result of rounding 64 bits floats (53 bits mantissa). No more than 15 decimal digits should be considered reliable for a double float. Solutions You already found that bc works fine, but here are some alternatives: [date]$ date -ud "1/1/1970 + $end sec - $start sec " +'%H:%M:%S.%N'00:09:23.441543398[bc]$ bc <<<"$end - $start"563.441543398[awk] (GNU) $ awk -M -vPREC=200 -vend="$end" -vstart="$start" 'BEGIN{printf "%.30f\n",end - start}'563.441543398000000000000000000000[Perl]$ perl -Mbignum=p,-50 -e 'print '"$end"' - '"$start"', "\n"'563.44154339800000000000000000000000000000000000000000[python]$ python3 -c "from mpmath import *;mp.dps=50;print('%.30s'%(mpf('$end')-mpf('$start')));"563.44154339800000000000000000 Integer Math But actually both start and end are not floats. Each one is an string concatenation of two integers with a dot in the middle. We can separate each (directly in the shell) and use almost anything to do the math, even the integer shell math: Unreliable digits Some may argue that we can get the mathematically exact result of the best representation of the given number. 
And, yes, we can calculate a lot of binary digits: ➤ bc <<<'scale=100; obase=2; 50.030126174/1'110010.0000011110110110010110010101010000010101011001110010101110011\00101110010000100101010100000110100011110000011110111100101011110011\11111100000010000001101100100011111010100011011101100011001110110000\01010011000100001110101110000010000100010000100100110100001010001001\11011111101101001010100001100000001010000101011000101100110101001100 And those are 339 binary digits. But in any case, we have to fit that into the memory space of a float (which could have several memory representations, but probably a long double). We may choose to talk about a float representation capable of 64 binary digits (An extended float, 80 bits in an Intel FP-87) as is the most commonly used on x86 machines by the most common linux compiler gcc . Other compilers may use something else, like a 53 bits mantissa for a 64 bits double float. Then we must cut the binary number from above down to either of this two numbers: 110010.0000011110110110010110010101010000010101011001110010101110110010.0000011110110110010110010101010000010101011001110010101111 From both, the one which is closer to the original is the best representation. The exact (mathematical) decimal values of both numbers are: 50.03012617399999999862059141264580830466002225875854492187500000050.030126174000000002090038364599422493483871221542358398437500000 The differences from the original number are: 00.00000000000000000137940858735419169533997774124145507812500000000.000000000000000002090038364599422493483871221542358398437500000 Thus, also mathematically, the best number is the one that ends in 0. That is why some may argue that the result could be mathematically calculated. And while that is true, here is the problem. That is an approximation, a good approximation, the best approximation, but an approximation anyway. And, there is no way to know before hand (before converting the real number into a binary) what is going to be the exact magnitude of the distance between the original number and the approximation value: The approximation error. The distances are pretty much random. The error magnitude is pretty much a random number. That is why I say that the digits after the 18 digit (for a 64 float) are unreliable. For 53 bits (double) anything longer than 15 digits is unreliable. $ bc <<<"scale=20;l2=l(2)/l(10); b=53 ;d=((b-1)*l2);scale=0;d/1"15 Formula copied from 1967 D.W.Matula paper, but it is easier to find in the C standard: C11 5.2.4.2.2p11. If the limit is 15 digits, you can see where to cut: 1662664850.0301261741662665413.4716695721234567890.12345 ^----cut here! That is why you get some imprecision in Python and Perl at that point. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/716653",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/540699/"
]
} |
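The integer-only approach mentioned above ("separate each directly in the shell") could look like this in bash — a sketch, with the 10# prefix guarding against nanosecond fields with leading zeros being read as octal:

s1=${start%.*} n1=${start#*.}
s2=${end%.*}   n2=${end#*.}
sec=$(( s2 - s1 ))
nsec=$(( 10#$n2 - 10#$n1 ))
if (( nsec < 0 )); then sec=$(( sec - 1 )); nsec=$(( nsec + 1000000000 )); fi
printf '%d.%09d\n' "$sec" "$nsec"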
716,677 | If below should return true but it actually fails with an error reporting too many arguments when executed in bash shell. I think I followed all the guidelines with double quotes and can't figure out why it is failing for the life of me... Any ideas? #!/usr/bin/bash# These are the values in the environment#echo ">"$PS_APP_HOME"<" => >/app/psoft/app_85909<#echo ">"PS_CUST_HOME"<" => >/app/psoft/cust_85909<#echo ">"$(pwd)"<" => >/app/psoft/app_85909<if [ "$(pwd)" != "$PS_APP_HOME" ] -o [ "$(pwd)" != "$PS_CUST_HOME" ] | [ "$(pwd)" != "$PS_APP_HOME" ] -o [ "$(pwd)" != "$PS_CUST_HOME" ] calls the [ command with the following as arguments: output of pwd ¹, != , the contents of the PS_APP_HOME variable, ] , -o , [ , the output of another invocation of pwd , != , the contents of the PS_CUST_HOME variable, and ] . The ] is meant to be the last argument, so when [ sees -o after the first ] , it is confused. [ has a deprecated -o operator for OR , but that's meant to be used as [ some-condition -o some-other-condition ] . It should however not be used as it makes for unreliable test expressions. Here, using OR also doesn't make sense. The current working directory cannot be at the same time something ( $PS_APP_HOME ) and something else ( $PS_CUSTOM_HOME ), so at least one of "$(pwd)" != "$PS_APP_HOME" or "$(pwd)" != "$PS_CUST_HOME" is going to be true. Presumably you meant AND instead of OR . So: with standard syntax: if [ "$PWD" != "$PS_APP_HOME" ] && [ "$PWD" != "$PS_CUST_HOME" ]; then echo current directory is neither the APP nor CUST homefi (where we run a second [ command if the first one was successful using the && shell (not [ ) operator). Korn-like syntax: if [[ $PWD != "$PS_APP_HOME" && $PWD != "$PS_CUST_HOME" ]]; then echo current directory is neither the APP nor CUST homefi or if [[ ! ($PWD = "$PS_APP_HOME" || $PWD = "$PS_CUST_HOME") ]]; then echo current directory is neither the APP nor CUST homefi where [[...]] is a special construct with its own conditional expression micro-language code inside which also has some && (and) / || (or), ! (not) boolean operators. Though you could also use case : case $PWD in ("$PS_APP_HOME" | "$PS_CUST_HOME") ;; (*) echo current directory is neither the APP nor CUST homeesac $PWD is like $(pwd) except it's more efficient as it doesn't need to fork another process and get its output through a pipe and means it still works if the path of the current working directory ends in newline characters.² Beware the double quotes above are important, I've only put them where they are strictly necessary (to prevent split+glob in the arguments of [ and to prevent variable values to be taken as a pattern in the argument the != / = operators of the [[...]] construct or case ), though having all expansions quoted would not harm. Instead of doing lexical comparisons, also note that ksh/bash/zsh's [[....]] and most [ implementations including the [ builtin of bash support a -ef operator, to check whether two files are the same (after symlink resolution), so you could use that instead of = : if [[ ! (. -ef $PS_APP_HOME || . -ef $PS_CUST_HOME) ]]; then echo current directory is neither the APP nor CUST homefi Or for sh (most sh s): if [ ! . -ef "$PS_APP_HOME" ] && [ ! . -ef "$PS_CUST_HOME" ]; then echo current directory is neither the APP nor CUST homefi Here also using . which unlike $PWD or $(pwd) is guaranteed to refer to the current working directory. 
That way if $PWD is /opt/app or /some/link/to/app and $PS_APP_HOME is /opt/./app or /opt/foo/../app or /opt//app for instance, that will still work. ¹ stripped of all trailing newline characters, so maybe not the current working directory. ² in some shells, $PWD might give you stale information though if the current working directory has been renamed under your feet. But then again, that's also true of pwd in some shells which just output the value of $PWD and only update it upon cd / pushd / popd . | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/716677",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/540082/"
]
} |
716,848 | In the script it reads: rm -v !\(*.yaml\) ;\ this produces rm: cannot remove '!(*.yaml)': No such file or directory but works fine in the command line.Have tried escaping in various ways: '\!\(\*.yaml\)'`\!\(\*.yaml\)``!\(*.yaml\"\)`"\!\(\*.yaml\)" Can't seem to figure out the appropriate escape sequence, I simply don't understand. Escaping the brackets was my first step. Then trying to escape the ! , then the * . Also tried no escaping, using back-ticks but I got the error "rm missing operand". Im a little stumped. Have been at it for about an hour - just "rm everything not yaml"... Can anyone perhaps spot the error/suggest a fix? I have also tried #!/bin/sh and #!/bin/bash. Thought it would maybe have some effect. | !(x) is a Korn shell glob operator. bash supports a subset of ksh 's globs operators including that one but only after shopt -s extglob ¹ So: shopt -s extglobrm -f -- !(*.yaml) Will remove all the non-hidden files in the current directory except those whose name ends in .yaml . In any case, glob operators are not recognised as such when quoted, so using \ (one of the quoting operators in Bourne-like shells) won't help. The equivalent in the zsh shell would be: rm -f -- ^*.yaml For which you need set -o extendedglob . (it does also support !(x) after emulate ksh or set -o kshglob ). There is no equivalent in the sh language, though you may find that some sh implementations, like those based on ksh support !(x) as an extension. ¹ as that affects the shell syntax parsing, the shopt command must have been executed before a line containing one of those ksh extended operators is read and parsed. For instance in bash -c 'shopt -s extglob; echo !(x)' , you get a syntax error because the whole line is parsed first (with extglob not on yet), and then shopt is run (well, would have been run if not for the syntax error). bash -O extglob -c 'echo !(x)' is fine. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/716848",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/540904/"
]
} |
717,101 | I am encountering strange behaviour when trying to switch on and off background colors in terminal output: #!/bin/shprintf "\e[48;5;203m"printf "AAA\n"printf "\e[0m"printf "BBB\n"printf "CCC\n" I want AAA to be printed with red background, then switch off the background color, and print the next lines. However, this is how the output looks like: UPDATE OK, I tried from a new terminal, and there it works as expected.But I still have the old terminal window open, where I get the output as shown.What is happening there? Is there some "garbage" left in the terminal, that is causing this? I did reset in the old terminal window, and the output is now correct. | When AAA\n is printed at the very bottom of the terminal , the terminal needs to scroll the text and make an empty line appear at the bottom. It displays the line using the current background color, which is red. Then BBB\n is printed over this background, using its own background color. The new background color affects only few characters in the current line ( BBB ), but it is relevant when the next empty line appears. In effect the next line (where CCC is going to appear) looks normal. When AAA\n is printed not at the bottom, the terminal does not need to add a line, empty space is already there. It so happens the empty space is black. To reproduce, run your code several times until you get to the bottom of the terminal and "beyond". The following two commands, when repeated (each one in its own terminal), give output that looks identical, until the bottom is reached: printf "\e[48;5;203mAAA\n\e[0m" printf "\e[48;5;203mAAA\e[0m\n" In the second case the background gets reset before \n . My testbed: Konsole 21.12.3, TERM=xterm-256color . | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/717101",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/155832/"
]
} |
717,114 | I have thousands of folders with files inside them and I want to copy some of them in another directory. I have a .csv file with two columns with part of the folder name (the folder contain one string value or another, not both). Example of folder names: PLASMA_32150129_B5/PLASMA_AAA3891784_B3/... The CSV file has no header and the fields are separated by , : 32150129,AAA061693832140203,AAA389178432140204,AAA061723732140205,AAA061726132140206,AAA0617285... I found this little script as a starting point: while IFS=, read -r file restdo find /path/to/Main_directory -type d -name "${file}" -exec cp '{}' /path/to/New_directory/ \;done < mylist.csv Now I need to specify that the csv values are just a pattern (like *_32150129 _* ), and I want to try pattern in the first column first, and if that doesn't generate a match, try with the other one. Is this possible? Thank you! | When AAA\n is printed at the very bottom of the terminal , the terminal needs to scroll the text and make an empty line appear at the bottom. It displays the line using the current background color, which is red. Then BBB\n is printed over this background, using its own background color. The new background color affects only few characters in the current line ( BBB ), but it is relevant when the next empty line appears. In effect the next line (where CCC is going to appear) looks normal. When AAA\n is printed not at the bottom, the terminal does not need to add a line, empty space is already there. It so happens the empty space is black. To reproduce, run your code several times until you get to the bottom of the terminal and "beyond". The following two commands, when repeated (each one in its own terminal), give output that looks identical, until the bottom is reached: printf "\e[48;5;203mAAA\n\e[0m" printf "\e[48;5;203mAAA\e[0m\n" In the second case the background gets reset before \n . My testbed: Konsole 21.12.3, TERM=xterm-256color . | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/717114",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/541196/"
]
} |
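A sketch of one way to do what this question asks (try the first column's pattern, fall back to the second), using the same placeholder paths and CSV layout; it assumes the directory names contain no whitespace:

while IFS=, read -r id1 id2; do
  found=$(find /path/to/Main_directory -type d -name "*_${id1}_*")
  [ -z "$found" ] && found=$(find /path/to/Main_directory -type d -name "*_${id2}_*")
  for d in $found; do
    cp -r "$d" /path/to/New_directory/    # -r because the matches are directories
  done
done < mylist.csv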
717,118 | I have one details.txt file which has below data size=190000date=1603278566981repo-name=testuploadrepo-path=/home/test/testuploadsize=140000date=1603278566981repo-name=testupload2repo-path=/home/test/testupload2size=170000date=1603278566981repo-name=testupload3repo-path=/home/test/testupload3 and so on and below awk script process that to #!/bin/bashawk -vOFS='\t' 'BEGIN{ FS="=" }/^size/{ if(++count1==1){ header=$1"," } sizeArr[++count]=$NF next}/^@repo-name/{ if(++count2==1){ header=header OFS $1"," } repoNameArr[count]=$NF next}/^date/{ if(++count3==1){ header=header OFS $1"," } dateArr[count]=$NF next }/^@blob-name/{ if(++count4==1){ header=header OFS $1"," } repopathArr[count]=$NF next}END{ print header for(i=1;i<=count;i++){ printf("%s,%s,%s,%s,%s\n",sizeArr[i],repoNameArr[i],dateArr[i],repopathArr[i]) }}' details.txt | tr -d @ |awk -F, '{$3=substr($3,0,10)}1' OFS=,|sed 's/date/creationTime/g' which prints value well formatted size date repo-name repo-path190000 1603278566981 testupload /home/test/testupload140000 1603278566981 testupload2 /home/test/testupload2170000 1603278566981 testupload3 /home/test/testupload3 I want to add if when any of the size/date/repo-name/repo-path has no value it should print zero instead I tried to add below in awk script but it is not working, I dont know how to get that }/^@repo-name/{ if(++count2==1){ header=header OFS $1"," } if(-z "${repo-name}") ; then repoNameArr=0 repoNameArr[count]=$NF next} I am not sure how to use if in awk script, can you please help me here final output should be printing zero if there is no value against size/date/repo-name/repo-path size date repo-name repo-path190000 1603278566981 testupload /home/test/testupload140000 1603278566981 testupload2 /home/test/testupload2170000 1603278566981 testupload3 /home/test/testupload3170000 1603278566981 0 /home/test/testupload4170000 1603278566981 0 /home/test/testupload5170000 1603278566981 testupload6 /home/test/testupload6 please guide | When AAA\n is printed at the very bottom of the terminal , the terminal needs to scroll the text and make an empty line appear at the bottom. It displays the line using the current background color, which is red. Then BBB\n is printed over this background, using its own background color. The new background color affects only few characters in the current line ( BBB ), but it is relevant when the next empty line appears. In effect the next line (where CCC is going to appear) looks normal. When AAA\n is printed not at the bottom, the terminal does not need to add a line, empty space is already there. It so happens the empty space is black. To reproduce, run your code several times until you get to the bottom of the terminal and "beyond". The following two commands, when repeated (each one in its own terminal), give output that looks identical, until the bottom is reached: printf "\e[48;5;203mAAA\n\e[0m" printf "\e[48;5;203mAAA\e[0m\n" In the second case the background gets reset before \n . My testbed: Konsole 21.12.3, TERM=xterm-256color . | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/717118",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/434257/"
]
} |
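On the question actually asked here — printing 0 when a field was never captured — awk's conditional expression is enough; a sketch of how the final printf in the END block of the script could default empty array entries:

printf("%s,%s,%s,%s\n",
       (sizeArr[i]=="" ? 0 : sizeArr[i]),
       (repoNameArr[i]=="" ? 0 : repoNameArr[i]),
       (dateArr[i]=="" ? 0 : dateArr[i]),
       (repopathArr[i]=="" ? 0 : repopathArr[i]))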
717,339 | I have several hundred .xhtml files in a sub-directory(*) and I want to delete all DIVs with a specific class (and the entire contents of those DIVs - including other divs, spans, image and paragraph elements) from them. The DIV may appear zero, one, or more times at any arbitrary depth within each .xhtml file. The specific DIVs I want to delete are: <div class="portlet solid author-note-portlet">.....</div> Using the xml_grep utility from the perl XML::Twig module, I can run xml_grep -v 'div[@class="portlet solid author-note-portlet"]' file*.xhtml and it will remove all instances of that div from the .xhtml files and display the result on stdout. Exactly what I want, except for "display on stdout". If xml_grep had some kind of in-place edit option, that would be fine, I'd just use that....but it doesn't, so I'd have to write a wrapper script that used a temporary file or sponge and run xml_grep against each .xhtml file individually, which would be slow and tedious. Or I could hack a copy of xml_grep so that it could edit its input file(s). But I don't want to do either of these things, I want to use the existing tool which can already do this, I want to use xmlstarlet - it'll be faster, has in-place edit, and I won't have to run it once per filename. The trouble is that no matter what I try (and I have tried dozens of variations), I cannot figure out the correct xpath specification to delete a div with this class. e.g. I have tried: xmlstarlet ed -d "div[@class='portlet solid author-note-portlet']" file.xhtml and (with different quoting) xmlstarlet ed -d 'div[@class="portlet solid author-note-portlet"]' file.xhtml and xmlstarlet ed -d '//html/body/div/div/div[@class="portlet solid author-note-portlet"]' and dozens of other variations. None of them have resulted in any change to the xhtml output. This is the point at which I usually give up on xmlstarlet and write a perl script, but this time I'm determined to do it with xmlstarlet. So, what's the correct way to specify this div class for xmlstarlet? BTW, for one example .xhtml file (with two instances of this div, which happen to be at the same depth...which is fairly typical but not universal), xmlstarlet el -v says: $ xmlstarlet el -v OEBPS/file0007.xhtml | grep author-note-portlethtml/body/div/div[@class='portlet solid author-note-portlet']html/body/div/div[@class='portlet solid author-note-portlet'] (*) Not that it matters, but these .xhtml files are inside a .epub file(**) generated by the FanFicFare plugin for Calibre - which downloads all chapters from books on various fiction web sites and turns them into an epub file (which is basically a zip archive containing XHTML and CSS files and maybe jpeg or gif files, along with a bunch of metadata files). <div class="portlet solid author-note-portlet"> is used by one site (Royal Road) for authors to include a note with a chapter. Some authors use it sparingly, and insert short notes about either the chapter or the book or brief announcements about random stuff, with maybe a link to their patreon page...fine, no big deal. Others use it to add a half page note with links to 10 of their other books at the start of each chapter and again to add three and half pages of links (with cover images) to those books at the end of each chapter. Which is kind of OK-ish if you're reading it in serial form chapter-by-chapter on the web site, but not if you're reading it as a book - ~4 pages of self-promotion for every 6-10 or so pages of story is excessive and distracting. 
And, BTW, that's 4 "pages" on my 10 inch android tablet - it's more than double that on my phone. I can easily add display: none to the epub's style sheet for this class, but I want to actually delete the divs from the .xhtml files. They noticeably inflate the .epub file size. (**) extracting the contents of the .epub with unzip and rebuilding it afterwards are way outside of the scope of this question, so please don't get distracted by irrelevant details. Already handled. Sample .xhtml file, edited down to the bare minimum (and story/chapter/author name anonymised to protect the "guilty :-): <?xml version="1.0" encoding="utf-8"?><!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.1//EN" "http://www.w3.org/TR/xhtml11/DTD/xhtml11.dtd"><html xmlns="http://www.w3.org/1999/xhtml"><head><title>Chapter Five - Chapter Name</title><link href="stylesheet.css" type="text/css" rel="stylesheet"/><meta name="chapterurl" content="https://www.royalroad.com/fiction/URL"/><meta name="chapterorigtitle" content="Chapter Five - Chapter Name"/><meta name="chaptertoctitle" content="Chapter Five - Chapter Name"/><meta name="chaptertitle" content="Chapter Five - Chapter Name"/></head><body class="fff_chapter"><h3 class="fff_chapter_title">Chapter Five - Chapter Name</h3><div class="chapter-inner chapter-content"><div class="portlet solid author-note-portlet"> <div class="portlet-title"> <div class="caption"> <i class="fa fa-sticky-note"></i> <span class="caption-subject bold uppercase">A note from Author Name</span> </div> </div> <div class="portlet-body author-note"><p><span>About a dozen or so p, span, img, and br tags here</span></p></div> </div><p> story text here. a few hundreds p, br, etc tags</p> <div class="portlet solid author-note-portlet"> <div class="portlet-title"> <div class="caption"> <i class="fa fa-sticky-note"></i> <span class="caption-subject bold uppercase">A note from Author Name</span> </div> </div> <div class="portlet-body author-note"><p>several dozen more p, span, br, img, etc tags here</p></div> </div></div></body></html> | The correct way to do it with xmlstarlet is xmlstarlet ed --inplace -N xmlns="http://www.w3.org/1999/xhtml" \ --delete '//xmlns:div[@class="portlet solid author-note-portlet"]' file or, using short options, xmlstarlet ed -L -N xmlns="http://www.w3.org/1999/xhtml" \ -d '//xmlns:div[@class="portlet solid author-note-portlet"]' file Since the document uses a default namespace, we need to let xmlstarlet know that all nodes belong to this namespace and then also prefix the node's name with the namespace placeholder in the XPath expression. According to the documentation, -N must be the last "global option", i.e. it must come after -L (another global option). The -d is the "delete operation" to xmlstarlet ed , so it's not one of the global options. The XPath //xmlns:div will look recursively for a node called div in the xmlns namespace. In the question, apart from not handling the namespace, you either under-specified or over-specified this. Using div , which is the same as /div , would be matching a root node, and //html/body/div/div/div would be matching an immediate child node of html/body/div/div , anywhere. The yq wrapper (by Andrey Kislyuk) around the JSON processor jq comes with an XML parser wrapper called xq . You can use that too: xq -x 'del(.. | .div? | select(."@class"? == "portlet solid author-note-portlet"))' file The -x ( --xml-output ) option gives you XML output rather than JSON output. Using xq with -i ( --in-place ) will make it do in-place editing. 
This XML parser doesn't care about namespaces. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/717339",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/7696/"
]
} |
717,341 | So I have several files which contain CIDR entries (such as 1.1.1.0/24 ). The task is to add entries from these files to one NFTables set using a bash script. In doing so, I am limited to OpenWRT utilities. The catch is that there can be many entries in these files and they can exhaust the limit of 4096 characters per command . And also these files are automatically updated by cron , so a set needs to be periodically erased and re-filled as well. It seems to me that there is an easier way to do this than I have already done it. I also want to reduce the execution time of this mess. Here is my attempt to do this. nft add element $TARGET_SET { $(awk '{print $1 ", "}' "$CUSTOM_CIDRS_FILE") } Here's another question, if my file has a very large number of entries, will I overcome this limit of 4096 characters per command? And one last question, will it take a very long time to form a set if I add entries one at a time in a loop? I'm waiting primarily for answers with good practice. | The correct way to do it with xmlstarlet is xmlstarlet ed --inplace -N xmlns="http://www.w3.org/1999/xhtml" \ --delete '//xmlns:div[@class="portlet solid author-note-portlet"]' file or, using short options, xmlstarlet ed -L -N xmlns="http://www.w3.org/1999/xhtml" \ -d '//xmlns:div[@class="portlet solid author-note-portlet"]' file Since the document uses a default namespace, we need to let xmlstarlet know that all nodes belong to this namespace and then also prefix the node's name with the namespace placeholder in the XPath expression. According to the documentation, -N must be the last "global option", i.e. it must come after -L (another global option). The -d is the "delete operation" to xmlstarlet ed , so it's not one of the global options. The XPath //xmlns:div will look recursively for a node called div in the xmlns namespace. In the question, apart from not handling the namespace, you either under-specified or over-specified this. Using div , which is the same as /div , would be matching a root node, and //html/body/div/div/div would be matching an immediate child node of html/body/div/div , anywhere. The yq wrapper (by Andrey Kislyuk) around the JSON processor jq comes with an XML parser wrapper called xq . You can use that too: xq -x 'del(.. | .div? | select(."@class"? == "portlet solid author-note-portlet"))' file The -x ( --xml-output ) option gives you XML output rather than JSON output. Using xq with -i ( --in-place ) will make it do in-place editing. This XML parser doesn't care about namespaces. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/717341",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/541464/"
]
} |
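For this question, the usual way around the argv length limit is to generate an nft script (flush plus add element lines) and feed it to nft -f, which reads commands from a file and applies them in one go. A sketch, keeping the question's own variables ($TARGET_SET is assumed to hold "family table setname"):

{
  printf 'flush set %s\n' "$TARGET_SET"
  awk -v set="$TARGET_SET" '{ printf "add element %s { %s }\n", set, $1 }' "$CUSTOM_CIDRS_FILE"
} > /tmp/cidr-set.nft
nft -f /tmp/cidr-set.nft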
717,347 | I am using sed to replace some text in shell script, this is the original file looks like: server { listen 80; server_name localhost; location /api/test { proxy_connect_timeout 3000; proxy_send_timeout 3000; proxy_read_timeout 3000; send_timeout 3000; client_max_body_size 10M; client_body_buffer_size 100M; proxy_pass http://www.1.com; } location /api/demo { proxy_connect_timeout 3000; proxy_send_timeout 3000; proxy_read_timeout 3000; send_timeout 3000; client_max_body_size 10M; client_body_buffer_size 100M; proxy_pass http://www.2.com; }} and this is my shell script: #!/usr/bin/env bashset -uset -e# set -xecho "please input url1:"read URL1echo "plase input url2:"read URL2URL1_ESCAPED_REPLACE=$(printf '%s\n' "$URL1" | sed -e 's/[\/&]/\\&/g')URL2_ESCAPED_REPLACE=$(printf '%s\n' "$URL2" | sed -e 's/[\/&]/\\&/g')timestamp=$(date +%s)sed -Ei."$timestamp".bak -e "/\/api\/test/,/proxy\_pass/ s/.*proxy\_pass.*/proxy\_pass $URL1_ESCAPED_REPLACE;/" \-e "/\/api\/demo/,/proxy\_pass/ s/.*proxy\_pass.*/proxy\_pass $URL2_ESCAPED_REPLACE;/" nginx.conf this script works fine but I am facing a problem that, the replace action make the space indent disappear. is it possible just do the replace text without delete the whitle space? I want to keep the original text format. | The correct way to do it with xmlstarlet is xmlstarlet ed --inplace -N xmlns="http://www.w3.org/1999/xhtml" \ --delete '//xmlns:div[@class="portlet solid author-note-portlet"]' file or, using short options, xmlstarlet ed -L -N xmlns="http://www.w3.org/1999/xhtml" \ -d '//xmlns:div[@class="portlet solid author-note-portlet"]' file Since the document uses a default namespace, we need to let xmlstarlet know that all nodes belong to this namespace and then also prefix the node's name with the namespace placeholder in the XPath expression. According to the documentation, -N must be the last "global option", i.e. it must come after -L (another global option). The -d is the "delete operation" to xmlstarlet ed , so it's not one of the global options. The XPath //xmlns:div will look recursively for a node called div in the xmlns namespace. In the question, apart from not handling the namespace, you either under-specified or over-specified this. Using div , which is the same as /div , would be matching a root node, and //html/body/div/div/div would be matching an immediate child node of html/body/div/div , anywhere. The yq wrapper (by Andrey Kislyuk) around the JSON processor jq comes with an XML parser wrapper called xq . You can use that too: xq -x 'del(.. | .div? | select(."@class"? == "portlet solid author-note-portlet"))' file The -x ( --xml-output ) option gives you XML output rather than JSON output. Using xq with -i ( --in-place ) will make it do in-place editing. This XML parser doesn't care about namespaces. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/717347",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/171959/"
]
} |
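As for the indentation problem in this question: .*proxy_pass.* swallows the leading whitespace, so the replacement starts at column one. Capturing the whitespace and putting it back preserves the original formatting — a sketch against the same script (the underscore needs no escaping):

sed -Ei."$timestamp".bak \
  -e "/\/api\/test/,/proxy_pass/ s/^([[:space:]]*)proxy_pass.*/\1proxy_pass $URL1_ESCAPED_REPLACE;/" \
  -e "/\/api\/demo/,/proxy_pass/ s/^([[:space:]]*)proxy_pass.*/\1proxy_pass $URL2_ESCAPED_REPLACE;/" nginx.conf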
717,457 | Similar to Why doesn't echo called as /bin/sh -c echo foo output anything? but using ssh . Why doesn't this output anything: ssh localhost sh -c 'echo "0=$0 1=$1"' arg1 arg2 arg3 But if I change it to ssh localhost sh -c 'true; echo "0=$0 1=$1"' arg1 arg2 arg3# Output is0=bash 1= arg1 arg2 arg3 I see behavior that implies the echo command is being run but the way the variable substitution is working is not as expected. See https://unix.stackexchange.com/a/253424/119816 Running without ssh works as expected Same as above but removing the ssh command sh -c 'echo "0=$0 1=$1"' arg1 arg2 arg3# Output is0=arg1 1=arg2# Adding true gives same outputsh -c 'true; echo "0=$0 1=$1"' arg1 arg2 arg30=arg1 1=arg2 I'm trying copy a root accessible file from one system to another I'm trying to get an ssh command working to copy a root file to another system. The command I'm trying is: file=/etc/hostsset -xssh host1 sudo cat $file | ssh host2 sudo sh -c 'exec cat "$0"' $file# Output is+ ssh host2 sudo sh -c 'exec cat > "$0"' /etc/hosts+ ssh host1 sudo cat /etc/hostsbash: : No such file or directory This looks OK to me, and I'm not sure how else to troubleshoot. My solution My solution is to fall back to what I've used before ssh host1 sudo cat $file | ssh host2 sudo tee $file > /dev/null The above works. Searching for a solution I've had this problem and asked the question: Does Unix have a command to read from stdin and write to a file (like tee without sending output to stdout)? Other's have had problems/questions with sh -c command: Why doesn't echo called as /bin/sh -c echo foo output anything? But there must be a subtlety that occurs when using ssh which results in an additional shell evaluation. | It's the same issue as in your referenced question. The local shell removes a layer of quotes. Here's what's happening Initial code ssh localhost sh -c 'echo "0=$0 1=$1"' arg1 arg2 arg3 After local shell expansion but before command execution ssh localhost sh -c echo "0=$0 1=$1" arg1 arg2 arg3# ^^^^^^^^^^^^^^^^ a single token# ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ passed to the remote shell In the remote shell after shell expansion but before execution sh -c echo 0= 1= arg1 arg2 arg3# ^^^^^ a single token# ^^^^ argument for -c The echo gets bound to the sh -c as the command to execute and the remainder are arguments. You should get a blank line. Personally, I learned that ssh passed its args directly to the remote shell, and that's where it was executed. As a result I tend to write ssh commands like this, which reminds me that the some command line… part will be passed verbatim to a remote shell for execution as if I'd typed it directly : ssh remoteHost 'some command line…' (Wrap the line in single quotes to avoid local evaluation, or use double quotes to interpolate local variables.) You then say that actually you're trying to get an ssh command working to copy a root file to another system file=/etc/hostsssh -n host1 "scp -p '$file' host2:'/path/to/destination'" Or, since the file name is safe, and assuming you have root equivalence between the local client and host1 , and from host1 to host2 : ssh -n root@host1 scp -p /etc/hosts host2:/etc/hosts With a modern scp that in turn can be simplified further: scp -OR -p root@host1:/etc/hosts root@host2:/etc/ | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/717457",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/119816/"
]
} |
717,460 | Can somebody help me with a query that I'm trying to figure out? I am currently using 2 files. One file has the contents in a list with 2 columns with values below. File_A.txt: 1 MSFT2 YHOO3 GOOG4 APPL5 SUN FILE_B.txt: ### Client A ###123### Client B ###2345 ++++ What I'm trying to achieve using any substitution method, for every occurrence of 1, change the value in File B to MFST. For every occurrence of 2 in File B, change that value to YHOO, 3 to GOOG, etc. The only way I can do this currently is a manual and long process with the use of interactive sed. Is there any looping syntax I could use? sed -i 's/\1\>/MSFT/g' FILE_B.txt Thanks Very Much | Like that? awk 'NR==FNR{a[$1]=$2} NR!=FNR{if($1 in a){$1=a[$1]};print}' File_A.txt FILE_B.txt### Client A ###MSFTYHOOGOOG### Client B ###YHOOGOOGAPPLSUN If that works for you redirect the output to a tmp-file and then mv the tmp file over FILE_B.txt ... if sponge is installed you could just awk 'NR==FNR{a[$1]=$2} NR!=FNR{if($1 in a){$1=a[$1]};print}' File_A.txt FILE_B.txt | sponge FILE_B.txt | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/717460",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/444844/"
]
} |
717,671 | I'm reading up on exec and how to use the -a flag. The SS64 docs seem accurate; they say the following: exec: Execute a command Syntax: exec [-cl] [-a name] [command [arguments]] Options: -c Causes command to be executed with an empty environment.-l Place a dash at the beginning of the zeroth arg passed to command. (This is what the login program does.)-a The shell passes name as the zeroth argument to command. To test this out, I wrote two scripts, one named foo/baz and one named foo/buzz : #!/usr/bin/env bash# foo/bazexec -a blah ./foo/bar 1 2 #!/usr/bin/env bash# foo/buzzexec ./foo/bar 1 2 Each of these scripts runs the same child script, foo/bar , which does the following: #!/usr/bin/env bashecho "Hello world"echo "0: $0"echo "1: $1"echo "2: $2" My goal is to see what effect -a has on the 0th and subsequent arguments. If -a causes the 0th argument to change to the argument you pass to the -a flag, then I would expect the 0th argument when I run foo/baz to be blah , since that's what I pass to -a . However, when I run the scripts, the output is the same in both cases: ~/Workspace/OpenSource (master) $ ./foo/baz Hello world0: /Users/richiethomas/Workspace/OpenSource/foo/bar1: 12: 2~/Workspace/OpenSource (master) $ ./foo/buzzHello world0: /Users/richiethomas/Workspace/OpenSource/foo/bar1: 12: 2 Am I doing something wrong? Or is my expectation incorrect somehow? Also, a related question- what is the use case of overriding the 0th argument, as opposed to just accessing any passed-in args via $1, $2, $3, etc.? | what is the use case of overriding the 0th argument...? Some programs change their behavior based on how they are called. For example, busybox is a multi-call binary that behaves this way. Using exec -a we get different behavior depending on the value of the zeroth argument: $ bash -c 'exec -a date /usr/sbin/busybox'Sat Sep 17 20:22:14 EDT 2022$ bash -c 'exec -a uptime /usr/sbin/busybox' 20:22:17 up 23:48, load average: 0.10, 0.20, 0.15 This also demonstrates that exec -a <something> behaves as documented. Am I doing something wrong? Or is my expectation incorrect somehow? The problem here is that you're working with shell scripts. When you enter ./foo/baz on the command line, you're not actually running a command named ./foo/baz : you're running something like /bin/bash /path/to/foo/baz . While exec -a effects the zeroth argument passed to the shell...the shell doesn't care, and it uses its own logic when setting up the variables visible to the shell script, including $0 (which contains the script name), and the positional parameters $1 , $2 ... (which contain the arguments to the script). (This isn't specific to shell scripts -- the same would hold true of pretty any interpreted code.) | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/717671",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/142469/"
]
} |
717,771 | Why does += work as a concatenation in script? while read tdo t+=2echo $t I get a 2 added to the end... Why? | In most Bourne-like shells (ksh, bash,zsh), the += operator works over three different variable types. For a normal variable, a string variable, this happens: $ a=hello$ a+=3$ echo "$a"hello3 If the variable is defined as an integer type, or the operation is carried out in an arithmetic environment, the operator has the usual meaning that it also has in the c language: $ typeset -i a$ a=31$ a+=3$ echo "$a"34 or inside an arithmetic environment: $ unset a$ a=31$ let a+=3 # an odd example to make you think!!. # better use ((a+=3)) # or, in a POSIX sh: [ $((a+=3)) -eq 0 ]$ echo "$a"34 And, the += is also used to add elements to an array (where the shell does have arrays). $ unset a$ a=()$ a+=(one)$ a+=(111)$ printf '<%s> ' "${a[@]}"; echo<one> <111> So, the answer to your initial question: Why does += work as a concatenation in the script? Is because t was a normal string variable (used outside an arithmetic environment). | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/717771",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/541911/"
]
} |
717,786 | I need to add items to a list of items. I am not sure if these are the correct terms. Example: Suppose I am inside a directory with 2 files, "a" and "b". If I pipe ls to less my list will have two items, "a" and "b": ls | less I want to pipe ls to less, but before reach the less command I want an item added to the list. e.g: ls -->unknow shell feature<-- less And the content listed by less would be "a", "b" and "c" (file "c" doesn't exist) Is it possible? | The simplest would be (ls;echo c) | less | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/717786",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/273356/"
]
} |
717,827 | I have a text file of format something like path1/path2/path3a 34474538656path1/path2/path3a 8115147679path1/path2/path3b 2266371027path1/path2/path3b 3860823 path1/path2/path3b 554247 And this pattern continues. I am looking to remove only the column 1 duplicate entry and print it as path1/path2/path3a 34474538656 8115147679path1/path2/path3b 2266371027 3860823 554247 Is this possible? The columns are delimited by a single space All paths have same length Globally aligned would be preferred to make it easier to read. | Here's one way: $ awk '{ print seen[$1]++ ? " "$2 : $0}' filepath1/path2/path3a 34474538656 8115147679path1/path2/path3b 2266371027 3860823 554247 Which could also be written as: $ awk -v spacer=' ' '{ print seen[$1]++ ? spacer$2 : $0}' filepath1/path2/path3a 34474538656 8115147679path1/path2/path3b 2266371027 3860823 554247 Or $ awk -v spacer=' ' '{ if(seen[$1]++){print spacer$2}else{print}}' filepath1/path2/path3a 34474538656 8115147679path1/path2/path3b 2266371027 3860823 554247 Or, in perl , calculating the length of the spacer on the fly: $ perl -lane '$spacer=$seen{$F[0]}++ ? " " x length($F[0]) : $F[0]; print "$spacer $F[1]"' filepath1/path2/path3a 34474538656 8115147679path1/path2/path3b 2266371027 3860823 554247 | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/717827",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/222047/"
]
} |
717,836 | Suppose I have a folder with lots of subfolders and they are only supposed to contain archived files and I want to check that there are no other file types. I can use the command ls -l -R /backup --ignore="*.zip" --ignore="*.7z" This will display this information I need, but also include every folder name e.g. /backup/2000/Jan:total 0/backup/2000/Feb:total 0 etc Is there any way of excluding details of folders which only contain the ignored files? | Here's one way: $ awk '{ print seen[$1]++ ? " "$2 : $0}' filepath1/path2/path3a 34474538656 8115147679path1/path2/path3b 2266371027 3860823 554247 Which could also be written as: $ awk -v spacer=' ' '{ print seen[$1]++ ? spacer$2 : $0}' filepath1/path2/path3a 34474538656 8115147679path1/path2/path3b 2266371027 3860823 554247 Or $ awk -v spacer=' ' '{ if(seen[$1]++){print spacer$2}else{print}}' filepath1/path2/path3a 34474538656 8115147679path1/path2/path3b 2266371027 3860823 554247 Or, in perl , calculating the length of the spacer on the fly: $ perl -lane '$spacer=$seen{$F[0]}++ ? " " x length($F[0]) : $F[0]; print "$spacer $F[1]"' filepath1/path2/path3a 34474538656 8115147679path1/path2/path3b 2266371027 3860823 554247 | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/717836",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/542000/"
]
} |
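On the question itself, find avoids the per-directory headers entirely, since it prints only the matching files and is silent for directories that contain nothing but the ignored archives (path and extensions as in the question):

find /backup -type f ! -name '*.zip' ! -name '*.7z' -ls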
717,838 | I have an Ubuntu 20.04.4 server with 32GB RAM.The server is running a bunch of LXD containers and two VMs (libvirt+qemu+kvm). After startup, with all services running, the RAM utilization is about ~12GB. After 3-4 weeks the RAM utilization reaches ~90%. If I stop all containers and VMs the utilization is still ~20GB. However, I cannot figure out who is claiming this memory. I have already tried clearing the cache, but that doesn't change much. I compiled the kernel with support for kmemleak but it did not detect anything useful but shows up in slabtop. systemd-cgtop: / 593 - 23.7G - -machine.slice - - 1.4G - -system.slice 116 - 301.1M - -user.slice 11 - 141.9M - -user.slice/user-1000.slice 11 - 121.6M - -system.slice/systemd-journald.service 1 - 83.8M - -user.slice/user-1000.slice/session-297429.scope 5 - 81.0M - -system.slice/libvirtd.service 22 - 46.2M - -user.slice/user-1000.slice/[email protected] 6 - 39.8M - -system.slice/snapd.service 36 - 19.8M - -system.slice/cron.service 1 - 19.3M - -init.scope 1 - 14.0M - -system.slice/systemd-udevd.service 1 - 13.2M - -system.slice/multipathd.service 7 - 10.8M - -system.slice/NetworkManager.service 3 - 5.8M - -system.slice/networkd-dispatcher.service 1 - 5.4M - -system.slice/ssh.service 1 - 5.0M - -system.slice/ModemManager.service 3 - 4.5M - -system.slice/systemd-networkd.service 1 - 3.5M - -system.slice/accounts-daemon.service 3 - 3.5M - -system.slice/udisks2.service 5 - 3.4M - -system.slice/polkit.service 3 - 3.0M - -system.slice/rsyslog.service 4 - 2.8M - -system.slice/systemd-resolved.service 1 - 2.4M - -system.slice/unattended-upgrades.service 2 - 1.8M - -system.slice/dbus.service 1 - 1.8M - -system.slice/systemd-logind.service 1 - 1.7M - -system.slice/smartmontools.service 1 - 1.5M - -system.slice/systemd-machined.service 1 - 1.5M - -system.slice/systemd-timesyncd.service 2 - 1.4M - -system.slice/virtlogd.service 1 - 1.3M - -system.slice/rtkit-daemon.service 3 - 1.2M - - /proc/meminfo: MemTotal: 32718604 kBMemFree: 11480728 kBMemAvailable: 11612788 kBBuffers: 28 kBCached: 144512 kBSwapCached: 855404 kBActive: 520504 kBInactive: 541588 kBActive(anon): 441708 kBInactive(anon): 484240 kBActive(file): 78796 kBInactive(file): 57348 kBUnevictable: 18664 kBMlocked: 18664 kBSwapTotal: 33043136 kBSwapFree: 32031680 kBDirty: 0 kBWriteback: 0 kBAnonPages: 94680 kBMapped: 126592 kBShmem: 660 kBKReclaimable: 432484 kBSlab: 10784740 kBSReclaimable: 432484 kBSUnreclaim: 10352256 kBKernelStack: 10512 kBPageTables: 5052 kBNFS_Unstable: 0 kBBounce: 0 kBWritebackTmp: 0 kBCommitLimit: 49402436 kBCommitted_AS: 1816364 kBVmallocTotal: 34359738367 kBVmallocUsed: 152512 kBVmallocChunk: 0 kBPercpu: 8868864 kBHardwareCorrupted: 0 kBAnonHugePages: 0 kBShmemHugePages: 0 kBShmemPmdMapped: 0 kBFileHugePages: 0 kBFilePmdMapped: 0 kBHugePages_Total: 0HugePages_Free: 0HugePages_Rsvd: 0HugePages_Surp: 0Hugepagesize: 2048 kBHugetlb: 0 kBDirectMap4k: 19383100 kBDirectMap2M: 14053376 kBDirectMap1G: 0 kB slabtop: Active / Total Objects (% used) : 30513607 / 33423869 (91.3%) Active / Total Slabs (% used) : 1384092 / 1384092 (100.0%) Active / Total Caches (% used) : 123 / 203 (60.6%) Active / Total Size (% used) : 9965969.20K / 10757454.91K (92.6%) Minimum / Average / Maximum Object : 0.01K / 0.32K / 16.00K OBJS ACTIVE USE OBJ SIZE SLABS OBJ/SLAB CACHE SIZE NAME 27156909 26001970 95% 0.30K 1194104 26 9552832K kmemleak_object 754624 742232 98% 0.06K 11791 64 47164K kmalloc-64 654675 278378 42% 0.57K 23382 28 374112K radix_tree_node 593436 348958 58% 0.08K 11636 51 
46544K Acpi-State 559744 418325 74% 0.03K 4373 128 17492K kmalloc-32 496320 483104 97% 0.12K 15510 32 62040K kernfs_node_cache 487104 155952 32% 0.06K 7611 64 30444K vmap_area 394240 165965 42% 0.14K 14080 28 56320K btrfs_extent_map 355580 342674 96% 0.09K 7730 46 30920K trace_event_file 339573 338310 99% 4.00K 42465 8 1358880K kmalloc-4k 306348 154794 50% 0.19K 7410 42 59280K dentry 145931 104400 71% 1.13K 11552 28 369664K btrfs_inode 137728 137174 99% 0.02K 538 256 2152K kmalloc-16 112672 74034 65% 0.50K 3671 32 58736K kmalloc-512 102479 62366 60% 0.30K 4093 26 32744K btrfs_delayed_node 68880 66890 97% 2.00K 4305 16 137760K kmalloc-2k 66656 48345 72% 0.25K 2083 32 16664K kmalloc-256 64110 47818 74% 0.59K 2376 27 38016K inode_cache 50176 50176 100% 0.01K 98 512 392K kmalloc-8 44710 43744 97% 0.02K 263 170 1052K lsm_file_cache 43056 11444 26% 0.25K 1418 32 11344K pool_workqueue 36480 29052 79% 0.06K 570 64 2280K kmalloc-rcl-64 33920 25846 76% 0.06K 530 64 2120K anon_vma_chain 24822 14264 57% 0.19K 832 42 6656K kmalloc-192 23552 23552 100% 0.03K 184 128 736K fsnotify_mark_connector 23517 17994 76% 0.20K 603 39 4824K vm_area_struct 19572 14909 76% 0.09K 466 42 1864K kmalloc-rcl-96 18262 15960 87% 0.09K 397 46 1588K anon_vma 14548 12905 88% 1.00K 459 32 14688K kmalloc-1k 14162 14162 100% 0.05K 194 73 776K file_lock_ctx 13104 12141 92% 0.09K 312 42 1248K kmalloc-96 13062 13062 100% 0.19K 311 42 2488K cred_jar 13056 10983 84% 0.12K 408 32 1632K kmalloc-128 12192 8922 73% 0.66K 508 24 8128K proc_inode_cache 11730 11444 97% 0.69K 1444 46 46208K squashfs_inode_cache 11067 11067 100% 0.08K 217 51 868K task_delay_info 10752 10752 100% 0.03K 84 128 336K kmemleak_scan_area 10656 8666 81% 0.25K 333 32 2664K filp 10252 10252 100% 0.18K 235 44 1880K kvm_mmu_page_header 10200 10200 100% 0.05K 120 85 480K ftrace_event_field 10176 10176 100% 0.12K 318 32 1272K pid 9906 9906 100% 0.10K 254 39 1016K Acpi-ParseExt 9600 9213 95% 0.12K 300 32 1200K kmalloc-rcl-128 9520 9520 100% 0.07K 170 56 680K Acpi-Operand 8502 8063 94% 0.81K 218 39 6976K sock_inode_cache 7733 7733 100% 0.70K 169 46 5408K shmem_inode_cache 7392 7231 97% 0.19K 176 42 1408K skbuff_ext_cache 6552 6552 100% 0.19K 163 42 1304K kmalloc-rcl-192 6480 6480 100% 0.11K 180 36 720K khugepaged_mm_slot 6144 6144 100% 0.02K 24 256 96K ep_head 5439 5439 100% 0.42K 147 37 2352K btrfs_ordered_extent 5248 4981 94% 0.25K 164 32 1312K skbuff_head_cache 4792 4117 85% 4.00K 606 8 19392K biovec-max 4326 4326 100% 0.19K 103 42 824K proc_dir_entry 4125 4125 100% 0.24K 125 33 1000K tw_sock_TCPv6 3978 3978 100% 0.10K 102 39 408K buffer_head 3975 3769 94% 0.31K 159 25 1272K mnt_cache 3328 3200 96% 1.00K 104 32 3328K RAW 3136 3136 100% 1.12K 112 28 3584K signal_cache 3072 2560 83% 0.03K 24 128 96K dnotify_struct 2910 2820 96% 1.06K 97 30 3104K UNIX 2522 2396 95% 1.19K 97 26 3104K RAWv6 2448 2448 100% 0.04K 24 102 96K pde_opener 2400 2400 100% 0.50K 75 32 1200K skbuff_fclone_cache 2112 2080 98% 1.00K 66 32 2112K biovec-64 1695 1587 93% 2.06K 113 15 3616K sighand_cache 1518 1518 100% 0.69K 33 46 1056K files_cache 1500 1500 100% 0.31K 60 25 480K nf_conntrack 1260 894 70% 6.06K 252 5 8064K task_struct 1260 1260 100% 1.06K 42 30 1344K mm_struct 1222 1158 94% 2.38K 94 13 3008K TCPv6 1150 1150 100% 0.34K 25 46 400K taskstats 924 924 100% 0.56K 33 28 528K task_group 888 888 100% 0.21K 24 37 192K file_lock_cache 864 864 100% 0.11K 24 36 96K btrfs_trans_handle 855 855 100% 2.19K 62 14 1984K TCP 851 851 100% 0.42K 23 37 368K uts_namespace 816 816 100% 0.12K 24 34 96K seq_file 816 
816 100% 0.04K 8 102 32K ext4_extent_status 792 792 100% 0.24K 24 33 192K tw_sock_TCP 782 782 100% 0.94K 23 34 736K mqueue_inode_cache 720 720 100% 0.13K 24 30 96K pid_namespace 704 704 100% 0.06K 11 64 44K kmem_cache_node 648 648 100% 1.16K 24 27 768K perf_event 640 640 100% 0.12K 20 32 80K scsi_sense_cache 624 624 100% 0.30K 24 26 192K request_sock_TCP 624 624 100% 0.15K 24 26 96K fuse_request 596 566 94% 8.00K 149 4 4768K kmalloc-8k 576 576 100% 1.31K 24 24 768K UDPv6 494 494 100% 0.30K 19 26 152K request_sock_TCPv6 480 480 100% 0.53K 16 30 256K user_namespace 432 432 100% 1.15K 16 27 512K ext4_inode_cache 416 416 100% 0.25K 13 32 104K kmem_cache 416 416 100% 0.61K 16 26 256K hugetlbfs_inode_cache 390 390 100% 0.81K 10 39 320K fuse_inode 306 306 100% 0.04K 3 102 12K bio_crypt_ctx 292 292 100% 0.05K 4 73 16K mbcache 260 260 100% 1.56K 13 20 416K bdev_cache 256 256 100% 0.02K 1 256 4K jbd2_revoke_table_s 232 232 100% 4.00K 29 8 928K names_cache 192 192 100% 1.98K 12 16 384K request_queue 170 170 100% 0.02K 1 170 4K mod_hash_entries 168 168 100% 4.12K 24 7 768K net_namespace 155 155 100% 0.26K 5 31 40K numa_policy 132 132 100% 0.72K 3 44 96K fat_inode_cache 128 128 100% 0.25K 4 32 32K dquot 128 128 100% 0.06K 2 64 8K ext4_io_end 108 108 100% 2.61K 9 12 288K x86_emulator 84 84 100% 0.19K 2 42 16K ext4_groupinfo_4k 68 68 100% 0.12K 2 34 8K jbd2_journal_head 68 68 100% 0.12K 2 34 8K abd_t 64 64 100% 8.00K 16 4 512K irq_remap_cache 64 64 100% 2.00K 4 16 128K biovec-128 63 63 100% 4.06K 9 7 288K x86_fpu 56 56 100% 0.07K 1 56 4K fsnotify_mark 56 56 100% 0.14K 2 28 8K ext4_allocation_context 42 42 100% 0.75K 1 42 32K dax_cache 40 40 100% 0.20K 1 40 8K ip4-frags 36 36 100% 7.86K 9 4 288K kvm_vcpu 30 30 100% 1.06K 1 30 32K dmaengine-unmap-128 24 24 100% 0.66K 1 24 16K ovl_inode 15 15 100% 2.06K 1 15 32K dmaengine-unmap-256 6 6 100% 16.00K 3 2 96K zio_buf_comb_16384 0 0 0% 0.01K 0 512 0K kmalloc-rcl-8 0 0 0% 0.02K 0 256 0K kmalloc-rcl-16 0 0 0% 0.03K 0 128 0K kmalloc-rcl-32 0 0 0% 0.25K 0 32 0K kmalloc-rcl-256 0 0 0% 0.50K 0 32 0K kmalloc-rcl-512 0 0 0% 1.00K 0 32 0K kmalloc-rcl-1k 0 0 0% 2.00K 0 16 0K kmalloc-rcl-2k 0 0 0% 4.00K 0 8 0K kmalloc-rcl-4k 0 0 0% 8.00K 0 4 0K kmalloc-rcl-8k 0 0 0% 0.09K 0 42 0K dma-kmalloc-96 0 0 0% 0.19K 0 42 0K dma-kmalloc-192 0 0 0% 0.01K 0 512 0K dma-kmalloc-8 0 0 0% 0.02K 0 256 0K dma-kmalloc-16 0 0 0% 0.03K 0 128 0K dma-kmalloc-32 0 0 0% 0.06K 0 64 0K dma-kmalloc-64 0 0 0% 0.12K 0 32 0K dma-kmalloc-128 0 0 0% 0.25K 0 32 0K dma-kmalloc-256 0 0 0% 0.50K 0 32 0K dma-kmalloc-512 0 0 0% 1.00K 0 32 0K dma-kmalloc-1k 0 0 0% 2.00K 0 16 0K dma-kmalloc-2k 0 0 0% 4.00K 0 8 0K dma-kmalloc-4k 0 0 0% 8.00K 0 4 0K dma-kmalloc-8k 0 0 0% 0.12K 0 34 0K iint_cache 0 0 0% 1.00K 0 32 0K PING 0 0 0% 0.75K 0 42 0K xfrm_state 0 0 0% 0.37K 0 43 0K request_sock_subflow 0 0 0% 1.81K 0 17 0K MPTCP 0 0 0% 0.62K 0 25 0K dio 0 0 0% 0.19K 0 42 0K userfaultfd_ctx_cache 0 0 0% 0.03K 0 128 0K ext4_pending_reservation 0 0 0% 0.08K 0 51 0K ext4_fc_dentry_update 0 0 0% 0.04K 0 102 0K fat_cache 0 0 0% 0.81K 0 39 0K ecryptfs_auth_tok_list_item 0 0 0% 0.02K 0 256 0K ecryptfs_file_cache 0 0 0% 0.94K 0 34 0K ecryptfs_inode_cache 0 0 0% 2.82K 0 11 0K dm_uevent 0 0 0% 3.23K 0 9 0K kcopyd_job 0 0 0% 1.19K 0 26 0K PINGv6 0 0 0% 0.18K 0 44 0K ip6-frags 0 0 0% 2.00K 0 16 0K MPTCPv6 0 0 0% 0.13K 0 30 0K fscrypt_info 0 0 0% 0.25K 0 32 0K fsverity_info 0 0 0% 1.25K 0 25 0K AF_VSOCK 0 0 0% 0.19K 0 42 0K kcf_sreq_cache 0 0 0% 0.50K 0 32 0K kcf_areq_cache 0 0 0% 0.19K 0 42 0K kcf_context_cache 0 0 0% 4.00K 0 8 0K 
zfs_btree_leaf_cache 0 0 0% 0.44K 0 36 0K ddt_entry_cache 0 0 0% 1.22K 0 26 0K zio_cache 0 0 0% 0.05K 0 85 0K zio_link_cache 0 0 0% 0.50K 0 32 0K zio_buf_comb_512 0 0 0% 1.00K 0 32 0K zio_buf_comb_1024 0 0 0% 1.50K 0 21 0K zio_buf_comb_1536 0 0 0% 2.00K 0 16 0K zio_buf_comb_2048 0 0 0% 2.50K 0 12 0K zio_buf_comb_2560 0 0 0% 3.00K 0 10 0K zio_buf_comb_3072 0 0 0% 3.50K 0 9 0K zio_buf_comb_3584 0 0 0% 4.00K 0 8 0K zio_buf_comb_4096 0 0 0% 8.00K 0 4 0K zio_buf_comb_5120 0 0 0% 8.00K 0 4 0K zio_buf_comb_6144 0 0 0% 8.00K 0 4 0K zio_buf_comb_7168 0 0 0% 8.00K 0 4 0K zio_buf_comb_8192 0 0 0% 12.00K 0 2 0K zio_buf_comb_10240 0 0 0% 12.00K 0 2 0K zio_buf_comb_12288 0 0 0% 16.00K 0 2 0K zio_buf_comb_14336 0 0 0% 16.00K 0 2 0K lz4_cache 0 0 0% 0.24K 0 33 0K sa_cache 0 0 0% 0.96K 0 33 0K dnode_t 0 0 0% 0.32K 0 24 0K arc_buf_hdr_t_full 0 0 0% 0.38K 0 41 0K arc_buf_hdr_t_full_crypt 0 0 0% 0.09K 0 42 0K arc_buf_hdr_t_l2only 0 0 0% 0.08K 0 51 0K arc_buf_t 0 0 0% 0.38K 0 42 0K dmu_buf_impl_t 0 0 0% 0.37K 0 43 0K zil_lwb_cache 0 0 0% 0.15K 0 26 0K zil_zcw_cache 0 0 0% 0.13K 0 30 0K sio_cache_0 0 0 0% 0.15K 0 26 0K sio_cache_1 0 0 0% 0.16K 0 24 0K sio_cache_2 0 0 0% 1.06K 0 30 0K zfs_znode_cache 0 0 0% 0.09K 0 46 0K zfs_znode_hold_cache | Here's one way: $ awk '{ print seen[$1]++ ? " "$2 : $0}' filepath1/path2/path3a 34474538656 8115147679path1/path2/path3b 2266371027 3860823 554247 Which could also be written as: $ awk -v spacer=' ' '{ print seen[$1]++ ? spacer$2 : $0}' filepath1/path2/path3a 34474538656 8115147679path1/path2/path3b 2266371027 3860823 554247 Or $ awk -v spacer=' ' '{ if(seen[$1]++){print spacer$2}else{print}}' filepath1/path2/path3a 34474538656 8115147679path1/path2/path3b 2266371027 3860823 554247 Or, in perl , calculating the length of the spacer on the fly: $ perl -lane '$spacer=$seen{$F[0]}++ ? " " x length($F[0]) : $F[0]; print "$spacer $F[1]"' filepath1/path2/path3a 34474538656 8115147679path1/path2/path3b 2266371027 3860823 554247 | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/717838",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/542004/"
]
} |
717,995 | I have a myfile.txt that contains several record types. The record type is at position 27, with length of 3 chars, like this: 12345678901234567890123456E20XXXXXXXXX12345678901234567890123456I47XXXXXXXXX12345678901234567890123456I49XXXXXXXXX12345678901234567890123456I50XXXXXXXXX12345678901234567890123456W55XXXXXXXXX12345678901234567890123456E20XXXXXXXXX12345678901234567890123456I47XXXXXXXXX12345678901234567890123456Q11XXXXXXXXX12345678901234567890123456R11XXXXXXXXX12345678901234567890123456W55XXXXXXXXX12345678901234567890123456E20XXXXXXXXX12345678901234567890123456I47XXXXXXXXX12345678901234567890123456I49XXXXXXXXX12345678901234567890123456I50XXXXXXXXX12345678901234567890123456Q11XXXXXXXXX12345678901234567890123456R11XXXXXXXXX12345678901234567890123456W55XXXXXXXXX I would like to split it by record type, like this: grep -E '^.{26}(E20)' myfile.txt > E20.txtgrep -E '^.{26}(I47)' myfile.txt > I47.txtgrep -E '^.{26}(I49)' myfile.txt > I49.txtgrep -E '^.{26}(I50)' myfile.txt > I50.txtgrep -E '^.{26}(Q11)' myfile.txt > Q11.txtgrep -E '^.{26}(R11)' myfile.txt > R11.txtgrep -E '^.{26}(W55)' myfile.txt > W55.txt and do something else, for example echo "Unexpected record type" when the record type is not in (E20, I47, I49, I50, Q11, R11, W55). For example, E20.txt file will be: 12345678901234567890123456E20XXXXXXXXX12345678901234567890123456E20XXXXXXXXX12345678901234567890123456E20XXXXXXXXX and so on. Is there an elegant way to do it (in a script) on Linux? | Here's one awk way. First, create a file with the "good" records, one per line: $ cat goodRecs E20I47I49I50Q11R11W55 Then: gawk 'FNR==NR{good[$1]; next} { rec=substr($1,27,3); if(rec in good){ print > rec".txt" } else{ print "Bad record: "rec } }' goodRecs myfile.txt | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/717995",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/184179/"
]
} |
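A variant of the awk approach above that skips the separate goodRecs file: the list of valid record types is hard-coded in a BEGIN block (an assumption, adjust it to match your data), the type is taken from $0 so field splitting cannot interfere, and unexpected types are collected in their own file instead of being echoed.
awk '
BEGIN { split("E20 I47 I49 I50 Q11 R11 W55", t); for (i in t) good[t[i]] }
{
    rec = substr($0, 27, 3)
    if (rec in good) print > (rec ".txt")
    else             print > "unexpected.txt"
}' myfile.txt
Each of E20.txt, I47.txt, ... ends up with only its own records, and anything else lands in unexpected.txt for review.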
718,015 | I'm looking for an answer to this question that doesn't involve the less command: Is there a way to run the ls command in terminal, and prevent line wrapping for long filenames? For example, upon running ls, rather than the following: shortfilethisisalongfilenameshortfile I'd like it to cut off the long filename at the point where it would wrap, or not wrap it at all, so that it shows simply as: shortfilethisisalongfishortfile | setterm --linewrap off; ls; setterm --linewrap on. setterm will do its best to configure your terminal (terminal emulator), but in general the terminal may or may not support the feature. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/718015",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/537545/"
]
} |
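The setterm approach above is easy to wrap in a small shell function so that line wrapping is always switched back on afterwards; the name lsnw is made up for this sketch, and it assumes setterm from util-linux is available:
lsnw() {
    setterm --linewrap off
    ls "$@"
    setterm --linewrap on
}
Call it like the normal command, e.g. lsnw -l /some/dir. If the terminal does not honour the linewrap setting, a cruder fallback is to truncate the columns instead: ls -C | cut -c -"${COLUMNS:-80}".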
718,042 | I'm a little confused about the missing -e option from the bash manual. man bash But it is working with a script shebang like : #!/bin/bash -e and of course it is defined in help set . Why isn't it listed in the options in the bash manual ? | It is implicitly mentioned at the start of the manual: OPTIONS All of the single-character shell options documented in the description of the set builtin command, including -o , can be used as options when the shell is invoked. [...] You are then expected to look up the set builtin command further down in the manual, use help set in an interactive shell session (as you mention in the question), or access the longer reference manual in some appropriate way (e.g. by using the info bash set command, on systems where this works). | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/718042",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/119603/"
]
} |
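Seen from the invocation side, the single-character form and the -o long form are interchangeable, which is what that paragraph of the manual describes; script.sh is just a placeholder name here:
bash -e script.sh           # errexit enabled for the whole script
bash -o errexit script.sh   # same thing, long-option spelling
set -o                      # inside a running shell: list which of these options are currently on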
718,052 | This is the current state of my disk at present , how can I use the unused space or move stuff to free up space , without formatting or losing data Filesystem Size Used Avail Use% Mounted ondevtmpfs 7.6G 0 7.6G 0% /devtmpfs 7.7G 121M 7.5G 2% /dev/shmtmpfs 7.7G 2.0M 7.7G 1% /runtmpfs 7.7G 0 7.7G 0% /sys/fs/cgroup/dev/mapper/Root 50G 50G 336M 100% //dev/nvme0n1p2 3.0G 467M 2.6G 16% /boot/dev/nvme0n1p1 200M 17M 184M 9% /boot/efi/dev/mapper/Home 100G 30G 71G 30% /hometmpfs 1.6G 52K 1.6G 1% /run/user/119637 Output of lvmdiskscan /dev/mapper/luks-e66c5c74-2af5-4500-9e5f-011c23ab17aa [ 235.26 GiB] LVM physical volume /dev/nvme0n1p1 [ 200.00 MiB] /dev/nvme0n1p2 [ 3.00 GiB] /dev/nvme0n1p3 [ <235.28 GiB] 0 disks 3 partitions 1 LVM physical volume whole disk 0 LVM physical volumes Can I merge home partition or add a partition for root from home as it has more space ? Are the steps below logical ? Make another partition of home Merge that to root ( No idea on commands how to do) Say if I provide 10G from home to root it will resolve the storage issue for my machine and all data will be intact . As a workaround for now , I just moved the heaviest files # find . -type f -size +1G./VirtualBox VMs/origin-1.3.0/box-disk1.vmdk./VirtualBox VMs/virtualBox-related_default_1654693896122_36201/centos-7-1-1.x86_64.vmdk./.vagrant.d/boxes/thesteve0-VAGRANTSLASH-openshift-origin/1.2.0/virtualbox/box-disk1.vmdk# mv "./VirtualBox VMs/origin-1.3.0/box-disk1.vmdk" /home/# df -hFilesystem Size Used Avail Use% Mounted ondevtmpfs 7.6G 0 7.6G 0% /devtmpfs 7.7G 155M 7.5G 2% /dev/shmtmpfs 7.7G 2.0M 7.7G 1% /runtmpfs 7.7G 0 7.7G 0% /sys/fs/cgroup/dev/mapper/Root 50G 40G 11G 79% //dev/nvme0n1p2 3.0G 467M 2.6G 16% /boot/dev/nvme0n1p1 200M 17M 184M 9% /boot/efi/dev/mapper/Home 100G 40G 61G 40% /hometmpfs 1.6G 48K 1.6G 1% /run/user/119637 Not sure how this will impact using virtualbox :) | | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/718052",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/388380/"
]
} |
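Before moving anything on a layout like the one above, it is worth checking whether the volume group still has unallocated extents; these LVM commands are read-only and safe to run:
sudo pvs   # physical volumes and their free space
sudo vgs   # volume groups, the VFree column is what could still be handed to Root
sudo lvs   # the Root, Home and swap logical volumes with their current sizes
If VFree is zero, growing Root means shrinking Home first, and whether that is even possible depends on the filesystem: ext4 can be shrunk while unmounted, XFS cannot be shrunk at all, so check lsblk -f before planning any resize.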
718,569 | How do I do a search for any of the following: Find folders that start with a space on linux and then remove any starting spaces so the folder can start with a letter or a number? Find FILES that start with a space on linux and then remove any starting spaces so the files can start with a letter or a number? | find . -depth -name ' *' -exec sh -c ' for pathname do newname=${pathname##*/} newname=${newname#"${newname%%[! ]*}"} newpathname=${pathname%/*}/$newname if [ -z "$newname" ] || [ -e "$newpathname" ]; then continue fi mv -v "$pathname" "$newpathname" done' sh {} + The above finds any file or directory whose name starts with at least one space character in or below the current directory. It does this in a depth-first order (due to -depth ) to avoid renaming directories that it hasn't yet processed the contents of; renaming a directory would otherwise cause find not to find it later, as it was renamed. A short in-line shell script is called for batches of names that start with a space. The script iterates over the given pathnames and starts by extracting the actual name from the end of the current pathname into the variable newname . It does this using the standard parameter substitution, ${pathname##*/} , removing everything to the last / in the string (the longest prefix matching */ ), leaving the final pathname component. This is essentially the same as "$(basename "$pathname")" in this case. We then need to trim off the spaces guaranteed to exist at the start of the string in $newname . We do this by first removing everything but the spaces with ${newname%%[! ]*} (the longest suffix matching [! ]* , i.e. from the first non-space character onwards), and then removing the result of that from the start of the $newname string with ${newname#"${newname%%[! ]*}"} . The destination path in the mv command is made up of the directory path of $pathname concatenated by a slash and the new name, i.e. ${pathname%/*}/$newname , which is essentially the same as "$(dirname "$pathname")/$newname" . The code detects name collisions and silently skips the processing of names that would collide. It also skips names that collapse to empty strings. This is what the if statement before mv does. If you want to bring these names to the user's attention, then do so before continue . Test run on a copy of your backed-up data. Test-running the code above: $ tree -Q"."|-- " dir0"| |-- " o dir1"| | |-- " otherfile"| | `-- "dir2"| | |-- " dir3"| | `-- " moar"| `-- "file"`-- "script"4 directories, 4 files $ sh script./ dir0/ o dir1/dir2/ moar -> ./ dir0/ o dir1/dir2/moar./ dir0/ o dir1/dir2/ dir3 -> ./ dir0/ o dir1/dir2/dir3./ dir0/ o dir1/ otherfile -> ./ dir0/ o dir1/otherfile./ dir0/ o dir1 -> ./ dir0/o dir1./ dir0 -> ./dir0 Notice how it starts at the bottom of the directory hierarchy. If it had tried renaming dir0 first, it would have failed to enter it later to rename the other directories and files. $ tree -Q"."|-- "dir0"| |-- "file"| `-- "o dir1"| |-- "dir2"| | |-- "dir3"| | `-- "moar"| `-- "otherfile"`-- "script"4 directories, 4 files | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/718569",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/444473/"
]
} |
718,606 | Commands can be stored in variables and executed (though not a good practice) in shell like: command='ls -l A* "B\" type"'$command It lists files beginning with A , "B\" and type" . Argument separating and globbing are performed, but no quote removal and escaping. This behavior makes passing arbitrary arguments using one variable very difficult in shells not supporting arrays, and impossible to combine find and other commands with for safely (which is often discussed). It even limits the usage of globbing as a lot of characters are uncontrolled in unquoted variable expansion (you cannot store globbing sequences containing `'"*?\n literals and reuse them properly). The situation would be very different if quotes and escape sequences in variables can be processed. But why most shells don't do that in fact? Is it specially designed with some obscure considerations I didn't notice, or just simply passed down to keep compatibility? I know there are similar questions like Why does bash variable expansion retain quotes? and Quoting / escaping / expansion issue in "command in a variable" discussing the behavior, but the answers there didn't talk about reasons. | This behavior makes passing arbitrary arguments using one variable very difficult [...] Perhaps. But having the results of expansions go through all the usual command line processing would make it impossible to pass even one arbitrary argument intact. Consider e.g. a script that gets a filename from somewhere, and tries to pass it to a command. Let's say we get the filename with read : echo -n "please enter filename: "read -r filenamesome command "$filename" Now, if the user enters a filename like don't stop me now.txt , running some command will crash with a syntax error due to the single quote. Similarly if the script is run e.g. as myscript don*.txt and gets the filename from a command line argument: filename=$1some command "$filename" Again, $filename (or $1 already) would contain that single quote. Worse, the filename or user input string could contain a command substitution, making it possible for merely using the variable to run arbitrary commands. The script writer would have to go through hoops to add escapes to every string read from outside the script, and some ways of doing that might already trigger the expansion processing. Plus people just wouldn't do that, and the shell would be even more unsafe a tool to use. (For what you want it wouldn't be necessary to process expansions, just quotes and backslashes, but the issues with unpaired quotes would still be there.) Of course, you could also say that read should just add the necessary escapes, but would they need to be added for all other sorts of input too? How would string operations work, would they need to process the quotes too? Even something as simple as ${#var} for the length of a variable would turn much more expensive to implement. And what would the length even mean for a variable that contained multiple distinct quoted strings? In the end, it's best to consider the code of the script distinct from the data the script processes, and to organize it so that they don't get mixed up, so that the data only gets processed in ways explicitly set in the code.That's what the shell pretty much does, if you remember to quote the variable expansions. Using data in variables as-is is what every other programming language also does. E.g. 
in this C snippet, the string that gets printed is "foo bar" , with the quotes, they're not parsed by the runtime environment: char *s = "\"foo bar\"";printf("%s\n", s); Similarly, if it was s = "foo()" instead, that printf() call would not call the function foo() , but would just print the string foo() . (If you want to object about interpreted vs. compiled languages, we could change the example to Perl or Python.) Now, that's just an argument as to why what you suggest doesn't seem a good idea, to me, in 2022. But really, you asked about the "why" and the design rationale. Those didn't happen in 2022, but in the 1970's and 1980's or so. Wikipedia mentions the initial release of the Bourne Shell as having happened in 1979. That's a long time ago, and the existing history of computing was a lot shorter back then than it is now. We now have the some benefit of hindsight, which probably has helped in the creation of other tools, like those shell arrays. Faster computers and more memory too. I wouldn't reject the idea that the actual explanation behind the design might be something like "that's what they came up with when they were first figuring all this out, and for some reason it stuck". Backward compatibility goes two ways. At least now you have those shells with arrays, and completely different shells too. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/718606",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/542651/"
]
} |
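As a concrete illustration of the array escape hatch mentioned at the end of the answer above, here is a minimal bash sketch storing the example command from the question; each argument is kept as its own element, so the quoting is decided once, at assignment time:
cmd=(ls -l A* 'B" type')   # A* globs now, each match becomes one element; the quoted name stays a single argument
"${cmd[@]}"                # runs ls with exactly those arguments, no re-parsing of quotes
In a POSIX shell without named arrays, the positional parameters can play the same role: set -- ls -l A* 'B" type' followed by "$@".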
718,976 | I have a script where I need to do all possible comparisons between the contents of a string variable. Each combination requires a different approach to the contents of the variable, so something like this: if $a contains "a" AND "b" AND "c"; then do xelif $a contains "a" AND "b" but NOT "c"; then do yelif $a contains "b" AND "c" but NOT "a"; then do z... As far as I know, the way of doing that is constructing an if conditional like this one: if [[ $A == *"b"* ]] && [[ $A == *"c"* ]] && [[ $A == *"d"* ]]elif [[ $A == *"b"* ]] && [[ $A == *"c"* ]]elif [[ $A == *"c"* ]] && [[ $A == *"d"* ]]elif [[ $A == *"b"* ]] && [[ $A == *"d"* ]]elif [[ $A == *"b"* ]]elif [[ $A == *"c"* ]]elif [[ $A == *"d"* ]]fi But that is of course too complicated to read, understand and write without mistakes, given that the name of my variable ($A) and the substrings (b, c, d) are way longer than that. So I wanted to see if there was a way of saving the contents of a conditional expression to a variable: contains_b= *a condition for [[ $A == *"b"* ]]*; echo $contains_b -> true | false I only found the response here [ contains_b=$(! [ "$A" = *"b"* ]; echo $?) ]; however, $contains_b -> 0 won't work in subsequent conditionals because: if $contains_b; then do x; fi -> bash: [0]: command not found So the only solution that I could think of is doing it manually: if [[ $A == *"b"* ]]; then contains_b=trueelse contains_b=falsefi However, I would end up doing the three if statements to get the three variables, and the other 7 comparisons for each of the different combinations. I am wondering if there is a different/more efficient way of doing it. If not, do you have a suggestion for another way of doing these multiple comparisons? I feel that I am making it overly complicated... Thank you for any help. | Create a binary mask and then act on it. This has the benefit of only performing each test once, and it separates the testing from acting on the result of the tests. Note that the code uses the patterns as extended regular expressions. To compare them as strings, use [[ $string == *"$pattern"* ]] in place of [[ $string =~ $pattern ]] in the code below. patterns=( a b c )string='abba'mask=0; i=0for pattern in "${patterns[@]}"; do if [[ $string =~ $pattern ]]; then # setting the i:th bit from the right to one mask=$(( mask | (1 << i) )) fi i=$(( i + 1 ))donecase $mask in 0) echo no match ;; 1) echo first pattern matched ;; 2) echo second pattern matched ;; 3) echo first and second pattern matched ;; 4) echo third pattern matched ;; 5) echo first and third pattern matched ;; 6) echo second and third pattern matched ;; 7) echo all patterns matched ;; *) echo erroresac Or, with a string mask with ones and zeros (zero denoting no match and one denoting match). Note that the string in mask below is the reverse of the actual binary representation of the numbers used in the code above. patterns=( a b c )string='abba'unset -v maskfor pattern in "${patterns[@]}"; do ! [[ $string =~ $pattern ]] # string concatenation of the exit status of the previous command mask+=$?donecase $mask in 000) echo no match ;; 100) echo first pattern matched ;; 010) echo second pattern matched ;; 110) echo first and second pattern matched ;; 001) echo third pattern matched ;; 101) echo first and third pattern matched ;; 011) echo second and third pattern matched ;; 111) echo all patterns matched ;; *) echo erroresac The output from each of these scripts would be first and second pattern matched ... 
since the string abba matches the first two patterns, a and b . | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/718976",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/391907/"
]
} |
719,201 | When I use less and press v it switches into the currently set editor (Emacs or vim). MISCELLANEOUS COMMANDSv Edit the current file with $VISUAL or $EDITOR. Is it possible to prevent this behavior where I don't want current file to be open in the editor? | You can disable v by binding it to noaction : add # commandv noaction to ~/.lesskey (or, if $XDG_CONFIG_HOME is set and you’re using less 582 or later, $XDG_CONFIG_HOME/lesskey ), and, if you’re using less 581 or older, run lesskey . You can also bind v to a different command. For example, to make it move down a line instead of opening an editor, use # commandv forw-line instead. (The default binding is visual .) Another way to disable v in less is to set VISUAL to true : VISUAL=true less foo Pressing v will then run true foo , which will immediately return to less . | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/719201",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/198423/"
]
} |
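Written out as commands, the configuration above looks like this (note that #command and v noaction are two separate lines inside ~/.lesskey); running lesskey afterwards is only needed for less 581 or older:
printf '#command\nv noaction\n' >> ~/.lesskey
lesskey    # compiles ~/.lesskey into the binary ~/.less read by older versions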
719,379 | I have just upgraded two servers from Debian 10 (Buster) to Debian 11 (Bullseye). Afterwards, I could not reach either of them via the network any more. After some investigation, the following problem turned out: Both machines have a bridge device configured. Obviously, the algorithm which Debian uses to assign MAC addresses to bridge devices has changed from version 10 to 11. After the upgrade, the bridge device on the first server had the same MAC address as the bridge device on the second server , which for sure has not been the case before. One of the answers there claims that a bridge is a purely internal device and that therefore a bridge's MAC address does not matter. However, this is obviously wrong. At least in my case, packets from both machines were outgoing with the hardware source address being the bridge's MAC address, and the network ports on both machines were processing incoming packets only if they were destined for the bridge's MAC address. Since that MAC address was the same on both machines, the network became unusable, which is completely logical and understandable. How can I make Debian generate different MAC addresses for bridge devices which are on different machines (or even on the same machine, but that's currently not my issue)? | Browsing in Internet I found this bug report on systemd-udev related to Debian 11 bridges: systemd-udev interferes with MAC addresses of interfaces it's not supposed to do#21185 : ash.in.ffho.net:~# for n in 0 1 2 3; do ip l add br$n type bridge; doneash.in.ffho.net:~# ip -br lbr0 DOWN d2:9e:b3:32:53:42 <BROADCAST,MULTICAST> br1 DOWN e2:00:44:2c:5b:70 <BROADCAST,MULTICAST> br2 DOWN 0e:99:b7:42:f0:25 <BROADCAST,MULTICAST> br3 DOWN a6:3f:5f:b5:9a:d6 <BROADCAST,MULTICAST> ash.in.ffho.net:~# for n in 0 1 2 3; do ip link del br${n}; doneash.in.ffho.net:~# for n in 0 1 2 3; do ip l add br$n type bridge; doneash.in.ffho.net:~# ip -br lbr0 DOWN d2:9e:b3:32:53:42 <BROADCAST,MULTICAST> br1 DOWN e2:00:44:2c:5b:70 <BROADCAST,MULTICAST> br2 DOWN 0e:99:b7:42:f0:25 <BROADCAST,MULTICAST> br3 DOWN a6:3f:5f:b5:9a:d6 <BROADCAST,MULTICAST> As you can see, the bridges were created with low-level commands, but they always inherit the same MAC address value: a systemd component interferes and sets the MAC address.One can see this in action using ip monitor link : 22: brtest0: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default link/ether 0a:ae:c3:0d:ec:68 brd ff:ff:ff:ff:ff:ff22: brtest0: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default link/ether 1a:d0:fc:63:c1:71 brd ff:ff:ff:ff:ff:ffDeleted 22: brtest0: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default link/ether 1a:d0:fc:63:c1:71 brd ff:ff:ff:ff:ff:ff23: brtest0: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default link/ether 4e:e9:11:dd:a5:aa brd ff:ff:ff:ff:ff:ff23: brtest0: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default link/ether 1a:d0:fc:63:c1:71 brd ff:ff:ff:ff:ff:ff You can see how the MAC address initially random is overwritten to a fixed one, twice to the same value for a given bridge name. An other side effect is that when interface is set administratively UP, the bridge operational status becomes DOWN instead of UNKNOWN initially because of this (see these answers of mine on SU and SF mentioning behaviors about DOWN and UNKNOWN: How does Linux determine the default MAC address of a bridge device? , linux ipv6 bridge address does not work when mac address is forced ). 
Anyway this doesn't matter anymore once its first bridge port is attached. Doing the same experiment inside a network namespace (eg: ip add netns experiment and ip netns exec experiment bash -l before running above commands twice) where systemd-udevd does not interfere will show the usual behavior of having different random addresses each time. This is an effect of systemd ecosystem and doesn't happen on systems not running systemd (or older versions of systemd). One proposed fix is to use: # /etc/systemd/network/90-bridge.link[Match]OriginalName=br*[Link]MACAddressPolicy=random but it appears the real fix is to change the file that participates in generating this "stable random" value, as described there: https://wiki.debian.org/MachineId Each machine should have a different value. This is especially important for cloned VMs from a base template. The relation between machine-id and the way the bridge "stable" MAC address is generated is mentioned in the patch having implemented the (quite breaking) change : === This patch This patch means that we will set a "stable" MAC for pretty much anyvirtual device by default, where "stable" means keyed off themachine-id and interface name . It was also mentioned that this would be having impacts , but this was shrugged off. This is not limited to interfaces of type bridge but to any interface that would generate a random MAC address: for example types veth , macvlan tuntap are also affected. I could verify that the same bridge name would get a different "stable random" value after doing the operations described in Debian's link: rm -f /etc/machine-id /var/lib/dbus/machine-iddbus-uuidgen --ensure=/etc/machine-iddbus-uuidgen --ensure giving now in previous ip monitor a new MAC address for the same bridge name: 32:ee:c8:92:9f:e8 instead of 1a:d0:fc:63:c1:71 when deleting and recreating brtest0 . Deleted 23: brtest0: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default link/ether 1a:d0:fc:63:c1:71 brd ff:ff:ff:ff:ff:ff24: brtest0: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default link/ether da:72:b6:63:23:e5 brd ff:ff:ff:ff:ff:ff24: brtest0: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default link/ether 32:ee:c8:92:9f:e8 brd ff:ff:ff:ff:ff:ff Conclusion: Because the bridge MAC address is now manually set the bridge won't inherit anymore one of the MAC addresses of other interfaces set as bridge ports, including the usual permanent (physical or VM's) interfaces which are expected to have each a different MAC address. Two systems using the same machine-id and the same bridge name (eg: br0 ) with such bridge participating in routing (ie: there's an IP address configured on the bridge, but even if not the bridge can emit other frames related to bridging depending on its settings) on the same LAN will emit frames with the same source MAC address (bridge's), possibly disrupting switches in the path and anyway ignoring such same source MAC address from the peer. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/719379",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/210810/"
]
} |
719,384 | The latest grep 3.8 emits a warning on a pattern where whitespace is escaped with a backslash $ grep "bla\ bazz" t /tmp/bin/grep: warning: stray \ before white space... whereas grep 3.6 does not complain. What's the correct way to deal with such a pattern ? Just do not escape the space? I.e. $ grep "bla bazz" t Are there some more exotic grep 's out there, which would incorrectly deal with an unescaped space? Maybe different quotes should be used to make it all nice and clean? | The space character is not special in regular expressions (except in perl -like ones when the x flag is enabled), so it must not be escaped. \ followed by a space yields unspecified results in POSIX regexps. So you want: grep 'blah bazz' If you want to make it more visible, you can use: grep 'blah[ ]bazz' More generally you should not put \ in front of characters that are not regular expression operators. Where X is not a regular expression operator, \X may very well be, if not now then maybe in a future version. For instance, + , < , d are not basic regular expression operators , but \< , \+ and \d are for some grep implementations. You may want to use \ followed by a space in: grep -P '(?x) foo \ bar' perl -ne 'print if / foo \ bar /x' To match on foo bar when the x flag is on. But even there, you'd rather do: grep -P '(?x) foo [ ] bar' To make it more legible. The whole point of the x flag is to make regexps more legible like: perl -ne 'print if m{ \d{4} # year - \d{2} # month - \d{2} # day [ ] (foo | bar | baz)}x' vs perl -ne'print if/\d{4}-\d{2}-\d{2} (foo|bar|baz)/' You can't use [ ] with the xx flag (in perl 5.26+, not PCRE) though, where spaces are also ignored inside bracket expressions. See perldoc perlre for details of perl regular expressions, and man pcrepattern for the PCRE (perl-compatible regular expressions) ones. Using \Q \E is another option. In any case, while space is a special character in the syntax of the shell and not in regular expressions, there are a number of characters that are special in both such as * , \ , ( , ) , ? , $ , ^ , [ , ] , so would need to be escaped for both if meant to be matched literally, preferably with quotes for the shell, and with \ (or [...] , or \Q...\E in perl-like ones) for the regexps. As \ and $ are common in regular expressions, and those characters are still special to the shell inside double quotes, it's a good habit to put regexps in single quotes rather than double quotes. You'd only use double quotes if you needed to expand a shell parameter into the regular expression as in grep "^$var" or needed to include a ' in the regexp. To grep literal strings as opposed to regular expressions, or in other words, to escape every regular expression operator, you can use the -F (for F ixed string) option to grep . For instance: grep -F 'blah\ bazz' Would look for lines that contain blah\ bazz . | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/719384",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/24028/"
]
} |
719,513 | I have a host behind a dynamic IP, so I used to have a script that would add its address to my .ssh/known_hosts file, recently though it seems like something has changed. My file looks like its been attacked by the hash monster: |1|Du0QWjqCUrdRK/pnE0PTww2O2Zk=|O31W+SPPLr9+sj1m1K7MfEb+xUQ= ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILUT1234567Xu2vvCE1likgUSOXLzEV123456783asaA|1|K3vgE86MLJTHx8W2sPv1cgP4DI0=|Jattsr5sEW443bnyMKT6W0Noc+k= ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILUT1234567Xu2vvCE1likgUSOXLzEV123456783asaA|1|UlAukzqGavXZvRtMzjvXmHoVeAQ=|0JVjq7YSFulCHmkF46VFwMV/ZBY= ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILUT1234567Xu2vvCE1likgUSOXLzEV123456783asaA Is there anyway to go back to the old, less secure method? How can I easily create entries in this hashed format? (I want to write a script to tell ssh that any ip in the 10.0.0.0/24 range is should match the given fingerprint.) | You can disable the hostname hashing by setting the HashKnownHosts SSH client option to "no": either each time on the command line ( ssh -o HashKnownHosts=no ... ) or in your user configuration file ~/.ssh/config like this: Host * HashKnownHosts no Not sure how you can hash a hostname yourself, but you can have intermixed entries in the known_hosts file, hashed and non-hashed ones, so it would be fine for your script to create non-hashed entries. And you could use ssh-keygen -H to convert all entries in the known_hosts file to their hashed form. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/719513",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/17612/"
]
} |
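For the 10.0.0.0/24 range mentioned at the end of the question, a plain (non-hashed) known_hosts entry can use a wildcard pattern in the hostname field, so a single line covers the whole subnet; the key type and base64 blob below are placeholders for the host's real public key:
echo '10.0.0.* ssh-ed25519 AAAAC3...yourhostkey...' >> ~/.ssh/known_hosts
If the key is not at hand, ssh-keyscan can fetch it from wherever the host currently sits and sed can swap in the pattern (10.0.0.17 is only an example address):
ssh-keyscan -t ed25519 10.0.0.17 2>/dev/null | sed 's/^[^ ]*/10.0.0.*/' >> ~/.ssh/known_hosts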
719,560 | I have two files with the following permissions: -rwsr--r-- 1 root root 213 Oct 22 12:15 f1-r--rwxr-- 1 Bob staff 113 Oct 22 12:18 f4 Can the user Bob execute f1 and why? What is the effect of the s in the set of permission on f1 ? | | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/719560",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/543663/"
]
} |
719,788 | Are x=$y and x="$y" always equivalent? Didn't know how to search for this. So far, I've always been using x="$y" to be "on the safe side". But I used x=$1 at one point and noticed that, apparently, I don't even need the extra double quotation marks. Where is the behavior defined in The Open Group POSIX document? | Yes, x=$y and x="$y" are guaranteed to be the same in a POSIX shell. If you or some other reader of your code are unsure where double quotes must be used (see When is double-quoting necessary? ), including the double quotes may be the safer option, as to not introduce confusion. From the POSIX specification ( section 2.9.1, "Simple Commands" ): When a given simple command is required to be executed [...], the following expansions, assignments, and redirections shall all be performed from the beginning of the command text to the end: The words that are recognized as variable assignments or redirections according to Shell Grammar Rules are saved for processing in steps 3 and 4. [...] Redirections shall be performed as described in Redirection . Each variable assignment shall be expanded for tilde expansion, parameter expansion, command substitution, arithmetic expansion, and quote removal prior to assigning the value. Note that the fourth point does not include field splitting or pathname expansion ("globbing"), which are usually part of the expansions done when the word is not recognised as a variable assignment. Since these steps are removed for assignments, the quoting is not necessary. See also section 2.6, "Word Expansions" : The order of word expansion shall be as follows: Tilde expansion (see Tilde Expansion ), parameter expansion (see Parameter Expansion ), command substitution (see Command Substitution ), and arithmetic expansion (see Arithmetic Expansion ) shall be performed, beginning to end. See item 5 in Token Recognition . Field splitting (see Field Splitting ) shall be performed on the portions of the fields generated by step 1, unless IFS is null. Pathname expansion (see Pathname Expansion ) shall be performed, unless set -f is in effect. Quote removal (see Quote Removal ) shall always be performed last. The following only matters if y in x=$y and x="$y" is actually one of the special parameters * or @ : Note that since "$@" expands to a list of strings, it is unspecified what x=$@ and x="$@" do, while x=$* is the same as x="$*" . In some shells (e.g. bash , ksh93 ), using $@ like this is the same as using $* when the first character of $IFS is a space, while in others (e.g. busybox sh , dash , zsh ) it's the same as $* and uses the first character of the set value of $IFS . | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/719788",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/491702/"
]
} |
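A quick demonstration that the unquoted assignment really does skip field splitting and globbing, which is why the two forms behave identically:
y='a  *   b'
x=$y
printf '<%s>\n' "$x"   # prints <a  *   b>: one string, spacing kept, the * not expanded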
719,802 | I can't create folders or files named 'com1', 'com2', ..., 'com9' in my extended hard drive. I'm trying to create a Wine prefix on my other drive where my games are stored, but I get some errors. Here is a pastebin of the whole output when I run winecfg to a new prefix. https://pastebin.com/SsaAFGdw I believe it's not a permission issue since I can make directories and files. And, I also tried creating a prefix from my main boot drive, then move it to my extended hard drive, then I get errors when it's now trying to copy files named 'com1', 'com2', ..., 'com9' . This is how my extended drive partitioned: sudo WINEPREFIX='path' winecfg also does not work, same result. EDIT:OS: Manjaro KDE Plasma Output from mount | grep /dev/sdb : /dev/sdb2 on /run/media/snich/Extended type fuseblk (rw,nosuid,nodev,relatime,user_id=0,group_id=0,default_permissions,allow_other,blksize=4096,uhelper=udisks2)/dev/sdb4 on /run/media/snich/Games type fuseblk (rw,nosuid,nodev,relatime,user_id=0,group_id=0,default_permissions,allow_other,blksize=4096,uhelper=udisks2)/dev/sdb3 on /run/media/snich/Personal type fuseblk (rw,nosuid,nodev,relatime,user_id=0,group_id=0,default_permissions,allow_other,blksize=4096,uhelper=udisks2) | Assuming ntfs-3g is used, windows_names is probably set somewhere as an option. Seen man page OPTIONS windows_names This option prevents files, directories and extended attributes to be created with a name not allowed by windows, because it contains some not allowed character, or the last character is a space or a dot, or the name is reserved. The forbidden charactersare the nine characters " * / : < > ? \ | and those whose code is lessthan 0x20, and the reserved names are CON, PRN, AUX, NUL, COM1..COM9,LPT1..LPT9, with no suffix or followed by a dot. Existing such files can still be read (and renamed). Edited response : I'm currently with debian/Buster and there is a /etc/udisks2/udisks2.conf file containing : ### For the reference, these are the builtin mount options:# [defaults][...]# ntfs_defaults=uid=$UID,gid=$GID,windows_names# ntfs_allow=uid=$UID,gid=$GID,umask,dmask,fmask,locale,norecover,ignore_case,windows_names,compression,nocompression,big_writes So, for debian, and probably most of their derivatives, mounting an NTFS implies using option windows_names . As explained in the same file (a little bit higher), you could try putting your options in a /etc/udisks2/mount_options.conf file. Just edit/create the file, copy those two lines, remove leading hash and remove option windows_names . Do everything as root, and take care of permissions. Unmount and re-mount. (Now, I'm not sure all this is a good advise : as Wine will act "à la" MS-Windows, this will not end to be a good thing.) This is just a feeling, not fact, and many others did proved it doesn't hurt. Enjoy ! | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/719802",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/543933/"
]
} |
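As an alternative to changing the udisks2 defaults, the partition from the question can be mounted by hand with ntfs-3g and without windows_names; the mount point is arbitrary, and uid/gid are passed so the desktop user keeps write access (this assumes the ntfs-3g driver, which the fuseblk entries in the question suggest is already in use):
sudo umount /run/media/snich/Games
sudo mkdir -p /mnt/games
sudo mount -t ntfs-3g -o uid=$(id -u snich),gid=$(id -g snich) /dev/sdb4 /mnt/games
A Wine prefix created under /mnt/games can then contain com1 ... com9 entries, with the caveat already noted in the answer that such names may confuse Windows tools reading the same partition later.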
719,965 | I have a file with 10,000 lines, and at the end of each line, there is a number, like: asdf ggg ١gghh rtt ٣٢٥tyyu bnnn jigff ٢٧ Arrange the digits of all numbers in reverse order. The right order is: asdf ggg ١gghh rtt ٥٢٣tyyu bnnn jigff ٧٢ Can you help me (please) to reverse the order of the numbers in each line? Eastern Arabic digits: (٠، ١، ٢، ٣، ٤، ٥، ٦، ٧، ٨، ٩) | Using Perl: $ perl -CSD -pe 's/(\d+)$/reverse($1)/e' fileasdf ggg ١gghh rtt ٥٢٣tyyu bnnn jigff ٧٢ This applies a substitution command to each line of input. The substitution matches any string of digits at the very end of the line. Whatever string of digits is matched, the reverse() function is used to reverse it. The result of the reversion is used as the replacement text in the substitution. The trailing /e causes Perl to treat the reverse($1) as a command to evaluate. The -CSD option to perl enables UTF-8 I/O. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/719965",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/544092/"
]
} |
720,025 | I have a text file, and I have a pattern that I want grep not to match. Thing is, I also want the line before not to match. My file: line 1line 2patternline 4 And I tried cat file | grep -v pattern , which outputs: line 1line 2line 4 Then I tried cat file | grep -B 1 pattern , which outputs: line 2pattern However, when I use both of them together cat file | grep -v -B 1 pattern , I get: line 2 How can I make it so that the output is: line 1line 4 | I tend only to use grep when extracting single lines from files, so when I need to perform more complicated edits in a text, I use other tools. All solutions here assume that the pattern may occur multiple times in the text and will remove the lines on which it occurs and the lines immediately previous to them. The first two solutions will have issues if the pattern matches on consecutive lines. You can use sed to match a pattern with /pattern/ and let that trigger the commands N and d , which appends the next line to the buffer and then discards both: sed '/pattern/ { N; d; }' file Since you want to discard the line before the match of the pattern, we feed the data backwards into sed , starting with the last line and moving towards the start of the file. Then we reverse the data again when sed is done. tac file | sed '/pattern/ { N; d; }' | tac The tac utility is part of GNU coreutils. Most non-GNU systems may use tail -r in place of tac (check your tail(1) manual). If the pattern matches two consecutive lines, this will fail to remove the line previous to the first of those lines (since the first line would get deleted). Using the ed editor: printf '%s\n' 'g/pattern/ -1,. d' ,p Q | ed -s file This applies the command g/pattern/ -1,. d to the contents of the file. This command searches for each line that matches pattern , and then deletes that line and the line previous to it. The final ,p and Q editing command prints the whole file and quit the editor without saving. If the pattern matches two consecutive lines, this will remove the line that becomes previous to the second line after removing the line previous to the first line. (That last sentence was correct when I wrote it, but it's obviously a write-only sentence.) We can also use grep and its non-standard but commonly implemented -B option for giving us the line numbers that need to be deleted. These numbers can be converted to a sed script that we run on the original data: grep -n -B1 'pattern' file | sed 's/[:-].*/d/' | sed -f /dev/stdin file The grep command would, given the text in the question, output 2-line 23:pattern ... and the first sed command converts this into the sed editing command 2d followed by 3d ("delete line 2 and 3"). The last sed command in the pipeline takes this editing script and applies it to the original text. This variant has no issues with consecutive lines matching the pattern as it uses a kind of 2-pass approach, first finding all lines that should be deleted and then deleting them (instead of deleting lines while reading the text for the first time). | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/720025",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/531949/"
]
} |
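A single-pass awk alternative to the approaches above: each line is held back until the next one has been seen, and both the matching line and the held one are dropped. It copes with repeated matches and needs neither tac nor a second pass:
awk '/pattern/ { held = 0; next }      # drop the match and forget the held line before it
     held      { print prev }          # the held line turned out to be safe, print it
                { prev = $0; held = 1 }
     END       { if (held) print prev }' file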
720,037 | I work on dual independent modem device (yocto based). I would like to assign each to a different NM connection. I register them using cmd: nmcli c add type gsm ifname cdc-wdm[0|1] con-name mdm1orange apn internet It works. The problem is, cdc-wdm suffix comes from device registration/turn on order. I would like to associate it with USB port (they will use different operators SIMs APNs, configs etc), so I've created an udev rule: SUBSYSTEM=="usbmisc", SUBSYSTEMS=="usb", KERNELS=="1-1.3:1.4", SYMLINK+="modem2", TAG+="systemd"SUBSYSTEM=="usbmisc", SUBSYSTEMS=="usb", KERNELS=="1-1.4:1.4", SYMLINK+="modem1", TAG+="systemd" It does work, I can see /dev/modem1 and /dev/modem2 being registered, but calling: nmcli c add type gsm ifname modem[1|2] con-name mdm1orange apn internet Just fails. NM does not have a clue which device I would like to use in connection. So how can I assign based-on-usb-port-alias or index (with udev or anything else) to modem network interface, not just the /dev/ symlink? It would be nice to have also a WWAN interface alias created too. Thanks! | | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/720037",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/543713/"
]
} |
720,044 | I am making a simple script to display some information about my CentOS 7 PC, similar to the System Information application on Windows. I wanted to know if there is a command which will display the total and remaining capacity of my virtual disk? Currently, I'm aware of the df command which I have used in this configuration to give me the remaining capacity: df -Ph | grep sda1 | awk '{print $4}' | tr -d '\n' I am also aware of the lsblk command, which does show the total size of my virtual disk. NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINTsda 8:0 0 60G 0 disk ├─sda1 8:1 0 1G 0 part /boot└─sda2 8:2 0 59G 0 part ├─centos-root 253:0 0 37G 0 lvm / ├─centos-swap 253:1 0 3.9G 0 lvm [SWAP] └─centos-home 253:2 0 18.1G 0 lvm /homesr0 11:0 1 1024M 0 rom I can also filter the lsblk command via: lsblk -o NAME,SIZE which gives: NAME SIZEsda 60G├─sda1 1G└─sda2 59G ├─centos-root 37G ├─centos-swap 3.9G └─centos-home 18.1Gsr0 1024M I would like to know how I can filter the output to just show 60G which is the capacity of sda ? I don't know anything about awk but I have seen it pop up in many other responses to similar questions, so maybe this is something I need to look further into. | | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/720044",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/438666/"
]
} |
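For the lsblk question above, a minimal hedged sketch of printing only the total capacity of the sda disk (assuming util-linux lsblk; -d restricts output to the whole device and -n drops the header line):

    lsblk -dn -o SIZE /dev/sda                          # prints e.g. 60G
    lsblk -o NAME,SIZE | awk '$1 == "sda" {print $2}'   # same idea, filtering the listing shown in the question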
720,136 | I wanted to use 2.6.2 Parameter Expansion to remove leading characters from a string, but was surprised to find out that "Remove Largest Prefix Pattern" doesn't automatically repeat the pattern. $ x=aaaaabc$ printf %s\\n "${x##a}"aaaabc As you can see, only the first a has been removed. Expected output was bc for any of x=bc , x=abc , x=aabc , x=aaabc or x=aaaabc . I'm struggling to figure out how I have to write the pattern if I want to remove as many a as possible from the beginning of $x . I had no luck searching for other threads either, because many answers use bash, but I'm looking for a POSIX shell solution. | I don’t think you can do this in a generic fashion ( i.e. ignoring specific features of the pattern), using only POSIX shell constructs, without using a loop: until [ "${x#a}" = "$x" ]; do x="${x#a}"; done | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/720136",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/491702/"
]
} |
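A follow-up sketch for the parameter-expansion question above: when the prefix is a run of one known character, the loop can be avoided by exploiting the pattern itself (this is pattern-specific, unlike the generic loop in the answer, but it is plain POSIX sh):

    x=aaaaabc
    x=${x#"${x%%[!a]*}"}     # ${x%%[!a]*} expands to the leading run of a's; stripping that prefix leaves bc
    printf '%s\n' "$x"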
720,151 | I would like to sum the numerical values listed in the columns of the table below, grouping rows by the entries in the second and last columns of the same table. The table content is as follows: 10,Mumbai,0,4,5,0,6,3,55,M2,Mumbai,1,3,2,0,4,4,4,M4,Chennai,5,6,7,8,9,0,6,F The expected outcome is as follows (data is grouped by the 2nd and last columns): 12,Mumbai,1,7,7,0,10,7,59,M4,Chennai,5,6,7,8,9,0,6,F How can I use awk on Linux to get this output? | I don’t think you can do this in a generic fashion ( i.e. ignoring specific features of the pattern), using only POSIX shell constructs, without using a loop: until [ "${x#a}" = "$x" ]; do x="${x#a}"; done | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/720151",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/544272/"
]
} |
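For the grouping question above, a hedged awk sketch that sums the numeric columns while grouping rows by the 2nd and last fields; it assumes every row has the same number of comma-separated fields and that file.csv is the input name:

    awk -F, -v OFS=, '
      { nf = NF; k = $2 SUBSEP $NF
        if (!(k in seen)) { seen[k] = 1; order[++n] = k }      # remember keys in input order
        for (i = 1; i < NF; i++) if (i != 2) sum[k, i] += $i }  # sum every numeric column per key
      END {
        for (j = 1; j <= n; j++) {
          k = order[j]; split(k, g, SUBSEP)
          out = sum[k, 1] OFS g[1]
          for (i = 3; i < nf; i++) out = out OFS sum[k, i]
          print out OFS g[2]
        }
      }' file.csv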
720,154 | I have a text file of IP:PORT pairs, for example 1.1.1.1:19192.2.2.2:1111.1.1.1:987 I need to use them in a script which has a JSON format: async def main(loop): servers = [{ "address": "ip", "port": port }, { "address": "ip", "port": port }] I need the output to be async def main(loop): servers = [{ "address": "1.1.1.1", "port": 1919 }, { "address": "2.2.2.2", "port": 111 }, { "address": "1.1.1.1", "port": 987 }] I am using Linux. | jq -nRr ' [ inputs | split(":") | {address: first, port: last} ] | "async def main(loop):\n servers = \(.)"' addresses outputs async def main(loop): servers = [{"address":"1.1.1.1","port":"1919"},{"address":"2.2.2.2","port":"111"},{"address":"1.1.1.1","port":"987"}] | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/720154",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/525521/"
]
} |
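A small follow-up to the jq answer above: the ports come out as strings there; if numeric ports are wanted, as in the requested output, a hedged variant applies tonumber (same assumed input file name, addresses):

    jq -nRr '[inputs | split(":") | {address: .[0], port: (.[1] | tonumber)}]
             | "async def main(loop):\n    servers = \(.)"' addresses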
720,179 | Please assume the data below. For easy understanding, I am using the 29th column position in this example. I want to check whether position 29 holds a letter or a number. If it is a letter it needs to be removed: in the 1st line, for example, the letter 'U' needs to be removed and in the 2nd line the 'D' needs to be removed, while the 3rd line needs no action since that position holds a number. 47720920010500002 U31417837966744783100812 D12345537966880762200334 356678 I tried the following: sed 's/^\(.\{212\}\)U/\&/' $file_name ... to replace the 212th character 'U' with a space, and cut -c -211,213- $file_name ... to remove the space from the 212th position. If it is always a constant U, this code should work. I need some help with commands, if any are available, that check for all the alphabets from a-z | jq -nRr ' [ inputs | split(":") | {address: first, port: last} ] | "async def main(loop):\n servers = \(.)"' addresses outputs async def main(loop): servers = [{"address":"1.1.1.1","port":"1919"},{"address":"2.2.2.2","port":"111"},{"address":"1.1.1.1","port":"987"}] | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/720179",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/544304/"
]
} |
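For the fixed-position question above, a hedged sed sketch: it deletes the character at a given column only when that character is a letter, and leaves the line untouched when it is a digit (column 29 shown first, as in the sample; the second line targets column 212 and edits the file in place with GNU sed):

    sed 's/^\(.\{28\}\)[A-Za-z]/\1/' "$file_name"
    sed -i 's/^\(.\{211\}\)[A-Za-z]/\1/' "$file_name"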
720,235 | When tee sends its stdout to a no-op ( : ) command via a pipe, then nothing is printed and the file size is zero. When tee sends its stdout to a cat via a pipe, then everything is printed properly and the file size is greater than zero. Here is a code example which shows it (conditioned by the script's first input argument): #!/usr/bin/env bashlog_filepath="./log.txt"[ -f "$log_filepath" ] && { rm "$log_filepath" || exit 1 ; }fail_tee="$1"while IFS= read -r -d $'\n' line ; do printf "%s%s\n" "prefix: " "$line" | \ tee -a "$log_filepath" | \ { if [ -n "$fail_tee" ]; then # Nothing is printed to stdout/terminal # $log_filepath size is ZERO. : # do nothing. else # Each line in the input is prefixed w/ "prefix: " and sent to stdout # $log_filepath size is 46 bytes cat fi }done <<'EOF'1234567890EOF Would appreciate the explanation behind it. My expectation for the no-op : command was that it shouldn't block tee from sending output to the file. | The : in tee ... | : is still a process holding the read-end of the pipeline set up by the shell, the other end of which tee is writing to. It's just that : exits immediately, which stops it from reading from the pipe. (For the simultaneous action of the pipeline to work, the shell has to spawn a new process for each part of the pipeline, even if it's just to process the no-op : . In your example, that process would run the if statement in the last part of the pipe, and then eventually exit after "running" the : builtin.) The usual behaviour is that when the reader of a pipe exits (the read-end file descriptors are closed), the writer gets the SIGPIPE signal on the next write, and that causes it to exit. That's usually what you want since it means that if the right-hand side of a pipeline exits, the left-hand side also exits, and doesn't hang around continuing a potentially long task uselessly. Or (worse) get stuck helplessly trying to write to a blocked pipe that doesn't admit any writes because the data has nowhere to go. For tee, it doesn't look like there's any exception in the POSIX specification; the part that comes nearest is a mention of write errors to file operands: If a write to any successfully opened file operand fails, writes to other successfully opened file operands and standard output shall continue, but the exit status shall be non-zero. If SIGPIPE is ignored, the implementations I tested continue past the EPIPE error that then gets returned from the write() call. The GNU coreutils version of tee has the -p and --output-error options to control what it does when a write fails: The default operation when --output-error is not specified, is to exit immediately on error writing to a pipe, and diagnose errors writing to non pipe outputs. Though the way it exits is via SIGPIPE, so starting tee with that signal ignored makes it not exit. And the default with -p is the warn-nopipe mode, which is described as "diagnose errors writing to any output not a pipe", as opposed to the other options that make it exit. Under the hood, it also ignores the SIGPIPE signal and then stops trying to write to the pipe. So, with the GNU version at least, you can use tee -p ... | ... to prevent it from exiting when the pipe reader exits. Alternatively, you could arrange for the right-hand side program to be something mimicking a black hole instead, e.g. cat > /dev/null (which still reads and writes everything it gets, but the kernel then discards the data written to /dev/null). | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/720235",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/4313/"
]
} |
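A compact way to try the behaviours described in the tee answer above (the first line's outcome can vary with timing, since it races tee's writes against the SIGPIPE from the exiting reader):

    printf 'x\n' | tee -a log.txt | :                 # reader exits at once; tee may be killed before the file is written
    printf 'x\n' | tee -p -a log.txt | :              # GNU tee: survives the broken pipe and still writes log.txt
    printf 'x\n' | tee -a log.txt | cat > /dev/null   # portable: the reader consumes everything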
720,541 | I cannot work out why sort is not working correctly for this, but it's sorting based on columns I'm telling it not to. I want to sort with priorities first by column 3, then column 4, then column 5, then column 6. What's going on? Here is my code: sort -n -s -t ',' -k3,6 Here is my input: a1,b1,2,15,50,ABBA a1,a1,2,26,55,ABBA a11,2a1,2,33,55,ABBA b1,a1,2,80,99,ABA c2,a1,3,20,40,CAN a1,b2,3,51,300,CAN a3,a3,4,1000,2000,ART d3,c3,4,1700,2050,ART d3,c2c,4,1600,2050,ART b1,a3,4,1800,2051,ART Here is my current output: a1,b1,2,15,50,ABBA a1,a1,2,26,55,ABBA a11,2a1,2,33,55,ABBA b1,a1,2,80,99,ABA c2,a1,3,20,40,CAN a1,b2,3,51,300,CAN a3,a3,4,1000,2000,ART d3,c3,4,1700,2050,ART d3,c2c,4,1600,2050,ARTb1,a3,4,1800,2051,ART but my desired and expected output should be: a1,b1,2,15,50,ABBA a1,a1,2,26,55,ABBA a11,2a1,2,33,55,ABBA b1,a1,2,80,99,ABA c2,a1,3,20,40,CAN a1,b2,3,51,300,CAN a3,a3,4,1000,2000,ART d3,c2c,4,1600,2050,ART d3,c3,4,1700,2050,ARTb1,a3,4,1800,2051,ART I am using Linux. | The issue is that your sorting key is a string containing commas. When comparing two of these keys, like 4,1700,2050,ART and 4,1600,2050,ART , they compare equal since (in your locale) only the very first part of the keys can be converted to a numeric value (4, and 4). To solve this, compare each field separately with the correct type for that field (numeric or non-numeric): sort -s -t, -k3,3n -k4,4n -k5,5n -k6,6 file Most implementations of sort provide a --debug option that is very helpful for detecting issues like these. On my FreeBSD system, this clearly shows that your original command has issues when comparing fields like the ones I mentioned: $ sort --debug -n -s -t ',' -k3,6 file[...]; k1=<4,1000,2000,ART >, k2=<4,1700,2050,ART >; s1=<a3,a3,4,1000,2000,ART >, s2=<d3,c3,4,1700,2050,ART >; cmp1=0; k1=<4,1700,2050,ART >, k2=<4,1600,2050,ART >; s1=<d3,c3,4,1700,2050,ART >, s2=<d3,c2c,4,1600,2050,ART >; cmp1=0; k1=<4,1600,2050,ART >, k2=<4,1800,2051,ART >; s1=<d3,c2c,4,1600,2050,ART >, s2=<b1,a3,4,1800,2051,ART >; cmp1=0[...] cmp1=0 shows that the keys, k1 and k2 , compares equal. As a comparison: $ sort --debug -s -t, -k3,3n -k4,4n -k5,5n -k6,6 file[...]; k1=<4>, k2=<4>; k1=<1000>, k2=<1700>; s1=<a3,a3,4,1000,2000,ART >, s2=<d3,c3,4,1700,2050,ART >; cmp1=-1; k1=<4>, k2=<4>; k1=<1700>, k2=<1600>; s1=<d3,c3,4,1700,2050,ART >, s2=<d3,c2c,4,1600,2050,ART >; cmp1=1; k1=<4>, k2=<4>; k1=<1000>, k2=<1600>; s1=<a3,a3,4,1000,2000,ART >, s2=<d3,c2c,4,1600,2050,ART >; cmp1=-1; k1=<4>, k2=<4>; k1=<1700>, k2=<1800>; s1=<d3,c3,4,1700,2050,ART >, s2=<b1,a3,4,1800,2051,ART >; cmp1=-1[...] GNU sort likely produces debug output in a totally different format. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/720541",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/170497/"
]
} |
720,634 | I am trying to split a csv file, sample.csv, with the following command. split -n 2 -a 1 -d sample.csv test_ I get two files of equal size, one with the header and the other without, and with a line cut in two at the split boundary. When I use the below command, the result is displayed on the terminal but no file is generated. split -n 1/2 -a 1 -d sample.csv test_ | Seems like both commands are working as they are designed. The first splits on bytes, which won't respect lines — useful when you are splitting, say, a large binary object, not so much when it's something like a CSV. The second looks like a misreading of the documentation (but is still technically valid). I think you meant to do this: split -n l/2 -a 1 -d sample.csv test_ Note that it is lowercase L slash two, not one slash two as you appear to have used. You won't see the header row on the second part, because split is not really content aware. One approach is to save the header, save the remaining records to a temporary body file, split that body, and then glue the header onto each piece: head -n1 sample.csv > header ; tail -n+2 sample.csv > body ; split -n l/2 -a 1 -d body temp_ ; for f in temp_? ; do cat header "$f" > "${f/temp/test}" ; rm "$f" ; done ; rm header body | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/720634",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/544815/"
]
} |
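If GNU split is available, its --filter option can prepend the saved header to every chunk in one pass; a hedged variant of the approach above (the $FILE variable is set by split to each output name):

    head -n 1 sample.csv > header
    tail -n +2 sample.csv > body
    split -n l/2 -a 1 -d --filter='{ cat header; cat; } > "$FILE"' body test_
    rm header body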
720,829 | Consider a simple debugging style where $debug would be set either to true or false according to some command-line flag: $debug && echo "Something strange just happened" >&2 I know there are better ways to protect against value setting, such as ${debug:-false} or even a full-featured logMessage() -type function, but let's park that for now. Assume the bad situation where $debug remains unset or empty. I expected that $debug && echo … for such an empty $debug would be parsed and evaluated into && echo … , which would trigger a consequent syntax error. But instead it seems to be evaluated into something like : && echo … , which ends up executing the echo . I've tested with bash , ksh , and dash and the scenario remains consistent. My understanding was that these two lines parsed out as equivalent, but clearly they do not: unset debug; $debug && echo stuffunset debug; && echo stuff Why does the code act as if an unset variable is logically true instead of failing with an error such as syntax error near unexpected token `&&' ? | From the Shell Command Language specification, section 2.9.1 Simple Commands , after all the redirections, glob/parameter expansions, etc. are performed: If there is a command name, execution shall continue as described in Command Search and Execution . If there is no command name, but the command contained a command substitution, the command shall complete with the exit status of the last command substitution performed. Otherwise, the command shall complete with a zero exit status. You're in the "otherwise" case. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/720829",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/100397/"
]
} |
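A short illustration of the quoted rule; the exact messages and statuses may differ slightly between shells:

    unset debug
    $debug && echo stuff     # prints stuff: the expansion leaves no command name, so the status is 0
    debug=false
    $debug && echo stuff     # runs the command false (status 1), so echo is skipped
    debug='no such command'
    $debug && echo stuff     # command search fails (status 127), so echo is skipped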
720,852 | I have many .csv files with customer information. In all these files I want to add an additional column FIRSTNAME right next to the column FULLNAME . The Firstname can be generated with grabbing the first word from FULLNAME . There are no two-word firstnames like Jean Paul. In the last column a comma is used in the fieldtext Input COMPANY,FULLNAME,EMAIL,FUNCTION,CITY,INDUSTRY,COMMENTCompany name,Firstname Lastname,[email protected],Marketing Manager,New York,Health Care,"home, work"Company name,Firstname infix Lastname,[email protected],Marketing Manager,New York,Health Care,"home, workhome, work"Company name,Firstname infix infix2 Lastname,[email protected],Marketing Manager,New York,Health Care,"home, work" Expected output COMPANY,FULLNAME,FIRSTNAME,EMAIL,FUNCTION,CITY,INDUSTRY,COMMENTCompany name,Firstname Lastname,Firstname,[email protected],Marketing Manager,New York,Health Care,"home, work"Company name,Firstname infix Lastname,Firstname,[email protected],Marketing Manager,New York,Health Care,"home, work"Company name,Firstname infix infix2 Lastname,Firstname,[email protected],Marketing Manager,New York,Health Care,"home, work" How to do this with awk, sed or something else? | Using the CSV-aware utility Miller ( mlr ): mlr --csv \ put '$FIRSTNAME = sub($FULLNAME," .*","")' then \ reorder -f COMPANY,FULLNAME,FIRSTNAME file ... which, given the data in the question, results in COMPANY,FULLNAME,FIRSTNAME,EMAIL,FUNCTION,CITY,INDUSTRY,COMMENTCompany name,Firstname Lastname,Firstname,[email protected],Marketing Manager,New York,Health Care,"home, work"Company name,Firstname infix Lastname,Firstname,[email protected],Marketing Manager,New York,Health Care,"home, workhome, work"Company name,Firstname infix infix2 Lastname,Firstname,[email protected],Marketing Manager,New York,Health Care,"home, work" This use of Miller first creates a new field, FIRSTNAME , through a regular expression-based substitution that removes everything after the first space character in the FULLNAME field. Since new fields are presented last, the fields are then reordered to ensure that the first few fields are COMPANY , FULLNAME , and FIRSTNAME , in this order. The remaining fields are left in their original order. 
Instead of the put expression using sub() , you may use put with its splitnv() function to split the FIRSTNAME field's value on spaces and pick out the 1st generated string: mlr --csv \ put '$FIRSTNAME = splitnv($FULLNAME," ")[1]' then \ reorder -f COMPANY,FULLNAME,FIRSTNAME file For prettier output: $ mlr --icsv --opprint --barred put '$FIRSTNAME = splitnv($FULLNAME," ")[1]' then reorder -f COMPANY,FULLNAME,FIRSTNAME file+--------------+---------------------------------+-----------+--------------------------------+-------------------+----------+-------------+----------------------+| COMPANY | FULLNAME | FIRSTNAME | EMAIL | FUNCTION | CITY | INDUSTRY | COMMENT |+--------------+---------------------------------+-----------+--------------------------------+-------------------+----------+-------------+----------------------+| Company name | Firstname Lastname | Firstname | [email protected] | Marketing Manager | New York | Health Care | home, work || Company name | Firstname infix Lastname | Firstname | [email protected] | Marketing Manager | New York | Health Care | home, workhome, work || Company name | Firstname infix infix2 Lastname | Firstname | [email protected] | Marketing Manager | New York | Health Care | home, work |+--------------+---------------------------------+-----------+--------------------------------+-------------------+----------+-------------+----------------------+ | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/720852",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/533874/"
]
} |
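The question also allows awk; a hedged GNU awk sketch using FPAT to keep the quoted, comma-containing COMMENT field intact (it assumes gawk 4+ and no newlines inside quoted fields, which Miller handles but this does not; file.csv is the assumed input name):

    gawk -v FPAT='([^,]*)|("[^"]*")' -v OFS=, '
      NR == 1 { $2 = $2 OFS "FIRSTNAME"; print; next }       # extend the header row
      { split($2, name, " "); $2 = $2 OFS name[1]; print }   # first word of FULLNAME becomes FIRSTNAME
    ' file.csv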
720,893 | Consider the following common pattern to handle a command exit status: if COMMAND; then echo successelse echo failurefi Using the same for variable assignment (given set -o nounset ) doesn't work: $ if foo="$no_such_variable"; then echo success; else echo failure; fibash: no_such_variable: unbound variable A variable substitution check also doesn't work: $ if foo="${no_such_variable:?}"; then echo success; else echo failure; fibash: no_such_variable: parameter null or not set Is there some way to catch the return code of an assignment in an if statement? I'm trying to avoid the standard if [[ $# -eq N ]] workaround, because it couples that statement to each parameter assignment in the rest of the script rather than catching any issues at each assignment itself. This is similar to the Python if (foo := bar()): pattern. | Using the CSV-aware utility Miller ( mlr ): mlr --csv \ put '$FIRSTNAME = sub($FULLNAME," .*","")' then \ reorder -f COMPANY,FULLNAME,FIRSTNAME file ... which, given the data in the question, results in COMPANY,FULLNAME,FIRSTNAME,EMAIL,FUNCTION,CITY,INDUSTRY,COMMENTCompany name,Firstname Lastname,Firstname,[email protected],Marketing Manager,New York,Health Care,"home, work"Company name,Firstname infix Lastname,Firstname,[email protected],Marketing Manager,New York,Health Care,"home, workhome, work"Company name,Firstname infix infix2 Lastname,Firstname,[email protected],Marketing Manager,New York,Health Care,"home, work" This use of Miller first creates a new field, FIRSTNAME , through a regular expression-based substitution that removes everything after the first space character in the FULLNAME field. Since new fields are presented last, the fields are then reordered to ensure that the first few fields are COMPANY , FULLNAME , and FIRSTNAME , in this order. The remaining fields are left in their original order. Instead of the put expression using sub() , you may use put with its splitnv() function to split the FIRSTNAME field's value on spaces and pick out the 1st generated string: mlr --csv \ put '$FIRSTNAME = splitnv($FULLNAME," ")[1]' then \ reorder -f COMPANY,FULLNAME,FIRSTNAME file For prettier output: $ mlr --icsv --opprint --barred put '$FIRSTNAME = splitnv($FULLNAME," ")[1]' then reorder -f COMPANY,FULLNAME,FIRSTNAME file+--------------+---------------------------------+-----------+--------------------------------+-------------------+----------+-------------+----------------------+| COMPANY | FULLNAME | FIRSTNAME | EMAIL | FUNCTION | CITY | INDUSTRY | COMMENT |+--------------+---------------------------------+-----------+--------------------------------+-------------------+----------+-------------+----------------------+| Company name | Firstname Lastname | Firstname | [email protected] | Marketing Manager | New York | Health Care | home, work || Company name | Firstname infix Lastname | Firstname | [email protected] | Marketing Manager | New York | Health Care | home, workhome, work || Company name | Firstname infix infix2 Lastname | Firstname | [email protected] | Marketing Manager | New York | Health Care | home, work |+--------------+---------------------------------+-----------+--------------------------------+-------------------+----------+-------------+----------------------+ | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/720893",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/3645/"
]
} |
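For the assignment question above, one hedged workaround: run the guarded expansion inside a command substitution, so the unbound-variable error only aborts that subshell and the assignment's exit status becomes testable (the shell still prints the error on stderr unless it is redirected):

    set -u
    if foo=$(printf '%s' "${no_such_variable?}"); then
        echo "success: $foo"
    else
        echo failure    # this branch is reached; only the $( ) subshell exited
    fi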
720,926 | I have two files: F1.txt : a:erwer|eeb:eeeee|eeec:ewrew|erwe and F2.txt : a.T1a.T2b.T3C.T7c.T4 I need to count the number of occurrences of the a, b, c keywords (taken from F1.txt) in F2.txt. Expected output (each F1.txt line with its total appended): a:erwer|ee:total:2b:eeeee|eee:total:1c:ewrew|erwe:total:2 Update: I also want the counts in another file, like this: a:2b:1c:2 | If your file isn't too large, you can use awk : awk 'BEGIN{FS=".";OFS=":"}NR==FNR{a[tolower($0)]=0;next}{ if(tolower($1) in a){ a[tolower($1)]++ }} END{ for(key in a){ print key, a[key] }}' F1.txt F2.txt And in case you want something case-sensitive, remove the tolower function. For your edited question: awk 'BEGIN{FS="[:.]";OFS=":"}NR==FNR{l[tolower($1)]=$0;cpt[tolower($1)]=0;next}{ if(tolower($1) in cpt){ cpt[tolower($1)]++ }}END{ for(key in cpt){ print l[key],"total",cpt[key] }}' F1.txt F2.txt | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/720926",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/402943/"
]
} |
721,179 | In bash, I press ctrl + v to start verbatim insert. In the verbatim mode, I press the Esc key and bash shows ^[ . I redirect it to file esc . Also in the verbatim mode, I press ctrl key with [ key, and bash shows ^[ . I redirect it to file ctrl . Next, I compare the two files, and they are the same! $ echo '^[' > esc$ echo '^[' > ctrl$ diff esc ctrl$ Why do Ctrl + [ and Esc produce the same content? Is ^[ here the C0 and C1 control codes ? If so, the wiki article says ^[ is Escape, so why is ctrl + [ also Escape? The root problem is that I want to check and create a key binding. (zsh)$ bindkey -L ...bindkey "^['" quote-line... So do I need to type ESC+' or ctrl+[+' ? | This looks to follow the same logic as Ctrl-A, or ^A being character code 1, and ^@ being used to represent the NUL byte. Here, the ^ is a common way of representing Ctrl with another key. Namely, entering Ctrl- foo gives the character code of foo with bit 6 cleared, reducing the character code by 64. So, A is character code 65, and ^A is character code 1; @ is 64, and ^@ is 0, NUL; and also [ is 91, and ^[ is 27, ESC. It's just that for ESC you also have a separate key, but you do have the enter and tab keys too, which also produce control characters, so it's not that out of the ordinary. Of course, how Ctrl- something works on modern systems probably depends on other things too, like how your keymaps and key bindings are set up. Also don't ask me how that works for character codes < 64, e.g. ^1 . With the terminal I tried that on, Ctrl-space gave the NUL byte. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/721179",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/77141/"
]
} |
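To see the answer's arithmetic, and the practical consequence for the bindkey line in the question, a hedged check (terminals can differ in how they deliver modifier combinations, but both key sequences send the same bytes):

    printf '%d\n' "$(( 91 & ~64 ))"    # 91 is the code of '['; clearing bit 6 gives 27, the ESC code
    printf "\033'" | od -An -tx1       # 1b 27: the bytes behind bindkey "^['", typed as Esc ' or as Ctrl-[ '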
721,227 | For example, I have a text file named input.txt with this sentence in it: This is my base64 string: ${BASE64} I have this variable: myvar="SGVsbG8gV29ybGQuIERvIHlvdSBsaWtlIG15IGJhc2U2NCBzdHJpbmc/IFRoYXQgaXMgdmVyeSBuaWNlIQ==" Expected output: This is my base64 string: SGVsbG8gV29ybGQuIERvIHlvdSBsaWtlIG15IGJhc2U2NCBzdHJpbmc/IFRoYXQgaXMgdmVyeSBuaWNlIQ== I thought this command should do the job, but it gives an error: $ sed -i -e "s/\${BASE64}/${myvar}/g" text.txt which fails with: sed: -e expression #1, char 70: unknown option to `s' I tried many options and searched on the internet for hours but I cannot figure it out. I think it has something to do with the \ First, I want to know what I am doing wrong. Secondly, I want to know how to do it right. I would love a correct awk example too. | The main problem is close to what you suspected: the base64 string contains / characters, which collide with the / delimiter of the s command. When you use / as the delimiter you have to be careful that the strings you substitute do not themselves contain /. So you should use another delimiter in your sed command: sed -i -e "s@\${BASE64}@${myvar}@g" text.txt Using awk (gawk, for -i inplace): myvar="SGVsbG8gV29ybGQuIERvIHlvdSBsaWtlIG15IGJhc2U2NCBzdHJpbmc/IFRoYXQgaXMgdmVyeSBuaWNlIQ==" ; awk -i inplace -v var='\\$\\{BASE64\\}' -v base="$myvar" '{sub(var,base);}1' text.txt Another way to replace variables is by using envsubst . If your file contains ${BASE64} you can create a variable with the same name and replace it inside the file. But you will have to export that variable (just once) before you run envsubst : BASE64="SGVsbG8gV29ybGQuIERvIHlvdSBsaWtlIG15IGJhc2U2NCBzdHJpbmc/IFRoYXQgaXMgdmVyeSBuaWNlIQ==" ; export BASE64 ; envsubst < text.txt > newtext.txt ; mv newtext.txt text.txt (piping envsubst < text.txt | tee text.txt back into the same file is not recommended). | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/721227",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/533874/"
]
} |
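One more hedged note on the envsubst route above: GNU gettext's envsubst accepts a SHELL-FORMAT argument that limits substitution to the listed variables, so other $... strings in the file are left alone:

    export BASE64
    envsubst '${BASE64}' < text.txt > newtext.txt && mv newtext.txt text.txt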