source_id | question | response | metadata
---|---|---|---|
627,716 | I'm trying to configure a Client Authentication Certificate with OpenSSL with this command: openssl ca -config intermediate/openssl.cnf \ -extensions usr_cert -days 7200 -notext -md sha256 \ -in intermediate/csr/ADO-CSR0.csr \ -out intermediate/certs/29-12-20/ADO-CSR0.cert.pem But the certificate is empty: ls -l intermediate/certs/29-12-20-rw-r--r-- 1 root root 0 Jan 5 17:29 ADO-CSR0.cert.pem I have this configuration: [ ca ]# `man ca`default_ca = CA_default[ CA_default ]# Directory and file locations.dir = /root/ca/intermediatecerts = $dir/certscrl_dir = $dir/crlnew_certs_dir = $dir/newcertsdatabase = $dir/index.txtserial = $dir/serialRANDFILE = $dir/private/.rand# The root key and root certificate.private_key = $dir/private/intermediate.key.pemcertificate = $dir/certs/intermediate.cert.pem# For certificate revocation lists.crlnumber = $dir/crlnumbercrl = $dir/crl/intermediate.crl.pemcrl_extensions = crl_extdefault_crl_days = 30# SHA-1 is deprecated, so use SHA-2 instead.default_md = sha256name_opt = ca_defaultcert_opt = ca_defaultdefault_days = 375preserve = nopolicy = policy_loose[ policy_strict ]# The root CA should only sign intermediate certificates that match.# See the POLICY FORMAT section of `man ca`.countryName = matchstateOrProvinceName = matchorganizationName = matchorganizationalUnitName = optionalcommonName = suppliedemailAddress = optional[ policy_loose ]# Allow the intermediate CA to sign a more diverse range of certificates.# See the POLICY FORMAT section of the `ca` man page.countryName = optionalstateOrProvinceName = optionallocalityName = optionalorganizationName = optionalorganizationalUnitName = optionalcommonName = suppliedemailAddress = optional[ req ]# Options for the `req` tool (`man req`).default_bits = 2048distinguished_name = req_distinguished_namestring_mask = utf8only# SHA-1 is deprecated, so use SHA-2 instead.default_md = sha256# Extension to add when the -x509 option is used.x509_extensions = v3_ca[ req_distinguished_name ]# See <https://en.wikipedia.org/wiki/Certificate_signing_request>.countryName = Country Name (2 letter code)stateOrProvinceName = State or Province NamelocalityName = Locality Name0.organizationName = Organization NameorganizationalUnitName = Organizational Unit NamecommonName = Common NameemailAddress = Email Address# Optionally, specify some defaults.countryName_default = DOstateOrProvinceName_default = Santo DomingolocalityName_default =0.organizationName_default = Altice Dominicana#organizationalUnitName_default =#emailAddress_default =[ v3_ca ]# Extensions for a typical CA (`man x509v3_config`).subjectKeyIdentifier = hashauthorityKeyIdentifier = keyid:always,issuerbasicConstraints = critical, CA:truekeyUsage = critical, digitalSignature, cRLSign, keyCertSign[ v3_intermediate_ca ]# Extensions for a typical intermediate CA (`man x509v3_config`).subjectKeyIdentifier = hashauthorityKeyIdentifier = keyid:always,issuerbasicConstraints = critical, CA:true, pathlen:0keyUsage = critical, digitalSignature, cRLSign, keyCertSign[ usr_cert ]# Extensions for client certificates (`man x509v3_config`).basicConstraints = CA:FALSEnsCertType = client, emailnsComment = "OpenSSL Generated Client Certificate"subjectKeyIdentifier = hashauthorityKeyIdentifier = keyid,issuerkeyUsage = critical, nonRepudiation, digitalSignature, keyEnciphermentextendedKeyUsage = clientAuth, emailProtection[ server_cert ]# Extensions for server certificates (`man x509v3_config`).basicConstraints = CA:FALSEnsCertType = servernsComment = "OpenSSL Generated Server 
Certificate"subjectKeyIdentifier = hashauthorityKeyIdentifier = keyid,issuer:alwayskeyUsage = critical, digitalSignature, keyEnciphermentextendedKeyUsage = serverAuth[ crl_ext ]# Extension for CRLs (`man x509v3_config`).authorityKeyIdentifier=keyid:always[ ocsp ]# Extension for OCSP signing certificates (`man ocsp`).basicConstraints = CA:FALSEsubjectKeyIdentifier = hashauthorityKeyIdentifier = keyid,issuerkeyUsage = critical, digitalSignatureextendedKeyUsage = critical, OCSPSigning Can you help me?Regards,Ronald | As extra blank lines don't matter gawk 'BEGIN {RS=""} !/^[$(]/ {gsub("\n"," ")} {print;print "\n"}' Explanation. RS="" sets gawk into paragraph mode. !/^[$(]/ matches paragraphs that don't start with ( or $ . gsub("\n"," ") changes newlines into spaces. print;print "\n" outputs the data and a newline. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/627716",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/449634/"
]
} |
627,773 | When I tried to install deltarpm sudo yum install deltarpm The following error returned. Delta RPMs disabled because /usr/bin/applydeltarpm not installed. How to fix them ? I wonder applydeltarpm needs deltarpm so it was complicated. If someone has opinion, please let me know Thanks | That’s not an error, it’s an informational message telling you that delta RPMs won’t be used (because the required tool isn’t installed). It’s a chicken-and-egg situation: since you don’t have applydeltarpm , you can’t benefit from delta RPMs. In any case, since you’re installing a new package ( deltarpm isn’t already installed on your system), yum wouldn’t use a delta RPM. Once deltarpm is installed, future package upgrades will use delta RPMs if possible. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/627773",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/449704/"
]
} |
627,779 | I'm writing quizzes for my students in a markdown language. One of the quizzesmight look like this: % QUESTIONWho played drums for The Beatles?(X) Ringo( ) John( ) Paul( ) George% QUESTIONWhat is the first line of MOBY DICK?(X) Call me Ishmael.( ) foo( ) bar( ) spam( ) eggs I'd like to randomize all of these multiple choice options. So, I think I need ashell script that: Finds all blocks of consecutive lines that start with (X) or ( ). Shuffles each of these blocks of lines. Is this possible? I know that shuf and sort -R will randomize the lines ofany text but I'm not sure of how to go about isolating these blocks of options. | Using AWK: BEGIN { srand() answers[1] = "" delete answers[1]}function outputanswers(answers, len, i) { len = length(answers) while (length(answers) > 0) { i = int(rand() * len + 1) if (answers[i]) { print answers[i] } delete answers[i] }}/^$/ { outputanswers(answers) print}/^[^(]//^\(/ { answers[length(answers) + 1] = $0}END { outputanswers(answers) } This works by accumulating answers in the answers array, and outputting its contents in a random order when necessary. Lines are considered to be answers if they start with an opening parenthesis (I’m hoping that’s a valid simplification of your specification). | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/627779",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/92703/"
]
} |
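A usage sketch for the answer above, assuming the AWK program (restored to its original line breaks) is saved to a file; the file names are illustrative. Each run gives a different ordering because the program seeds the random generator with srand() in its BEGIN block.

```sh
# shuffle.awk contains the AWK program from the answer above
awk -f shuffle.awk quiz.md > quiz-shuffled.md
```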
627,800 | I've reviewed the "Similar questions", and none seem to solve my problem: I have a large CSV input file; each line in the file is an x,y data point. Here are a few lines for illustration, but please note that in general the data are not monotonic : 1.904E-10,2.1501E+00 3.904E-10,2.1827E+00 5.904E-10,2.1106E+00 7.904E-10,2.2311E+00 9.904E-10,2.2569E+00 1.1904E-09,2.3006E+00 I need to create an output file that is smaller than the input file. The output file will contain no more than one line for every N lines in the input file. Each single line in the output file will be a x,y data point which is the average of the x,y values for N lines of the input file. For example, if the total number of lines in the input file is 3,000, and N=3 , the output file will contain no more than 1,000 lines. Using the data above to complete this example, the first 3 lines of data above would be replaced with a single line as follows: x = (1.904E-10 + 3.904E-10 + 5.904E-10) / 3 = 3.904E-10 y = (2.1501E+00 + 2.1827E+00 + 2.1106E+00) / 3 = 2.1478E+00, or: 3.904E-10,2.1478E+00 for one line of the output file. I've fiddled with this for a while, but haven't gotten it right. This is what I've been working with, but I can't see how to iterate the NR value to work through the entire file: awk -F ',' 'NR == 1, NR == 3 {sumx += $1; avgx = sumx / 3; sumy += $2; avgy = sumy / 3} END {print avgx, avgy}' CB07-Small.csv To complicate this a bit more, I need to "thin" my output file still further: If the value of avgy (as calculated above) is close to the last value of avgy in the output file, I will not add this as a new data point to the output file. Instead I will calculate the next avgx & avgy values from the next N lines of the input file. "Close" should be defined as a percentage of the last value of argy . For example: if the current calculated value of avgy differs by less than 10% from the last value of avgy recorded in the output file, then do not write a new value to the output file. see edit history | Here’s a generic variant: BEGIN { OFS = FS = "," }{ for (i = 1; i <= NF; i++) sum[i] += $i count++}count % 3 == 0 { for (i = 1; i <= NF; i++) $i = sum[i] / count delete sum count = 0 if ($NF >= 1.1 * last || $NF <= 0.9 * last) { print last = $NF }}END { if (count > 0) { for (i = 1; i <= NF; i++) $i = sum[i] / count if ($NF >= 1.1 * last || $NF <= 0.9 * last) print }} I’m assuming that left-overs should be handled in a similar fashion to blocks of N lines. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/627800",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/286615/"
]
} |
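A usage sketch for the answer above, assuming the AWK program (with its original line breaks) is saved to a file; file names are illustrative. N and the 10% tolerance correspond to the literal 3 and the 1.1/0.9 factors inside the script, so both can be adjusted there.

```sh
# thin.awk contains the AWK program from the answer above
awk -f thin.awk CB07-Small.csv > CB07-Thinned.csv
```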
627,804 | I'm trying to write a GUI tool to restrict a pointer to a specific monitor (e.g. a touchscreen pointer should map onto its own screen and not across the union of all monitors). The tool is in Python (using pygtk). For the UI, I need to list all the pointers, so you can select the one you mean, and then call xinput map-to-output pointer_id monitor_id . If this command is given the id of a non-pointer device, then it raises an error, so I'd like to avoid giving those IDs as options. The output of xinput list looks like this: ⎡ Virtual core pointer id=2 [master pointer (3)]⎜ ↳ Virtual core XTEST pointer id=4 [slave pointer (2)]⎜ ↳ ELAN Touchscreen id=18 [slave pointer (2)]⎜ ↳ SynPS/2 Synaptics TouchPad id=21 [slave pointer (2)]⎜ ↳ TPPS/2 IBM TrackPoint id=22 [slave pointer (2)]⎜ ↳ Cherry USB Optical Mouse Consumer Control id=10 [slave pointer (2)]⎜ ↳ Cherry USB Optical Mouse id=12 [slave pointer (2)]⎜ ↳ HID 04b4:3003 Consumer Control id=14 [slave pointer (2)]⎜ ↳ HID 04b4:3003 Mouse id=24 [slave pointer (2)]⎜ ↳ WALTOP Graphics Tablet Pen (0) id=26 [slave pointer (2)]⎣ Virtual core keyboard id=3 [master keyboard (2)] ↳ Virtual core XTEST keyboard id=5 [slave keyboard (3)] ↳ Power Button id=6 [slave keyboard (3)] ↳ Video Bus id=7 [slave keyboard (3)] ↳ Sleep Button id=8 [slave keyboard (3)] ↳ Integrated Camera: Integrated C id=19 [slave keyboard (3)] ↳ AT Translated Set 2 keyboard id=20 [slave keyboard (3)] ↳ ThinkPad Extra Buttons id=23 [slave keyboard (3)] ↳ Cherry USB Optical Mouse System Control id=9 [slave keyboard (3)] ↳ Cherry USB Optical Mouse Consumer Control id=11 [slave keyboard (3)] ↳ HID 04b4:3003 System Control id=13 [slave keyboard (3)] ↳ HID 04b4:3003 Consumer Control id=15 [slave keyboard (3)] ↳ HID 04b4:3003 Keyboard id=16 [slave keyboard (3)] ↳ HID 04b4:3003 id=17 [slave keyboard (3)] ↳ WALTOP Graphics Tablet id=25 [slave keyboard (3)] To build the menu, I need to get the name and id of all pointers (I guess slave pointers, I don't know what would happen if I selected Virtual core pointer). On the one hand, xinput list --id-only and xinput list --name-only give the exact information I need, except that I need to filter out the ids and names which are not pointers. On the other hand, I could do xinput list | grep pointer to get the relevant lines, but the resulting thing does not look very nice to parse (there are extraneous brackets and the weird ↳ arrow character). I tried looking for options in man xinput to either do some filtering or to simplify the output, but couldn't find anything. My project is based off ptxconf , and their solution is as follows. I'm hoping to find something more elegant. 
def getPenTouchIds(self): """Returns a list of input id/name pairs for all available pen/tablet xinput devices""" retval = subprocess.Popen("xinput list", shell=True, stdout=subprocess.PIPE).stdout.read() ids = {} for line in retval.split("]"): if "pointer" in line.lower() and "master" not in line.lower(): id = int(line.split("id=")[1].split("[")[0].strip()) name = line.split("id=")[0].split("\xb3",1)[1].strip() if self.getPointerDeviceMode(id) == "absolute": ids[name+"(%d)"%id]={"id":id} return ids | Here’s a generic variant: BEGIN { OFS = FS = "," }{ for (i = 1; i <= NF; i++) sum[i] += $i count++}count % 3 == 0 { for (i = 1; i <= NF; i++) $i = sum[i] / count delete sum count = 0 if ($NF >= 1.1 * last || $NF <= 0.9 * last) { print last = $NF }}END { if (count > 0) { for (i = 1; i <= NF; i++) $i = sum[i] / count if ($NF >= 1.1 * last || $NF <= 0.9 * last) print }} I’m assuming that left-overs should be handled in a similar fashion to blocks of N lines. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/627804",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/284679/"
]
} |
627,839 | I've checked some answers online, but they do not work when there's a -name parameter in the command (maybe I'm doing something wrong, not sure). What I'd like to do is to list only filenames (not the directory/path), and to complicate more, I have same filename with different extension, so I need to filter the result using -name and -o , the trouble is the suggestions that I saw use either -printf or basename , and those doesn't work well when there's -name *.txt -o name *.pdf . Is there any other way to solve this problem? Example of my files: /dir1/dir2/dir3/dir4/ /file1.txt /file1.pdf /file1.ods ...etc. I'm only interested in listing one or more type of file per listing (by using -name to filter my results. Is it possible to accomplish it using only 1 find command or I have to do it in 2 steps (saving the result in a temporary file, then filter/strip the directory from the temp file)? The result I'm trying to accomplish is a text file with filename per line: file1.txtfile1.pdffile1.odsfile2.txtfile2.pdfetc. The command I was using is find /dir1/dir2/dir3/dir4/ -type f -name '*.txt' -o -name '*.pdf' -o -name '*.ods' -printf "%f\n" I am using GNU find. Update It turned out I have to put \(...\) , as reminded by αғsнιη | When using multiple expressions in logical OR mode with -o you need enclose them all within \( ... \) , else the last expression result is always respected; and -printf '%f\n' is used to print the files' name only with any leading directories removed. find . -type f \( -name '*.pdf' -o -name '*.txt' \) -printf '%f\n' see man find under OPERATORS section about ( expr ) : OPERATORS ... ( expr ) Force precedence. Since parentheses are special to the shell, you will normally need to quote them. so you can either use \( ... \) or '(' ... ')' (whitespaces after and before parenthesis are important) to escape them. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/627839",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/449765/"
]
} |
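Combining the accepted fix with the original goal of writing one bare file name per line to a text file, a minimal sketch using GNU find (the output file name is illustrative):

```sh
find /dir1/dir2/dir3/dir4/ -type f \( -name '*.txt' -o -name '*.pdf' -o -name '*.ods' \) \
    -printf '%f\n' > filelist.txt
```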
627,858 | I know what Debian package management says: Do not erase or alter files in /var/lib/dpkg/ . However, there are 11,501 files in that folder dating from 2006: $ ls -l /var/lib/dpkg/info | wc -l 11502sudo find /var/lib/dpkg/info -type f -printf '%T+ %p\n' | sort | head -n 20 | awk '{print $NF}' | xargs ls -l -rwxr-xr-x 1 root root 184 Jul 19 2008 /var/lib/dpkg/info/extra-xdg-menus.postrm -rw-r--r-- 1 root root 27 Aug 10 2006 /var/lib/dpkg/info/freepats.conffiles -rw-r--r-- 1 root root 12993 Aug 10 2006 /var/lib/dpkg/info/freepats.md5sums How can I determine the status of the files and the correct management method? Seeing files there with a date of 2006 is surprising to me because this little machine (Acer Aspire One Netbook) was only released 2008 and I didn't get it for a few years after that (it previously ran Windows, before Linux gave it a new life). | The modification time on files installed by dpkg, including the metadata files in /var/lib/dpkg/info , is the modification time they have inside the package archive file (the .tar.gz or equivalent contained inside the .deb file). It's normally the time they were last changed by the maintainer for files that are manually written. For files that are generated automatically by the package build tools, it's normally the time the package was built, or, with reproducible builds , the date of the latest entry in the changelog file . It has nothing to do with the time the package was installed. It's unsurprising that some package metadata files haven't changed in a very long time. Even if the content of the package changes because the program has updates, new features and so on, some things don't change often. For example a .postrm script contains things to do as the very last step of uninstallation, and most packages don't need one at all; it's unsurprising that what needs to be done at that point rarely changes. In the case of freepats.md5sum , the .md5sum file does change any time the package changes; this is a data file (some audio synthesis data) which hasn't changed in years (it's just data, no code, so there's a pretty low risk of bugs, and nobody has been interested in doing any enhancement to it). Each package averages about half a dozen files under /var/lib/dpkg/info , and Debian tends to split software into fine-grained packages (each library into its own package, data and documentation are often packaged separately from code, etc.). So having ~10k files there is typical. The state of your system is normal and there is no action to take. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/627858",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/84879/"
]
} |
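A small sketch illustrating the point of the answer: the timestamps under /var/lib/dpkg/info come from inside the package archive, whereas the time dpkg actually installed or upgraded something is recorded in its log (paths are the Debian defaults; on an old system the log may have been rotated):

```sh
ls -l /var/lib/dpkg/info/freepats.*               # mtimes as shipped inside the .deb
grep -E ' (install|upgrade) ' /var/log/dpkg.log   # when dpkg really installed/upgraded packages
```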
627,865 | I need to convert a date into a day number and I can use date +%s --date "10 March 1990" 637027200 To get the second number. Then I divide that number to the number of seconds in a day - 86400 date +%s --date "10 March 1990" / 86400 7373 But it doesn't work for dates before 1 January 1970 and I need to use for dates between 1950 and 2020 Is there any fix for that? Maybe there is some date binary that allows to redefine epoch time to a different date than 1/1/1970 ? | The modification time on files installed by dpkg, including the metadata files in /var/lib/dpkg/info , is the modification time they have inside the package archive file (the .tar.gz or equivalent contained inside the .deb file). It's normally the time they were last changed by the maintainer for files that are manually written. For files that are generated automatically by the package build tools, it's normally the time the package was built, or, with reproducible builds , the date of the latest entry in the changelog file . It has nothing to do with the time the package was installed. It's unsurprising that some package metadata files haven't changed in a very long time. Even if the content of the package changes because the program has updates, new features and so on, some things don't change often. For example a .postrm script contains things to do as the very last step of uninstallation, and most packages don't need one at all; it's unsurprising that what needs to be done at that point rarely changes. In the case of freepats.md5sum , the .md5sum file does change any time the package changes; this is a data file (some audio synthesis data) which hasn't changed in years (it's just data, no code, so there's a pretty low risk of bugs, and nobody has been interested in doing any enhancement to it). Each package averages about half a dozen files under /var/lib/dpkg/info , and Debian tends to split software into fine-grained packages (each library into its own package, data and documentation are often packaged separately from code, etc.). So having ~10k files there is typical. The state of your system is normal and there is no action to take. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/627865",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/448198/"
]
} |
627,872 | I am writing a script to automate my Arch Linux installation process and I've bumped into an environmental problem. I have a file with the installation sequence and a file that holds all the functions, sourced in the beginning of the script. Running arch-chroot /mnt /bin/bash -c someFunction does not recognize the function nor the variables set before this line of code. If I export -f someFunction it will recognize the function, but someFunction in itself is just a decomposition function, which only calls other functions. What is the most elegant way to source all the functions inside the chroot environment? I also need a way of exporting all variables, set by the user at the beginning of the script, into the chrooted environment.(I'm guessing solving the above will also solve this problem) | The modification time on files installed by dpkg, including the metadata files in /var/lib/dpkg/info , is the modification time they have inside the package archive file (the .tar.gz or equivalent contained inside the .deb file). It's normally the time they were last changed by the maintainer for files that are manually written. For files that are generated automatically by the package build tools, it's normally the time the package was built, or, with reproducible builds , the date of the latest entry in the changelog file . It has nothing to do with the time the package was installed. It's unsurprising that some package metadata files haven't changed in a very long time. Even if the content of the package changes because the program has updates, new features and so on, some things don't change often. For example a .postrm script contains things to do as the very last step of uninstallation, and most packages don't need one at all; it's unsurprising that what needs to be done at that point rarely changes. In the case of freepats.md5sum , the .md5sum file does change any time the package changes; this is a data file (some audio synthesis data) which hasn't changed in years (it's just data, no code, so there's a pretty low risk of bugs, and nobody has been interested in doing any enhancement to it). Each package averages about half a dozen files under /var/lib/dpkg/info , and Debian tends to split software into fine-grained packages (each library into its own package, data and documentation are often packaged separately from code, etc.). So having ~10k files there is typical. The state of your system is normal and there is no action to take. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/627872",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/434518/"
]
} |
627,884 | I have a 64GB SD card with Linux (Debian) installed on it.I want to move it to a smaller SD Card (16 GB). I used resize2fs and cfdisk to resize the filesystem and partitions, so that now, they look like this: Disk /dev/rdisk4: 122519552 sectors, 58.4 GiBSector size (logical): 512 bytesDisk identifier (GUID): C133B5DA-A507-4080-8DBC-9FAD0E960A17Partition table holds up to 128 entriesMain partition table begins at sector 2 and ends at sector 33First usable sector is 34, last usable sector is 122519518Partitions will be aligned on 2048-sector boundariesTotal free space is 93159357 sectors (44.4 GiB)Number Start (sector) End (sector) Size Code Name 1 2048 1050623 512.0 MiB EF00 2 1050624 7342079 3.0 GiB 8200 3 7342080 15730687 4.0 GiB 8300 4 15730688 29362175 6.5 GiB 8300 Now, I want to take an image of it with dd. According to this: https://en.wikipedia.org/wiki/GUID_Partition_Table The GPT backup header is the last 33 sectors. The last sector in use by my last partition is 29362175. As far as I can tell, sectors start at 0, so that's a total of 29362176 sectors, plus the 33 sectors for the GPT backup headers. In the end, I would expect a command like this to work: sudo dd if=/dev/rdisk4 of=disk4_backup.img bs=512 count=29362209 When I run that, the resulting disk4_backup.img is the size I expect (15033451008 bytes), but when I run gdisk on it: gdisk disk4_backup.img It tells me that the GPT backup header is corrupted.I'm pretty sure I can just get gdisk to fix it using the primary GPT header, but why can't I back up the backup-header in the first place?Is my math wrong? Are my assumptions about where the GPT backup header lies wrong? Note: gdisk does NOT complain about my original 64 GB SD card with resized partitions. It's happy with the GPT headers on it. | The modification time on files installed by dpkg, including the metadata files in /var/lib/dpkg/info , is the modification time they have inside the package archive file (the .tar.gz or equivalent contained inside the .deb file). It's normally the time they were last changed by the maintainer for files that are manually written. For files that are generated automatically by the package build tools, it's normally the time the package was built, or, with reproducible builds , the date of the latest entry in the changelog file . It has nothing to do with the time the package was installed. It's unsurprising that some package metadata files haven't changed in a very long time. Even if the content of the package changes because the program has updates, new features and so on, some things don't change often. For example a .postrm script contains things to do as the very last step of uninstallation, and most packages don't need one at all; it's unsurprising that what needs to be done at that point rarely changes. In the case of freepats.md5sum , the .md5sum file does change any time the package changes; this is a data file (some audio synthesis data) which hasn't changed in years (it's just data, no code, so there's a pretty low risk of bugs, and nobody has been interested in doing any enhancement to it). Each package averages about half a dozen files under /var/lib/dpkg/info , and Debian tends to split software into fine-grained packages (each library into its own package, data and documentation are often packaged separately from code, etc.). So having ~10k files there is typical. The state of your system is normal and there is no action to take. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/627884",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/211001/"
]
} |
628,091 | I need to use a sed command to search a text file and replace $Date$ with: $Date: 2021-01-06... $ where ... is the text for the date. I have a sed command that will search for Date and replace it with Date: 2021-01-06... : sed "s/Date/Date $(date)/" but I can't get a sed command to work to replace $Date$ . Also, I was able to get a sed command to replace $Date using sed "s/\$Date/\$Date $(date)/" but I can't figure out the syntax to search and replace $Date$ . I tried: sed "s/\$Date\$/\$Date $(date) \$/" but it does not work. | I tried: sed "s/\$Date\$/\$Date $(date) \$/" but it does not work. Because \$ in double-quotes only tells the shell not to interpret (expand) $ , but then there's sed . It gets s/$Date$/$Date … $/ (where … denotes what $(date) expanded to) and interprets the second $ as an anchor matching the end of the line. In s/regexp/replacement/ $ is interpreted as the anchor only at the end of regexp . So you need to escape this particular $ also for sed , i.e. you need sed to literally get \$ . This can be done with: sed "s/\$Date\\$/\$Date $(date) \$/" or sed "s/\$Date\\\$/\$Date $(date) \$/" It works with two or three backslashes because double-quoted $ before / does not need to (but may) be escaped, and to get \ you need \\ in double-quotes. This is kinda complicated, therefore consider single-quoting all fragments that don't need to be double-quoted: sed 's/$Date\$/$Date '"$(date)"' $/' Here all $ characters that should get to sed are single-quoted; and the only \ that should get to sed is also single-quoted. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/628091",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/338943/"
]
} |
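A minimal, self-contained demonstration of the single-quoted form recommended above; the sample file name and contents are illustrative:

```sh
printf '%s\n' 'Last modified: $Date$' > note.txt
sed 's/$Date\$/$Date '"$(date)"' $/' note.txt
# -> Last modified: $Date <current date> $
# Only the closing $ of the pattern has to be escaped for sed; the other $ characters are literal anyway.
```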
628,400 | I am doing this script (with zsh, but I guess that it is not very important): mylist=`seq 1 3`for i in $mylist ; do echo "abc/$i" ; done This gives : abc/123 while I would like to see : abc/1abc/2abc/3 A huge thanks to somebody that may find why it does not work/how to do. | You can keep it simple: for i in 1 2 3; do echo "abc/$i" ; done OR for i in $(seq 1 3); echo "abc/$i" Output: abc/1abc/2abc/3 | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/628400",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/450318/"
]
} |
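Because the question is about zsh, it may be worth noting that zsh does not word-split unquoted parameter expansions, which is why $mylist came out as a single item. A few zsh-native sketches:

```zsh
for i in {1..3}; do echo "abc/$i"; done        # brace expansion, no seq needed
mylist=$(seq 1 3)
for i in ${(f)mylist}; do echo "abc/$i"; done  # the (f) flag splits the stored value on newlines
for i in ${=mylist}; do echo "abc/$i"; done    # ${=...} re-enables sh-style word splitting
```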
628,414 | I am trying to redirect all outbound DNS requests on my local network. I have a few devices made up of PC's, cell phones, etc. How would I go about redirecting (ex: www.domain.com to 192.168.1.80). Basically spoofing www.domain.com to 192.168.1.80 so anyone on the network will not be able to connect to www.domain.com . | You can keep it simple: for i in 1 2 3; do echo "abc/$i" ; done OR for i in $(seq 1 3); echo "abc/$i" Output: abc/1abc/2abc/3 | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/628414",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/450333/"
]
} |
628,428 | Sometime, it is not possible to have the luxury of bash on a system, but conditions are easier to make on bash compared to sh or ash, what one should verify to ensure condition won't break with typical "unexpected operator" when rewriting from bash to sh or ash ? | I don't agree conditions are easier to make with ((...)) and [[...]] (assuming that's what you're referring to; note that those operators are not specific to bash and come from ksh) than the standard [ or test command. [[ ... ]] and (( ... )) have several problems of their own¹ and which are much worse than those of [ . If your [ fails with an unexpected operator error, it's likely you're not using the shell properly (typically, you forgot to quote an expansion) rather than the [ command. For how to use [ / test properly and portably, best is to refer to its POSIX specification . The few ground rules to make it safe are: quote all word expansions ( $var , $(cmd) , $((arithmetic)) ) in its arguments. That applies to every command, not just [ and not just to dash . [[ ... ]] and (( ... )) themselves are special constructs with their own specific syntax (varying from shell to shell), where it's not always obvious when things may or may not be quoted. don't pass more than 4 arguments beside the [ and ] ones. That is, don't use the deprecated -o and -a operators, nor ( or ) for grouping. So typically your [ expression should be either: a single argument as in [ "$string" ] , though I much prefer the [ -n "$string" ] variant to make it explicit that you check for $string being non-empty. a unary operator ( -f , -r , -n ...) and its operand, optionally preceded with a ! for negation. a binary operator ( = , -gt ...) and its 2 operands, optionally preceded by ! operands to arithmetic operators must be decimal integer literal constants with an optional sign. dash , like bash accepts leading and trailing whitespace in those operands, but not all [ implementations do. Check the POSIX standard for the list of portable operators. Note that dash also has a few extensions over the standard ( -nt , -ef , -k , -O , < , > ² ...) For pattern matching, use a case construct ( case $var in ($pattern)... ) instead of if [[ $var = $pattern ]]... . For extended regexp matching, you can use awk : ere_match() { awk -- 'BEGIN{exit !(ARGV[1] ~ ARGV[2])}' "$1" "$2"; }if ere_match "$string" "$regex"... Instead of: if [[ $string =~ $regex ]]... For AND/OR, chain several [ invocations with the && or || shell operators (which have equal precedence) and use command groups for grouping, the same you'd use for any other command, not just [ . if [ "$mode" = "$mode1" ] || { [ -f "$file" ] && [ -r "$file" ] }then... in place of: if [[ $mode = "$mode1" || ( -f $file && -r $file ) ]]; then... 
Some standard equivalents to some of bash 's/ dash 's/ zsh 's/ ksh 's/ yash 's test / [ non-POSIX operators: -a file -e file (but note both return false for symlink to inaccessible files for instance) -k file has_t_bit() (export LC_ALL=C ls -Lnd -- "$1" 2> /dev/null | { unset -v IFS read mode rest && case $mode in (*[Tt]) true;; (*) false;; esac }) -O file -> is_mine() (export LC_ALL=C ls -Lnd -- "$1" 2> /dev/null | { unset -v IFS read mode links fuid rest && euid=$(id -u) && [ "$fuid" -eq "$euid" ] }) -G file is_my_group() (export LC_ALL=C ls -Lnd -- "$1" 2> /dev/null | { unset -v IFS read mode links fuid fgid rest && egid=$(id -g) && [ "$fgid" -eq "$egid" ] }) -N file : no equivalent as there's no POSIX API to retrieve a file's modification or access time with full precision. file1 -nt file2 / file1 -ot file2 newer() (export LC_ALL=C case $1 in ([/.]*) ;; (*) set "./$1" "$2"; esac case $2 in ([/.]*) ;; (*) set "$1" "./$2"; esac [ -n "$(find -L "$1" -prune -newer "$2" 2> /dev/null)" ])older() { newer "$2" "$1"; }newer file1 file2 beware the behaviour of -nt / -ot varies between implementations if either file is not accessible. Here newer and older return false in those cases. file1 -ef file2 same_file() (export LC_ALL=C [ "$1" = "$2" ] && [ -e "$1" ] && return inode1=$(ls -Lid -- "$1" | awk 'NR == 1 {print $1}') && inode2=$(ls -Lid -- "$2" | awk 'NR == 1 {print $1}') && [ -n "$inode1" ] && [ "$inode1" -eq "$inode2" ] && dev1=$(df -P -- "$1" | awk 'NR == 2 {print $1}') && dev2=$(df -P -- "$2" | awk 'NR == 2 {print $1}') && [ -n "$dev1" ] && [ "$dev1" = "$dev2" ]) 2> /dev/nullsame_file file1 file2 (assuming filesystem sources in the df output don't contain whitespace; the filesystem comparison also doesn't work properly if the operands are actually devices with filesystems mounted on them). -v varname [ -n "${varname+set}" ] -o optname case $- in (*"$single_letter_opt_name"*)... . For long option name, I can't think of a standard equivalent. -R varname is_readonly() (export LC_ALL=C _varname=$1 is_readonly_helper() { [ "${1%%=*}" = "$_varname" ] && exit } eval "$( readonly -p | sed 's/^[[:blank:]]*readonly/is_readonly_helper/' )" exit 1) Though beware it's dangerous as sed may be affected by a LINE_MAX limit while variables have no length limit. Also note that in ksh93 , -R name is to check whether name is a nameref variable. "$a" == "$b" "$a" = "$b" "$a" '<' "$b" , "$a" '>' "$b" collate() { awk -- 'BEGIN{exit !(ARGV[1] "" '"$2"' ARGV[2] "")}' "$1" "$3"}collate "$a" '<' "$b"collate "$a" '>' "$b" (that collate also supports <= , >= , == , != ). Whether [ , or awk 's < compares strings using byte-to-byte comparison or the locale's collation order depends on the implementation. dash has not been internationalised, so it only works with byte values. For yash 's [ , it's based on locale collation. For bash , [[ $a < $b ]] works with locale collation while [ "$a" '<' "$b" ] works with byte value. POSIX requires awk 's < to use the locale's collation order but some awk implementations like mawk have not been internationalised so work with byte values. To force byte-value comparison, fix the locale to C . For awk 's == and != operators, POSIX used to require collation be used, but few implementations do and the POSIX specification now leaves it unspecified. [ 's = and != always do byte-to-byte comparison, but see yash 's === / !== below. "$a" '=~' "$b" see above (while in bash / ksh , the =~ operator is available only in [[...]] , that's not the case of zsh or yash whose [ supports it). 
"$string" -pcre-match "$pcre" no equivalent as PCREs are not specified by POSIX. "$a" === "$b" / "$a" !== "$b" ( strcoll() comparison) expr "z $a" = "z $b" > /dev/null / expr "z $a" != "z $b" (also note that some awk implementations == / != operator also use strcoll() for comparison). -o "?$option_name" (is valid option name) no standard equivalent that I can think of as the output format of set -o or set +o is unspecified. "$version1" -veq "$version2" (and -vne / -vgt / -vge / -vlt / -vle to compare version numbers) version_compare() { awk -- ' function pad(s, r) { r = "" while (match(s, /[0123456789]+/)) { r = r substr(s, 1, RSTART - 1) \ sprintf("%020u", substr(s, RSTART, RLENGTH)) s = substr(s, RSTART + RLENGTH) } return r s } BEGIN {exit !(pad(ARGV[1]) '"$2"' pad(ARGV[2]))}' "$1" "$3"} used as version_compare "$v1" '<' "$v2" (or <= , == , != , >= , > ), using the same meaning as yash 's operators (or zsh 's numeric order) do, and assuming none of the numbers have more than 20 digits. The bosh shell's [ also has -D file and -P file to test whether a file is a door or event port . Those are types of files that are specific to Solaris, so not covered by POSIX, but you could always define a: is_of_type() (export LC_ALL=C case "$2" in ([./]*) file="$2";; (*) file="./$2" esac [ -n "$(find -H "$file" -prune -type "$1")" ]) Then, you can do is_of_type D "$file" which even though not POSIX would likely work on Solaris where doors are relevant, or the equivalent on other systems and their own specific types of files. ¹ see for instance the command injection vulnerabilities in [[...]] 's arithmetic operators and ((...)) , the mess with the =~ operator in bash3.2+, ksh93 or yash. ² some of which are quite widespread among [ implementations and might be added in a future version of the POSIX standard | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/628428",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/53490/"
]
} |
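As a compact illustration of the first ground rule above (quote every expansion), this is the kind of failure that produces the "unexpected operator" message in dash, together with the quoted form that avoids it:

```sh
var=''
[ $var = foo ]     # expands to: [ = foo ]    -> "unexpected operator" error in dash
[ "$var" = foo ]   # expands to: [ "" = foo ] -> a valid test that simply returns false
```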
628,485 | I have the following brace expansion (bash shell): echo -e {0..4..2}" "{0..2..2}"\n" I expected this to produce 0 00 22 02 24 04 2 but every line of the output except the first has a leading space and there is an extra blank line at the end that I didn't expect. Why is this. Is there a simple way to fix it? Obviously I can do something clunky like pipe to sed 's/^ //' , but is there a prettier way without piping to extra commands? | echo prints its arguments separated by spaces, even if they include (or generate) newline characters. Additionally it adds one newline character at the end of its output (unless it's echo -n ). Use printf : printf '%s\n' {0..4..2}" "{0..2..2} When echo does something unexpected, always consider printf . After you get familiar with printf it may even become your first choice. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/628485",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/120484/"
]
} |
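One way to see why echo produces the leading spaces and the extra blank line is to print each word generated by the brace expansion on its own line (purely illustrative):

```bash
printf '<%s>\n' {0..4..2}" "{0..2..2}
# <0 0>
# <0 2>
# <2 0>  ...six separate words in total; echo joins them with single spaces (hence the
#           leading space on every line after the first) and appends one final newline
#           (hence the trailing blank line).
```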
628,617 | I have a bunch of files with the same extension (let's say .txt) and I would like to concatenate them. I am using cat *.txt > concat.txt , but I would like to add a new line between each file so to distinguish them in concat.txt. Is it possible to do it with a single bash command rather than an implementation such as this ? Thank you | Not a single command, but a simple one-liner: for f in *.txt; do cat -- "$f"; printf "\n"; done > newfile.txt That will give this error: cat: newfile.txt: input file is output file But you can ignore it, at least on GNU/Linux systems. Stéphane Chazelas pointed out in the comments that apparently, on other systems this could result in an infinite loop instead, so to avoid it, try: for f in *.txt; do [[ "$f" = newfile.txt ]] || { cat -- "$f"; printf "\n"; }done > newfile.txt Or just don't add a .txt extension to the output file (it isn't needed and doesn't make any difference at all, anyway) so that it won't be included in the loop: for f in *.txt; do cat -- "$f"; printf "\n"; done > newfile | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/628617",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/277882/"
]
} |
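If a genuinely single command is preferred, one awk sketch that inserts a blank line between the files; the output name deliberately has no .txt extension so the glob does not pick it up:

```sh
awk 'FNR==1 && NR>1 {print ""} {print}' ./*.txt > newfile
```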
628,790 | I installed Intellij IDEA from on my arch using yay . I worked fine but recently it started doing this: john@arch-thinkpad ~ [1]> intellij-idea-ultimate-editionUnrecognized VM option 'UseConcMarkSweepGC'Error: Could not create the Java Virtual Machine.Error: A fatal exception has occurred. Program will exit. How can I fix this problem so I can run intellij IDEA normaly like in good old times ? Thank you for help | You can switch to java-11. Intellij calls the vm with an option that is no longer supported by java 15. If you start Intellij via terminal (and java 11) it shows you that message: OpenJDK 64-Bit Server VM warning: Option UseConcMarkSweepGC was deprecated in version 9.0 and will likely be removed in a future release. For now this workaround works. Remember to change the Java version in arch with archlinux-java. Install java 11: $sudo pacman -S jdk11-openjdk Switch to java 11: $sudo archlinux-java set java-11-openjdk | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/628790",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/429595/"
]
} |
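The corresponding commands on Arch, with a status check first (package and environment names as used in the answer above):

```sh
archlinux-java status              # list installed Java environments and the current default
sudo pacman -S jdk11-openjdk       # install OpenJDK 11 if it is not already present
sudo archlinux-java set java-11-openjdk
```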
629,029 | So I'm trying to find a template for bash/shell script that essentially run a command, let's call it "command1" using an input "X" and then use the output of command1 on itself...but in a loop. Here a real world example: echo "hello" | cut -c2- which will remove the first character at the beginning of the input string and output: ello Now, the above is just an example to illustrate the template mentioned above. Following this example, how could i use command1 output: echo "hello" | cut -c2- But as input, in a loop, either indefinite loop or until only one byte/character remain. So that i don't need to copy/paste the output and replace it with the old input: echo "ello" | cut -c2- Or need to use multiple pipe which would be too slow/inefficient. Simpler Explanation Using manual action, this would be the replacement of me(the user) copy pasting the output of the command i gave as example (or the pseudo code i described earlier) and use that as input for that same command, repeating that same action until "one" byte or char remain. | If I understand correctly, you are looking for something like this: $ var=hello$ while [ -n "$var" ]; do printf -- "Var is now '%s'\n" "$var" var=$(printf -- '%s\n' "$var" | cut -c2-); doneVar is now 'hello'Var is now 'ello'Var is now 'llo'Var is now 'lo'Var is now 'o' | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/629029",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/409852/"
]
} |
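A variation of the same loop that stops once a single character remains, as the question asked, and that drops the first character with parameter expansion instead of spawning cut on every iteration (illustrative):

```sh
var=hello
while [ "${#var}" -gt 1 ]; do
    var=${var#?}               # strip the first character
    printf '%s\n' "$var"
done
# prints: ello, llo, lo, o
```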
629,146 | I'm writing a bash script named test.sh as below on my Ubuntu: #!/bin/bashwhile true; do echo 'abc' # pwd, df... sleep 1done When I execute ./test.sh at one terminal, I open another terminal to execute the commands below: $ pgrep test31110$ ps -ef | grep sleepme 31140 31110 0 20:58 pts/1 00:00:00 sleep 1me 31142 16389 0 20:58 pts/0 00:00:00 grep --color=auto sleep$ ps -ef | grep sleepme 31146 31110 0 20:58 pts/1 00:00:00 sleep 1me 31148 16389 0 20:58 pts/0 00:00:00 grep --color=auto sleep$ ps -ef | grep sleepme 31150 31110 0 20:58 pts/1 00:00:00 sleep 1me 31152 16389 0 20:58 pts/0 00:00:00 grep --color=auto sleep So, the PID of the process ./test.sh is 31110 , and when I execute the command ps -ef | grep sleep , I get many processes of sleep 1 (PIDs are 31140 , 31146 , 31150 ...), which are all the child-processes of the process ./test.sh . Well, for now it seems that I can understand everything, the child-processes of sleep 1 come from that loop of while true . However, when I try to ps -ef | grep echo , I get nothing. I've also tried to execute other commands, such as pwd , df , but they can't be grep ed either. So my question is why the command sleep is an independent process whereas other commands aren't. | There are two aspects to consider here: echo and pwd can indeed not appear as independent processes because they are builtin commands of the bash shell (see the output of type echo e.g. - but note that they could very well be implemented as external executable, and what is or is not implemented as builtin or external program does vary from shell to shell). Commands launched by the shell that call an external executable (e.g. df ) on the other hand are independent processes, but they often complete so quickly that you will have a hard time "catching" them with ps , (i.e. they are alredy finished when ps starts and for that reason won't show up in the output). | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/629146",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/145824/"
]
} |
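A quick way to check the distinction described above is bash's type builtin; and a common trick keeps grep from matching its own process:

```sh
type echo pwd df sleep        # echo and pwd are shell builtins; df and sleep are external programs
ps -ef | grep '[s]leep'       # the [s] pattern keeps the grep command itself out of the output
```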
629,281 | I'm able to open gitk but it crashes as soon as I open a commit whom changes contains an emoji (not the commit message). Error ❯ gitk --allX Error of failed request: BadLength (poly request too large or internal Xlib length error) Major opcode of failed request: 139 (RENDER) Minor opcode of failed request: 20 (RenderAddGlyphs) Serial number of failed request: 6687 Current serial number in output stream: 6706 Env ❯ cat /etc/os-release --plainNAME="Linux Mint"VERSION="20 (Ulyana)"ID=linuxmintID_LIKE=ubuntuPRETTY_NAME="Linux Mint 20"VERSION_ID="20"HOME_URL="https://www.linuxmint.com/"SUPPORT_URL="https://forums.linuxmint.com/"BUG_REPORT_URL="http://linuxmint-troubleshooting-guide.readthedocs.io/en/latest/"PRIVACY_POLICY_URL="https://www.linuxmint.com/"VERSION_CODENAME=ulyanaUBUNTU_CODENAME=focal Git ❯ git --versiongit version 2.25.1 Examples 6e05ecd add v3->v4 migration script to update variables https://github.com/rafaelrinaldi/pure/pull/271/commits/6e05ecdad0e4f623050e154e16c0af0315767940 Questions I tried various things: removing ~/.Xresources config related to fonts editing then removing ~/.config/fontconfig/conf.d/30-icons.conf Without success, most of the issues I found were related to st terminal . However, I'm not using it but guake , and the issue also happened with yakuake , gnome-terminal and hyper How could I fix this? | After digging with XFT_DEBUG flag I found something weird. I run the command with the flag and navigate to the problematic commit: ❯ XFT_DEBUG=1 gitk --allXFT_DEBUG=1XftFontInfoFill: /usr/share/fonts/truetype/noto/NotoSans-Medium.ttf: 0 (15.9999 pixels)XftFontInfoFill: /usr/share/fonts/truetype/noto/NotoSans-Medium.ttf: 0 (15.9999 pixels)…XftFontInfoFill: /usr/share/fonts/truetype/noto/NotoSans-Medium.ttf: 0 (17.5999 pixels)XftFontInfoFill: /usr/share/fonts/truetype/noto/NotoSans-Bold.ttf: 0 (17.5999 pixels)XftFontInfoFill: /usr/share/fonts/truetype/noto/NotoSansSymbols2-Regular.ttf: 0 (17.5999 pixels)XftFontInfoFill: /usr/share/fonts/opentype/noto/NotoSansCJK-Regular.ttc: 2 (17.5999 pixels)XftFontInfoFill: /usr/share/fonts/truetype/dejavu/DejaVuSans.ttf: 0 (17.5999 pixels)XftFontInfoFill: /usr/share/fonts/truetype/noto/NotoColorEmoji.ttf: 0 (109 pixels)X Error of failed request: BadLength (poly request too large or internal Xlib length error) Major opcode of failed request: 139 (RENDER) Minor opcode of failed request: 20 (RenderAddGlyphs) Serial number of failed request: 6687 Current serial number in output stream: 6706 Then spotted the last line prior to the fail had an excentric pixel size XftFontInfoFill: /usr/share/fonts/truetype/dejavu/DejaVuSans.ttf: 0 (17.5999 pixels) XftFontInfoFill: /usr/share/fonts/truetype/noto/NotoColorEmoji.ttf: 0 ( 109 pixels ) Removing the Noto-color-emoji fonts solved the issue apt remove --purge fonts-noto-color-emoji No more crash and a consistent font size is used to render XftFontInfoFill: /usr/share/fonts/opentype/noto/NotoSansCJK-Regular.ttc: 2 (17.5999 pixels) Version ❯ apt show fonts-noto-color-emoji…Version: 0~20200408-1 | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/629281",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/17362/"
]
} |
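To reproduce the diagnosis elsewhere, the steps from the answer condense to the following (package name as in the answer; fc-list merely shows which emoji fonts fontconfig knows about):

```sh
fc-list | grep -i emoji                          # which emoji fonts are installed
XFT_DEBUG=1 gitk --all                           # re-run with Xft debugging to spot the font that fails
sudo apt remove --purge fonts-noto-color-emoji   # remove the font triggering the RenderAddGlyphs error
```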
629,292 | I was doing some hard drive speed benchmarking and decided to try the following command to see what would happen: dd if=/dev/zero of=/dev/null After I canceled it, I got: (269 GB, 251 GiB) copied, 107.101 s, 2.5 GB/s If I change the bytesize or count, I get different results: dd if=/dev/zero of=/dev/null bs=4096 count=2000(8.2 MB, 7.8 MiB) copied, 0.00134319 s, 6.1 GB/sdd if=/dev/zero of=/dev/null bs=16384 count=2000(33 MB, 31 MiB) copied, 0.00305674 s, 10.7 GB/s Since I'm not reading or writing to any real drive, what am I actually measuring here? How fast the CPU can generate zeros and move them into a virtual trash can? This actually seems really slow to me for just that. Does it hit the RAM? Does the motherboard bus come into play? Does this test have any practical benchmarking applications (ie. comparing different generation processors)? Is the result here the fastest I/O the system could ever achieve, even if the real disks were (theoretically) infinite speed? What does dd if=/dev/zero of=/dev/null actually measure? | After digging with XFT_DEBUG flag I found something weird. I run the command with the flag and navigate to the problematic commit: ❯ XFT_DEBUG=1 gitk --allXFT_DEBUG=1XftFontInfoFill: /usr/share/fonts/truetype/noto/NotoSans-Medium.ttf: 0 (15.9999 pixels)XftFontInfoFill: /usr/share/fonts/truetype/noto/NotoSans-Medium.ttf: 0 (15.9999 pixels)…XftFontInfoFill: /usr/share/fonts/truetype/noto/NotoSans-Medium.ttf: 0 (17.5999 pixels)XftFontInfoFill: /usr/share/fonts/truetype/noto/NotoSans-Bold.ttf: 0 (17.5999 pixels)XftFontInfoFill: /usr/share/fonts/truetype/noto/NotoSansSymbols2-Regular.ttf: 0 (17.5999 pixels)XftFontInfoFill: /usr/share/fonts/opentype/noto/NotoSansCJK-Regular.ttc: 2 (17.5999 pixels)XftFontInfoFill: /usr/share/fonts/truetype/dejavu/DejaVuSans.ttf: 0 (17.5999 pixels)XftFontInfoFill: /usr/share/fonts/truetype/noto/NotoColorEmoji.ttf: 0 (109 pixels)X Error of failed request: BadLength (poly request too large or internal Xlib length error) Major opcode of failed request: 139 (RENDER) Minor opcode of failed request: 20 (RenderAddGlyphs) Serial number of failed request: 6687 Current serial number in output stream: 6706 Then spotted the last line prior to the fail had an excentric pixel size XftFontInfoFill: /usr/share/fonts/truetype/dejavu/DejaVuSans.ttf: 0 (17.5999 pixels) XftFontInfoFill: /usr/share/fonts/truetype/noto/NotoColorEmoji.ttf: 0 ( 109 pixels ) Removing the Noto-color-emoji fonts solved the issue apt remove --purge fonts-noto-color-emoji No more crash and a consistent font size is used to render XftFontInfoFill: /usr/share/fonts/opentype/noto/NotoSansCJK-Regular.ttc: 2 (17.5999 pixels) Version ❯ apt show fonts-noto-color-emoji…Version: 0~20200408-1 | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/629292",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/56199/"
]
} |
629,302 | I have a Raspberry Pi 3 currently hosting a web server.I'd like to turn off the activity leds, and power led as they consume energy for nothing.How can I do this using debian ? Under Raspbian it's easy, but under Debian I found no way of doing this. The image I am using is : https://raspi.debian.net/tested-images/ | After digging with XFT_DEBUG flag I found something weird. I run the command with the flag and navigate to the problematic commit: ❯ XFT_DEBUG=1 gitk --allXFT_DEBUG=1XftFontInfoFill: /usr/share/fonts/truetype/noto/NotoSans-Medium.ttf: 0 (15.9999 pixels)XftFontInfoFill: /usr/share/fonts/truetype/noto/NotoSans-Medium.ttf: 0 (15.9999 pixels)…XftFontInfoFill: /usr/share/fonts/truetype/noto/NotoSans-Medium.ttf: 0 (17.5999 pixels)XftFontInfoFill: /usr/share/fonts/truetype/noto/NotoSans-Bold.ttf: 0 (17.5999 pixels)XftFontInfoFill: /usr/share/fonts/truetype/noto/NotoSansSymbols2-Regular.ttf: 0 (17.5999 pixels)XftFontInfoFill: /usr/share/fonts/opentype/noto/NotoSansCJK-Regular.ttc: 2 (17.5999 pixels)XftFontInfoFill: /usr/share/fonts/truetype/dejavu/DejaVuSans.ttf: 0 (17.5999 pixels)XftFontInfoFill: /usr/share/fonts/truetype/noto/NotoColorEmoji.ttf: 0 (109 pixels)X Error of failed request: BadLength (poly request too large or internal Xlib length error) Major opcode of failed request: 139 (RENDER) Minor opcode of failed request: 20 (RenderAddGlyphs) Serial number of failed request: 6687 Current serial number in output stream: 6706 Then spotted the last line prior to the fail had an excentric pixel size XftFontInfoFill: /usr/share/fonts/truetype/dejavu/DejaVuSans.ttf: 0 (17.5999 pixels) XftFontInfoFill: /usr/share/fonts/truetype/noto/NotoColorEmoji.ttf: 0 ( 109 pixels ) Removing the Noto-color-emoji fonts solved the issue apt remove --purge fonts-noto-color-emoji No more crash and a consistent font size is used to render XftFontInfoFill: /usr/share/fonts/opentype/noto/NotoSansCJK-Regular.ttc: 2 (17.5999 pixels) Version ❯ apt show fonts-noto-color-emoji…Version: 0~20200408-1 | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/629302",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/451277/"
]
} |
629,412 | I am currently using Teams for a master and I would like to blur my backgrounds, do you know any way to do it? | Microsoft Teams has this option built-in ( as described in the official Microsoft documentation ). Unfortunately, the documentation also states that as of today: Notes: For now, Linux users aren't able to use this feature. Background effects won't be available to you if you're using Teams through optimized virtual desktop infrastructure (VDI). So unfortunately, until Microsoft Teams is updated for Linux, if you want to use this feature, you need to either try your luck with a Wine version or pass your camera stream through an external filtering tool (such as OBS ). | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/629412",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/308906/"
]
} |
629,432 | During the 1980s and 1990s Bulletin Board Systems (BBS) were widely used on all kinds of computers. The BBS Documentary documents the rise and fall of the BBS subculture. Watching this documentary left me wondering about Unix systems. They are - similar to BBS - designed to support remote TTYs over a modem, using getty and login to provide access to the system. Does there exist any record of a publicly accessible Unix system during the 1980s and 1990s that people could call into using a modem? | Even before they were routinely interconnected, Unix systems could be considered as effectively their own BBSes. They were usually multi-user systems, and they allowed their users to swap files, and exchange messages (initially as a specific use of shared files). Users didn’t have to connect to a separate BBS. Interconnecting Unix systems, whether permanently or intermittently (using UUCP ), provided an extension to this, and protocols were created to allow users to share files with users on other systems, and to send and receive messages to and from users on other systems. Successive changes to protocols and new protocols provided more and more indirection, for example going from bang paths for email to the present-day DNS-based system. There was one significant difference between BBSes as seen from micros, and Unix systems: the former tended to favour asynchronous use, since only a small number of users could be connected simultaneously (compared to the overall number of users), whereas the latter also allowed synchronous messaging systems (starting with write which was already present in V1 Unix) and related programs. This led in particular to the emergence of MUDs (multi-user dungeons) and the communities surrounding them; these were nominally multi-user text-based games, but often users were interested more in chatting with other users than actually playing the game. (MUDs weren’t only available on Unix systems; Micronet in particular in the UK offered one which was accessible over Prestel .) Many Unix systems also provided dial-up access; this was fairly straightforward since modems could be considered as extensions of the serial lines used to connect terminals to Unix systems. I don’t know of any which had the same access patterns as BBSes; most would have provided dial-up access for regular users working from home (although logins could of course be shared, or given out more widely than the bean-counters intended), and some provided commercial access to specific services. Many early ISPs effectively provided dial-up connectivity to Unix systems; but the latter weren’t the end-users’ target, they were simply a hop on the way to the rest of the Internet, either directly (for interactive protocols) or indirectly (for email, Usenet etc.). | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/629432",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/18718/"
]
} |
629,616 | I was reading this message from the zsh mailing list about key bindings and I'd like to know which key I need to press: ^X^I (I think Ctrl-X Ctrl-I , the capital X and I ) ^[^@ (I think Ctrl-Esc-@ ??) ^X^[q (I think Ctrl-X Esc-q ??) ^XQ (I think Ctrl-X and Q ??) From the Archlinux wiki page on zsh ^[[1;3A ^[[1;3D From bindkey ^[[1;5C ^[[A I know that ^[ means Esc, but I'm not sure how to find others.Is there any official reference or website that lists these? | ^ c is a common notation for Ctrl + c where c is a (uppercase) letter or one of @[\]^_ . It designates the corresponding control character . The correspondence is that the numeric code of the control character is the numeric code of the printable character (letter or punctuation symbol) minus 64, which corresponds to setting a bit to 0 in base 2. In addition, ^? often means character 127. Some keys send a control character: Escape = Ctrl + [ Tab = Ctrl + I Return (or Enter or ⏎ ) = Ctrl + M Backspace = Ctrl + ? or Ctrl + H (depending on the terminal configuration) Alt (often called Meta because that was the name of the key at that position on historical Unix machines) plus a printable character sends ^[ (escape) followed by that character. Most function and cursor keys send an escape sequence, i.e. the character ^[ followed by some printable characters. The details depend on the terminal and its configuration. For xterm, the defaults are documented in the manual . The manual is not beginner-friendly. Here are some tips to help: CSI means ^[[ , i.e. escape followed by open-bracket. SS3 means ^[O , i.e. escape followed by uppercase-O. "application mode" is something that full-screen programs usually turn on. Some keys send a different escape sequence in this mode, for historical reasons. (There are actually multiple modes but I won't go into a detailed discussion because in practice, if it matters, you can just bind the escape sequences of both modes, since there are no conflicts.) Modifiers ( Shift , Ctrl , Alt / Meta ) are indicated by a numerical code. Insert a semicolon and that number just before the last character of the escape sequence. Taking the example in the documentation: F5 sends ^[[15~ , and Shift + F5 sends ^[[15;2~ . For cursor keys that send ^[[ and one letter X , to indicate a modifier M , the escape sequence is ^[[1; M X . Xterm follows an ANSI standard which itself is based on historical usage dating back from physical terminals. Most modern terminal emulators follow that ANSI standard and implement some but not all of xterm's extensions. Do expect minor variations between terminals though. Thus: ^X^I = Ctrl + X Ctrl + I = Ctrl + X Tab ^[^@ = Ctrl + Alt + @ = Escape Ctrl + @ . On most terminals, Ctrl + Space also sends ^@ so ^[^@ = Ctrl + Alt + Space = Escape Ctrl + Space . ^X^[q = Ctrl + X Alt + q = Ctrl + X Escape q ^XQ = Ctrl + X Shift + q ^[[A = Up ^[[1;3A = Alt + Up ( Up , with 1;M to indicate the modifier M ). Note that many terminals don't actually send these escape sequences for Alt + cursor key . ^[[1;3D = Alt + Left ^[[1;5C = Ctrl + Right There's no general, convenient way to look up the key corresponding to an escape sequence. The other way round, pressing Ctrl + V followed by a key chord at a shell prompt (or in many terminal-based editors) inserts the escape sequence literally. See also How do keyboard input and text output work? and key bindings table? | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/629616",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/120293/"
]
} |
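A quick way to check these mappings yourself, as a small illustration of the answer above (nothing beyond cat, od and printf is assumed):
# At a shell prompt, press Ctrl+V and then the key chord: the escape
# sequence is inserted literally, e.g. ^[[1;3A for Alt+Up.
# Alternatively run cat -v, press the key, then Ctrl+C to exit:
cat -v
# Verify the "printable character minus 64" rule for control characters:
printf 'I' | od -An -td1     # prints 73; 73 - 64 = 9, i.e. Tab (^I)
printf '\t' | od -An -td1    # prints 9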
629,627 | in my raspberry pi running debian os i have 2 interfaces, wlan0 & eth0.Both of interfaces got dhcp from both gateway server.How can i ping both LAN ?For example : eth0 -> gateway 10.1.22.1 -> LAN 10.0.0.0/8wlan0 -> gateway 192.168.10.1 -> LAN 192.168.10.0/24 -> also can browse internet the route table that i get is : Destination Gateway Genmask Flags Metric Ref Use Ifacedefault 10.1.22.1 0.0.0.0 UG 203 0 0 eth0default 192.168.10.1 0.0.0.0 UG 304 0 0 wlan010.1.22.0 0.0.0.0 255.255.255.0 U 203 0 0 eth0192.168.10.0 0.0.0.0 255.255.255.0 U 304 0 0 wlan0 I can ping LAN 10.0.0.0/8 but cannot browsing internet.How can i browse internet and also ping LAN 10.0.0.0/8 ? Sorry, it's basic linux network configuration. I'm not familiar with linux os.May someone can help me to figure it out. | | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/629627",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/452023/"
]
} |
629,642 | I am trying to start nodepool-launcher on centOs 7 so that i can run Zuul API gateway management. Initially, I got this error: Failed at step EXEC spawning /usr/bin/nodepool-launcher.service: No such file or directory. I created a file named nodepool-launcher.service in /usr/bin directory. The file contains: [Service]ExecStart= /bin/bash /usr/bin/nodepool-launcher.service Now, I have this error: [root@mypc ~]# systemd-analyze verify nodepool-launcher.servicenodepool-launcher.service has more than one ExecStart= setting, which is only allowed for Type=oneshot services. Refusing.Error: org.freedesktop.DBus.Error.InvalidArgs: Unit is not loaded properly: Invalid argument.Failed to create nodepool-launcher.service/start: Invalid argumentnodepool-launcher.service: command /usr/bin/nodepool-launcher is not executable: No such file or directory I have followed this documentation for installing and configuring nodepool. Any suggestions to overcome this problem are most welcome. | | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/629642",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/450711/"
]
} |
629,687 | Could someone explain to me what !d command does? It seems to recall an old command ( du -sh for me) but I have no idea how this command is chosen. | ! is event designator in bash. You can read more about it here https://alexbaranowski.github.io/bash-bushido-book/#event-designator-word-designator-modifiers (this is my own book about bash and tricks). !STRING will invoke the last command starting with STRING. Edit: Excerpt from the link/book: To invoke the last command that starts with a given string, use the event designator with a string so that the command looks like !<string> . Example below: [Alex@SpaceShip cat1]$ whoamiAlex[Alex@SpaceShip cat1]$ whoAlex :0 2018-04-20 09:37 (:0)...[Alex@SpaceShip cat1]$ !whowhoAlex :0 2018-04-20 09:37 (:0)...[Alex@SpaceShip cat1]$ !whoawhoamiAlex | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/629687",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/49275/"
]
} |
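A short illustration of the event designator described above; the :p modifier (print without executing) is standard bash history expansion and is handy for previewing what !STRING would run:
du -sh ~        # some earlier command starting with "d"
!d:p            # prints "du -sh ~" without running it
!d              # re-runs the most recent command starting with "d"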
629,770 | I am working with thousands of files whose names contain sequential dates ranging from 2001-01-01 to 2020-12-31. A sample of such files can be seen below:
gpm_original_20010101.nc
gpm_cressman_20010101_cor_method-add_fac-0.5_pass-1_radius-500km.nc
gpm_cressman_20010101_cor_method-add_fac-0.5_pass-2_radius-250km.nc
gpm_cressman_20010101_cor_method-add_fac-0.5_pass-3_radius-150km.nc
gpm_cressman_20010101_cor_method-add_fac-0.5_pass-4_radius-75km.nc
gpm_cressman_20010101_cor_method-add_fac-0.5_pass-5_radius-30km.nc
...
gpm_original_20010131.nc
gpm_cressman_20010131_cor_method-add_fac-0.5_pass-1_radius-500km.nc
gpm_cressman_20010131_cor_method-add_fac-0.5_pass-2_radius-250km.nc
gpm_cressman_20010131_cor_method-add_fac-0.5_pass-3_radius-150km.nc
gpm_cressman_20010131_cor_method-add_fac-0.5_pass-4_radius-75km.nc
gpm_cressman_20010131_cor_method-add_fac-0.5_pass-5_radius-30km.nc
and so on until 2020-12-31. What I need to do is to reorganize these files into new folders based on years and months. The directory tree needs to follow the logic year with sub-directories months, like this:
2001 01 02 03 04 05 06 07 08 09 10 11 12
2002 01 02 03 04 05 06 07 08 09 10 11 12
and so on. And the files should be moved to these directories based on the equivalent date in their filenames. For example: all files containing 200101xx in their names should be moved to the 2001/01 folder. What is the most straightforward way to achieve this using bash? | Here is my proposal if I understood correctly:
for i in *.nc; do
    [[ "$i" =~ _([0-9]{8})[_.] ]] && d="${BASH_REMATCH[1]}"
    mkdir -p "${d:0:4}/${d:4:2}"
    mv "$i" "${d:0:4}/${d:4:2}"
done | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/629770",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/102505/"
]
} |
629,925 | I have a series of commands, e.g.: ssh -i key 10.10.10.10 mkdir -p "${DEST_PATH}/subdir1" "${DEST_PATH}/subdir2"rsync "${SOURCE_PATH}" "$DEST_HOST:${DEST_PATH}/subdir1"... These are fed variables as DEST_PATH=$( myfun "foo" )SOURCE_PATH=$( myfun "bar" ) which in turn are fed string as function myfun { local VALUE=$( grep "$1" yadayada.txt | tail -1 | sed 's/someregex//' ) echo "$VALUE"} How would you deal with $VALUE having some spaces.. like if $VALUE was Hello World ?I thought always using "" was going to solve all issues but I was wrong. Several other questions suggest either escaping or using single quotes, but I'm not sure of the best way to go about it. Thank you | If you know the remote system has a xargs command that supports a -0 option, you can do: printf '%s\0' "$DEST_PATH/subdir1" "$DEST_PATH/subdir2" | ssh -i key 10.10.10.10 'xargs -0 mkdir -p --' That xargs -0 mkdir -p -- piece of shell code would be interpreted the same by all shells (remember sshd on the remote machine runs the login shell of the remote user to interpret the code given to ssh which could be anything). The list of arguments for mkdir is passed NUL delimited via the remote shell's stdin, inherited by xargs . Another advantage with that approach is that you can pass any number of arguments. If needed xargs will break that list and run several invocations of mkdir -p -- dirn dirn+1... to bypass the limit of the size of the argument+environment list, even possibly starting the directory creation before the full list has been transferred for very large lists. See How to execute an arbitrary simple command over ssh without knowing the login shell of the remote user? for other approaches. rsync itself by default has the same problem as it passes the name of the remote file/dir to sync from/to as part of that remote shell command line, and the better solution again is to take that remote shell out of the loop by using the -s / --protect-args which instead of passing the file/dir names in the shell code, passes it on the rsync server's stdin. So: rsync -s -- "$SOURCE_PATH" "$DEST_HOST:$DEST_PATH/subdir1/" Quoting the rsync man page: If you need to transfer a filename that contains whitespace, you can either specify the --protect-args ( -s ) option, or you’ll need to escape the whitespace in a way that the remote shell will understand. For instance: rsync -av host:'file\ name\ with\ spaces' /dest $ rsync --rsync-path='set -x; rsync' /etc/issue localhost:'1 2'+zsh:1> rsync --server -e.LsfxC . 1 2$ rsync --rsync-path='set -x; rsync' /etc/issue localhost:'1;uname>&2'+zsh:1> rsync --server -e.LsfxC . 1+zsh:1> unameLinux See how the file name is interpreted as shell code. With -s : ~$ rsync -s --rsync-path='set -x; rsync' /etc/issue localhost:'1;uname>&2'+zsh:1> rsync --server -se.LsfxC no trace of the file names on the remote shell command line. ~$ cat '1;uname>&2'Ubuntu 20.04.1 LTS \n \l | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/629925",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/388493/"
]
} |
629,978 | I know that running cat with no argument reflects the user input $ cattesttestreflectedreflected I want to pipe the reflected output to another program such as base64 . The expected behavior is like so $ cat | base64testdGVzdA==anotherYW5vdGhlcg== I want this for encoding text line-by-line as I type, and/or send them over something like nc . However, when used like this, no output seems to be reflected and escaping with ctrl+C just terminates the whole thing without output $ cat | base64testfail^C With everything working correctly, I should be able to establish an encoded/encrypted connection (very simple chat application) like so # client$ $CMD | encrypt | nc $SERVER $PORTthis is a testmultiple lines^C# host$ nc -lvp | decryptthis is a testmultiple lines Similarly, I should be able to encode & save as follows $ CMD | base64 | tee log_filetestdGVzdA==anotherYW5vdGhlcg==^C$ cat log_filedGVzdA==YW5vdGhlcg== Note that the whole thing should be a single pipe line. Using a loop wouldn't work well as nc would establish a new connection every iteration and tee without -a would overwrite the file every line (per iteration). I want 1 single instance of the final command (e.g. nc , base64 ) taking input pipe from CMD like with cat but with user input instead of a file. I'm looking for a way to do said piping of user input line-by-line to another process, preferably as a short one-liner. How can I get such piping of user input? | | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/629978",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/370354/"
]
} |
630,053 | I'm on Manjaro GNOME. I installed this package from AUR via Add/Remove Software: https://aur.archlinux.org/packages/github-desktop/ It told me that I need to restart as it /installed/reinstalled some kernel modules or something. Now my default gnome-terminal does not work. I cannot open it. When I try to open it either via keyboard shortcut or an icon, it tries to start but never shows up. I can see it as a process, it shows up for a second in my System Monitor and then just disappears. I installed another (Deepin) terminal. When I open it, that's what I get: _p9k_init_params:72: character not in range manjaro% The same also happens with alacritty terminal. The GNOME terminal had ZSH and a lot of emojis, etc. | I have the same problem after a recent update. I am no expert but I guess something wrong with the locale settings. I fixed it by regenerating the locale settings as shown here : Open a terminal with Ctrl + Alt + F3 In /etc/locale.gen add/uncomment en_US.UTF-8 UTF-8 then run with sudo: locale-gen Then my gnome terminal is functional again. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/630053",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/414687/"
]
} |
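One scripted way to do the same edit described in the answer above; it assumes an Arch/Manjaro-style /etc/locale.gen and that en_US.UTF-8 is the locale you want (adjust to taste):
sudo sed -i 's/^#\s*en_US.UTF-8 UTF-8/en_US.UTF-8 UTF-8/' /etc/locale.gen   # uncomment the locale
sudo locale-gen                                                             # regenerate locales
locale -a | grep -i en_US                                                   # confirm it is available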
630,057 | I have this script: #!/bin/shSERVERLIST=hostsICMD='cat /etc/redhat-release'while read SERVERNAMEdo ssh $SERVERNAME $ICMDdone < "$SERVERLIST" With hosts just like: hostnamehostnamehostname When I run the script ./run.sh it only goes to a single host and stops. Why is it not reading the entire host list? | ssh pre-reads stdin on the assumption that the remote job will want some data, and it makes for a faster start-up. So the first job eats the rest of your hosts list. You can prevent this happening either using the ssh -n option, or by redirecting like ssh < /dev/null . Being a belt-and-braces man, I do both: ssh -n < /dev/null <other ssh options> . If you really want the remote job to read something from the local system, you need to pipe it, or here-doc it, for each iteration. Even with for <host> , ssh will read an indeterminate amount of data from stdin for each host start-up. I would always ping a remote host to get status before ssh to it. IIRC, ping -W has a hard timeout option, while ssh can hang in DNS or while connecting for 90+ seconds, for an unknown or dead host. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/630057",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/137794/"
]
} |
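Applying the advice above to the script in the question, a belt-and-braces version looks roughly like this (one hostname per line in the hosts file is assumed):
#!/bin/sh
SERVERLIST=hosts
ICMD='cat /etc/redhat-release'
while read -r SERVERNAME
do
    ping -c1 -W2 "$SERVERNAME" >/dev/null 2>&1 || continue   # skip dead hosts, as suggested above
    ssh -n "$SERVERNAME" "$ICMD" < /dev/null
done < "$SERVERLIST"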
630,186 | I have Ubuntu 18.04 and we have an admin account and an account for other users. I have added the public SSH keys of users who need admin access to the admin account, but when I try to do the same for an individual user, I don't see the authorized_keys file in .ssh directory for that user. How should I proceed here? The below are the commands that I have tried: cd /homecd /adminls -anano .ssh/authorized_keys Then I add the public key to the admin account. This works for admin but for other users I can't see any authorized_keys file. | Generate an ssh-key: ssh-keygen -t rsa -b 4096 -C "comment" copy it to your remote server: ssh-copy-id user@ip or you can manually copy the ~/.ssh/id_rsa.pub to ~/.ssh/authorized_keys . Edit It can be done through ssh command as mentioned @chepner : ssh user@ip 'mkdir ~/.ssh' ssh user@ip 'cat >> ~/.ssh/authorized_keys' < ~/.ssh/id_rsa.pub | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/630186",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/452155/"
]
} |
630,290 | I have data in a text file of the form: 1 1 2 1 2 1 4 1 6 2 2 1 2 2 2 3 2 4 3 3 1 3 5 3 9 3 11 For rows with the same ID (1st column) I want to add a column which is sum all the values in column 2 upto the previous row. Which a desired output: 1 1 2 1 2 21 4 41 6 82 2 1 02 2 1 2 3 32 4 63 3 1 03 5 13 9 63 11 14 Which I am close to achieving with: awk -v OFS='' 'NR == 1 { next}{ print $0, (NR > 1 && p1 == $1 ? " " (sum+=p2) : "")}{ p1 = $1 p2 = $2}' input > output However this is summing ALL the values in column 2, not just values with the same ID. So The output is correct for ID=1, but obviously gets worse: 1 21 2 21 4 41 6 822 1 82 2 92 3 112 4 1433 1 143 5 153 9 203 11 29 How can I alter my sum to only include the correct section? (rows with the same ID) | Increment the count after printing the current line. awk '{print $1, $2, sum[$1]; sum[$1] += $2}' file 11 2 01 2 21 4 41 6 822 1 02 2 12 3 32 4 633 1 03 5 13 9 63 11 15 This takes advantage of awk treating undefined variables as the empty string, or (in numeric context) as zero. If you don't waant the incremental sum 0 printed, use if ($2 != "") sum[$1] += $2 | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/630290",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/452268/"
]
} |
630,483 | I need to run a command that looks like this mycli --file test.zip --file another_test.zip How could I run that dynamically with all zip files in a directory? I'm sure I could pipe the files from a find command, but I don't know how to actually append them as arguments to another command and my bash-fu is not great | Using an array: unset -v argsdeclare -a argsfor file in *.zipdo args+=( --file "$file" )donemycli "${args[@]}" Or, POSIXly: set --for file in *.zipdo set -- "$@" --file "$file"donemycli "$@" Or, assuming GNU tools: find . -maxdepth 1 -name '*.zip' -printf '--file\0%f\0' | xargs -0 -- mycli A relevant difference between the array-based approach and the xargs -based one: while the former may fail with an "Argument list too long" error (assuming mycli is not a builtin command), the latter will not, and will run mycli more than once instead. Note, however, that in this last case all but the last invocation's argument lists may end with --file (and the following one start with a file name). Depending on your use case you may be able to use a combination of xargs ' options (e.g. -n and -x ) to prevent this. Also, note that find will include hidden files in its output, while the array-based alternatives will not, unless the dotglob shell option is set in Bash or, in a POSIX shell, both the *.zip and .*.zip globbing expressions are used. For details and caveats on this: How do you move all files (including hidden) from one directory to another? . | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/630483",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/20536/"
]
} |
630,626 | Is there an equivalent to Debian/Ubuntu's /var/run/reboot-required for Arch Linux to determine if a system restart is required? I'm looking for a comprehensive solution that also accounts for when critical libraries and the kernel are updated and a reboot is required to complete the upgrade. This is my current workaround which only accounts for the kernel: if [[ $(pacman -Q linux | cut -d " " -f 2) > $(uname -r) ]]; then # reboot...fi | I use this script to check if the boot kernel matches the current kernel and if a process is using any old libraries. #!/bin/bashget_boot_kernel() { local get_version=0 for field in $(file /boot/vmlinuz*); do if [[ $get_version -eq 1 ]]; then echo $field return elif [[ $field == version ]]; then # the next field contains the version get_version=1 fi done}rc=1libs=$(lsof -n +c 0 2> /dev/null | grep 'DEL.*lib' | awk '1 { print $1 ": " $NF }' | sort -u)if [[ -n $libs ]]; then cat <<< $libs echo "# LIBS: reboot required" rc=0fiactive_kernel=$(uname -r)current_kernel=$(get_boot_kernel)if [[ $active_kernel != $current_kernel ]]; then echo "$active_kernel < $current_kernel" echo "# KERNEL: reboot required" rc=0fiexit $rc Sample output: Xorg: /usr/lib/libedit.so.0.0.63Xorg: /usr/lib/libgssapi_krb5.so.2.2Xorg: /usr/lib/libk5crypto.so.3.1Xorg: /usr/lib/libkrb5.so.3.3Xorg: /usr/lib/libkrb5support.so.0.1Xorg: /usr/lib/libzstd.so.1.4.5# LIBS: reboot required5.10.8-arch1-1 < 5.10.10-arch1-1# KERNEL: reboot required If you only have processes using old libraries you can restart the processes instead of rebooting. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/630626",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/224048/"
]
} |
630,721 | This answer on opening all files in vim except [condition]: https://unix.stackexchange.com/a/149356/98426 gives an answer similar to this: find . \( -name '.?*' -prune \) -o -type f -print (I adapted the answer because my question here is not about vim) Where the negated condition is in the escaped parentheses. However, on my test files, the following find . -type f -not -name '^.*' produces the same results, but is easier to read and write. The -not method, like the -prune method, prunes any directories starting with a . (dot). I am wondering what are the edge cases where the -not and the -prune -o -print method would have different results. Findutils' infopage says the following: -not expr : True if expr is false -prune: If the file is a directory, do not descend into it. (and further explains that -o -print is required to actually exclude the top matching directory) They seem to be hard to compare this way, because -not is a test and -prune is an action, but to me, they are interchangeable (as long as -o -print comes after -prune) | First, note that -not is a GNU extension and is the equivalent of the standard ! operator. It has virtually no advantage over ! . The -prune predicate always evaluates to true and affects the way find walks the directory tree. If the file for which -prune is run is of type directory (possibly determined after symlink resolution with -L / -H / -follow ), then find will not descend into it. So -name 'pattern' -prune (short for -name 'pattern' -a -prune ) is the same as -name 'pattern' except that the directories whose name matches pattern will be pruned , that is find won't descend into them. -name '.?*' matches on files whose name starts with . followed by one character (the definition of which depends on the current locale) followed by 0 or more characters . So in effect, that matches . followed by one or more characters (so as not to prune . the starting directory). So that matches hidden files with the caveat that it matches only those whose name is also entirely made of characters , that is are valid text in the current locale (at least with the GNU implementation). So here, find . \( -name '.?*' -a -prune \) -o -type f -a -print Which is the same as find . -name '.?*' -prune -o -type f -print since AND ( -a , implied) has precedence over OR ( -o ). finds files that are regular (no symlink, directory, fifo, device...) and are not hidden and are not in hidden directories (assuming all file paths are valid text in the locale). find . -type f -not -name '^.*' Or its standard equivalent: find . -type f ! -name '^.*' Would find regular files whose name doesn't start with ^. . find . -type f ! -name '.*' Would find regular files whose name doesn't start with . , but would still report files in hidden directories. find . -type f ! -path '*/.*' Would omit hidden files and files in hidden directories, but find would still descend into hidden directories (any level deep) only to skip all the files in them, so is less efficient than the approach using -prune . | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/630721",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/98426/"
]
} |
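A small throwaway test tree makes the difference described above concrete:
mkdir -p demo/.hidden demo/visible
touch demo/.hidden/a demo/visible/b demo/.c
find demo -name '.?*' -prune -o -type f -print   # prints only demo/visible/b
find demo -type f ! -name '.*'                   # prints demo/visible/b and demo/.hidden/a
The second command still reports the file inside the hidden directory because only the file's own name is tested, which is exactly the edge case the answer explains.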
630,860 | I have a data file A.txt (field separator = \t) : Well Well Type Well Name Dye Target A1 Unknown HIGH-001 FAM ViroFAM A1 Unknown HIGH-001 HEX ViroHEX And a template file B.txt : kitSoftware Version = NOVA_v1Date And Time of Export = 07/02/2020 13:44:11 UTCExperiment Name =Instrument Software Version =Instrument Type = CFXInstrument Serial Number =Run Start Date =Run End Date =Run Operator =Batch Status = VALIDMethod = NovaprimeDate And Time of Export,Batch ID,Sample Name,Well,Sample Type,Status,Interpretive Result,Action*,Curve analysis,taq,205920777.1,A01,Unkn-01,taq,neg5,A02,Unkn-09,,,,,,,,,,*reporting. And I want to print replace the value after the = in the second line of B.txt with VIRO_v1 , but only when the pattern ViroHEX is present in the 5th column of A.txt . In order to do that I start something like : awk -F'\t' ' FNR==NR{ a[NR]=$2; next } $1=="Software Version"{ print $0,"VIRO_v1"; next } 1' B.txt FS=" =" B.txt > result.txt But I didn't figure it out the part with A.txt . Do you have an idea how to do that? | awk -F'\t' ' NR==FNR{ if ($5=="ViroHEX"){ viro=1 } next } viro && $1=="Software Version"{ $2="VIRO_v1" } 1' A.txt FS=" = " OFS=" = " B.txt > result.txt This replaces the second field ( NOVA_v1 ) with VIRO_v1 in the second file if the first field equals Software Version and ViroHEX is present anywhere in the 5th column of the first file. I'm assuming the field separator of the second file is <space>=<space> (not a tab). | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/630860",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/416180/"
]
} |
630,921 | I have a document that looks like this: Bob:This is my lineThis is also my lineAlfred:What a great day!What should we do!Jess:Its so hotLets go to the Beach The format is always the same, it's always speakerline1line2speakerline1line2 There is never additional lines, etc. I want the 'speaker' to be at the beginning of every line, so in my example it is: Bob: This is my lineBob: This is also my lineAlfred: What a great day!Alfred: What should we do!Jess: Its so hotJess: Lets go to the Beach I've tried extracting every 'nth' line using awk awk '{if (NR % 3 == 1) print $0}' but I'm not sure how to add it back to the beginning of the next two lines. Thanks for your help | awk 'NR%3==1{ name=$0; next } { print name, $0 }' file If the condition is true, save the record to variable name and continue with the next record. Print name and current record otherwise. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/630921",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/240384/"
]
} |
631,065 | I have a directory ~/Documents/machine_learning_coursera/ . The command find . -type d -name '^machine' does not find anything I also tried find . -type d -regextype posix-extended -regex '^machine' so as to match the the beginning of the string and nothing. I tried also with -name : find . -type d -regextype posix-extended -regex -name '^machine' and got the error: find: paths must precede expression: `^machine' What am I doing wrong here? | find 's -name takes a shell/glob/ fnmatch() wildcard pattern, not a regular expression. GNU find 's -regex non-standard extension does take a regexp (old style emacs type by default), but that's applied on the full path (like the standard -path which also takes wildcards), not just the file name (and are anchored implicitly) So to find files of type directory whose name starts with machine , you'd need: find . -name 'machine*' -type d Or: find . -regextype posix-extended -regex '.*/machine[^/]*' -type d (for that particular regexp, you don't need -regextype posix-extended as the default regexp engine will work as well) Note that for the first one to match, the name of the file also needs to be valid text in the locale, while for the second one, it's the full path that needs to be valid text. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/631065",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/440181/"
]
} |
631,075 | How can one retrieve status of terminal settings like smam and rmam ? Reason is that I set rmam by: tput rmam in script, then proceed to set smam on exit: tput smam But if terminal has rmam set when script starts, I do not want to set smam on exit. How can this be done? | On terminal emulators which support it, you can use the \033[?7$p escape ("Request DEC private mode") to query that parameter ( 7 => Auto-wrap Mode): decrqm()( exec </dev/tty t=$(stty -g) trap 'stty "$t"; return' EXIT QUIT INT TERM stty -icanon -echo time 1 min 0 e=$(printf '\033') printf "${e}[$1\$p" >/dev/tty case $(dd count=1 2>/dev/null) in "${e}[$1;1\$y") echo on;; "${e}[$1;2\$y") echo off;; *) echo unknown;; esac)$ tput smam # printf '\033[?7h'$ decrqm '?7'on$ tput rmam # printf '\033[?7l'$ decrqm '?7'off A better approach would be to save that setting upon starting the script with \033[?7s and restore it upon exiting with \033[?7r : save_am(){ printf '\033[?7s'; }restore_am(){ printf '\033[?7r'; }save_amtput rmam..restore_am But many terminal emulators (notably screen and tmux ) do not support those escapes. At least not by default. So all this is pure trivia -- it's not like you can use it for anything practical ;-) | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/631075",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/140633/"
]
} |
631,093 | I have a Dell Inspiron 15R N5110 Laptop (Core i5 2nd Gen/4 GB/500 GB/Windows 7). I previously installed Windows 10 on my system, but my computer was very slow, so I decided to install Linux on it. It is currently running Windows 7. My problem is: the only drivers I have for my laptop are Windows 7's and I can't find Linux drivers for it. How can I download my laptop's drivers for Linux? And does my laptop support Linux? | It is very unlikely that you will need any additional device drivers other than those that already come with most popular Linux distributions, specially on non brand new laptops. The only exception regards GPU devices used for games, such as NVidia and AMD Radeon GPUs. In such cases, some manufacturers sometimes provide their own device drivers but, even though, most are also supported by Linux community. Anyway, a possible lack of Linux support from the manufacturer might not prevent you from installing Linux: you can install the system and later install any manufacturer-provided device driver (if available/necessary). Though again, it is very rare on laptop devices. If you are not very familiar with Linux, I suggest you choose a distribution with a friendly, intuitive interface, such as Linux Mint Cinnamon Edition - that would be definitely my pick, if you ask me for a recommendation. You can also try Ubuntu, Pop!_OS, Elementary OS, DeepIn or Fedora. Hope this helps. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/631093",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/452982/"
]
} |
631,097 | My question applies to Debian/Ubuntu: Prior to updating/upgrading my Linux environment I want to implement best-practice procedures by taking a snapshot of my components' versions through a simple script that records into a text file version numbers for OS, mySQL, PHP and so on. However I can't find anywhere how to reliably obtain version number for phpMyAdmin via command line . If there is no better way I'm happy to grep files such as /usr/share/phpmyadmin/Config.php or /usr/share/phpmyadmin/package.json . Am I missing something? I know how to check version number via phpMyAdmin's web interface but that is not helpful for what I need. | If you’re also following the best practice of using packages, dpkg -l phpmyadmin will tell you which version of the phpmyadmin package is installed (assuming it is installed). If you only want the version, dpkg-query -W -f '${version}\n' phpmyadmin will only output the version of phpmyadmin . If you’re not using a packaged version, but you know where the package.json file lives, jq .version /path/to/package.json will give you the version. If you want to query the version from the web server, you need to look for PMA_VERSION in the home page: curl -s https://example.org/phpmyadmin | grep -E -o 'PMA_VERSION:"[[:digit:].]+"' | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/631097",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/373171/"
]
} |
631,099 | I recently got a client-VPN at one of my Debian servers in my home network. I want to use it as another gateway in my network for certain devices. This is something I have succeeded with so that is all fine, just want to give you the backstory. Now, I have an RDP server (WS 2019) that I'm able to connect to through WAN on my VPN as long as I don't use iptables -P INPUT DROP. However, I'm using port forwarding, so I'm very confused why those ports won't work. I started using iptables yesterday, so it might be something very obvious however I don't know how to google this. My setup: $ iptables -L -n Chain INPUT (policy DROP) target prot opt source destination ACCEPT tcp -- 192.168.0.0/24 0.0.0.0/0 tcp dpt:22 Chain FORWARD (policy ACCEPT) target prot opt source destination ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 tcp dpt:11111 Chain OUTPUT (policy ACCEPT) target prot opt source destination $ iptables -L -n -t nat Chain PREROUTING (policy ACCEPT) target prot opt source destination DNAT tcp -- 0.0.0.0/0 0.0.0.0/0 tcp dpt:11111 to:192.168.0.50:3389 <-(RDP server) Chain INPUT (policy ACCEPT) target prot opt source destination Chain POSTROUTING (policy ACCEPT) target prot opt source destination SNAT all -- 192.168.0.0/24 0.0.0.0/0 to:[my public VPN IP] Chain OUTPUT (policy ACCEPT) target prot opt source destination To be clear, the only thing I have to do to make everything work again is set policy for INPUT to ACCEPT, but I don't want to do that since it's a router to WAN. So, do the policy for INPUT also define the traffic for forward chain? How do I solve this so I use the DROP policy and still forward the 11111 traffic to 3389 at my local RDP server? | | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/631099",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/452999/"
]
} |
631,217 | AMD, Intel, Red Hat, and SUSE have defined a set of "architecture levels" for x86-64 CPUs. For example x86-64-v2 means that a CPU supports not only the basic x86-64 instruction set, but also other instructions like SSE4.2, SSSE3 or POPCNT. How can I check which architecture levels are supported by my CPU? | This is based on gioele’s answer; the whole script might as well be written in AWK:
#!/usr/bin/awk -f
BEGIN {
    while (!/flags/) if (getline < "/proc/cpuinfo" != 1) exit 1
    if (/lm/&&/cmov/&&/cx8/&&/fpu/&&/fxsr/&&/mmx/&&/syscall/&&/sse2/) level = 1
    if (level == 1 && /cx16/&&/lahf/&&/popcnt/&&/sse4_1/&&/sse4_2/&&/ssse3/) level = 2
    if (level == 2 && /avx/&&/avx2/&&/bmi1/&&/bmi2/&&/f16c/&&/fma/&&/abm/&&/movbe/&&/xsave/) level = 3
    if (level == 3 && /avx512f/&&/avx512bw/&&/avx512cd/&&/avx512dq/&&/avx512vl/) level = 4
    if (level > 0) { print "CPU supports x86-64-v" level; exit level + 1 }
    exit 1
}
This also checks for the baseline (“level 1” here), only outputs the highest supported level, and exits with an exit code matching the first unsupported level. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/631217",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/14861/"
]
} |
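One possible way to run the script above, assuming it has been saved as check-x86-64-level (the file name is arbitrary):
chmod +x check-x86-64-level
./check-x86-64-level          # e.g. "CPU supports x86-64-v3"
echo "exit status: $?"        # level + 1 on success, 1 if even the baseline is unsupported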
631,344 | Trying to get a unique list of shells on my system. When I run this command: cat /etc/passwd | cut -d ':' -f 7 | uniq I get: /bin/bash /bin/sync /sbin/shutdown /sbin/halt /bin/bash I can't figure out why uniq isn't doing what I want. What don't I understand? I tried taking this output, making a copy, replacing one of the /bin/bash lines with the other in the copy, and then diff ing the files and I got no output, so I'm guessing not a hidden character? | It's because how uniq works, from the man: Note: 'uniq' does not detect repeated lines unless they areadjacent . You may want to sort the input first, or use 'sort -u'without 'uniq'. So, better use, no need of cat : $ cut -d ':' -f 7 /etc/passwd | sort -u/bin/bash/bin/false/bin/sync/usr/sbin/nologin Or, one command awk -F: '{ print $7 | "sort -u" }' /etc/passwd @kojiro's suggestion: awk -F: '!s[$7]++{print $7}' /etc/passwd | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/631344",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/40454/"
]
} |
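A two-line demonstration of the adjacency rule explained above, using printf to fake the input:
printf '%s\n' /bin/bash /bin/sync /bin/bash | uniq      # all three lines survive: the duplicates are not adjacent
printf '%s\n' /bin/bash /bin/sync /bin/bash | sort -u   # /bin/bash and /bin/sync only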
631,376 | I would like to shorten strings with Bash. Unfortunately this does not work as planned. I would like to have only the front part with the letters. It has unfortunately one of the variants below still works similar. How must it be written correctly? var="backup_user-data_2101220046.tgz"var2="${var/_[0-9]{10}.tgz/''}"var2="${var/'\_[0-9]+\.tgz'/''}" | In the ${var/pattern/replacement} ksh93 operator, pattern is interpreted as a shell wildcard pattern, not a regexp. In ksh93, you can switch to basic, extended or augmented regexps with ~(G) , ~(E) , ~(X) respectively, so you could do: var2=${var/~(E)_[0-9]{10}\.tgz$/} For instance. Or use its extended glob patterns: var2=${var/%_{10}([0-9]).tgz/} (same as var2=${var%%_{10}([0-9]).tgz} ) bash , like zsh did copy the ${var/pattern/replacement} operator from ksh93, but its wildcard operators are much more limited. With the extglob option enabled, it supports the extended operators of ksh88 , but not the more advanced ones of ksh93 and in particular, not the {x,y}(...) one. It does support the +(...) one though. So you could do: shopt -s extglobvar2=${var/%_+([0-9]).tgz/} In bash , for extended regexp support, you can use the =~ operator of its [[...]] construct: regexp='^(.*)_[0-9]{10}\.tgz$'if [[ $var =~ $regexp ]]; then var2=${BASH_REMATCH[1]}else var2=$var1fi For completeness, while zsh did copy ksh93's ${var/pattern/replacement} and supports the ksh88 wildcard extensions with the kshglob option, it has its own extended glob operators with the extendedglob option. With those the equivalent of ERE {x,y} is (#cx,y) , so you can do: set -o extendedglobvar2=${var/%_[0-9](#c10).tgz} (note that of the three shell, zsh is the only one where [0-9] matches only on 0123456789. ksh93 and bash typically match on thousand more characters such as ⑱, , etc in modern locales). In zsh , you can do regexp matching with =~ as well. That's ERE by default but can be changed to PCRE with the rematchpcre option. set -o rematchpcreif [[ $var =~ '^(.*)_\d{10}\.tgz\z' ]]; then var2=$match[1]else var2=$varfi (with ERE or PCRE whether \d or [0-9] match on 0123456789 only or not depends on the locale and the system's regexp library used by zsh, like in bash). | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/631376",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/453269/"
]
} |
631,390 | I am using the SED command to strip out foreign language and other non-keyboard characters from very large text files: example: sed 's/[^a-zA-Z0-9]//g' The above command keeps any line that contains only alpha-numeric characters, which is close to what I want. The problem is that it also strips away any lines that contain common symbols like !@#$% and so on. I want to keep those. I tried searching for a bullion command something like !-), or something. But I could not find anything like that. So how do I filter out Arabic an Russian and untype-able characters from my lists? (ideally, I don't want to just nuke the character, I want to nuke the whole line where its found.) | | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/631390",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/453281/"
]
} |
631,438 | I have sudo 1.8.5p2-1+nmu3+deb7u1 installed in Debian Wheezy and I need to update it to fix the CVE-2021-3156 vulnerability. Unfortunately, there are no security patches for Wheezy anymore and upgrading to Stretch is not an option. I can not install the sudo package from Stretch because it depends on libc6 (>= 2.17) and the version available in Wheezy is 2.13. I have not found a patched sudo package for Wheezy. Do you know if it exists and where to find it?Do I have to patch myself the source code? I downloaded the source code of sudo 1.8.5p2-1+nmu3+deb7u1 and checked the patches for sudo_1.8.19p1-2.1+deb9u3 but the source code is quite different and I am not sure how to patch the code. | I think the simplest option for you is to build the Debian 9 version of sudo : apt-get install devscripts libpam0g-dev libldap2-dev libsasl2-dev libselinux1-dev autoconf autotools-dev bison flex libaudit-devdget -u http://security.debian.org/pool/updates/main/s/sudo/sudo_1.8.19p1-2.1+deb9u3.dscd sudo-1.8.19p1debian/rules binary If the tests fail (they failed for me on /dev/console ), disable them and build again: sed -i '/build-simple check/d' debian/rulesdebian/rules binary You will end up with the packages in the parent directory, you can install those you need from there with dpkg -i . Once all this is done, you can remove the build-dependencies: apt-get purge devscripts libpam0g-dev libldap2-dev libsasl2-dev libselinux1-dev autoconf autotools-dev bison flex libaudit-devapt-get --purge autoremove The same can be done using the latest sources from Debian unstable, as suggested by Artem : apt-get install devscripts libpam0g-dev libldap2-dev libsasl2-dev libselinux1-dev autoconf bison flex libaudit-dev zlib1g-devdget -u https://deb.debian.org/debian/pool/main/s/sudo/sudo_1.9.5p2-1.dsccd sudo-1.9.5p2/debian/rules binary (Traditionally, one would use apt-get build-dep and dpkg-buildpackage -uc -us , but that won’t work here without making more changes to the package — it has some build-dependencies which can’t be satisfied in Wheezy, but the package builds fine without them.) | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/631438",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/426747/"
]
} |
631,461 | I have some servers running Ubuntu 18.04.5 LTS In last update of sudo package I can see that sudo:amd64 1.8.21p2-3ubuntu1.4 has been installed on 26/01/2021 (the same day that Heap-based buffer overflow in Sudo vulnerability, CVE-2021-3156 was published) As per this list , sudo version 1.8.21p2 is impacted by this vulnerability. However, if I run the test command to check if the systems are vulnerable, I get: # sudoedit -s /usage: sudoedit [-AknS] [-r role] [-t type] [-C num] [-g group] [-h host] [-p prompt] [-T timeout] [-u user] file ... Which is the output when the system is not impacted by this vulnerability. Are my systems vulnerables or not ? Is there any incongruence between the versions list and the command output ? | The list of versions you’re looking at only documents versions of sudo released by the sudo project itself. Distributions such as Ubuntu typically add patches to address such security vulnerabilities, instead of upgrading to the latest version of sudo . To determine whether your version is affected, you need to look at the security information provided by your distribution ; in this instance, the relevant notice is USN-4705-1 , which indicates that your version is fixed. You can also look at the package changelog, in /usr/share/doc/sudo/changelog.Debian.gz ; this should list the CVEs addressed by the version currently installed on your system (if any): * SECURITY UPDATE: dir existence issue via sudoedit race - debian/patches/CVE-2021-23239.patch: fix potential directory existing info leak in sudoedit in src/sudo_edit.c. - CVE-2021-23239 * SECURITY UPDATE: heap-based buffer overflow - debian/patches/CVE-2021-3156-pre1.patch: check lock record size in plugins/sudoers/timestamp.c. - debian/patches/CVE-2021-3156-pre2.patch: sanity check size when converting the first record to TS_LOCKEXCL in plugins/sudoers/timestamp.c. - debian/patches/CVE-2021-3156-1.patch: reset valid_flags to MODE_NONINTERACTIVE for sudoedit in src/parse_args.c. - debian/patches/CVE-2021-3156-2.patch: add sudoedit flag checks in plugin in plugins/sudoers/policy.c. - debian/patches/CVE-2021-3156-3.patch: fix potential buffer overflow when unescaping backslashes in plugins/sudoers/sudoers.c. - debian/patches/CVE-2021-3156-4.patch: fix the memset offset when converting a v1 timestamp to TS_LOCKEXCL in plugins/sudoers/timestamp.c. - debian/patches/CVE-2021-3156-5.patch: don't assume that argv is allocated as a single flat buffer in src/parse_args.c. - CVE-2021-3156 | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/631461",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/206914/"
]
} |
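A quick way to perform the changelog check suggested above from the command line (the path is the standard Debian/Ubuntu location; adjust if your docs live elsewhere):
zgrep CVE-2021-3156 /usr/share/doc/sudo/changelog.Debian.gz \
    && echo "installed sudo package lists a fix for CVE-2021-3156" \
    || echo "CVE-2021-3156 not mentioned in the package changelog"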
631,501 | Does anybody know why this is happening and how to fix it? me@box:~$ echo "eyJmb28iOiJiYXIiLCJiYXoiOiJiYXQifQ" | base64 -di{"foo":"bar","baz":"bat"}base64: invalid input | If you do the reverse, you'll note that the string isn't complete: $ echo '{"foo":"bar","baz":"bat"}' | base64eyJmb28iOiJiYXIiLCJiYXoiOiJiYXQifQo=$ echo "eyJmb28iOiJiYXIiLCJiYXoiOiJiYXQifQo=" | base64 -di{"foo":"bar","baz":"bat"} Extracts of Why does base64 encoding require padding if the input length is not divisible by 3? What are Padding Characters? Padding characters help satisfy length requirements and carry no meaning. However, padding is useful in situations where base64 encoded stringsare concatenated in such a way that the lengths of the individualsequences are lost, as might happen, for example, in a very simplenetwork protocol. If unpadded strings are concatenated, it's impossible to recover theoriginal data because information about the number of odd bytes at theend of each individual sequence is lost. However, if padded sequencesare used, there's no ambiguity, and the sequence as a whole can bedecoded correctly. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/631501",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/38649/"
]
} |
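The 34-character string in the 631,501 entry is two characters short of a complete 4-character base64 group, which is what triggers the complaint; restoring the two = padding characters (or re-encoding with the trailing newline, as the answer does) makes it decode cleanly. A sketch assuming GNU coreutils base64:

    # 34 characters is not a multiple of 4; the two '=' complete the last group
    echo "eyJmb28iOiJiYXIiLCJiYXoiOiJiYXQifQ==" | base64 -d
    # -> {"foo":"bar","baz":"bat"}   (no trailing newline, since none was encoded)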
631,542 | In the Linux-Ubuntu terminal: ifconfig throws bash: ifconfig: command not found locate command does the same. sudo yum install net-tools throws: bash: yum: command not found as well, but I might also have made a spelling mistake there when I tested it. What does it mean, do I have to install ifconfig ? Or are there alternative commands? | Different distributions of linux have different tools to install packages, known as package managers - you need to use the right one for your distribution. Yum is the package manager for Red Hat systems. Instead, you need to use apt, Ubuntu's package manager. Try: sudo apt install net-tools locate That pattern should work for most packages on Ubuntu. net-tools is the package which contains ifconfig on Ubuntu. HOWEVER, ifconfig is well out of date and has been for several years. You should use ip , which should be installed already in Ubuntu. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/631542",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/453410/"
]
} |
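Put together, the sequence from the 631,542 answer on a fresh Ubuntu system looks like this (a sketch; ip comes from the iproute2 package and is normally preinstalled):

    sudo apt update                 # refresh the package lists first
    sudo apt install net-tools      # provides the legacy ifconfig
    ip addr show                    # modern replacement for ifconfig, already available
    ip route show                   # modern replacement for the old route command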
631,587 | I have a file with almost 2000 lines. the file format is like this: 12 340 22 37 91 2306 2370 912 1342 72 9306 3 I'm trying to write a bash script to remove repeated first columns and organize by second column. I expect this output: 0 2 9 1 22 3 7 97 9 12 34 134306 237 3 I tried some piece of code but I didn't get the desired output. how can I achieve that and what should I use? | You can use sort and uniq to delete the repeated lines and then use awk arrays indexed by the first column value, and then append to each value of the array each second column, for example: sort test.txt | uniq | awk '{if(col[$1])col[$1]=col[$1]" "$2; else col[$1]=$2;}; END{for (i in col) print i, col[i]}' being test.txt your input file. Note that before adding a new column to the correct value of the array you have to check if the array is empty or not, just to add the space between values. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/631587",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/314925/"
]
} |
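A slightly more compact variant of the 631,587 answer, with sort -u doing the de-duplication; this sketch assumes the whitespace-separated two-column layout shown in the question:

    sort -u test.txt | awk '{ col[$1] = ($1 in col) ? col[$1] " " $2 : $2 }
                            END { for (i in col) print i, col[i] }'

As with the original command, the order of the for (i in col) loop is not guaranteed, so pipe the result through sort -n if the groups must come out in numeric order.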
631,601 | After installing Pop OS 20.04 the internal mic stopped working in my Lenovo IdeaPad 330-15ARR laptop.What have I tried ? I'd referred to the official wiki here : All kinds of troubleshooting for audio related issues which was given in following post's answer . The workarounds helped to some extent but the annoying distortion sound still didn't stopped. | You can use sort and uniq to delete the repeated lines and then use awk arrays indexed by the first column value, and then append to each value of the array each second column, for example: sort test.txt | uniq | awk '{if(col[$1])col[$1]=col[$1]" "$2; else col[$1]=$2;}; END{for (i in col) print i, col[i]}' being test.txt your input file. Note that before adding a new column to the correct value of the array you have to check if the array is empty or not, just to add the space between values. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/631601",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/453459/"
]
} |
631,652 | I'm quite certain this has been asked and answered before, however, I cannot find the answer to my specific use-case. I've got this file with accented characters in it: > ~ cat fileëêÝ,textÒÉ How would I convert them to their respective non-accented letters? So the outcome would be something along the lines of: > ~ convert file out.txt> ~ cat out.txteeY,textOE Note that the actual file itself contains more characters. | You can try iconv , with the //TRANSLIT (transliteration) option Ex. given $ cat fileëêÝ,textÒÉ then $ iconv -t ASCII//TRANSLIT fileeeY,textOE | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/631652",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/200654/"
]
} |
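Written out as the "convert file out.txt" step the 631,652 question asks for (a sketch; the exact transliterations produced by //TRANSLIT depend on the iconv implementation and the current locale):

    iconv -f UTF-8 -t ASCII//TRANSLIT file > out.txt
    cat out.txt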
631,684 | I have a raspberry pi that I connected to my home network, but each time I reboot it the IP address changes. Instead of calling ssh user@ipaddress each time, I want to run a bash script to know which is the address that works. I'm trying to run ssh user@ipaddress several times changing the last number of the IP address. I don't know how bash works, so I attempted to make a file .sh to do it: for i in {1..100}do call=`ssh [email protected].$i` if [['$call'==*'user'*]]; then echo "$i" else : fidone My problems are: I don't print anything It is stuck in the first call and it doesn't make the loop So I want to know if there is a bash function to pass over a command when it's another device, because I know the name of my user and I only need to find my user name in the command output of the first line, and don't need to wait for more output. | The Right Way® to do this, would be to either set up the Pi to have a static IP so it doesn't change on reboot or to set up a local DNS server to resolve the hostname. Setting up the static IP is by far the simplest. You can find dozens of tutorials if you just search for "raspbian static IP". Here's one: https://thepihut.com/blogs/raspberry-pi-tutorials/how-to-give-your-raspberry-pi-a-static-ip-address-update Now, your script doesn't work for a multitude of reasons. First of all, this will never work: call=`ssh [email protected].$i` If the machine doesn't let you in or isn't accessible, then it will print an error, but that error is printed to standard error, not standard output, therefore $call will be empty. If it does work and you do ssh into the machine, then you will be logged in and, once more, $call will be empty since there is nothing returned. In any case, your if is a syntax error. This: if [['$call'==*'user'*]]; then should be this: if [[ "$call" == *user* ]]; then You need spaces after and before the [[ and ]] , and if you put a variable in single quotes ( '$call' ), then the variable isn't expanded: $ echo '$call'$call What you probably want to do is to try to log in, run a command, and if that runs you store the ip. Something like this: #!/bin/bashfor i in {1..100} do ## try to run a command and, if it works, we know the ip. ## you can use the command 'true' for this if ssh [email protected].$i true; then echo "The IP is 192.168.0.1.$i" break fidone This, however, is really, really inefficient and slow. Don't do this. Just set up a static IP. Some other options that might help are: Get the list of active IPs on the network nmap -sP 192.168.0.* Use ping instead of ssh to see which machine is up: for i in {1..100} do if ping -c1 192.168.0.$i 2>/dev/null; then echo "The first IP that is UP is 192.168.0.1.$i" break fi done But really, if you need help setting up a static IP, then please post a new question about that. This just isn't a good solution to your problem. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/631684",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/360900/"
]
} |
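If a static IP or local DNS really is not an option, the ping sweep suggested in the 631,684 answer can be tightened so it only prints hosts that actually respond. A sketch; adjust the subnet to match your network, and note that -W (per-probe timeout in seconds) is the Linux iputils flag:

    #!/bin/bash
    for i in {1..100}; do
        if ping -c1 -W1 "192.168.0.$i" >/dev/null 2>&1; then
            echo "192.168.0.$i is up"
        fi
    done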
631,734 | I know other questions similar to this have been asked, but I think this situation is a bit different. System is a Lenovo T430 with 2xSSDs (one is in the optical drive bay). Original install was Win10 in primary SSD. Added Mint to make it dual boot, all good. Decided to make it triple boot with Kali on the Secondary SSD - STILL good (Kali picked up both Mint and Win 10). All worked great until I got the bright idea of installing XP (there’s a reason for it, don’t worry) alongside Kali on the Secondary SSD. I pulled the Primary SSD and put the Secondary in its place. I imaged the original Kali first, then wiped the drive and installed XP. Once I got that going, I installed the newest version of Kali. During the process, it asked if I wanted to force UEFI. I initially said no, and then it seemed to want to wipe the drive, which I obviously didn’t want to do either. So, I said yes, force UEFI. Needless to say, Kali installed, but didn’t see XP (which is fine, I had partitioned them separately so it didn’t overwrite the XP install. I booted to the XP cd, fixed the MBR, and it booted to XP but Kali was invisible. Ok, I thought, whatever, maybe they don’t play nice together. No worries, I’ll just take images of each partition and reimage that drive when I need to use one or the other. Easy peasy. So, I got the primary and secondary back to original configuration (Win 10 + Mint on primary, XP and unreachable Kali on secondary), wiped the secondary (after imaging the XP and Kali parts separately), and installed Kali again as the sole OS on the secondary. The message popped up again saying “do you want to force UEFI?” I figured hey, I’m using all modern O/S now, so no problems and said yes. Only this time, it won’t see Windows on the Primary. It sees Mint and boots to it, but not Windows. The partitions look to still be there in gparted. I tried using Win 10 repair, no go. Tried reinstalling Mint, thinking maybe it would pick it up and revert to the original config, but no - Mint fails when trying to write Grub. So, I’m stuck. I already wiped out The secondary drive so it’s blank (take it out of the equation for now - less complicated). How do I get Windows back? I know Kali didn’t really eat it, it just overwrote something that points to it. I know it’s there, and I have a lot of things I need on it. Thanks in advance for any help. I know just enough about Linux to get myself into trouble (clearly), but not enough to say I’m proficient. Edit: photos are added. It looks to me like SSD2 is GPT and SSD1 is MBR? Also, both are 2.5” SSDs, this laptop is too old for NVMe drives. Thanks! Help! | OK, some background information first: An OS that is booted in UEFI-native way will have an ability to access the boot configuration while the OS is running as a set of UEFI NVRAM boot variables; this is part of the UEFI specification. In Linux, the most user-friendly way is the efibootmgr command; in Windows, the bcdedit command can access the UEFI boot variables when run as an Administrator. To view the boot variables, run efibootmgr -v as root in Linux, or bcdedit /enum FIRMWARE as Administrator in Windows. Some UEFI firmware implementations won't offer complete access to UEFI boot variables in the "BIOS" configuration menu, but insist on a simplified BIOS-like boot disk selection. This can bite you when trying to build advanced multi-boot scenarios. You'll need to be aware of the existence of the UEFI boot variables and be prepared to edit them if the various installers get them wrong. 
If you have a NVMe SSD, your system firmware needs to specifically support booting from it, as a NVMe SSD is not at all like a SATA drive. Some UEFI firmwares will only support booting from a NVMe device in UEFI mode. This usually does not prevent the OS from being able to access a NVMe device, if it has a NVMe driver available. Most UEFI-aware OS installers will detect the mode the OS installer is booted in (BIOS or UEFI) and will install a matching type of bootloader, no questions asked. Kali apparently can offer "forcing UEFI" i.e. installing an UEFI bootloader even if the installer is booted BIOS-style. With BIOS-style boot, the MBR of a disk can only be occupied by one bootloader/manager at a time; usually you'll choose the one that's most capable of booting multiple OSs (e.g. GRUB). If other OSs overwrite the MBR, you'll need to be able to boot the "designated MBR manager OS" using external boot media and rewrite the MBR. With UEFI-style boot, bootloaders are contained in an ESP partition (basically a FAT32 partition) in a standardized directory structure. The bootloaders of multiple OSs can coexist in a single ESP partition just fine. But there is a single "magic" bootloader filename that the UEFI firmware will seek if there are no UEFI boot variables to direct it to a specific bootloader file: on x86_64 hardware, it is \EFI\BOOT\BOOTX64.efi . Windows will normally place a second copy of its \EFI\Microsoft\Boot\bootmgfw.efi file in this location to enable booting Windows even if the UEFI NVRAM boot variables are lost (e.g. because of a BIOS update/reflash). Kali's "Force UEFI?" may or may not mean writing a copy of UEFI GRUB into \EFI\BOOT\BOOTX64.efi of the ESP partition instead. Different OSs have different levels of UEFI support and the choice of boot method may be tied to the choice of partitioning type: Windows XP (the common 32-bit version) cannot boot UEFI-style and cannot access GPT-partitioned disks. Only BIOS-style boot capable. A 64-bit version of Windows XP (rare, finding drivers may be difficult) can access GPT-partitioned disks but requires a MBR-partitioned system disk to boot from. Only BIOS-style boot capable. Windows 10 can support both boot styles, but requires a MBR-partitioned system disk to boot BIOS-style and a GPT-partitioned system disk to boot UEFI-style. You cannot mix and match. Linux can usually be configured to boot in any combination, although the more esoteric combinations may require special attention. UEFI on MBR-partitioned disk requires a FAT32 partition with partition type 0xef to contain the UEFI bootloader(s); BIOS-style boot on GPT-partitioned disk requires a BIOS that supports GPT and a special "biosboot" partition to contain the part of GRUB that's normally embedded between the MBR and the beginning of the first partition, as this space is not available in GPT partitioning. Unlike Windows XP, Windows 10 needs multiple partitions. When booting UEFI-style, it needs an ESP partition (which may be shared with other OSs), a main Windows system partition (typically the C: drive) and a small recovery partition. On new installations, there is usually also a "Microsoft reserved" partition, although it is not technically absolutely necessary: installations upgraded from earlier versions of Windows might not have it. Most bootloaders/boot managers can only boot OSs using the same boot style as the bootloader. 
If you have a dual-boot with one legacy OS and one UEFI-native OS, the only way to switch between the OSs may be to use the BIOS menus to switch between boot modes or "UEFI/legacy first" preferences. The rEFInd Boot Manager is an UEFI-native bootloader that apparently can in some cases start up a BIOS-style bootloader, but that is not guaranteed to work with all systems; you may need to try it and see if it works for you. It's useful if your system's BIOS menus offer a good selection of boot method options: enable/disable Compatibility Support Module = BIOS-style boot capability the ability to restrict to BIOS-style boot methods only prefer UEFI/BIOS style boot method when booting from removable media prefer UEFI/BIOS style boot method when booting from HDD/SSD or even the capability to include both BIOS and UEFI-style boot targets in the boot order list simultaneously Some laptops or name-brand desktops may offer a simplified BIOS menu with very limited configurability. In these cases, you may have to figure out whether the system prefers UEFI or BIOS, and in worst cases you may have to create OS installation media with the "wrong" type of bootloader intentionally disabled (for USB sticks, delete \EFI\boot\bootx64.efi to make it BIOS-only, or replace the MBR boot code with a valid non-boot MBR to make it UEFI-only). It sounds like your primary disk is GPT-partitioned and the OSs on it probably use UEFI. To confirm this, please run fdisk -l and edit the output into your original question. If that's true, your UEFI boot variables may currently be misconfigured and/or the Windows UEFI bootloader (located as /boot/efi/EFI/Microsoft/Boot/bootmgfw.efi as viewed from Mint) may be damaged. Please run sudo efibootmgr -v in Linux to check the current state of your UEFI boot variables and edit the output into your original question, or if it's quite long, e.g. put it into a pastebin site and link it into your question. The most convenient way to visualize the state of your ESP partition would probably be to run sudo tree --charset ASCII /boot/efi from Linux. Please add that to your original question too. To make it shorter, you can omit the sub-directories of the /boot/efi/EFI/Microsoft/Boot directory, as there are multiple language-specific directories. Armed with this information, I (or someone else in this StackExchange) will probably be able to help you without resorting to blind guesswork. From the pictures, your sda disk is partitioned MBR-style, but the efibootmgr -v output includes a Windows Boot Manager line, that indicates the system was booting Windows in UEFI style at some past point. UEFI variables identify the ESP partition they refer to by the GPT partition unique GUID ( PARTUUID in Linux), and the GUID on the Windows Boot Manager line does not match GUID on kali 's line. On the other hand, the ubuntu line refers to a MBR partition, and the line includes the value 0xd1e9685 which matches exactly the disk identifier of sda . Based on this, it looks like something like this happened: Either the sda disk was converted from GPT to MBR at some point, or both disks were already present when Windows was installed, and sda was already partitioned MBR-style. But the Windows installer was booted in GPT-style and so it looked for a place to add a ESP partition onto a GPT-partitioned disk for proper UEFI-style boot. So it formatted the secondary SSD in GPT-style and placed the ESP partition and the Windows bootloader within it instead, since it was probably completely uninitialized at that time. 
(This tendency to place the ESP on different disk from the rest of Windows if given a chance is a known issue with Windows 10 installer. The standard recommendation is to temporarily unplug or disable any other disks before running the Windows 10 installer if your system has more than 1 disk.) When installing Mint, the installer was again booted UEFI-style, but unlike Windows installer it won't touch disks unless explicitly told to, so it created its own ESP partition to a MBR-partitioned disk ( sda5 , partition type 0xef). The first Kali apparently also created its own ESP partition on the secondary SSD, but in GPT partitioning, each partition has an unique GUID and UEFI uses it to identify which ESP partition each OS uses to boot, so this did not cause any mix-ups. When you disconnected the primary SSD and put the secondary one in its place for installing XP, you may have seen a single partition with type 0xee on it. This was a dummy MBR partition table that's part of the GPT partition standard, to indicate to any GPT-unaware operating systems that the disk is in use. But you assumed the disk was unused, and ignored it - so as a result, the Windows bootloader was overwritten without you being aware of it. Your second installation of Kali must have also been booted UEFI-style, and it created a ESP partition on the MBR - just like Mint did. Kali's "Force UEFI" probably means two things: it will remove any existing MBR boot code, so the disk will be unbootable MBR-style it will also write a copy of its bootloader into \EFI\BOOT\BOOTx64.efi on the ESP partition. As a result, you now had two OSs with different boot methods on the same disk. Kali's UEFI GRUB cannot boot Windows XP because that would require switching the BIOS compatibility back on when jumping from GRUB to the XP bootloader, and GRUB does not know how to do that. Your system appears to prefer booting legacy-style over UEFI, as after fixing the XP's MBR the firmware went straight back into booting BIOS-style... which makes switching into Kali's UEFI GRUB impossible, as the 16-bit BIOS compatibility mode has no hope of meaningfully using UEFI GRUB's 64-bit code. When you moved the disks back into their original locations, and wiped the secondary disk again, the Windows bootloader was now wiped twice over. The Windows 10 boot recovery got confused as it saw the majority of Windows (configured to boot UEFI-style) on a MBR-partitioned disk and no ESP partition on a GPT-partitioned disk anywhere in sight. The Mint installer may have also gotten confused somehow, but that is probably not the most important thing. The best way out of this situation and into a sane configuration is probably to use Mint to access the Windows sda2 disk, and copy everything important out of there onto removable media or other safe location: sudo mkdir /windows_csudo mount -t ntfs-3g /dev/sda2 /windows_ccd /windows_ccp <whatever> </some/where/safe> Then disconnect the secondary SSD, wipe the primary one and start by installing Windows 10 UEFI-style. If possible, you may want to change the BIOS settings to switch the legacy BIOS-style compatibility off for this, to ensure everything goes in UEFI-style. Then install Mint next to it. Then you can remove the primary SSD, install the secondary one in its place, switch the BIOS-style compatibility back on and install XP. Now move both disks back into their original places, and maybe install Kali in UEFI mode onto the second disk (without selecting "Force UEFI"). 
Then boot into Mint, make sure the os-prober package is installed, and run sudo update-grub to get Kali onto Mint's boot menu. Now whether you use Kali's or Mint's GRUB, both should give you three options: Kali, Mint and Windows 10. To get into XP, you'll unfortunately need to go into BIOS settings and explicitly choose to boot legacy-style from the secondary disk. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/631734",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/453611/"
]
} |
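The diagnostic commands requested in the 631,734 answer, collected in one place so they can be run from Mint before changing anything (tree may need to be installed first):

    sudo fdisk -l                           # partition tables of both SSDs (MBR vs GPT)
    sudo efibootmgr -v                      # current UEFI NVRAM boot entries
    sudo tree --charset ASCII /boot/efi     # layout of the mounted ESP, if any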
631,766 | When I change the permissions on a file using chmod , existing file descriptors can continue to access the file under the previous permissions. Can I cause those existing file descriptors to close or fail or become unusable immediately after the permission change? | The kernel doesn't check permissions on file descriptions. They can even be duplicated to other processes that never had access to the original file by fd passing . The only thing I think you could try would be to manually find the processes with open file descriptors, and pull a sneaky trick to close them 1 . One example of such a "sneaky trick" is to attach a debugger ( gdb ) and use it to close the fd. This is a very extreme thing to do. There's no way to know how a process will behave if its FD is suddenly closed. In some cases a process may have the file mapped to memory, so if you manage to close the file and remove the memory mapping without the process expecting it, it could crash with a segmentation fault. Much better would be to find which processes are using the file and manually kill them. At least that way you can ask them to shut down nicely and not corrupt other data. 1 as mentioned in comments, the best shot of making this work would be to call dup2 rather than close to change the fd to point to /dev/null instead of the original file. This is because the code won't expect it to have closed and may do some very weird and insecure things when the fd (number) gets recycled. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/631766",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/26068/"
]
} |
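For the "find the holders and deal with them" route from the 631,766 answer, lsof does the finding; the path below is a placeholder, kill sends SIGTERM by default, and -r (skip the kill when there are no PIDs) is a GNU xargs extension:

    lsof /path/to/file                        # which processes still have the file open
    lsof -t /path/to/file | xargs -r kill     # ask those processes to terminate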
631,850 | I want to write a shell script that replaces A by BB and B by AA. For example, AYB becomes BBYAA . My usual solution to this kind of thing is to chain multiple calls to sed, like this: sed 's/A/BB/g;s/B/AA/g' However, that doesn't work in this situation because the A ends up being translated into AAAA instead of BB . tr also doesn't seem to be an option, because the replacement text is longer than one character. Is there something else I can do? It's OK if it uses something other than sed or tr. | This is the kind of problem where you need a loop so you can search for both patterns simultaneously. awk ' BEGIN { regex = "A|B" map["A"] = "BB" map["B"] = "AA" } { str = $0 result = "" while (match(str, regex)) { found = substr(str, RSTART, RLENGTH) result = result substr(str, 1, RSTART-1) map[found] str = substr(str, RSTART+RLENGTH) } print result str }' Of course, if Perl is available there's an equivalent oneliner: perl -pe ' BEGIN { %map = ("A" => "BB", "B" => "AA"); } s/(A|B)/$map{$1}/g;' If none of the patterns contain special characters, you can also build the regex dynamically: perl -pe ' BEGIN { %map = ("A" => "BB", "B" => "AA"); $regex = join "|", keys %map; } s/($regex)/$map{$1}/g;' By the way, Tcl has a builtin command for this called string map , but it's not easy to write Tcl oneliners. Demonstrating the effect that sorting the keys by length has: without sorting $ echo ABBA | perl -pe ' BEGIN { %map = (A => "X", BB => "Y", AB => "Z"); $regex = join "|", map {quotemeta} keys %map; print $regex, "\n"; } s/($regex)/$map{$1}/g' A|AB|BBXYX with sorting $ echo ABBA | perl -pe ' BEGIN { %map = (A => "X", BB => "Y", AB => "Z"); $regex = join "|", map {quotemeta $_->[1]} reverse sort {$a->[0] <=> $b->[0]} map {[length, $_]} keys %map; print $regex, "\n"; } s/($regex)/$map{$1}/g ' BB|AB|AZBX Benchmarking "plain" sort versus Schwartzian in perl: The code in the subroutines is lifted directly from the sort documentation #!perluse Benchmark qw/ timethese cmpthese /;# make up some key=value datamy $key='a';for $x (1..10000) { push @unsorted, $key++ . "=" . int(rand(32767));}# plain sorting: first by value then by keysub nonSchwartzian { my @sorted = sort { ($b =~ /=(\d+)/)[0] <=> ($a =~ /=(\d+)/)[0] || uc($a) cmp uc($b) } @unsorted}# using the Schwartzian transformsub schwartzian { my @sorted = map { $_->[0] } sort { $b->[1] <=> $a->[1] || $a->[2] cmp $b->[2] } map { [$_, /=(\d+)/, uc($_)] } @unsorted}# ensure the subs sort the same waydie "different" unless join(",", nonSchwartzian()) eq join(",", schwartzian());# benchmarkcmpthese( timethese(-10, { nonSchwartzian => 'nonSchwartzian()', schwartzian => 'schwartzian()', })); running it: $ perl benchmark.plBenchmark: running nonSchwartzian, schwartzian for at least 10 CPU seconds...nonSchwartzian: 11 wallclock secs (10.43 usr + 0.05 sys = 10.48 CPU) @ 9.73/s (n=102)schwartzian: 11 wallclock secs (10.13 usr + 0.03 sys = 10.16 CPU) @ 49.11/s (n=499) Rate nonSchwartzian schwartziannonSchwartzian 9.73/s -- -80%schwartzian 49.1/s 405% -- The code using the Schwartzian tranform is 4 times faster. 
Where the comparison function is only length of each element: Benchmark: running nonSchwartzian, schwartzian for at least 10 CPU seconds...nonSchwartzian: 11 wallclock secs (10.06 usr + 0.03 sys = 10.09 CPU) @ 542.52/s (n=5474)schwartzian: 10 wallclock secs (10.21 usr + 0.02 sys = 10.23 CPU) @ 191.50/s (n=1959) Rate schwartzian nonSchwartzianschwartzian 191/s -- -65%nonSchwartzian 543/s 183% -- Schwartzian is much slower with this inexpensive sort function. Can we get past the abusive commentary now? | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/631850",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/23960/"
]
} |
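In plain bash the usual workaround for the 631,850 swap problem is a throw-away placeholder, so the two substitutions can never feed each other. A sketch that assumes the placeholder byte (here \x01) never occurs in the input:

    s='AYB'
    ph=$'\x01'            # placeholder byte, assumed absent from the input
    t=${s//A/$ph}         # park every A on the placeholder
    t=${t//B/AA}          # now B can be rewritten safely
    t=${t//$ph/BB}        # turn the parked A's into BB
    echo "$t"             # prints BBYAA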
632,109 | I have directories that begin with 164 but they differ based on the last few digits. I would like to cd into a directory based on the last few digits if not the last digit itself, say 9 vs 8 . The directories are unique in the last digits. Is it possible to do so? Autocomplete lists a number of possibilities when I begin with the first digits 164 . | With Bash, yes, you can use wildcards: cd /path/to/*9/ (replace 9 with however many digits you need; you can drop /path/to/ if you’re in the directory containing all the 164... directories). You need to ensure that the expression is specific enough to resolve to a single directory, otherwise cd will change to the first directory specified in its arguments (in Bash before version 4.4), or fail with an error (Bash 4.4 and later built with CD_COMPLAINS ). (Take care with Zsh or Ksh which have a two-argument form of cd which you might invoke accidentally, albeit only if your current path contains the first argument.) You can also tab-complete after typing the command above, before executing it; if more than one directory matches, your shell will list them all. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/632109",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/313163/"
]
} |
632,111 | I am trying to execute gunicorn server in which I have to pass list of dict as input. But when I am sending empty string as value it's removed.My command is import subprocesscmd = """gunicorn 'myapp:create([{key: ""}])' --worker-class gevent -w 1 --bind 127.0.0.1:8019"""subprocess.call([cmd], shell=True) Inside myapp #myapp.pycreate(d_input): print(d_input) # OUT : [{key: }] As you can see '' are eliminated so I am unable to parse list & dict. Is there any way to avoid this. I have also tried passing input something like [{key : 'Something'}] in this case output is [{key : Something}] whereas I expect [{key : 'Something'}] .Any suggestion will be helpful | With Bash, yes, you can use wildcards: cd /path/to/*9/ (replace 9 with however many digits you need; you can drop /path/to/ if you’re in the directory containing all the 164... directories). You need to ensure that the expression is specific enough to resolve to a single directory, otherwise cd will change to the first directory specified in its arguments (in Bash before version 4.4), or fail with an error (Bash 4.4 and later built with CD_COMPLAINS ). (Take care with Zsh or Ksh which have a two-argument form of cd which you might invoke accidentally, albeit only if your current path contains the first argument.) You can also tab-complete after typing the command above, before executing it; if more than one directory matches, your shell will list them all. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/632111",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/439190/"
]
} |
632,220 | Trying to update some latex code. The four lines: something $^\text{TT}$ and more stuffsomething $^\text{T}$ and more stuffsomething $^\text{M\times N}$ and more stuffsomething $^\text{T} {otherstuff}$ should turn into something $^{\text{TT}}$ and more stuffsomething $^{\text{T}}$ and more stuffsomething $^{\text{M\times N}}$ and more stuffsomething $^{\text{T}} {otherstuff}$ In other words, encase the super-script by an extra {...} . My attempt at using sed is the following sed 's/\^\\text{\(.*\)}/\^{\\{text{\1}}/' testregex This works on the first three lines, but the final line doesn't work and produces something $^{\text{T} {otherstuff}}$ instead. The problem is that sed is matching with the last } on each line, but I need it to match the first } after the \text{ Also, it would be great if this could work multiple times on the same line, for example, ^\text{1} la la la ^\text{2} should turn into ^{\text{1}} la la la ^{\text{2}} I'm sure there's just one tiny little modification to make it work, but I can't figure it out and it's driving me nuts. Thanks in advance! | One method to overcome the greedy regular expression problem is to explicitly look for a string of non-delimiter characters, followed by the delimiter character. At the same time, you can probably simplify your replacement syntax via: sed 's/\^\(\\text{[^}]*}\)/\^{\1}/' input.tex It should be possible to use sed 's/\^\(\\text{[^}]*}\)/\^{\1}/g' input.tex for multiple matches per line. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/632220",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/454075/"
]
} |
632,232 | I do a cd /folder/ && find . -not \( -path ./exclude_folder -prune \) > /log.log and get find: â<80><98>./qs/www/ergebnisse/validitaet/0df21b8a-e227-47b2-aaa5-9f54d1f9b8fd.txtâ<80><99>: No such file or directory inside log file. That is right, but: What is this â<80><98> (and â<80><99> )? Are these color codes? How to avoid them ( find does not have --no-color )? Hold on. When I do cat log.log instead of vi log.log I get: find: ‘./qs/www/ergebnisse/validitaet/0df21b8a-e227-47b2-aaa5-9f54d1f9b8fd.txt’: No such file or directory | Your distribution uses UTF-8 character encoding. This is normal for most current distributions. What you see is the effect of UTF-8 coded characters displayed as another encoding. Many GNU utilities try to use different quotation marks for opening and closing quotes. With some fonts this looks good, with others not so good. Let's look at the output produced by find : $ find /x 2>&1 | hexdump -C00000000 66 69 6e 64 3a 20 e2 80 98 2f 78 e2 80 99 3a 20 |find: .../x...: | Before and after the /x we have the sequences e2 80 98 and e2 80 99 . Your console is configured for UTF-8 and is able to display the UTF-8 sequence correctly. The cat program doesn't recognize or care about this sequence, so it doesn't matter. On the other hand, your vi is interpreting the file as latin1. This is at least unusual on a modern distribution. In latin1, the first byte, e2 , is interpreted as â while 80 , 98 and 99 are invalid in latin1 and displayed as <80> and so on. This results in â<80><98> and â<80><99> . How to avoid that? You can configure your vi to display files as UTF-8, or you can avoid the UTF-8 sequences in your output. $ LC_CTYPE=C find /x 2>&1 | hexdump -C00000000 66 69 6e 64 3a 20 27 2f 78 27 3a 20 |find: '/x': | Here find doesn't generate UTF-8 sequences and instead uses the single quote ' for opening and closing quote. Note that disabling UTF8 might change how programs process their input, although it shouldn't matter in your example. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/632232",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/239596/"
]
} |
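Applied to the original command from 632,232, the locale override keeps the multibyte quotes out of the log, and adding 2>&1 makes the redirection of find's error messages explicit (a sketch):

    cd /folder/ && LC_CTYPE=C find . -not \( -path ./exclude_folder -prune \) > /log.log 2>&1

Alternatively, leave find alone and tell vim to read the log as UTF-8 with :e ++enc=utf-8.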
632,248 | I have a file that contains full paths: /home/usr/file_name_delimited_by_underscore_123.txt /home/usr/another_example_456.rar I would like to print the file name from the path without the extension and to print next to it the string after the last _ . Output: file_name_delimited_by_underscore_123 123another_example_456 456 I figured a way to get the desired output using piped awk commands: cat file | awk -F[/.] '{print $(NF-1)}' | awk -F_ '{print $0" "$NF}' Is there a way to achieve this without piping? My question boils down to is it possible to perform actions on the fields parsed by awk? Thanks for your help. | Yes, you can perform any operations you like on the fields. For example: $ cat file | awk -F[/.] '{n = split($(NF-1),a,/_/); print $(NF-1)" "a[n]}'file_name_delimited_by_underscore_123 123another_example_456 456 Of course, you don't need cat here; you could have awk read the file directly - and since the default output field separator OFS is a space, it would be more idiomatic to write the results as separate output fields instead of a string concatenation: awk -F[/.] '{n = split($(NF-1),a,/_/); print $(NF-1), a[n]}' file | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/632248",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/453131/"
]
} |
632,266 | Bash newbie here trying to get a hold of things by performing some simple tasks. Say I have a text file with the following text: ExperimentOne results = .5 participants: 80ExperimentTwo results = .4 participants: 75, 4 unclear reports I would like to extract the ExperimentOne and ExperimentTwo result data, or any data that has the word results in its line, so that my output is simply: ExperimentOne .5ExperimentTwo .4 How can I do this? | If you want to use cut you need to set the delimiter with the option -d : $ cut -d' ' -f1,4 fileExperimentOne .5ExperimentTwo .4 | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/632266",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/453809/"
]
} |
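The cut command in the 632,266 answer assumes every line has the same shape; since the question only cares about lines containing the word results, filtering first (or letting awk do both steps) is a little safer. A sketch:

    grep -w results file | cut -d' ' -f1,4     # keep only the results lines, then cut
    awk '/results/ {print $1, $4}' file        # the same in one pass

The awk version also copes with runs of multiple spaces, which a single-character cut delimiter does not.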
632,275 | I have a text file where I want to replace all spaces inside [[ and ]] with hyphens (brackets are never nested and always matched). The following is an example: $ cat test.txt abc [[foo]] xyzabc [[foo bar]] xyzabc [[foo bar baz]] xyz [[something else]] So the desired output is: abc [[foo]] xyzabc [[foo-bar]] xyzabc [[foo-bar-baz]] xyz [[something-else]] I thought I could use sed to match the string inside the brackets and then use the e flag to run the result through sed again to make the replacement. However the problem is that not only the matched string gets executed as a command, but the whole pattern space (which seems to be the entire line): $ sed -E 's@(\[\[)(.+)(\]\])@sed -e "s/ /-/g" <<< "\1\2\3"@gpe' test.txt abc sed -e "s/ /-/g" <<< "[[foo]]" xyzsh: 1: Syntax error: redirection unexpectedabc sed -e "s/ /-/g" <<< "[[foo bar]]" xyzsh: 1: Syntax error: redirection unexpectedabc sed -e "s/ /-/g" <<< "[[foo bar baz]]" xyzsh: 1: Syntax error: redirection unexpected Is there a way to limit what gets executed via the e flag to the matched string? If not, how would I solve this problem with sed ? | I don't think there is a way to limit what's passed to the shell by the e modifier; however you could do something like this: $ sed -E ':a;s@(.*\[\[)([^][]* [^][]*)(\]\].*)@printf "%s%s%s" "\1" "$(printf "\2" | sed "s/ /-/g")" "\3"@e;ta' test.txtabc [[foo]] xyzabc [[foo-bar]] xyzabc [[foo-bar-baz]] xyz [[something-else]] Note that handling of multiple replacements is done via a loop - and due to the greediness of the match, it actually makes the substitutions in reverse order. Note also that e uses /bin/sh which will likely not support the <<< input redirection (hence the use of the piped equivalent printf "\2" | sed "s/ /-/g" ). If perl is an option you could do something closer to your original intent ex.: $ perl -pe 's/(?<=\[\[)(.*?)(?=\]\])/$1 =~ s: :-:rg/ge' test.txtabc [[foo]] xyzabc [[foo-bar]] xyzabc [[foo-bar-baz]] xyz [[something-else]] Since perl provides a non-greedy modifier ? , this can handle multiple replacements per line more conventionally using the g flag on the outer substitution. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/632275",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/137240/"
]
} |
632,342 | I understand that parentheses cause the commands to be run in a subshell and braces cause the commands to be grouped together but not in a subshell. When I run this with parentheses: no_func || ( echo "there is nothing" exit 1)echo $? This returns the exit status: /Users/myname/bin/ex5: line 34: n_func: command not foundthere is nothing1 But when I use braces: no_func || { echo "there is nothing" exit 1}echo $? This doesn't return the exit status. /Users/myname/bin/ex5: line 34: no_func: command not foundthere is nothing But why one returns the exit status and another doesn't? | Look at the command execution trace ( set -x ). With braces: + no_func./a: line 3: no_func: command not found+ echo 'there is nothing'there is nothing+ exit 1 exit exits the (sub)shell. Since braces don't create a subshell, exit exits the main shell process, so it never reaches the point where it would run echo $? . | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/632342",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/120293/"
]
} |
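A minimal script showing the difference discussed in 632,342 directly; save it and run it as a script rather than pasting the brace version into an interactive shell, because that exit would close the shell:

    #!/bin/bash
    ( echo "in subshell"; exit 1 )
    echo "after subshell, exit status: $?"     # still running; prints 1

    { echo "in group"; exit 1; }
    echo "never printed"                       # the exit above ended the whole script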
632,348 | I have following changelog like file #1.2.3#InsertPart#1.2.2 or #1.2.3#InsertPart- something#1.2.2 And desired output is #1.2.3#InsertPart- new inserted line#1.2.2 or #1.2.3#InsertPart- new inserted line- something#1.2.2 I was able to insert a line after #InsertPart with following script awk '1;!inserted && /# InsertPart/{c=2}c && !--c{print "- new inserted line"; inserted=1}' But I'm stuck with inserting a blank line only if pattern #x.x.x is on next line. So Ive ended with following output #1.2.3#InsertPart- new inserted line#1.2.2 | Look at the command execution trace ( set -x ). With braces: + no_func./a: line 3: no_func: command not found+ echo 'there is nothing'there is nothing+ exit 1 exit exits the (sub)shell. Since braces don't create a subshell, exit exits the main shell process, so it never reaches the point where it would run echo $? . | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/632348",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/454205/"
]
} |
632,414 | I need to count the lines containing the words the and an in a text file ( poem.txt ), but not those containing both . I've tried using grep -c the poem.txt | grep -c an poem.txt but this gives me the wrong answer of 6 when the total number of the and an is 9 lines. I do want to count the lines containing the words and not the words themselves. Only the actual word should count, so the but not there , and an but not Pan . Example file: poem.txt Where is the misty shark?Where is she?The small reef roughly fights the mast.Where is the small gull?Where is he?The gull grows like a clear pirate.Clouds fall like old mainlands.She will Rise calmly like a dead pirate.Eat an orange.Warm, sunny sharks quietly pull a cold, old breeze.All ships command rough, rainy sails.Elvis Aaron Presley also known simply as the ElvisHe is also referred to as the KingThe best-selling solo music artist of all timeHe was the most commercially successful artist in many genresHe has many awards including a Grammy lifetime achievementElvis in the 1970s has numerous jumpsuits including an eagle one. To clarify further: how many lines in the poem contain the or an but you should not count the lines that include both the and an . the car is red - this countedan apple is in the corner - not countedhello i am big - not countedwhere is an apple - counted So here the output should be 2. Edit: I'm not worried about case sensitivity. Final edit: Thanks for all your help.i've managed to solve the problem. i used the one of the answer and changed it a little. i used cat poem.txt | grep -Evi -e '\<an .* the\>' -e '\<the .* an\>' | grep -Eci -e '\<(an|the)\> how ever i did change the -c in the second grep to a -n for some added info.Yet again thank you for all the help!! :) | With grep: cat poem.txt \ | grep -Evi -e '\<an\>.*\<the\>' -e '\<the\>.*\<an\>' \ | grep -Eci -e '\<(an|the)\>' This counts the matched lines . You can find an alternative syntax which counts the total number of matches down below. Breakdown: The frist grep command filters out all lines containing both 'an' and 'the'. The second grep command counts those lines, containing either 'an' or 'the'. If you remove the c from the second grep's -Eci , you will see all matches highlighted. Details: The -E option enables extended expression syntax (ERE) for grep. The -i option tells grep to match case-insensitive The -v option tells grep to invert the result (i.e. match lines not containing the pattern) The -c option tells grep to output the number of matched lines instead of the lines themselves The patterns: \< matches the beginning of a word (thanks @glenn-jackman ) \> matches the end of a word (thanks @glenn-jackman ) --> That way we can make sure to not match words containing 'the' or 'an' (like 'pan') grep -Evi -e '\<an\>.*\<the\>' thus matches all lines not containing 'an ... the' Similarly, grep -Evi -e '\<the\>.*\<an\>' matches all lines not containing 'the ... an' grep -Evi -e '\<an\>.*\<the\>' -e '\<the.*an\>' is the combination of the 3. and 4. 
grep -Eci -e '\<(an|the)\>' matches all lines containing either 'an' or 'the' (surrounded by whitespace or start/end of line) and prints the number of matched lines EDIT 1: Use \< and \> instead of ( |^) and ( |$) , as suggested by @glenn-jackman EDIT 2: In order to count the number of matches instead of the number of matched lines, use the following expression: cat poem.txt \ | grep -Evi -e '\<an\>.*\<the\>' -e '\<the\>.*\<an\>' \ | grep -Eio -e '\<(an|the)\>' \ | wc -l This uses the -o option of grep, which prints every match in a separate line (and nothing else) and then wc -l to count the lines. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/632414",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/454299/"
]
} |
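The "exactly one of the two words" logic from 632,414 can also be done in a single pass with GNU awk, which is easier to extend to more words; the word-boundary operators \< \> and the IGNORECASE variable are GNU extensions, so this sketch assumes gawk:

    gawk 'BEGIN { IGNORECASE = 1 }
          { a = ($0 ~ /\<an\>/); t = ($0 ~ /\<the\>/); if (a != t) n++ }
          END { print n+0 }' poem.txt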
632,418 | The below fails: sudo -u chris ls /rootls: cannot open directory '/root': Permission denied While the below succeeds: sudo ls /root... I do not understand why. I assume -u just changes the $USER /running user to the parameter provided in addition to having root privileges. What is the cause behind this behavior? | sudo -u chris runs the given command as user chris , not as root with USER set to chris . So if chris can’t access /root , sudo -u chris won’t change that. See man sudo : -u user , --user = user Run the command as a user other than the default target user (usually root ). sudo isn’t specifically a “run as root” tool; it’s a “run as some other user or group” tool. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/632418",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/124109/"
]
} |
632,422 | I have the following tsv-file (extract): File 1: NC_002163.1 RefSeq source 1 1641481 . + . organism=Campylobacter jejuni subsp. jejuni NCTC 11168;mol_type=genomic DNA;strain=NCTC 11168;sub_species=jejuni;db_xref=taxon:192222NC_002163.1 RefSeq misc_feature 19386 19445 . - . inference=protein motif:TMHMM:2.0;note=3 probable transmembrane helices predicted for Cj0012c Further possible textNC_002163.1 RefSeq misc_feature 19482 19550 . - . inference=protein motif:TMHMM:2.0;note=3 probable transmembrane helices predicted for Cj0014c Sometimes there is more textNC_002163.1 RefSeq misc_feature 22853 22921 . - . inference=protein motif:TMHMM:2.0;note=5 probable transmembrane helices predicted for Cj0017c... As you can see, the last column contains some identifiers ( Cj0014c, Cj0017c, etc ). Some of these IDs are saved in another file File 2: Cj0012cCj0027CjNC9Cjp01SRP_RNA_Cjs03CjNC11CjNC1Cj0113Cjp03Cj0197cCj0251c How can I use awk (or any bash-script-tool) to eliminate those lines from File 1, that contain as substring in the last column, any ID that is found in File 2? For example, the second line of File 1 would be deleted, since Cj0012c is found in File 2 and is part of the string in the last column of the line in File 1. I've been struggling already many hours, so thanks for any help (and, if possible, an explanation of the code!) | sudo -u chris runs the given command as user chris , not as root with USER set to chris . So if chris can’t access /root , sudo -u chris won’t change that. See man sudo : -u user , --user = user Run the command as a user other than the default target user (usually root ). sudo isn’t specifically a “run as root” tool; it’s a “run as some other user or group” tool. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/632422",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/454312/"
]
} |
632,477 | In the company where I work, we have several clients hosted on a server that use the same project, they use the same code base, but are different users on the server (using WHM / CPanel)As our repo is private, they are all authenticated by SSH keys, but we have about 80 clients that way, so in the github account you have almost 80 keys stored. The question is, is there any way to get these users to use the same key?To avoid creating a new key for each user.Or this is not a good approach to do, and better to keep making a key for each one | You can set up ssh to accept certificates signed by a single certificate authority (CA). So you create a single certificate for this authority, provide this certificate to where ever you want to use the SSH keys for your n users, and then each of the n users gets a certificate signed with the CA certificate. You can add more users without changing anything. So this is nearly the same as having each user use the same key (you only need to manage the CA certificate once), but each user still gets a different certificate, so you can distinguish users. Github supports CAs. Here is an article with more details (the CLI from smallstep is also pretty helpful for these things). | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/632477",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/452828/"
]
} |
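The 632,477 CA workflow sketched with OpenSSH's own tooling; the file names, the one-year validity window and the principal are placeholders, and SSH certificate authorities are only available on certain GitHub plans, so check yours before rolling this out:

    # one-time: create the CA key pair whose public half is registered with GitHub
    ssh-keygen -t ed25519 -f deploy_ca -C "deployment CA"

    # per client: sign that client's existing public key with the CA
    ssh-keygen -s deploy_ca -I client-042 -n git -V +52w client_042.pub
    # -> writes client_042-cert.pub, which the client presents alongside its private key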
632,489 | I tried to re-mount a LVM: mount /dev/vg1/lv1 /home buty I got this message: [409104.164857] EXT4-fs (dm-3): VFS: Can't find ext4 filesystem[409104.165223] EXT4-fs (dm-3): VFS: Can't find ext4 filesystem[409104.165568] EXT4-fs (dm-3): VFS: Can't find ext4 filesystem[409104.165916] FAT-fs (dm-3): invalid media value (0x64)mount: you must specify the filesystem type I am new to working with linux. I wrote: cp /var/log/*.gz.tar /dev/mapper/vg1/lv1 I thought that that way I could copy the files. So until I did a server reboot. When the server started, I got a message saying "Press M to log into maintenance mode. When I noticed that home was gone.I started researching and found the LVM topic. That's when I found out that I made a mistake wanting to copy the files the way I mentioned above. This is my case and I would be very grateful for your help and opinion.Thanks to all of you who spent your time reading my situation. | You can set up ssh to accept certificates signed by a single certificate authority (CA). So you create a single certificate for this authority, provide this certificate to where ever you want to use the SSH keys for your n users, and then each of the n users gets a certificate signed with the CA certificate. You can add more users without changing anything. So this is nearly the same as having each user use the same key (you only need to manage the CA certificate once), but each user still gets a different certificate, so you can distinguish users. Github supports CAs. Here is an article with more details (the CLI from smallstep is also pretty helpful for these things). | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/632489",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/454154/"
]
} |
632,618 | I am running Arch Linux, 64bit latest update on one of my computers. I am currently a Computer Science student and we had a test yesterday where we were to implement a dynamic stack using linked lists. I am now interested in learning how the stack in my computer is built, however I am unable to find any "stack.c" with comments on my Arch Linux computer. Where is the stack programming located? I understand how the stack creates memory but I want to actually see the code and maybe play around with it myself. | The term "stack" is overloaded. Some possible interpretations include: A "stack" is an abstraction over a set of data structures that provide Last-In-First-Out (LIFO) access to the elements that it contains. There are realizations of the "stack" abstraction. The linked-list based stack that you developed in your class is one such realization. It is not the only realization. For example, it's also possible to build a stack using an array. The runtime activation stack --- the stack used to manage function calls and returns during the execution of a thread in a program --- is another realization of the "stack" abstraction. It's behavior is similar to an array-based stack. In terms of (3) a "chunk" of memory is allocated to each running thread in a program. As functions get called and return within those thread, they push and pop "stack frames" ("elements" on the runtime activation stack) from the stack that is associated with thread. The specifics of what is contained within a stack frame differ between hardware architectures. In general, stack frames contain: The return address of the the caller Some or all of the parameters to the function (depends on the hardware architecture, the number of parameters, and their size) Local variables defined within the function. The state of registers that are used by the function, but whose values need to be restored before the function returns. Because the number of parameters, the number and size of the local variables within a function, and what registers need to be saved differ from function to function, stack frames do not have a constant size. There is no stack.c to examine because the code that manages the activation record stack is generated by the compilers that build programs on a function-by-function basis. Compilers generate the instructions to trigger function calls. As the compiler produces a program's instructions, it knows: What hardware instruction is used to trigger a function call, and what affect that instruction has on the stack (e.g., does it automatically store the return address, does it adjust the register that tracks the "top" of the stack). If any parameters in the call are stored on the stack, where those parameters will be in terms of the "top" of the stack. The size of the local variables within the function, and location of those variables in relation to the "top" of the stack (and the compiler uses this to adjust the register that keeps track of the "top" of the stack). The registers the function is using and when they need to be saved/restored What hardware instruction is used to trigger a return from a function call, and what affect that instruction has on the stack. Compilers follow a well-established set of rules (calling conventions) for the hardware architecture so that programs composed of different pieces built by different compilers can interoperate. 
Note that although the runtime activation stack (3) is a realization of the "stack" abstraction (1), aside from the concept of pushing/popping records, it bears little resemblance to a linked-list-based implementation. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/632618",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/437798/"
]
} |
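There is no stack.c to read, but you can still observe the runtime stack described in the 632,618 answer from a Linux shell; a small sketch (what you see is the memory region, not the compiler-generated code that manages it):

    ulimit -s                              # per-process stack size limit, in KiB
    grep -F '[stack]' /proc/$$/maps        # address range mapped for this shell's main-thread stack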
632,649 | One of the things I loved about Linux when I first switched was its package management. There are a few projects that bring Linux-style package management to Mac (Homebrew) and Windows (Chocolatey). I'm increasingly seeing apps that provide Snap install instructions for Linux, but what would be the purpose of adding a second package management system? Will new applications eventually migrate to Apt (or other Linux package management systems), or will Linux users have to run two different package managers on the same system to get all the applications they want? | Traditional package management on Linux has a lot of strengths. Dependencies can be shared, which keeps things small and efficient. They generally go through a committee of some kind, and thus several eyes (this varies between Linux distributions, some have less of a process than others). There's usually a central place to log bugs, too. Traditional package management has some downsides, however. For example, when you release the new version of libfoo, literally everyone using the distro gets that new version. This means that distribution maintainers must be incredibly conservative when shipping updates. They can't regress. Also, it's pretty rare to see the upstream developer of an application go through the work of packaging their application for the various Linux distributions out there. Especially once they realize that they won't be able to ship their updates as easily as they expected. For example, on Ubuntu, once a stable release is cut, new features are generally not allowed to be added to software in the distro. Only security fixes and bug fixes. And even that process is heavy . This is how Ubuntu stays stable, but it's also how the Ubuntu archive gets stale through the lifetime of the release. Another issue (this is a downside or an upside depending on how you look at it): generally in order to get software into a Linux distribution, it has to be open source. Not everything is. Snaps are different in a few ways. First of all, their dependencies are bundled into the same package. This means they can't be shared, so they aren't as efficient as traditional packages, but the reason this is done is to work around the libfoo example I just provided: it gives snap developers the freedom to change things as they see fit without potentially breaking unrelated software. As a result, they can be distributed out of the Linux distribution entirely, with no committee involved. In fact, it's very common to see the upstream developer of an application creating and maintaining the snap for their application, because they know they can ship an update to their users whenever they want, and if they break something, they only broke their users, not half the distribution with their libfoo breakage. Snaps are also just squashfs images that contain... stuff. There are no licensing restrictions. This makes it possible for Microsoft to release Skype as a snap, or Spotify to release their player. As a result of the fact that they bundle the dependencies and are just a squashfs image, it makes it pretty easy to install and run them on any Linux distribution, regardless of that distribution's built-in package management system. Will new applications eventually migrate to Apt (or other Linux package management systems), or will Linux users have to run two different package managers to get all the applications they want? Yes, traditional dependency management systems aren't going away. New software will over time be available via apt. 
But that will continue to be done by community members who just want the software in apt, not by the upstream developers themselves. And depending on how your Linux distribution works, that software can get stale over time until you upgrade to the next release. Generally speaking, if you want the newest stuff, you need to get it from the upstream developers. They seem to be choosing snaps to do that more and more, as you've noticed. PPAs (or similar) are another option. Anyway, this isn't meant to be a complete pros/cons list; there's a lot more we could talk about here. I'm just trying to point out that each way to consume software has a purpose, and neither is going away. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/632649",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/88347/"
]
} |
632,670 | So I well understand that exit code of 0 is considered success in running of a program. Then we use this in bash scripts with logical AND and OR to run the next program based on the exit status of the first program. A good example can be found here: https://unix.stackexchange.com/a/187148/454362 Does this mean 0 is interpreted as true and any other number but zero is interpreted as false? This is contrary to all the programming languages that I know of. So can I assume that bash internally uses logical NOT to reverse the exit code to proper false / true value? So, these are some examples: #should result in false, and bash shows 0 as falseecho $(( 1 && 0 ))# should result in true, and bash shows 1 as trueecho $(( 1 || 0 )) # does not show 'hi'; even though if false is zero per first example, then# this should be considered success exit code and 'hi' to be shown!!false && echo 'hi' # does show 'hi'. It first runs the "echo 'hi'" and interprets its exit# status of zero as success, but then false stops the 2nd hi.echo 'hi' && false && echo ' 2nd hi' # shows 'hi 2nd hi'echo 'hi' && (( 1 || 0 )) && echo ' 2nd hi' It seems I am answering my own question. But I just want clarification if someone knows the internal of the bash processing. | The examples in the post you linked to are things like: false && echo "OK"true || echo "OK" In that context, with the exit status of a processes, yes, 0 is truthy, and anything else is falsy. (Yes, true and false are programs here. Probably builtin to the shell, but that works the same.) From the POSIX definition ( || is similar, as is what Bash's documentation says): AND Lists The control operator "&&" denotes an AND list. The format shall be: command1 [ && command2 ] ... First command1 shall be executed. If its exit status is zero, command2 shall be executed, and so on, ... Yes, that's opposite to the conventions used in pretty much all other contexts and programming languages. Then again, it is also somewhat common in C for functions to return a zero on success, and a non-zero error code on error. That way, you can tell different sorts of error apart (*) . That, of course, doesn't really work if your function needs to return some useful value, like a pointer, and as far as the C language is concerned, 0 is falsy. Anyway, different it is. I don't think it's a good idea to assume anything about the implementation. Just remember that in this context zero is true, and e.g. (true && true) is (true). (* Like families, happy system calls are all similar. It's the unhappy ones that are unhappy each in their own way.) Then your examples: #should result in false, and bash shows 0 as falseecho $(( 1 && 0 )) Here, you're using && in an arithmetic context, which doesn't obey the same rules as that of exit statuses. Here, 0 is false, and 1 is true. So 1 && 0 is (true AND false), which is (false), or 0. # should result in true, and bash shows 1 as trueecho $(( 1 || 0 )) Similar to the above. # does not show 'hi'; even though if false is zero per first example, then# this should be considered success exit code and 'hi' to be shown!!false && echo 'hi' The utility called false exits with a status of 1 (or at least some non-zero value). Which, in that context, is falsy, so the right hand side of && is skipped by way of short-circuiting logic. # does show 'hi'. It first runs the "echo 'hi'" and interprets its exit# status of zero as success, but then false stops the 2nd hi.echo 'hi' && false && echo ' 2nd hi' Same thing. 
# shows 'hi 2nd hi'echo 'hi' && (( 1 || 0 )) && echo ' 2nd hi' You used 1 || 0 here, which is true either way, and the numerical value somewhat disappears there inside the arithmetic context. Let's try these: $ echo foo && (( 0 )) && echo barfoo$ echo foo && (( 1 )) && echo barfoobar Now, ((...)) is an arithmetic construct (like $((...)) ), and within it, 0 is falsy. Unlike $((...)) , ((...)) is also a command, so has an exit status. It exits with zero (truthy), if the expression inside evaluates to non-zero (truthy); and with 1 (falsy), if the expression inside evaluates to zero (falsy). Ok, that may be confusing, but the end result is that C-like implicit comparisons against zero work inside it, and you get the same truth value out of it, when using it with the shell's conditionals. So, while (( i-- )); do ... loops until i goes to zero. ( (( foo )) is not standard, but supported by ksh/Zsh/Bash. The standard interpretation for it would be the command foo inside two nested subshells, so would probably give a "command 'foo' not found" error.) It might also be worth pointing out that something like (( true )) && echo maybe probably doesn't print anything. In the arithmetic context, the plain word is taken as a name of a variable (recursively in many shells), so unless you have a variable true set to a non-zero value, (( true )) will be false. (From the department of obfuscation comes the idea of running true=0; false=1; true() (exit 1); false() (exit 0); . Now what does true && (( false )) || echo maybe && echo maybe not print and why?.) | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/632670",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/454362/"
]
} |
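A short, self-contained sketch of the two conventions discussed in the answer above — exit-status context, where 0 means success and therefore "true", versus arithmetic context, where 0 is false — using only bash builtins, so it can be pasted straight into an interactive shell:

    true  && echo "exit status 0 counts as true here"
    false || echo "non-zero exit status counts as false here"

    echo $(( 1 && 0 ))   # arithmetic context: prints 0 (false)
    echo $(( 1 || 0 ))   # arithmetic context: prints 1 (true)

    # (( ... )) bridges the two: a non-zero expression yields exit status 0
    if (( 5 > 3 )); then echo "arithmetic truth became exit status 0"; fi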
632,760 | `someString` complete(1) I need "someString" that is in between backticks. I tried the below but it doesn't work. cat file.txt | grep '\`.*\`' | With GNU grep that supports the PCRE extension: grep -Po '(?<=`)[^`]*(?=`)' infile returns everything between pairs of backticks; or, to return a and c only (for example in `a`b`c` ), you would do: grep -Po '(?<=`)[^`]+(?=`(?:[^`]*`[^`]*`)*[^`]*$)' Tips: (?<=...) : positive look-behind . (?=...) : positive look-ahead . (?:...) : non-capturing group . [^`]* : any character except a back-tick ` With awk : awk -F'`' '{ for(i=2; i<=NF; i+=2) print $i; }' infile | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/632760",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/176225/"
]
} |
632,922 | I have a directory structure like the following: .├── Untitled Folder│ ├── Untitled Folder│ │ └── zing│ │ ├── first| | | └── dum│ │ ├── second│ │ └── third│ ├── Untitled Folder 2│ └── Untitled Folder 3├── Untitled Folder 2│ ├── Untitled Folder│ ├── Untitled Folder 2│ │ └── zing│ │ ├── fifth| | | └── dum│ │ ├── fourth│ │ └── sixth│ └── Untitled Folder 3└── Untitled Folder 3 ├── Untitled Folder ├── Untitled Folder 2 └── Untitled Folder 3 └── zing ├── eighth └── seventh └── dum I want to list only direct child directories of all zing directories. Sample output is: firstsecondthirdfifthfourthsixtheighthseventh I tried % find . -type d | grep zing./Untitled Folder 3/Untitled Folder 3/zing./Untitled Folder 3/Untitled Folder 3/zing/eigthth./Untitled Folder 3/Untitled Folder 3/zing/seventh./Untitled Folder 3/Untitled Folder 3/zing/seventh/dum./Untitled Folder 2/Untitled Folder 2/zing./Untitled Folder 2/Untitled Folder 2/zing/fifth./Untitled Folder 2/Untitled Folder 2/zing/fifth/dum./Untitled Folder 2/Untitled Folder 2/zing/fourth./Untitled Folder 2/Untitled Folder 2/zing/sixth./Untitled Folder/Untitled Folder/zing./Untitled Folder/Untitled Folder/zing/first./Untitled Folder/Untitled Folder/zing/first/dum./Untitled Folder/Untitled Folder/zing/third./Untitled Folder/Untitled Folder/zing/second It is even showing the child of the child of zing (in this case dum ), which I do not want. How can I get my expected output? | Bash, Globstar activated (with shopt -s globstar ). printf '%s\n' **/zing/*/ For only the last path component, for dir in **/zing/*/; do basename "$dir"; done POSIX Find. find . -type d -path '*/zing/*' -prune -prune avoids descending in the matched directories. For only the last path component, find . -type d -path '*/zing/*' -prune -exec basename {} \; | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/632922",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/206574/"
]
} |
633,007 | So I have a docker container running a PHP 7.2.34 service as follows:- docker-compose.yml version: '3'services: #PHP Service app: build: context: . dockerfile: Dockerfile image: digitalocean.com/php container_name: app ... Dockerfile FROM php:7.2-fpm... Rather than restart Nginx I simply restart the containers like:- docker stop app db webserverdocker rm app db webserverdocker-compose up -d I'd like to upgrade to PHP 7.4+ or even 8. I tried to simply replace FROM php:7.2-fpm with FROM php:7.4-fpm but phpinfo() reports no change in PHP version? Does: image: digitalocean.com/php have any significance when upgrading? | This command will update your image and force stop and recreate the containers: docker-compose up -d --force-recreate --build To verify run docker exec -it app php -v this will return the php version info. # The new image name when running `docker-compose up/build`image: digitalocean.com/php Explanation : The command docker-compose up will build an image only it's not exist. To force build new image add --build flag or do docker-compose build and then docker-compose up Reference here If you want to force Compose to stop and recreate all containers, use the --force-recreate flag. --build Build images before starting containers. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/633007",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/169700/"
]
} |
633,010 | $ ansible all -m ansible.builtin.shell -a 'echo $TERM'ERROR! this task 'ansible.builtin.shell' has extra params, which is onlyallowed in the following modules: import_role, win_command,include_vars, include_tasks, raw, win_shell, command, add_host, meta, include_role, shell, import_tasks, group_by, set_fact, script, include can anyone help me to find out whats the issue this is the one with -vvv tag $ ansible -vvv centos -m ansible.builtin.shell -a 'echo $TERM'ansible 2.9.6 config file = /etc/ansible/ansible.cfg configured module search path = ['/home/chandru/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules'] ansible python module location = /usr/lib/python3/dist-packages/ansible executable location = /usr/bin/ansible python version = 3.8.5 (default, Jul 28 2020, 12:59:40) [GCC 9.3.0]Using /etc/ansible/ansible.cfg as config filehost_list declined parsing /etc/ansible/hosts as it did not pass its verify_file() methodscript declined parsing /etc/ansible/hosts as it did not pass its verify_file() methodauto declined parsing /etc/ansible/hosts as it did not pass its verify_file() methodParsed /etc/ansible/hosts inventory source with ini pluginERROR! this task 'ansible.builtin.shell' has extra params, which is only allowed in the following modules: group_by, include_tasks, set_fact, raw, add_host, win_shell, win_command, include_vars, meta, import_role, command, script, include, shell, import_tasks, include_role | This is due to a bug in Ansible which was introduced as part of the transition to FQCN: https://github.com/ansible/ansible/pull/71824 As a workaround, you can simply replace ansible.builtin.shell with the old-school shell name. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/633010",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/454826/"
]
} |
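Concretely, the workaround above turns the failing ad-hoc call from the question into the first line below; the same inventory and hosts pattern apply, only the module name changes:

    ansible all -m shell -a 'echo $TERM'
    # once running an Ansible release that contains the linked fix, the FQCN form works again:
    ansible all -m ansible.builtin.shell -a 'echo $TERM'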
633,147 | We can write the next command in two ways: # using extended regex$ echo foobar | sed -E 's/(foo)(bar)/\2\1/'barfoo And: # using backslashes$ echo foobar | sed 's/\(foo\)\(bar\)/\2\1/'barfoo Using backslashes means that the command is more portable than the extended regex? | Yes The current POSIX standard of sed does not specify the -E flag, which enables extended regex (ERE). This alone is enough to conclude that the basic regex (BRE) form 's/\(foo\)\(bar\)/\2\1/' is the most portable. However, even if -E were included sed 's standard— and it will be —, the Regular Expressions document does not define back-references in EREs , so the BRE \(...\) == ERE (...) association is itself a GNU extension and not guaranteed to be supported by all programs. POSIX Grep , for example, includes the -E flag, but while each one of grep 'ee*'grep -E 'e+'grep '\(.\)\1' is compliant, grep -E '(.)\1' is not. Likewise, there are reports that concretely illustrate that BSD does not follow the extension: [In FreeBSD] sed -E '/(.)\1/d' removes lines that have a 1 after some other character. whereas GNU sed would treat that as an back-reference and remove lines containing two equal and adjacent characters. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/633147",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/338177/"
]
} |
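To make the back-reference caveat above concrete, here is a pair of commands that can be tried directly; the first uses the portable BRE group syntax, the second relies on GNU behaviour and is not guaranteed to work with other implementations:

    printf 'aa\nab\n' | grep '\(.\)\1'     # portable BRE back-reference: matches "aa"
    printf 'aa\nab\n' | grep -E '(.)\1'    # GNU extension only: same result with GNU grep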
633,181 | I'm finding that Firefox and Chromium are both ignoring my /etc/hosts file. Is there a way for me to block all outgoing requests to a particular set of domains e.g. Facebook.com and Twitter.com. For instance, if I use nslookup to get a set of CIDR ranges, can I tell the firewall (UFW) to block these ranges? Or alternatively how hard would it be to set up a DNS on my own Linux desktop computer? Ideally I would have a tool like opensnitch but this isn't compatible with Debian. | Yes The current POSIX standard of sed does not specify the -E flag, which enables extended regex (ERE). This alone is enough to conclude that the basic regex (BRE) form 's/\(foo\)\(bar\)/\2\1/' is the most portable. However, even if -E were included sed 's standard— and it will be —, the Regular Expressions document does not define back-references in EREs , so the BRE \(...\) == ERE (...) association is itself a GNU extension and not guaranteed to be supported by all programs. POSIX Grep , for example, includes the -E flag, but while each one of grep 'ee*'grep -E 'e+'grep '\(.\)\1' is compliant, grep -E '(.)\1' is not. Likewise, there are reports that concretely illustrate that BSD does not follow the extension: [In FreeBSD] sed -E '/(.)\1/d' removes lines that have a 1 after some other character. whereas GNU sed would treat that as an back-reference and remove lines containing two equal and adjacent characters. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/633181",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/435780/"
]
} |
633,220 | I have read about specifying "10#", but I don't think it's my case as I am not doing number comparison. I am trying to create an associative array in Bash, and the code worked fine until today (2021-02-08): dailyData["$today"]="$todayData" $today is a day in ISO format, $todayData is not relevant. I am getting the error 2021-02-08: value too great for base (error token is "08") . Why does Bash interpret this date format as a number, where an arbitrary string does the job (associative array key)? What if I wanted to use just "08" as dictionary key? | It's because dailyData is being automatically created as indexed array rather than an associative array. From man bash : An indexed array is created automatically if any variable is assignedto using the syntax name[subscript]=value . The subscript is treated asan arithmetic expression that must evaluate to a number. The issue goes away if you explicitly declare dailyData as an associative array: $ declare -A dailyData[2021-02-08]="$todayData"$ declare -p dailyDatadeclare -A dailyData=([2021-02-08]="" ) | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/633220",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/423407/"
]
} |
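A minimal script illustrating the answer above; the failing assignment is left commented out so the sketch runs cleanly, and the stored value is just a placeholder:

    #!/usr/bin/env bash
    today=2021-02-08
    todayData="placeholder value"

    # Without a declaration, bash auto-creates an *indexed* array and evaluates the
    # subscript arithmetically, so "08" is rejected as an invalid octal number:
    #dailyData["$today"]="$todayData"   # -> value too great for base (error token is "08")

    declare -A dailyData               # associative array: the key stays a plain string
    dailyData["$today"]="$todayData"
    declare -p dailyData               # declare -A dailyData=([2021-02-08]="placeholder value" )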
633,260 | I am fairly new to shell scripting. I have a script similar to the following, while [ true ]do sleep 5m # PART X: code that executes every 5 minsdone While this is running (1) how can I intercept the program from outside and (2) stop the sleep process and execute PART X right away? Can I use SIGNALS for that purpose? Or is there a better way? Can you point out a general direction to solving these type of issues? | Handling the timing is the script's responsibility. Even if that means using /bin/sleep today, it might not in the future, so killing that isn't actually guaranteed to work long-term. Well, I guess you can make that guarantee, but it's neater not to. My point is you shouldn't kill the sleep directly from outside the script, since the sleep is an implementation detail. Instead, have your script trap a different signal, like SIGUSR1 and send that to your script. A simple example might look like #!/usr/bin/env bashkill_the_sleeper () { # This probably isn't really needed here # If we don't kill the sleep process, it'll just # hang around in the background until it finishes on its own # which isn't much of an issue for "sleep" in particular # But cleaning up after ourselves is good practice, so let's # Just in case we end up doing something more complicated in future if [ -v sleep_pid ]; then kill "$sleep_pid" fi}trap kill_the_sleeper USR1 EXITwhile true; do # Signals don't interrupt foreground jobs, # but they do interrupt "wait" # so we "sleep" as a background job sleep 5m & # We remember its PID so we can clean it up sleep_pid="$!" # Wait for the sleep to finish or someone to interrupt us wait # At this point, the sleep process is dead (either it finished, or we killed it) unset sleep_pid # PART X: code that executes every 5 minsdone Then you can cause PART X to run by running kill -USR1 "$pid_of_the_script" or pkill -USR1 -f my_fancy_script This script isn't perfect by any means, but it should be decent for simple cases at least. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/633260",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/402053/"
]
} |
633,322 | Real dumb question, but suppose a computer with Linux does not have an encrypted hard drive. If I generated a hash with "openssl passwd" , couldn't I just run a live version of Ubuntu and add my hash to the /etc/shadow file? Or is /etc/shadow encrypted even if the hard drive isn't? | What prevents me from just editing the /etc/shadow file in unencrypted systems? Nothing, there is no specific protection for /etc/shadow . Some systems might have tampering detection, so the system administrator would know that /etc/shadow was changed (unless you also overrode the tampering detection, typically by updating it so it considered your modified /etc/shadow as correct), but nothing stops you from editing files in an unencrypted file system. Encrypting the drive (or the partition holding /etc/shadow ) is sufficient to prevent such attacks, but not to prevent more sophisticated attacks. Full protection against attacks involving physical access is still not quite there, although Secure Boot and TPM measurements do make successful attacks much harder. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/633322",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/455116/"
]
} |
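For reference, the mechanics the question describes are the standard offline password-reset procedure. A rough sketch, assuming the target root filesystem is mounted at /mnt and that the local OpenSSL is 1.1.1 or newer (needed for the -6 option); the user name is only an example:

    openssl passwd -6 'new password'    # prints a $6$... SHA-512 crypt hash
    # put that hash into the second field of the user's line in /mnt/etc/shadow, e.g.
    #   alice:$6$...generated-hash...:18900:0:99999:7:::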
633,345 | I have a text file like below. variable1 10 20 30 40 50 60variable2 2 4 40 3 2 1variable3 2 4 2 3 2 1 If col3>20 the output should be like below with header(variable1/2) values variable1 10 20 30 40 50 60 variable2 2 4 40 If there is no match nothing should be printed and no header as well. | What prevents me from just editing the /etc/shadow file in unencrypted systems? Nothing, there is no specific protection for /etc/shadow . Some systems might have tampering detection, so the system administrator would know that /etc/shadow was changed (unless you also overrode the tampering detection, typically by updating it so it considered your modified /etc/shadow as correct), but nothing stops you from editing files in an unencrypted file system. Encrypting the drive (or the partition holding /etc/shadow ) is sufficient to prevent such attacks, but not to prevent more sophisticated attacks. Full protection against attacks involving physical access is still not quite there, although Secure Boot and TPM measurements do make successful attacks much harder. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/633345",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/455142/"
]
} |
633,543 | I have inherited a Ubuntu 14.04 production server which needs to be upgraded to 20.04, and I would like a sandboxed version to experiment with first, hence I want to dump and restore the filesystems over the network from either a MacOS or another 14.04 virtualbox instance. An earlier version of this question is at https://askubuntu.com/q/1314747/963 . The server cannot "see" my machines so I cannot easily run dump and push the result remotely to my machine, but need to invoke ssh from my machine to run dump. ssh -t me@there "echo MYPASSWORD | sudo -S dump -y -f - /boot 2>/dev/null " > boot.dump Problem is that I've found that running this command inserts a lot of \r characters in front of \n characters which ruins the dump file so restore cannot use it. I understand that this is probably due to a driver translating linefeeds to the characters needed for printing, but I do not see where this is triggered. How should I do this to get the correct binary dump file? | It's the ONLCR .c_oflag termios setting which is causing the newline ( \n ) to be turned into carriage-return/newline ( \r\n ) by the pseudo-terminal allocated by ssh on the remote machine (because of ssh's -t option). Turn it off with stty -onlcr : ssh -t me@there 'stty -onlcr; ...' > output | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/633543",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/4869/"
]
} |
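Applied to the command from the question, the fix looks like the first line below; the second line is an alternative sketch that drops the pseudo-terminal entirely, which also avoids the output mangling but assumes the remote sudo does not insist on a TTY:

    ssh -t me@there "stty -onlcr; echo MYPASSWORD | sudo -S dump -y -f - /boot 2>/dev/null" > boot.dump
    ssh me@there "echo MYPASSWORD | sudo -S dump -y -f - /boot 2>/dev/null" > boot.dump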
633,648 | I have a fileA.txt with many lines and I want to replace specific lines with other specific lines from a second file fileB.txt which also has many lines For example:fileA.txt ItalyKoreaUSAEnglandPeruJapanUruguay fileB.txt ArgentinaSwitzerlandSpainGreeceDenmarkSingaporeThailandColombia I want to replace line 2, 4, 5 and 7 in the first file with lines 1, 2, 5 and 8 from the second file: Output: ItalyArgentinaUSASwitzerlandDenmarkJapanColombia I guess I can do it with awk or sed but if awk this code does not seems provide the information of the lines of the second file: awk 'NR==FNR{ a[$2]=$1; next }FNR in a{ $0=a[FNR] }1' fileA.txt fileB.txt Any suggestion? | Using awk : awk -v a='2,4,5,7' -v b='1,2,5,8' 'BEGIN { split(a, ax, ","); split(b, bx, ","); for(n in ax) mapping[ bx[n] ] =ax[n];};NR==FNR { if (FNR in mapping) hold[ mapping[FNR] ]=$0; next; };{ print (FNR in hold)? hold[FNR]: $0; }' fileB fileA Here, we pass line numbers as an awk -v ariable in a='...' (for fileA) and b='...' (for fileB), then we split() them into an array on comma character as the separator (note that a and b were variables, while now ax and bx are arrays). then we build a another mapping array from ax and bx arrays to map the lines which those should be replaced in fileA with the ones from fileB; now keys (or indexes) of the mapping array is line numbers of the fileB and the values of these keys are the line numbers of the fileA, as below: the mapping array is: Key Value1 22 45 58 7 so now what we need, that is, just to read the line numbers from fileB that match with the keys above (FNRs of 1 , 2 , 5 and 8 ), so we do that with: NR==FNR { if (FNR in mapping) hold[ mapping[FNR] ]=$0; next; }; OK, now what is the value of the mapping[FNR] ? if you check the mapping array above, that would be: mapping[1] --> 2; then-we-have hold[ mapping[1] ] --> hold[2]=$0mapping[2] --> 4; then-we-have hold[ mapping[2] ] --> hold[4]=$0mapping[5] --> 5; then-we-have hold[ mapping[5] ] --> hold[5]=$0mapping[8] --> 7; then-we-have hold[ mapping[8] ] --> hold[7]=$0 so we used the value of mapping array as the key for the hold array and hold array is now contains: Key Value2 Argentina4 Switzerland5 Denmark7 Colombia now the last step is to use keys in hold array as the matched line number in fileA and replace that lines with the values of that key from the hold array if that line number found in the array or print the line itself if not found (Ternary operator: condition? if-true : if-false ), and we do that with: { print (FNR in hold)? hold[FNR]: $0; } | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/633648",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/361114/"
]
} |
633,730 | I need to concatenate a lot of .txt files together. I use this command: cat *.txt > newfile.txt I noticed that some of these files are empty. How can I add a check to the script so that cat skips these empty files? Thank you. | Not really necessary, but if you need to rule out empty files: for i in *.txt; do [ "$i" != newfile.txt ] && [ -s "$i" ] && cat -- "$i"; done >newfile.txt The -s test will be true if the given file exists and is not empty (this is a standard test; see man test ). We also avoid processing the output file itself. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/633730",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/393805/"
]
} |
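An equivalent sketch with find, for cases where the size check is easier to express there; -maxdepth is a GNU/BSD extension, and unlike the glob loop find does not sort the names, so add a sort stage if the concatenation order matters:

    find . -maxdepth 1 -type f -name '*.txt' ! -name newfile.txt -size +0c -exec cat {} + > newfile.txt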
633,758 | I am trying to run a command from server A to get the size of a directory located on server B. I got a blank result using this command: ssh root@remoteIP -t "size=$(du -hs /var/www/somesite.com/dir1/dir2/uploads/ | awk '{print $2}'); echo ${size} ; bash --login" I would appreciate any help. | Not really necessary, but if you need to rule out empty files: for i in *.txt; do [ "$i" != newfile.txt ] && [ -s "$i" ] && cat -- "$i"; done >newfile.txt The -s test will be true if the given file exists and is not empty (this is a standard test; see man test ). We also avoid processing the output file itself. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/633758",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/455484/"
]
} |
633,764 | Suppose I have a couple of line formats: this:foo ...that: foo ... And I want to run two different routines for one form or the other: cat text_file \| awk -F ':' '/:foo/ { # do this } # else print the whole line unchanged' | awk -F ':' '/ +foo +/ { # do that } # else print the whole line unchanged' Ideally, I could write: cat text_file \| awk -F ':' ' /:foo/ { ... } / +foo +/ { ... } ... /.*/ { print }' But things seem to be behaving a lot differently than I expect. Is there a particular way to do this? | Not really necessary, but if you need to rule out empty files: for i in *.txt; do [ "$i" != newfile.txt ] && [ -s "$i" ] && cat -- "$i"; done >newfile.txt The -s test will be true if the given file exists and is not empty (this is a standard test; see man test ). We also avoid processing the output file itself. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/633764",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/288274/"
]
} |
633,847 | I just came across the FAQs for no-cost Red Hat Enterprise Linux and looks like they are giving Red Hat Enterprise Linux (RHEL) for free for individual users, since CentOS has been my desktop for years and now it is going to reach EOL in 2021. I am wondering what are my options here as an individual to keep using CentOS/Red Hat as my desktop. Is RHEL a replacement for CentOS 8 by Red Hat (a free offering for individual users)? Is it better to install this no-cost Red Hat rather than installing CentOS Stream? | If you want to practice RHEL, you might as well use RHEL, which is now available for no-cost production use, in a personal capacity, for small workloads of up to 16 nodes or instances (in addition to the no-cost development-use developer subscriptions which have been available for a few years). CentOS Streams gives access to what will eventually become the next point-release of RHEL (8.4 as of this writing). It’s great if you want to anticipate upcoming changes in RHEL; its appropriateness for other uses depends on a number of factors. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/633847",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/8032/"
]
} |
634,046 | Often I find my self making single character aliases, because after all, they exist to save input time. I'm curious if this should be avoided. I do not know of any conflicts. | Things to avoid: standard or common commands with single character names: w (show logged in users' activity), X (X Window System server), R (R programming language interpreter), [ (similar to test ) builtins of your shell or of common shells: [ , . , : , - , r shell keywords: { , } , ! ? and * wildcard characters special characters in the shell syntax: `"$&();'#~|\<> , (also ^ , % in some shells), SPC, TAB, NL (and other blanks with some shells) better avoid non-ASCII characters (as those have different encoding depending on the locale) better avoid control characters (beside TAB and NL already mentioned above) as they're not that easy to enter, and depending on context, not always visible, or with different representations. Only zsh will let you define and use an alias for the NUL character. bash lets you define an alias for ^A (the control character with byte value 1) but not use it apparently. To find commands with single character names: bash : compgen -c | grep -x . | sort -u (also includes keywords, assumes command names don't contain newline characters) zsh : type -m '?' (or type -pm '?' if you don't want functions/aliases/builtins/keywords). Debian or derivatives: to find any command in any package with single character name: $ apt-file find -x '/s?bin/.$'coreutils: /usr/bin/[e-wrapper: /usr/bin/epython3-q-text-as-data: /usr/bin/qr-base-core: /usr/bin/Rr-base-core: /usr/lib/R/bin/Rr-cran-littler: /usr/bin/rr-cran-littler: /usr/lib/R/site-library/littler/bin/rwims: /var/lib/wims/public_html/bin/cxserver-xorg-core: /usr/bin/X | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/634046",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/420883/"
]
} |
634,048 | I am new to shell world, writing a simple script to pull files from more that 300 servers. Wanted to know if I am writing like below then it will login to all 300 servers in one go and pull files or it will go one by one. Also I have passwordless login for one user that user I can mention in $username or I need to create other script for that. #!/bin/bashcd /backupfor server in $(cat server.txt)doscp -r $username@$server:/tmp/backup/*.txt* .done | Things to avoid: standard or common commands with single character names: w (show logged in users' activity), X (X Window System server), R (R programming language interpreter), [ (similar to test ) builtins of your shell or of common shells: [ , . , : , - , r shell keywords: { , } , ! ? and * wildcard characters special characters in the shell syntax: `"$&();'#~|\<> , (also ^ , % in some shells), SPC, TAB, NL (and other blanks with some shells) better avoid non-ASCII characters (as those have different encoding depending on the locale) better avoid control characters (beside TAB and NL already mentioned above) as they're not that easy to enter, and depending on context, not always visible, or with different representations. Only zsh will let you define and use an alias for the NUL character. bash lets you define an alias for ^A (the control character with byte value 1) but not use it apparently. To find commands with single character names: bash : compgen -c | grep -x . | sort -u (also includes keywords, assumes command names don't contain newline characters) zsh : type -m '?' (or type -pm '?' if you don't want functions/aliases/builtins/keywords). Debian or derivatives: to find any command in any package with single character name: $ apt-file find -x '/s?bin/.$'coreutils: /usr/bin/[e-wrapper: /usr/bin/epython3-q-text-as-data: /usr/bin/qr-base-core: /usr/bin/Rr-base-core: /usr/lib/R/bin/Rr-cran-littler: /usr/bin/rr-cran-littler: /usr/lib/R/site-library/littler/bin/rwims: /var/lib/wims/public_html/bin/cxserver-xorg-core: /usr/bin/X | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/634048",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/356506/"
]
} |
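On the question itself: the loop as written contacts the servers one at a time, each scp finishing before the next starts. A rough sketch of a concurrent variant, assuming key-based login for a placeholder account name and one sub-directory per server so same-named files cannot overwrite each other (with 300 hosts you may also want to cap the number of simultaneous jobs):

    #!/bin/bash
    cd /backup || exit 1
    while IFS= read -r server; do
        mkdir -p -- "$server"
        scp -r "backupuser@$server:/tmp/backup/*.txt*" "$server/" &
    done < server.txt
    wait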
634,112 | I've been having trouble with some network configuration lately which has been tricky to resolve. It seems this would be much easier to diagnose if I knew which direction the traffic was failing to get through. Since all ping requests receive no responses back I'd like to know if the ping-request packets are getting through and the responses failing, or if it's the requests themselves that are failing. To be clear, standard utilities like ping and traceroute rely on sending a packet out from one machine and receiving a packet in response back to that same machine. When no response comes back, it's always impossible to tell if the initial request failed to get through, or the response to it was blocked or even if the response to it was simply never sent. It's this specific detail, "which direction is the failure", that I'd like to analyse. Are there any utilities commonly available for Linux which will let me monitor for incoming ICMP ping requests? | tcpdump can do this, and is available pretty much everywhere: tcpdump -n -i enp0s25 icmp will dump all incoming and outgoing ICMP packets on enp0s25 . To see only ICMP echo requests: tcpdump -n -i enp0s25 "icmp[0] == 8" ( -n avoids DNS lookups, which can delay packet reporting and introduce unwanted traffic of their own.) This allows you to find if it is receiving the packets from the other machine (from which you would e.g. ping it), so the problem is with the return path, or if they directly don't arrive. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/634112",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/20140/"
]
} |
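To watch both directions at once — the echo requests arriving and any replies going back out — the filter can name both ICMP types (8 is echo request, 0 is echo reply); the interface name is the one from the example above:

    tcpdump -n -i enp0s25 'icmp[0] == 8 or icmp[0] == 0'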
634,315 | I've been struggling with this for a couple of days so I'm hoping someone on SE can help me. I've downloaded a large file from Dropbox using wget (following command) wget -O folder.zip https://www.dropbox.com/sh/.../.../dropboxfolder?dl=1 I'm sure it's a zip because 1) file dropboxfolder.zip yields dropboxfolder.zip: Zip archive data, at least v2.0 to extract , and 2) the download and extraction work fine on my Windows machine. When I try to unzip to the current directory using unzip dropboxfolder.zip , on Linux, I get the following output: warning: stripped absolute path spec from / mapname: conversion of failed creating: subdir1/creatingL subdir2/extracting: subdir1/file1.tif error: invalid zip file with overlapped components (possible zip bomb) I'm unsure what the issue is, since as I said it works fine on Windows. Since the zip is rather large (~19GB) I would like to avoid transferring it bit by bit, so I would be very thankful for any help. I've run unzip -t but it gives the same error. When listing all the elements in the archive it shows everything as it should be. Could it be an issue with the file being a tif file? | I had the exact same issue with dropbox , wget and zip . I used an alternative compression tool and extracted the file with: 7z e file.zip | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/634315",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/455986/"
]
} |
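One detail worth noting about the workaround above: 7z e extracts everything into a single directory, discarding the folder structure inside the archive, whereas 7z x preserves it — usually what you want for a Dropbox folder export:

    7z x file.zip -o./extracted    # no space between -o and the output directory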
634,364 | I have a file of patterns and I want to return all the line numbers where the pattern was found, but in a wide format and not long/spread.Example: fileA.txt GermanyUSAUK fileB.txt USAUSAItalyGermanyUKUKCanadaCanadaGermanyAustraliaUSA I have done something like this: grep -nf fileA.txt fileB.txt which returned me: 1:USA2:USA4:Germany5:UK6:UK9:Germany11:USA However, I want to have something like: Germany 4 9USA 1 2 11UK 5 6 | Using GNU datamash : $ grep -n -x -F -f fileA.txt fileB.txt | datamash -s -t : -g 2 collapse 1Germany:4,9UK:5,6USA:1,2,11 This first uses grep to get the lines from fileB.txt that exactly matches the lines in fileA.txt , and outputs the matching line numbers along with the lines themselves. I'm using -x and -F in addition to the options that are used in the question. I do this to avoid reading the patterns from fileA.txt as regular expressions ( -F ), and to match complete lines, not substrings ( -x ). The datamash utility is then parsing this as lines of : -delimited fields ( -t : ), sorting it ( -s ) on the second field ( -g 2 ; the countries) and collapsing the first field ( collapse 1 ; the line numbers) into a list for each country. You could then obviously replace the colons and commas with tabs using tr ':,' '\t\t' , or with spaces in a similar way. $ grep -n -x -f fileA.txt -F fileB.txt | datamash -s -t : -g 2 collapse 1 | tr ':,' '\t\t'Germany 4 9UK 5 6USA 1 2 11 | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/634364",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/361114/"
]
} |
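If datamash is not available, the same wide layout can be produced in one awk pass over the two files — a sketch that, like the grep -x -F call above, assumes whole-line exact matches; pipe the result through sort if a fixed country order is needed:

    awk 'NR==FNR { want[$0]; next }
         $0 in want { hits[$0] = hits[$0] "\t" FNR }
         END { for (c in hits) print c hits[c] }' fileA.txt fileB.txt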
634,415 | I was reading about filesystems and a few questions came up to mind. Q. If files are an integral part of unix/linux (i.e. to represent processes in /proc or device files in /dev ) as in a famous saying ' everything is a file ', do they exist outside of the context of filesystem? I feel like some files such as network socket files or block device files are filesystem independent and more like part of the OS itself. Follow-up Q. Can unix/linux function without filesystems? For example, can a Linux system work by accessing the secondary storage manually? | Yes. And no. Maybe. Not everything is a file; obviously a hard drive can't contain a partition that then contains a filesystem, that then contains the hard drive itself. It's just that a number of things are accessible through names visible through the filesystem tree. As far as the filesystem is concerned (the logical filesystem, or a concrete one in the data structure sense, like ext4), it's just that some files are marked as "device nodes" for some particular numbered device. But their functionality is implemented by separate drivers. When a process access them, the OS just diverts the access to the appropriate driver, not to the filesystem. Think of them as references, or pointers, or such. With that in mind, it's easy to understand that e.g. /proc is in no way mandatory. The system can function and it can run processes even without it. You just won't have that method of viewing those processes. But stuff like fork() , kill() and wait*() will still work, since they refer to processes by PID, not by some filesystem name. Network sockets also don't appear as named files, in general. Unix domain sockets can do that, but IIRC they don't have to. And TCP or UDP sockets etc. just don't. But network sockets do appear as file descriptors to processes, and the read() and write() system calls work with them the same as with pipes or "real" files. So in some sense, network sockets also walk, talk and quack like files, even though the network protocols have not much to do with storing bits on a disk. But, as far as I can think of, you don't really have a way to refer to an arbitrary hard drive without those named device nodes. Your hardware still exists, the system still has the necessary SATA/USB/whatever drivers required to work with them, but you have no way of telling it to do so. Though you could mount a filesystem, and then remove the device node pointing to the device the filesystem was on. There's no problem here, since the device node is just a way for userspace to access the device. You asked, "Can unix/linux function without filesystems?". Linux doesn't run without a file system, for one, because it starts userspace by looking for an executable file to run (eventually running what stays around as init ). That filesystem doesn't need to be one on a regular disk drive, though, it can be the special rootfs filesystem the kernel sets up from data included with the kernel image. (Incidentally, you can't get rid of rootfs. It's always there, even if empty, so that the kernel doesn't need to deal with the idea of not having any mounted filesystems.) See ramfs-rootfs-initramfs.txt in the kernel documentation for the details on rootfs. I suppose we could assume some hypothetical OS that could function without the filesystem, but e.g. 
the execve() system call takes a file name, so whatever was running couldn't launch other programs (the one running would need to become loaded some other way), and without the named device nodes, accessing storage would also be hard. It wouldn't look a lot like other Unixen anyway. On Linux, it might be possible to engineer an oddball single-purpose system that would launch a single userspace program from rootfs at bootup, then clear up rootfs and never mount any other filesystems. That would get as close to having no filesystem as possible, and the program could still run and e.g. access the network. I doubt it would have any practical use, though, and as usual, any open files would still exist until closed, so removing their names might not be very useful. See also Does the Linux kernel need a file system to run? , parts of the answers to which I echoed above. For a longer discussion on that "everything is a file" mantra, see this answer on A layman's explanation for "Everything is a file" — what differs from Windows? . | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/634415",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/451495/"
]
} |
634,435 | I have a fileA.txt: RS0255_RS0083:115,124,129,141,143,168,170,180RS0343_RS0083:112,113,163,175,181RS0343_RS0255:94,101,107,164,179,183 I would like to perform mathematical operations on the numbers after :For example, I want to add 10 to each number:Output: RS0255_RS0083:125,134,139,151,153,178,180,190RS0343_RS0083:122,123,173,185,191RS0343_RS0255:104,111,117,174,189,193 I know how to do it in R but how to do this mathematical operation on numbers in a file in perl or awk ? | There are quite literally dozens of tools you can use for different manipulation of text files. For the specific case you mention, I would probably use perl: $ perl -pe 's/\b(\d+)\b/$1 + 10/ge' fileA.txt RS0255_RS0083:125,134,139,151,153,178,180,190RS0343_RS0083:122,123,173,185,191RS0343_RS0255:104,111,117,174,189,193 | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/634435",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/361114/"
]
} |
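The same +10 adjustment in awk, treating the text before the colon as a label and the rest as a comma-separated list — a sketch equivalent to the perl one-liner above:

    awk -F: -v OFS=: '{
        n = split($2, a, ",")
        out = ""
        for (i = 1; i <= n; i++) out = out (i > 1 ? "," : "") (a[i] + 10)
        $2 = out
        print
    }' fileA.txt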