source_id (int64) | question (string) | response (string) | metadata (dict)
---|---|---|---|
634,461 | It seems that the most recent Evince release doesn't allow you to clear all of the "recent documents" list from the GUI. That list shows up when you open Evince without giving it a document to open. What I tried: Finding a "Settings" button, but there's nothing apart from an "Open..." button and the usual window buttons. Right-clicking everywhere on the GUI; nothing shows up. Removing the listed documents one by one by right-clicking every document. Nothing. Trying Firefox's way of bringing up the menu bar. Still nothing. I don't remember Evince being so limited a few years ago, and images of past versions do show a menu bar and more options. What happened since then, with GNOME developers making every GUI work the Windows 8/Mac OS/tablet way? So the question is: How do I clear Evince's (flatpaked) document list? | Evince uses the shared GNOME recent document list. To clear that, open the privacy settings and click on “Usage & History”: the “Clear Recent History” button will clear the document list. You can also disable history entirely, or specify how long history entries should be kept. If you’d rather not use the UI, or are not able to, the following Python script will clear the list for you:

    #!/usr/bin/python3
    import gi, sys
    gi.require_version('Gtk', '3.0')
    from gi.repository import Gtk, GLib
    rec_mgr = Gtk.RecentManager.get_default()
    rec_mgr.purge_items()
    GLib.idle_add(Gtk.main_quit)
    Gtk.main()

It has fewer dependencies than gnome-control-center. To run this against your Flatpak installation of Evince, save the Python script in a file named clear-recent somewhere, make it executable, and run flatpak run --command=/path/to/clear-recent org.gnome.Evince . This will clear the recent documents list in Evince in Flatpak. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/634461",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/88282/"
]
} |
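As a hedged aside to the answer above: the shared GTK recent-files list is normally backed by a plain file, so emptying that file achieves much the same as RecentManager.purge_items(). The paths below are assumptions about a default setup (the Flatpak location in particular depends on the app using its own sandboxed data directory) and may differ on your system.

```sh
# Empty the GTK recent-files store instead of purging it through the API.
# Paths are assumptions for a default setup; adjust to your installation.
truncate -s 0 ~/.local/share/recently-used.xbel                       # host GTK applications
truncate -s 0 ~/.var/app/org.gnome.Evince/data/recently-used.xbel     # sandboxed Evince, if this file exists
```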
634,462 | I was attempting to upgrade from debian 9 to 10 but the installation failed when attempting to install systemd-sysv_241-7~deb10u6_amd64.deb . I get the following error: My attempts to run apt --fix-broken install have not been successful and lead to the same error below. (Reading database ... 59371 files and directories currently installed.)Preparing to unpack .../systemd-sysv_241-7~deb10u6_amd64.deb ...Unpacking systemd-sysv (241-7~deb10u6) ...dpkg: error processing archive /var/cache/apt/archives/systemd-sysv_241-7~deb10u6_amd64.deb (--install): trying to overwrite '/usr/share/man/man8/halt.8.gz', which is also in package sysvinit 2.88dsf-41+deb7u1Processing triggers for man-db (2.7.6.1-2) ...Errors were encountered while processing: /var/cache/apt/archives/systemd-sysv_241-7~deb10u6_amd64.deb In my attempts to isolate, I get the following when I run the failed command in verbose mode: # dpkg --debug=77777 -i /var/cache/apt/archives/systemd-sysv_241-7~deb10u6_amd64.deb...D000040: ok 2 msgs >><<D010000: check_triggers_cycle pnow=man-db:amd64D020000: check_triggers_cycle pnow=man-db:amd64 firstProcessing triggers for man-db (2.7.6.1-2) ...D000002: fork/exec /var/lib/dpkg/info/man-db.postinst ( triggered /usr/share/man )D000001: ensure_diversions: same, skippingD020000: post_postinst_tasks - trig_incorporateErrors were encountered while processing: /var/cache/apt/archives/systemd-sysv_241-7~deb10u6_amd64.deb Unfortunately, this 'verbose' debugging is too terse for me and I am stuck mid-upgrade. I tried running the man-db post install as follows: sh -x /var/lib/dpkg/info/man-db.postinst configure 2.6.7.1-2 and it completed successfully without error so I am unsure what the error is to be able to try and fix. I know that the installation scripts are located in ls /var/lib/dpkg/info , but I do not know which are related to this package. Can anyone tell me where to get more detail to debug this more thoroughly and fix it? | I don't see how you could get more relevant information than you already have.The error message says: trying to overwrite '/usr/share/man/man8/halt.8.gz', which is also in package sysvinit 2.88dsf-41+deb7u1 this is basically it: two packages want to install the same file, and debian forbids this (as a single file cannot have two different contents). Since the file in question is only a manpage, there shouldn't be any real problem (as in: catastrophic problem that leads to an unbootable system), regardless which of the two packages "wins". So I would personally just do a forced installation of the broken package: # dpkg --force-overwrite -i /var/cache/apt/archives/systemd-sysv_241-7~deb10u6_amd64.deb and after that restart the upgrade. Please do note however, that the --force-*** options of dpkg are usually considered dangerous and you shouldn't just blindly force things by copying shell snippets from the internet without understanding the implications. Debian upgrades OTOH, Debian spends a lot of blood , sweat and tears into making systems smoothly upgradable between Debian releases (e.g. 9 to 10 ).So why does it not work for you? You should use apt-get dist-upgrade to upgrade between major releases (as this relaxes the resolver and allows to upgrade more complicated situtations than a simple apt-get upgrade ). You should also make sure to remove cruft from older installations.E.g. your conflicting sysvinit package has a version number 2.88dsf-41+deb7u1 which indicates that it is from Debian 7 .And indeed, there hasn't been a sysvinit package since Debian 8 . 
So you should first make sure that you actually run a Debian 9 system before you try to upgrade it to Debian 10. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/634462",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/390903/"
]
} |
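Before reaching for --force-overwrite, it can be worth confirming the facts the answer relies on: which installed package owns the conflicting man page, and which Debian release the system is really running. A small sketch of those checks (run as root where needed):

```sh
dpkg -S /usr/share/man/man8/halt.8.gz   # which installed package owns the conflicting file
cat /etc/debian_version                 # the release actually installed
dpkg -l sysvinit                        # is the Debian 7 era sysvinit package still present?
apt-get dist-upgrade                    # the recommended command for moving between major releases
```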
634,555 | I am using the below SED command: sed '/cell.* '"alatch {"'/,/^}/p' -n file Input file is as under: cell abc { pins on T { a b c } }cell xyz { pins on T { x y z } }cell alatch { pins on T { VSS VDDPI VDDP } pins on L { IN CT CB } pins on R { OUT } inputs { CB CT IN } outputs { OUT }}cell alatch { pins on T { VSS VDDPI VDDP } pins on L { IN CT CB } pins on R { OUT } inputs { CB CT IN } outputs { OUT }} Output is as under: cell alatch { pins on T { VSS VDDPI VDDP } pins on L { IN CT CB } pins on R { OUT } inputs { CB CT IN } outputs { OUT }}cell alatch { pins on T { VSS VDDPI VDDP } pins on L { IN CT CB } pins on R { OUT } inputs { CB CT IN } outputs { OUT }} Expected out is as under: cell alatch { pins on T { VSS VDDPI VDDP } pins on L { IN CT CB } pins on R { OUT } inputs { CB CT IN } outputs { OUT }} What is needed is that only the first occurrence should be the output. Any suggestion for command? | Assuming you want the first of the two identical blocks: $ sed '/cell alatch {/,/^}/!d; /^}/q' filecell alatch { pins on T { VSS VDDPI VDDP } pins on L { IN CT CB } pins on R { OUT } inputs { CB CT IN } outputs { OUT }} The /cell alatch {/,/^}/ range is the range of lines that you want to get as output. The sed expressions first deletes all lines not in this range, and then quits as soon as a } is found at the start of a line. The q instruction will cause sed to terminate after it outputs the current line, so the final } will get printed. Executing the d instruction immediately skips to the next input line and branches back to the start of the editing script, so the q instruction has no way of executing unless it's in the range which does not cause d to execute. With awk , achieving the same effect with code that should be reminiscent of the sed code above: $ awk '/cell alatch {/,/^}/ { print; if ($0 ~ /^}/) exit }' filecell alatch { pins on T { VSS VDDPI VDDP } pins on L { IN CT CB } pins on R { OUT } inputs { CB CT IN } outputs { OUT }} Actually, this is closer to the sed command sed -n '/cell alatch {/,/^}/{ p; /^}/q; }' file which does the same thing. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/634555",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/98346/"
]
} |
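If you ever need a specific occurrence rather than the first, the same range idea extends naturally with a counter in awk; a sketch (the block number 2 is only an example):

```sh
# print only the N-th "cell alatch" block; here N=2
awk -v n=2 '/cell alatch {/ { c++ }
            c == n { print }
            c == n && /^}/ { exit }' file
```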
634,622 | I have a space-separated text file, e.g. text1a text2a id1 text4a text5atext1b text2b id2 text4b text5btext1c text2c id3,id4 text4c text5ctext1d text2d id5,id6,id7 text4d,text4di text5d The file is about 1.5 million lines long. Some lines have two ids, separated by a comma, e.g. line 3 in the example. This is causing issues when attempting to join the file with another file in which the id could either be id3 or id4 . I would like to find all instances of column 3 in which a comma is present, and separate whatever is on either side into separate lines, e.g the above file would turn into. text1a text2a id1 text4a text5atext1b text2b id2 text4b text5btext1c text2c id3 text4c text5ctext1c text2c id4 text4c text5ctext1d text2d id5 text4d,text4di text5dtext1d text2d id6 text4d,text4di text5dtext1d text2d id7 text4d,text4di text5d There are rows that contain 3 or more comma-separated ids. Commas can appear in other columns but they should stay as they are. The order does not matter, e.g. whether id3 or id4 come first in the file. I'm fairly inexperienced with awk , sed etc, which I assume is the best tool for this job. Would anyone be able to point me in the right direction please? | $ awk 'split($3,f,/,/)>1{for (i=1; i in f; i++) {$3=f[i]; print} next } 1' filetext1a text2a id1 text4a text5atext1b text2b id2 text4b text5btext1c text2c id3 text4c text5ctext1c text2c id4 text4c text5ctext1d text2d id5 text4d,text4di text5dtext1d text2d id6 text4d,text4di text5dtext1d text2d id7 text4d,text4di text5d The above preserves the order of the ids listed in $3, if that's not desirable then you can do for (i in f) instead of for (i=1; i in f; i++) . Only executing the block containing the loop where $3 is assigned if split() returned more than 1 is more efficient than doing the assignment unconditionally because every time you assign to a field you force awk to reconstruct the current record replacing all FSs with OFSs. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/634622",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/264395/"
]
} |
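Once every id is on its own line, the join mentioned in the question can be done with standard tools. A sketch, assuming a second file (here called ids.txt, a made-up name) whose first whitespace-separated column is the id:

```sh
# expand multi-id rows, then join on the id (field 3 of file, field 1 of ids.txt);
# join(1) requires both inputs to be sorted on the join field
awk 'split($3,f,/,/)>1{for (i=1; i in f; i++) {$3=f[i]; print} next } 1' file | sort -k3,3 > expanded.txt
sort -k1,1 ids.txt > ids.sorted.txt
join -1 3 -2 1 expanded.txt ids.sorted.txt
```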
634,710 | What exactly is a "stable" Linux distribution and what are the (practical) consequences of using an "unstable" distribution? Does it really matter for casual users (i.e. not sysadmins) ? I've read this and this but I haven't got a clear answer yet. "Stable" in Context: I've seen words and phrases like "Debian Stable" and "Debian Unstable" and things like "Debian is more stable than Ubuntu". | In the context of Debian specifically, and more generally when many distributions describe themselves, stability isn’t about day-to-day lack of crashes, it’s about the stability of the interfaces provided by the distribution , both programming interfaces and user interfaces. It’s better to think of stable v. development distributions than stable v. “unstable” distributions. A stable distribution is one where, after the initial release, the kernel and library interfaces won’t change. As a result, third parties can build programs on top of the distribution, and expect them to continue working as-is throughout the life of the distribution. A stable distribution provides a stable foundation for building more complex systems. In RHEL, whose base distribution moves even more slowly than Debian, this is described explicitly as API and ABI stability . This works forwards as well as backwards: thus, a binary built on Debian 10.5 should work as-is on 10.9 but also on the initial release of Debian 10. (This is one of the reasons why stable distributions never upgrade the C library in a given release.) This is a major reason why bug fixes (including security fixes) are rarely done by upgrading to the latest version of a given piece of software, but instead by patching the version of the software present in the distribution to fix the specific bug only. Keeping a release consistent also allows it to be considered as a known whole, with a better-defined overall behaviour than in a constantly-changing system; minimising the extent of changes made to fix bugs helps keep the release consistent. Stability as defined for distributions also affects users, but not so much through program crashes etc.; rather, users of rolling distributions or development releases of distributions (which is what Debian unstable and testing are) have to regularly adjust their uses of their computers because the software they use undergoes major upgrades (for example, bumping LibreOffice). This doesn’t happen inside a given release stream of a stable distribution. This could explain why some users might perceive Debian as more stable than Ubuntu: if they track non-LTS releases of Ubuntu, they’ll get major changes every six months, rather than every two years in Debian. Programs in a stable distribution do end up being better tested than in a development distribution, but the goal isn’t for the development distribution to be contain more bugs than the stable distribution: after all, packages in the development distribution are always supposed to be good enough for the next release. Bugs are found and fixed during the stabilisation process leading to a release though , and they can also be found and fixed throughout the life of a release. But minor bugs are more likely to be fixed in the development distribution than in a stable distribution. In Debian, packages which are thought to cause issues go to “experimental”, not “unstable”. | {
"score": 7,
"source": [
"https://unix.stackexchange.com/questions/634710",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/449342/"
]
} |
634,809 | I have a pipe delimited file with date time in this format yyyymmddhhmmss as below. John|Doe|TEST|20210728120821|[email protected]|Davis|TEST|20210828120821|[email protected]|Smith|TEST|20210528120821|[email protected] I am trying to convert the string in column 4 to yyyy-mm-dd hh:mm:ss like this John|Doe|TEST|2021-07-28 12:08:21|[email protected]|Davis|TEST|2021-08-28 12:08:21|[email protected]|Smith|TEST|2021-05-28 12:08:21|[email protected] As I am new to text processing in Linux, I searched and tried using awk like this awk -F"|" '{OFS="|"; $4=strftime("%Y-%m-%d %H:%M:%S", $4); print $0}' But it didn't do the conversion as expected. | Those "timestamps" aren't seconds since the epoch as strftime() operates on, they're just dates+times with no separators between years, months, etc. You just need a simple text manipulation, not use of time functions. With GNU awk (which you're already using) for gensub(): $ awk 'BEGIN{FS=OFS="|"} {$4=gensub(/(.{4})(..)(..)(..)(..)(..)/,"\\1-\\2-\\3 \\4:\\5:\\6",1,$4)} 1' fileJohn|Doe|TEST|2021-07-28 12:08:21|[email protected]|Davis|TEST|2021-08-28 12:08:21|[email protected]|Smith|TEST|2021-05-28 12:08:21|[email protected] or with any awk: $ awk 'BEGIN{FS=OFS="|"} {$4=sprintf("%s-%s-%s %s:%s:%s", substr($4,1,4), substr($4,5,2), substr($4,7,2), substr($4,9,2), substr($4,11,2), substr($4,13,2))} 1' fileJohn|Doe|TEST|2021-07-28 12:08:21|[email protected]|Davis|TEST|2021-08-28 12:08:21|[email protected]|Smith|TEST|2021-05-28 12:08:21|[email protected] | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/634809",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/433145/"
]
} |
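If the reformatted timestamp later needs to be sorted or compared as a real time value, GNU awk can turn the same digits into epoch seconds with mktime(); a sketch that appends the epoch value as an extra column (GNU awk only):

```sh
# mktime() expects "YYYY MM DD HH MM SS"
awk 'BEGIN{FS=OFS="|"} { t = mktime(substr($4,1,4) " " substr($4,5,2) " " substr($4,7,2) " " substr($4,9,2) " " substr($4,11,2) " " substr($4,13,2)); print $0, t }' file
```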
635,016 | I know of the following transparent drive compression solutions on other operating systems: MS-DOS 6.22 Doublespace (configured by autoexec.bat or config.sys); Windows XP/7 drive compression (configured by right-clicking a folder in the file browser). How do I get transparent drive compression on Debian, Ubuntu and Linux Mint? Possibly a solution could be based on one of the following: a more modern ext5 file system in the future https://www.phoronix.com/news/MTIxNTE ; ZFS, BTRFS or the potential successor bcachefs, functionality on top of ext4; a compression like lz4 on top of ext4; Fusecompress; LessFS https://lwn.net/Articles/561650/ https://web.archive.org/web/20221214235440/https://lwn.net/Articles/561650/ | ext4 doesn't support compression; for that you need to use either Btrfs or ZFS (available in Ubuntu since 19.10, but it's still experimental). Compression can also be configured on the block device level with Device Mapper VDO, and then you could use it with ext4 (because it doesn't matter what filesystem is on top of the device), but that's currently not supported in Ubuntu. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/635016",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/456207/"
]
} |
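For completeness, this is roughly what transparent compression looks like on Btrfs; the device and mount point are placeholders, and the mount option only affects data written after it is set:

```sh
# mount a Btrfs filesystem with zstd compression (placeholder device/mountpoint)
sudo mount -o compress=zstd /dev/sdb1 /mnt/data

# or persist it in /etc/fstab:
# UUID=xxxx-xxxx  /mnt/data  btrfs  defaults,compress=zstd  0  0

# recompress files that already exist
sudo btrfs filesystem defragment -r -czstd /mnt/data
```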
635,132 | So, to make a long story short, I wrote a (python) program which opened a lot of files, writes data in it, and then deleted the files, but didn't properly close the file handles.After some time, this program halted due to lack of disk space. Auto-complete in bash failed with cannot create temp file for here-document: No space left on device" , and lsof -nP +L1 showed a ton of no-longer existing files. After killing my program, all the filehandles were closed, disk space was "free" again and everything was fine. Why did this happen? The disk space wasn't physically filled up. Or is the number of file handles limited? | Deleting a file in Unix simply removes a named reference to its data (hence the syscall name unlink / unlinkat , rather than delete ). In order for the data itself to be freed, there must be no other references to it. References can be taken in a few ways: There must be no further references to this data on the filesystem ( st_nlink must be 0) -- this can happen when hard linking. Otherwise, we'd drop the data while there's still a way to access it from the filesystem. There must be no further references to this data from open file handles (on Linux, the relevant struct file 's f_count in the kernel must be 0). Otherwise, the data could still be accessed or mutated by reading or writing to the file handle (or /proc/pid/fd on Linux), and we need somewhere to continue to store it. Once both of these conditions are fulfilled, the data is eligible to be freed. As your case violates condition #2 -- you still have open file handles -- the data continued to be stored on disk (since it has nowhere else to go) until the file handle was closed. Some programs even use this in order to simplify cleaning up their data. For example, imagine a program which needs to have some large data stored on disk for intermediate work, but doesn't need to share it with others. If it opens and then immediately deletes that file, it can use it without having to worry about making sure they clean up on exit -- the open file descriptor reference count will naturally drop to 0 on close(fd) or exit, and the relevant space will be freed whether the program exits normally or not. Detection Deleted files which are still being held open by a file descriptor can be found with lsof , using something like the following: % lsof -nP +L1COMMAND PID USER FD TYPE DEVICE SIZE/OFF NLINK NODE NAMEpulseaudi 1799 cdown 6u REG 0,1 67108864 0 1025 /memfd:pulseaudio (deleted)chrome 46460 cdown 45r REG 0,27 131072 0 105357 /dev/shm/.com.google.Chrome.gL8tTh (deleted) This lists all open files which an st_nlink value of less than one. Mitigation In your case you were able to close the file handles by terminating the process, which is a good solution if possible. In cases where that isn't possible, on Linux you can access the data backed by the file descriptor via /proc/pid/fd and truncate it to size 0, even if the file has already been deleted: : > "/proc/pid/fd/$num" Note that, depending on what your application then does with this file descriptor, the application may be varying degrees of displeased about having the data changed out from under it like this. If you are certain that file descriptor has simply leaked and will not be accessed again, then you can also use gdb to close it. 
First, use lsof -nP +L1 or ls -l /proc/pid/fd to find the relevant file descriptor number, and then: % gdb -p pid --batch -ex 'call close(num)' To answer your other question, although it's not the cause of your problem: Is the number of file [descriptors] limited? The number of file descriptors is limited, but that's not the limit you're hitting here. "No space left on device" is ENOSPC , which is what we generate when your filesystem is out of space. If you were hitting a file descriptor limit, you'd receive EMFILE (process-level shortage, rendered by strerror as "Too many open files") or ENFILE (system-level shortage, rendered by strerror as "Too many open files in system") instead. The process-level soft limit can be inspected with ulimit -Sn , and the system-level limit can be viewed at /proc/sys/fs/file-max . | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/635132",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/90771/"
]
} |
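The effect described above is easy to reproduce in a shell; a small demonstration showing that the space only comes back once the descriptor is closed:

```sh
dd if=/dev/zero of=big.file bs=1M count=100   # create a 100 MiB file
exec 3< big.file                              # keep it open on fd 3
rm big.file
df -h .                                       # the space is still counted as used
lsof -nP +L1 | grep big.file                  # shows the deleted-but-open file
exec 3<&-                                     # close the descriptor...
df -h .                                       # ...and the space is released
```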
635,327 | I am perhaps picking nits here, but it would be really good to have this question that's been bothering me answered once and for all... Text file for reprex: Line one.Line two.Line three.Line four. To add an additional empty line consistently to this text file would require two sed commands for each line. This could be achieved with any of the following syntaxes: sed -e '/^$/d' -e '$!G' <file> ... but NOT ( sed -e '/^$/d' '$!G' <file> OR sed '/^$/d' '$!G' <file> ) sed -e '/^$/d; $!G' <file> or sed -e '/^$/d ; $!G' <file> sed '/^$/d; $!G' <file> or sed '/^$/d ; $!G' <file> My questions are: Is there any real difference (universality?, compliance?...) between any of the five working syntaxes listed above? Richard Blum's Latest Command Line And Shell Scripting Bible says to use something like sed -e 's/brown/red/; s/dog/cat/' data1.txt before doling out the following advice... The commands must be separated with a semicolon (;), and thereshouldn't be any spaces between the end of the first command and thesemicolon. ...and then goes on to completely neglect his own advice by not using the -e option at all and also adding spaces between the end of a command and the semicolon (like shown in the second variant of the #3 above).. So, does the spacing around the semicolon make any real difference, at all ? Although I couldn't find info on this in the manpage or documentation, my hunch is that the -e option is meant to be used as shown in syntax number #1 above, and using both -e and ; on the command line is redundant. Am I correct? EDIT : I should have mentioned this in my original question to make it more specific; but as some people have already pointed out, these nuances would matter when using branch ( b ) or test ( t ) commands. But it's interesting to note the other cases when these would make a difference. Thanks! | Let us use the Sed POSIX standard to answer the questions. Does the spacing around the semicolon make any real difference? Editing commands other than {...}, a, b, c, i, r, t, w, :, and # can be followed by a semicolon, optional blank characters, and another editing command. Thus /^$/d ; $!G is not compliant, but /^$/d; $!G is. But I do wonder ifthere is any modern Sed implementation that would stumble on that. Is there any real difference (universality, compliance...) between any of the three syntaxes listed above? No (except for the one with spaces before the semicolon, as argued above).This is clear in the synopsis: sed [-n] script [file...] sed [-n] -e script [-e script]... [-f script_file]... [file...] Do note, however, that as the previous quote mentioned, some commands cannotbe followed by a semicolon, and then sed -e ':a' -e 's/x/y/' -e 't a' is compliant, while sed ':a;s/x/y;t a' is not, although but work the same at least in GNU Sed. My hunch is that (...) using both -e and ; on the command line is redundant. Am I correct? If you refer to the examples in the question, yes. If there is a single -e option, then just drop it and it is all the same (unless you also use the -f option (see the synopsis)). But in sed -e ':a' -e 's/x/y;t a' both -e and ; are present but they are not redundant. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/635327",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/399259/"
]
} |
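A case where the distinction really matters is the classic “join all lines” script, which uses a label and a branch; since labels and branch targets swallow the rest of the expression, the portable form needs separate -e options, while GNU sed tolerates the semicolon version:

```sh
# GNU sed accepts semicolons even after the label and the branch
sed ':a;N;$!ba;s/\n/ /g' file

# portable form: give the label and the branch as separate expressions
sed -e ':a' -e 'N' -e '$!ba' -e 's/\n/ /g' file
```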
635,804 | I am trying to delete all empty lines after line number 3, till the end of the file: cat ${File}1234556789sed -e '3,${s~^$~~g}' ${File}1234556789 Observation : No change in the output. Desired Output 1234556789 Any Suggestions? | try: sed '4,$ {/^$/d}' infile start from the 4 th line until end of the file, delete the empty lines. The problem with your command is that you replace empty line again with empty string in s~^$~~g (which is same as s~^$~~ ) and you are not deleting it. Note: also since you use different delimiter other than default slash / , to use this style ~^$~d you need to escape the first ~ to tell sed that is not part of your regex: sed -e '4,${ \~^$~d }' infile see man sed under "Addresses" about it: \c regexp c Match lines matching the regular expression regexp. The c may be any character. In case you wanted to delete empty lines as well as the lines containing only whitespaces (Tabs/Spaces), you can do: sed '4,${ /^[[:blank:]]*$/d }' infile | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/635804",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/220462/"
]
} |
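For comparison, an awk equivalent that prints the first three lines unconditionally and, after that, only non-blank lines:

```sh
awk 'NR <= 3 || !/^[[:blank:]]*$/' infile
```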
635,955 | I need to replace "..." with <...> in a file. I have used this command sed '/^#include/s/"/</g' file.c found here , but the output is incorrect. How can I fix this? Input: #include "stdlib.h"#include "graph.h" Output: #include <stdlib.h<#include <graph.h< Expected output: #include <stdlib.h>#include <graph.h> | The issue with the command is the g at the end. It will cause all double quotes to be substituted with < on each line matching ^#include . Notice how the StackOverflow question that you link to is concerned with replacing the <> with "" (which could be done using y/<>/""/ , or less efficiently using s/[<>]/"/g , on the relevant lines), not the other way around like you want. If you had used first s/"/</ (no g ) followed by s/"/> (no g here either), you would have been ok: sed '/^#include/ { s/"/</; s/"/>/; }' file.c >file-new.c The first substitution replaces the first double quote and the second substitution replaces the first one in the now modified string. To correct the file (assuming you did an in-place edit), just replace the < at the end of the line with > : sed '/^#include/ s/<$/>/' file.c >file-new.c This matches <$ (an < at the end of the line), and replaces it with > . Look at the resulting file-new.c and then replace file.c with it if it looks ok. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/635955",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/457505/"
]
} |
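With extended regular expressions the whole conversion can also be done in one substitution by capturing the header name, which avoids ending up with a half-converted file in the first place:

```sh
# -E enables ERE (older GNU sed spells it -r); \1 is the captured "#include " prefix, \2 the header name
sed -E 's/^(#include[[:blank:]]+)"([^"]+)"/\1<\2>/' file.c > file-new.c
```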
636,098 | I have been running roary pangenome pipeline, where I need to write a script in for loop, For example I have a gff files as follows, a.gffb.gff5.gff101.gffclustered_proteins I need to run a command for retrieving unique genes from roary pipeline as follows, query_pan_genome -a difference --input_set_one a.gff --input_set_two b.gff,5.gff,101.gff -g clustered_proteins query_pan_genome -a difference --input_set_one b.gff --input_set_two a.gff,5.gff,101.gff -g clustered_proteinsquery_pan_genome -a difference --input_set_one 5.gff --input_set_two a.gff,b.gff,101.gff -g clustered_proteinsquery_pan_genome -a difference --input_set_one 101.gff --input_set_two a.gff,b.gff,5.gff -g clustered_proteins For doing the same I wrote a script as follows, file1=*.gfffile2=*.gfffile3="-f "$file1-$file2"for file in *.gffdoquery_pan_genome -a difference --input_set_one "$file1" --input_set_two "$file3" -g clustered_proteins done But above script is not serving my purpose, I knew simple for script only, this is something difficult for me to write. Kindly help me to make the script perfect. Thanks in advance. | It's easier with zsh : #! /bin/zsh -files=(*.gff(N))# don't run that command if there are fewer than 2 files(( $#files < 2 )) || for f ($files) query_pan_genome -a difference \ --input_set_one $f \ --input_set_two ${(j[,])files:#$f} \ -g clustered_proteins Where ${array:#pattern} expands to the elements of the array that don't match the pattern, so here with ${files#$f} the elements of $files except $f . ${(j[,])array} joins the elements of the array with , . Instead of *.gff(N) , you may want to use *.gff(Nn) where the n glob qualifier turns on the numericglobsort option for the expansion of that one glob, so that file10.gff comes after file2.gff for instance. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/636098",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/357294/"
]
} |
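The same idea in bash, since the question's own attempt used a plain for loop; a sketch assuming the .gff file names contain no spaces or commas:

```sh
#!/bin/bash
files=( *.gff )
(( ${#files[@]} < 2 )) && exit 0          # nothing to compare against

for f in "${files[@]}"; do
    others=()
    for g in "${files[@]}"; do
        [[ $g == "$f" ]] || others+=( "$g" )
    done
    other_list=$(IFS=,; echo "${others[*]}")   # join the rest with commas
    query_pan_genome -a difference \
        --input_set_one "$f" \
        --input_set_two "$other_list" \
        -g clustered_proteins
done
```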
636,167 | In the man page of ssh, it says: -N Do not execute a remote command. This is useful for just forwarding ports. I don't understand what it means by "Do not execute a remote command." Can someone explain it to me? | Normally, the ssh program runs a command on a remote system (using the remote user's shell). For example, ssh user@server ls -l /tmp lists the content of the /tmp directory on server . When you leave the command out, as in ssh user@server , an interactive login session with the user's shell is launched. One of the features of OpenSSH is the creation of tunnels. The -D , -L and -R options use various techniques that allow the forwarding of network ports, also known as tunneling. By default, a tunnel created with ssh exists as long as the command executed by ssh runs on the remote server. Often though, you are not interested in running a remote command; all you want is the tunnel. This is what the -N option is for. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/636167",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/457749/"
]
} |
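A typical use combines -N with a port forwarding option; host names and ports below are only placeholders:

```sh
# forward local port 8080 to port 80 on the server without running a remote command;
# add -f to put ssh in the background once the tunnel is up
ssh -N -L 8080:localhost:80 user@server
```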
636,178 | I'm looking for Alpine Linux's documented approach to keeping / upgrading individual package versions within a named stable Alpine version. So assuming I'm building an image (such as docker) and I start with: FROM alpine:3.13.1 RUN apk add python3 py3-numpy Then... is it safe to assume this will always install the same major / minor version of python3 and numpy. will these receive security patches (if I rebuild the image) How does (#1 and #2) differ if I only pin to say 3.13 instead of 3.13.1 . | | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/636178",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/20140/"
]
} |
636,186 | This might be a ridiculous post, but let's say we have a file called -|h4k3r|- , why doesn't this quoting work? cat '-|h4k3r|-' I thought single quotes remove meaning of all special characters? However this one works cat -- -\|h4k3r\|- This one too: cat "./-|h4k3r|-" But I don't get why this doesn't work? cat "-|h4k3r|-" I'm only trying to understand why quoting doesn't work as expected in this example. Maybe Bash has a special meaning for '- or "- ?? | The reason is that while single quotes actually remove the meaning of special characters, this refers to variable expansion, globbing and word splitting - i.e. those characters that are special to the shell and that are interpreted by the shell before the result is passed to the program - and shell metacharacters (such as the | ). The - is not a "special character" in that sense. What makes it special is that it is a de facto standard way to indicate to a program that the string started by it is an "option" argument. In fact, many (if not most) programs in the Unix/Linux ecosystem rely on the external getopt() function for that purpose 1 , which is part of the POSIX specification and provides a standardized way to handle command-line parameters, and the convention of interpreting parameters that start with - as "options" is embedded there. So, the single quotes ensure that the -|h4ker|- is passed verbatim from the shell to the program ( cat in your case), but also removes the quotes in that process . Hence the program still thinks that since this parameter starts with a - , it should be treated as an "option" argument, not an operand (like a file name to process). This is the reason why many programs (again, all that rely on getopt() ) interpret the -- token on the command line as a special "end-of-options" indicator, so that they can safely be applied to files that start with - . Another possibility, which you already explored in your investigation, is to "protect" the filename by either stating it as an absolute filename ( '/path/to/file/-|h4ker|-' ) or prepend the current directory ( './-|h4ker|-' ), because then, the argument will no longer start with the "ambiguous" - . Note that quoting/escaping is still necessary in this example because the | is a shell metacharacter. A nice demonstration is trying to create and list a file named -l : ~$ touch '-l'touch: Invalid option -- l~$ touch -- '-l'~$ ls '-l'< ... the entire directory content in long list format ... >~$ ls -- '-l'-l~$ ls -l -- '-l'-rw-r--r-- 1 user user 0 Feb 24 17:16 -l 1 For shell scripts, there is an equivalent getopts builtin command | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/636186",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/456002/"
]
} |
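The same `--` convention is available to your own scripts through the getopts builtin mentioned in the footnote; a tiny sketch showing that everything after `--` is treated as an operand even when it starts with a dash:

```sh
#!/bin/bash
while getopts "lv" opt; do          # accept -l and -v as options
    case $opt in
        l) long=1 ;;
        v) verbose=1 ;;
    esac
done
shift $((OPTIND - 1))               # drop the parsed options (and a "--" if one was given)
printf 'operand: %s\n' "$@"         # e.g. running:  ./script -v -- -l   prints "operand: -l"
```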
636,204 | I'm looking to find all executable files that are NOT in my $PATH . Currently I'm doing this find / \( -path "/opt" -prune -o -path "/var" -prune -o -path "/bin" -prune -o -path "/sbin" -prune -o -path "/usr" -prune -o -path "/opt" \) -o -type f -executable -exec file {} \; I feel like there is a better way, I tried using a for loop with IFS=: to separate out the different parts of PATH but couldn't get it to work. Edit: I should have specified I don't want to use a script for this. | Assuming GNU find and the bash shell (as is used in the question), this is a short script that would accomplish what you're trying to do: #!/bin/bashIFS=:set -fargs=( -false )for dirpath in $PATH; do args+=( -o -path "$dirpath" )donefind / \( \( "${args[@]}" \) -o \ \( -type d \( ! -executable -o ! -readable \) \) \) -prune -o \ -type f -executable -exec file {} + This first creates the array args , consisting of dynamically constructed arguments to find . It does this by splitting the value of $PATH on colons, the value that we've given to the IFS variable. The splitting is happening when we use $PATH unquoted in the loop header. Ordinarily, the shell would invoke filename globbing on each of the words generated from the splitting of $PATH , but I'm using set -f to turn off filename globbing, just in case any of the directory paths in $PATH contains globbing characters (these would still be problematic as the -path operand of find would interpret them as patterns). If my PATH variable contains the string /usr/bin:/bin:/usr/sbin:/sbin:/usr/X11R6/bin:/usr/local/bin:/usr/local/sbin then args will be the following list (each line here is a separate element in the array, this is not really a set of strings with newline characters in-between them): -false-o-path/usr/bin-o-path/bin-o-path/usr/sbin-o-path/sbin-o-path/usr/X11R6/bin-o-path/usr/local/bin-o-path/usr/local/sbin This list is slotted into the find command invocation, in parentheses. There is no need to repeat -prune for each and every directory, as you could just use it once as I have above. I've opted for pruning any non-executable or non-readable directory. This ought to get rid of a lot of permission errors for directories that you can't access or list the contents of. Would you want to simplify the find command by removing this bit, use find / \( "${args[@]}" \) -prune -o \ -type f -executable -exec file {} + Also, I'm running file on the found pathnames in batches, rather than once per pathname. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/636204",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/457764/"
]
} |
636,215 | In my bash script, I want to update a string according to new written filenames with .dat format. Here is what I am trying to do: For example, I use a file named blabla_3200.dat in my bash script, then run a case with this script. After this case is done, a new blabla_3300.dat is written in the same directory. For the next run, I want to use this last blabla_3300.dat in the same bash script. Therefore, I have to search for the largest integer in the file names .dat , and then use sed to update my bash script like: sed -i 's/3200/max/g' mybash.sh then run a new case. Any help will be appreciated. Have a good day! Clarifying Note I should be more clear: Let's suppose i have blabla_3200.dat , submission.sh , bash.sh in same directory. I told the program in submission.sh ; read data blabla_3200.dat then start running. To call this submission.sh file to slurm machine, i command sbatch submission.sh in bash.sh Then at the end of the run, the program writes output file blabla_4500.dat in the same directory (it is unknown what is going to be written, it might be blabla_8254.dat for example). What i want is this; the code in bash.sh should update the read data command in submission.sh after each new output came. Now in submission.sh, read data blabla_4500.dat should be commanded. | | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/636215",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/457791/"
]
} |
636,220 | I am still learning linux and have been running into a slight issue with my Raspbian OS system. I have uninstalled software like docker and vs code to clear up memory for my OMV NAS; however, they still show up when I run apt-get update. To be clear, everything is functioning properly. I have already tried the following commands: apt-get clean, apt-get autoremove, and reboot. I would like to clean up this part if at all possible. Cheers! | | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/636220",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/457408/"
]
} |
636,230 | I have a very big log file with the time of a call for start and end: Bigfile.txt : 2021-02-24 14:21:34,630;START 2021-02-24 14:21:35,529;END 2021-02-24 14:57:05,600;START 2021-02-24 14:57:06,928;END 2021-02-24 15:46:45,894;START 2021-02-24 15:46:46,762;END 2021-02-24 17:49:20,925;START 2021-02-24 17:49:26,243;END 2021-02-24 18:32:18,166;START 2021-02-24 18:32:18,969;END I need to create a third file in this kind of format (made by 3 columns: START (line1 of the Bigfile), END (line2 of the Bigfile); DURATION (difference reported in seconds): Outputfile.txt : 2021-02-24 14:21:34,630;2021-02-24 14:21:35,529;0,899 2021-02-24 14:57:05,600;2021-02-24 14:57:06,928;1,328 for the entire file. Could someone help me? How can I set this job via bash script? If someone could also explain :D thanks in advance for every support. | | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/636230",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/416880/"
]
} |
636,324 | I'm learning AWK these days for text processing. But I'm very much confused about AWK syntax. I read on Wikipedia that the syntax follows this format: (conditions) {actions} I assumed I can follow the same syntax in a BEGIN and END block. But I'm getting a syntax error when I run the following script. awk 'BEGIN{}(1 == 1) {print "hello";}END{(1==1) {print "ended"}}' $1 However, if I make a little bit of a change inside the END block and add 'if' before the condition, it runs just fine. awk 'BEGIN{}(1 == 1) {print "hello";}END{if (1==1) {print "ended"}}' $1 Why is it mandatory to write 'if' in END block while it's not needed in normal blocks? | AWK programs are a series of rules, and possibly functions. Rules are defined as a pattern ( (conditions) in your format) followed by an action ; either are optional. BEGIN and END are special patterns . Thus in BEGIN {}(1 == 1) { print "hello"; }END { if (1 == 1) { print "ended" } } the patterns are BEGIN , (1 == 1) (and the parentheses aren’t necessary), and END . Blocks inside braces after a pattern (or without a pattern, to match everything) are actions . You can’t write patterns as such inside a block, each block is governed by the pattern which introduces it. Conditions inside an action must be specified as part of an if statement (or other conditional statement, while etc.). The actions above are {} (the empty action), { print "hello"; } , and { if (1 == 1) { print "ended" } } . A block consisting of { (1 == 1) { print "ended" } } results in a syntax error because (1 == 1) is a statement here, and must be separated from the following statement in some way; { 1 == 1; { print "ended" } } would be valid, but wouldn’t have the effect you want — 1 == 1 will be evaluated, then separately, { print "ended" } . | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/636324",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/247138/"
]
} |
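A quick way to see the difference between a pattern and a condition inside an action:

```sh
# pattern outside the braces selects which input lines run the action
awk 'NR == 1 { print "first line:", $0 }' file

# the same test inside an action must be written as an if statement
awk '{ if (NR == 1) print "first line:", $0 }' file

# BEGIN and END match no input lines, so only the if form is available there
awk 'END { if (NR > 0) print "read", NR, "lines" }' file
```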
636,346 | I've had a look at this (and the forum thread here ) and this . I've tried running in Python and also at the command line. I've double-checked: some files have definitely been deleted from the source, but are present in the link-dest destination. I've tried messing around with numerous options. I've tried adding forward slash to the end of the paths to see if that might make a difference. The paths in all cases are simple directories, never ending in glob patterns. I've also looked at the man pages. Incidentally, this shouldn't matter, but you never know: I'm running this under WSL (W10 OS). Nothing seems to work. By the way, the files deleted in source do get deleted (or rather not copied) in the target location (if not a dry run). What I'm trying to do is to find out what changes have occurred between the link-dest location and the source, with a view to cancelling the operation if nothing has changed. But to do that I have to be able to get a list of new or modified files and also files which have been deleted. This is the Python code I've been trying: link_dest_setting = '' if most_recent_snapshot_of_any_type == None \ else f'--link-dest={most_recent_snapshot_of_any_type[0]}'rsync_command_args = [ 'rsync', '-v', # '--progress', # '--update', '--recursive', '--times', '--delete', # '--info=DEL', '-n', link_dest_setting, source_dir, new_snapshot_path, ]print( f'running this: {rsync_command_args}') result = subprocess.run( rsync_command_args, stdout=subprocess.PIPE, stderr=subprocess.PIPE)rsync_result_stdout = result.stdout.decode( 'utf-8' )print( f'rsync_result stdout |{rsync_result_stdout}|')rsync_result_stderr = result.stderr.decode( 'utf-8' )print( f'rsync_result stderr |{rsync_result_stderr}|') Typical stdout (with dry run): rsync_result stdout |sending incremental file list./MyModifiedFile.odtsent 1,872 bytes received 25 bytes 3,794.00 bytes/sectotal size is 6,311,822 speedup is 3,327.27 (DRY RUN)| (no errors are reported in stderr ) Just found another option, -i . Using this things get quite mysterious: rsync_result stdout |sending incremental file list.d..t...... ./>f.st...... MyModifiedFile.odtsent 53,311 bytes received 133 bytes 35,629.33 bytes/sectotal size is 6,311,822 speedup is 118.10| Edit Typical BASH command: rsync -virtn --delete --link-dest=/mnt/f/link_dest_dir /mnt/d/source_dir /mnt/f/destination_dir Dry run which, in principle, should show files/dirs present under link_dest_dir but NOT present (deleted) under source_dir. I can't get this to be shown. In any event I think the Python answer is likely to be a preferable solution, because the scanning STOPS at the first detection of a difference. Edit 2 (in answer to roaima's question "what are you saving?") My "My Documents" dir has about 6 GB, and thousands of files. It takes my Python script 15 s or so to scan it, if no differences are found (shorter if one is). rsync typically takes about 2 minutes to do a copy (using hard links for the vast majority of the files). If that were found to be unnecessary, because there had been no change between the source and the link-dest location, I would then have to delete all those files and hard links. The deletion operation on its own is very expensive in terms of time. Incidentally, this is an external HD, spinning plates type. Not the slowest storage location ever, but it has the limitations it has. 
Just as importantly, because rsync does not appear to be capable, at least according to what I have found, of reporting on files which have been deleted in the source, how would I even know that this new snapshot was identical to the link-dest snapshot? In these snapshot locations I only want to keep a limited number (e.g. 5) snapshots, but I only want to add a new snapshot when it is different to its predecessor. So although the script may run every 10 minutes, the gap between adjacent snapshots may be 40 minutes, or much longer. I see you (roaima) have a high rep, and seem to specialise quite a bit in rsync . The simple question I want answering is: is it possible for rsync , on a dry run or not, to report on files/dirs deleted in the source relative to the link-dest ? If not, is this a bug/deficiency? Because the man pages certainly seem to claim (e.g. with --info=DEL ) that this should happen. | | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/636346",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/220752/"
]
} |
636,350 | I have a function prereq() that may be invocated several times but should not actually be executed more than once on the same running thread of the script by selecting other options from the menu (each option on the menu will have prereq() as part of the code): # Pre-requirementsprereq (){ echo echo "########################## CHECKING PRE-REQUIREMENTS ##########################" echo "# #" echo "Required packages: hdparm, fio, sysbench, iperf3 and sshpass" sleep 2 echo for pack in hdparm fio sysbench iperf3 sshpass; do echo "Checking and if needed install '$pack'..." if ! rpm -qa | grep -qw "$pack"; then yum install -y $pack > /dev/null else echo "$pack is already installed, skipping..." echo fi done echo "###############################################################################"echo} The function is executed like below: select opt in "${options[@]}"do case $opt in "CPU Stress Test (local)") sleep 2 prereq ===>> HERE IS! cpu cleanup echo break ;; "Memory Stress Test (local)") sleep 2 prereq ===>> HERE IS! memory cleanup echo break . . . I need to execute prereq() just once even if selecting other options from the menu that's invocate prereq() because each function can be executed once and the purpose of the script is done, and it can be exited. I am planning to have a variable on prereq() as a flag and if it's executed, the flag is checked every time that prereq() is invocated on any option from the menu. Furthermore, I appreciate any help! Thanks! | Since shell variables are global unless you declare them as local inside a function (a mean trap by the way), you can simply define a variable prereq_done=0 on top of the script and then modify the prereq() function to check for it at the beginning (exit if it is already set), and set it to 1 at the end: prereq(){ if (( prereq_done == 1 )); then return; fi < your code here > prereq_done=1} | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/636350",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/457381/"
]
} |
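Put together as a runnable skeleton (menu entries reduced to two plus Quit for brevity; the stress-test bodies are placeholders):

```sh
#!/bin/bash
prereq_done=0

prereq() {
    (( prereq_done )) && return      # already ran once in this invocation
    echo "checking pre-requirements..."
    # ... package checks go here ...
    prereq_done=1
}

options=( "CPU Stress Test (local)" "Memory Stress Test (local)" "Quit" )
select opt in "${options[@]}"; do
    case $opt in
        "CPU Stress Test (local)")    prereq; echo "running cpu test" ;;
        "Memory Stress Test (local)") prereq; echo "running memory test" ;;
        "Quit") break ;;
    esac
done
```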
636,595 | I have a JSON file which contains the following (amongst other properties): { "environment": "$USER"} I am extracting that value using jq like so: ENVIRONMENT="$(jq -r '.environment' "properties.json")" I want the value of ENVIRONMENT to be my username, not the string $USER - is this possible? | Option 1: Just read variable by name bash allows having variables that reference variables, example: REF='USER' # or get it from JSONecho "${!REF}"# prints current username So you can write this: ENVIRONMENT_REF="$(jq -r '.environment' "properties.json")"ENVIRONMENT="${!ENVIRONMENT_REF}" JSON contents: "environment": "USER" Option 2: Full shell expansion Other option, maybe better (but definitely NOT secure ) is using eval : ENVIRONMENT="$(eval "echo $(jq -r '.environment' "properties.json")")" Note that this is not secure and malicious code can be inserted to the JSON file. What malicious code? Let's look into this properties.json file: { "environment": "$(rm some_file)" } The shell will eval uate this code: echo $(rm some_file) … and the file named some_file will be deleted. As you can see, depending on the command entered there, you can cause very serious damage. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/636595",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/184722/"
]
} |
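If the JSON value may reference several environment variables but you still want to avoid eval, envsubst (shipped with GNU gettext) expands only environment variables and never runs command substitutions; a sketch:

```sh
# $USER, ${HOME}, ... are expanded; something like $(rm some_file) is left as literal text
ENVIRONMENT="$(jq -r '.environment' properties.json | envsubst)"
echo "$ENVIRONMENT"
```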
636,709 | I know this is a super common issue, but if I am here it means I have already searched and tried many roads: unsuccessfully. I am trying to install Ubuntu on a MacBookPro 13" 2019, running MacOS BigSur, in a partition (nor VM nor bootcamp). How I prepared Ubuntu live USB I have downloaded the latest stable versionfor Ubuntu at this time: Ubuntu 20.04.2.0 LTS . I then plugged in a 4GB USB and formatted it with Disk Utility as MS-DOS (FAT). Finally installed Etcher and burnt the ISO image into my USB. Restarted my Mac holding option key and booted into my USB (EFIboot). How I installed Ubuntu From the first menu I selected "Try Ubuntu" Once in the desktop nor touchpad nor keyboard worked, so I plugged in an external mouse and enabled the on-screen keyboard to complete the installation Open the Activities overview and start typing Settings. Click on Settings. Click Accessibility in the sidebar to open the panel. Switch on Screen Keyboard in the Typing section. I clicked on the Ubuntu installer icon on desktop Selected language Selected Normal Installation and checked "Install third party software" When asked for the partition where to install Ubuntu to, selected other option/else I then created 2 partitions from Free Space one of 512MB , " EFI " (from dropdown menu) one of 30GB , ext4 (from dropdown menu), mounted to / (from other drop down menu) I then made sure the disk where to install boot loader was set to the disk rather than the specific EFI partition I made (i.e. /dev/nvme0n1 and not /dev/nvme0n1p3) Clicked Install now The installation fails at the grub2 step Notes I already tried to uncheck " Install third party software" I already tried to select "Minimal Installation" rather than Normal WiFi won' t work either, so I later tried Ethernet cable and networkconnection now works I completely erased my disk and reinstalled MacOS (I have of course abackup of my files) Video of installation process Watch the video. Screenshot of last error messages | I was having exactly the same problem whilst installing Ubuntu 20.04.2.0 LTS on a 2020 MacBook Pro with a 2 GHz Quad-Core Intel Core i5 processor using macOS Big Sur Version 11.1. After spending two days trying to get it to work I finally found a solution. Be warned it is a long process. From my research I found that the issue is due to the Mac bootloader expecting the EFI partition to be formatted as HFS+ where the Ubuntu installer formats it as VFAT (as stated by Rohith Madhavan here ). To get around this issue I found three possible solutions: Use Rohith Madhavan's method . Swap your bootloader from GRUB to rEFInd . Install Ubuntu on an external SSD using Floris van Breugel's method . Option one was posted seven years ago and required adding an unsigned repository to my Ubuntu installation (which I wasn't willing to do for security reasons). I didn't understand the full implications of swapping from GRUB to rEFInd so I wasn't comfortable using option two and finally, I didn't want slow memory access by using an external SSD so I didn't want to go with option three. My final solution was to use parts of options one and three to make my own GRUB config file formatted in HFS+ so that I could boot Ubuntu from a partition on my internal SSD. Backup Whilst the process shouldn't cause you any issues, if a mistake occurs it could wipe your drive. As a result, it is always safest to back everything up before progressing. Installing Ubuntu Open up Disk Utility on your Mac. 
Select your Apple SSD drive (make sure to select the parent drive not the container). Select "Partition". Hit the plus button and create a new partition called Ubuntu Boot Loader with format Mac OS Extended (Journaled) and size 128MB. This will serve as the location for your Ubuntu bootloader later on. Hit the plus button again and create another new partition called Ubuntu with format MS-DOS (FAT) and allocate it the memory size you want your Ubuntu installation to have (I would recommend no smaller than 50GB). Download Ubuntu from here . Plug in a USB and go to Disk Utility. From here locate the USB, hit Erase , select the format MS-DOS (FAT) and choose the scheme GUID Partition Map then hit Erase . Use Etcher to flash this ISO file onto a USB. Be warned this will wipe the entire USB (see this for more details). Restart your computer hitting Cmd+R on reboot. This will put you into recovery mode. Sign into your account, go into the menu location Utilities , select the first thing in the drop down menu and change the settings to No Security and Allow booting from external drive . Turn off your computer. Plug in your bootable USB drive and turn on your computer whilst holding down the Option key. Select the EFI boot drive (should be yellow). It might show you a warning saying Update Required . Hit the Update option. This will restart your computer. Make sure you are holding Option when it turns back on. Then click on EFI boot again. Follow steps one to five from here . On the Installation Type page select Something Else . Locate the MS-DOS (FAT) partition you made and hit minus. Select Free Space and hit plus. Create your Linux memory space by choosing how many GBs you want, choose Ext4 Journaling File System , check Format the partition and have the mounting point as / . Select Free Space and hit plus. Create your Linux swap space, use the remaining memory and choose swap as the format. Under Device for boot loader installation select the partition where your ext4 formatted memory is. Hit Install Now . Continue the installation process. You will again see the grub-install /dev/nvme**** failed warning but don't worry. Just hit restart. You will be asked to remove the USB and then hit Enter . You will now have Ubuntu installed on your computer, but your GRUB bootloader won't be able to open it without some help. Getting into Ubuntu Restart your computer and hit the Option key when booting. Select the EFI boot drive (this is your Ubuntu installation). You should be displayed with a GRUB terminal. Follow these steps that Rohith Madhavan outlines: At the grub console, type ls grub> ls(memdisk) (hd0) (hd0,msdos) (hd1) (hd2) (hd2,gpt3) (hd2,gpt2) (hd2,gpt1) You may not get exactly the same results as this, but you’ll have some similar options. Now, find the partition which contains your user's home directory. grub> ls (hd2,gpt2)/homerohith/ Keep trying until you find it. The result from the last step has two parts: (hdX,gptY). You need to keep the hdX part, but go through all the gptY options looking for a /boot/grub directory. grub> ls (hd2,gpt2)/boot/grubunicode.pf2 [...] grub.cfg Now you want to set this as your root for further commands. grub> set root=(hd2,gpt2) The only way to boot properly was to use the UUID of the drive. To get it - grub> ls -l (hd2,gpt2) Note down the UUID . You'll have to type it manually in the next step. 
grub> linux /boot/vmlinuz .efi.signed root=UUID=〈the UUID from above〉 The GRUB console can do tab completion, so if you just type out the vmlinuz part and hit tab, then hit . and tab again, you won't have to type the whole file name. make sure that the efi.signed part is present. Now, set the initial RAM disk grub> initrd /boot/initrd〈...tab here!...〉 You should be able to boot with the command grub> boot You will now be in your Ubuntu installation as if everything was installed correctly. But every time you restart you have to repeat this process. To work around this you can do the following. If you are experiencing issues: Confirm that the UUID you received from before is located in your distribution's grub.cfg under /boot/efi/EFI/ (ex. /boot/efi/EFI/ubuntu/grub.cfg) in the line search.fs_uuid SYSTEM-PARTITION-UUID-HERE root . If not, copy-paste the line with the current value and prepend with a comment line (#), and on your new line, replace the SYSTEM-PARTITION-UUID-HERE with your UUID of your ubuntu partition. Permanently Fixing the GRUB Issue Here you want do reformat the VFAT boot loader that the Ubuntu installation made by default to HFS+. This can be done by making your own boot loader config using GRUB. The method I used was the same as what Floris van Breugel did (but on my internal SSD instead of an external SSD). Following the instructions starting from the heading Making Ubuntu bootable part 1 from this all the way to the end of Turn SIP back on (for security) . The only changes are the disk your should reference is your internal Ubuntu Boot Loader partition (you do not need an external drive with this partition. Restart your computer holding down the Option key during boot-up. You will now have two EFI boot drives. Go into the far left one. It should say you need to install an update for this to work. Click Update . During reboot hold down the Option key again and then select the middle EFI boot drive. This will take you to the GRUB screen again. Wait a minute or two and it should then take you to the Ubuntu loading screen. YOU ARE ALL DONE. You should now be able to boot Ubuntu and MacOS now. Hope this works for you. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/636709",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/412198/"
]
} |
636,810 | I'd like to run two piped commands on the results of find on some nested csv files, but I miserably fail. Here is the idea: $ find ./tmp/*/ -name '*.csv' -exec tail -n +2 {} | wc -l \; in order not to count the header row of each CSV file. The command is failing on: wc: ';': No such file or directoryfind: missing argument to `-exec' Do I really need to do a for loop in that case? E.g.: $ for f in ./tmp/*/*.csv; do tail -n +2 ${f} | wc -l; done but with that I'm losing the nice output of find which does include the filename along the count. I'm also losing the file name when using this solution: pipe commands inside find -exec? $ find ./tmp/*/ -type f -name "*.csv" -print0 | while IFS= read -d '' f; do tail -n +2 "${f}" | wc -l; done A little precision; when I speak about the filename that gets printed, it's because I'm used to the following result when calling the commands on a single file: $ tail -n +2 | wc -l ./tmp/myfile.csv 2434 ./tmp/myfile.csv I use Ubuntu 18.04. | If you write find ... -exec foo | bar \; the vertical bar is interpreted by your shell before find is invoked. The left hand of the resulting pipeline is find ... -exec foo , which obviously gives a "missing argument to `-exec'" error; the right hand of the pipeline is bar . Protecting the vertical bar from the shell, as in find ... -exec foo \| bar \; is of no help, because the first token after -exec is interpreted by find as a command and all the following tokens, up to (but not including) the ; or + terminator, are taken as arguments to that command. See Understanding the -exec option of `find` for a thorough explanation. To use a pipeline with -exec you need to invoke a shell. For instance: find ./tmp/*/ -name '*.csv' -exec sh -c ' printf "%s %s\n" "$(tail -n +2 "$1" | wc -l)" "$1"' mysh {} \; Then, to avoid risking an "argument list too long" error, ./tmp/*/ can be rewritten as find ./tmp -path './tmp/*/*' ... or, more precisely, to also exclude tmp 's hidden subdirectories (as ./tmp/*/ would likely do by default), as find ./tmp -path './tmp/.*' -prune -o -path './tmp/*/*' ... Finally, you may use the faster -exec ... {} + variant, which avoids invoking a shell for any single found file. For instance, with awk in place of tail and wc : find ./tmp -path './tmp/.*' -prune -o -path './tmp/*/*' \ -name '*.csv' -exec awk ' BEGIN { skip = 1 } FNR > skip { lc[FILENAME] = (FNR - skip) } END { for (f in lc) print lc[f],f }' {} + (Note that awk also counts those malformed lines that do not end in a newline character, while wc does not). | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/636810",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/154644/"
]
} |
637,002 | I want to launch a test, then wait a little and then start a program, using a shell script:

#!/bin/bash
sleep 3 &   # test
sleep 1     # wait some
sleep 4 &   # run program under test
fg 1        # <-- I want to put test back in "foreground", but yields "line 5: fg: no job control"

I presume I misunderstood what "foreground" means, but is there some other way to do what I want? (I tried prefixing with jobs -x and nohup , but I suspect my misunderstanding runs deeper.) | You need to just enable job control in the shell with set -m

#!/bin/bash
set -m
sleep 3 &   # test
sleep 1     # wait some
sleep 4 &   # run program under test
jobs
fg %1

quoting from bash manual: -m Monitor mode. Job control is enabled. This option is on by default for interactive shells on systems that support it (see JOB CONTROL above). Background processes run in a separate process group and a line containing their exit status is printed upon their completion. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/637002",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/73741/"
]
} |
637,132 | I have a variable named choice . I want to prompt for inputting a value for choice until it is non-empty and equal to either yes or no . In other words: while choice is empty or different from yes and no, keep prompting for choice. If I write it in Java it would be a piece of cake with: while (choice.IsEmpty() || (!choice.equals("yes") && !choice.equals("no")) But I cannot find a way to translate it to shell without using the content (var between double quotes) syntax: while [ "$choice" != "yes" && "$choice" != "no" ] Obviously it works, but, for my personal knowledge, is there another way to test that the same way as in Java? | "is there another way to test that": You could use the standard case construct: case $choice in yes|no) false ;; esac which can be used even in the condition part of a while , though the construction may end up a bit confusing: while case $choice in yes|no) false ;; esac; do "the same way as in Java?": No | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/637132",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/458684/"
]
} |
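A minimal, runnable sketch of how the case-in-while construct from the answer above fits into the full prompt loop the question describes (the prompt wording and variable name are just illustrative):

#!/bin/sh
choice=
while case $choice in yes|no) false ;; esac; do
    printf 'Please enter yes or no: '
    read -r choice
done
printf 'You entered: %s\n' "$choice"

The loop keeps prompting while choice is empty or anything other than yes or no: a case statement that matches no pattern exits with status 0, so the loop continues, and the explicit false on the yes|no branch is what finally ends it.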
637,138 | I wanted to download a bunch of numbered text files by: curl https://www.grc.com/sn/sn-{472..807}.txt > sn-{472..807}.txt but that gave me: bash: sn-{472..807}.txt: ambiguous redirect so I ended up doing a: for iEpisode in {472..807}; do curl https://www.grc.com/sn/sn-"$iEpisode.txt" > sn-"$iEpisode.txt"; if [[ $? -eq 1 ]]; then exit; fi;done The question is: Is there a way of doing a simple curl with dual brace extension? (Because it doesn't look ambiguous to me and my Google-fu led nowhere) :/ | That is ambiguous because you are trying to redirect to multiple files. Which would your shell choose? In general you will need the for loop, but with Curl you can use curl --remote-name-all https://www.grc.com/sn/sn-{472..807}.txt That option is like -O (see the manual page ), -O, --remote-name Write output to a local file named like the remote file weget. (Only the file part of the remote file is used, thepath is cut off.) But applies it to every argument. Curl also understands shell-like ranges: curl -O 'https://www.grc.com/sn/sn-[472-807].txt' | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/637138",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/90054/"
]
} |
637,319 | I am trying to extract some info about github repositories using its API, apparently jq is the way to go.I can use this command to view all the available info: curl 'https://api.github.com/repos/tmux-plugins/tpm' | jq Output: { "id": 19935788, "node_id": "MDEwOlJlcG9zaXRvcnkxOTkzNTc4OA==", "name": "tpm", "full_name": "tmux-plugins/tpm", "private": false, "owner": { "login": "tmux-plugins", "id": 8289877, "node_id": "MDEyOk9yZ2FuaXphdGlvbjgyODk4Nzc=", "avatar_url": "https://avatars.githubusercontent.com/u/8289877?v=4", "gravatar_id": "", "url": "https://api.github.com/users/tmux-plugins", "html_url": "https://github.com/tmux-plugins", "followers_url": "https://api.github.com/users/tmux-plugins/followers", "following_url": "https://api.github.com/users/tmux-plugins/following{/other_user}", "gists_url": "https://api.github.com/users/tmux-plugins/gists{/gist_id}", "starred_url": "https://api.github.com/users/tmux-plugins/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/tmux-plugins/subscriptions", "organizations_url": "https://api.github.com/users/tmux-plugins/orgs", "repos_url": "https://api.github.com/users/tmux-plugins/repos", "events_url": "https://api.github.com/users/tmux-plugins/events{/privacy}", "received_events_url": "https://api.github.com/users/tmux-plugins/received_events", "type": "Organization", "site_admin": false }, "html_url": "https://github.com/tmux-plugins/tpm", "description": "Tmux Plugin Manager", "fork": false, "url": "https://api.github.com/repos/tmux-plugins/tpm", "forks_url": "https://api.github.com/repos/tmux-plugins/tpm/forks", "keys_url": "https://api.github.com/repos/tmux-plugins/tpm/keys{/key_id}", "collaborators_url": "https://api.github.com/repos/tmux-plugins/tpm/collaborators{/collaborator}", "teams_url": "https://api.github.com/repos/tmux-plugins/tpm/teams", "hooks_url": "https://api.github.com/repos/tmux-plugins/tpm/hooks", "issue_events_url": "https://api.github.com/repos/tmux-plugins/tpm/issues/events{/number}", "events_url": "https://api.github.com/repos/tmux-plugins/tpm/events", "assignees_url": "https://api.github.com/repos/tmux-plugins/tpm/assignees{/user}", "branches_url": "https://api.github.com/repos/tmux-plugins/tpm/branches{/branch}", "tags_url": "https://api.github.com/repos/tmux-plugins/tpm/tags", "blobs_url": "https://api.github.com/repos/tmux-plugins/tpm/git/blobs{/sha}", "git_tags_url": "https://api.github.com/repos/tmux-plugins/tpm/git/tags{/sha}", "git_refs_url": "https://api.github.com/repos/tmux-plugins/tpm/git/refs{/sha}", "trees_url": "https://api.github.com/repos/tmux-plugins/tpm/git/trees{/sha}", "statuses_url": "https://api.github.com/repos/tmux-plugins/tpm/statuses/{sha}", "languages_url": "https://api.github.com/repos/tmux-plugins/tpm/languages", "stargazers_url": "https://api.github.com/repos/tmux-plugins/tpm/stargazers", "contributors_url": "https://api.github.com/repos/tmux-plugins/tpm/contributors", "subscribers_url": "https://api.github.com/repos/tmux-plugins/tpm/subscribers", "subscription_url": "https://api.github.com/repos/tmux-plugins/tpm/subscription", "commits_url": "https://api.github.com/repos/tmux-plugins/tpm/commits{/sha}", "git_commits_url": "https://api.github.com/repos/tmux-plugins/tpm/git/commits{/sha}", "comments_url": "https://api.github.com/repos/tmux-plugins/tpm/comments{/number}", "issue_comment_url": "https://api.github.com/repos/tmux-plugins/tpm/issues/comments{/number}", "contents_url": "https://api.github.com/repos/tmux-plugins/tpm/contents/{+path}", "compare_url": 
"https://api.github.com/repos/tmux-plugins/tpm/compare/{base}...{head}", "merges_url": "https://api.github.com/repos/tmux-plugins/tpm/merges", "archive_url": "https://api.github.com/repos/tmux-plugins/tpm/{archive_format}{/ref}", "downloads_url": "https://api.github.com/repos/tmux-plugins/tpm/downloads", "issues_url": "https://api.github.com/repos/tmux-plugins/tpm/issues{/number}", "pulls_url": "https://api.github.com/repos/tmux-plugins/tpm/pulls{/number}", "milestones_url": "https://api.github.com/repos/tmux-plugins/tpm/milestones{/number}", "notifications_url": "https://api.github.com/repos/tmux-plugins/tpm/notifications{?since,all,participating}", "labels_url": "https://api.github.com/repos/tmux-plugins/tpm/labels{/name}", "releases_url": "https://api.github.com/repos/tmux-plugins/tpm/releases{/id}", "deployments_url": "https://api.github.com/repos/tmux-plugins/tpm/deployments", "created_at": "2014-05-19T09:18:38Z", "updated_at": "2021-03-03T04:30:43Z", "pushed_at": "2021-02-23T11:07:55Z", "git_url": "git://github.com/tmux-plugins/tpm.git", "ssh_url": "[email protected]:tmux-plugins/tpm.git", "clone_url": "https://github.com/tmux-plugins/tpm.git", "svn_url": "https://github.com/tmux-plugins/tpm", "homepage": null, "size": 204, "stargazers_count": 6861, "watchers_count": 6861, "language": "Shell", "has_issues": true, "has_projects": true, "has_downloads": true, "has_wiki": true, "has_pages": false, "forks_count": 251, "mirror_url": null, "archived": false, "disabled": false, "open_issues_count": 79, "license": { "key": "mit", "name": "MIT License", "spdx_id": "MIT", "url": "https://api.github.com/licenses/mit", "node_id": "MDc6TGljZW5zZTEz" }, "forks": 251, "open_issues": 79, "watchers": 6861, "default_branch": "master", "temp_clone_token": null, "organization": { "login": "tmux-plugins", "id": 8289877, "node_id": "MDEyOk9yZ2FuaXphdGlvbjgyODk4Nzc=", "avatar_url": "https://avatars.githubusercontent.com/u/8289877?v=4", "gravatar_id": "", "url": "https://api.github.com/users/tmux-plugins", "html_url": "https://github.com/tmux-plugins", "followers_url": "https://api.github.com/users/tmux-plugins/followers", "following_url": "https://api.github.com/users/tmux-plugins/following{/other_user}", "gists_url": "https://api.github.com/users/tmux-plugins/gists{/gist_id}", "starred_url": "https://api.github.com/users/tmux-plugins/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/tmux-plugins/subscriptions", "organizations_url": "https://api.github.com/users/tmux-plugins/orgs", "repos_url": "https://api.github.com/users/tmux-plugins/repos", "events_url": "https://api.github.com/users/tmux-plugins/events{/privacy}", "received_events_url": "https://api.github.com/users/tmux-plugins/received_events", "type": "Organization", "site_admin": false }, "network_count": 251, "subscribers_count": 83} How would I extract just the "description"? How do I extract the "language" & "description"? I ask question 2 as I have seen examples online (when trying to find an answer for myself), that show multiple fields being extracted in one, this would be helpful to me and others finding this question.Thank you! | In all cases below, file.json is the name of a file containing your JSON document. You could obviously use jq as you've done in the question instead, and have it read from a pipe connected to the output of curl . 
Pulling the requested fields out, one by one: $ jq -r '.description' file.jsonTmux Plugin Manager$ jq -r '.language' file.jsonShell The -r option is used above (and below) to get the "raw data" rather than JSON encoded data. Getting both at once (you'd have issues telling them apart if any of them contain embedded newline characters): $ jq -r '.language, .description' file.jsonShellTmux Plugin Manager Getting them as a CSV record (will be properly quoted so that embedded commas and newlines will be parsable by a CSV parser, and embedded double quotes will be CSV encoded too): $ jq -r '[.language, .description] | @csv' file.json"Shell","Tmux Plugin Manager" Tab-delimited (embedded newlines and tabs will show up as \n and \t respectively): $ jq -r '[.language, .description] | @tsv' file.jsonShell Tmux Plugin Manager Letting jq produce shell code containing two variable assignments. The values will be properly quoted for the shell. $ jq -r '@sh "lang=\(.language)", @sh "desc=\(.description)"' file.jsonlang='Shell'desc='Tmux Plugin Manager' Getting the shell to actually evaluate these statements: $ eval "$( jq -r '@sh "lang=\(.language)", @sh "desc=\(.description)"' file.json )"$ printf 'lang is "%s" and desc is "%s"\n' "$lang" "$desc"lang is "Shell" and desc is "Tmux Plugin Manager" | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/637319",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/458544/"
]
} |
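As a small addition to the answer above, the two requested fields can also be kept together by constructing a new JSON object; jq's {language, description} shorthand copies just those keys from the input (the repository URL is the one from the question):

curl -s https://api.github.com/repos/tmux-plugins/tpm | jq '{language, description}'

With the data shown in the question, this prints an object containing only "language": "Shell" and "description": "Tmux Plugin Manager".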
637,325 | I'm running Arch Linux with GNOME 3.38 X11, and have an issue where I'm at idle (after using the computer for a while and closing everything) using around 8-9GB of RAM. I know about linxatemyram , and I don't think this is the issue, since free -m prints the following: total used free shared buff/cache availableMem: 62282 9059 29502 162 23720 52368Swap: 8191 0 8191 Indicating that indeed I am using a lot of ram when running nothing. When I used to have 16GB I would also run out of memory frequently due to this issue, so I don't think it's some form of caching, since that would back down when my memory usage goes up. Curiously the top memory usages don't add up to the amount it claims to have reserved either. Here's a paste of the results I get . I've been thinking for a while that something must be leaking, but I can't seem to find out what. EDIT: Extra outputs. These were measured soon after a restart, so aren't representative. I will rerun and post after the same situation arises. $ mount | grep tmpfsdev on /dev type devtmpfs (rw,nosuid,relatime,size=31848276k,nr_inodes=7962069,mode=755,inode64)run on /run type tmpfs (rw,nosuid,nodev,relatime,mode=755,inode64)tmpfs on /dev/shm type tmpfs (rw,nosuid,nodev,inode64)tmpfs on /sys/fs/cgroup type tmpfs (ro,nosuid,nodev,noexec,size=4096k,nr_inodes=1024,mode=755,inode64)tmpfs on /tmp type tmpfs (rw,nosuid,nodev,size=31888716k,nr_inodes=409600,inode64)tmpfs on /run/user/1000 type tmpfs (rw,nosuid,nodev,relatime,size=6377740k,nr_inodes=1594435,mode=700,uid=1000,gid=985,inode64) $ df -hFilesystem Size Used Avail Use% Mounted ondev 31G 0 31G 0% /devrun 31G 1.7M 31G 1% /run/dev/nvme0n1p3 450G 208G 219G 49% /tmpfs 31G 737M 30G 3% /dev/shmtmpfs 4.0M 0 4.0M 0% /sys/fs/cgrouptmpfs 31G 19M 31G 1% /tmptmpfs 6.1G 136K 6.1G 1% /run/user/1000 | Based on the information you've provided you indeed have tmpfs filesystems mounted at /tmp and /dev/shm which are not shown by top or other similar utilities. Please, monitor these mount points usage via df and clean up data or stop applications writing data to them. Some applications create files and delete them right away and such files still take space. They can't be seen directly via e.g. ls or df but you can discover them this way: sudo lsof -n | egrep "/tmp|/dev/shm" | grep deleted Since this is the 20th time I'm seeing this question I've gone ahead and filed bug reports against top , free and htop : https://gitlab.com/procps-ng/procps/-/issues/196 https://github.com/htop-dev/htop/issues/556 | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/637325",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/455446/"
]
} |
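A short sketch expanding on the monitoring advice in the answer above; it uses only standard options, and the paths are the usual tmpfs mount points from the question's own mount output:

# usage of tmpfs mounts only
df -h -t tmpfs
# what is actually stored under the common tmpfs paths
sudo du -sh /tmp /dev/shm /run
# open-but-deleted files (link count 0) that may still hold space
sudo lsof -nP +L1 | grep -E '/tmp|/dev/shm'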
637,334 | Variables and procedures in JavaScript are often named via the "camelCase" naming method, as: myVariable In camelCase, the first word of a name (or the whole name, if it is a single word) starts with a lowercase letter. In contrast, from my experience it is also somewhat common to uppercase the first letter of every word in a name, as: MyVariable What is the term for this naming method, which seems common in Bash? | This naming convention is called PascalCase, or Upper Camel Case or StudlyCase. Wikipedia has a list of naming conventions. Though, I haven't heard of such a convention for Bash; it seems to be more open-minded. The only convention I know of for Bash is to use capitalized words for constants. This answer talks about that. TL;DR: pick a convention and stick to it. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/637334",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/458602/"
]
} |
637,406 | If I do my find command at the command line, it works fine, but if I try it in a script, I get this error. I run the command like FILTER=" | grep -v \"test\""files=`find . -type f -regex \".*$ext\" $FILTER` if I do echo "find . -type f -regex \".*$ext\" $FILTER" it outputs find . -type f -regex ".*.cpp" | grep -v "test" which works fine at the command line. How can I get this to work in a script? I tried escaping the * too, but get the same error. I've also noticed that find . -type f -regex \".*$ext\" gives no output when run in my shell script and I'm not sure why, because just like above, if I run that at command line, I get a list of .cpp files. | When the shell reaches the "expansions" stage , control operators (such as | ) have already been identified. The result of expansions is not parsed again in search of control structures. When the command substitution in files=`find . -type f -regex \".*$ext\" $FILTER` is expanded, Bash parses it as a simple command ( find ) followed by several arguments, two of them requiring expansion. You can turn tracing on to see the actual, expanded command: $ set -x$ find . -type f -regex \".*$ext\" $FILTER+ find . -type f -regex '".*cpp"' '|' grep -v '"test"' If you compare it with $ set -x$ find . -type f -regex ".*.cpp" | grep -v "test"+ grep --color=auto -v test+ find . -type f -regex '.*.cpp' you can clearly see that, in the first case, the | is used as a one-character argument to find . To dynamically build and execute a command string you need to explicitly add a new parsing stage. eval is a way to do that: $ set -x$ files=$(eval "find . -type f -regex \".*$ext\" $FILTER")++ eval 'find . -type f -regex ".*cpp" | grep -v "test"'+++ find . -type f -regex '.*cpp'+++ grep --color=auto -v test But note that, when executing a variable as a script, it is really important to make sure you have control on the variable's content for obvious security reasons. Since eval tends to also make programs harder to read and to debug, it is advisable to only use it as a last resort. In your case, a better approach could be: filter=( -regex ".*$ext" '!' -name "*test*" )find . -type f "${filter[@]}" -exec bash -c ' # The part of your script that works with "files" goes here # "$@" holds a batch of file names ' mybash {} + Which makes use of find 's flexibility and also correctly handles file names that include newline characters — a corner case that makes saving the output of find into a variable unreliable, in general, unless you use something like mapfile -d '' files < <(find ... -print0) (assuming Bash (since version 4.4) and a find implementation that supports the non-standard -print0 ). You can read more on this in Why is looping over find's output bad practice? , also relevant in relation to piping find 's output. Again, note that the filter array's elements can cause the execution of arbitrary code (think about filter=( -exec something_evil ';' ) ), so you still need to make sure you have control on its content. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/637406",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/373197/"
]
} |
637,497 | I want to use $HOME to replace /Users/Me in my ssh config file Host MyHost HostName 192.168.1.1 IdentitiesOnly yes User me IdentityFile "/Users/Me/.ssh/id_rsa" but I got no such identity: $HOME/.ssh/id_rsa: No such file or directory | At least on my Debian 10 system, the ssh_config(5) man page says: Arguments to IdentityFile may use the tilde syntax to refer to user's home directory or the tokens described in the TOKENS section. So, instead of using $HOME , you can write the IdentityFile line either as: IdentityFile "~/.ssh/id_rsa" or as: IdentityFile "%d/.ssh/id_rsa" Environment variable syntax ( ${HOME} ) support is only mentioned with the IdentityAgent configuration item, not with IdentityFile . According to OpenSSH release notes , the environment variable syntax support was added to CertificateFile , ControlPath and IdentityFile configuration keywords in OpenSSH version 8.4 (released on 2020-09-27), and Debian 10 only has version 7.9 (with the latest security fixes backported). So at the time of this writing, unless you are using a very new Linux distribution, your OpenSSH version might be too old to support using the ${HOME} syntax in SSH configuration file. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/637497",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/458319/"
]
} |
637,572 | I want to remove duplicate lines from /etc/fstab , so I did this: awk '!NF || !seen[$0]++' /etc/fstab > /etc/fstab.update

UUID=3de0d101-fba7-4d89-b038-58fe07295d96 /grid/sdb ext4 defaults,noatime 0 0
UUID=683ed0b3-51fe-4dc4-975e-d56c0bbaf0bc /grid/sdc ext4 defaults,noatime 0 0
UUID=1cf79946-0ba6-4cd8-baca-80c0a2693de1 /grid/sdd ext4 defaults,noatime 0 0
UUID=fa9cc6e8-4df8-4330-9144-ede46b94c49e /grid/sde ext4 defaults,noatime 0 0
UUID=3de0d101-fba7-4d89-b038-58fe07295d96 /grid/sdb ext4 defaults,noatime 0 0
UUID=683ed0b3-51fe-4dc4-975e-d56c0bbaf0bc /grid/sdc ext4 defaults,noatime 0 0

But as we can see, the last two lines are the same with the first two lines, but last two lines are with spaces. Is it possible to ignore the space and remove the duplicate lines anyway? | Force the rebuild of the record with $1=$1 ! This squeezes all contiguous spaces into a single one. awk '{$1=$1};!seen[$0]++' | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/637572",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/237298/"
]
} |
637,575 | I have this systemd service: $ sudo cat /etc/systemd/system/my_service123.service [Unit] Description=my_service123 After=syslog.target [Service] ExecStart=./my_app --config main_cfg.conf Restart=on-abort WorkingDirectory=/home/user1/my_service_workspace SyslogIdentifier=my_service123 User=user1 [Install] WantedBy=multi-user.target When I run it: Failed to start my_service123.service: Unit my_service123.service has a bad unit file setting.See system logs and 'systemctl status my_service123.service' for details. But $ sudo journalctl -u my_service123No journal files were found.-- No entries -- What's wrong with it? $ sudo systemctl status my_service123● my_service123.service - ton-node_01 Loaded: bad-setting (Reason: Unit my_service123.service has a bad unit file setting.) Active: inactive (dead) | Force the rebuild of the record with $1=$1 ! This squeezes all contiguous spaces into a single one. awk '{$1=$1};!seen[$0]++' | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/637575",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/454694/"
]
} |
637,585 | This is a follow-up to an earlier question . I use this syntax in order to remove duplicate lines even if the fields are spaced differently awk '{$1=$1};!NF||!seen[$0]++' /tmp/fstab Now I want to exclude lines that start with # . So I use this syntax awk '/^#/ || "'!'"'{$1=$1};!NF||!seen[$0]++' /tmp/fstab

-bash: !: event not found

What is wrong with my syntax? | How about awk '!NF||$1~/^#/ {print; next} {$1=$1} !seen[$0]++' /tmp/fstab This will immediately print unchanged any line which is empty or whose first field starts with # and then skip execution so that any further code is bypassed. All other lines will be rebuilt and printed as long as they have not yet been encountered. The reason for checking if $1~/^#/ instead of applying the match to the entire line (i.e. simply /^#/ ) is that this way we also catch comment lines where the # is preceded by whitespace. Although the manpage for fstab mandates that in order to qualify as comment line, the first character must be a # , as @StephenKitt noted, the Linux implementation of libmount will skip leading whitespace and accept a line as comment also if the first non-whitespace character is a # . | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/637585",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/237298/"
]
} |
637,668 | I have a number of servers to SSH into, and some of them, being behind different NATs, may require an SSH tunnel. Right now I'm using a single VPS for that purpose. When that number reaches 65535 - 1023 = 64512 , and the VPS runs out of ports to attach tunnels to, do I spin up another VPS, or do I simply attach an additional IP address to the existing VPS? In other words, is a 65535 limit set per a Linux machine, or per a network interface? This answer seems to say it's per an IP address in general, and per IPv4 address specifically. So does a 5-tuple mean that introducing a new IP address will warrant a new tuple, therefore resetting the limit? And if IPv4 is the case, is it different for IPv6? | The limit on listening ports is per address, regardless of IPv4 or IPv6. The limitation comes from the port fields in the TCP and UDP packet headers, which are two bytes, so port numbers for TCP and UDP can only be in the range 0x0000 (0) to 0xFFFF (65535). When any service (including an SSH server) listens on a port, it can choose to listen on one IP address or on every IP address. So adding a new address won't necessarily help unless you configure each service to listen to one specific IP address. However, two or more services can share the same port as long as they are listening on different IP addresses. To be honest, NAT has always been a bit of a hack. The need for it has dropped with IPv6, with each machine having its own public-facing IPv6 address and a firewall limiting incoming connections to replace the NAT. The more common approach to this situation is to use a "bastion" machine... users ssh into the bastion and from there ssh into the box they want. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/637668",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/163265/"
]
} |
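To illustrate the bastion suggestion at the end of the answer above, here is a hedged sketch of a client-side ~/.ssh/config; the host names and user names are placeholders. Each connection is tunnelled through the bastion on demand, so the VPS does not need a dedicated listening port per server:

# ~/.ssh/config on the client
Host bastion
    HostName bastion.example.com
    User jumpuser

Host internal-*
    ProxyJump bastion
    User admin

ProxyJump needs OpenSSH 7.3 or newer; on older clients the equivalent is ProxyCommand ssh -W %h:%p bastion.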
637,779 | I have a file which looks like this ABC 2 3 4 7 9 4 1 2 5ABC 13 11 17 2 1 1ABC 7 9 14 5 8 2 9 9 9 7 1 2 and I want to print at the end of each "ABC" values the word "END" so the file will be like this ABC 2 3 4 7 9 4END 1 2 5ABC 13 11 17END 2 1 1ABC 7 9 14 5 8 2 9 9 9END 7 1 2 I tried a lot but I didn't solve it, so can any one help here. | awk '/^ABC/ && pre { print dpre ORS $0; pre=""; next } { if(pre) print pre; pre=dpre=$0; sub(/ {0,4}/, "END ", dpre) }END{ if(pre) print dpre }' infile first block will be executed only if a line starts with ABC string and when a temporary variable pre also was set otherwise next block will be executed. the END{...} block will be executed only once and after end of all. for the first line of course still pre variable doesn't set yet, so second block will be executed and it does following: if there was things inside pre print it first if(pre) print pre (with this we delay printing of previous line in order to check if next line starts with ABC or not, because we need to add END in front of that line) then we copy that line into say two separate variables pre and dpre (one would be untouched (later we need print it untouched) and for the another one in sub(/ {0,3}/,"END ", dpre) we are prepending the END string into dpre . Note that with {0,4} (zero or maximum 4 spaces; 4 is obtained from the length of END<SPC> ) we ensure that the END string will be always prepended as well as preventing truncating the original line value if there was no spaces at all. Below you can trace each iteration of the command for your own understanding: REPEAT Read a line; Is it start with ABC ( /^ABC/ )? no ; then do nothing and next block will be executed; go to 2 nd -Block yes ; Is pre was set? yes , then do these print content of the dpre variable then a single newline ORS and then current line itself empty variable pre="" and jump to REPEAT because of next statement tell that. no ; then do nothing and next block will be executed; go to 2 nd -Block 2 nd -Block is pre was set? yes ; do these print pre it's set in " if(pre) print pre "; update current line into both variables pre=dpre=$0 ; prepend END string for dpre . no ; do these update current line into both variables pre=dpre=$0 ; prepend END string for dpre . if END of file; print last state of the dpre variable if it was set, else jump to REPEAT . finish | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/637779",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/395867/"
]
} |
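For comparison with the answer above, here is an alternative sketch (not the original author's code) that buffers the previous line and rewrites its leading whitespace to END whenever the next line starts with ABC, and once more for the final line; it assumes a non-empty file in which every ABC line is followed by at least one data line:

awk '
    NR > 1 {
        if (/^ABC/) sub(/^[[:space:]]*/, "END ", prev)
        print prev
    }
    { prev = $0 }
    END { sub(/^[[:space:]]*/, "END ", prev); print prev }
' infile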
637,904 | I have a nice little bash script which parses a message for a regular expression and does something with the capture groups:

regex='\((Closes|Resolves):\s([0-9]+)\)'
msg='Fixed a problem (Closes: 1234), (Resolves: 5678)'
if [[ $msg =~ $regex ]] ; then
    action="${BASH_REMATCH[1]}"
    issue="${BASH_REMATCH[2]}"
    do_something $action $issue
fi

This works well for the first match, but if there are multiple matches msg , later matches are ignored. Is there a way for me to loop through each match or is it time to start thinking of python or perl ? | It's not explicitly supported by built-in bash syntax, but it can be easily achieved with a little variable substitution and some recursion.

function do_something {
  echo $1 $2
}

function handlematches {
  regex='\((Closes|Resolves):\s([0-9]+)\)'
  msg=$1

  if [[ $msg =~ $regex ]] ; then
    action="${BASH_REMATCH[1]}"
    issue="${BASH_REMATCH[2]}"
    do_something "$action" "$issue"

    # Remove the first regex match and try again
    handlematches "${msg/${BASH_REMATCH[0]}/}"
  fi
}

message='Fixed a problem (Closes: 1234), (Resolves: 5678)'
handlematches "$message"

Output:

Closes 1234
Resolves 5678 | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/637904",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/272848/"
]
} |
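A non-recursive variant of the answer above (a sketch, not the original author's code): the same idea written as a plain loop that strips each match off the front of the string before retrying. It reuses the do_something function from the answer and spells the space out literally instead of using \s, which is not portable ERE:

msg='Fixed a problem (Closes: 1234), (Resolves: 5678)'
regex='\((Closes|Resolves): ([0-9]+)\)'
while [[ $msg =~ $regex ]]; do
    do_something "${BASH_REMATCH[1]}" "${BASH_REMATCH[2]}"
    msg=${msg#*"${BASH_REMATCH[0]}"}    # drop everything up to and including this match
done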
637,909 | My girlfriend has a external hard disk with 10 years+ of photos, documents and more. A lot of these files originate from her old iPhone 5 and her MacBook. The hard disk itself is NTFS Format. Since the disk is so old, it turns into a hazard of data loss (what an irony). As we tried to upload all the files into OneDrive to store them safely, we got 1,000s of errors because of invalid file names. I realized that many files started with ._ , e.g. ./pic/92 win new/iphone/._IMG_1604.JPG . I don't understand macOS and why files should be named like that, but for sure you can never get them into OneDrive like that. So I decided to hook it to my Raspberry Pi and rename all files with the wrong characters from the command line. After listing the nearly 10,000 files, I ran the following over the whole hard disk. find . -name "._*" | sed -e "p;s/\._//" | xargs -d '\n' -n2 mv Furthermore, I removed some leading whitespace in filenames with zmv. I tried the command in a test environment first and it looked fine. But I didn't check the file size. After my girlfriend connected the hard disk back onto her Mac, all renamed files show a file size of 4KB (empty)! I screwed it up and I don't know how. I assume the data is still there, but I somehow screwed the filesystem. Does anybody understand what I did wrong? More importantly, do you see a chance to recover the files? I would appreciate any advice. | As mentioned by terdon , when writing to a "foreign" filesystem, Mac OS uses two filenames for each file. One with the actual contents and a second one with metadata that would have been stored in the resource fork. You renamed the metadata filename to the content filename, thus deleting the content file in the process. However, I slightly disagree with his that the originals were overwritten. The data should be in the disk (I hope it's not a ssd), but you no longer have a filename to them, and the clusters will be marked as free space. If the files were uploaded into OneDrive, you already have a copy there. An advantage here is that you have the full list of filenames that were originally in the disk. If you don't, continue reading. First of all, before doing any further recovery on the disk, you should make a copy and work with that, e.g. with dd . This way, you avoid making things worse during a recovery attempt, since you would be working on a copy of the data. Second step would be to attempt recovery with a tool like ntfsundelete , trying to recover the deleted entries. Third, since these files were presumably copied in full from a different system, I expect the files wouldn't (generally) have been fragmented, but using sequential blocks, so it will probably be possible to recover most of them through file carving. In that case, a tool like photorec should be able to find most of the photos, even with no access to the filesystem metadata. Finally, remember to back up what you might recover! Good luck | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/637909",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/459450/"
]
} |
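A hedged sketch of the "work on a copy" step recommended in the answer above. The device name and target paths are examples only; check the real device with lsblk first and write the image to a different disk with enough free space:

# image the whole drive (GNU dd)
sudo dd if=/dev/sdX of=/mnt/backup/disk.img bs=4M conv=noerror,sync status=progress
# or, if the drive already has read errors, GNU ddrescue retries and keeps a map file
sudo ddrescue -d /dev/sdX /mnt/backup/disk.img /mnt/backup/disk.map
# then point the recovery tools at the image rather than the original disk
sudo photorec /mnt/backup/disk.img
sudo ntfsundelete --scan /mnt/backup/disk.img

photorec accepts an image file directly; ntfsundelete should as well, since the ntfs-3g tools generally work on image files, but that is an assumption worth checking before relying on it.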
638,058 | I have a file with multiple lines/rows, and each line contains a variable amount of columns:

Name1 String111 String112
Name2 String121 String122 String123
Name3 String131 String132 String133 String134

And so on and so forth (no pattern as to what line has how many entries). I would like to add the name in the first column to the beginning of every column in that line/row such that I end up with:

Name1 Name1String111 Name1String112
Name2 Name2String121 Name2String122 Name2String123
Name3 Name3String131 Name3String132 Name3String133 Name3String134

We can start it simple and get more complicated: How to add a string such as "Test" to the beginning of every column? How to add the value in column 1 to every column in that row, including column 1? How to add the value in column 1 to every column in that row, not including column 1? My best guesses: I do not know how to call "every column" and I do not know how to make the command access the currently column so I can only add a string or the value in column 1 to a single other column: awk -F'\t' -vOFS='\t' '{ !$1 = "hello" $2}' awk -F'\t' -vOFS='\t' '{ !$1 = $1 $2}' Is there a good resource on where I can learn this syntax? | Just iterate over all fields starting with the second, and concatenate the first field to whatever you already have:

$ awk '{ for(i=2;i<=NF;i++){ $i = $1$i }}1' file
Name1 Name1String111 Name1String112
Name2 Name2String121 Name2String122 Name2String123
Name3 Name3String131 Name3String132 Name3String133 Name3String134

The 1 in the end is awk shorthand for "print the current line". You could write the same thing like this:

$ awk '{ for(i=2;i<=NF;i++){ $i = $1$i }; print}' file
Name1 Name1String111 Name1String112
Name2 Name2String121 Name2String122 Name2String123
Name3 Name3String131 Name3String132 Name3String133 Name3String134

The basic idea above can be trivially expanded to match all of your examples. NF is the special awk variable that holds the number of fields; it will always be set to however many fields are present in the current line. Then, awk allows you to refer to specific fields using a variable. So if you set i=5 , then $i is equivalent to $5 . This then lets you iterate over all fields using the for(i=2;i<=NF;i++) { } format which sets i to all numbers from 2 to the number of fields on this line. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/638058",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/459591/"
]
} |
638,090 | I would like to modify $2 with the following code: cat file | awk '{printf "%.15f\n", $2 = 1-$2}' > new_file The code does its job, it prints 15 decimals and calculates 1-n. However, it does not print the other columns. If I try to do that with the following code it does print it, but to a different line: cat file | awk '{printf "%.15f\n", $2 = 1-$2; print $0}' > new_file

My original file:

752566 0.883928571428571 1 rs3094315 0
752721 0.765873015873016 1 rs3131972 0
752894 0.883928571428571 1 rs3131971 0
753541 0.268849206349206 1 rs2073813 0

Output:

752566 0.116071 1 rs3094315 0
0.116071428571429

Desired output (the order of the columns does not matter):

752566 1 rs3094315 0 0.116071428571429 | You can use sprintf to print the new value to a formatted string, and assign that as the new value of $2 : $2 = sprintf("%.15f",1-$2) Then it's just a matter of printing the whole (modified) record:

$ awk '{$2 = sprintf("%.15f",1-$2); print}' file
752566 0.116071428571429 1 rs3094315 0
752721 0.234126984126984 1 rs3131972 0
752894 0.116071428571429 1 rs3131971 0
753541 0.731150793650794 1 rs2073813 0 | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/638090",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/457434/"
]
} |
638,118 | I need to perform search and replace using sed.starting from this sample text for example : <TextView android:textSize="20.0sp" android:textStyle="bold" android:textColor="@android:color/white" android:layout_gravity="center" android:textColor="@android:color/white" i wanna replace all occurrences of android:textColor="@android:color/white" with android:textColor="#ffff5d" I spent over 3 hours (yes so frustrating) without successthe closer I get is sed -i "s/"$android:textColor=\"@android:color/white\""\|"$android:textColor=\"#ff4000\""/g" path to file.xml but it's far from right.as results is androidwhite"|:textColor="#ff4000"/white" what am I doing wrong? | You can use sprintf to print the new value to a formatted string, and assign that as the new value of $2 : $2 = sprintf("%.15f",1-$2) Then it's just a matter of printing the whole (modified) record: $ awk '{$2 = sprintf("%.15f",1-$2); print}' file752566 0.116071428571429 1 rs3094315 0752721 0.234126984126984 1 rs3131972 0752894 0.116071428571429 1 rs3131971 0753541 0.731150793650794 1 rs2073813 0 | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/638118",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/437755/"
]
} |
638,160 | In rpm-based systems, we can easily see if there is a signature associated with an rpm file: rpm -qpi <rpm-file.rpm> | grep -i signature For .deb files, we can see the package information but it doesn't include the information of whether a signature is associated or not: dpkg-deb -I uma-18feb-latest.deb Is there a way in Ubuntu to see the signature without using the following command which actually verifies the signature? dpkg-sig --verify <deb-file.deb> | dpkg-sig --list <deb-file.deb> will list any items in the file which look like a signature, without verifying the file. This will list the role of any signature in the file; e.g.

$ dpkg-sig -l vuescan_9.7.50-1_amd64.deb
Processing vuescan_9.7.50-1_amd64.deb...
builder
$ dpkg-sig -l zstd_1.4.8+dfsg-2.1_i386.deb
Processing zstd_1.4.8+dfsg-2.1_i386.deb...
$

The first file has a signature with the “builder” role; the second file isn’t signed. Note that it’s unusual for individual .deb files to be signed (unlike RPMs). Debian packages’ authenticity relies on the repository’s authenticity; see How is the authenticity of Debian packages guaranteed? | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/638160",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/459683/"
]
} |
638,188 | I have a .txt file with multiple lines that gives amino acid and residue data. The data looks like this: ARG262-Side ASP368-Side 140,83%ARG95-Side GLU107-Side 103,73%ARG474-Side VAL468-Main 94,93%PHE169-Main ALA190-Main 94,63%THR205-Side ASP203-Side 94,07%ILE299-Main LYS249-Main 94%LEU354-Main LYS365-Main 93,6%ARG346-Side GLU263-Side 93,57%LEU301-Main ALA247-Main 93,43%ALA190-Main PHE169-Main 93,37%SER252-Side ASP296-Side 93,1%TYR424-Side ASN446-Main 93% I can roughly say that the numbers indicate residues and the letters indicate aminoacids. So, both in the first and second field of each line, the part before the - consists of an aminoacid identifier and a residue value. I want to print only lines where the see residue value lies in a certain range, regardless of amino acid, and regardless of whether the first or second field matches the criterion. For example, from the above input file, I want to extract data that contains only residues between 300-425 . In this case, my output should look like this: ARG262-Side ASP368-Side 140,83%LEU354-Main LYS365-Main 93,6%ARG346-Side GLU263-Side 93,57%LEU301-Main ALA247-Main 93,43%TYR424-Side ASN446-Main 93% I tried using the grep command for this, but I wasn't very successful. Is there a command I can use other than grep ? | Tools that mainly deal with regular expressions are notoriously bad at dealing with numbers. In this case, I would suggest using something like awk instead of grep : $ awk '{ r1 = substr($1,4,3); r2 = substr($2,4,3) } (r1 >= 300 && r1 <= 425) || (r2 >= 300 && r2 <= 425)' fileARG262-Side ASP368-Side 140,83%LEU354-Main LYS365-Main 93,6%ARG346-Side GLU263-Side 93,57%LEU301-Main ALA247-Main 93,43%TYR424-Side ASN446-Main 93% The awk code extracts the tree characters, starting at offset four, from the first two whitespace-delimited fields on each line, and calls these r1 and r2 . I'm using substr() to extract the numbers at fixed positions in the data of the fields, but you could also just delete all non-digits, if you're certain that the only digits are the ones that you're interested in. You would do that with r1 = $1; gsub("[^[:digit:]]", "", r1) and similarly for r2 using $2 . If the condition at the end is true, the current line would be printed. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/638188",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/459703/"
]
} |
638,211 | I have a requirement to make a function in a zsh shell script, that is called by command substitution, communicate state with subsequent calls to the same command substitution. Something like C's static variables in functions (very crudely speaking). To do this I tried 2 approaches - one using coprocessors, and one use named pipes. The named pipes approach, I can't get to work - which is frustrating because I think it will solve the only problem I have with coprocessors - that is, if I enter into a new zsh shell from the terminal, I don't seem to be able to see the coproc of the parent zsh session. I've create simplified scripts to illustrate the issue below - if you're curious about what I'm trying to do - it's adding a new stateful component to the bullet-train zsh theme, that will be called by the command substituted build_prompt() function here: https://github.com/caiogondim/bullet-train.zsh/blob/d60f62c34b3d9253292eb8be81fb46fa65d8f048/bullet-train.zsh-theme#L692 Script 1 - Coprocessors #!/usr/bin/env zshcoproc catdisownprint 'Hello World!' >&pcall_me_from_cmd_subst() { read get_contents <&p print "Retrieved: $get_contents" print 'Hello Response!' >&p print 'Response Sent!'}# Run this firstcall_me_from_cmd_subst# Then comment out the above call# And run this instead#print "$(call_me_from_cmd_subst)"# Hello Response!read finally <&pecho $finally Script 2 - Named Pipes #!/usr/bin/env zshrm -rf /tmp/foo.barmkfifo /tmp/foo.barprint 'Hello World!' > /tmp/foo.bar &call_me_from_cmd_subst() { get_contents=$(cat /tmp/foo.bar) print "Retrieved: $get_contents" print 'Hello Response!' > /tmp/foo.bar &! print 'Response Sent!'}# Run this firstcall_me_from_cmd_subst# Then comment out the above call# And run this instead#print "$(call_me_from_cmd_subst)"# Hello Response!cat /tmp/foo.bar In their initial forms they both produce exactly the same output: $ ./named-pipe.zshRetrieved: Hello World!Response Sent!Hello Response!$ ./coproc.zshRetrieved: Hello World!Response Sent!Hello Response! Now if I switch the coproc script to call using the command substitution nothing changes: # Run this first#call_me_from_cmd_subst# Then comment out the above call# And run this insteadprint "$(call_me_from_cmd_subst)" That is reading and writing to the coprocess from the subprocess created by command substituion causes no issue. I was a little suprised by this - but it's good news! But if I make the same change in the named piped examples the script blocks - with no output. To try to guage why I ran it with zsh -x , giving: +named-pipe.zsh:3> rm -rf /tmp/foo.bar+named-pipe.zsh:4> mkfifo /tmp/foo.bar+named-pipe.zsh:15> call_me_from_cmd_subst+call_me_from_cmd_subst:1> get_contents=+call_me_from_cmd_subst:1> cat /tmp/foo.bar+named-pipe.zsh:5> print 'Hello World!'+call_me_from_cmd_subst:1> get_contents='Hello World!'+call_me_from_cmd_subst:2> print 'Retrieved: Hello World!'+call_me_from_cmd_subst:4> print 'Response Sent!' It looks to me like the subprocess created by the command substitution won't terminate whilst the following line hasn't terminated (I've played with using & , &! , and disown here with no change in result). print 'Hello Response!' > /tmp/foo.bar &! To demonstrate this I can manually fire-in a cat to read the response: $ cat /tmp/foo.barHello Response! The script now waits at the final cat command as there is nothing in the pipe to read. My questions are: Is it possible to construct the named pipe to behave exactly like the coprocess in the presence of a command substitution? 
Can you explain why a coprocess can demonstrably be read and written to from a subprocess, but if I manually create a subshell (by typing zsh ) into the console, I can no longer access it (in fact I can create a new coproc that will operate independantly of its parent and exit, and continue using the parent's!). If 1 is possible, I assume named pipes will have no such complicates as in 2 because the named pipe is not tied to a particular shell process? To explain what I mean in 2 and 3: $ coproc cat[1] 24516$ print -p test$ read -eptest$ print -p test_parent$ zsh$ print -p test_childprint: -p: no coprocess$ coproc cat[1] 28424$ disown$ print -p test_child$ read -eptest_child$ exit$ read -eptest_parent I can't see the coprocess from inside the child zsh, yet I can see it from inside a command substitution subprocess? Finally I'm using Ubuntu 18.04: $ zsh --versionzsh 5.4.2 (x86_64-ubuntu-linux-gnu) | The reason your pipe-based script doesn't work isn't some peculiarity of zsh. It's due to the way shell command substitutions, shell redirections and pipes work. Here's the script without the superfluous parts. mkfifo /tmp/foo.barecho 'Hello World!' > /tmp/foo.bar &call_me_from_cmd_subst() { echo 'Hello Response!' > /tmp/foo.bar & echo 'Response Sent!'}echo "$(call_me_from_cmd_subst)"cat /tmp/foo.bar The command substitution $(call_me_from_cmd_subst) creates an anonymous pipe connecting the output of the subshell running the function to the original shell process. The original process reads from that pipe. The child process creates a grandchild process to run echo 'Hello Response!' > /tmp/foo.bar . Both processes start out with the same open files, including the anonymous pipe. The grandchild performs the redirection > /tmp/foo.bar . This blocks because nothing is reading from the named pipe /tmp/foo.bar . Redirection is a two-step process (in fact three-step but the third doesn't matter here), because when you open a file, you don't get to choose its file descriptor. The > operator wants to redirect standard output, i.e. it wants to connect a specific file to file descriptor 1. This takes three system calls: Call fd = open("/tmp/foo.bar", O_RDWR) to open the file. The file will be opened on some file descriptor fd that the process is not currently using. This is the step that blocks until something starts reading from the named pipe /tmp/foo.bar : opening a named pipe blocks if nobody is listening. Call dup2(fd, 1) to open the file on the desired file descriptor in addition to the one the kernel chose. If there's anything open on the new descriptor (1), which there is (the anonymous pipe used for the command substitution), it is closed at this point. Call close(fd) , keeping the redirection target only on the desired file descriptor. Meanwhile, the child prints Reponse Sent! and terminates. The original shell process is still reading from the pipe. Since the pipe is still open for writing in the grandchild, the original shell process keeps waiting. To fix this deadlock, ensure that the grandchild does not keep the pipe open any longer than it has to. For example: call_me_from_cmd_subst() { { exec >&-; /bin/echo 'Hello Response!' > /tmp/foo.bar; } & echo 'Response Sent!'} or call_me_from_cmd_subst() { { echo 'Hello Response!' > /tmp/foo.bar; } >/dev/null & echo 'Response Sent!'} or any number of variations on this theme. 
You don't have this problem with a coprocess because it doesn't involve a named pipe, so one of the halves of the deadlock isn't blocked: >/tmp/foo.bar blocks when it opens a named pipe, but >&p doesn't block since it's just redirecting an already-open file descriptor. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/638211",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/286210/"
]
} |
638,221 | Not sure if it is OK to share the website whose source I tried to get, but I think it is necessary for a better explanation, and I apologize in advance if it's not. The command: curl -k -L -s https://www.mi.com The output was binary data for some reason, and I got the following error:

Warning: Binary output can mess up your terminal. Use "--output -" to tell
Warning: curl to output it to your terminal anyway, or consider "--output
Warning: <FILE>" to save to a file.

How can I read the page HTML source? Thanks! | The returned data is compressed; you can instruct curl to handle the decompression directly by adding the --compressed option: curl -k -L -s --compressed https://www.mi.com | {
"score": 7,
"source": [
"https://unix.stackexchange.com/questions/638221",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/417850/"
]
} |
638,244 | Using the command $ date +%j gives an output of the day of the year (001-366).However I need to use the day of the current year as an input format i.e. $ date --date='066 day' +%F Which for year 2021 I would expect 2021-03-07 as the output. Instead I get 2021-05-13 . Does anyone know what is going on and if there is a way to get what I want using date . | To get the 66th day of the year with GNU date : $ date --date='jan 1 + 65 days' +%F2021-03-07 or $ date --date='dec 31 last year + 66 days' +%F2021-03-07 With date --date='066 day' +%F you get the date 66 days from today. On the 8th of March, this happens to be the 13th of May. The above uses GNU date . On OpenBSD (not that you asked, but anyway), you could do something like the following: $ date -u -r "$(( $(date -ju -f %m%d 0101 +%s) + 65*86400 ))" +%F2021-03-07 which would be somewhat similar to the first GNU date command above in that it adds some time from January 1st. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/638244",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/347238/"
]
} |
638,248 | TL; DR: is there a zsh equivalent of Ksh/Bash's "${!varnamepfx@}" expansion? So that, for example, if I have the following variables set : [...]foo='random value'bar=$'amazing\n value'baz='that other value'[...] then by requesting printf -- %s\\n "${!ba@}" in Bash I get: barbaz I've been perusing the zsh manual but haven't been able to find anything direct like the above syntax. The best I could resort to has been the following nested-expansion: (for the example above) printf -- %s\\n "${(M)${${(f)$(set)}[@]%%=*}[@]:#ba*}" It seems to be doing the job reliably (at least on MacOS Catalina's zsh v5.3) but looks quite convoluted and I also wonder whether the $(set) Command Substitution in there really spawns a process or is instead optimized by zsh . Admittedly, I have so far ruled out (and thus not looked into) getting that job done through the Completion System as it would seem a bit of an overkill for such a simple task. | To get the 66th day of the year with GNU date : $ date --date='jan 1 + 65 days' +%F2021-03-07 or $ date --date='dec 31 last year + 66 days' +%F2021-03-07 With date --date='066 day' +%F you get the date 66 days from today. On the 8th of March, this happens to be the 13th of May. The above uses GNU date . On OpenBSD (not that you asked, but anyway), you could do something like the following: $ date -u -r "$(( $(date -ju -f %m%d 0101 +%s) + 65*86400 ))" +%F2021-03-07 which would be somewhat similar to the first GNU date command above in that it adds some time from January 1st. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/638248",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/342404/"
]
} |
638,335 | I have a question concerning the find command in Linux. In all the articles I've found online it says that attribute -size -10M , for example, returns files that are less than 10 MB in size. But when I tried to test this, it seems that -size -10M returns files that are less than or equal 9 MB in size. If I do find . -type f -size -1M the find command returns only empty files (the unit is irrelevant, it can be -1G, -1k...). find . -type f -size -2M returns files <= 1M in size, etc. The man page says: Bear in mind that the size is rounded up to the next unit. Therefore -size -1M is not equivalent to -size -1048576c. The former only matches empty files, the latter matches files from 0 to 1,048,575 bytes. Ok, so I guess -1M is rounded to 0M, -2M to -1M and so on... ? But then find . -type f -size 1M returns files <= 1M (i.e. 100K and 512K files, but not empty files), while I would expect it to return files that are exactly 1M in size. find . -type f -size 2M returns files > 1M and <= 2M, etc. Is this all normal or am I doing something wrong and what's the exact behavior of the -size parameter? | The GNU find man page says as follows — and this appears specific to GNU find, other implementations may differ, see below: The + and - prefixes signify greater than and less than, as usual; i.e., an exact size of n units does not match. Bear in mind that the size is rounded up to the next unit. Therefore -size -1M is not equivalent to -size -1048576c . The former only matches empty files, the latter matches files from 0 to 1,048,575 bytes. Question: Ok, so I guess -1M is rounded to 0M, -2M to -1M and so on... ? No. It's not the limit in the -size condition that's rounded, but the file size itself. Take a file of 1234 bytes and a -size -1M directive. The file size is rounded up the nearest unit mentioned in the directive, here, MB's. 1234 -> 1 MB. That doesn't match the condition, since -size -1M demands less than 1 MB (after this rounding). So, indeed, -size -1 x for any x , returns only empty files. Similarly, -size 1M would match the above file, since after rounding, it's exactly 1 MB in size. On the other hand, -size 1k would not, since it rounds to 2 kB. Note that the - or + in front of the number in the condition is irrelevant for the rounding behaviour. It may be useful to just always specify the sizes in bytes, since that way there's no rounding to stumble on. -size -$((1024*1024))c will reliably find files that are strictly less than 1 MB (or 1 MiB, if you will) in size. If you want a range, you can use e.g. ( -size +$((512*1024-1))c -size -$((1024*1024+1))c ) for files within [512 kB, 1024 kB]. Another question on this: Why does `find -size -1G` not find any files? Gilles mentions in that linked question the fact that POSIX only specifies -size N as meaning size in 512-byte blocks (rounded as above: "the file size in bytes, divided by 512 and rounded up to the next integer"), and -size N c as meaning the size in bytes. Both with the optional plus or minus. The others are left unspecified, and not all find implementations recognize other prefixes, or round like GNU find does. I tested with Busybox and the *BSD find on my Mac, and it seems they treatconditions with size specifiers in a way that feels more sensible, i.e. -size -1k matches files from 0 to 1023 bytes, the same as -size -1024c , and similarly for -size -1M == -size -1024k (Busybox only has c , b and k ). 
Then again, Busybox doesn't seem to do the rounding even for sizes specified in blocks, against what the POSIX text seems to say it should. So, YMMV and again, maybe better to stick with sizes in bytes. Note that there's a similar issue with the -atime , -mtime and -ctime conditions: -atime n File was last accessed n*24 hours ago. When find figures out how many 24-hour periods ago the file was last accessed, any fractional part is ignored, so to match -atime +1 , a file has to have been accessed at least two days ago. And similarly, it may be easier to just use -amin +$((24*60-1)) to find files that have been last accessed at least a full 24 h ago. (Up to rounding to a minute, which you can't get rid of.) See also: Why does find -mtime +1 only return files older than 2 days? Is this all normal or am I doing something wrong and what's the exact behavior of the -size parameter? It's "normal" as far as the behaviour of GNU find is concerned, but I wouldn't call it exactly sensible. You're not wrong to be confused, it's find that is confusing. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/638335",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/459820/"
]
} |
638,394 | I would like to authenticate to neomutt using keepassxc. I could not find a way to send password to neomutt's stdin. How can i do that ? I imagine something like this: keepassxc-cli exportpass mydatabase.kdbx [email protected] end output would be the password it self. How can I achieve this ? EDIT: I have found out out keepassxc-cli show Database.kdbx accounts.google.com . But it does not show password. Instead it returns PROTECTED . Thank you for help | The solution is to use the -s ( --show-protected ) and -a ( --attributes ) flags as follows: keepassxc-cli show -sa password database entry -s will display the password instead of PROTECTED , and -a password will output only the password. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/638394",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/429595/"
]
} |
638,739 | What I am doing: XZ_OPT='-T0 -9 -vv' tar -vvcJf ~/backup.tar.xz ...FILES I doubt that tar really passes the given options, so I have tried these things: I used -vv in XZ_OPT , but there is no message from xz in the output, nor with --verbose . I also used ps aux | grep xz to see if tar would spawn another process for xz , but I didn't see tar create any process. Questions Does the XZ_OPT environment variable really work? How do I verify it? Why can't I find xz processes during tar execution? Does tar really spawn a process to compress files? Environment $ xz --versionxz (XZ Utils) 5.2.5liblzma 5.2.5$ tar --versionbsdtar 3.3.2 - libarchive 3.3.2 zlib/1.2.11 liblzma/5.0.5 bz2lib/1.0.6 | Does the XZ_OPT environment variable really work? How do I verify it? Pass something invalid to it: % XZ_OPT='--this-wont-work' tar -cJf foo.tar.xz fooxz: unrecognized option '--this-wont-work'xz: Try `xz --help' for more information.tar: foo.tar.xz: Cannot write: Broken pipetar: Child returned status 1tar: Error is not recoverable: exiting now Why can't I find xz processes during tar execution? Does tar really spawn a process to compress files? From the output above, it looks like it does. Does your archive take long enough to create for the process to last? ps aux | grep xz and pgrep -fa xz both show xz processes for me. In all likelihood, tar won't show output from the programs it calls unless they fail. Otherwise, they could add uncontrolled noise to the output which wasn't asked for from tar itself.
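Note that your tar --version output shows bsdtar/libarchive rather than GNU tar; libarchive can do xz compression through its built-in liblzma support instead of spawning an external xz process, in which case XZ_OPT may simply never be consulted. A workaround that honours the options regardless of which tar you have (a sketch; adjust the file list and archive path to your case) is to run the compressor explicitly:
tar -cvf - ...FILES | xz -T0 -9 -vv > ~/backup.tar.xz
Here xz is guaranteed to run as its own process, so the -vv progress output appears and -T0 multi-threading applies. | {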
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/638739",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/223471/"
]
} |
638,740 | Suddenly windows is missing from bootmenu maybe after an update, or maybe after a sudden shutdown during windows boot process. os-prober output: /dev/sdb1@/EFI/Microsoft/Boot/bootmgfw.efi:Windows Boot Manager:Windows:efi/dev/sdb3:Ubuntu 18.04.2 LTS (18.04):Ubuntu:linux efibootmgr -v output: BootCurrent: 0003Timeout: 0 secondsBootOrder: 0001,0008,0000,0007,0009Boot0000* Windows Boot Manager HD(1,GPT,93828d50-bca4-01d4-a842-c149525eea00,0x800,0x145000)/File(\EFI\Microsoft\Boot\bootmgfw.efi)WINDOWS.........x...B.C.D.O.B.J.E.C.T.=.{.9.d.e.a.8.6.2.c.-.5.c.d.d.-.4.e.7.0.-.a.c.c.1.-.f.3.2.b.3.4.4.d.4.7.9.5.}...3................Boot0001* manjaro HD(1,GPT,65c82838-e33c-4e92-9be0-c427de042756,0x800,0x145000)/File(\EFI\manjaro\grubx64.efi)Boot0007* UEFI: WDC WD10SPZX-75Z10T1, Partition 1 HD(1,GPT,93828d50-bca4-01d4-a842-c149525eea00,0x800,0x145000)/File(EFI\boot\bootx64.efi)..BOBoot0008* ubuntu HD(1,GPT,93828d50-bca4-01d4-a842-c149525eea00,0x800,0x145000)/File(\EFI\ubuntu\shimx64.efi)Boot0009* UEFI: Micron 1100 SATA 256GB, Partition 1 HD(1,GPT,65c82838-e33c-4e92-9be0-c427de042756,0x800,0x145000)/File(EFI\Microsoft\Boot\bootmgfw.efi)..BO lsblk output: NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINTsda 8:0 0 931.5G 0 disk ├─sda1 8:1 0 650M 0 part ├─sda2 8:2 0 651G 0 part └─sda3 8:3 0 279.9G 0 part /run/media/user/lincomsdb 8:16 0 238.5G 0 disk ├─sdb1 8:17 0 650M 0 part /boot/efi├─sdb2 8:18 0 70.2G 0 part /run/media/user/6A5E35815E35475B├─sdb3 8:19 0 27.9G 0 part /run/media/user/43f98f19-cd98-403a-96bd-6bac85├─sdb4 8:20 0 51G 0 part /├─sdb5 8:21 0 33.3G 0 part /run/media/user/vms└─sdb6 8:22 0 55.5G 0 part /home I used the following command to reinstall grub: sudo grub-install --target=x86_64-efi --efi-directory=/boot/efi --bootloader-id=manjaro --rechecksudo update-grub But it still did not find the Windows (or the Ubuntu). All the operating systems were installed in UEFI mode and using the same efi partition for boot. ~/# uname -r5.4.101-1-MANJARO~/$ grub-install --versiongrub-install (GRUB) 2.04~19~manjaro | Does XZ_OPT environment really work? How to verify it? Pass something invalid to it: % XZ_OPT='--this-wont-work' tar -cJf foo.tar.xz fooxz: unrecognized option '--this-wont-work'xz: Try `xz --help' for more information.tar: foo.tar.xz: Cannot write: Broken pipetar: Child returned status 1tar: Error is not recoverable: exiting now Why can't I find xz processes during tar execution? Does tar really spawn process to compress files? From the output above, it looks like it does. Does your archive take long enough to create for the process to last? ps aux | grep xz and pgrep -fa xz both show xz processes for me. In all likelihood, tar won't show output from the programs it calls unless they fail. Otherwise, they could add uncontrolled noise to the output which wasn't asked for from tar itself. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/638740",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/108534/"
]
} |
638,746 | I have a JSON file that has this content: { "Message": { "greeting": "hello" }}{ "Message": { "greeting": "Bonjour" }}{ "Message": { "greeting": "Konnichiwa" }} I would like to extract only the first Message , the one that has "greeting": "hello" . I can't seem to use an index on it: cat "$file" | jq -c "." returns all 3 messages. How can I extract only the first message, or process them one by one? | Your input file contains several JSON objects. Use -s to read them all into a single array, otherwise jq processes them one by one. Then, you can just print the first one by specifying its index: jq -cs '.[0]'
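For the "one by one" part you do not need -s at all, since jq already processes each object in turn; you can read them in a shell loop (a rough sketch; replace the echo with whatever processing you need):
jq -cs '.[0]' "$file"                         # only the first object
jq -c '.' "$file" | while IFS= read -r obj; do
    echo "got: $obj"
done
With -c each object comes out compacted on its own line, so the loop sees exactly one JSON object per iteration. | {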
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/638746",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/460211/"
]
} |
638,775 | I need help with bash expansion. I want to retrieve array values, GNU bash 5.1.0. The array name shall be a variable. "Just" referencing a variable in bash. I have an array named "armin" and its name is in variable $gral (works fine): gral="armin" Values are assigned: declare -a ${gral}[1]="milk"declare -a ${gral}[2]="cow"declare ${gral}[7]="budgie"declare ${gral}[9]="pla9ne" fine. Array exists: $ echo ${armin[@]}milk cow budgie pla9ne Array index exists: $echo ${!armin[@]}1 2 7 9 Array and index are fine. I want to retrieve the array by referencing its name as a variable , not manually. Have plenty of them ... The variable was set and used before: $ echo $gralarmin ## name of our bash array fine - so far. Just to show the difference NOT using a variable: echo ${armin[@]}milk cow budgie pla9ne Now attempts to reference a variable (gral) to call the name (armin): $ echo ${$gral[@]}-bash: ${$gral[@]}: wrong substitution.$echo ${"$gral"[@]}-bash: ${"$gral"[@]}: wrong substitution.echo ${"gral"[@]}-bash: ${"gral"[@]}: wrong substitution.echo ${${gral}[@]}-bash: ${${gral}[@]}: wrong substitution. all fail.Some attempts with "eval" as well. Using associative (declare -A) makes no difference. Rem.: Index works fine this way, no issue. Name is the issue. I think I am missing something. Maybe the answer was described before, I found a lot of interesting stuff about variables in arrays but did not recognize an answer to my challenge. Can you please help me find the term to retrieve the array by referencing its name as a variable ? | Use namerefs (in Bash >= 4.3): $ armin=(foo bar doo)$ declare -n gral=armin # 'gral' references array 'armin' $ gral[123]=quux # same as 'armin[123]=quux'$ echo "${gral[@]}"foo bar doo quux$ echo "${gral[1]}"bar$ echo "${!gral[@]}" # listing the indexes works too0 1 2 123 See also: Does bash provide support for using pointers? | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/638775",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/226127/"
]
} |
638,793 | I've read this . But I am trying to achieve something slightly different. I have a directory with many sub-directories. I want to create zip files with those sub-directories, but instead of creating a separate zip file for each sub-directory, I would like to group them -- lets say 10 sub-directories per zip file. EDIT: all sub-directories are in one-depth! Many thanks. | Use namerefs (in Bash >= 4.3): $ armin=(foo bar doo)$ declare -n gral=armin # 'gral' references array 'armin' $ gral[123]=quux # same as 'armin[123]=quux'$ echo "${gral[@]}"foo bar doo quux$ echo "${gral[1]}"bar$ echo "${!gral[@]}" # listing the indexes works too0 1 2 123 See also: Does bash provide support for using pointers? | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/638793",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/460263/"
]
} |
639,105 | I am having an issue where I have a parent folder that has full permissions. I can create a new folder and that folder also has full permissions. However, when I copy a folder over to this parent directory and try to create a new directory inside the copied directory, it loses all permissions. Is there a way to retain the permissions of the copied folders? | Yes. When copying using cp , the -p option preserves permissions. https://man7.org/linux/man-pages/man1/cp.1.html -p same as --preserve=mode,ownership,timestamps --preserve[=ATTR_LIST] preserve the specified attributes (default: mode,ownership,timestamps), if possible additional attributes: context, links, xattr, all
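For copying an entire directory tree with its permissions, combine -p with -r (a small sketch; source/ and /path/to/parent/ stand in for your own paths):
cp -rp source/ /path/to/parent/
cp -a source/ /path/to/parent/      # "archive" shortcut: recursive, keeps mode, timestamps, links, etc.
Keep in mind that preserving ownership generally requires root; as an ordinary user the mode and timestamps are still preserved. | {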
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/639105",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/460592/"
]
} |
639,112 | I want to execute a command in a loop until it exits with the exit code 0 , so I tried: $ while/until $?; do $command; done But until/while seems to only accept booleans like true/false and not 0/1 . Did I miss something like [[ ]] or something else? | The while and until keywords takes a command as their argument. If the command returns a zero exit-status, signalling "success", while would run the commands in the body of the loop, while until would not run the commands in the body of the loop. Your code seems to use while and until with $? , the exit-status of the most recent command, as a command. This will not work, unless you have commands with integers as names on the system. What would work is to use the test command to compare it against 0 , for example, with until test "$?" -eq 0; do some-command; done or, using the short form of test , until [ "$?" -eq 0 ]; do some-command; done The issue now is obviously that $? may well be zero when entering the loop, preventing the loop from running even once. It would make more sense to let the command itself give its exit-status directly to either while or until (unless you want to initialize $? to a non-zero value by calling e.g. false just before the loop, which would look strange). Using standard POSIX (though not Bourne, not that it matters much these days) shell syntax: while ! some-command; do continue; done or (both Bourne and POSIX), until some-command; do continue; done Both of these runs some-command until it returns a zero exit-status ("success"). In each iteration, it executes the continue keyword inside the loop body. Executing continue skips ahead to the next iteration of the loop immediately, but you could replace it with e.g. sleep 5 to have a short delay in each iteration, or with : which does nothing. Another variant would be to have an infinite loop that you exit when your command succeeds: while true; do some-command && break; done or, slightly more verbose, and with added air for readability, while true; do if some-command; then break fidone The only time you actually need to use $? is when you are required to keep track of this value for a while, as in somefunction () { grep -q 'pattern' file ret=$? if [ "$ret" -ne 0 ]; then printf 'grep failed with exit code %s\n' "$ret" fi >&2 return "$ret"} In the code above, the exit-status in $? is reset by the test [ "$ret" -ne 0 ] and we would not be able to print its value nor return it from the function without storing it in a separate variable. Another thing to point out is that you seem to be using a string , $command (unquoted), as your command. It would be much better to use an array if you want to run a command stored in a variable: thecommand=( awk -F ',' 'BEGIN { "uptime" | getline; split($4,a,": "); if (a[2] > 5) exit 1 }' )while ! "${thecommand[@]}"; do sleep 5; done (This example code sleeps in intervals of five seconds until the 1-minute average system load has gone below 5.) Note how by using an array and quoting the expansion as "${thecommand[@]}" , I can run my command without worrying about how the shell would split the string into words and apply filename globbing to each of those words. See also How can we run a command stored in a variable? for more info on this aspect of your issue. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/639112",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/446693/"
]
} |
639,180 | I cannot find any informations about it. May someone has some insights to share. apt suggests to downgrade some SSL packages. # apt-get update && apt-get dist-upgrade --assume-yesReading package lists... DoneBuilding dependency tree Reading state information... DoneCalculating upgrade... DoneThe following packages will be DOWNGRADED: libssl-dev libssl1.1 openssl0 upgraded, 0 newly installed, 3 downgraded, 0 to remove and 0 not upgraded.E: Packages were downgraded and -y was used without --allow-downgrades. Why this packages would be downgraded? I didn't initiated anything to downgrade them. It's just what happened during my regular daily dist-upgrade. I assume there's some critical security issue in SSL they cannot fix fast and easy. So they downgrade to the latest version without that issue. But currently I didn't find any information about such thing. Additional info Linux <hostname> 4.19.0-14-amd64 #1 SMP Debian 4.19.171-2 (2021-01-30) x86_64 GNU/Linuxlibssl-dev/now 1.1.1j-1+0~20210301.25+debian10~1.gbp2578a0 amd64 [installed,local]libssl-dev/stable 1.1.1d-0+deb10u5 amd64libssl-dev/stable 1.1.1d-0+deb10u4 amd64libssl-dev/stable 1.1.1d-0+deb10u5 i386libssl-dev/stable 1.1.1d-0+deb10u4 i386libssl1.1/now 1.1.1j-1+0~20210301.25+debian10~1.gbp2578a0 amd64 [installed,local]libssl1.1/stable 1.1.1d-0+deb10u5 amd64libssl1.1/stable 1.1.1d-0+deb10u4 amd64libssl1.1/stable 1.1.1d-0+deb10u5 i386libssl1.1/stable 1.1.1d-0+deb10u4 i386openssl/now 1.1.1j-1+0~20210301.25+debian10~1.gbp2578a0 amd64 [installed,local]openssl/stable 1.1.1d-0+deb10u5 amd64openssl/stable 1.1.1d-0+deb10u4 amd64openssl/stable 1.1.1d-0+deb10u5 i386openssl/stable 1.1.1d-0+deb10u4 i386 # apt policy libssl-dev libssl1.1 openssllibssl-dev: Installed: 1.1.1j-1+0~20210301.25+debian10~1.gbp2578a0 Candidate: 1.1.1d-0+deb10u5 Version table: *** 1.1.1j-1+0~20210301.25+debian10~1.gbp2578a0 100 100 /var/lib/dpkg/status 1.1.1d-0+deb10u5 1000 500 http://security.debian.org/debian-security buster/updates/main amd64 Packages 1.1.1d-0+deb10u4 1000 500 http://ftp.hosteurope.de/mirror/ftp.debian.org/debian buster/main amd64 Packageslibssl1.1: Installed: 1.1.1j-1+0~20210301.25+debian10~1.gbp2578a0 Candidate: 1.1.1d-0+deb10u5 Version table: *** 1.1.1j-1+0~20210301.25+debian10~1.gbp2578a0 100 100 /var/lib/dpkg/status 1.1.1d-0+deb10u5 1000 500 http://security.debian.org/debian-security buster/updates/main amd64 Packages 1.1.1d-0+deb10u4 1000 500 http://ftp.hosteurope.de/mirror/ftp.debian.org/debian buster/main amd64 Packagesopenssl: Installed: 1.1.1j-1+0~20210301.25+debian10~1.gbp2578a0 Candidate: 1.1.1d-0+deb10u5 Version table: *** 1.1.1j-1+0~20210301.25+debian10~1.gbp2578a0 100 100 /var/lib/dpkg/status 1.1.1d-0+deb10u5 1000 500 http://security.debian.org/debian-security buster/updates/main amd64 Packages 1.1.1d-0+deb10u4 1000 500 http://ftp.hosteurope.de/mirror/ftp.debian.org/debian buster/main amd64 Packages # apt policyPackage files: 100 /var/lib/dpkg/status release a=now 500 https://packages.sury.org/php buster/main i386 Packages release o=deb.sury.org,n=buster,c=main,b=i386 origin packages.sury.org 500 https://packages.sury.org/php buster/main amd64 Packages release o=deb.sury.org,n=buster,c=main,b=amd64 origin packages.sury.org 500 http://ftp.hosteurope.de/mirror/ftp.debian.org/debian buster-updates/non-free i386 Packages release o=Debian,a=stable-updates,n=buster-updates,l=Debian,c=non-free,b=i386 origin ftp.hosteurope.de 500 http://ftp.hosteurope.de/mirror/ftp.debian.org/debian buster-updates/non-free amd64 Packages release 
o=Debian,a=stable-updates,n=buster-updates,l=Debian,c=non-free,b=amd64 origin ftp.hosteurope.de 500 http://ftp.hosteurope.de/mirror/ftp.debian.org/debian buster-updates/main i386 Packages release o=Debian,a=stable-updates,n=buster-updates,l=Debian,c=main,b=i386 origin ftp.hosteurope.de 500 http://ftp.hosteurope.de/mirror/ftp.debian.org/debian buster-updates/main amd64 Packages release o=Debian,a=stable-updates,n=buster-updates,l=Debian,c=main,b=amd64 origin ftp.hosteurope.de 500 http://security.debian.org/debian-security buster/updates/non-free i386 Packages release v=10,o=Debian,a=stable,n=buster,l=Debian-Security,c=non-free,b=i386 origin security.debian.org 500 http://security.debian.org/debian-security buster/updates/non-free amd64 Packages release v=10,o=Debian,a=stable,n=buster,l=Debian-Security,c=non-free,b=amd64 origin security.debian.org 500 http://security.debian.org/debian-security buster/updates/main i386 Packages release v=10,o=Debian,a=stable,n=buster,l=Debian-Security,c=main,b=i386 origin security.debian.org 500 http://security.debian.org/debian-security buster/updates/main amd64 Packages release v=10,o=Debian,a=stable,n=buster,l=Debian-Security,c=main,b=amd64 origin security.debian.org 500 http://ftp.hosteurope.de/mirror/ftp.debian.org/debian buster/contrib i386 Packages release v=10.8,o=Debian,a=stable,n=buster,l=Debian,c=contrib,b=i386 origin ftp.hosteurope.de 500 http://ftp.hosteurope.de/mirror/ftp.debian.org/debian buster/contrib amd64 Packages release v=10.8,o=Debian,a=stable,n=buster,l=Debian,c=contrib,b=amd64 origin ftp.hosteurope.de 500 http://ftp.hosteurope.de/mirror/ftp.debian.org/debian buster/non-free i386 Packages release v=10.8,o=Debian,a=stable,n=buster,l=Debian,c=non-free,b=i386 origin ftp.hosteurope.de 500 http://ftp.hosteurope.de/mirror/ftp.debian.org/debian buster/non-free amd64 Packages release v=10.8,o=Debian,a=stable,n=buster,l=Debian,c=non-free,b=amd64 origin ftp.hosteurope.de 500 http://ftp.hosteurope.de/mirror/ftp.debian.org/debian buster/main i386 Packages release v=10.8,o=Debian,a=stable,n=buster,l=Debian,c=main,b=i386 origin ftp.hosteurope.de 500 http://ftp.hosteurope.de/mirror/ftp.debian.org/debian buster/main amd64 Packages release v=10.8,o=Debian,a=stable,n=buster,l=Debian,c=main,b=amd64 origin ftp.hosteurope.dePinned packages: openssl -> 1.1.1d-0+deb10u5 with priority 1000 openssl -> 1.1.1d-0+deb10u4 with priority 1000 libssl-dev -> 1.1.1d-0+deb10u5 with priority 1000 libssl-dev -> 1.1.1d-0+deb10u4 with priority 1000 libssl-doc -> 1.1.1d-0+deb10u5 with priority 1000 libssl-doc -> 1.1.1d-0+deb10u4 with priority 1000 libssl1.1 -> 1.1.1d-0+deb10u5 with priority 1000 libssl1.1 -> 1.1.1d-0+deb10u4 with priority 1000 Solution Based on the answere of @Louis Thompson ... The currently installed packages are in fact provided by the inofficial PHP repository maintained by Ondřej Surý. https://packages.sury.org/php/ https://packages.sury.org/php/dists/buster/main/debian-installer/binary-amd64/Packages To stay straight with my debian installation I downgraded these packages. By now everything works fine with my PHP installation and my PHP applications whose are using SSL functionality. Update Thanks to @William Turrell. I installed apt-listchanges to get informations about a change in the future. Would've made things a lot easier. | https://www.debian.org/security/2021/dsa-4855 This, and other package information about openssl in Debian Buster, indicates that 1.1.1d is the current stable version. 
It looks like you've acquired 1.1.1j from elsewhere (gbp2578a0), and it doesn't have this important security patch.
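Assuming you want to get back onto the Debian-maintained builds, the usual fix is an explicit downgrade to the versions offered by the buster repositories (a sketch based on the versions visible in your apt policy output; check the current deb10uN revision before running it):
apt-get install --allow-downgrades openssl=1.1.1d-0+deb10u5 libssl1.1=1.1.1d-0+deb10u5 libssl-dev=1.1.1d-0+deb10u5
and then remove or pin the third-party repository so it cannot pull openssl in again. | {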
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/639180",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/250324/"
]
} |
639,211 | I have an .m4a audio file and wish to take the front 8 seconds away and keep the rest of the file intact, then once this first step is done discard the last 8 seconds of the file and keep the rest of the file intact. So in essence the first 8 seconds will be completely discarded / removed from the file, and the file will start at the 8th second as if the previous 8 seconds never existed. Similarly the last 8 seconds of the file will be discarded. Do this all without re-encoding the file. ( EDIT : I have seen other answers on here and elsewhere, but I could not get them to work because they require the start and end time of the trimmed part to be given. What I need is for ffmpeg to provide the end timestamp and start timestamp of the .m4a audio file without my having to work this out and feed it into the command.) I have this for trimming the front of the file ffmpeg -t 00:00:08 -acodec copy -i in_file.m4a out_file.m4a but nothing for trimming the end of the file. I can't get what I have to work. (I've seen some other answers here and elsewhere, but nothing that seems to get me there) | https://www.debian.org/security/2021/dsa-4855 This, and other package information about openssl in Debian Buster, indicates that 1.1.1d is the current stable version.
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/639211",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/46470/"
]
} |
639,358 | Directly on the command line, this works: $sed "s/a/X/;s/X/&a&/" file and so does using shell variables: $varin=a ; varout=X ; sed "s/$varin/$varout/;s/$varout/&$varin&/" file However using an external script file $sed -f script.sed file Only the "hardcoded" approach works $cat script.seds/a/X/s/X/&a&/ While shell variables are not expanded $cat script.seds/$varin/$varout/a/$varout/&$varin&/$varin=a ; varout=X ; sed -f script.sed file#-> variables in script.sed not interpreted; output unchanged How to achieve the interpretation of shell variables within an external sed script file? I quite understand that sed itself cannot interpret the shell variables, but how could I preprocess the script (in a save manner) to once run it through bash for resolving variables? Maybe one could import them to sed with a command at the beginning? The following approaches were not fruitful (using bash ): using export varin=a ; export varout=X via sed -e $(cat sed.script) file : works for single-line script, fails on multi-line ones, even with comment as first line sed -f <(cat script.sed) file eval sed -f script.sed file | You could use envsubst to replace the variables in your script with the intended values before running sed . $ cat script.seds/$varin/$varout/s/$varout/&$varin&/ Replacing the strings $varin or ${varin} and $varout or ${varout} in script.sed with their variables values: $ varin='a' varout='X' envsubst '$varin $varout' <script.seds/a/X/s/X/&a&/ The sed command using a process substitution: sed -f <(varin='a' varout='X' envsubst '$varin $varout' <script.sed) file | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/639358",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/123460/"
]
} |
639,434 | This bash script seems broken: #!/bin/bashecho "It prints ok"cat << 'EOF' > ~/doFoo.sh echo 'nested script doing Foo'EOFecho "It never prints"cat << 'EOF' > ~/doBar.sh echo 'nested script doing Bar'EOFecho "It never prints too"# Here there is no doFoo.sh or doBar.sh in ~ls -l ~/doFoo.sh ~/doBar.sh The script just prints the first message ( It prints ok ) and creates a file named doFoo.sh'$'\r' with the following contents: echo 'nested script doing Foo'EOFecho "It never prints"cat << 'EOF' > ~/doBar.sh echo 'nested script doing Bar'EOFecho "It never prints too"# Here there is no doFoo.sh or doBar.sh in ~ @Jim L. After adding the line you told the exact output is still: It prints ok and nothing more. | You wrote your script on a Windows system, using an editor that saved it as a DOS text file. According to comments, you then edited it with nano a few times on a Unix system. Most text editors on Unix, nano included, will notice that the text file is in DOS format, and then preserve this format when the file is later saved. For nano , if you start the editor with -u or --unix , it will always save in Unix text format. Since DOS text files have "crlf" (carriage-return+linefeed) newlines, and Unix text files have "lf" (linefeed) newlines, this mean that each line, when read by Unix tools, now has a carriage-return character at the end of it (invisible, but usually encoded as ^M or \r ). These carriage-returns are interfering with the commands in your script. For example, this makes it impossible for the shell to find the ending EOF for the first here-document, as the line actually says EOF\r , not EOF . You would see the carriage-returns if you use cat -v on the script: $ cat -v script#!/bin/bash^Mecho "It prints ok"^Mcat << 'EOF' > ~/doFoo.sh^M echo 'nested script doing Foo'^MEOF^Mecho "It never prints"^Mcat << 'EOF' > ~/doBar.sh^M echo 'nested script doing Bar'^MEOF^Mecho "It never prints too"^M# Here there is no doFoo.sh or doBar.sh in ~^M Simply convert your script file to a Unix script file using dos2unix , and it will be fixed, or save the text in nano after having started nano with -u or --unix as described above. $ dos2unix scriptdos2unix: converting file script to Unix format... $ cat -v script#!/bin/bashecho "It prints ok"cat << 'EOF' > ~/doFoo.sh echo 'nested script doing Foo'EOFecho "It never prints"cat << 'EOF' > ~/doBar.sh echo 'nested script doing Bar'EOFecho "It never prints too"# Here there is no doFoo.sh or doBar.sh in ~ | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/639434",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/460900/"
]
} |
639,443 | In shell (not Bash) I have a string like this that I want to parse: stringhere/morestring!99 After parsing, I want to keep the 99 at the end, and throw away the rest of the string. The substring that needs to be kept will not always be two characters long. It will be one or more digits, from the ! to either the end of string, or a , . Example input/output: In: stringhere/morestring!99Out: 99In: string/more!99,string/more!98,string/more!97Out: 99 cut sounds like the obvious thing to use, except for the ! in the middle of the string. Is there an easy way to do this? Would awk be better? | If that strings are in FILE, and you always only want the first numbers after ! and before the first , , if it exists this should work awk -F'[!,]' '{print$2}' FILE It takes either ! or , as delimiter and shows the second field, which will be the first numbers between ! and , or are just after ! if there is no , in that line, or before it. If there is , before first ! upper awk example is not applicable. You could also pipe one cut command to another, in first you specify ! as delimiter and take stuff after the first ! and in second you specify , as delimiter and take the stuff before first , if it exists cut -d'!' -f2 FILE | cut -d',' -f1 | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/639443",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/14184/"
]
} |
639,482 | I can disable it with sudo nmcli radio wifi off but it turns back on after a reboot. The only thread I could find was this one , but it only applies to Ubuntu, and I'm using Arch Linux. My laptop has a physical switch, but it doesn't actually disable the Wi-fi card. It just asks the OS to turn off Wi-fi, but it turns back on after a reboot. To be clear: I don't just want to turn off Wi-fi and stop NetworkManager from connecting to networks. I want to disable the radio entirely and permanently, so it doesn't consume any power ever. Edit: I would also like to easily enable the wireless card as needed. | rfkill block ID|type might be what you're looking for. Run rfkill without any parameters to find out what devices you've got.
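For example (a sketch; exact device names and persistence depend on your setup):
rfkill                      # list devices and their current soft/hard block state
sudo rfkill block wlan      # soft-block all wireless LAN devices
sudo rfkill unblock wlan    # re-enable Wi-Fi when you need it again
On systemd-based distributions such as Arch, systemd-rfkill normally saves this state and restores it at boot, so the block should survive reboots until you unblock it. | {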
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/639482",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/418663/"
]
} |
639,534 | I have a list of values, separated by ':' and I want to process them one by one. When the delimiter is space, there are no problems: nuclear@korhal:~$ for a in 720 500 560 130; do echo $a; done720500560130 But after settings IFS (Internal Field Separator) to : , strange things start to happen: nuclear@korhal:~$ IFS=":" for a in 720:500:560:130; do echo $a; done;bash: syntax error near unexpected token `do' If I skip all semicolons, when IFS is set: nuclear@korhal:~$ IFS=":" for a in 720:500:560:130 do echo $a done;Command 'for' not found, did you mean: command 'vor' from deb vor (0.5.8-1) command 'fop' from deb fop (1:2.5-1) command 'tor' from deb tor (0.4.4.5-1) command 'forw' from deb mailutils-mh (1:3.9-3.2) command 'forw' from deb mmh (0.4-2) command 'forw' from deb nmh (1.7.1-7) command 'sor' from deb pccts (1.33MR33-6build1) command 'form' from deb form (4.2.1+git20200217-1) command 'fox' from deb objcryst-fox (1.9.6.0-2.2) command 'fort' from deb fort-validator (1.4.0-1) command 'oor' from deb openoverlayrouter (1.3.0+ds1-3)Try: sudo apt install <deb name> Bash does not recognize the for command at all. If there was no IFS set in this case, it will show the prompt, because it expects more output (normal behaviour) What is happening when the IFS is set to custom character? Why the for loop does not work with it? I am using Kubuntu 20.10Bash version 5.0.17 | Keywords aren't recognized after an assignment. So, the for in IFS=blah for ... just runs a regular command called for , if you have one: $ cat > ./for#!/bin/sh echo script for$ chmod +x ./for $ PATH=$PATH:.$ for x in a b c> ^C$ foo=bar for x in a b cscript for But because Bash parses the whole input line before running it, the keyword do causes a syntax error before that happens. This is similar with redirections in place of the assignment: Can I specify a redirected input before a compound command? And also see Why can't you reverse the order of the input redirection operator for while loops? for the gory details about how the syntax is defined. Also see: How do I use a temporary environment variable in a bash for loop? My Zsh is stricter: $ zsh -c 'foo=bar for x in a b c'zsh:1: parse error near `for' But Zsh does allow redirections there before a compound command. This outputs the three lines to test.txt : $ zsh -c '> test.txt for x in a b c ; do echo $x; done ' Besides, note that IFS won't be used to split a static string like 720:500:560:130 , word splitting only works for expansions. So: $ IFS=":"$ for a in 720:500:560:130; do echo "$a"; done;720:500:560:130 but, $ IFS=":"$ s=720:500:560:130$ for a in $s; do echo "$a"; done;720500560130 | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/639534",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/17354/"
]
} |
639,726 | I have the following command line, which should use nc to check communication with the IP and port in question and return the value 1 on success: /bin/nc -z 10.102.10.22 10003 > /dev/null && if [ $? -eq 0 ]; then echo 1; else echo 0; fi The result is satisfactory and I get the value 1 . But when there is no communication with the port or IP, the command is left "waiting", without producing a value of any kind. What would be the correct way to return the value 0 (the else statement) after a specific time has passed (e.g. 10 seconds)? The results of this command are monitored at short intervals to draw a communications graph, so it is interesting to know when it is 0 . | The && only applies the second part if the first part is true. So what you have is this: nc succeeds, so run the if (it succeeded, so echo 1); nc fails, so don't go past the && . I think what you want is this: if nc -z 10.102.10.22 10003 > /dev/null; then echo 1; else echo 0; fi
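To get the timeout behaviour you asked about, bound how long nc may wait (a sketch; the flag depends on your nc variant, and 10 is the timeout in seconds):
if nc -w 10 -z 10.102.10.22 10003 > /dev/null 2>&1; then echo 1; else echo 0; fi
if timeout 10 nc -z 10.102.10.22 10003 > /dev/null 2>&1; then echo 1; else echo 0; fi   # alternative using coreutils timeout
Either way the command prints 0 after roughly 10 seconds instead of hanging when the host or port is unreachable. | {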
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/639726",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/373893/"
]
} |
639,764 | I installed a LEMP web server on top of a Debian Buster template given by my server provider OVH . The server has connectivity problem : from time to time the ping is lost, then it needs a hard reboot from OVH technicians. I reinstalled the OS template three times from scratch. I ran processors and memory tests which resulted OK. I ran file system check . OVH support is overbooked and not reachable, due to the huge fire that destroyed a datacenter last week ..., so that I can't grab any information from there. Then I realized that systemd-networkd.service and networking.service are alternatively actives at reboot . Below are some sample output of the machine. Before hard reboot yesterday, while still connected in ssh: root@srv:~# systemctl | grep network cloud-init-local.service loaded active exited Initial cloud-init job (pre-networking) ● networking.service loaded failed failed Raise network interfaces network-online.target loaded active active Network is Online network-pre.target loaded active active Network (Pre) network.target loaded active active Networkroot@srv:~# systemctl status networking.service ● networking.service - Raise network interfaces Loaded: loaded (/lib/systemd/system/networking.service; enabled; vendor preset: enabled) Active: failed (Result: exit-code) since Mon 2021-03-15 10:32:32 CET; 2h 24min ago Docs: man:interfaces(5) Process: 710 ExecStart=/sbin/ifup -a --read-environment (code=exited, status=1/FAILURE) Main PID: 710 (code=exited, status=1/FAILURE) Mar 15 10:32:31 srv dhclient[747]: DHCPREQUEST for 111.222.333.213 on enp3s0f0 to 255.255.255.255 port 67 Mar 15 10:32:31 srv dhclient[747]: DHCPACK of 111.222.333.213 from 111.222.333.253 Mar 15 10:32:32 srv ifup[710]: /etc/resolvconf/update.d/libc: Warning: /etc/resolv.conf is not a symbolic link to /etc/resolvconf/run/resolv.conf Mar 15 10:32:32 srv dhclient[747]: bound to 111.222.333.213 -- renewal in 40762 seconds. Mar 15 10:32:32 srv ifup[710]: bound to 111.222.333.213 -- renewal in 40762 seconds. Mar 15 10:32:32 srv ifup[710]: RTNETLINK answers: File exists Mar 15 10:32:32 srv ifup[710]: ifup: failed to bring up enp3s0f0 Mar 15 10:32:32 srv systemd[1]: networking.service: Main process exited, code=exited, status=1/FAILURE Mar 15 10:32:32 srv systemd[1]: networking.service: Failed with result 'exit-code'. Mar 15 10:32:32 srv systemd[1]: Failed to start Raise network interfaces. 
After hard reboot yesterday : root@srv:/etc/systemd/network# systemctl | grep networkcloud-init-local.service loaded active exited Initial cloud-init job (pre-networking) networking.service loaded active exited Raise network interfaces network-online.target loaded active active Network is Online network-pre.target loaded active active Network (Pre) network.target loaded active active Networkroot@srv:/etc/systemd/network# systemctl status networking.service ● networking.service - Raise network interfaces Loaded: loaded (/lib/systemd/system/networking.service; enabled; vendor preset: enabled) Active: active (exited) since Tue 2021-03-16 13:53:25 CET; 1 day 1h ago Docs: man:interfaces(5) Main PID: 714 (code=exited, status=0/SUCCESS) Tasks: 1 (limit: 4915) Memory: 14.3M CGroup: /system.slice/networking.service └─751 /sbin/dhclient -4 -v -i -pf /run/dhclient.enp3s0f0.pid -lf /var/lib/dhcp/dhclient.enp3s0f0.leases -I -df /var/lib/dhcp/dhclient6.enp3s0f0.leases enp3s0f0root@srv:/etc/systemd/network# systemctl status systemd-networkd● systemd-networkd.service - Network Service Loaded: loaded (/lib/systemd/system/systemd-networkd.service; disabled; vendor preset: enabled) Active: inactive (dead) Docs: man:systemd-networkd.service(8) Network configuration root@srv:~# ls /etc/network/if-down.d if-post-down.d if-pre-up.d if-up.d interfaces interfaces.droot@srv:~# cat /etc/network/interfacesauto loiface lo inet loopback# The normal eth0allow-hotplug eth0iface eth0 inet dhcp# Additional interfaces, just in case we're using multiple networksallow-hotplug eth1iface eth1 inet dhcpallow-hotplug eth2iface eth2 inet dhcp# Set this one last, so that cloud-init or user can defaults.source /etc/network/interfaces.d/*root@srv:~# ls /etc/network/interfaces.d/50-cloud-initroot@srv:~# cat /etc/network/interfaces.d/50-cloud-init # This file is generated from information provided by the datasource. Changes# to it will not persist across an instance reboot. To disable cloud-init's# network configuration capabilities, write a file# /etc/cloud/cloud.cfg.d/99-disable-network-config.cfg with the following:# network: {config: disabled}auto loiface lo inet loopbackauto enp3s0f0iface enp3s0f0 inet dhcp accept_ra 0# control-alias enp3s0f0iface enp3s0f0 inet6 static address 2001:abcd:1007:efgh::/56 dns-nameservers 2001:abcd:3:163::1 gateway 2001:abcd:1007:1dff:ff:ff:ff:ff post-up route add -A inet6 2001:abcd:1007:ef00::/57 gw 2001:abcd:1007:1dff:ff:ff:ff:ff || true pre-down route del -A inet6 2001:abcd:1007:ef00::/57 gw 2001:abcd:1007:1dff:ff:ff:ff:ff || true[... plus about fifty similar lines ...]root@srv:~# ls /etc/systemd/networkroot@srv:~# ls /lib/systemd/network80-container-host0.network 80-container-ve.network 80-container-vz.network 99-default.link Question May something in that config be responsible of reccurent lost connectivity ? How do I make sure to reboot always with the same service, either systemd.networkd or networking ? Where do I add my static IPs in that jungle ? I may add the following to a file in /etc/systemd/network/ (empty at the moment), but that would imply trying to start systemd.networkd and stop the other, not sure if I can do this from remote ssh..., neither if the service is properly setup ! nano /etc/systemd/network/50-default.network[Address]Address=FAILOVER_IP/32Label=failover1 # optional Or better add something like that into /etc/network/interfaces and restart networking.service ? 
auto eth0:0iface eth0:0 inet staticaddress STATIC_IPnetmask 255.255.255.255auto eth0:1iface eth0:1 inet staticaddress OTHER_STATIC_IPnetmask 255.255.255.255 Thanks a lot ! Side note, all the network configuration was provided by OVH in its template, and unfortunately my knowledge in that topic is very limited. Moreover the network config changes between successive versions of Debian make it even harder to learn IMHO. | The && only applies the second part if the first part is true. So what you have is this nc succeeds, so run the if it suceeded (which it did) echo 1 nc fails, so don't go past the && I think what you want is this if nc -z 10.102.10.22 10003 > /dev/null; then echo 1; else echo 0; fi | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/639764",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/149565/"
]
} |
639,859 | I have a text file that contains bash commands for fetching some data from a database. Each command is separated with a newline. See an example below: (the commands could also have been "ls -l", "cat whatever.txt", etc.) prefetch SRR403012fastq-dump --fasta 0 SRR403012my_cpp_program.out SRR403012... How do I create a bash script that executes all these commands?I want something that executes the command in sequential order. However! It's important to note that the commands will take time before termination, because of the enormous files I am fetching. Thus, the for loop should also wait until the execution has finished. #!/bin/bashFILENAME="my_commands.txt"LINES=$(cat $FILENAME)for LINE in LINESdo execute $LINEdone | Your list of commands is effectively a shell script, even without a shebang (the #!/bin/bash line at the top). You can specify it as an argument to the shell, and the shell will execute it, one line at a time, waiting for the execution to complete: bash my_commands.txt or, if your commands don’t actually need bash , sh my_commands.txt | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/639859",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/460364/"
]
} |
639,898 | I want to separate a long path to multiple lines, like this: cd foo1/foo2/foo3/foo4/bar to cd foo1\ foo2\ foo3\ foo4\ bar | You can separate a long command into multiple lines by using backslashes, but you would need to preserve the forward-slashes and omit the leading spaces: cd foo1\/foo2\/foo3\/foo4\/bar The backslashes are a line-continuation marker; when bash sees them, it incorporates the next line as if it was continued at the backslash of the current line. As a result, you couldn't use leading spaces on those subsequent lines, since they'd become spaces on the current line, creating a "too many arguments" error. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/639898",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/458543/"
]
} |
639,913 | I'm trying to install CentOS 7 on a desktop PC, which I'm not allowed to connect to the Internet. I would like to have the GNOME desktop and development tools installed as well. My last install attempt resulted in hanging at the "Starting Package Installation Process" stage, and I'm assuming it's because the installer is trying to reach an external mirror for packages. Is it possible to do a GNOME desktop with Development Tool add-on without a network connection? If not, I've brainstormed a couple options: Try installing using an "Everything" ISO Use the DVD ISO and perform a minimal install, then download packages to a DVD, and use that to set up a local yum repo In the case that I can't achieve my main goal without a network, which of the alternate paths would be the best to take? | You can separate a long command into multiple lines by using backslashes, but you would need to preserve the forward-slashes and omit the leading spaces: cd foo1\/foo2\/foo3\/foo4\/bar The backslashes are a line-continuation marker; when bash sees them, it incorporates the next line as if it was continued at the backslash of the current line. As a result, you couldn't use leading spaces on those subsequent lines, since they'd become spaces on the current line, creating a "too many arguments" error. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/639913",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/461362/"
]
} |
640,062 | How do I correctly run a few commands with an altered value of the IFS variable (to change the way field splitting works and how "$*" is handled), and then restore the original value of IFS ? I know I can do ( IFS='my value here' my-commands here) to localize the change of IFS to the sub-shell, but I don't really want to start a sub-shell, especially not if I need to change or set the values of variables that needs to be visible outside of the sub-shell. I know I can use saved_IFS=$IFS; IFS='my value here'my-commands hereIFS=$saved_IFS but that seems to not restore IFS correctly in the case that the original IFS was actually unset . Looking for answers that are shell agnostic (but POSIX). Clarification: That last line above means that I'm not interested in a bash -exclusive solution. In fact, the system I'm using most, OpenBSD, does not even come with bash installed at all by default, and bash is not a shell I use for anything much other than to answer questions on this site. It's much more interesting to see solutions that I may use in bash or other POSIX-like shells without making an effort to write non-portable code. | Yes, in the case when IFS is unset, restoring the value from $saved_IFS would actually set the value of IFS (to an empty value). This would affect the way field splitting of unquoted expansions is done, it would affect field splitting for the read built-in utility, and it would affect the way the positional parameters are combined into a string when using "$*" . With an unset IFS these things would happen as if IFS had the value of a space, a tab character, and a newline character, but with an empty value, there would be no field splitting and the positional parameters would be concatenated into a string with no delimiter when using "$*" . So, there's a difference. To correctly restore IFS , consider setting saved_IFS only if IFS is actually set to something. unset saved_IFS[ -n "${IFS+set}" ] && saved_IFS=$IFS The parameter substitution ${IFS+set} expands to the string set only if IFS is set, even if it is set to an empty string. If IFS is unset, it expands to an empty string, which means that the -n test would be false and saved_IFS would remain unset. Now, saved_IFS is unset if IFS was initially unset, or it has the value that IFS had, and you can set whatever value you want for IFS and run your code. When restoring IFS , you do a similar thing: unset IFS[ -n "${saved_IFS+set}" ] && { IFS=$saved_IFS; unset saved_IFS; } The final unset saved_IFS isn't really necessary, but it may be good to clean up old variables from the environment. An alternative way of doing this, suggested by LL3 in comments (now deleted), relies on prefixing the unset command by : , a built-in utility that does nothing, effectively commenting out the unset , when it's not needed: saved_IFS=$IFS${IFS+':'} unset saved_IFS This sets saved_IFS to the value of $IFS , but then unsets it if IFS was unset. Then set IFS to your value and run you commands. Then restore with IFS=$saved_IFS${saved_IFS+':'} unset IFS (possibly followed by unset saved_IFS if you want to clean up that variable too). Note that : must be quoted, as above, or escaped as \: , so that it isn't modified by $IFS containing : (the unquoted parameter substitution invokes field splitting, after all). | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/640062",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/116858/"
]
} |
640,075 | I need to extract a path that is set within a config file, to use it in a bash script. This is how that path looks like inside said file: DIR = "${HOME}/test/tmp" I need to extract it without quotation marks, and this is how i do it: TESTVAR="$(cat /home/user/path/to/file.conf | grep ^DIR | grep -o '".*"' | tr -d '"')" The problem is that commands don't interpret ${HOME} variable "properly". Let's say i call echo $TESTVAR - as a result, instead of this: /home/user/test/tmp i get that: ${HOME}/test/tmp so i can't use it as a parameter of commands inside a script! Pretty please? | Expansions don't get recursively applied. Doing that would make it impossible to handle arbitrary data with dollar signs embedded. (A related matter is that quotes and redirection and other operators are also just regular characters after expansions.) A somewhat usual custom is to have config files like that as actual shell script, so (like the ones in Debian's /etc/default ), so the file would be DIR="${HOME}/test/tmp" and you'd read it with . configfile though of course that has the problem that the file is a full shell script, must conform to the shell syntax and the config can run arbitrary commands. Another possibility would be to run the file through envsubst , e.g. with the file in your question, envsubst < configfile would output: DIR = "/home/me/test/tmp" or you could use envsubst '$HOME $OTHER $VARS' to expand just some particular ones. Note that unlike a shell script would do, envsubst doesn't read any assignments from the input file. E.g. the value of ROOTPATH used on the second line is the one envsubst gets from the environment, it has nothing to do with the "assignment" on the first line: ROOTPATH=/something/$USERLOGPATH=$ROOTPATH/logs | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/640075",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/461544/"
]
} |
640,083 | CPU cores in Linux, /proc/cpuinfo , are separated with an empty line. How would you go about printing the information only for the first core? | Expansions don't get recursively applied. Doing that would make it impossible to handle arbitrary data with dollar signs embedded. (A related matter is that quotes and redirection and other operators are also just regular characters after expansions.) A somewhat usual custom is to have config files like that as actual shell script, so (like the ones in Debian's /etc/default ), so the file would be DIR="${HOME}/test/tmp" and you'd read it with . configfile though of course that has the problem that the file is a full shell script, must conform to the shell syntax and the config can run arbitrary commands. Another possibility would be to run the file through envsubst , e.g. with the file in your question, envsubst < configfile would output: DIR = "/home/me/test/tmp" or you could use envsubst '$HOME $OTHER $VARS' to expand just some particular ones. Note that unlike a shell script would do, envsubst doesn't read any assignments from the input file. E.g. the value of ROOTPATH used on the second line is the one envsubst gets from the environment, it has nothing to do with the "assignment" on the first line: ROOTPATH=/something/$USERLOGPATH=$ROOTPATH/logs | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/640083",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/260833/"
]
} |
640,114 | I've looked around multiple guides for Manjaro KDE to add a Korean keyboard, but none of them work. I can "swap" to Korean, but my input is still English. I've added Korean to my locale, installed the fonts, tried the guides for other languages, but nothing works. How can I install the Korean keyboard? | Expansions don't get recursively applied. Doing that would make it impossible to handle arbitrary data with dollar signs embedded. (A related matter is that quotes and redirection and other operators are also just regular characters after expansions.) A somewhat usual custom is to have config files like that as actual shell script, so (like the ones in Debian's /etc/default ), so the file would be DIR="${HOME}/test/tmp" and you'd read it with . configfile though of course that has the problem that the file is a full shell script, must conform to the shell syntax and the config can run arbitrary commands. Another possibility would be to run the file through envsubst , e.g. with the file in your question, envsubst < configfile would output: DIR = "/home/me/test/tmp" or you could use envsubst '$HOME $OTHER $VARS' to expand just some particular ones. Note that unlike a shell script would do, envsubst doesn't read any assignments from the input file. E.g. the value of ROOTPATH used on the second line is the one envsubst gets from the environment, it has nothing to do with the "assignment" on the first line: ROOTPATH=/something/$USERLOGPATH=$ROOTPATH/logs | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/640114",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/330505/"
]
} |
640,272 | I am using Ubuntu 20.04 for Windows 10 (WSL2) on a Haswell laptop and I am getting about 0.6 bytes per second. As in 6 bytes total after 10 seconds of waiting. This is unacceptable. What is the problem? EDIT: This only appears to be an issue when operating in WSL2 mode.WSL1 = 40MiB/sWSL2 = 0.6 byte/s | Both /dev/random and /dev/urandom in Linux are cryptographically secure pseudorandom number generators. In older versions of the Linux kernel, /dev/random would block once initialized until additional sufficient entropy was accumulated, whereas /dev/urandom would not. Since WSL2 is a virtual machine with a real Linux kernel, it has a limited set of entropy sources from which it can draw entropy and must rely on the host system for most of its entropy. However, as long as it has received enough entropy when it boots, it's secure to use the CSPRNGs. It sounds like in your environment, the CSPRNG has been seeded at boot from Windows, but isn't reseeded at a high rate. That's fine, but it will cause /dev/random to block more frequently than you want. Ultimately, this is a problem with the configuration of WSL2. WSL1 probably doesn't have this problem because in such a case, /dev/random probably doesn't block and just uses the system CSPRNG, like /dev/urandom . In more recent versions of Linux , the only time that /dev/random blocks is if enough entropy hasn't been accumulated at boot to seed the CSPRNG once; otherwise, it is completely equivalent to /dev/urandom . This decision was made because there is no reasonable security difference in the two interfaces provided the pool has been appropriately initialized. Since there's no measurable difference in these cases, if /dev/random is blocking and is too slow for you, the proper thing to do is use /dev/urandom , since they are the output of the same CSPRNG (which is based on ChaCha20). The upstream Linux behavior will likely be the default in a future version of WSL2 anyway, since Microsoft will eventually incorporate a newer version of Linux. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/640272",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/430596/"
]
} |
640,294 | I've been trying to make printf output some chars, given their ASCII numbers (in hex)... something like this: #!/bin/bashhexchars () { printf '\x%s' $@ ;}hexchars 48 65 6c 6c 6fExpected output:Hello For some reason that doesn't work though. Any ideas? EDIT: Based on the answer provided by Isaac (accepted answer), I ended up with this function: chr () { local var ; printf -v var '\\x%x' $@ ; printf "$var" ;} Note, I rewrote his answer a bit in order to improve speed by avoiding the subshell. Result:~# chr 0x48 0x65 0x6c 0x6c 0x6fHello~# chr 72 101 108 108 111Hello~# chr {32..126} !"#$%&'()*+,-./0123456789:;<=>?@ABCDEFGHIJKLMNOPQRSTUVWXYZ[\]^_`abcdefghijklmnopqrstuvwxyz{|} I guess the inverse of the chr function would be a function like... asc () { printf '%d\n' "'$1" ;}asc A65chr 65A Or, if we want strictly hex variants... chrx () { local var ; printf -v var '\\x%s' $@ ; printf "$var\n" ;}ascx () { printf '%x\n' "'$1" ;}chrx 48 65 6c 6c 6fHelloascx A41 Thank you! | Oh, sure, just that it has to be done in two steps. Like a two step tango: $ printf "$(printf '\\x%s' 48 65 6c 6c 6f)"; echoHello Or, alternatively: $ test () { printf "$(printf '\\x%s' "$@")"; echo; }$ test 48 65 6c 6c 6fHello Or, to avoid printing on "no arguments": $ test () { [ "$#" -gt 0 ] && printf "$(printf '\\x%s' "$@")"; echo; }$ test 48 65 6c 6c 6fHello$ That is assuming that the arguments are decimal values between 1 and 127 (empty arguments would be counted but will fail on printing). | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/640294",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/-1/"
]
} |
640,343 | I accidentally executed sudo rm /* instead of sudo rm ./* inside a directory whose contents I wanted to delete, and I have basically messed up my system. None of the basic commands like ls , grep etc. are working, and none of my applications are opening, like chromium, slack, image viewer etc. I tried to look up my problem on the internet and found this question, but none of the solutions there work for me. I am on an Arch Linux desktop, and I haven't logged out of my system since this happened, because I'm afraid I won't be able to log back in, as suggested here . Also, I don't have a live USB of an Arch Linux image file, if that helps. Any help on how should I proceed further to make my system go back to normal, would be appreciated. Thanks! EDIT : I'm attaching the outputs of some commands: $ echo /*/boot /dev /etc /home /lost+found /mnt /opt /proc /root /run /srv /sys /tmp /usr /var $ echo /usr/*/usr/bin /usr/include /usr/lib /usr/lib32 /usr/lib64 /usr/local /usr/sbin /usr/share /usr/src Also, echo /usr/bin/* gives me a long list of directories in the format /usr/bin/{command} where {command} is any command that I could have run from the terminal had I not messed my system up. Please let me know if any other information is needed! | Arch Linux has four symbolic links in / : bin -> usr/bin lib -> usr/lib lib64 -> usr/lib sbin -> usr/bin You should be able to recreate them (using a Live-USB or an emergency shell) or by calling the linker (with root privileges and in / as working directory) directly: /usr/lib/ld-linux-x86-64.so.2 /usr/bin/ln -s usr/lib lib64 This should restore basic functionality in your running system. Then restoring the other symbolic links should be easy. If you don't have root privileges you can reboot into a recovery shell and fix the problems there. Why does /usr/bin/ls and other commands fail? Without the /lib64 symbolic link dynamically linked programs will not find the dynamic linker/loader because the path is hardcoded to /lib64/ld-linux-x86-64.so.2 (c.f. ldd /usr/bin/ln ). | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/640343",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/454328/"
]
} |
640,346 | Recently i got an update which broke telegram and now i can't install it using apt on my KDE Neon installation. It used to work perfectly fine before that. I got the repo at this article https://www.omgubuntu.co.uk/2019/08/how-to-install-telegram-on-ubuntu and I added it by using the command sudo add-apt-repository ppa:atareao/telegram Using the below command used to install it perfectly fine $ sudo apt install telegram-desktop but after some update i have been getting this error message and I don't understand why. Reading package lists... DoneBuilding dependency tree Reading state information... DoneStarting pkgProblemResolver with broken count: 1Starting 2 pkgProblemResolver with broken count: 1Investigating (0) telegram-desktop:amd64 < none -> 2.1.7+ds-2~ubuntu20.04.1 @un puN Ib >Broken telegram-desktop:amd64 Depends on libopenal1:amd64 < none | 1:1.19.1-1 @un uH > (>= 1.14) Considering libopenal1:amd64 0 as a solution to telegram-desktop:amd64 9999 Re-Instated libopenal-data:amd64 Re-Instated libopenal1:amd64Broken telegram-desktop:amd64 Depends on libqrcodegencpp1:amd64 < none | 1.5.0-2build1 @un uH > (>= 1.2.1) Considering libqrcodegencpp1:amd64 0 as a solution to telegram-desktop:amd64 9999 Re-Instated libqrcodegencpp1:amd64Broken telegram-desktop:amd64 Depends on librlottie0-1:amd64 < none | 0~git20200305.a717479+dfsg-1 @un uH > (>= 0~git20200305.a717479+dfsg) Considering librlottie0-1:amd64 0 as a solution to telegram-desktop:amd64 9999 Re-Instated librlottie0-1:amd64Broken telegram-desktop:amd64 Depends on libxxhash0:amd64 < none | 0.7.3-1 @un uH > (>= 0.6.5) Considering libxxhash0:amd64 0 as a solution to telegram-desktop:amd64 9999 Re-Instated libxxhash0:amd64Broken telegram-desktop:amd64 Depends on qtbase-abi-5-12-8:amd64 < none @un H > Considering libqt5core5a:amd64 3417 as a solution to telegram-desktop:amd64 9999DoneSome packages could not be installed. This may mean that you haverequested an impossible situation or if you are using the unstabledistribution that some required packages have not yet been createdor been moved out of Incoming.The following information may help to resolve the situation:The following packages have unmet dependencies: telegram-desktop : Depends: qtbase-abi-5-12-8E: Unable to correct problems, you have held broken packages. | Arch Linux has four symbolic links in / : bin -> usr/bin lib -> usr/lib lib64 -> usr/lib sbin -> usr/bin You should be able to recreate them (using a Live-USB or an emergency shell) or by calling the linker (with root privileges and in / as working directory) directly: /usr/lib/ld-linux-x86-64.so.2 /usr/bin/ln -s usr/lib lib64 This should restore basic functionality in your running system. Then restoring the other symbolic links should be easy. If you don't have root privileges you can reboot into a recovery shell and fix the problems there. Why does /usr/bin/ls and other commands fail? Without the /lib64 symbolic link dynamically linked programs will not find the dynamic linker/loader because the path is hardcoded to /lib64/ld-linux-x86-64.so.2 (c.f. ldd /usr/bin/ln ). | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/640346",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/408689/"
]
} |
640,388 | I have a xml file having following content. <contracts> <clients> <client> <name>Nicol</name> <clientRef>123</clientRef> </client> <client> <name>Basil</name> <clientRef>8234</clientRef> </client> </clients> <entries> <entry> <regCode>BCG</regCode> <clientRef>63352</clientRef> </entry> <entry> <regCode>TYD</regCode> <clientRef>3242</clientRef> </entry> </entries></contracts> The xml tags 'clientRef' are in both clients and entry sections. However, I need to remove the clientRef tags only in client section. The desired output is : <contracts> <clients> <client> <name>Nicol</name> </client> <client> <name>Basil</name> </client> </clients> <entries> <entry> <regCode>BCG</regCode> <clientRef>63352</clientRef> </entry> <entry> <regCode>TYD</regCode> <clientRef>3242</clientRef> </entry> </entries></contracts> I am new to shell and sed commands. How I can remove the clientRef tags with shell scripts? | Although possible, it is a very, very bad idea to attempt to parse XML or HTML with tools like sed that are based on regular expressions. That can work for simple cases but gets really hard to get right, even for experts , for even slightly more complex cases. So, use an XML parser such as xmlstarlet (should be installable from your operating system's repositories): $ xmlstarlet ed -d '//client/clientRef' file.xml <?xml version="1.0"?><contracts> <clients> <client> <name>Nicol</name> </client> <client> <name>Basil</name> </client> </clients> <entries> <entry> <regCode>BCG</regCode> <clientRef>63352</clientRef> </entry> <entry> <regCode>TYD</regCode> <clientRef>3242</clientRef> </entry> </entries></contracts> The ed means "edit this file" and the -d '//client/clientRef' means "remove clientRef entries under client ". In this particular case, you can also use simple text-parsing tools, so I will include an example, but please don't do this for anything more complicated, and be aware that it is likely to break with even a minor change in the input data: $ awk '{ if(/<clients>/){a=1} else if(/<\/clients>/){a=0} if(/<clientRef>/ && a){ next} }1;' file.xml <contracts> <clients> <client> <name>Nicol</name> </client> <client> <name>Basil</name> </client> </clients> <entries> <entry> <regCode>BCG</regCode> <clientRef>63352</clientRef> </entry> <entry> <regCode>TYD</regCode> <clientRef>3242</clientRef> </entry> </entries></contracts> | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/640388",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/277074/"
]
} |
640,457 | Yesterday before going to sleep I started a long process of which I thought it would be finished before I stand up, therefore I used ./command && sudo poweroff my system is configured to not ask for a password for sudo poweroff , so it should shutdown when that command is finished. However it is still running and I want to use that system for other tasks now. Having that command running in the background is not an issue, but having my system possibly shutting down any second is. Is there a way to prevent zsh from executing the poweroff command while making sure that the first one runs until it is done? Would editing the /etc/sudoers file so that the system asks for my password still help in this case? | As you clarified in comments it's still running in foreground on an interactive shell, you should just be able to press Ctrl+Z . That will suspend the ./command job. Unless ./command actually intercepts the SIGTSTP signal and chooses to exit(0) in that case (unlikely), the exit status will be non-0 (128+SIGTSTP, generally 148), so sudo poweroff will not be run. Then, you can resume ./command in foreground or background with fg or bg . You can test with: sleep 10 && echo poweroff And see that poweroff is not output when you press Ctrl+Z and resume later with fg / bg . Or with sleep 10 || echo "failed: $?" And see failed: 148 as soon as you press Ctrl+Z . Note that this is valid for zsh and assuming you started it with ./command && sudo poweroff . It may not be valid for other shells, and would not be if you started it some other way such as (./command && sudo poweroff) in a subshell or { ./command && sudo poweroff; } as part of a compound command (which zsh , contrary to most other shells transforms to a subshell so it can be resumed as a whole when suspended). | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/640457",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/200668/"
]
} |
640,579 | I was trying to understand the difference between a task_struct's mm and active_mm fields, and came across a 20-year old email from Linus Torvalds which references the notion of "anonymous processes": - we have "real address spaces" and "anonymous address spaces". The difference is that an anonymous address space doesn't care about the user-level page tables at all, so when we do a context switch into an anonymous address space we just leave the previous address space active. [...] - "tsk->mm" points to the "real address space". For an **anonymous process**, tsk->mm will be NULL, for the logical reason that an **anonymous process** really doesn't _have_ a real address space at all. - however, we obviously need to keep track of which address space we "stole" for such an anonymous user. For that, we have "tsk->active_mm", which shows what the currently active address space is. The rule is that for a process with a real address space (ie tsk->mm is non-NULL) the active_mm obviously always has to be the same as the real one. For a **anonymous process**, tsk->mm == NULL, and tsk->active_mm is the "borrowed" mm while the **anonymous process** is running. When the **anonymous process** gets scheduled away, the borrowed address space is returned and cleared. | It's more or less explained in the part of the email you left out. The obvious use for a "anonymous address space" is any thread that doesn't need any user mappings - all kernel threads basically fall into this category, but even "real" threads can temporarily say that for some amount of time they are not going to be interested in user space, and that the scheduler might as well try to avoid wasting time on switching the VM state around. Currently only the old-style bdflush sync does that. Kernel threads only access kernel memory, so they don't care what's in user space memory. "Anonymous process" is an optimization for these. When the scheduler switches to a kernel thread task it can then skip the relatively time-consuming memory mapping setup and just keep the address space of the previous process in place. The kernel part of the address space is mapped the same way for all processes, so it doesn't make any difference which mapping is used for these tasks. This optimization could also temporarily be applied to a user space task while that task is running kernel space code, e.g. while waiting for a system call like sync to complete, as the real address space only needs to be restored right before returning back to user space code. As mentioned in the email, it seems like this isn't done anymore at least since bdflush was superseded by the pdflush kernel thread. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/640579",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/457123/"
]
} |
640,687 | So i recently noticed that, when declaring a variable using a command substitution, let's say like so: var=$(echo "this is running" > test; cat test) Then it will run (the file test will be created) , even though i didn't yet called it (technically), which i would like so: var=$(echo "this is running" > test; cat test)echo "$var" # is where i would "normally" call the variable How could I prevent the command substitution from actually running when declaring it in a variable, so it only actually run when I call the said variable? PS: Well aware this is kind of a bad example, but it serve well to demonstrate what i mean, although with "useless use of cat" and "useless use of echo"... | Sound like you want a variable whose contents is dynamically generated. bash does not have support for ksh93's disciplines , or zsh's dynamic named directory or mksh's value substitution which would make it easier, but you could use this kind of hack, using namerefs: var_generator() { date --iso-8601=ns; }var_history=()typeset -n var='var_history[ ${##${var_history[${#var_history[@]}]=$(var_generator)}},${#var_history[@]}-1]' Here with var defined as a reference to an element of the $var_history array, using the fact that array indices are evaluated dynamically and allow running arbitrary code (here used to run the var_generator function and assign its output to a new element of the array). Then: bash-5.1$ echo "$var"2021-03-23T13:36:43,243211696+00:00bash-5.1$ echo "$var"2021-03-23T13:36:45,517726619+00:00 That sounds a bit too convoluted though where you could just use $(var_generator) here. One advantage though is that you can still do things like ${var#pattern} while bash (contrary to zsh ) won't let you do ${$(cmd)#pattern} . | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/640687",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/409852/"
]
} |
640,694 | My question is similar to this one: How can I count the number of words in a file whilst editing the file in vim But with a different task where I have to count characters with the search function in vim and write them at the of the file. For example, if I have to count how many numeric characters are there, I would need something like this: :%s/[0-9]/{g} and after that a command that counts how many characters I have selected with that search. Edit: I'm trying a method where I first select all the characters that I want to count with /[0-9] and trying to use '<,'> like this: '<,'> !wc -m | Sound like you want a variable whose contents is dynamically generated. bash does not have support for ksh93's disciplines , or zsh's dynamic named directory or mksh's value substitution which would make it easier, but you could use this kind of hack, using namerefs: var_generator() { date --iso-8601=ns; }var_history=()typeset -n var='var_history[ ${##${var_history[${#var_history[@]}]=$(var_generator)}},${#var_history[@]}-1]' Here with var defined as a reference to an element of the $var_history array, using the fact that array indices are evaluated dynamically and allow running arbitrary code (here used to run the var_generator function and assign its output to a new element of the array). Then: bash-5.1$ echo "$var"2021-03-23T13:36:43,243211696+00:00bash-5.1$ echo "$var"2021-03-23T13:36:45,517726619+00:00 That sounds a bit too convoluted though where you could just use $(var_generator) here. One advantage though is that you can still do things like ${var#pattern} while bash (contrary to zsh ) won't let you do ${$(cmd)#pattern} . | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/640694",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/460977/"
]
} |
640,711 | You hear it a lot eval is evil , whether it's in Shell/POSIX world, or in other langs like python etc... But I'm wondering, is it actually useless? or is it there some arcane, non-documented, interesting or just useful use-cases for it? Would prefer if the answer is sh / bash centric, but it's fine if it's about other Shells too. PS: I'm well aware why eval is considered evil . | I know of two ... common ... use cases for eval : Argument processing with getopt : [T]his implementation can generate quoted output which must once again be interpreted by the shell (usually by using the eval command). Setting up an SSH agent : [T]he agent prints the needed shell commands (either sh(1) or csh(1) syntax can be generated) which can be evaluated in the calling shell, eg eval `ssh-agent -s` for Bourne-type shells such as sh(1) or ksh(1) and eval `ssh-agent -c` for csh(1) and derivatives. Both uses might have alternatives, but I wouldn't bat an eyelid on seeing either of them. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/640711",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/409852/"
]
} |
640,721 | This system's was basically cloned from another system so it has its network properties so the network does not work at all. I connected an ethernet cable into one of the ethernet ports and ran ifconfig. I think the output is giving me the details of the previous system. I can't connect to the internet at all or ping other systems on the same network. I ran $lspci | egrep -i --color 'network|ethernet' and the output gave me the two ethernet ports I believe 00:1f:6 Ethernet controller: Intel Corporation Device 0e7c00:00:0 Ethernet controller: Intel Corporation Device 13g7 (rev 02) so the ethernet ports are detected. How do I determine which port is being connected by the ethernet cable? I'm not sure if virbr0 is actually one of the ethernet ports or if it is from the previous original system. EDIT I just ran lshw -class network and it list the 2 networks and I'm sure its both of them because one has a bus info ending with pci@0000:00:1f.6 which is what one of the ethernet controllers outputs from the lspci command was "00.1f.6". It list both networks as "UNCLAIMED" aboe the two other networks virbr0-NIC which list as DISABLED and virbr0 so my problem is those two are unclaimed right now. EDIT2 this is what the lshw -class network looks like when using ubuntu off a usb stick which detects the ethernet ports root@ubuntu:/home/ubuntu# lshw -class network *-network description: Ethernet interface product: Intel Corporation vendor: Intel Corporation physical id: 0 bus info: pci@0000:06:00.0 logical name: eno2 version: 02 serial: **:**:**:**:**:** capacity: 1Gbit/s width: 32 bits clock: 33MHz capabilities: pm msi msix pciexpress bus_master cap_list rom ethernet physical 10bt 10bt-fd 100bt 100bt-fd 1000bt-fd autonegotiation configuration: autonegotiation=on broadcast=yes driver=igc driverversion=0.0.1-k latency=0 link=no multicast=yes port=twisted pair resources: irq:17 memory:a4200000-a42fffff memory:a4300000-a4303fff memory:a4100000-a41fffff *-network description: Ethernet interface product: Ethernet Connection (11) I219-LM vendor: Intel Corporation physical id: 1f.6 bus info: pci@0000:00:1f.6 logical name: eno1 version: 00 serial: **:**:**:**:**:** size: 1Gbit/s capacity: 1Gbit/s width: 32 bits clock: 33MHz capabilities: pm msi bus_master cap_list ethernet physical tp 10bt 10bt-fd 100bt 100bt-fd 1000bt-fd autonegotiation configuration: autonegotiation=on broadcast=yes driver=e1000e driverversion=3.2.6-k duplex=full firmware=0.4-4 ip=10.134.33.118 latency=0 link=yes multicast=yes port=twisted pair speed=1Gbit/s resources: irq:130 memory:a4600000-a461ffff ANSWER Ok the issue was that I did not have the proper drivers installed which is why the ethernet controllers were listed as "UNCLAIMED" when I ran the lshw -class network command on the Centos OS. I got these drivers here after finding out the model of the ethernet controller, I forgot what command I used to find the exact model since running lspci doesn't really provide the right details but basically I just went to this link here and downloaded the appropriate file https://www.intel.com/content/www/us/en/support/articles/000005480/network-and-i-o/ethernet-products.html The e1000e driver fixed this issue for me and once I loaded the e1000e module, I finally saw the "eno1" device and just created a network config file for it and restarted the Network Manager. | I know of two ... common ... 
use cases for eval : Argument processing with getopt : [T]his implementation can generate quoted output which must once again be interpreted by the shell (usually by using the eval command). Setting up an SSH agent : [T]he agent prints the needed shell commands (either sh(1) or csh(1) syntax can be generated) which can be evaluated in the calling shell, eg eval `ssh-agent -s` for Bourne-type shells such as sh(1) or ksh(1) and eval `ssh-agent -c` for csh(1) and derivatives. Both uses might have alternatives, but I wouldn't bat an eyelid on seeing either of them. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/640721",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/399002/"
]
} |
640,919 | Often I'm writing a command in a bash prompt, where I want to get previous arguments, in the CURRENT line I'm typing out, and put in other places in the command. A simple example, would be if I want to rename a file. I would type out the mv command type the filename I want to move, ~/myTestFileWithLongFilename.txt now I want to just change the extension of the file that I supplied in the first argument, without typing it again. Can I use history or bash completion in some way to autocomplete that first argument? $ mv ~/myTestFileWithLongFilename.txt ~/myTestFileWithLongFilename.md I know of course I could execute the incomplete command, to get it into the history, and then reference it with !$ , but then my history is polluted with invalid commands, and I'm wondering if there's a better way | It's possible but a bit cumbersome. In bash !# refers to the entire line already typed. You can specify a given word you want to refer to after : , in this case it would be !#:1 . You can expand it in place using shell-expand-line built-in readline keybinding Control - Alt - e . | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/640919",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/22734/"
]
} |
640,929 | I'm trying to put a Windows and Kali Linux machine on the same network. I used sudo ifconfig eth0 192.168.10.3 netmask 255.255.255.0 which worked fine but when I use sudo route add default gw 192.168.10.1 it says SIOCADDRT: Network is unreachable | It's possible but a bit cumbersome. In bash !# refers to the entire line already typed. You can specify a given word you want to refer to after : , in this case it would be !#:1 . You can expand it in place using shell-expand-line built-in readline keybinding Control - Alt - e . | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/640929",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/462373/"
]
} |
641,003 | With the below function signature ssize_t read(int fd , void * buf , size_t count ); While I do understand based off the man page that on a success case, return value can be lesser than count , but can the return value exceed count at any instance? | A call to read() might result in more data being read behind the scenes than was requested ( e.g. to read a full block from storage, or read ahead the following blocks), but read() itself never returns more data than was requested ( count ). If it did, the consequence could well be a buffer overflow since buf is often sized for only count bytes. POSIX (see the link above) specifies this limit explicitly: Upon successful completion, where nbyte is greater than 0, read() shall mark for update the last data access timestamp of the file, and shall return the number of bytes read. This number shall never be greater than nbyte . The Linux man page isn’t quite as explicit, but it does say read() attempts to read up to count bytes from file descriptor fd into the buffer starting at buf . (Emphasis added.) | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/641003",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/398278/"
]
} |
641,228 | I am running Debian 9.13. I tried to check what is the difference between auto eth1 and allow-hotplug eth1 in /etc/network/interfaces . I have eth1 networking interface connected via USB . I tried rebooting, running systemctl restart networking and plugging/unplugging and it seems that main difference between allow-hotplug and auto is that if interface is marked as auto , command systemctl restart networking fails when eth1 is not connected. This leads to the conclusion that allow-hotplug is in fact preferable in all cases maybe except situation where I know that interface won't go away (lo, built-in interfaces). Is it correct? Is there any other difference? | I will amend the answer with one more very important note. If you use special interfaces, like bonding (trunk) or a network bridge , avoid adding allow-hotplug to their configuration inside the /etc/network/interfaces file, always use auto . auto brings them up on boot, but allow-hotplug can start messing things around during OS runtime (after the initial configuration), like removing the static IP configuration, resetting the interface, or setting IP auto-configuration, thus resulting in a self-assigned IP like 169.254.240.1/16. allow-hotplug basically tells the OS: this interface is dynamic, manage it on various condition changes. auto basically tells the OS: bring up this interface with the provided configuration during boot time or on an interface link-up event. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/641228",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/331772/"
]
} |
641,282 | I have this working command line expression: program --files path_to_mydir/mydata_[0-9].csv I would like to go from [0-100] but this is not working. program --files path_to_mydir/mydata_[0-100].csv Also, bonus ask, what do you call [0-10] wrt shell scripting and bash scripts? Thanks Edit: while similar this question is not asking about ls | The [...] is a bracket expression. It matches a single character, always, so you can't use [0-100] as that would just match a single 0 or 1 (in the POSIX locale) In the zsh shell, you could use <0-100> for a numerical range globbing pattern, but that won't work in bash : program --files path_to_mydir/mydata_<0-100>.csv In bash , you could use a brace expansion instead: program --files path_to_mydir/mydata_{0..100}.csv but you have to be aware of the difference between this and a filename globbing pattern. A brace expansion, as the one just above, generates strings , regardless of what filenames are available, while a filename globbing pattern matches existing names . This means that a brace expansion could potentially feed your program filenames that does not exist. You could use [...] to match the files with numbers between 0 and 100, but you would have to make it three patterns, one for each length of numbers: shopt -s nullglobprogram --files \ path_to_mydir/mydata_[0-9].csv \ path_to_mydir/mydata_[1-9][0-9].csv \ path_to_mydir/mydata_[1][0][0].csv The first would match names containing the digits 0 through to 9 , the second would match the names containing 10 through to 99 , and the last would match the name containing 100 . Would you want to match zero-filled integers: shopt -s nullglobprogram --files \ path_to_mydir/mydata_[0][0-9][0-9].csv \ path_to_mydir/mydata_[1][0][0].csv I set the nullglob shell option in both variations of this code to make sure that any pattern that is not matching any names is removed, and not left unexpanded. User fra-san noticed that you could use a combination of the brace expansion above with something that would force the shell to trigger a globbing pattern match: shopt -s nullglobprogram --files path_to_mydir/[m]ydata_{0..100}.csv The inclusion of [m] in the string (a pattern matching the character m ) would force the shell to treat each of the strings that the brace expansion creates as a separate globbing pattern. Since we're using nullglob , the patterns that do not correspond to existing names would be removed from the argument list. Note that this would generate and expand 101 globbing patterns, whereas the other approaches using globbing in this answer uses two or three patterns. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/641282",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/159435/"
]
} |
641,307 | My nvidia drivers won't work. They worked just fine before my last update. I have been trying to get them to work again but with no success. if I run: nvidia-settings I get this error: ERROR: NVIDIA driver is not loaded if I run: nvidia-smi I get this error: NVIDIA-SMI has failed because it couldn't communicate with the NVIDIA driver. Make sure that the latest NVIDIA driver is installed and running. I tried to install the default drivers using: sudo ubuntu-drivers autoinstall , which installs the 390 version. However i read somewhere that this driver isn't compatible with the more up to date linux kernel. Then i tried installing the ppa and a more updated nvidia driver (after purging the old nvidia packages) sudo apt-get remove --purge '^nvidia-.*'sudo add-apt-repository ppa:graphics-drivers/ppasudo apt-get install nvidia-driver-430 But this won't work as well. I actually read that this drivers don't support my graphics card. Is there anything i can do to get my card working again? My card is a GeForce GT 635M and am using Linux Mint Ulyssa | The [...] is a bracket expression. It matches a single character, always, so you can't use [0-100] as that would just match a single 0 or 1 (in the POSIX locale) In the zsh shell, you could use <0-100> for a numerical range globbing pattern, but that won't work in bash : program --files path_to_mydir/mydata_<0-100>.csv In bash , you could use a brace expansion instead: program --files path_to_mydir/mydata_{0..100}.csv but you have to be aware of the difference between this and a filename globbing pattern. A brace expansion, as the one just above, generates strings , regardless of what filenames are available, while a filename globbing pattern matches existing names . This means that a brace expansion could potentially feed your program filenames that does not exist. You could use [...] to match the files with numbers between 0 and 100, but you would have to make it three patterns, one for each length of numbers: shopt -s nullglobprogram --files \ path_to_mydir/mydata_[0-9].csv \ path_to_mydir/mydata_[1-9][0-9].csv \ path_to_mydir/mydata_[1][0][0].csv The first would match names containing the digits 0 through to 9 , the second would match the names containing 10 through to 99 , and the last would match the name containing 100 . Would you want to match zero-filled integers: shopt -s nullglobprogram --files \ path_to_mydir/mydata_[0][0-9][0-9].csv \ path_to_mydir/mydata_[1][0][0].csv I set the nullglob shell option in both variations of this code to make sure that any pattern that is not matching any names is removed, and not left unexpanded. User fra-san noticed that you could use a combination of the brace expansion above with something that would force the shell to trigger a globbing pattern match: shopt -s nullglobprogram --files path_to_mydir/[m]ydata_{0..100}.csv The inclusion of [m] in the string (a pattern matching the character m ) would force the shell to treat each of the strings that the brace expansion creates as a separate globbing pattern. Since we're using nullglob , the patterns that do not correspond to existing names would be removed from the argument list. Note that this would generate and expand 101 globbing patterns, whereas the other approaches using globbing in this answer uses two or three patterns. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/641307",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/408338/"
]
} |
642,359 | I am trying to count the occurrences of consonants in multiple files ,but I want the number of occurrences to be separately calculated for each file. I use awk -v FS="" '{for ( i=1;i<=NF;i++){if($i ~/[bcdfghjklmnpqrtsvwxyzBCDEFGHJKLMNPQRTSVWXYZ]/) count_c++}} END {print FILENAME,count_c}' file1 file2 file1 looks like this: bac Dfeg k87 eHtRe rt up file2 looks like this: hirt2wPrt but it prints the occurrences of both files (output= file2 19 ). How could I change this so the output would be like: file1 12file2 7 | With GNU awk for ENDFILE and IGNORECASE: $ awk -v IGNORECASE=1 ' { cnt += ( gsub(/[[:alpha:]]/,"&") - gsub(/[aeiou]/,"&") )} ENDFILE { print FILENAME, cnt+0; cnt=0 }' file1 file2file1 12file2 7 or with any POSIX awk: $ awk ' { lc=tolower($0); cnt[FILENAME] += (gsub(/[[:alpha:]]/,"&",lc) - gsub(/[aeiou]/,"&",lc)) } END { for (i=1; i<ARGC; i++) print ARGV[i], cnt[ARGV[i]]+0 }' file1 file2file1 12file2 7 If you only want to count the specific characters b, c, d, etc. instead of all alphabetic characters that aren't aeiou, then just change ( gsub(/[[:alpha:]]/,"&") - gsub(/[aeiou]/,"&") ) above to gsub(/[bcdfghjklmnpqrtsvwxyz]/,"&")) Note that, unlike any approach that prints results in an FNR==1 clause, both of the above scripts will handle empty files correctly by printing the file name and 0 as the count. Also note the cnt+0 in the first script - the +0 ensures that the value printed will be a numeric 0 rather than a null string if the first file is empty. If the same file name can appear multiple times in the input then add FNR==1{cnt[FILENAME]=0} to the start of the script if you want it output multiple times or add if (!seen[ARGV[i]]++) { ... } around the print in the END section if you only want it output once. See https://unix.stackexchange.com/a/642372/133219 for an answer to the followup question of also counting vowels. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/642359",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/462371/"
]
} |
642,642 | I am running Debian on different machines (Debian Buster, same installation in all machines). I am using internal and external HDD. Some disks are encrypted others no. When I mount a disk, some machines ask me for the user password to mount the disk (Sudoer account). Other machines ask me for the root/ Admin password (even if I am in a sudoer account). Even one machine asked me for another sudoer account's password ( link ). Some machines do not ask for password to mount the disk (only for the decryption password for encrypted disks). Where is the configuration file where decides what account's password to use? | Mounting in GUI is done by UDisks , it's a daemon that runs as root and uses polkit to decide who can a cannot mount (or do other operations like unlocking an encrypted device) a block device. Some mount operations can be done by any user in a active session, some require administrator privileges. For example UDisks allows to mount a removable device by "normal" users, but requires an administrator to mount a non-removable (internal) device. Administrator in polkit does not mean user that can use sudo , polkit doesn't check /etc/sudoers , to be an administrator, user must be in a specific group. It's usually the same group that grants sudo , but this doesn't work if you add the user to sudoers manually. In Fedora administrators are defined as users in the wheel group, you'll find this either in the polkit manpage or in /etc/polkit-1/rules.d/50-default.rules : Define administrative users to be the users in the wheel group: polkit.addAdminRule(function(action, subject) { return ["unix-group:wheel"]; }); If you are not an administrator and you ask for a privileged operation (like mounting a non-removable drive), polkit agent will ask you for password of a different user who is an administrator. For example KDE agent will ask you to select a user, but IIRC Xfce agent just picks a user for you and asks for his password. Default behaviour depends on the agent you are using. Interestingly, polkit agent will prefer user in an active session even when you try to use different administrator account from a terminal -- running sudo -u <user> udisksctl unlock -b /dev/sda1 will ask for passphrase of user logged in in the GUI session. tl;dr UDisks (its polkit rules) decides whether you'll be asked for administrator password or not. Polkit agent decided whose password it will be. It should prefer user in the active session (if it is administrator account). | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/642642",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/438753/"
]
} |
642,745 | sudo -i COMMAND runs COMMAND in the $HOME of root. This causes an error when I try to run a script of current directory: $ sudo -i ./myscript.sh-bash: ./myscript.sh: No such file or directory cannot find /home/user/myscript.sh because sudo -i command moves current directory to /root in my case. How can I make sudo -i command keep the current directory? sudo and sudo -i have different env, and it needs the env of sudo -i | You can't; changing the directory is part of what sudo -i does. But you can just go back to where you were and then run the command: sudo -i sh -c "cd '$PWD'; ./myscript.sh" That will fail if $PWD contains single quotes. (On the other hand, if you know $PWD doesn't contain whitespace, or anything special to the shell, you could nix the single quotes too.) A safer way would be something like below: sudo -i sh -c 'cd "${1}"; ./myscript.sh' sh "$PWD" (A plain "$1" doesn't work as you'd expect, because sudo -i runs the user's login shell in between, leading to another round of expansions. It tries to escape the command to prevent those expansions, but fails for $1 (and $foo etc.). See "sh -c" does not expand positional parameters, if I run it from "sudo --login". Is there a way around this? for the gory details.) In any case, if that script is a tool commonly run by root, it would make sense to put it in some directory that's both in PATH for root, and only writable by root . In general, it's best to avoid any chance of non-root users messing up with what root runs, though if it's e.g. in your regular account's home directory, the possible issues are likely minimal. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/642745",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/426917/"
]
} |
642,776 | What timedate value is always greater than any timedate? In a script, I want to provide an argument to variable duration so that the loop will run forever until I kill the process: # `duration` has a value in secondsend=$(($(date +%s) + duration)) while true; do # ... [ $(date +%s) -ge $end ] && break # ...done | I'd change it to: SECONDS=0while true; do # ... [ "$duration" = forever ] || [ "$SECONDS" -lt "$duration" ] || break # ...done And set duration=forever without having to worry as to what the maximum number supported by [ on the system is. $SECONDS is automatically incremented every second. That feature comes from ksh and is also available in zsh and bash . Beware however that $SECONDS in bash is incremented every time the full seconds of wall clock time change, so for instance, if SECONDS=0 is run at 12:00:00.999, it will be incremented to 1 at 12:00:01.000, so only one millisecond later. If switching to zsh (which no longer has that bug) is an option, you can change it to: typeset -F SECONDS=0while true; do # ... (( SECONDS < duration )) || break # ...done And use duration=inf for the loop to run forever. That also allows fractional durations. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/642776",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/462748/"
]
} |
642,785 | I have Ubuntu Linux 20.04 and Kali 2021 installed, and before reinstalling both, I had two grubs, the main one showed up when I started the pc, it was the Ubuntu one, and the second one showing up when I selected the Kali option in the Ubuntu Linux's grub. The grub on which the pc started was the Ubuntu one, and if I chose to start Kali, it would start again and show me a Kali grub menu. How can I get both back to what they were ?Thanks for your replies, but please, do not say "It is not possible", as I did have both of them fine and working.Thanks | I'd change it to: SECONDS=0while true; do # ... [ "$duration" = forever ] || [ "$SECONDS" -lt "$duration" ] || break # ...done And set duration=forever without having to worry as to what the maximum number supported by [ on the system is. $SECONDS is automatically incremented every second. That feature comes from ksh and is also available in zsh and bash . Beware however that $SECONDS in bash is incremented every time the full seconds of wall clock time change, so for instance, if SECONDS=0 is run at 12:00:00.999, it will be incremented to 1 at 12:00:01.000, so only one millisecond later. If switching to zsh (which no longer has that bug) is an option, you can change it to: typeset -F SECONDS=0while true; do # ... (( SECONDS < duration )) || break # ...done And use duration=inf for the loop to run forever. That also allows fractional durations. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/642785",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/464226/"
]
} |
642,792 | My OS is Debian stretch (mate Desktop) Since about 2 years I have a problem with default setting for application browser(in mate-control-center) and firefox.whenever I let firefox set itselfs as default browser in (about:preferences) my prefered application settings in mate-control-center changed from firefox-esr to thunderbird for default browser. And the other way around. Both seemed somehow registert alex@Taomon:~$ gvfs-mime --query x-scheme-handler/http Default application for 'x-scheme-handler/http': firefox-esr.desktop Registered applications: thunderbird.desktop firefox-esr.desktop Recommended applications: thunderbird.desktop firefox-esr.desktop alex@Taomon:~$ alex@Taomon:~$ gio mime x-scheme-handler/httpsDefault application for 'x-scheme-handler/https': firefox-esr.desktopRegistered applications: thunderbird.desktop firefox-esr.desktopRecommended applications: thunderbird.desktop firefox-esr.desktopalex@Taomon:~$ How can I remove thunderbird as registered application for x-scheme-handler/http and x-scheme-handler/http s. I hope then my Error will be gone for good. edit In ubuntu (focal) is only firefox registert. alex@Guilmon:~$ gio mime x-scheme-handler/httpDefault application for “x-scheme-handler/http”: firefox.desktopRegistered applications: firefox.desktopRecommended applications: firefox.desktopalex@Guilmon:~$ there is also thunderbird as e-mail client installed. | I'd change it to: SECONDS=0while true; do # ... [ "$duration" = forever ] || [ "$SECONDS" -lt "$duration" ] || break # ...done And set duration=forever without having to worry as to what the maximum number supported by [ on the system is. $SECONDS is automatically incremented every second. That feature comes from ksh and is also available in zsh and bash . Beware however that $SECONDS in bash is incremented every time the full seconds of wall clock time change, so for instance, if SECONDS=0 is run at 12:00:00.999, it will be incremented to 1 at 12:00:01.000, so only one millisecond later. If switching to zsh (which no longer has that bug) is an option, you can change it to: typeset -F SECONDS=0while true; do # ... (( SECONDS < duration )) || break # ...done And use duration=inf for the loop to run forever. That also allows fractional durations. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/642792",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/400156/"
]
} |
642,880 | Is it possible to encrypt the boot disk but not require users to input a password when the system boots? I have some headless boxes in remote locations for which I cannot guarantee they will be securely disposed off. I want to avoid somebody being able to take out the disk drive and hook it up to another device and look what is on it but at the same time the system must be able to (re)boot without user interaction. I have very little experience with encryption but I was thinking about something along the lines of storing the key in the UEFI but I am unable to find any information on whether such a thing is possible. I'm using Ubuntu 18.04 LTS, but I could upgrade if required. | To tie a disk drive to a given host, and allow it to be decrypted without requiring a manually-entered passphrase, you’d typically rely on storing or tying the encryption key to the host’s TPM (trusted platform module) or equivalent. With such a setup, the disk can’t be decrypted if it’s removed from its host. Another possible solution, if the network is trusted, is to tie the encryption key to the network (strictly speaking, some sort of key server on the network). With such a setup, the disk can’t be decrypted if its host isn’t on the correct network. Both of these are supported by Clevis . Clevis can use TPM2 or Tang for key binding, and can even combine multiple key sources using Shamir secret sharing. In both cases, confidentiality is ensured by using an inaccessible key at some point in the process: keys stored in the TPM can’t be extracted from it, nor can keys stored on a host elsewhere on the network. Other tools exist, for example TPM-LUKS . | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/642880",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/274656/"
]
} |
642,966 | Why does file xxx.src lead to cannot open `xxx.src' (No such file or directory) but has an exit status of 0 (success)? $ file xxx.src ; echo $?xxx.src: cannot open `xxx.src' (No such file or directory)0 Note: to compare with ls : $ ls xxx.src ; echo $?ls: cannot access 'xxx.src': No such file or directory2 | This behavior is documented on Linux, and required by the POSIX standard. From the file manual on an Ubuntu system: EXIT STATUS file will exit with 0 if the operation was successful or >0 if an error was encoun‐ tered. The following errors cause diagnostic messages, but don't affect the pro‐ gram exit code (as POSIX requires), unless -E is specified: • A file cannot be found • There is no permission to read a file • The file type cannot be determined With -E (as noted above): $ file -E saonteuh; echo $?saonteuh: ERROR: cannot stat `saonteuh' (No such file or directory)1 The non-standard -E option on Linux is documented as On filesystem errors (file not found etc), instead of handling the error asregular output as POSIX mandates and keep going, issue an error message andexit. The POSIX specification for the file utility says (my emphasis): If the file named by the file operand does not exist, cannot be read, or the type of the file named by the file operand cannot be determined, this shall not be considered an error that affects the exit status . | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/642966",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/334715/"
]
} |
642,971 | Suppose one has the following case: #!/bin/shcase $1 ine|ex|exa|exam|examp|exampl|example) echo "OK";;t|te|tes|test) echo "Also OK";;*) echo "Error!";;esac Is there a more elegant and at the same time POSIX-compliant solution (i.e., no bash, zsh, etc.) to a situation like this? P.S. No need for exampleeee or Exam to work. | What you can do is turn the comparison around: case "example" in "$1"*) echo OK ;; *) echo Error ;;esac With multiple words, you can stick with your original idea case "$1" in e|ex|exa|exam|examp|exampl|example) : ;; t|te|tes|test) : ;; f|fo|foo) : ;; *) echo error ;;esac or use a loop and a "boolean" variable match=""for word in example test foo; do case "$word" in "$1"*) match=$word; break ;; esacdoneif [ -n "$match" ]; then echo "$1 matches $match"else echo Errorfi You can decide which is better. I think the first one is elegant. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/642971",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/464484/"
]
} |