Columns: source_id (int64, 1 to 74.7M), question (string, 0 to 40.2k chars), response (string, 0 to 111k chars), metadata (dict)
613,813
I need to scrub a couple of very large HDDs. However, I can't do that from a desktop. I need to do it on the move, from a laptop. A single pass of scrub(1) on a HDD would take more than a day, but there is no way I can leave my laptop stationary for that long. scrub(1) itself doesn't support any kind of offset command line parameter. Is there a way to do what scrub(1) does (writing random bytes), but in a way that can be resumed? Basically the command would need to print out the offset when I interrupt it, and it needs to accept an offset parameter for resuming.
dd can be coerced into producing a progress report (by signalling it with SIGUSR1) and can be told to start writing part way through (using seek). You then just need a source of random bytes, such as /dev/urandom.
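A minimal sketch of what that could look like, assuming GNU dd; the device name /dev/sdX and the byte offset are placeholders you would substitute yourself:

    # start (or resume) overwriting the disk at a given byte offset
    sudo dd if=/dev/urandom of=/dev/sdX bs=1M seek=0 oflag=seek_bytes status=progress

    # from another terminal, ask the running dd for a progress report;
    # the "bytes copied" figure (plus the seek offset used) is where to resume from
    sudo kill -USR1 "$(pgrep -x dd)"

With status=progress (GNU dd) you also get a continuously updated byte count, so the explicit SIGUSR1 is mostly useful right before interrupting.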
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/613813", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/26714/" ] }
613,839
When I run gpg --list-keys I get the following output:

    /home/yax/.gnupg/pubring.kbx
    ----------------------------
    pub   rsa2048 2020-10-09 [SC]
          4424C645C99A4C29E540C26AAD7DB850AD9CFFAB
    uid           [ultimate] yaxley peaks <[email protected]>
    sub   rsa2048 2020-10-09 [E]

What is my actual key in this block of text? How do I get my key id? What do the [SC] and the [E] mean, and what does sub mean? Here's some info regarding the key: it was generated with gpg --full-generate-key and I chose the rsa rsa option. It's 2048 bits long.
what is my actual key in this block of text?

It's not shown. Since this is, as you (correctly) said, an RSA 2048-bit key, your actual public key (which is what --list-keys shows) in hex would be over 500 characters -- about 7 full lines on a typical terminal. Your private key, which for hysterical raisins PGP and GPG call 'secret', shown by --list-secret-keys, would be even longer, and in addition showing it on a terminal where in some cases a bad person might be able to get a copy of it is extremely bad for security.

How do I get my key id?

4424C645C99A4C29E540C26AAD7DB850AD9CFFAB is the fingerprint. There are two keyids, and except for v3 keys, which are long obsolete, both are derived from the fingerprint. The 'short' keyid is the low 32 bits, or last 8 hex digits, of the fingerprint and thus is AD9CFFAB. The 'long' keyid is the low 64 bits, or last 16 hex digits, of the fingerprint and thus is AD7DB850AD9CFFAB.

Historically the short keyid was used for almost everything, and most websites, blogs, and much documentation that you find will use and show them, but in the last few years short keyids have been successfully attacked, so modern programs now default to either the long keyid or (as here) the fingerprint. You can add them by specifying --keyid-format=long or --keyid-format=short, or the equivalent option in a config file, probably ~/.gnupg/gpg.conf.

The 2048R/0B2B9B37 you found somewhere is an example of the format used by old versions of GPG. It used a single letter R for RSA, because in the old days there were only three types of keys (and algorithms) to distinguish, while now there are more; and it used the short keyid of 8 hexits.
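For illustration only (the output layout below is approximated from the fingerprint shown in the question), the long keyid can be displayed directly:

    gpg --list-keys --keyid-format=long
    # pub   rsa2048/AD7DB850AD9CFFAB 2020-10-09 [SC]
    #       4424C645C99A4C29E540C26AAD7DB850AD9CFFAB
    # ...

Here the part after rsa2048/ is the long keyid, i.e. the last 16 hex digits of the fingerprint.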
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/613839", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/424473/" ] }
613,843
I was trying to compute sha256 for a simple string, namely "abc". I found out that using the sha256sum utility like this:

    sha256sum file_with_string

gives results identical to:

    sha256sum    # enter, to read input from stdin
    abc
    ^D

namely:

    edeaaff3f1774ad2888673770c6d64097e391bc362d7d6fb34982ddf0efd18cb

Note that before the end-of-input signal another newline was fed to stdin. What bugged me at first was that when I decided to verify it with an online checksum calculator, the result was different:

    ba7816bf8f01cfea414140de5dae2223b00361a396177a9cb410ff61f20015ad

I figured it might have had something to do with the second newline I fed to stdin, so I tried inserting ^D twice this time (instead of using newline) with the following result:

    abcba7816bf8f01cfea414140de5dae2223b00361a396177a9cb410ff61f20015ad

Now, this is of course poorly formatted (due to the lack of a newline character), but that aside, it matches the one above. After that, I realized I clearly fail to understand something about input parsing in the shell. I double-checked and there's no redundant newline in the file I specified initially, so why am I experiencing this behavior?
The difference is the newline. First, let's just collect the sha256sums of abc and abc\n:

    $ printf 'abc\n' | sha256sum
    edeaaff3f1774ad2888673770c6d64097e391bc362d7d6fb34982ddf0efd18cb  -
    $ printf 'abc' | sha256sum
    ba7816bf8f01cfea414140de5dae2223b00361a396177a9cb410ff61f20015ad  -

So, the ba...ad sum is for the string abc, while the ed..cb one is for abc\n. Now, if your file is giving you the ed..cb output, that means your file has a newline. And, given that "text files" require a trailing newline, most editors will add one for you if you create a new file. To get a file without a newline, use the printf approach above. Note how file will warn you if your file has no newline:

    $ printf 'abc' > file
    $ file file
    file: ASCII text, with no line terminators

And

    $ printf 'abc\n' > file2
    $ file file2
    file2: ASCII text

And now:

    $ sha256sum file file2
    ba7816bf8f01cfea414140de5dae2223b00361a396177a9cb410ff61f20015ad  file
    edeaaff3f1774ad2888673770c6d64097e391bc362d7d6fb34982ddf0efd18cb  file2
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/613843", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/398918/" ] }
614,451
I want to read a multi-line file in a bash script, using the file path from a variable, then merge the lines using a multi-char delimiter and save the result to another variable. I want to skip blank lines and trailing new lines and do not want a trailing delimiter. Additionally I want to support \r\n and - if at no further "cost" - why not also \r as line break (and of course \n ). The script should run on RHEL with GNU's bash 4.2.46, sed 4.2.2, awk 4.0.2, grep 2.20, coreutils 8.22 (tr, cat, paste, sort, cut, head, tail, tee, ...), xargs 4.5.11 and libc 2.17 and with perl 5.16.3, python 2.7.5 and openjdk 11.0.8. It should run about twice per day on files with ca. 10 lines on a decent machine/VM.If readability, maintainability and brevity don't suffer too much I'm very open to more performant solutions though. The files to be read from can be created and modified either on the same machine or other Win7 or Win10 systems. My approach so far is joined_string_var=$(sed 's/\r/\n/g' $filepathvar | grep . | sed ':a; N; $!ba; s/\n/; /g') So first I replace \r with \n to cover all newline formats and make the output readable for grep. Then I remove blank lines with grep . And finally I use sed for the actual line merging. I used sed instead of tr in the first step to avoid using cat, but I'm not quite sure if I prefer it like that: joined_string_var=$(cat $filepathvar | tr '\r' '\n' | grep . | sed ':a; N; $!ba; s/\n/; /g') UPDATE: I somehow completely missed simple redirection: joined_string_var=$(tr '\r' '\n' <$filepathvar | grep . | sed ':a; N; $!ba; s/\n/; /g') Any thoughts how this might be done more elegantly (less commands, better performance, not much worse brevity and readability)?
The elegance may come from the correct regex. Instead of changing every \r to a \n ( s/\r/\n/g ) you can convert every line terminator \r\n , \r , \n to the delimiter you want (in GNU sed, as few sed implementations will understand \r , and not all will understand -E ):

    sed -E 's/\r\n|\r|\n/; /g'

Or, if you want to remove empty lines, any run of such line terminators:

    sed -E 's/[\r\n]+/; /g'

That will work if we are able to capture all line terminators in the pattern space. That means slurping the whole file into memory to be able to edit them. So, you can use the simpler (one command for GNU sed):

    sed -zE 's/[\r\n]+/; /g; s/; $/\n/' "$filepathvar"

The -z takes null bytes as line terminators, effectively getting all \r and \n into the pattern space. The s/[\r\n]+/; /g converts all types of line delimiters to the string you want. The s/; $/\n/ converts the (last) trailing delimiter to an actual newline.

Notes

The -z sed option means to use the zero delimiter (0x00). The use of that delimiter started as a need of find to be able to process filenames with newlines ( -print0 ), which matches the xargs ( -0 ) option. That meant that some tools were also modified to process zero-delimited strings. That is a non-POSIX option that breaks files at zeros instead of newlines. POSIX text files must have no zero (NUL) bytes, so the use of that option means, in practice, to capture the whole file in memory before processing it. Breaking files on NULs means that newline characters end up being editable in the pattern space of sed. If the file happens to have some NUL bytes, the idea still works correctly for newlines, as they still end up being editable in each chunk of the file.

The -z option was added to GNU sed. The ATT sed (on which POSIX was based) did not have such an option (and still doesn't); some BSD seds also still don't. An alternative to the -z option is to capture the whole file in memory. That could be done POSIXly in some ways:

    sed 'H;1h;$!d'     # capture whole file in hold space
    sed ':a;N;$!ba'    # capture whole file in pattern space

Having all newlines (except the last one) in the pattern space makes it possible to edit them:

    sed -Ee 'H;1h;$!d;x' -e 's/(\r\n|\r|\n)+/; /g'

With older seds it is also required to use the longer and more explicit (\r\n|\r|\n)+ instead of [\r\n]+ because such seds don't understand \r or \n inside bracket expressions [] .

Line oriented

A solution that works one line at a time (a \r is also a valid line terminator in this solution), which means that there is no need to keep the whole file in memory (less memory used), is possible with GNU awk:

    awk -vRS='[\r\n]+' 'NR>1{printf "; "}{printf $0}END{print ""}' file

Must be GNU awk because of the regex record separator [\r\n]+ . In other awks, the record separator must be a single byte.
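A quick sanity check of the GNU sed one-liner from above on throw-away sample data (the file name and contents are made up for the demo):

    printf 'one\r\ntwo\n\nthree\n' > sample.txt
    sed -zE 's/[\r\n]+/; /g; s/; $/\n/' sample.txt
    # one; two; three

The blank line and the DOS line ending both collapse into the "; " delimiter, and the trailing delimiter becomes a newline.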
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/614451", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/437232/" ] }
614,635
I need help to process a column entry via awk. Below is the operation I want to try: divide a column into user-defined chunk sizes, then count and sum each entry position across every chunk to eventually give an average per position (so the output length equals the chunk size). For instance, below is a list:

    1
    2
    3
    4
    5
    6
    7
    8
    9
    10
    11
    12

Here, I want to use a chunk size of 4 (but in my case it can vary from case to case):

    chunk1: 1 2 3 4
    chunk2: 5 6 7 8
    chunk3: 9 10 11 12

After processing, I would like to have:

    5
    6
    7
    8

which is the average for the entries in position 1, 2, 3 and 4, respectively, across all chunks.
The following awk program would do the job. It assumes the data is stored in data.txt in the first column (but can easily be adapted for any other column). It also assumes there are no empty columns, and only complete chunks. awk -v cs=4 '{if ((i=NR%cs)==0) {n_ch++; i=cs};buf[i]+=$1;} END{for (i=1;i<=cs;i++) printf "%d\n",buf[i]/n_ch}' data.txt The chunk size is passed to awk via the -v cs= size statement. It will, for each line, determine the "entry number within the chunk", i , via i = "line number" modulo "chunk size" , and sum the entries into an array buf . Whenever one chunk is complete, the chunk counter n_ch is increased. In the end, we print the average for all entry numbers.
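A quick way to check the program against the example from the question (values 1 through 12, chunk size 4); seq and the data.txt file name are just for the demo:

    seq 12 > data.txt
    awk -v cs=4 '{if ((i=NR%cs)==0) {n_ch++; i=cs};buf[i]+=$1;} END{for (i=1;i<=cs;i++) printf "%d\n",buf[i]/n_ch}' data.txt
    # 5
    # 6
    # 7
    # 8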
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/614635", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/433098/" ] }
614,636
OK, so I want to monitor running programs on Debian. For example, I have several programs running on my instance and I can get the output of netstat -plnt and see which program is listening on which port. Example:

    tcp    0    0 0.0.0.0:22     0.0.0.0:*    LISTEN    65/sshd
    tcp    0    0 0.0.0.0:3306   0.0.0.0:*    LISTEN    656/mysqld
    tcp    0    0 0.0.0.0:6379   0.0.0.0:*    LISTEN    631/redis-server
    tcp    0    0 0.0.0.0:80     0.0.0.0:*    LISTEN    1023/nginx

And I want to receive a notification by email/Slack when a new program starts running. Does anybody know some utility or program that can do this?
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/614636", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/356747/" ] }
614,654
I'm trying to find all the unique values in a column. However, with this command I'll also get the header row. How do I skip that?

    awk -vFPAT='([^,]*)|("[^"]+")|","' '{if ($2!~/NULL/) {print $2}}' Files/* | sort | uniq -c | sort -n | wc -l

Sample data is as:

    "link","shared_story","101","52"
    "link","published_story","118","100"
    "link","published_story","134","51"
    "link",NULL,"152","398"
    "link","shared_story","398","110"
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/614654", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/437427/" ] }
614,672
Let’s say I have a bunch of numbers representing quantities of memory, written in the form 86k or 320m or 1.7g for instance. How can I compute their sum in command line, and get back a human-readable result? Being able to compute subtractions would be nice too. The perfect tool would handle several sets of notations (such as 1g / 1G / 1GB / 1Go / 1GiB / 1.7Gio ) and their meaning (binary or decimal multipliers). I am looking for a pure calculator. These numbers are not necessarily the size of some files on my disk, so tools such as find , stat or du are not an option. This is obviously easy to implement (with some hurdles regarding precision), but I would be damned if this didn’t exist already!
A little self promotion: we wrote a library called libbytesize to do these calculations in C and Python and it also has a commandline tool called bscalc:

    $ bscalc "5 * (100 GiB + 80 MiB) + 2 * (300 GiB + 15 GiB + 800 MiB)"
    1215425413120 B
    1186938880.00 KiB
    1159120.00 MiB
    1131.95 GiB
    1.11 TiB

The library is packaged in most distributions, unfortunately the tool isn't. It's in Fedora in libbytesize-tools and SuSE in the bscalc package, but not in Debian/Ubuntu.
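As an aside (not part of the answer above), if installing an extra tool is not an option, GNU coreutils' numfmt can serve as a rough work-alike for simple sums; this sketch assumes binary (IEC) suffixes and whole-number inputs:

    # convert each size to bytes, add them up, and print the total with an IEC suffix
    numfmt --from=iec 86K 320M 2G | paste -sd+ - | bc | numfmt --to=iec
    # 2.4G   (approximately; exact rounding depends on numfmt)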
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/614672", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/288527/" ] }
614,808
One of the most common things shell scripts need to do is to create and manipulate temporary files. Doing so safely is a pain, since you need to avoid name clashes, avoid race conditions, make sure the file has the correct permissions, etc. (See the GNU Coreutils manual and this Signs of Triviality blog post for a more detailed discussion of these issues.) Most Unix-like operating systems solve this problem by providing a mktemp command that takes care of all these gotchas. However, the syntax and semantics of these mktemp commands are not standardized . If you really want to create a temp file both safely and portably, you have to resort to ugly kludges such as the following: tmpfile=$( echo 'mkstemp(template)' | m4 -D template="${TMPDIR:-/tmp}/baseXXXXXX") || exit (This workaround exploits the fact that the macro processor m4 is part of POSIX, and m4 exposes the C standard library function mkstemp() which is also defined by POSIX.) Given all this, why hasn't POSIX standardized a mktemp command, guaranteeing its presence and at least certain aspects of its behaviour? Is this a glaring oversight on the part of the POSIX committee, or has the idea of standardizing a mktemp actually been discussed by the committee and rejected for some technical or other reason?
That comes up regularly on the Austin Group mailing list, and I'm not under the impression the Open Group would be opposed to specifying it. It just needs someone to propose something. See for instance this message from Eric Blake (Red Hat, sits on the weekly POSIX meeting) from 2011 (here copied from gmane): Date: Tue, 10 May 2011 07:13:32 -0600From: Eric Blake <[email protected]>To: Nico Schottelius <[email protected]>Cc: austin-group-l-7882/[email protected]: gmane.comp.standards.posix.austin.generalSubject: Re: No mktemp in posix?Organization: Red HatMessage-ID: <[email protected]>References: <[email protected]>Xref: news.gmane.org gmane.comp.standards.posix.austin.general:4151On 05/10/2011 04:50 AM, Nico Schottelius wrote:> Good morning,>> digging through Issue 7, I haven't found any utility that gives> the ability to create a secure, temporary file that is usually> implemented in mktemp.echo 'mkstemp(fileXXXXXX)' | m4will output the name of a just-created temporary file. However, I agreethat there does not seem to be any standardized utility that givesmkdtemp functionality, which is often more useful than mkstemp (afterall, once you have a secure temporary directory, then you can createsecure fifos within that directory, rather than having to wish for acounterpart 'mkftemp' function).> Is there no mktemp utility by intent or can we add it in the> next issue?I know both BSD and GNU have a mktemp(1) that wraps mktemp(), mkstemp(),and mkdtemp(). In my inbox, I have record of some off-list email inFebruary of this year regarding some work between those teams to try andconverge on some common functionality and to word that in a mannerappropriate for the standard, although I can't find any publiclyarchived messages to that effect. But yes, I think adding mktemp(1) tothe next revision of the standard would be worthwhile. I'll try torevive those efforts and actually post some proposed wording.--Eric Blake [email protected] +1-801-349-2682Libvirt virtualization library http://libvirt.org In a more recent thread (worth reading), Geoff Clare (from the Open Group): Date: Wed, 2 Nov 2016 15:13:46 +0000From: Geoff Clare <gwc-7882/[email protected]>To: austin-group-l-7882/[email protected]: gmane.comp.standards.posix.austin.generalSubject: Re: [1003.1(2013)/Issue7+TC1 0001016]: race condition with set -CMessage-ID: <[email protected]>Xref: news.gmane.org gmane.comp.standards.posix.austin.general:13408Stephane Chazelas <[email protected]> wrote, on 02 Nov 2016:>> At the moment, there's no way (that I know) to create a temp file> reliably with POSIX utilitiesGiven an m4 utility that conforms to the 2008 standard, there is:tmpfile=$(echo 'mkstemp(/tmp/fooXXXXXX)' | m4)However, I don't know how widespread support for the new mkstemp()macro is.--Geoff Clare <g.clare-7882/[email protected]>The Open Group, Apex Plaza, Forbury Road, Reading, RG1 1AX, England (which is where I learned that trick which you're referring to in your question).
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/614808", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/37849/" ] }
614,899
I run Firefox from a Flatpak but I can't find how to have it open links when I click them in various XDG-compliant apps. How can I have xdg-open run Firefox from a Flatpak distribution? The application itself shows in preferences that it is not the default browser. It offers a button to set itself as default but it seems to have no effect.
You need to use the xdg-settings command. This should return your current default browser: xdg-settings get default-web-browser To change it to your Flatpak version, do this: xdg-settings set default-web-browser <your_flatpak_browser.desktop> To validate your new settings, do this: xdg-settings check default-web-browser <your_flatpak_browser.desktop>
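For example, assuming Firefox was installed from Flathub (whose application ID is org.mozilla.firefox, so the exported desktop file is org.mozilla.firefox.desktop):

    xdg-settings set default-web-browser org.mozilla.firefox.desktop
    xdg-settings check default-web-browser org.mozilla.firefox.desktop
    # yes

If the Flatpak came from a different remote, list the desktop files under /var/lib/flatpak/exports/share/applications/ to find the right name.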
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/614899", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/46416/" ] }
614,913
I've been writing a lot of one-off functions recently. On the occasions that I go "hmm, I should save this" I use type <function name> to show the code, and copy and paste it into .bashrc. Is there a faster way to do this, or some standard or command built for this purpose? FWIW, I'm just doing this on my personal computer running Mint, so conveniences like copy and paste are easy. However, I'm also interested in answers specific to shell-only environments.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/614913", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/67459/" ] }
614,919
I created a script that adds four numbers that you enter. Example:

    ./myawkaverage 8 7 9 4
    28

I need my script to add those four numbers and display the average, so the results look like this:

    ./myawkaverage 8 7 9 4
    The average is 7

I also need the script to accept negative numbers. My script so far looks like this:

    #!/bin/bash
    echo $1 $2 $3 $4 | awk '
    { print sum3($1, $2, $3, $4) }
    function sum3(a, b, c, d) {
        return (a + b + c + d)
    }'
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/614919", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/437534/" ] }
614,943
I need to read and modify a single file ( pcmanfm.conf ) in the current directory. I tried $ augtool -At "Toml.lns incl $(pwd)/pcmanfm.conf" -I lenses/ print /files/files but it does not work. toml.aug (on /usr/share/augeas/lenses/dist/toml.aug ) starts with (*Module: Toml Parses TOML filesAuthor: Raphael Pinson <[email protected]>... so I believe I put the name of the lens correctly ( Toml.lns ). The same setup works well if I parse file of a different type, e.g. $ augtool -At "Shellvars.lns incl /tmp/vars.sh" -I lenses/ print /files/files/files/tmp/files/tmp/vars.sh/files/tmp/vars.sh/TESTINT = "2"/files/tmp/vars.sh/TESTSTR = "\"FF\"" I've posted the same question on https://github.com/hercules-team/augeas/issues/699 in case it is a bug in Augeas. The file I try to parse has the following content: [config]bm_open_method=0[volume]mount_on_startup=1mount_removable=1autorun=1[ui]always_show_tabs=0max_tab_chars=32win_width=1916win_height=1149splitter_pos=150media_in_new_tab=0desktop_folder_new_win=0change_tab_on_drop=1close_on_unmount=1focus_previous=0side_pane_mode=placesview_mode=compactshow_hidden=0sort=name;ascending;toolbar=newtab;navigation;home;show_statusbar=1pathbar_mode_buttons=0 I want to add/replace one value in the [ui] section.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/614943", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/17765/" ] }
614,950
I'm running some Python programs that are quite heavy. I've been running this script for several weeks now, but in the past couple of days the program gets killed with the message: Killed. I tried creating a new swap file with 8 GB, but it kept happening. I also tried using:

    dmesg -T | grep -E -i -B100 'killed process'

which listed out the error:

    [Sat Oct 17 02:08:41 2020] oom-kill:constraint=CONSTRAINT_NONE,nodemask=(null),cpuset=/,mems_allowed=0,global_oom,task_memcg=/user.slice/user-1000.slice/[email protected],task=python,pid=56849,uid=1000
    [Sat Oct 17 02:08:41 2020] Out of memory: Killed process 56849 (python) total-vm:21719376kB, anon-rss:14311012kB, file-rss:0kB, shmem-rss:4kB, UID:1000 pgtables:40572kB oom_score_adj:0

I have a strong machine and I also tried not running anything else (PyCharm or terminal) while the script runs, but it keeps happening.

Specs:

    Ubuntu 20.04 LTS (64-bit)
    15.4 GiB RAM
    Intel Core i7-105100 CPU @ 1.80 GHz x 8

When running free -h:

                  total        used        free      shared  buff/cache   available
    Mem:           15Gi       2.4Gi        10Gi       313Mi       2.0Gi        12Gi
    Swap:         8.0Gi       1.0Gi       7.0Gi
There's nothing to be done here, I'm afraid. The process is being killed by the OOM killer (Out Of Memory Killer), which is a process of the operating system whose job it is to kill jobs that are taking up too much memory before they crash your machine. This is a good thing. Without it, your machine would simply become unresponsive. So, you need to figure out why your python script is taking up so much memory, and try to make it so that it uses less. The only other alternative is to try and get more swap, or more RAM of course, but that feels like a bandaid. If this is your python script, you should focus on making it less memory hungry if at all possible.
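If it helps to see what is actually eating the RAM before the OOM killer steps in, a quick procps-based check (a diagnostic sketch, not something from the answer above) is to list the biggest resident processes:

    # top 10 processes by resident memory
    ps -eo pid,rss,comm --sort=-rss | head -n 10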
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/614950", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/398409/" ] }
615,012
I recently read that it's a good idea to disable root login, e.g. by setting the root user's shell to /sbin/nologin instead of /bin/bash, and to use a non-root user with sudo rights. I did this now on a server of mine where logs were showing a large amount of login attempts. So instead of root, I now login as a non-root user, and use sudo whenever I need to. How is this safer? In both cases, if anyone cracks the password, they will be able to execute any command.
sudo improves safety/security by providing accountability , and privilege separation . Imagine a system that has more than one person performing administrative tasks. If a root login account is enabled, the system will have no record/log of which person performed a particular action. This is because the logs will only show root was responsible, and now we may not know exactly who root was at that time. OTOH, if all persons must login as a regular user, and then sudo for privilege elevation, the system will have a record of which user account performed an action. In addition, privileges for that particular user account may be managed and allocated in the sudoers file. To answer your question now, a hacker that compromises one user account will get only those privileges assigned to that account. Further, the system logs will (hopefully) have a record showing which user account was compromised. OTOH, if it's a simple, single-user system where the privileges in the sudoers file are set to ALL (e.g. %sudo ALL=(ALL:ALL) ALL ), then the advantages of accountability , and privilege separation are effectively neutered. Finally, in regard to the advantages of sudo , the likelihood is that a knowledgeable hacker may also be able to cover his tracks by erasing log files, etc; sudo is most certainly not a panacea. At the end of the day, I feel that like many other safeguards we put in place, sudo helps keep honest people honest - it's less effective at keeping dishonest people at bay.
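To illustrate the privilege-separation point, a sudoers entry can grant one account only the specific commands it needs; the user name and commands below are hypothetical placeholders, and such a file should always be edited with visudo:

    # /etc/sudoers.d/alice  (illustrative only)
    alice   ALL=(root) /usr/bin/systemctl restart nginx, /usr/bin/journalctl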
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/615012", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/437784/" ] }
615,022
I am new to Linux, so bear with me for my lack of understanding. Imagine there are 2 users in my Linux installation, say A and B. User A creates a folder FolderA and adds some files inside this folder. User A then uses the chmod command to block the folder for access by anyone else using the following command sudo chmod 700 FolderA -R What I don't understand is how User B can just log into his account and change this restriction using the command as follows. sudo chmod 777 FolderA -R I mean what's the point in setting restrictions to a folder when anyone can change it? I just don't understand the logic of this. Once again, I am new to Linux and hence this question.
The sudo command gives temporary adminstrator privileges to a user. If you use this then you bypass any security controls. On a managed multiuser system very few users would have this right - typically just the system administrators. On a home system it's probably that you would have this by default so that you can look after your own system. Remove FolderA entirely. Then try the commands again without using sudo and see what happens
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/615022", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/437788/" ] }
615,380
I have a crontab entry: 00 10 * * * test.sh>>output_log 2>>error_log But I want to also include date into output_log and error_log in the beginning line of each day. How can I achieve this? I tried: { echo `date` && . test.sh; }>>output_log 2>>error_log but it just includes date only in output_log because echo date is not considered as stderr so that is not included in 2 . I want something like this psuedo code: { echo `date` && . test.sh }>>output_log { echo `date` plus 2's content } >>error_log
Most simply:

    0 10 * * * { date; date >&2; test.sh; } >> output_log 2>> error_log

It might also be interesting to use the ts utility from Moreutils, which will prepend a timestamp to every line:

    SHELL=/bin/bash
    0 10 * * * test.sh > >(ts >> output_log) 2> >(ts >> error_log)

(Note the use of /bin/bash , as the most basic POSIX /bin/sh does not support the >() redirection construct.)
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/615380", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/438128/" ] }
615,392
I have a GTX 1650 and I am using Manjaro 64-bit with kernel 5.8, but when I try to install driver 430.09 with this command:

    sudo ./NVIDIA-Linux-x86_64-430.09.run --kernel-source-path /usr/include/linux/

    ERROR: The kernel header file '/usr/include/linux//include/linux/kernel.h' does not exist.
           The most likely reason for this is that the kernel source path '/usr/include/linux/' is incorrect.

and if I use:

    sudo ./NVIDIA-Linux-x86_64-430.09.run --kernel-source-path /usr/

    ERROR: The kernel source path '/usr/' is invalid. Please make sure you have installed the kernel source files
           for your kernel and that they are properly configured; on Red Hat Linux systems, for example, be sure
           you have the 'kernel-source' or 'kernel-devel' RPM installed. If you know the correct kernel source
           files are installed, you may specify the kernel source path with the '--kernel-source-path' command
           line option

How could I use /usr/include/linux/ as the kernel-source-path?

Try 1: I have also used 455.28 for installation and the same problem arises.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/615392", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/415983/" ] }
615,419
After some googling, I found a way to compile BASH scripts to binary executables (using shc ). I know that shell is an interpreted language, but what does this compiler do? Will it improve the performance of my script in any way?
To answer the question in your title, compiled shell scripts could be better for performance — if the result of the compilation represented the result of the interpretation, without having to re-interpret the commands in the script over and over. See for instance ksh93 's shcomp or zsh 's zcompile . However, shc doesn’t compile scripts in this way. It’s not really a compiler, it’s a script “encryption” tool with various protection techniques of dubious effectiveness. When you compile a script with shc , the result is a binary whose contents aren’t immediately readable; when it runs, it decrypts its contents, and runs the tool the script was intended for with the decrypted script, making the original script easy to retrieve (it’s passed in its entirety on the interpreter’s command line, with extra spacing in an attempt to make it harder to find). So the overall performance will always be worse: on top of the time taken to run the original script, there’s the time taken to set the environment up and decrypt the script.
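As a contrast with shc, here is what an actual ahead-of-time compilation step looks like for zsh (a minimal sketch; the script name is a placeholder, and the gain is parse time, not execution speed):

    zcompile myscript.zsh      # writes the wordcode file myscript.zsh.zwc
    source ./myscript.zsh      # zsh loads the .zwc when it is newer than the source

ksh93 offers the analogous shcomp utility mentioned in the answer.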
{ "score": 7, "source": [ "https://unix.stackexchange.com/questions/615419", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/420387/" ] }
615,438
Context I'm trying to import a dump that have some long lines (8k+ character) with SQL*Plus, so I face the error SP2-0027: Input is too long (> 2499 characters) . This is a hard-coded limit and cannot be overcome. Expected solution I would like to stream my input in bash and to split lines longer than the expected width on the last , (comma) character. So I should have something like cat my_dump.sql | *magic_command* | sqlplus system/oracle@xe Details I know that newer version can accept lines up to 4999 characters but I still have lines longer ( grep '.\{5000\}' my_dump.sql | wc -l ) It is not really feasible to update the dump by hand I did try to use tr but this split every line wich I do not want I did try to use fmt and fold but it does not seems to be possible to use a custom delimiter I am currently looking on sed but I cannot seem to figure out a regexp that would "find the last match of , in the first 2500 characters if there is more than 2500 characters"
{ "score": 7, "source": [ "https://unix.stackexchange.com/questions/615438", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/438174/" ] }
615,448
I am looking for a minimal version of Debian, I would install something like the Dekstop Environment by myself, just like the Alpine installer. Is there some minimal installation of Debian?
To install a minimal Debian, use the standard installer ( e.g. on a network installation image ), in either graphical or text mode, and when you get to the “Software selection” phase (towards the end), deselect everything: This will result in a small setup with around 220 packages installed (the exact number will vary depending on the locale you chose and the detected hardware).
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/615448", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/431939/" ] }
615,485
Pipes and redirection are two of the most powerful functions in Linux, and I love them. However, I'm stuck with a situation where I need to write a fixed piece of text to a file without using a pipe, redirection or a function. I'm using Bash in case that makes a difference.

First: Why? I'll explain why, in case there's a simpler solution. I have a background yad notification with some menu entries. In some of the menu entries, I want the notification to write a fixed piece of text to a file. Here's an example of what I mean:

    yad --notification --command=quit --menu='Example!echo sample >text.txt'

The problem is that yad doesn't accept redirection, so it literally prints the string sample >text.txt instead of redirecting. Likewise, the pipe symbol ( | ) is a separator in yad; but if you change that, yad takes it as a literal character. For example:

    yad --notification --command=quit --separator='#' --menu='Example!echo sample | tee text.txt'

This literally prints the string sample | tee text.txt instead of piping. There's also no point in writing a function for yad to call, because yad runs in its own space and doesn't recognise the function.

Hence my question. Thus, I want a command like echo , cat or printf that takes an output file as an argument rather than a redirect. I have searched for such a command but cannot find it. I can, of course, write my own and put it in the default path:

    FILENAME="${1}"
    shift
    printf '%s\n' "${*}" >"${FILENAME}"

and then:

    yad --notification --command=quit --menu='Example!myscript text.txt sample'

But, I'll be surprised indeed if Linux doesn't already have something like this! Thank you
This is a bit of an XY problem but fortunately you've explained your real problem so it's possible to give a meaningful answer. Sure, there are commands that can write text to a file without relying on their environment to open the file. For example, sh can do that: pass it the arguments -c and echo text >filename . Note that this does meet the requirement of “without redirection” here, since the output of sh is not redirected anywhere. There's a redirection inside the program that sh runs, but that's ok, it's just an internal detail of how sh works. But does this solve your actual problem? Your actual problem is to write text to a file from a yad action. In order to resolve that, you need to determine what a yad action is. Unfortunately, the manual does not document this. All it says is menu:STRING Set popup menu for notification icon. STRING must be in form name1[!action1[!icon1]]|name2[!action2[!icon2]]... . Empty name add separator to menu. Separator character for values (e.g. | ) sets with --separator argument. Separator character for menu items (e.g. ! ) sets with --item-separator argument. The action is a string, but a Unix command is a list of strings: a command name and its arguments. There are several plausible ways to turn a string into a command name and its arguments, including: Treating the string as a command name and calling it with no arguments. Since echo foo prints foo , rather than attempting to execute the program echo foo , this is not what yad does. Passing the string to a shell. Since echo >filename prints >filename , rather than writing to filename , this is not what yad does. Some custom string splitting. At this point, this is presumably what yad does, but depending on exactly how it does it, the solution to your problem can be different. Looking at the source code , the action is passed to popup_menu_item_activate_cb which calls the Glib function g_spawn_command_line_async . This function splits the given string using g_shell_parse_argv which has the following behavior, which is almost never what is desirable but can be worked around: Parses a command line into an argument vector, in much the same way the shell would, but without many of the expansions the shell would perform (variable expansion, globs, operators, filename expansion, etc. are not supported). The results are defined to be the same as those you would get from a UNIX98 /bin/sh, as long as the input contains none of the unsupported shell expansions. If the input does contain such expansions, they are passed through literally. So you can run a shell command by prefixing it with sh -c ' and terminating with ' . If you need a single quote inside the shell command, write '\'' . Alternatively, you can run a shell command by prefixing it with sh -c " , terminating with " , and adding a backslash before any of the characters "$\` that appear in the command. Take care of the nested quoting since the action is itself quoted in the shell script that calls yad. yad --notification \ --menu='Simple example!sh -c "echo sample text >text.txt"' \ --menu='Single-double-single quotes!sh -c "echo '\''Here you can put everything except single quotes literally: two spaces, a $dollar and a single'\''\'\'''\''quote.'\'' >text.txt"' \ --menu="Double-single-double quotes!sh -c 'echo \"Here punctuation is a bit tricky: two spaces, a \\\$dollar and a single'\\''quote.\"' >text.txt'"
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/615485", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/41226/" ] }
615,525
While moving a big chunk of data between two external USB drives, I notice my laptop is slowed down. It was my understanding that the files are not written to any intermediate location (such as /tmp or similar) unless there is a shortage of free RAM. Am I wrong?
If you have a copy such as this, or its GUI equivalent, cp -a /media/external/disk1/. /media/external/disk2/ the data is read from the first disk's filesystem and written directly to the second. There is no intermediate write to another storage location. If you are seeing slow speeds it may be that the two disks are sharing the same USB controller and contending for access to the bus. Anything more than that and you will have to provide further details, such as the make/model of computer, its bus topology, etc.
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/615525", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/104233/" ] }
615,563
I've seen many questions on stack overflow about why some code isn't working with times beyond the year 2038, but the answers usually only recommend to upgrade to a 64 bit operating system. My question is why does this occur in the first place? Is it similar to the year 2000 problem? Could it be fixed with a different operating system, or are 32-bit processors physically incapable of handling times past 2038? Why? (I'm new to linux so this might be an easy question but I really don't know the answer).
Time on Unix systems is tracked as the number of seconds since the Epoch, 00:00 on 1 Jan 1970 UTC. At one point in 2038, that number of seconds will exceed what a signed 32-bit integer can store. This is why 64-bit kernels resolve the issue. Quoth Wikipedia:

The Unix time_t data type that represents a point in time is, on many platforms, a signed integer, traditionally of 32 bits (but see below), directly encoding the Unix time number as described in the preceding section. Being 32 bits means that it covers a range of about 136 years in total. The minimum representable date is Friday 1901-12-13, and the maximum representable date is Tuesday 2038-01-19. One second after 03:14:07 UTC 2038-01-19 this representation will overflow. This milestone is anticipated with a mixture of amusement and dread—see year 2038 problem. In some newer operating systems, time_t has been widened to 64 bits. This expands the times representable by approximately 293 billion years in both directions, which is over twenty times the present age of the universe per direction.

Further reading here .
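A quick way to see the exact rollover moment for a signed 32-bit time_t (GNU date assumed; the output format varies with locale):

    TZ=UTC date -d @2147483647
    # Tue Jan 19 03:14:07 UTC 2038

2147483647 is 2^31 - 1, the largest value a signed 32-bit integer can hold; one second later the counter wraps.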
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/615563", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/438266/" ] }
615,585
I deleted an about 200GB big file but it still seems as the space is reserved for some reason. I tried clearing it with a bunch of commands I found but nothing worked for me (for files being kept in cache since they are still opened by some application). I also rebooted a bunch of times but nothing worked. The parent folder still has the original size but it seems the subfolders are all the right size. I hope someone has an idea why it’s still reserved or how I can get rid of it. Image of space
If you didn't reboot: you should probably use

    lsof -a +L1

which can help you find out which deleted (rm'd) files still have open "file handles" (i.e., some program still points to them and thus the file itself is not yet deleted from the filesystem, even though its last name entry has been removed by rm). The size/offset column will hint at the largest files amongst them. If you see one that seems to fit the bill, cleanly kill (not kill -9 pid, try just kill pid) the corresponding application; it should release that file handle and the space should be reclaimed.

However, you state that you rebooted: you may have hidden files somewhere underneath your home directory. You could try:

    find /home/vincent/ -size +1G -ls

to get a view of files larger than 1G under the /home/vincent directory. And please note: your invocation of du with a *, i.e. du ... *, will only run du on files that do not start with a . ( * will be expanded by your shell to all files and directories that do not start with a . ), so it will not run on the "hidden files" of /home/vincent. The find above should ignore this (unless you have specific additional rights restrictions on them) and explore both shown and hidden directories and files. You can also re-run the find as root, by prepending it with sudo, and see if, when launched as root, it shows more things:

    sudo find /home/vincent/ -size +1G -ls

Lastly, if you deleted using a recent graphical interface, you may need to empty a "trashcan" to really free the space (and that trashcan could live underneath your home directory as a ".something" directory, explaining why your home is still as big, and why your du ... * didn't see it).
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/615585", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/438474/" ] }
615,593
I am trying to compare the value of two columns without seeing the order of it. I tried with summing the values and matching them if matches then putting Match otherwise Nomatch in additional column. But the problem here, sum of two numbers can be the same,for example: dummy thought (which I think can happen as the list is long): 7+5=12; 5+7=12 = Match6+6=12; 4+8=12 = Nomatch in theory while seeing the numbers but summing them showing the Match. locus truth predictedCSF1PO_007-BC03_20171027_2149 11,12 11,12CSF1PO_007-BC04_20171027_2149 11,12 11,12CSF1PO_19_20171027_2149 10,12 12,10CSF1PO_20_20171027_2149 10,0 10,11CSF1PO_A-10_2018123_1836 12,0 12,13CSF1PO_A-11_2018123_1836 10,12 12,10CSF1PO_A-1_20181222_0036 10,11 10,11CSF1PO_A-12_2018123_1836 11,12 11,12CSF1PO_A-13_2018123_1836 8,10 10,8CSF1PO_A-14_2018123_1836 8,11 8,11 Tried so far using summing and match cat test | sed '1d' | sed 's/,/\t/g' | awk '{print $1"\t"$2+$3"\t"$4+$5}' | awk '{ if ($2 == $3) print $1"\t"$2"\t"$3"\t""Match"; else print $1"\t"$2"\t"$3"\t""NoMatch"}'Output:CSF1PO_007-BC03_20171027_2149 23 23 MatchCSF1PO_007-BC04_20171027_2149 23 23 MatchCSF1PO_19_20171027_2149 22 22 MatchCSF1PO_20_20171027_2149 10 21 NoMatchCSF1PO_A-10_2018123_1836 12 25 NoMatchCSF1PO_A-11_2018123_1836 22 22 MatchCSF1PO_A-1_20181222_0036 21 21 MatchCSF1PO_A-12_2018123_1836 23 23 MatchCSF1PO_A-13_2018123_1836 18 18 MatchCSF1PO_A-14_2018123_1836 19 19 Match Note: one more thing has to be in mind that either one number matches with other column's values can be considered as "Matched". Example:CSF1PO_20_20171027_2149 10,0 10,11 === Match as one number matches (order does not matter)CSF1PO_A-10_2018123_1836 12,0 12,13 === Match as one number matches (order does not matter) One possible solution I tried, which seems working but need clarification or other possible solution. cat test | sed '1d' | sed 's/,/\t/g' | awk '{ if ($2 == $4 || $2 == $5) print $0 , "=>", "Match"; else if ($3 == $5 || $3 == $4) print $0 , "=>", "Match"; else print $0,"=>","Nomatch"}'CSF1PO_007-BC03_20171027_2149 11 12 11 12 => MatchCSF1PO_007-BC04_20171027_2149 11 12 11 12 => MatchCSF1PO_19_20171027_2149 10 12 12 10 => MatchCSF1PO_20_20171027_2149 10 0 10 11 => MatchCSF1PO_A-10_2018123_1836 12 0 12 13 => MatchCSF1PO_A-11_2018123_1836 10 12 12 10 => MatchCSF1PO_A-1_20181222_0036 10 11 10 11 => MatchCSF1PO_A-12_2018123_1836 11 12 11 12 => MatchCSF1PO_A-13_2018123_1836 8 10 10 8 => MatchCSF1PO_A-14_2018123_1836 8 11 8 11 => Match Need clarification if I am doing right.Thanks
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/615593", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/152815/" ] }
615,676
I have tried many different solutions found around this and other websites, such as sudo apt update && sudo apt upgradesudo apt list --upgradeablesudo apt full-upgradesudo apt dist-upgrade and no information nor update of the package is displayed. I also tried aptitude update and checked the log and no information about the package is displayed. Any ideas on how to identify this package? My guess at this point, is that it might have something to do with the sources.list file. Any comments on this? I have Linux Mint with the latest distribution Ulyana / Focal. Thank you very much!
apt upgrade will tell you what it would like to do, including package upgrades; and this will include a list of packages it won’t upgrade: $ sudo apt upgrade -o APT::Get::Show-Upgraded=trueThe following packages were automatically installed and are no longer required: [any packages which could be auto-removed]Use 'sudo apt autoremove' to remove them.The following NEW packages will be installed: [any packages which will be installed]The following packages have been kept back: [any packages which are upgradeable but won’t be upgraded]The following packages will be upgraded: [any packages which will be upgraded]NN upgraded, NN newly installed, NN to remove and NN not upgraded.Need to get ... of archives.After this operation, ... of additional disk space will be used.Do you want to continue? [Y/n] (replacing NN with the four different values reflecting the packages listed above). If this still doesn’t show anything for you, perhaps the non-upgraded package is locally installed (but then, apt presumably wouldn’t want to upgrade it...). A good tool to investigate this is apt-show-versions : sudo apt install apt-show-versions Then run apt-show-versions | grep "No available version in archive" to see packages which aren’t available in the configured repositories, and apt-show-versions | grep upgradeable to see which packages are upgradeable (regardless of whether apt would upgrade them). You can usually find out more about why the package is not being upgraded by running apt install with the package name. If its upgrade would cause another package to be removed, upgrade would skip it, but full-upgrade would upgrade it. Yet another possibility is that the upgrade candidate is blocked; on Linux Mint in particular, this can happen with snapd , which is disabled by default by the /etc/apt/preferences.d/nosnap.pref configuration file.
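One more quick check that is occasionally the culprit: a package that has been placed on hold will also never be upgraded, and this lists any such packages:

    apt-mark showhold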
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/615676", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/438349/" ] }
615,767
I'm trying to replace lines that match the regex => '.*', in one file with lines from another file. Two example files.

File 1:

    'text_clear' => 'Clear',
    'text_search' => 'Search',
    'text_enabled' => 'Enabled',

File 2:

    emptied
    lost
    turned off

I'm trying to run a Linux command using awk/sed/grep to create a third file that would output File 3:

    'text_clear' => 'emptied',
    'text_search' => 'lost',
    'text_enabled' => 'turned off',

I've been successful in extracting what I want to edit with a Python script, but if possible I want to just use a Linux command to do both. I've been racking my head over this for 3 hours now. Any help would be appreciated.
code.awk : BEGIN{j=1}NR==FNR{a[NR]=$0;next}sub(/=> '.*',$/,"=> '"a[j]"',"){++j}1 awk -f code.awk file2 file1 > file3 Line by line explanation : Initialize j=1 . Put each line of file2 in the array a . In file1 , for each line, try to substitute a string matching the => '.*',$ regex by the concatenation of => ' a[j] ', . If the substitution occurred, increment j . Print the line. $ cat file3 'text_clear' => 'emptied', 'text_search' => 'lost', 'text_enabled' => 'turned off',
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/615767", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/438429/" ] }
615,779
I have a bunch of directories at the same level and would like to sort them according to the last modified date of the content (recursive) inside them.However, in nautilus, it looks like the directories' "last modified date" are only updated if new files are created inside. Is there anyway to show the recursive "last modified date" of these directories? Edit:I only needed to know the date to the nearest minute. So I've adopted Stéphane Chazelas's solution with minor modifications to reduce the clutter: find . -mindepth 2 -type f -printf '%TF %TH:%TM/%P\0' |LC_ALL=C sort -zt/ -k2,2 -k1,1r |LC_ALL=C sort -t/ -zmsuk2,2 |LC_ALL=C sort -z |cut -zd/ -f1,2 | tr '\0/' '\n\t'
The last modification time of a directory (think like phone directory , not folder ) is the time it was last modified, like when an entry was removed, added or edited in that directory. To find out the newest regular file recursively in it, you would need to read the contents of that directory and every directory within and for each file, check the file's modification time. That's a costly thing to do, I wouldn't expect any file manager application to do it. You could however script it. With the GNU implementations of find and sort (and any Bourne-like shell), you could do: TZ=UTC0 find . -mindepth 2 -type f -printf '%TFZ%TT/%P\0' | LC_ALL=C sort -zt/ -k2,2 -k1,1r | LC_ALL=C sort -t/ -zmsuk2,2 | LC_ALL=C sort -z | tr '\0' '\n' Which would give something like: 2020-02-08Z19:17:22.3588966190/Scripts/.distfiles2020-02-09Z09:25:37.5336986350/StartupFiles/zshrc2020-07-26Z20:33:17.7263164070/Misc/vcs_info-examples2020-07-26Z20:33:17.7463157170/Util/ztst-syntax.vim2020-08-22Z18:06:17.9773156630/Functions/VCS_Info2020-08-30Z11:11:00.5701005930/autom4te.cache/requests2020-08-30Z11:11:31.5245491550/Config/defs.mk2020-08-30Z11:11:31.6085449480/Etc/Makefile2020-08-30Z11:12:10.9305773600/INSTALL.d/share/zsh/5.8.0.2-dev/help2020-10-22Z05:17:15.3808945480/Completion/Base/Utility2020-10-22Z05:17:15.3928938520/Doc/Zsh/zle.yo2020-10-22Z05:17:15.3968936190/Src/zsh.h2020-10-22Z05:17:15.3968936190/Test/D02glob.ztst2020-10-22Z05:17:15.4168924590/.git/logs/refs/heads/master That is, giving the newest regular file in each directory with its timestamp. Directories without regular files in them are not shown. To only see the list of directories, insert a cut -zd/ -f2 | before the tr command. For a prettier output like in the zsh approach, you could replace the tr command with: LC_ALL=C gawk -v RS='\0' -F / '{ dir = $2; mtime = $1 sub("[^/]*/[^/]*/", "") printf "%-20s %s (%s)\n", dir, mtime, $0}' While we're at using gawk , we could also tell find to print the timestamp as a fractional Unix epoch time and gawk reformat it in local time: find . -mindepth 2 -type f -printf '%T@/%P\0' | LC_ALL=C sort -zt/ -k2,2 -k1,1rn | LC_ALL=C sort -t/ -zmsuk2,2 | LC_ALL=C sort -zn | LC_ALL=C gawk -v RS='\0' -F / '{ dir = $2; split($1, mtime, ".") sub("[^/]*/", "") printf "%-20s %s (%s)\n", dir, strftime("%FT%T." mtime[2] "%z", mtime[1]), $0}' Which would give an output like: cross-build 2019-12-02T13:48:33.0505299150+0000 (cross-build/x86-beos.cache)m4 2019-12-02T13:48:33.4615093990+0000 (m4/xsize.m4)autom4te.cache 2019-12-02T13:50:48.8897482560+0000 (autom4te.cache/requests)CWRU 2020-08-09T17:17:21.4712835520+0100 (CWRU/CWRU.chlog)include 2020-08-09T17:17:21.5872807740+0100 (include/posixtime.h)tests 2020-08-09T17:17:21.8392747400+0100 (tests/type.right).git 2020-08-09T17:17:21.8472745490+0100 (.git/index)doc 2020-08-09T17:35:35.1638603570+0100 (doc/Makefile)po 2020-08-09T17:35:35.3758514290+0100 (po/Makefile)support 2020-08-09T17:35:36.7037954930+0100 (support/man2html)lib 2020-08-09T17:35:42.3755564970+0100 (lib/readline/libhistory.a)builtins 2020-08-09T17:35:42.5035511020+0100 (builtins/libbuiltins.a)examples 2020-08-09T17:35:47.1513551370+0100 (examples/loadables/cut)INSTALL.d 2020-08-09T17:35:47.3993446790+0100 (INSTALL.d/lib/bash/cut)
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/615779", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/319441/" ] }
615,800
I'm currently running Debian Buster with the 4.19.0-9-amd64 kernel. I find that intermittently after booting my system, selecting this kernel, entering my boot disk encryption key, and waiting for systemd to launch its services, my screen will clear, but will not proceed to launch my WM from that point. (as a note, 4.19.0-10-amd64 appears to cause this behavior 100% of the time) Normally, I expect my primary monitor to turn off between systemd's services starting and my WM login screen appearing, but that does not happen in these cases–instead, my system will hang on an apparently empty terminal screen (which does not animate or respond to input) until it is hard rebooted. To my knowledge, this has always occurred, even on a fresh Debian install with little-to-no other software installed. The only other thing I could find which seems odd is that every time I boot, before prompting me for my encrypted disk key, I see the following lines: volume group "debian-vg" not found Cannot process volume group debian-vg volume group "debian-vg" not found Cannot process volume group debian-vg I suspect they are not related, but this is the only oddity I can pinpoint in my boot.log. My graphics drivers are nvidia's proprietary drivers, with my machine running two SLI'd GTX 770 cards. My desktop environment is KDE Plasma. The output of sudo dmesg is too long to add to this post, and is posted to Debian's pastezone .
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/615800", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/136537/" ] }
615,832
I have a txt file (input.txt) that looks like this: A_Karitiana-4.DG Ignore_Karitiana(discovery).DGA_French-4.DG Ignore_French(discovery).DGA_Dinka-4.DG Dinka.DGA_Dai-5.DG Dai.DGS_Dai-2.DG Dai.DGB_Dai-4.DG Dai.DGS_Dai-3.DG Dai.DGS_Dai-1.DG Dai.DG I need to create a new txt file (output.txt) that contains only the first column of input.txt. So output.txt must look like this: A_Karitiana-4.DG A_French-4.DG A_Dinka-4.DG A_Dai-5.DG S_Dai-2.DG B_Dai-4.DG S_Dai-3.DG S_Dai-1.DG I've tried with this command: awk '$1' input.txt > output.txt and also with this: awk -F' ' '$1' input.txt > output.txt but both of them create an output.txt file that looks exactly the same as input.txt. I suppose it's a matter of delimiter, but I can't figure out how to fix this.
You're not printing. Try

awk '{print $1}' input.txt > output.txt

When you just give an expression (the way you tried), awk works somewhat like a default grep: it prints any matching lines in full:

awk '/regexp/' file.txt - print lines matching regexp
awk 'NR==3' file.txt - print line 3
awk '1' file.txt - print all lines where 1 is true, i.e. all (okay, an awk-ward way to cat, but we're approaching what you did)
awk '$1' file.txt - print all lines where $1 evaluates to true, i.e. is non-empty (and does not otherwise evaluate to false, such as "0"), i.e. given your file, print all lines (since $1 here will always contain a non-numerical, non-empty string)
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/615832", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/438492/" ] }
615,928
I want to pipe in a command to sed like so: md5sum input.txt | sed 's/^\(....\).*/\1/;q' This works by only outputting the first 4 characters of the checksum. However, I want to output the first 4 characters, but also have an x in the place of every other characters (redacting info). I'm so lost now.
With GNU Sed,

md5sum input.txt | sed 's/./x/5g'

This simply skips substituting the 4 first characters of the string and performs the substitution for all other characters. A POSIX alternative with Awk (although there is probably something simpler),

md5sum xad | awk '{
  four=substr($0, 1, 4)
  rest=substr($0, 5)
  gsub(/./, "x", rest)
  print four, rest
}' OFS=""
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/615928", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/438584/" ] }
615,929
Original question I downloaded the Cozette font , which includes CozetteVector.ttf and CozetteVector.otf . I copied both of these files to ~/.local/share/fonts and ran fc-cache -v ~/.local/share/fonts . Now when I run fc-list | grep -i cozette , I can see that both the TTF and OTF versions are listed, with the same name, CozetteVector:style=Regular . When I select the CozetteVector font in a graphical program, which font will be used? How does Fontconfig handle this situation? Will having both versions cause problems or create conflicts? Additional context on why this might occur, and why someone might care On Fedora 35, font packages often include both OTF and TTF formats of the same font. For example, redhat-text-fonts package includes both /usr/share/fonts/redhat/RedHatText-LightItalic.otf and /usr/share/fonts/redhat/RedHatText-LightItalic.ttf . I have also encountered a situation where I "installed" a font manually by copying it to ~/.local/share/fonts and running fc-cache -f , but then I later installed the same fonts with a system-provided package. Now I have the same font files, defining the same font families, in two entirely different directories. The filenames might be exactly identical, or there might be some variations. Usually the duplicates are actually duplicates and the difference doesn't matter. But sometimes these conflicts are nontrivial, e.g. one version is a variable font while another version isn't, or one version includes several stylistic sets while the other version doesn't.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/615929", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/73256/" ] }
615,945
alias ip="ifconfig eth0 | grep '255$' | awk '{print '\$2'}'" Using the command alias with no argument I then get alias ip='ifconfig eth0 | grep '\' '255$'\'' | awk '\''{print '\''$2'\''}'\''' However, if I run ip , I will get inet xxx.xxx.x.xxx netmask xxx.xxx.xxx.x broadcast xxx.xxx.x.xxx When I run ifconfig eth0 | grep '255$' | awk '{print '\$2'}' , I can get my IP only successfully like xxx.xxx.x.xxx . I feel confused about this condition. Could you help me?
Past all that quoting hell, what you end up running is awk '{print }'. That's because in the alias assignment \$2 is in a double-quoted string, where the backslash prevents expanding $2, but when the alias is used, the $2 is left unquoted and expands to whatever the current positional parameter $2 is, probably empty in your interactive shell. The print command without any arguments prints the whole input line. It's easiest to see with set -x:

$ set -x
$ ip
+ grep '255$'
+ /sbin/ifconfig eth0
+ awk '{print }'
        inet ... netmask 255.0.0.0 broadcast 10.255.255.255

(And I seem to have my netmask wrong on that interface.) Things like this are easier done with a function, since then you don't need to quote the whole command:

myip() {
    ifconfig eth0 | grep '255$' | awk '{print $2}'
}

However, awk can do what grep does, and we probably shouldn't match on the 255 but perhaps the inet keyword, so:

myip() {
    ifconfig eth0 | awk '/inet/ {print $2}'
}

or still a bit more explicitly:

myip() {
    ifconfig eth0 | awk '$1 == "inet" {print $2}'
}

which also avoids the problem that /inet/ could match the inet6 line, too. Note that ip is another utility.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/615945", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/438316/" ] }
616,151
Can some one tell me what is the meaning of each line with an example ,I am not getting why regex is used and even [!0122...] #!/bin/shis_integer (){ case "${1#[+-]}" in (*[!0123456789]*) return 1 ;; ('') return 1 ;; (*) return 0 ;; esac}
#!/bin/sh in the syntax of the shell is a comment. However, that #! tells the kernel, when executing that file, that the interpreter stored at that /bin/sh path should be used to interpret that file, and should be executed with the path of the script as argument.

is_integer () compound-command

is the POSIX sh syntax to define a function. { ... } is a compound command called a command group. Its only purpose is to group commands, here to make it the body of the function. Here, it's superfluous as its content is only one compound command, however using the { ... } command group as the body of every function is common practice and makes for more readable code, so is generally recommended. The same function could have been written:

is_integer ()
  case "${1#[+-]}" in
    (*[!0123456789]*) return 1 ;;
    ('') return 1 ;;
    (*) return 0 ;;
  esac

case something in (pattern1 | pattern2) ...;; (pattern3) ...;; esac is a case / esac construct (a compound command) which matches something in turn against each pattern(s), and upon the first match, executes the corresponding code. Here something is ${1#[-+]}. That's a parameter expansion, which applies the ${param#pattern} operator to the 1 parameter, which is the first argument to the function. That operator strips the shortest string that matches the pattern from the start of the contents of the parameter. [-+] is a wildcard pattern (not a regexp) that matches either the - or + character. So ${1#[-+]} expands to the value of the first argument stripped of a sign. So if the first argument was -2, that becomes 2. If it was -, it becomes the empty string. If it was 2, it stays 2.

You'll notice "${1#[+-]}" is quoted. Generally, you need to quote parameter expansions as otherwise they're subject to split+glob. Here, it's one of the very few contexts where that wouldn't happen though, so strictly speaking those quotes are superfluous (but don't harm and are still good practice).

Then that value is matched against some patterns. *[!0123456789]* is * -- any number of characters (though most shells will also accept non-characters) -- followed by [!0123456789] -- any character that is neither 0 nor 1 ... nor 9 -- followed by any number of characters (* again). So it will match any string that contains a character (or non-character in most shells) that is not a decimal digit. If there's a match, the return 1 code is executed, which will cause the function to return with that 1 exit code which, like any number other than 0, means false / failure.

'' is one way to represent the empty string. The empty string is also not a valid number but wouldn't have been matched by the previous pattern. Then * matches anything. So the return 0 would be run for any string that didn't match any of the previous patterns. It's superfluous here as the case statement is the last command in that function, and a case statement returns success / true if no command was run within. So here, that function definition could be shortened to:

is_integer()
  case ${1#[-+]} in
    ('' | *[!0123456789]*) false
  esac

Though that doesn't make it more legible.

In any case, that code is right to use [0123456789]. Especially for input validation (and it's critical to validate input when it's used in shell arithmetic expressions, see Security Implications of using unsanitized data in Shell Arithmetic evaluation), [0-9] or [[:digit:]] should not be used, especially if your sh implementation is bash, as [0-9] may match any character (or possibly multi-character collation element) that sorts in between 0 and 9, and [[:digit:]] on some BSDs will match digits of any decimal numeral system, not only the 0123456789 English ones, even in English locales. For instance, on a GNU system, in a typical US English locale (which these days tends to use UTF-8 as its charset), in bash, [0-9] would also match hundreds of other characters, digits from other scripts among them. And on FreeBSD, in that same locale, [[:digit:]] would match hundreds of different characters. If you let such a character through during input validation, you're not closing the paths to those arbitrary code injection vulnerabilities. In ksh and on GNU systems, some of the characters matched by [0-9] are valid in variable names. If such a variable is set (in the environment for instance) and contains a[0$(reboot>&2)], then:

is_integer "$1" || exit
echo "$(( $1 + 1 ))"

in ksh will cause a reboot if is_integer fails to reject that input.

To use a regular expression to do the matching, you'd need expr or awk, though few shells have those commands builtin, so it would be less efficient. Some [ implementations like the [ builtin of zsh or yash can also do regexp matching. And some shells also have a [[ ... ]] conditional expression construct that can do regexp matching, but none of those are in standard sh and they come with their own problem when it comes to input validation. While the * shell wildcard in most sh implementations will match sequences of bytes even if some of them don't form valid characters, same for [!0123456789], the .* or [^0123456789] regexp equivalent often doesn't. Here, it may not be a problem as long as that matching is positive. Doing a negative matching like:

regexp() {
  awk -- 'BEGIN {exit !(ARGV[1] ~ ARGV[2])}' "$@"
}
is_integer() {
  ! regexp "${1#[-+]}" '^(.*[^0123456789].*)?$'
}

as a direct translation of that case statement would be wrong, as it would fail to reject input that contains sequences of bytes not forming valid characters, but

is_number() {
  regexp "$1" '^[-+]?[0123456789]+$'
}

should be fine as it would reject any input containing sequences of bytes not forming valid characters.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/616151", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/437767/" ] }
616,157
With iproute2 userspace tools one can display the network devices using the ip commands verb link show ( sometimes shortened to l sh ). The output generate does not display the TYPE of link/interface device. root@box:/# ip link show1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000 link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 2: eno3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000 link/ether bc:97:e1:58:10:18 brd ff:ff:ff:ff:ff:ff 3: eno1np0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000 link/ether bc:97:e1:58:10:1a brd ff:ff:ff:ff:ff:ff4: eno2np1: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN mode DEFAULT group default qlen 1000 link/ether bc:97:e1:58:10:1b brd ff:ff:ff:ff:ff:ff Each of the linkes tells infos like mtu , the UP / DOWN twice in a redundant way, however there is no indication apparent to me which tells its type. Also I cannot find any indication in the manpages hot to display the TYPE , albeit there are many of them: TYPE := [ bridge | bridge_slave | bond | bond_slave | can | dummy | hsr | ifb | ipoib | macvlan | macvtap | vcan | veth | vlan | vxlan | ip6tnl | ipip | sit | gre | gretap | erspan | ip6gre | ip6gretap | ip6erspan | vti | vrf | nlmon | ipvlan | lowpan | geneve | macsec ] Is there a builtin way with the ip2route tools to output the TYPE in the listing?
The interface type information, being rarely used, is normally displayed only by adding the -details option to ip : -d , -details Output more detailed information. So ip -details link show would display this information for all these interfaces, but also many other additional informations like: $ ip -d link show lxcbr07: lxcbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default qlen 1000 link/ether 00:16:3e:00:00:00 brd ff:ff:ff:ff:ff:ff promiscuity 0 minmtu 68 maxmtu 65535 bridge forward_delay 1500 hello_time 200 max_age 2000 ageing_time 30000 stp_state 0 priority 32768 vlan_filtering 0 vlan_protocol 802.1Q bridge_id 8000.0:16:3e:0:0:0 designated_root 8000.0:16:3e:0:0:0 root_port 0 root_path_cost 0 topology_change 0 topology_change_detected 0 hello_timer 0.00 tcn_timer 0.00 topology_change_timer 0.00 gc_timer 34.76 vlan_default_pvid 1 vlan_stats_enabled 0 vlan_stats_per_port 0 group_fwd_mask 0 group_address 01:80:c2:00:00:00 mcast_snooping 1 mcast_router 1 mcast_query_use_ifaddr 0 mcast_querier 0 mcast_hash_elasticity 16 mcast_hash_max 4096 mcast_last_member_count 2 mcast_startup_query_count 2 mcast_last_member_interval 100 mcast_membership_interval 26000 mcast_querier_interval 25500 mcast_query_interval 12500 mcast_query_response_interval 1000 mcast_startup_query_interval 3124 mcast_stats_enabled 0 mcast_igmp_version 2 mcast_mld_version 1 nf_call_iptables 0 nf_call_ip6tables 0 nf_call_arptables 0 addrgenmode eui64 numtxqueues 1 numrxqueues 1 gso_max_size 65536 gso_max_segs 65535 with bridge at the start of the 3rd line here. Using JSON output along the jq command (which is a must-have tool when processing JSON from shell) allows to reliably parse the command's output, still without having to know beforehand the types, if one wants only to retrieve this information along the interface name. $ ip -details -json link show | jq --join-output '.[] | .ifname," ",.linkinfo.info_kind,"\n"'lo nulldummy0 dummydummy2 dummylxcbr0 bridgewlan0 nulleth0 nullvirbr0 bridgevirbr0-nic tuntap0 tunveth0 vethtest vethwireguard0 wireguardvethZ0ZQFJ veth Real interfaces (as well as lo ) have no type (ie .[].linkinfo.info_kind doesn't exist) and jq will return null for a non-existent field. It can be filtered out with this instead: ip -details -json link show | jq --join-output '.[] | .ifname," ", if .linkinfo.info_kind != null then .linkinfo.info_kind else empty end, "\n"' Actually, the search feature of ip link show puts together the kind and the slave kind as type , and the detailed output would show one on 3rd line, the other on 4th line. In JSON output those are two different fields: .[].linkinfo.info_kind and .[].linkinfo.info_slave_kind , so the slave types would require an other command, same for displaying both. Here's an example for both: ip -details -json link show | jq --join-output '.[] | if .ifname != null then .ifname, " ", if .linkinfo.info_kind != null then .linkinfo.info_kind else empty end, " ", if .linkinfo.info_slave_kind != null then .linkinfo.info_slave_kind else empty end, "\n" else empty end' which outputs instead: lo dummy0 dummy dummy2 dummy lxcbr0 bridge wlan0 eth0 virbr0 bridge virbr0-nic tun bridgetap0 tun veth0 veth test veth wireguard0 wireguard vethZ0ZQFJ veth bridge and shows here virbr0-nic being a tun (really tuntap the fact that it's tun or tap is in a sub-field) device as well as a bridge slave, and vethZ0ZQFJ a veth device as well as a bridge slave. This same jq filter above will also cope when fed with filtered output from ip ... 
link show ... type ...slave when querying for slave interfaces, which apparently returns extra empty objects for non-matching interfaces, by ignoring (empty) entries without interface name. So starting the line with ip -details -json link show type bridge_slave | would return only: virbr0-nic tun bridgevethZ0ZQFJ veth bridge
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/616157", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/383594/" ] }
616,165
How can I use the 500Gb of free space?I deleted Windows, and now there is a big gap of space, which I can't use. / Isn't big enough for an update, but I can not move it without un-mounting it, which isn't possible. How would I go about moving / and /boot ?
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/616165", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/432424/" ] }
616,300
I am using LightDM with the Slick Greeter . In my system, I have two user accounts for myself. They have the same name , but different user names . I want to separate private and professional work. Problem: in the greeter, there is no visual difference between them. I only see the name (not the user name ). Hacky solutions: know the order of the accounts use different window managers (because they have different icons and they are visible next to the user name ) Obviously, all of the above solutions are plain out stupid. Better ideas? I am willing to change the greeter, but not the the session manager LightDM, neither my names (because they are used in other programs, such as email).
Change your avatar: create the image file as /home/username/.face from: https://wiki.archlinux.org/index.php/LightDM#Changing_your_avatar
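A rough sketch of how that could look in practice (the image path below is just a placeholder, and restarting LightDM ends the current session, so do it from a console or just log out):

cp ~/Pictures/work-avatar.png ~/.face      # run as the account you want to mark
chmod 644 ~/.face                          # the greeter runs as a different user, so the file must be readable
sudo systemctl restart lightdm             # or simply log out and back in

With a distinct image per account, the two identically named users become distinguishable at a glance in Slick Greeter's user list.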
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/616300", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/247336/" ] }
616,330
As far as I know changing even a bit of a file, will change the whole checksum result, but when I change a file's name this does not affect its checksum (I've tried SHA-1, SHA-256 and MD5). Why? file name is not a part of file data? does it depend on file system?
The name of a file is a string in a directory entry, and a number of other meta data (file type, permissions, ownership, timestamps etc.) is stored in the inode. The filename is therefore not part of what constitutes the actual data of the file. In fact, a single file may have any number of names (hard links) in the filesystem, and may additionally be accessible through any number of arbitrarily named symbolic links. Since the filename is not part of the file's data, it will not be included automatically when you calculate e.g. the MD5 checksum with md5 or md5sum or some similar utility. Changing the file's name (or ownership or timestamps or permission etc.) or accessing it via one of its other names or symbolic links, if it has any, will therefore not have any effect on the file's MD5 checksum.
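A quick way to convince yourself of this, with throwaway files:

$ echo 'hello' > a.txt
$ md5sum a.txt                   # note the checksum
$ mv a.txt b.txt
$ md5sum b.txt                   # same checksum: only the directory entry changed
$ ln b.txt c.txt                 # a second hard link to the same inode
$ md5sum c.txt                   # still the same checksum
$ stat -c '%i %n' b.txt c.txt    # both names point at the same inode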
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/616330", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/378964/" ] }
616,371
The brightness (LCD backlight) controls on a Lenovo IdeaPad Gaming 3 (15ARH05, LCD display, AMD Renoir CPU Ryzen 5 4600H, discrete NVIDIA GeForce 1650 Ti Mobile) are not working: Fn keys show the brightness slider on the display moving. /sys/class/backlight/amdgpu_bl0/brightness changes accordingly from 0 to 255. The display does not show any brightness change. Manually writing to brightness does not change the display's brightness either. /sys/class/backlight/amdgpu_bl0/actual_brightness stays at 311 . I figure this indicates a problem with the amdgpu driver. The display seems to stay at full brightness always. Adjustments work fine on Windows 10. The laptop is running: Kali Linux Rolling linux-image-5.8.0-kali[23]-amd64 (based on 5.8.14) and custom-built kernels 5.9, 5.9.1 and 5.10-rc1, mostly based off the Kali config X with amdgpu drivers, discrete graphics unused (proprietary NVIDIA drivers loaded and unloaded for testing). I have tried booting with various acpi_backlight kernel options, which lead to various backlights being available in /sys/class/backlight/*/brightness : acpi_backlight=video : acpi_video0 acpi_video1 amdgpu_bl0 acpi_backlight=vendor : amdgpu_bl0 ideapad acpi_backlight=native : amdgpu_bl0 acpi_backlight=none : amdgpu_bl0 Other things that did not work: acpi_osi=Linux (no change) acpi_osi= (hangs at boot) BIOS update (no other version available) moving /lib/firmware/amdgpu/renoir_dmcu.bin away patching amdgpu_dm.c I am aware that there has been a number of updates related to backlights for AMDGPUs, like general support and fixes in kernel 5.7.x and updates to the scaling of brightness values >255 in 5.9, but so far this seems not to have helped my case (or possibly, broke more things). I am not looking for: software alternatives adjusting the gamma values using discrete graphics (if it can be avoided) What else can I do or look into to gain control of the backlight? I came across this comment and this bug report , which seem to suggest that some kernel fixes may have broken other things. What would be the best place to report that?
Kernels 5.11.7, 5.12-rc3, and later allow the kernel parameter amdgpu.backlight=0 to be passed at boot to fix this issue for Lenovo IdeaPad Gaming 3, Lenovo Legion 5 and possibly other laptops. For Debian-based distributions using GRUB the parameter can be added in /etc/default/grub : GRUB_CMDLINE_LINUX="amdgpu.backlight=0" After running update-grub and rebooting the backlight controls should work. Previous kernels required patching: The workaround can be found in a GitLab issue . Forcing caps->aux_support = false; in drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c fixed the issue.
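After booting with the parameter, you can verify it was applied and drive the backlight directly through sysfs (a sketch; the amdgpu_bl0 device name is taken from the question and may differ on other machines):

grep -o 'amdgpu.backlight=0' /proc/cmdline                        # confirm the kernel saw the option
echo 100 | sudo tee /sys/class/backlight/amdgpu_bl0/brightness    # values 0-255, as in the question
cat /sys/class/backlight/amdgpu_bl0/actual_brightness             # should now track the requested value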
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/616371", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/169190/" ] }
616,385
My problem is that I need to move from a directory to another one by only using the ln command So here I have this relative path: ../../10/wsl and I know I have to create a symlink with ln to be able to create a tunnel to the other directory. Then I will be able to use cd to move. I tried ln -s filepath ln -rs filepath but nothing works
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/616385", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/438956/" ] }
616,492
So I have a tab-separated input file that has blanks in a certain column like: input_file : A B C D1 12 34 54534 12 5623 10 15 6731 99 100 Now, my goal is to add all such rows with blanks to my output_file like: output_file : 34 12 5631 99 100 So I use this command to achieve my result- awk -F $'\t' '$2 == ""' input_file >> output_file This works great if column "B" is always in position 2, however it won't work if it is in a different position. How do I address column "B" by its name in the awk command?
AFAIK there is no way to do this in awk short of iterating over the fields of the header and recording the index of the matching column: awk -F '\t' 'NR==1{for(i=1;i<=NF;i++) if($i=="B") bi=i} $bi == ""' file.tsv If you have access to Miller, you could filter by name directly ex. mlr --tsv filter '$B == ""' file.tsv or with utilities from the Python CSVKit: csvgrep -t -c B -r "." -i file.tsv | csvformat -T
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/616492", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/363861/" ] }
616,493
What exactly is the difference between kill <pid> and kill -s TERM <pid> .Initially i thought the $TERM variable holds a signal number but when I echo TERM its gives me $echo $TERMxterm-256color
There is no difference. From man kill : The default signal for kill is TERM. kill -s TERM <pid> does not expand the variable TERM , as kill -s $TERM <pid> would. It uses the string TERM . The correspondence between signal numbers and names are in man 7 signal . Also, from the POSIX specification of kill (my italics), -s signal_name Specify the signal to send, using one of the symbolic names defined in the <signal.h> header. Values of signal_name shall be recognized in a case-independent fashion, without the SIG prefix . In addition, the symbolic name 0 shall be recognized, representing the signal value zero. The corresponding signal shall be sent instead of SIGTERM.
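A small sketch showing that both forms deliver the same signal (bash; 143 is 128 plus SIGTERM's number 15):

$ sleep 300 &
$ kill "$!"              # no -s: SIGTERM is sent
$ wait "$!"; echo "$?"   # prints 143
$ sleep 300 &
$ kill -s TERM "$!"      # identical effect
$ wait "$!"; echo "$?"   # 143 again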
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/616493", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/369707/" ] }
616,527
I have a machine running Ubuntu with a SSH config file in ~/.ssh/config with the following permissions (default when creating a new file) -rw-rw-r-- 1 dev dev 75 Oct 26 20:13 config After creating a new user (test) with the same primary group (dev) as the existing user (dev), I am no longer able to git clone when logged in as dev. dev@vm:~$ git clone ...Cloning into ...Bad owner or permissions on /home/dev/.ssh/configfatal: Could not read from remote repository.Please make sure you have the correct access rightsand the repository exists. Googling around seems to suggest that I can fix the ssh problem by running chmod 600 ~/.ssh/config , but why would this even be an issue? How can I fix this systematically, since I assume this would've affected other files too? Thanks!
In the openssh-7.6p1 source code file readconf.c we can see that the permission checking is delegated to a function secure_permissions : if (flags & SSHCONF_CHECKPERM) { struct stat sb; if (fstat(fileno(f), &sb) == -1) fatal("fstat %s: %s", filename, strerror(errno)); if (!secure_permissions(&sb, getuid())) fatal("Bad owner or permissions on %s", filename);} This function is in misc.c and we can see that it indeed explicitly enforces one member per group if the file is group-writeable: intsecure_permissions(struct stat *st, uid_t uid){ if (!platform_sys_dir_uid(st->st_uid) && st->st_uid != uid) return 0; if ((st->st_mode & 002) != 0) return 0; if ((st->st_mode & 020) != 0) { /* If the file is group-writable, the group in question must * have exactly one member, namely the file's owner. * (Zero-member groups are typically used by setgid * binaries, and are unlikely to be suitable.) */ struct passwd *pw; struct group *gr; int members = 0; gr = getgrgid(st->st_gid); if (!gr) return 0; /* Check primary group memberships. */ while ((pw = getpwent()) != NULL) { if (pw->pw_gid == gr->gr_gid) { ++members; if (pw->pw_uid != uid) return 0; } } endpwent(); pw = getpwuid(st->st_uid); if (!pw) return 0; /* Check supplementary group memberships. */ if (gr->gr_mem[0]) { ++members; if (strcmp(pw->pw_name, gr->gr_mem[0]) || gr->gr_mem[1]) return 0; } if (!members) return 0; } return 1;}
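Given that check, the simplest fix is to make the file not group-writable; either of these (a sketch) satisfies secure_permissions regardless of how many users share the group:

chmod 600 ~/.ssh/config     # owner read/write only
# or, equivalently for this particular check, just drop the group-write bit:
chmod g-w ~/.ssh/config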
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/616527", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/439088/" ] }
616,787
When I run the command ./program I get the error: bash: ./program: cannot execute binary file: Exec format error When I run uname -a I get: 4.4.0-21-generic #37-Ubuntu SMP Mon Apr 18 18:34:49 UTC 2016 i686 i686 i686 GNU/Linux Also I checked the information about the program that I was trying to run and I got: ELF 64-bit LSB executable, x86-64, version 1 (GNU/Linux), dynamically linked, interpreter /lib64/ld-linux-x86-64.so.2, for GNU/Linux 2.6.18, BuildID[sha1]=c154cb3d21f6bbd505d165aed3aa6ed682729441, not stripped /proc/cpuinfo shows flags : fpuvme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe nx rdtscp lm constant_tsc arch_perfmon pebs bts xtopology nonstop_tsc aperfmperf eagerfpu pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic popcnt tsc_deadline_timer aes xsave avx lahf_lm epb tpr_shadow vnmi flexpriority ept vpid xsaveopt dtherm ida arat pln pts How can I run the program?
You have a 64-bit x86 CPU (indicated by the lm flag in /proc/cpuinfo), but you're running a 32-bit kernel. The program you're trying to run requires a 64-bit runtime, so it won't work as-is: even on a 64-bit CPU, a 32-bit kernel can't run 64-bit programs. If you can find a 32-bit build of the program (or build it yourself), use that. Alternatively, you can install a 64-bit kernel, reboot, and then install the 64-bit libraries required by your program. To install a 64-bit kernel, run

sudo dpkg --add-architecture amd64
sudo apt-get update
sudo apt-get install linux-image-generic:amd64

This will install the latest 64-bit Xenial kernel, along with various supporting 64-bit packages. Once you reboot, you should find that uname -a shows x86_64 rather than i686. If you attempt to run your program again, it might just work, or you'll get an error because of missing libraries; in the latter case, install the corresponding packages (use apt-file to find them) to get the program working.
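A quick way to confirm the mismatch before, and the fix after, rebooting (sketch):

file ./program                                        # "ELF 64-bit LSB executable, x86-64", per the question
uname -m                                              # i686 now, x86_64 once the 64-bit kernel is running
grep -q ' lm ' /proc/cpuinfo && echo '64-bit capable CPU'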
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/616787", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/439346/" ] }
616,809
I have Dell Inspiron shipped with Win 10 Home Edition and MS Office. 1 TB of HDD with partition of C: 416 GB; E and F each having 249 GB. I want to Boot Ubuntu LTS, Kali Linux, Arch Linux and Fedora. Later on maybe Gentoo and Slackware also. Will this Multiple booting cause any problem to Win 10 and purchased applications from MS Store? How do I partition 249 GB of F Drive for each of these OS? (when I shrink Volume in Disk Management Utility, it all goes into one Unallocated space. Or shall I Make New Volumes to Boot Different OS) Any recommendation on multiple Booting will be of great help. Consider Me taking 1st Dive into Ocean of Linux
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/616809", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/439360/" ] }
616,810
In this directory /home/Scrivania/pdb_files , I have a list of files with the same filename format, XXX_?.pdb . For example, A4R_A.pdb A4R_B.pdbA4R_C.pdbTY6_A.pdb001_A.pdb001_B.pdbATE_B.pdb I need to keep only some of these files and remove others. In particular, if I have multiple files that have the first three characters of the name identical, I would like to keep only one, regardless of the last character, " ? ". So at the end, in my directory, I should have only these files: A4R_A.pdb TY6_A.pdb001_A.pdbATE_B.pdb and remove these: A4R_B.pdb , A4R_C.pdb , 001_B.pdb It is not critical which of the files with the first three equal characters is retained ( A , B or C ). Also, there might be other cases where the character " ? " is not a letter, but a number, or maybe is a letter different to A, B or C. So the selection must be based exclusively on the first three characters. For example, one strategy is to keep, for more files with the first three equal characters, only the first file you come across. Could someone suggest a script in bash that can do this?
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/616810", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/393805/" ] }
616,811
I am trying to add a static persistent route on a Debian 10 machine without needing to restart it.My /etc/network/interfaces looks like this: # This file describes the network interfaces available on your system# and how to activate them. For more information, see interfaces(5).source /etc/network/interfaces.d/*# The loopback network interfaceauto loiface lo inet loopback# The primary network interfaceallow-hotplug ens192iface ens192 inet static address xxx.xxx.xxx.xxx/xx gateway xxx.xxx.xxx.xxx # dns-* options are implemented by the resolvconf package, if installed dns-nameservers xxx.xxx.xxx.xxx xxx.xxx.xxx.xxx dns-search domain.com up /bin/ip route add yyy.yyy.yyy.yyy/yy via yyy.yyy.yyy.yyy After I issue /etc/init.d/networking restart I lose network connectivity. A ping to any IP address throws the message connect: Network is unreachable . If I reboot the machine everything - including the new static route - works fine. Can anyone give me a hint on how to add static persistent routes without needing to restart the machine?
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/616811", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/439363/" ] }
616,823
In case an additional monitor would be great But problem so bigManjaro Cinnamon doesn't recognize HP 14455/LE1901w LCD Monitor's higher resolutions. Also xrandr command doesn't support "1440 x 900 @ 60 Hz;1280 x 1024 @ 60 Hz and 75 Hz; 1280 x 960 @ 60 Hz" resolutions which support in specification document document details says Resolutions supported: 1440 x 900 @ 60 Hz;1280 x 1024 @ 60 Hz and 75 Hz; 1280 x 960 @ 60 Hz; 1024 x 768 @ 60 Hz and 75 Hz; 800 x 600 @ 60 Hz and 75 Hz; 640 x 480 @ 60 Hz and 75 Hz; 1152 x 720 @ 60 Hz;1280 x 768 @ 60 Hz; 720 x 400 @ 70 Hz; 1152 x 870 @ 75 Hz and 832 x 624 @ 75 Hz Additionally doesn't accept adding a new one! xrandr --newmode "1440x900_60.00" 60.00 1440 900 1024 1064 1168 1312 600 601 604 622 -HSync +Vsync xrandr: unrecognized option '622' Try 'xrandr --help' for more information.Xrandr detect some resolutions by default xrandr --verbose VGA1 connected 1024x768+0+0 (0x15c) normal (normal left inverted right x axis y axis) 0mm x 0mm Identifier: 0x48 Timestamp: 64675771 Subpixel: unknown Gamma: 1.0:1.0:1.0 Brightness: 1.0 Clones: CRTC: 1 CRTCs: 0 1 2 Transform: 1.000000 0.000000 0.000000 0.000000 1.000000 0.000000 0.000000 0.000000 1.000000 filter: link-status: Good supported: Good, Bad non-desktop: 0 range: (0, 1) 1024x768 (0x15c) 65.000MHz -HSync -VSync *current h: width 1024 start 1048 end 1184 total 1344 skew 0 clock 48.36KHz v: height 768 start 771 end 777 total 806 clock 60.00Hz 800x600 (0x163) 40.000MHz +HSync +VSync h: width 800 start 840 end 968 total 1056 skew 0 clock 37.88KHz v: height 600 start 601 end 605 total 628 clock 60.32Hz 800x600 (0x164) 36.000MHz +HSync +VSync h: width 800 start 824 end 896 total 1024 skew 0 clock 35.16KHz v: height 600 start 601 end 603 total 625 clock 56.25Hz 848x480 (0x27b) 33.750MHz +HSync +VSync h: width 848 start 864 end 976 total 1088 skew 0 clock 31.02KHz v: height 480 start 486 end 494 total 517 clock 60.00Hz 640x480 (0x168) 25.175MHz -HSync -VSync h: width 640 start 656 end 752 total 800 skew 0 clock 31.47KHz v: height 480 start 490 end 492 total 525 clock 59.94HzVIRTUAL1 disconnected (normal left inverted right x axis y axis) Identifier: 0x49 Timestamp: 64675771 Subpixel: no subpixels Clones: CRTCs: 3 Transform: 1.000000 0.000000 0.000000 0.000000 1.000000 0.000000 0.000000 0.000000 1.000000 filter: non-desktop: 0 supported: 0, 1 I want to add this external monitor as second screen, but many people uses --same as and -miror command. What is reverse of them because I need additional one. Does any way for auto detect on Manjaro Cinnamon? How can I restore setting of xrandr?
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/616823", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/438676/" ] }
616,862
I use arch linux with i3wm. My notifications does not work. When I type dunst in to command line it responds whith: WARNING: No dunstrc found. When I type notify-send --icon=gtk-info Test "This is a test" or dunstify --action="replyAction,reply" "Message received" it keeps running until I kill it with crt+c while no notification shows up. This content of /etc/X11/xinit/xinitrc.d/30-dbus.sh file: #!/bin/bash# launches a session dbus instanceif [ -z "$DBUS_SESSION_BUS_ADDRESS" ] && type dbus-launch >/dev/null; then eval $(dbus-launch --sh-syntax --exit-with-session)fi This is in my journalctl : 18:57:43 arch-thinkpad systemd[562]: Starting Dunst notification daemon...18:57:43 arch-thinkpad dunst[49939]: CRITICAL: Cannot open X11 display.18:57:43 arch-thinkpad systemd[562]: dunst.service: Main process exited, code=exited, status=1/FAILURE18:57:43 arch-thinkpad systemd[562]: dunst.service: Failed with result 'exit-code'.18:57:43 arch-thinkpad systemd[562]: Failed to start Dunst notification daemon.18:59:43 arch-thinkpad dbus-daemon[708]: [session uid=1000 pid=708] Failed to activate service 'org.freedesktop.Notifications': timed out (service_start_timeout=120000ms)18:59:43 arch-thinkpad dbus-daemon[708]: [session uid=1000 pid=708] Activating via systemd: service name='org.freedesktop.Notifications' unit='dunst.service' requested by ':1.112' (uid=1000 pid=17718 comm="/usr/lib/electron/electron /usr/bin/caprine ") How can I fix this so programs can show notification throw dunst ? Thank you for help EDIT1: The No dunstrc found. error has been fiex whith this command: cp /usr/share/dunst/dunstrc ~/.config/dunst/dunstrc
Instead of installing dunst as a service, add it to your i3 config: Edit ~/.config/i3/config and add: exec --no-startup-id dunst
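A sketch of the full sequence; the dunst.service unit name is taken from the journal output in the question:

systemctl --user disable --now dunst.service   # stop relying on the failing user service
# add the exec line above to ~/.config/i3/config, then reload i3:
i3-msg restart
notify-send 'test' 'dunst is now started from the i3 config'

With dunst already running in the X session, the D-Bus activation of org.freedesktop.Notifications is no longer needed, so the timeout in the journal should disappear.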
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/616862", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/429595/" ] }
616,897
I just noticed that the focal-proposed repository was included in my sources.list on ubuntu 20.04, although that doesn't seem to be recommended . After disabling it, the command apt-show-versions | grep newer shows around 30 packages whose installed version is newer than the one in the repository. Is there a simple way to downgrade all of them to the available version?
I wrote a similar answer here. To do this, first remove any lines with focal-proposed from /etc/apt/sources.list and /etc/apt/sources.list.d/*. Second, we are going to tell apt to allow downgrades. That means pinning focal, focal-updates and focal-security with priorities higher than 1000. Create /etc/apt/preferences.d/focal with this content:

Package: *
Pin: release n=focal
Pin-Priority: 1001

Package: *
Pin: release n=focal-updates
Pin-Priority: 1002

Package: *
Pin: release n=focal-security
Pin-Priority: 1003

If you don't use focal-updates or focal-security then skip those sections. Third, run the following:

sudo apt update
sudo apt upgrade
sudo apt dist-upgrade
sudo apt --fix-broken install
sudo apt autoremove

and keep rotating between those commands until everything is stable. Finally, delete /etc/apt/preferences.d/focal. Alternatively, you can just delete focal-proposed. Those packages will eventually migrate to focal-updates when they pass their tests and you'll be in sync again. With your small delta, --fix-broken install and autoremove probably won't be necessary, but apt will tell you when you read the output of the previous commands. To anyone else who comes across this post: downgrading is not supported. Any downgrade of significant size is likely to fail. This is a pretty trivial case, but going from focal to bionic would probably be a disaster and leave you with a broken system.
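One quick check afterwards, using the same command as in the question: if the pinning worked and everything was downgraded, this should print nothing.

apt-show-versions | grep newer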
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/616897", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/191408/" ] }
616,946
I'm using the following script, but getting unexpected output. perl -pi -e '/DB_CHARSET/ and $_.="define('SOMETHING'/, true);\n"' file.txt This command adds define(SOMETHING, true); Since the text starts with ' then have " inside, how do I escape 'SOMETHING so that I end up with define('SOMETHING', true) ? I've tried usual \ and it did not help.
You could use e.g. \x27 in the Perl string (the character code for ' in hex):

$ perl -e 'print "foo\x27bar"' -l
foo'bar

or handle the quoting in the shell so as to give Perl a raw ':

$ perl -e 'print "foo'\''bar"' -l
foo'bar

(First ' ends the quoted string, \' inserts the quote, the third ' starts a new quoted string.)
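Applied to the one-liner from the question, that could look like this (a sketch; note the stray / from the original attempt is dropped):

perl -pi -e '/DB_CHARSET/ and $_ .= "define(\x27SOMETHING\x27, true);\n"' file.txt
# or, handling the quote in the shell instead:
perl -pi -e '/DB_CHARSET/ and $_ .= "define('\''SOMETHING'\'', true);\n"' file.txt

Both append define('SOMETHING', true); on a new line after any line matching DB_CHARSET.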
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/616946", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/439474/" ] }
616,973
I have various Bluetooth headsets, and when I use these with Android and ChromeOS, I get decent bidirectional audio quality for calls and video chats. Unfortunately, when I use them on my arch Linux laptop (with PulseAudio 13.99.2+13+g7f4d7fcf5-1, bluez 5.55-1, and pulseaudio-modules-bt 1.4-3), I have a choice (under the pavucontrol configuration panel, or using pactl set-card-profile ): I can enable high quality sound using A2DP with a nice selection of codecs including SBC, AptX, and AAC, or I can enable the microphone using HSP/HSF instead of A2DP. With the HSP/HSF profile, the microphone "works" in the sense that there is bidirectional audio with both a source and sink in PulseAudio, but the sound quality is so bad that it can be hard to understand the words people are saying. My questions are: What is actually happening when Android or ChromeOS gets decent sound quality with the mic enabled? Is it possible to use a decent codec in HSP/HSF mode? Is it possible to use a mic with A2DP? Or is there some other Bluetooth mode? On the pulseaudio-module-bt web site , all of the codecs except LDAC seem to support decoding as well as encoding, but how do I actually use the decoding functionality? Is this only for using my laptop as a virtual headset for another device, or is there a way to use these codecs for the microphone of a Bluetooth headset? What concrete steps can I take to make my headset sound better in bidirectional calls? Or failing that, even if the microphone sounds bad, can I at least make the speakers sound good without completely disabling the microphone? Update Well, I don't fully understand this, but it seems that maybe good sound quality requires at least HFP 1.6, and bluez currently does not support HFP because doing so requires breaking backwards compatibility with oFono , which has become a contentious question. Until this has been sorted out, I worked around the problem by getting an Avantree DG80 Bluetooth audio dongle. It looks like an ordinary USB audio device in Linux but pairs with my headsets. The sound quality isn't as good as A2DP, but is noticeably better than what I was getting out of bluez/PulseAudio. It's also nice that I can switch between the A2DP and HSP/HSF modes either by switching between stereo and stereo+mono input in PulseAudio, or by double-tapping the switch on the front of the Bluetooth dongle.
A2DP is not a bidirectional profile. So it will not get bi-directional audio. What you want is something like the mSBC codec over HFP profile. There is a good article here that explains how to use it on Ubuntu 20.04 with pipewire instead of PulseAudio. It even includes a recording soundclip , and a playback soundclip (which was created by recording, with a mic the sound from the headset), to show the difference in quality.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/616973", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/121912/" ] }
617,010
Why doesn't grep -E work as I expect for negative whitespace? i.e. [^\s]+ I wrote a regex to parse my .ssh/config grep -Ei '^host\s+[^*\s]+\s*$' ~/.ssh/config # cat ~/.ssh/configHost opengrok-01-Eight Hostname opengrok-01.company.comHost opengrok-02-SIX Hostname opengrok-02.company.comHost opengrok-03-forMe Hostname opengrok-03.company.comHost opengrok-04-ForSam Hostname opengrok-04.company.comHost opengrok-05-Okay Hostname opengrok-05.company.comHost opengrok-05-Okay opengrok-03-forMe IdentityFile /path/to/fileHost opengrok-* User root What I got was Host opengrok-01-EightHost opengrok-03-forMeHost opengrok-05-OkayHost opengrok-05-Okay opengrok-03-forMe Where are SIX and Sam! It took me some time to realise that [^\s*]+ i.e. Match anything that isn't white space or * , 1 or more times was actually match anything that isn't \ , s or * , 1 or more times! The fix is surprisingly easy because that regex works on rex101.com (which uses perl) i.e. switch -E for -P # grep -Pi '^host\s+[^*\s]+\s*$' ~/.ssh/configHost opengrok-01-EightHost opengrok-02-SIXHost opengrok-03-forMeHost opengrok-04-ForSamHost opengrok-05-Okay What scares me is I have been using grep -E for years in lots of scripts and not spotted that before. Maybe I've just got lucky but more likely my test cases have missed that edge case! Questions: Other than changing to use grep -P for all my extended regex how should I be writing my grep -E for this case? Are there any other nasty gotchas that I have been missing with -E or that will bite me if I use -P ? grep (GNU grep) 3.1Copyright (C) 2017 Free Software Foundation, Inc.License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html>.This is free software: you are free to change and redistribute it.There is NO WARRANTY, to the extent permitted by law.Written by Mike Haertel and others, see <http://git.sv.gnu.org/cgit/grep.git/tree/AUTHORS>. Running on Windows 10, WSL running Ubuntu 18.04 (bash) ... but I got the same from a proper Linux install
The complement of \s is \S , not [^\s] which (with the help of -i ) excluded 'SIX' and 'Sam' from the result because they contain a literal s . How to grep -i for lines starting with "host", followed by one or more whitespaces and a sequence of one or more characters until the end of the line, where no literal * or whitespace can exist:

grep -Ei '^host[[:space:]]+[^*[:space:]]+$' file
Host opengrok-01-Eight
Host opengrok-02-SIX
Host opengrok-03-forMe
Host opengrok-04-ForSam
Host opengrok-05-Okay
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/617010", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/278031/" ] }
617,172
I am trying to retrieve 2 lines of text with sed, every 10 lines (10, 11, 20, 21, 30, 31, ...). For lines 10 and 11 I could use sed -n '10,11p' file or sed -n '10,+1p' file . And for the range (lines 10, 20, 30, ...) sed -n '10,~10p' file . But is it possible to combine somehow both things? sed -n '10,~10,+1p' file is not working. I assume this is a duplicate, but I can't find any reference. Thanks!
Use the ADDR1,ADDR2 form with FIRST~STEP as ADDR1 and +OFFSET as ADDR2 , so:

$ seq 30 | sed '10~10,+1!d'
10
11
20
21
30

In any case, note that both ~ and + are non-standard GNU extensions. See info sed 'line selection' and info sed 'range of lines' on a GNU system for details. POSIXly, you'd use:

seq 30 | sed -n '1n;n;n;n;n;n;n;n;n;p;n;p'
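If you prefer something that does not depend on GNU sed extensions at all, the same selection can be expressed with awk; a portable sketch:

seq 30 | awk 'NR % 10 == 0 || (NR % 10 == 1 && NR > 1)'

NR % 10 == 0 picks lines 10, 20, 30, ... and the second test picks the line right after each of them while excluding line 1.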
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/617172", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/112019/" ] }
617,195
I recently started to develop my Linux POSIX shell scripts in a more structured way. Let me explain: Code [A] is being sourced by some code [B], just some minimal example: #!/bin/sh# REQUIREMENTS# none# METHODS (public)# none; this file when sourced sets up basic tput colors, if available... Code [B] is being executed, again some minimal example follows: #!/bin/sh# REQUIREMENTS. /home/username/Development/sh/functions/func-color_support# METHODS (public)print_error ()# this prints custom heading and a given error message{...} What I did not originally realize, because it works fine in recent versions of Bash and Dash, is that I am duplicating the shebang ( #!/bin/sh ) each time I source some of the function file(s). I am working in VS Code with self-compiled ShellCheck and the obvious reason for me to add the shebang is naturally wanting to edit and update the functions without having to change the syntax highlighter every time. Hence my question follows: Is duplication of the shebang ( #!/bin/sh ) violating any POSIX shell programming rules or guidelines ? Further, can duplication of POSIX shebang ( #!/bin/sh ) when sourcing files into one piece cause problems, be it practical or in theory? I posted this on StackOverflow yesterday and still not having an answer, but this comment seems promising: See 2.1.1 here , quote: If the first line of a file of shell commands starts with the characters "#!", the results are unspecified.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/617195", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/126755/" ] }
617,245
I want column 2 to have the string Piebald, the order of the other strings doesn't matter. I have: HR0024 Black pastel PiebaldHR0028 PiebaldMC0023 PiebaldMC0039 Fire PiebaldMC0075 Piebald VPI AxanthicMC0082 Pastel PiebaldMC0120 Piebald Yellowbelly Het-LavenderMC0124 Super-Pastel Piebald Het-ClownMC0126 Fire Pastel PiebaldMC0144 Piebald Vanilla I want something like this: HR0024 Piebald pastel BlackHR0028 PiebaldMC0023 PiebaldMC0039 Piebald FireMC0075 Piebald VPI AxanthicMC0082 Piebald PastelMC0120 Piebald Yellowbelly Het-LavenderMC0124 Piebald Super-Pastel Het-ClownMC0126 Piebald Pastel FireMC0144 Piebald Vanilla Some rows are going to have the target strings on different columns (2, 3, or 4). I don't think cut -f does the job here, I think awk or sed is needed. Any help is appreciated.
With awk, we loop over the fields and if we find the chosen one, we swap with the second field. awk -v p="Piebald" '{ for (i=2;i<=NF;i++) if ($i == p) {$i = $2; $2 = p; break}}1' file 1 at the end means print the line. break exits early the loop after swapping -v var="value" is the standard way to pass a variable to awk . Also, this way, any sequential spaces in the input are shrinking to one space, which is the default output field separator. Output: HR0024 Piebald pastel BlackHR0028 PiebaldMC0023 PiebaldMC0039 Piebald FireMC0075 Piebald VPI AxanthicMC0082 Piebald PastelMC0120 Piebald Yellowbelly Het. lavender
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/617245", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/213707/" ] }
617,415
I'm looking for a self-maintaining, non-payware Linux. Actually, it doesn't necessarily need to be Linux at all, as long as it runs PostgreSQL and PHP in a stable manner. Once installed, and PostgreSQL and PHP are on it, I want to never have to think about it existing again. I want it to automatically detect, download and install any system patches and updates to the installed programs. The only interaction I want to have with the machine is to SFTP into it, to transfer files to it, as if it were an account at some webhost rather than my own machine. Some reasons for me wanting this are: Serious psychological stress/mental issues from 15+ years of babysitting servers. Lack of money and trust to pay for a "managed" server. Lack of trust to be able to pay for a webhost account. (Also, rarely any PostgreSQL support anyway, even if I could accept the risk.) Physical control. Several more practical issues which are important but hard to explain. Even besides all those reasons, wouldn't anyone want this unless your hobby is specifically to use a computer for the sake of using a computer? Please note that it doesn't count if there is some "optional mode" where it auto-updates, but which isn't reliable, and just breaks the server instead. If this is still not available, what exactly is the reason for this, other than "we want it to be difficult" or "ensuring work for administrators"? I consciously kept the requirement extremely basic, and don't involve a million weird and exotic software. PHP and PostgreSQL. The two basic tools in my toolbox. Hammer and saw, basically. Even just the stress alone from having to keep track of new updates/patches, and always be ready and able to log in and manually deal with it (what happens if I'm in an accident and wake up after an eight-month coma to find that my unpatched server is compromised?) would justify this a million times over in my mind. But coupled also with all the other reasons, such as people having no clue that you even need to update stuff (yes, this is really what the vast majority of people think about servers... myself included many years ago), I simply cannot understand how this is not a thing... if it isn't. It doesn't seem to be. Please prove me wrong. PS: I don't want to destroy this question by adding the further requirement that it has to run on my Raspberry Pi, but if it does, that is a huge bonus.
The short answer is no . Pin your packages to a specific version that you consider stable and feature-complete, but then also enable automatic updates so you can still receive backported security updates. This minimizes compatibility issues while maximizing your coverage of vulnerability. The reason there is no magic Linux distribution (or server, software, car, factory, healthcare system...) is not to create work and frustration, but because the operating environment is complex and dynamic. You cannot have stability and security for free and also with no effort. Someone, somewhere, must do something. From what I can tell, your requirements are stability , security , minimal user intervention (or less), and free . There are only four options: Never update, and feel confident that now that the server is stable it will remain stable. This assumes that global standards in software and hardware never change in the lifetime of the server, and that no new vulnerabilities occur. Stability with no involvement Always update, automatically. You will also still have to trust software maintainers to correctly create and push updates. Security with no involvement Version pinning with security backports is a slight modification in an attempt to integrate the best of both worlds. Mostly stable, mostly secure, minimal involvement Make your own judicious updates based on your confidence in maintainers and your understanding of current events. You know you, so you can best tailor the server to your needs. Most stable, most secure, most involvement Pay someone to do #3 for you. This is the most complete answer but in return for that you have to pay (and trust the provider, like you said). Most stable, most secure, least involvement, requires money and trust Every option involves some kind of compromise. You cannot have all your requirements. But depending on your tolerance (and where you are willing to compromise) any of them could resolve your issue on any number of server distributions.
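On a Debian/Ubuntu-style system, option 3 roughly translates into installing unattended-upgrades and letting it pull in only the archives you trust; a minimal sketch (file names and defaults assume a stock Debian/Ubuntu setup):

# apt install unattended-upgrades
# /etc/apt/apt.conf.d/20auto-upgrades
APT::Periodic::Update-Package-Lists "1";
APT::Periodic::Unattended-Upgrade "1";

The allowed origins in /etc/apt/apt.conf.d/50unattended-upgrades control which archives those automatic upgrades may come from, so you can limit them to security updates while pinning everything else to the release you consider stable.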
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/617415", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/439924/" ] }
617,438
How can I find a directory with a certain name but only if it is in another directory with a certain name? For example when I have the following directory structure a├── b│   └── e├── c│   └── e└── d I'd like to find the directory 'e', but only when it is located in a directory called 'c'. If possible with the find command alone without using grep.
Use GNU find with -path that searches the entire path for a match: $ find . -path '*/c/e'./a/c/e That will match any file or directory called e which is in a directory called c . Alternatively, if you don't have GNU find or any other that supports -path , you can do: $ find . -type d -name c -exec find {} -name e \;./a/c/e The trick here is to first find all c/ directories and then search only in them for things called e .
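Note that -path matches regular files as well as directories; if only the directory e should match, a type test can be added (still assuming GNU find or any find with -path):

find . -type d -path '*/c/e'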
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/617438", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/117750/" ] }
617,666
In my bash script I identify the machine name - kafka01 or kafka02 or kafka03 with the following regular expression bash code if [[ $(hostname -s) =~ ^kafka[[:digit:]] ]]then/tmp/run.shfi Example from hostname command: hostname -skafka01 But now we want to run the script - /tmp/run.sh also if the machine name is mngkafka01 or mngkafka02 or mngkafka03 . So we did the following; this should run the script run.sh if machine name is kafka01 or mngkafka01 , etc: if [[ $(hostname -s) =~ ^[mng]kafka[[:digit:]] ]]then/tmp/run.shfi But this regular syntax does not work. What is wrong with my regular expression code?
[mng] does not match mng , it matches either m or n or g . An appropriate regex is (^|^mng)kafka[[:digit:]]+$ . This matches (^|^mng) either the null string or mng at the string start, kafka , [[:digit:]]+$ one or more digits anchored to the string end. Notice that your previous regex would also raise a false positive for kafka7u as it was not anchored. Test:

arr=(kafka01 kafka7u mngkafka01 gkafka7x)
for i in "${arr[@]}"; do
    if [[ $i =~ (^|^mng)kafka[[:digit:]]+$ ]]; then
        echo "$i"
    fi
done

Output:

kafka01
mngkafka01

More information: Conditional Constructs (see [[...]] section). The fixed version of your original attempt:

if [[ $(hostname -s) =~ (^|^mng)kafka[[:digit:]]+$ ]]
then
    /tmp/run.sh
fi
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/617666", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/-1/" ] }
617,700
I want to create an alias that can handle parameters ( $1 ),and can fall back to a default value if a parameter is not provided. For example, $ alias foo='NUM=${1:-42}; echo $NUM' Invoked without params it works as I want: $ foo42 But invoked with a param, it prints both my value and default value: $ foo 6942 69 I don't understand why it's this way. How should it be done properly? How can I debug this kind of problem myself?
aliases are just text replacement before another round of shell syntax interpretation, they don't take arguments, so after: foo 69 The foo text is replaced with NUM=${1:-42}; echo $NUM and then the shell interprets the resulting text: NUM=${1:-42}; echo $NUM 69 $1 is still not set, so that's NUM=42; echo 42 69 For an inline script interpreted in the current shell and that take arguments, use functions instead: foo() { NUM=${1-42} printf '%s\n' "$NUM"} Here using ${1-42} instead of ${1:-42} , as if the user calls foo '' , I would assume they want $NUM being assigned the empty string.
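For day-to-day use the function definition simply replaces the alias line in ~/.bashrc (or wherever the alias lived); assuming the function above is defined, the resulting behaviour looks like this:

foo        # prints 42
foo 69     # prints 69
foo ''     # prints an empty line, because ${1-42} only falls back when $1 is unset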
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/617700", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/10745/" ] }
617,764
docker can only use cgroupv1, but fedora by default only use cgroupv2. How do I check if system is cgroupv1 compatible? So far answer in this question can only determine if cgroupv2 is installed. But it cannot determine if unified_cgroup_hierarchy set to 0 or 1. Is there any uniform way to determine if system is cgroupv1 compatible regardless whether the cgroupv2 installed or not? So far I use mount -l to check if there is cgroup2 on /sys/fs/cgroup . If there is, that means cgroupv2 only. Is this method universally applicable in all distros? So far I only tested on Fedora and ubuntu. If not, is there an universal way to determine this?
I would follow the approach used by systemd : if /sys/fs/cgroup exists and is on a cgroup2 file system, the system is running with a full unified hierarchy; if /sys/fs/cgroup exists and is on a tmpfs file system, if either /sys/fs/cgroup/unified or /sys/fs/cgroup/systemd exist and are on cgroup2 file systems, the system is using a unified hierarchy for the systemd controller only; if /sys/fs/cgroup/systemd exists and is on a cgroup file system (or, as a fallback, if it exists and isn’t on a cgroup2 file system), the system is using a legacy hierarchy.
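Put together as a small shell sketch (it relies on GNU stat's -f/-c options to report the filesystem type; that detection method is my assumption, not something systemd mandates):

fs_type() { stat -f -c %T "$1" 2>/dev/null; }

if [ "$(fs_type /sys/fs/cgroup)" = cgroup2fs ]; then
    echo "unified (cgroup v2 only)"
elif [ "$(fs_type /sys/fs/cgroup)" = tmpfs ]; then
    if [ "$(fs_type /sys/fs/cgroup/unified)" = cgroup2fs ] || [ "$(fs_type /sys/fs/cgroup/systemd)" = cgroup2fs ]; then
        echo "hybrid (v1 controllers, v2 for systemd)"
    else
        echo "legacy (cgroup v1)"
    fi
fi

stat -f -c %T prints the filesystem type name, e.g. cgroup2fs or tmpfs, which is enough to tell the three layouts apart.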
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/617764", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/77353/" ] }
617,811
I used the following command line in my bash script, in order to send the public key to a remote machine:

sshpass -p $pass scp /root/.ssh/authorized_keys root@$remote_host:~/.ssh/authorized_keys

but since we want to append the public keys from other hosts, I am searching for an approach to append in bash. I know that the option is to use ">>", but how do I use append with my approach? Or maybe there is another solution?
Use ssh together with tee -a file : < /root/.ssh/authorized_keys sshpass -p "$pass" ssh root@"$remote_host" "tee -a ~/.ssh/authorized_keys" or ssh with cat >> file if you prefer: < /root/.ssh/authorized_keys sshpass -p "$pass" ssh root@"$remote_host" "cat >> ~/.ssh/authorized_keys" Both tee and cat will read from stdin, which is sent to ssh with < file . The difference is, that tee , unlike >> will print what it appends. Note:The double quotes are needed, otherwise the >> or ~ will be interpreted by your shell before sending it to ssh command.
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/617811", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/-1/" ] }
617,815
With Windows' command prompt, I can get a simplified system hardware overview through systeminfo command. On Ubuntu I can get a lot of non GUI commands with lots of info plus verbose details. There are some GUI tools like sysinfo . But I can't find a non-GUI one. So, can I get a simplified non-verbose non-GUI system information summary through terminal on Ubuntu?
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/617815", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/37576/" ] }
617,898
I have an old SATA harddrive with important footage on it. 15 years ago this harddrive "died" on a Windows OS. I saved the HD. Now I am going to plug it back into my Linux OS to see if the drive picks up in lsblk . My question is what should I expect to see when I plug in the cable into my mobo and start up the server? Should the harddrive show up in lsblk right away or do I have to do something else? Here is the help menu for ddrescue
I did something like that just a few weeks ago -- I got everything from a disk that had been sitting in storage for over 10 years! I had no intention of restoring the hard disk itself, I just wanted to get all the files from it, and put them onto my current disks. For that, I was hoping that the disk -- once plugged in -- would be alive long enough for me to grab an image of the whole disk as it is. And lucky me, it was indeed alive long enough for that. First of all, boot into a Linux OS which isn't trying to be "smart" by automounting any disk it sees. You do not want to mount that old disk, you don't want to write to it whatsoever! Best yet, boot without the GUI. And also, make sure that you already have ddrescue installed. That's all you need, and of course, enough free space on your regular disk to hold the whole image from the old disk. Find out which device your old disk is showing as, by using blkid or blockdev --report or lsblk . For the example here, I'll use /dev/sdz -- you adjust "OLD" and "DEST" accordingly. Run these commands:

OLD=/dev/sdz
DEST=/path/to/where-you-want-to-put-the-image
ddrescue $OLD $DEST/saved.image $DEST/saved.mapfile

If that finishes without errors, then you're done! You can now sync , poweroff , and remove the old disk from the computer. But if you're not that lucky, then you can run another pass like this:

ddrescue -d -r3 $OLD $DEST/saved.image $DEST/saved.mapfile

Once you have the good image, then you don't need the old disk inside your computer anymore. Remove it, and put it back up on the shelf. You can now work with the image (preferably, make a copy of it), loop-mount it, and extract anything you want from it. Good luck!
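Once the image is safe, it can be attached read-only and its partitions mounted; a sketch of that last step (loop device and partition names are assumptions, use whatever losetup reports):

losetup -f -P --show -r $DEST/saved.image   # prints e.g. /dev/loop0
mount -o ro /dev/loop0p1 /mnt               # adjust the partition number

-P asks losetup to scan the image's partition table, and -r keeps everything read-only so the rescued image itself cannot be modified by accident.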
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/617898", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/328062/" ] }
617,899
Given this awk script: END {print "Y" | "cat" print "X"print "X"}# Output: # X# X# Y Why isn't Y printed first given that it's supposed to run before the other statements?
If you want the cat process to terminate (and the Y to be printed) before the X s, then just call close("cat") after the print "Y" | "cat" . All the rest is explained in the manpage, which you better read. Why isn't Y printed first given that it's supposed to run before the other statements? The cat is not supposed to write its output and terminate before the other statements. It may write its output before, after or in between your two print "X" calls. When you use something like print ... | "command ..." in awk, command .. is started as an asynchronous process with its stdin connected to a pipe (via popen("command ...", "w") ), and that process will not necessarily terminate and write its output before you call close("command ...") (or that is implicitly done when awk terminates). See an example like: BEGIN { print "foo" | "cat > file" print "bar" | "cat > file"} The result will be that file will contain both lines, foo and bar ; the cat > file command will not be run separately for each line.
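In other words, the example from the question behaves as you expected once the pipe is closed before the other print statements; one way to write it:

END {
    print "Y" | "cat"
    close("cat")
    print "X"
    print "X"
}

close("cat") closes the pipe and waits for the cat process to finish, so its output ("Y") appears before the two "X" lines that awk prints itself.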
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/617899", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/396319/" ] }
617,902
I need to check for a file with two extensions (.txt and .ctl) in a directory, and if the file is present with both extensions, call a script. If not, the job should fail. I tried some methods but it is not working as expected. Can anyone please help me?
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/617902", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/440360/" ] }
618,093
I want to trim lines less than 4 characters, except if the line begins with # or ! . Sample input: aabbb dasasdsad! f#!# sa&B@*! Output: dasasdsad! f#!# s&B@*!
With grep : < file.in grep -E '^[#!]|.{4}' > file.out That is, select lines that either start with # or ! or contain a sequence of 4 characters. Or with awk : < file.in awk '/^[#!]/ || length >= 4' > file.out Or with sed : < file.in sed -e '/^[#!]/b' -e '/.\{4\}/!d' > file.out
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/618093", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/279889/" ] }
618,109
Is there a command or shell script for converting a network script /etc/sysconfig/network-scripts/ifcfg-XXX to /etc/NetworkManager/system-connections/XXX.nmconnection config file? I am currently using the ifcfg-rh plugin and it works fine, but I want to have all interface configuration at one place. I think I could rewrite it manually for one or two interfaces, but I have to do it on several servers.
There isn't a straightforward simple way to do this. There might never be as converting between different configurations is fraught with error. That said if the keyfile plugin is listed first before the ifcfg-rh plugin (check NetworkManager --print-config), then cloning the old connection profile (nmcli con clone oldprofile newprofile) will create the new cloned profile in the keyfile format. You could then switch/up to the new one.
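A rough sketch of that workaround (the profile names here are placeholders):

nmcli connection clone eth0-old eth0-keyfile
nmcli connection up eth0-keyfile
nmcli connection delete eth0-old    # only once the new profile is confirmed working

Whether the clone lands in /etc/NetworkManager/system-connections/ in keyfile format depends on the plugin order mentioned above, so check the generated file before deleting the old profile.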
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/618109", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/215626/" ] }
618,128
Sorry guys, I had to edit my example, because I didn't express my query properly. Let's say I have the .txt file:

Happy sad
Happy sad
Happy sad
Sad happy
Happy sad
Happy sad
Mad sad
Mad happy
Mad happy

And I want to delete any string that is unique, leaving the file with:

Happy sad
Happy sad
Happy sad
Happy sad
Happy sad
Mad happy
Mad happy

I understand that sort is able to get rid of duplicates ( sort file.txt | uniq ), so is there any way we can do the opposite in bash using a command? Or would I just need to figure out a while loop for it? BTW uniq -D file.txt > output.txt doesn't work.
Using awk :

$ awk 'seen[$0]++; seen[$0] == 2' file
Happy sad
Happy sad
Happy sad
Happy sad
Happy sad
Mad happy
Mad happy

This uses the text of each line as the key into the associative array seen . The first seen[$0]++ will cause a line that has been seen before to be printed since the value associated with the line will be non-zero on the second and subsequent times the line is seen. The seen[$0] == 2 causes the line to be printed again if this is the second time the line has been seen (without this, you'll miss one occurrence of each duplicated line). This is related to awk '!seen[$0]++' which is sometimes used to remove duplicates without sorting (see e.g. How does awk '!a[$0]++' work? ). To only get one copy of the duplicated lines:

awk 'seen[$0]++ == 1' file

or,

sort file | uniq -d
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/618128", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/439848/" ] }
618,169
I'm trying to setup a firewall-cmd rule for incoming source IPv4 addresses using CentOS 7 . At present, I've managed to add-port for zone=public , but cannot find a way to do "granular filtering" for external access - like what is mentioned above. Is there any means to do that other than using rich language ? If (or not) so, how I'd go about using either of them for this goal?
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/618169", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/439592/" ] }
618,182
I'm in an SFTP session and navigated to a deep directory. I did ls -l , saw a small file, and want to just read it. I would prefer to do it while I'm still in SFTP, rather than starting up a new terminal, doing SSH, navigating to that deep directory, and then reading it, then going back to SFTP to continue. Is there a way to do that? SFTP has no cat command. I tried get file - , hoping it would treat - as stdout, but that just creates a file called "-". I tried get file /dev/stdout , but that results in: sftp> get file /dev/stdoutFetching /home/username/file to /dev/stdout/home/username/file 100% 506 9.0KB/s 00:00ftruncate "/dev/stdout": Invalid argumentCouldn't write to "/dev/stdout": Illegal seek Another way might be get file /tmp/file , come out of SFTP, read /tmp/file, delete /tmp/file, and get back into SFTP, but that's equally as tedious. Is there an easy way?
Having cat in sftp would mean that the contents of the file will still have to travel through the network to your local machine to be displayed for your eyes, which is basically the same as getting the file. You don't have to "come out of SFTP", as sftp has !command . So, you can do:

sftp> get file
sftp> !cat file
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/618182", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/440131/" ] }
618,615
Basically a duplicate of this question but being more clear and providing more details. What I want is to have a USB drive with two things: The debian installer, and another partition using the remaining space of the USB. It should not be used for persistance within debian. Just a regular usable partition. In my linked question fdisk/gparted is recommended, but those don't really work. GParted shows this: lsblk : sdc 8:32 1 7.5G 0 disk ├─sdc1 8:33 1 2.7G 0 part └─sdc2 8:34 1 2.9M 0 part fdisk /dev/sdc : The device contains 'iso9660' signature and it will be removed by awrite command. See fdisk(8) man page and --wipe option for moredetails. I tried ignoring this and creating a third partition anyway, it worked but made debian unable to boot. The bootmenu shows up but when trying to boot it gives several errors about not finding an ext3/ext4 partition or something similar. My PC (nautilus file manager) also doesn't detect the debian partition anymore after the fdisk write with the third partition. fdisk -l : Device Boot Start End Sectors Size Id Type/dev/sdc1 * 0 5706399 5706400 2.7G 0 Empty/dev/sdc2 1600 7487 5888 2.9M ef EFI (FAT-12/16/32) dd command used: dd if=debian.iso of=/dev/sdc bs=1M status=progress
ISO hybrid images are crazy combinations of iso9660 format and multiple partition tables to make sure it boots everywhere. This is how the superblock looks:

DEVICE OFFSET TYPE    UUID                   LABEL
sdb    0x8001 iso9660 2020-09-26-10-19-19-00 Debian 10.6.0 amd64 n
sdb    0x1fe  dos
sdb    0x200  gpt
sdb    0x0    mac

You can't remove any of these. If you want to add a new partition, simply tell fdisk to not wipe other signatures on the device and use only the dos partition table using fdisk --wipe=never -t dos /dev/sdX and add a new partition. I did a quick test with Debian netinstall ISO and the new partition is usable and the installer still boots.
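After writing the new partition it still needs a filesystem before it can hold files; for example (the partition number 3 is an assumption, use whatever number fdisk created):

mkfs.vfat -n DATA /dev/sdX3   # adjust device and partition number

FAT is a convenient choice here because the data partition then stays readable from Windows and macOS as well, but any filesystem works if the stick is only used on Linux.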
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/618615", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/296862/" ] }
618,635
I have two wireless interfaces and I want to run different APs on each. (Neither hardware supports multiple SSIDs.) I have a .conf file for each interface. How do I get hostapd to use them both automatically? This works fine: # hostapd -dd /etd/hostapd/hostapd.wlan0.conf /etc/hostapd/hostapd.wlan1.conf The problem is getting it to work automatically. One claim is to set in /etc/default/hostapd DAEMON_CONF="/etc/hostapd/hostapd.wlan0.conf /etc/hostapd/hostapd.wlan1.conf" but this doesn't work because the whole string is interpreted as being one file -- which doesn't exist. I see this is used by /usr/lib/systemd/system/hostapd.service as ExecStart=/usr/sbin/hostapd -B -P /run/hostapd.pid -B $DAEMON_OPTS ${DAEMON_CONF} However, I also see something called /usr/lib/systemd/system/[email protected] which does something different: ExecStart=/usr/sbin/hostapd -B -P /run/hostapd.%i.pid $DAEMON_OPTS /etc/hostapd/%i.conf (what does %i mean?)However, systemctl seems only to know about hostapd and not about hostapd@ .
You were almost there at the end of the question. %i is used for i nstanced services. Here's an excerpt of the role of instances described in the systemd.unit(5) : Unit files can be parameterized by a single argument called the"instance name". The unit is then constructed based on a "templatefile" which serves as the definition of multiple services or otherunits. A template unit must have a single "@" at the end of the name(right before the type suffix). The name of the full unit is formed byinserting the instance name between "@" and the unit type suffix. Inthe unit file itself, the instance parameter may be referred to using"%i" and other specifiers, see below. Or in the specifiers list : ├─────────┼───────────────┼───────────────────────────────────────────────────┤│ │ │For instantiated units this is the string between ││"%i" │Instance name │the first "@" character and the type suffix. Empty ││ │ │for non-instantiated units. │├─────────┼───────────────┼───────────────────────────────────────────────────┤ Using instanced services will run one independent hostapd daemon per interface. This allows for example to alter or kill one instance with the assurance the other instance, and thus the other interface won't be affected. You can do this for using instances: revert to default settings everywhere and disable and stop hostapd.service rename your configurations to accommodate an unmodified hostapd instanced service which expects with %i.conf only the interface name followed by .conf : mv -i /etd/hostapd/hostapd.wlan0.conf /etd/hostapd/wlan0.confmv -i /etd/hostapd/hostapd.wlan1.conf /etd/hostapd/wlan1.conf use the instanced version of the hostapd service, distinguished by the @ character and which is configured differently from the normal instance as already written in OP. Once per interface: systemctl enable --now hostapd@wlan0systemctl enable --now hostapd@wlan1 In the end the daemon will be running twice as these: /usr/sbin/hostapd -B -P /run/hostapd.wlan0.pid /etc/hostapd/wlan0.conf/usr/sbin/hostapd -B -P /run/hostapd.wlan1.pid /etc/hostapd/wlan1.conf (unless there was something in $DAEMON_OPTS ).
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/618635", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/435378/" ] }
618,638
We have the following line in the /etc/fstab file:

/dev/mapper/vg_D /data/container xfs defaults 0 0

but when we try to match the line as

LINE=/data/container
grep -qxF "$LINE" /etc/fstab || echo "line not in file !!!"
line not in file !!!

it seems that grep -qxF does not match the line containing /data/container. Where are we wrong, and how do we match the line?
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/618638", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/-1/" ] }
618,645
How do I generate a random single-digit number? I made use of seq -w 10 , which gives the output below:

01
02
03
04
05
06
07
08
09
10

How do I implement further logic so that only one digit is randomly selected from 0 to 10?
shuf -i0-9 -n1

shuf generates random permutations, -i accepts a range argument, so we provide the range 0-9 , and -n1 is like head , it prints only the first of the random digits. Or you could use the RANDOM built-in shell variable of Korn-like shells (ksh, zsh, bash, yash; also busybox sh):

echo "$((RANDOM % 10))"

Since RANDOM is (from man bash ) a random integer between 0 and 32767, i.e. one of 32768 equally likely values, the probability of some digits is very slightly lower, if that matters. For the digits 0-7 it is 100 * 3277 / 32768 ≈ 10.0006 %; for the digits 8 and 9 it is 100 * 3276 / 32768 ≈ 9.9976 %.
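If even that tiny bias matters, rejection sampling removes it at the cost of an occasional retry; a small bash/ksh sketch:

while n=$RANDOM; [ "$n" -ge 32760 ]; do :; done   # discard the top 8 values
echo "$((n % 10))"

Values 32760-32767 are thrown away, so the remaining 32760 values split into exactly 3276 per digit.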
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/618645", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/439926/" ] }
618,665
When a process breaks, as I know no output will be return anymore. But always after breaking ping command we have the statistics of the execution, and as I know it's part of the output. amirreza@time:~$ ping 4.2.2.4PING 4.2.2.4 (4.2.2.4) 56(84) bytes of data.64 bytes from 4.2.2.4: icmp_seq=1 ttl=51 time=95.8 ms64 bytes from 4.2.2.4: icmp_seq=2 ttl=51 time=92.3 ms^C--- 4.2.2.4 ping statistics ---2 packets transmitted, 2 received, 0% packet loss, time 1002msrtt min/avg/max/mdev = 92.321/94.052/95.783/1.731 msamirreza@time:~$ How does it work?
Ctrl + C makes the terminal send SIGINT to the foreground process group. A process that receives SIGINT can do anything, it can even ignore the signal. A common reaction to SIGINT is to exit gracefully, i.e. after cleaning up etc. Your ping is simply designed to print statistics upon SIGINT and then to exit. Other tools may not exit upon SIGINT at all. E.g. a usual behavior of an interactive shell (while not running a command) is to clear its command line and redraw the prompt. SIGINT is not the only signal designed to terminate commands. See the manual ( man 7 signal ), there are many signals whose default action is to terminate the process. kill sends SIGTERM by default. SIGTERM is not SIGINT. Both can be ignored. SIGKILL cannot be caught, blocked, or ignored , but it should be your last choice.
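You can reproduce ping's behaviour in an ordinary shell script by installing a handler for SIGINT; a minimal sketch:

#!/bin/sh
count=0
trap 'echo; echo "looped $count times"; exit 0' INT
while :; do
    count=$((count + 1))
    sleep 1
done

Press Ctrl+C while it runs: instead of dying silently, the script prints its own "statistics" line and then exits, which is the same pattern ping implements in C.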
{ "score": 7, "source": [ "https://unix.stackexchange.com/questions/618665", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/378964/" ] }
618,683
bug: resolv.conf auto-populates search and nameserverseeking: permanent or temporary (run each time system boots.) recommended solution: resolvconf package solves the auto-population issue(not to be confused with resolv.conf) -https://www.youtube.com/watch?v=NEyXDdBrw2c-https://unix.stackexchange.com/q/209760/441088-https://unix.stackexchange.com/q/362587/441088 My question is identical to the last (441088) except need resolv.conf to no longer update (auto-populate) search and nameservers #sudo vi resolv.conf # Dynamic resolv.conf(5) file for glibc resolver(3) generated by resolvconf(8)# DO NOT EDIT THIS FILE BY HAND -- YOUR CHANGES WILL BE OVERWRITTEN# 127.0.0.53 is the systemd-resolved stub resolver.# run "systemd-resolve --status" to see details about the actual nameservers.nameserver 84.200.70.40nameserver 84.200.69.80nameserver 192.168.4.1 nameserver 192.168.4.1nameserver 192.168.1.1nameserver 1.1.1.1search autopopulated-isp-router 1.1.1.1 apparently it just adds additional auto-populated nameservers below the already existing. (it is a little sneaky so you must keep checking resolv.conf to catch the auto-population of nameservers & search server, which are auto-appended to resolvconf settings) how can i change the resolv.conf to prevent auto-populating of nameserver and search with isp ip addresses? Tried with: # service networking stop && service network-manager start# service networking start && service network-manager stop Network managers: Wicd with both networking and network-manager stopped, then no wicd just nmtui with networking start then with network-manager start Replicable on debian 10.1 and kali 2020 (any version - tried them all) Replicable with dhcp or static configuation (yes able to ping local gateway network router and other ip's on network) # /etc/nsswitch.conf## Example configuration of GNU Name Service Switch functionality.# If you have the `glibc-doc-reference' and `info' packages installed, try:# `info libc "Name Service Switch"' for information about this file.passwd: files systemdgroup: files systemdshadow: filesgshadow: fileshosts: files mdns4_minimal [NOTFOUND=return] dns myhostname mymachinesnetworks: filesprotocols: db filesservices: db filesethers: db filesrpc: db filesnetgroup: nis
{ "score": 7, "source": [ "https://unix.stackexchange.com/questions/618683", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/441088/" ] }
618,706
I have an issue with sed that I have been able to recreate with the following simple example. Consider the following input file ( input.txt ): C:\A\quick\brown\fox\ jumps over the lazy dogC:\A\quick\brown\fox\ ran with the hounds I want to generate the following output C:\Animal\ jumps over the lazy dogC:\Animal\ ran with the hounds I tried to create a simple shell script, using sed, but it is not performing the required substitution. Here is my script: FROM_PATTERN="C:\A\quick\brown\fox\"TO_PATTERN="C:\Animal\"#FROM_PATTERN="C:\\A\\quick\\brown\\fox\\" # Escaping backslash does not help either#TO_PATTERN="C:\\Animal\\" # Escaping backslash does not help eithersed 's/$FROM_PATTERN/$TO_PATTERN/g' input.txt#sed 's/"$FROM_PATTERN"/"$TO_PATTERN"/g' input.txt # Quoting the pattern does not help either I am running bash version GNU bash, version 4.4.12(3)-release-(x86_64-unknown-cygwin)
\ is special: as a quoting operator in the shell, including inside double quotes where it can be used to escape itself, " , $ , ` and do line continuations. as a regexp operator (both for escaping and introducing new operators) on the left-hand (pattern) side of the s command in sed . in the replacement part of the s sed command where it can escape & , itself and newline (or introduce C-style escape sequences such as \n in some sed implementations). Also note that shell parameter expansion is not performed inside single-quoted strings. So here, you'd want to: use single quotes instead of double quotes around the \ characters escape the \ for both the left hand side and right hand side of the s command use double quotes in the part where variables must be expanded. from_regexp='C:\\A\\quick\\brown\\fox\\'escaped_replacement='C:\\Animal\\'sed "s/$from_regexp/$escaped_replacement/g" < your-file Or you could use perl instead of sed where you can do substitutions of fixed strings without having to worry about special characters if you do it like: from='C:\A\quick\brown\fox\'to='C:\Animal\'FROM=$from TO=$to perl -pe 's/\Q$ENV{FROM}\E/$ENV{TO}/g' < your-file See also How to ensure that string interpolated into `sed` substitution escapes all metachars to work with arbitrary strings.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/618706", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/313543/" ] }
618,869
just in order to understand I'm asking here... I get the message $ update-initramfs -u -k allupdate-initramfs: Generating /boot/initrd.img-5.9.0-1-amd64W: Possible missing firmware /lib/firmware/i915/rkl_dmc_ver2_01.bin for module i915update-initramfs: Generating /boot/initrd.img-5.8.0-3-amd64W: Possible missing firmware /lib/firmware/i915/rkl_dmc_ver2_01.bin for module i915 which makes me wonder if my hardware is correctly supported with the firmware installed. Thus I've tried to get this ver2_01 firmware, but unfortunately I cannot find it anywhere. I've non-free included in my sources, and I've also looked in the git repo git://git.kernel.org/pub/scm/linux/kernel/git/firmware/linux-firmware.git , but there I only find ver2_02 for rkl_dmc. Now, since ver2_02 is installed, can I just create a link vor ver2_01 pointing to ver2_02 ? Does the message above mean, that I something will not work properly - how can I check? Since i915 is related to my on-chip graphic card (to my knowledge), I'm afraid that e.g. OpenGL might not work correctly!? Please, could someone 'shed a light' on this for me, or could even point me to a solution? Kind Regards,George
rkl is apparently Rocket Lake, the codename for an Intel chipset that is supposed to be released in early 2021. So this is the Linux i915 driver already getting support for hardware that is not released yet. The i915 driver covers a wide range of Intel iGPUs, including all the current ones and sometimes even near-future ones if they follow a design similar to their predecessors. The kernel modules like i915 include metadata indicating the firmware files they may need: the i915 module needs to declare the firmware files for all supported Intel iGPU versions this way. The update-initramfs tool is not smart enough to cross-check the hardware information to find out which of the various firmware files declared by the i915 driver are actually needed by your hardware, so it will simply attempt to include all of them into initramfs. Unless you have installed firmware files for all the Intel iGPU variants, you may get some nuisance messages from update-initramfs ; but if they don't refer to the iGPU/chipset version you're actually using, you can simply ignore them. dmc in the firmware file name refers to "Display MicroController". A code comment in Linux i915 driver says: /** * DOC: csr support for dmc * * Display Context Save and Restore (CSR) firmware support added from gen9 * onwards to drive newly added DMC (Display microcontroller) in display * engine to save and restore the state of display engine when it enter into * low-power state and comes back to normal. */ I did not find any indication that the DMC would be used by anything other than power saving, so even if there were any problems, they would be more likely in the domain of power saving, not OpenGL. The patch updating the firmware version requirement from 2_01 to 2_02 was discussed in August this year so it's still pretty new. It looks like it did not get into your kernel version (5.9), but it will be in kernel version 5.10. And, as the Rocket Lake chipset is not released yet, the rkl_dmc_ver2_01.bin might have been distributed only internally at Intel (some Intel developers also participate in Linux kernel development, you know). But for the same reason, this is unlikely to cause any problems for you, other than an extra message or two from update-initramfs . In the unlikely case that you are actually testing pre-release hardware, you should be under a suitable NDA and you or someone in your organization should have a contact at Intel who can provide the ver2_01 firmware file for you if you really need it.
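If you want to confirm that nothing your hardware actually needs is missing, check whether the driver loaded a DMC firmware at boot; grepping the kernel log is a rough but safe check:

dmesg | grep -iE 'i915.*(dmc|firmware)'

On supported hardware you should see a message along the lines of "Finished loading DMC firmware i915/..."; if your iGPU is not a Rocket Lake part, the rkl_* file is simply never requested and the update-initramfs warning can be ignored.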
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/618869", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/441271/" ] }
618,877
I want to extend my LUKS-encrypted lvm (volume group) with a new physical volume. In my previous question I was told - in respect to my actual setup - that I need to encrypt the new physical volume prior to add it to my existing volume group. I would like to know what steps I have to respect, to successfully add that physical volume to my existing volume group. My actual stacking looks like this: nvme0n1p8 -> luks -> physical volume -> volume group -> lvlsblkNAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT├─nvme0n1p8 259:8 0 86,5G 0 part│ └─nvme0n1p8_crypt 253:0 0 86,5G 0 crypt│ ├─lvm--crypt-wurzel 253:1 0 30,7G 0 lvm /│ ├─lvm--crypt-home 253:2 0 80G 0 lvm /home My crypttab file looks like this: cat /etc/crypttabnvme0n1p8_crypt UUID=1697ec4a-b30b-4642-b4f3-6ba94afc40ec none luks,discard Now I want to add a new physical volume to that volume group. How do I add a new physical volume to that volume group without losing encryption? What modifications to which configuration file might I need to do?
You’ll need to set up encryption on the new physical device: sudo cryptsetup luksFormat /dev/newdevice (replacing newdevice as appropriate). Then open it: sudo cryptsetup luksOpen /dev/newdevice newdevice_crypt You’ll need to add a matching line to /etc/crypttab so that it’s opened at boot, and update your initramfs using the appropriate command for your distribution ( e.g. sudo update-initramfs -c -k all on Debian derivatives). Once you have newdevice_crypt , you can create a physical volume on it: sudo pvcreate /dev/newdevice_crypt or sudo pvcreate /dev/mapper/newdevice_crypt and add it to your volume group: sudo vgextend lvm /dev/mapper/newdevice_crypt (replacing lvm with the name of the volume group). You can share the passphrase for several encrypted devices; see Using a single passphrase to unlock multiple encrypted disks at boot .
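Once the volume group has the extra space, the usual last steps are to grow the logical volume and then the filesystem on it; a sketch (the volume names and the ext4 assumption are illustrative, adjust to your layout):

sudo lvextend -l +100%FREE /dev/lvm/home   # replace lvm/home with your VG/LV names
sudo resize2fs /dev/lvm/home               # for ext4; for XFS use xfs_growfs on the mount point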
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/618877", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/243440/" ] }
618,963
We set the following variable:

status=ok
echo $status
ok

Now we want to verify whether the variable matches with a regex, as follows:

[[ $status =~ [OK] ]] && echo "is the same"
[[ $status =~ OK ]] && echo "is the same"
[[ $status =~ "OK" ]] && echo "is the same"

but none of the above print "is the same". What is wrong in my regex?
[OK] Will match either character within the brackets, the brackets do not tell it to be case insensitive. You could do: [[ "$status" =~ ^[Oo][Kk]$ ]] or I would probably do the following: [[ "${status,,}" == ok ]] The ,, operator for parameter expansion will convert the entire variable to lowercase for the purpose of the comparison.
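Another option, if you are on bash and want the comparison itself to stay readable, is the nocasematch shell option, which makes pattern and regex matching inside [[ ]] case-insensitive:

shopt -s nocasematch
[[ $status == ok ]] && echo "is the same"
shopt -u nocasematch

Remember to switch it back off (or confine it to a function or subshell) so it does not silently affect other comparisons later in the script.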
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/618963", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/-1/" ] }
619,068
In Linux, is there any difference between after- ip link down -condition and real link absence (e.g. the switch's port burned down, or someone tripped over a wire). By difference I mean some signs in the system that can be used to distinguish these two conditions. E.g. will routing table be identical in these two cases? Will ethtool or something else show the same things? Is there some tool/utility which can distinguish these conditions?
There are difference between an interface which is administratively up but disconnected or administratively down . Disconnected The interface gets a carrier down status. Its proper handling might depend on the driver for the interface and the kernel version. Normally it's available with ip link show . For example with a virtual ethernet veth interface: # ip link add name vetha up type veth peer name vethb# ip link show type veth2: vethb@vetha: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN mode DEFAULT group default qlen 1000link/ether 02:a0:3b:9a:ad:4d brd ff:ff:ff:ff:ff:ff3: vetha@vethb: <NO-CARRIER,BROADCAST,MULTICAST,UP,M-DOWN> mtu 1500 qdisc noqueue state LOWERLAYERDOWN mode DEFAULT group default qlen 1000 link/ether 36:e3:62:1b:a8:1f brd ff:ff:ff:ff:ff:ff vetha which is itself administratively UP, displays NO-CARRIER and the equivalent operstate LOWERLAYERDOWN flags: it's disconnected. Equivalent /sys/ entries exist too: # cat /sys/class/net/vetha/carrier /sys/class/net/vetha/operstate0lowerlayerdown In usual settings, for an interface which is administratively up the carrier and operstate match (NO-CARRIER <=> LOWERLAYERDOWN or LOWER_UP <=> UP). One exception would be for example when using IEEE 802.1X authentication (advanced details of operstate are described in this kernel documentation: Operational States , but it's not needed for this explanation). ethtool queries a lower level API to retrieve this same carrier status. Having no carrier doesn't prevent any layer 3 settings to stay in effect. The kernel doesn't change addresses or routes when this happens. It's just that in the end a packet that should be emitted won't be emitted by the interface and of course no reply will come either. So for example trying to connect to an other IPv4 address will sooner or later trigger again an ARP request which will fail, and the application will receive a "No route to host". Established TCP connections will just bid their time and stay established. Administratively down Above vethb has operstate DOWN and doesn't display any carrier status (since it has to be up to detect this. A physical Ethernet interface of course behaves the same). When the interface is brought down ( ip link set ... down ), the carrier can't be detected anymore since the underlying hardware device was very possibly powered off and the operstate becomes "down". ethtool will just say there's no link too, so can't be used reliably for this (it will surely display a few unknown entries too but is there a reliable scheme for this?). This time this will have an effect on layer 3 network settings. The kernel will refuse to add routes using this interface and will remove any previous routes related to it: the automatic ( proto kernel ) LAN routes added when adding an address any other route added (eg: the default route) in any routing table (not only the main routing table) depending directly on the interface ( scope link ) or on other previous deleted routes (probably then scope global ) . As these won't reappear when the interface is brought back up ( ip link set ... up ) they are lost until an userspace tool adds them back. Userspace interactions When using recent tools like NetworkManager, one can get confused and think a disconnect is similar to an interface down. That's because NM monitors links and will do actions when such events happen. To get an idea the ip monitor tool can be used to monitor from scripts, but it doesn't have a stable/parsable output currently (no JSON output available), so its use gets limited. 
So when a wire is disconnected, NM will very likely consider it's not using the current configuration anymore unless a specific setting prevents it: it will then delete the addresses and routes itself. When the wire is connected back, NM will apply its configuration again: it adds back addresses and routes (using DHCP if relevant). This looks the same but isn't: all this time the interface stayed up , or it wouldn't even have been possible for NM to be warned when the connection was back.
Summary
It's easy to distinguish the two cases:
- ip link show will display NO-CARRIER + LOWERLAYERDOWN for a disconnected interface, and DOWN for an interface administratively brought down.
- setting an interface administratively down (and up) can lose routes
- losing carrier and recovering it doesn't disrupt network settings. If the delay is short enough it should not even disrupt ongoing network connections, but applications managing the network might react and change network settings, sometimes with a result similar to the administratively down case
- you can use commands like ip monitor link to receive events about interfaces set administratively down/up or carrier changes, or ip monitor to receive all the multiple related events (including address or route changes) that would happen at this time or shortly after. Most ip commands (but not ip monitor ) have a JSON output available with ip -json ... to help scripts (along with jq ).
Example (continuing from the first veth example): vethb is still down:
# ip -j link show dev vethb | jq '.[].operstate'
"DOWN"
# ip -j link show dev vetha | jq '.[].operstate'
"LOWERLAYERDOWN"
Set vethb up; both now get a carrier:
# ip link set vethb up
# ip -j link show dev vetha | jq '.[].operstate'
"UP"
This tells about the 3 usual states: administratively down , lowerlayerdown (ie: up but disconnected) or up (ie: operational).
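As a rough scripting illustration of the same idea (a sketch only — the interface name eth0 is just an example, and operstate can also take other values such as unknown or dormant on some interfaces):
state=$(cat /sys/class/net/eth0/operstate)
case $state in
  up)             echo "administratively up, carrier present" ;;
  lowerlayerdown) echo "administratively up, but no carrier (cable unplugged?)" ;;
  down)           echo "administratively down" ;;
  *)              echo "other state: $state" ;;
esac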
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/619068", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/165555/" ] }
619,083
Simple question: for security, is it safer to use ssh -X or ssh -Y? As far as I know, ssh -Y is X11 trusted, so no additional controls are applied, which suggests it is better to use -X for security reasons. What do you think? I usually use -Y, because if I use the -X option I get this message when I ssh to the target machine:
Warning: untrusted X11 forwarding setup failed: xauth key data not generated
I connect to the machine over the LAN.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/619083", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/80389/" ] }
619,397
I'm using Fedora 33. I'm confused by what is probably a simple fact. If I try to run a program which isn't installed, dnf will search the repos and offer to install it. If I answer yes, it will be installed and I'm able to use it normally. If I were to try to install that software myself ( dnf install <package> ), I would be asked for the superuser password. For my example I tried installing "ranger" both ways and I can't seem to find a difference. Both are located in the same bin directory and both run under my user in the process list. My question is the following: if it is possible to simply install software without a password anyway, why does dnf ask for it?
In the scenario you describe, "you" are not installing the package that is installed - the PackageKit service (which is already running as root) is doing the installation on your behalf. PackageKit uses PolicyKit to determine who is or is not allowed to install packages - if you don't want unprivileged users to be able to install software, you can change the PolicyKit policy to disallow it. One of the reasons PackageKit exists is so the user doesn't have to know the details of the package management system on whatever distro they are using (whether it's dnf , or yum , or apt , or pacman , or whatever) - they just have to ask PackageKit to install the package, and PackageKit deals with the details of package management and installation on that particular distro. When you manually use dnf to install a package on Fedora, you are directly interacting with the package management system - PackageKit is not doing this operation on your behalf - which is why you need to be running as root (or provide the root password) in order to install packages at that level. But, as an ordinary user, you can use pkcon install <packagename> to ask PackageKit to do the installation on your behalf (as long as system policy as configured in PolicyKit allows you to do so), without ever having to be running as root (via su or sudo etc.), or providing the root password. The behavior you're talking about, where you're prompted to install a package if you try to run something on the command line that's provided by a package you do not already have installed, is driven by a package called " PackageKit-command-not-found ". I absolutely detest this behavior, and one of the very first things I do on any newly installed Linux system is to remove the PackageKit-command-not-found package so this is not done.
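To illustrate the pkcon route mentioned above (using the package from the question as the example):
pkcon search name ranger   # ask the PackageKit daemon to look the package up
pkcon install ranger       # the daemon, already running as root, performs the install if PolicyKit allows it
pkcon remove ranger        # removal goes through the same policy check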
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/619397", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/311476/" ] }
619,425
I got an .xinitrc file with the following line:
# it will start my window manager
ssh-agent dwm
After that I got an ssh-agent process, but environment variables like $SSH_AGENT_PID and $SSH_AUTH_SOCK do not exist when I start a terminal from dwm . Any ideas why? I wish there was only one ssh-agent process. Each call to ssh-add should connect to the agent that started dwm .
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/619425", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/441813/" ] }
619,429
I'm trying to install TeXLive 2020 onto iSh (which uses Alpine Linux) and have extracted all files in the install-tl-unx.tar.gz tarball from the TUG webpage . However, when I execute ./install-tl from the extracted directory I get the following error:
Loading http://www.ctan.org/tex-archive/systems/texlive/tlnet/tlpkg/texlive.tlpdb
./install-tl: TLPDB::from_file could not initialize from: http://www.ctan.org/tex-archive/systems/texlive/tlnet/tlpkg/texlive.tlpdb
./install-tl: Maybe the repository setting should be changed.
./install-tl: More info: https://tug.org/texlive/acquire.html
This happens whether or not I select a specific repository via the --select-repository option, and happens both as root and as a normal user. How can I solve this problem?
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/619429", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/381284/" ] }
619,477
It seems that the file name length limitation is 255 "characters" on Windows (NTFS), but 255 "bytes" on Linux (ext4, BTRFS). I am not sure what text encoding those file systems use for file names, but if it is UTF-8, one Asian character, such as Japanese, could take 3 or more bytes. So, for English, 255 bytes means 255 characters, but for Japanese, 255 bytes could mean far fewer characters, and this limitation could be problematic in some cases. Other than methods that are practically impossible for a general user, like modifying the Linux file system/kernel, is there any practical way to increase the limitation so that I could have a guaranteed 255-character file name capacity for Asian characters on Linux?
While glibc defines #define FILENAME_MAX 4096 on Linux, which limits path length to 4096 bytes, there's a hard 255-byte limit in the Linux VFS which all filesystems must conform to. The said limit is defined in /usr/include/linux/limits.h :
/* SPDX-License-Identifier: GPL-2.0 WITH Linux-syscall-note */
#ifndef _LINUX_LIMITS_H
#define _LINUX_LIMITS_H
#define NR_OPEN         1024
#define NGROUPS_MAX    65536    /* supplemental group IDs are available */
#define ARG_MAX       131072    /* # bytes of args + environ for exec() */
#define LINK_MAX         127    /* # links a file may have */
#define MAX_CANON        255    /* size of the canonical input queue */
#define MAX_INPUT        255    /* size of the type-ahead buffer */
#define NAME_MAX         255    /* # chars in a file name */
#define PATH_MAX        4096    /* # chars in a path name including nul */
#define PIPE_BUF        4096    /* # bytes in atomic write to a pipe */
#define XATTR_NAME_MAX   255    /* # chars in an extended attribute name */
#define XATTR_SIZE_MAX 65536    /* size of an extended attribute value (64k) */
#define XATTR_LIST_MAX 65536    /* size of extended attribute namelist (64k) */
#define RTSIG_MAX         32
#endif
And here's a piece of code from linux/fs/libfs.c which will throw an error in case you dare use a filename longer than 255 chars:
/*
 * Lookup the data. This is trivial - if the dentry didn't already
 * exist, we know it is negative. Set d_op to delete negative dentries.
 */
struct dentry *simple_lookup(struct inode *dir, struct dentry *dentry, unsigned int flags)
{
        if (dentry->d_name.len > NAME_MAX)
                return ERR_PTR(-ENAMETOOLONG);
        if (!dentry->d_sb->s_d_op)
                d_set_d_op(dentry, &simple_dentry_operations);
        d_add(dentry, NULL);
        return NULL;
}
So, not only will you have to redefine this limit, you'll have to rewrite the filesystems' source code (and disk structure) to be able to use it. And then outside of your device, you won't be able to mount such a filesystem unless you use its extensions to store very long filenames (like FAT32 does). TLDR: there's a way but unless you're a kernel hacker/know C very well, there's no way.
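A quick way to see the limit in action from a bash shell (a sketch; the file name content is arbitrary):
$ touch "$(printf 'x%.0s' {1..255})"   # 255 bytes: accepted
$ touch "$(printf 'x%.0s' {1..256})"   # 256 bytes: fails with "File name too long" (ENAMETOOLONG)
$ getconf NAME_MAX .                   # what the current filesystem reports
255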
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/619477", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/379327/" ] }
619,625
I asked about Linux's 255-byte file name limitation yesterday, and the answer was that it is a limitation that cannot/will not be easily changed. But I remembered that most Linux supports NTFS, whose maximum file name length is 255 UTF-16 characters. So, I created an NTFS partition, and try to name a file to a 160-character Japanese string, whose bytes in UTF-8 is 480. I expected that it would not work but it worked, as below. How come does it work, when the file name was 480 bytes? Is the 255-byte limitation only for certain file systems and Linux itself can handle file names longer than 255 bytes? ----PS----- The string is the beginning part of a famous old Japanese essay titled "方丈記" . Here is the string. ゆく河の流れは絶えずして、しかももとの水にあらず。よどみに浮かぶうたかたは、かつ消えかつ結びて、久しくとどまりたるためしなし。世の中にある人とすみかと、またかくのごとし。たましきの都のうちに、棟を並べ、甍を争へる、高き、卑しき、人の住まひは、世々を経て尽きせぬものなれど、これをまことかと尋ぬれば、昔ありし家はまれなり。 I had used this web application to count the UTF-8 bytes.
The answer, as often, is “it depends”. Looking at the NTFS implementation in particular, it reports a maximum file name length of 255 to statvfs callers, so callers which interpret that as a 255-byte limit might pre-emptively avoid file names which would be valid on NTFS. However, most programs don’t check this (or even NAME_MAX ) ahead of time, and rely on ENAMETOOLONG errors to catch errors. In most cases, the important limit is PATH_MAX , not NAME_MAX ; that’s what’s typically used to allocate buffers when manipulating file names (for programs that don’t allocate path buffers dynamically, as expected by OSes like the Hurd which doesn't have arbitrary limits). The NTFS implementation itself doesn’t check file name lengths in bytes, but always as 2-byte characters; file names which can’t be represented in an array of 255 2-byte elements will cause a ENAMETOOLONG error. Note that NTFS is generally handled by a FUSE driver on Linux. The kernel driver currently only supports UCS-2 characters, but the FUSE driver supports UTF-16 surrogate pairs (with the corresponding reduction in character length).
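As a quick check of what this describes (the mount points here are just examples), getconf asks pathconf/statvfs for the reported limit:
$ getconf NAME_MAX /mnt/ntfs    # an ntfs-3g mount still reports 255 here...
255
$ getconf NAME_MAX /home        # ...just like ext4, even though NTFS counts UTF-16 characters and ext4 counts bytes
255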
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/619625", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/379327/" ] }
619,654
What is the purpose and benefit of using the --system option when adding a user, or even a group? I'd like to know why I'm seeing this added to many Docker containers and recommended as a best practice. For example's sake, I'm adding a non-root user to an Alpine Docker container for use when developing and again for runtime. The current versions I'm using are:
- adduser version is 3.118, and the Alpine adduser man
- Alpine version is 3.12
- Docker v19.03.13 on Windows 10 (20H2 update)
The man page reads "Create a system user", O.K., but what do you get as a system user? Or by being in a system group when using addgroup -S ? I do not have a System Admin background, so I'm not sure what that means and would like clarity as to when I should use this.
Some Other Reading
Searching Google has provided some insight but no way to verify what I've read: that it does not ask you to set a password for the user, but then I can use --disabled-password for that. I then found this post here; I got that it's for organization purposes, but it does not help me much either. I'm only a little bit clearer, yet not confident enough to explain when to use them. What's the difference between a normal user and a system user?
System users are like normal users but are set up for an organizational purpose. The only differences are:
- They don't have an expiry date (no aging set)
- Their uids are below 999, as set in /etc/login.defs (can be changed)
Also there are standard system users which come with the OS or with a package install; most of them have the above attributes (conventional):
- The majority of them have /sbin/nologin or /bin/false as a shell
- They have "*" or "!!" in /etc/shadow , meaning that no one can simply log in as them
- And they can have the attributes that I have shown in the first section
To check the list of these standard system users: /usr/share/doc/setup-/uidgid
An example could be adding a mypapp user as a system user; if, say, we want to set up an Identity Access Management policy in our environment and automate it for all users, we only have to exclude the system users, identified by their uids, because if the mypapp account expired the application would stop running.
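For illustration, a sketch of how such a system account is typically created (the account name myapp is just an example; adjust the shell path to your distro):
# Debian-style adduser
adduser --system --group --no-create-home --shell /usr/sbin/nologin myapp
# Alpine/BusyBox adduser, as often seen in Dockerfiles
addgroup -S myapp && adduser -S -G myapp -H -s /sbin/nologin -D myapp
grep myapp /etc/passwd   # the uid falls in the system range defined in /etc/login.defs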
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/619654", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/9071/" ] }
619,658
-f, --canonicalize
canonicalize by following every symlink in every component of the given name recursively; all but the last component must exist
-e, --canonicalize-existing
canonicalize by following every symlink in every component of the given name recursively, all components must exist
I am not able to understand what -f or -e does. The wordings are not at all clear. A canonical name is basically the shortest unique absolute path. But what is meant by a component of a name? Does it mean subdirectories? What does "recursively" mean here? What I understand is: recursively search every subdirectory of the given canonical name. But then it doesn't make sense for a symbolic link. Next, what does it mean for the "-e" option that all components must exist? What is a component here? Can someone please help with a simple example? Thanks
First, component here means an element of the path. Example: /home/user/.ssh => <component1>/<component2>/<component3>
1- Suppose we have a directory structure like this:
lols
├── lol
├── lol1 -> lol
└── lol2 -> lol1
And also the non-existent directory here will be lols/lol3. So you can compare the output of each command:
readlink -f lols/lol1 : /lols/lol
readlink -e lols/lol1 : /lols/lol
The output here will be the same because all the components of the path exist.
readlink -f lols/lol8 : lols/lol8
readlink -e lols/lol8 : <empty output>
The output here is different: with -f it will still show a result because at least one component of the path exists ( lols ), while with -e the output will be empty because all path components must exist.
And the last one is with multiple non-existent directories:
readlink -f lols/lol8/lol10 : <empty output>
readlink -e lols/lol8/lol10 : <empty output>
Here the output will be empty because, as described in the man page:
-f : all but the last component must exist => not respected
-e : all components must exist => not respected
2- Why not use only ls -l : Suppose we create a file named file1, create a symlink to this file named link1, and from link1 create another symlink link2:
touch file1 : file1
ln -s file1 link1 : link1 -> file1
ln -s link1 link2 : link2 -> link1
Then with ls -l link2 the output will be: link2 -> link1
And if we use readlink link2 the output will be: link1 ; same as ls -l
But if we use readlink -f|-e link2 the output will be: file1 ; so it will point to the source file.
So when to use readlink instead of ls ?
When there are nested symlinks (recursive read).
When the files/directories are in different locations.
So it is better to use readlink instead of ls to avoid errors.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/619658", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/402997/" ] }
619,779
S-123-P Bash Pocket Ref. 2010 Cengage Learning $55
E-P234 Python Pocket Ref. 2012 Cengage Learning $45
55-MNP Unix System Programming 2001 Sybex $230
I need to replace the numbers after the $ , not including the $ , with * , so the output needs to be:
S-123-P Bash Pocket Ref. 2010 Cengage Learning $**
E-P234 Python Pocket Ref. 2012 Cengage Learning $**
55-MNP Unix System Programming 2001 Sybex $***
I've been able to replace the last digit or last 2, but not every digit after the $ . I've tried sed and awk gsub but nothing I try seems to work.
You can use sed and take advantage of it not touching the tabs/spaces or the fields' indentation:
sed -E ':a s/(\$\**)[^*]/\1*/; ta' infile
replace every ($<zero-or-more-*>)[<non-*-character>] with $<zero-or-more-*><plus-additional-*-added> ( \1* ; \1 is the back-reference to the first matched group in sed , defined by (...) ) until all <non-*-character> s are replaced with * s.
A bit complex, but in case you wanted to force the changes only on the last field, you could use the command in this way:
sed -E ':a s/(\$\**)[^*]([^$]*)$/\1*\2/; ta' infile
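Since you mentioned trying awk gsub: a working awk version would be something like
awk '{ gsub(/[0-9]/, "*", $NF) } 1' infile
but note it assumes the price is always the last field and, because awk rebuilds the record, it squeezes the original runs of spaces/tabs down to single spaces — which is exactly why the sed approach above is preferable here.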
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/619779", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/442107/" ] }
619,789
So, I want to cycle through previews of windows, not applications with Alt - Tab . I tried setting Alt + Tab to switch windows in settings, but this made it seemingly do nothing. I did notice that if I were in Firefox with this setting on, when I held Alt + Tab and used the scroll-wheel, it cycled through tab history.
I solved the problem. In order to fix this, you must run ALL the following commands. On my first attempt, I only ran 1 but you have to run them all.
gsettings set org.gnome.desktop.wm.keybindings switch-applications "[]"
gsettings set org.gnome.desktop.wm.keybindings switch-applications-backward "[]"
gsettings set org.gnome.desktop.wm.keybindings switch-windows "['<Alt>Tab', '<Super>Tab']"
gsettings set org.gnome.desktop.wm.keybindings switch-windows-backward "['<Alt><Shift>Tab', '<Super><Shift>Tab']"
This worked in my case.
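If you want to inspect or undo these keys later (a side note, not required for the fix):
gsettings get org.gnome.desktop.wm.keybindings switch-windows
gsettings reset org.gnome.desktop.wm.keybindings switch-applications
gsettings reset org.gnome.desktop.wm.keybindings switch-windows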
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/619789", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/442104/" ] }
619,791
I am trying to compare two files ( Extensions.txt and Temp.txt ). If there is a line that does not partially match from Extensions.txt in Temp.txt I would like to append the missing line to Temp.txt . Extensions.txt (Very basic, one column):
111
1234
4321
Temp.txt :
1234/sip:[email protected]:5060 9421b96c5e Avail 1.480
4321/sip:[email protected]:5060 e9b6b979a4 Avail 1.855
Basically, what I want to do is find a match based on everything before the / in the first column and if there is no match, I would like to print the non-matching line to the bottom of the file so that it would end up like this:
1234/sip:[email protected]:5060 9421b96c5e Avail 1.480
4321/sip:[email protected]:5060 e9b6b979a4 Avail 1.855
111
So far I have attempted grep -v and it doesn't produce the results that I want, I also tried with awk and it seems like that is the way to go, however I do not have a full understanding of how awk works in order to produce the appropriate results.
You can parse the files with awk
awk -F '/' '
  FNR == NR {seen[$1] = $0; next}
  {if ($1 in seen) print seen[$1]; else missing[$1]}
  END {for (x in missing) print x}
' Temp.txt Extensions.txt
Output:
1234/sip:[email protected]:5060 9421b96c5e Avail 1.480
4321/sip:[email protected]:5060 e9b6b979a4 Avail 1.855
111
Set field separator to slash, -F '/'
The action after FNR == NR is executed for the lines of the first input file. We store the lines in the associative array seen as keys, and go to next line.
The second action is executed for the second file, when FNR != NR . If the first field matches, we print the stored line, else we save the field into another array missing .
At the END , we print the missing lines.
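If you then want the result to end up in Temp.txt as asked, don't redirect straight onto a file awk is still reading (the shell would truncate it before awk reads it); a sketch using a temporary file:
awk -F '/' '...' Temp.txt Extensions.txt > Temp.txt.new && mv Temp.txt.new Temp.txt
(where '...' stands for the program above).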
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/619791", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/442116/" ] }
619,861
I am having a problem with the nc command: I cannot use a proxy, because there is no -x option, which should be there.
nc -h
[v1.10-41]
connect to somewhere: nc [-options] hostname port[s] [ports] ...
listen for inbound: nc -l -p port [-options] [hostname] [port]
options:
 -c shell commands as `-e'; use /bin/sh to exec [dangerous!!]
 -e filename program to exec after connect [dangerous!!]
 -b allow broadcasts
 -g gateway source-routing hop point[s], up to 8
 -G num source-routing pointer: 4, 8, 12, ...
 -h this cruft
 -i secs delay interval for lines sent, ports scanned
 -k set keepalive option on socket
 -l listen mode, for inbound connects
 -n numeric-only IP addresses, no DNS
 -o file hex dump of traffic
 -p port local port number
 -r randomize local and remote ports
 -q secs quit after EOF on stdin and delay of secs
 -s addr local source address
 -T tos set Type Of Service
 -t answer TELNET negotiation
 -u UDP mode
 -v verbose [use twice to be more verbose]
 -w secs timeout for connects and final net reads
 -C Send CRLF as line-ending
 -z zero-I/O mode [used for scanning]
port numbers can be individual or ranges: lo-hi [inclusive];
hyphens in port names must be backslash escaped (e.g. 'ftp\-data').
Is my netcat outdated? How do I update it? Thanks for help.
Looks like you have the "traditional" netcat ( netcat-traditional ) installed. The -x option is available in the OpenBSD netcat ( netcat-openbsd ). See also: What are the differences between netcat-traditional and netcat-openbsd? on Ask Ubuntu.
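On Debian/Ubuntu-based systems (an assumption — package names differ elsewhere), you could install the OpenBSD variant and check which implementation your nc actually points at, for example:
sudo apt install netcat-openbsd
readlink -f "$(command -v nc)"   # e.g. resolves to nc.openbsd via the alternatives system
nc -h 2>&1 | grep -i proxy       # the -x/-X proxy options should now be listed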
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/619861", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/442181/" ] }
620,070
While using the backup tool duplicity , I noticed that if I rename a file on the source, the data will be re-sent over the network to the destination , which is a bit sad. Since duplicity uses librsync internally, I decided to have a look at rsync . This ArchWiki page states: Handles renames Moved/renamed files are detected and not stored or transferred twice. It typically means that a checksum of files or its chunks is computed. Applications missing this functionality can be supplemented by combining with hsync AUR , which only synchronizes renames. rsync: Handles renames: No Does this really mean that, when using rsync , there is no way to prevent 10 GB from being re-transferred over the network to the destination, if I rename /test/10GBfile to /test/10GBfile_newname on the source computer? Given the long-time popularity of rsync , is there no mode in which this would be better handled?
There is no mechanism for rsync to track renames as it does not maintain state other than while it is running. If you rename /test/10GBfile to /test/10GBfile_newname on the source computer, then by default rsync can only see that 10GBfile has been deleted and that 10GBfile_newname has been created. The --fuzzy parameter ( -y ) may help identify 10GBfile as a potential source of data for 10GBfile_newname on the target, thereby avoiding a network copy at the expense of a file copy. However, it can (mostly) only consider matches of files in the same directory so although your example would match, a rename of /test/10GBfile to /test/otherdir/10GBfile_newname would not. Notice also that the documentation ( man rsync ) advises that if you want to use --delete you should instead use either --delay-updates or --delete-after so that potential matches for --fuzzy are not deleted before they can be used.
Example
# Prepare an uncompressible 100MB file
mkdir -p /tmp/test
dd bs=1M count=100 iflag=fullblock if=/dev/urandom >/tmp/test/file1
# Normal first-time copy
rsync -av --fuzzy --delete-after /tmp/test/ remote:/tmp/test
# Skip copy because unchanged
rsync -av --fuzzy --delete-after /tmp/test/ remote:/tmp/test
# Rename file (per your example)
mv /tmp/test/file1 /tmp/test/file2
# Fast copy because fuzzy match
rsync -av --fuzzy --delete-after /tmp/test/ remote:/tmp/test
Add two more -v flags (i.e. rsync -avvv … ) to see the block-by-block detail of what's going on.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/620070", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/59989/" ] }
620,071
uniq seems to do something different than uniq -u , even though the description for both is "only unique lines". What's the difference here, what do they do?
This ought to be easy to test:
$ cat file
1
2
3
3
4
4
$ uniq file
1
2
3
4
$ uniq -u file
1
2
In short, uniq with no options removes all but one instance of consecutively duplicated lines. The GNU uniq manual formulates that as
With no options, matching lines are merged to the first occurrence.
while POSIX says
[...] write one copy of each input line on the output. The second and succeeding copies of repeated adjacent input lines shall not be written.
With the -u option, it removes all instances of consecutively duplicated lines, and leaves only the lines that were never duplicated. The GNU uniq manual says
only print unique lines
and POSIX says
Suppress the writing of lines that are repeated in the input.
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/620071", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/442348/" ] }
620,260
I understand that ! is used to negate an if condition in Bash, but I've just seen code which takes the format:
if ! [[ CONDITION ]]; then
  SOMETHING
fi
Is there a difference between this format and the below?
if [[ ! CONDITION ]]; then
  SOMETHING
fi
I've tried to Google but haven't found anything yet about the former syntax.
For a single condition they are both the same:
$ if [[ ! 1 = 1 ]]; then echo true; else echo false; fi
false
$ if ! [[ 1 = 1 ]]; then echo true; else echo false; fi
false
The difference comes when testing multiple conditions:
$ if [[ ! 1 = 2 || 2 = 2 ]]; then echo true; else echo false; fi
true
$ if ! [[ 1 = 2 || 2 = 2 ]]; then echo true; else echo false; fi
false
So the outer negation applies to the whole result. Of course you can also make the inner negation apply to the whole condition by grouping it, in which case you may prefer to use the outer negation instead:
$ if [[ ! ( 1 = 2 || 2 = 2 ) ]]; then echo true; else echo false; fi
false
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/620260", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/428326/" ] }
620,282
So, I've used Wine before and I've used Valve's Proton as well. I understand that Wine and Proton attempt to translate the internal Windows-based logic of a program to something that is compatible with Linux, but I'm unclear why Proton has so much more success with complex games than Wine does for most programs. Thanks to things like Winetricks I've managed to install enough dependencies to successfully run programs like Adobe Photoshop, but how come this is not needed for Proton games? Or is it more simply related to the fact that Valve already knows the programs the games need to run and they make sure to install them as you install the game? Another thing that stumps me is how much progress Proton has made in so little time, yet it's still difficult to find information on how to run many programs on Wine, with or without Winetricks. Is there a way to know what I should install that a program I want to run might need? Or is it just blind luck?
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/620282", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/328985/" ] }
620,825
I have a huge log file (about 6GB) from a simulation. Among the millions of lines in that file, there are two lines that keep repeating for a given time:
...
Max value of omega = 3.0355
Time = 0.000001
...
Max value of omega = 4.3644
Time = 0.000013
...
Max value of omega = 3.7319
Time = 0.000025
...
...
...
Max value of omega = 7.0695
Time = 1.32125
...
... etc.
I would like to extract both "Max value of omega" and "Time" and save them in a single file as columns:
#time max_omega
0.000001 3.0355
0.000013 4.3644
0.000025 3.7319
...etc.
I proceeded as follows:
# The following takes about 15 seconds
grep -F 'Max value of omega' logfile | cut -d "=" -f 2 > max_omega_file.txt
, and the same for "Time"
# This also takes about 15 seconds
# Very important: match exactly 'Time =' because there are other lines that contain the word 'Time'
grep -F 'Time =' logfile | cut -d "=" -f 2 > time.txt
Then I need to use the command paste to create a two-column file: Time.txt as the first column and "max_omega_file.txt" as the second column. As you can see, the time is doubled in the steps above. I wonder if there is a single solution to achieve the same results in a single pass so I save some time?
sed -n '/^Max/ { s/^.*=\s*//;h; }; /^Time/{ s/^.*=\s*//;G; s/\n/ /;p; }' infile
match-run syntax /.../{ ... } : commands within {...} will only run on the lines that matched the regex/pattern within /.../ ;
s/^.*=\s*// : deletes everything up to the last = plus any following whitespace \s* , if present
h : copy the result into hold-space
G : append the hold-space to pattern-space with an embedded newline
s/\n/ / : replace that embedded newline with a space in the pattern-space
p : print pattern-space; you could use the P command here instead too.
0.000001 3.0355
0.000013 4.3644
0.000025 3.7319
1.32125 7.0695
A similar approach proposed by @stevesliva uses s//<replace>/ , which is shorthand for doing the substitution on the last match:
sed -n '/^Max.*=\s*/ { s///;h; }; /^Time.*=\s*/{ s///;G; s/\n/ /;p; }' infile
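For comparison, a single-pass awk sketch (splitting fields on the = sign plus optional surrounding spaces) that produces the same two columns; the output file name is just an example, and the column order matches the requested "#time max_omega" layout:
awk -F ' *= *' '/^Max value of omega/ { m = $2 } /^Time =/ { print $2, m }' logfile > time_vs_omega.txt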
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/620825", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/429557/" ] }