Columns: source_id (int64, 1 to 74.7M), question (string, lengths 0 to 40.2k), response (string, lengths 0 to 111k), metadata (dict)
586,334
Something I've noticed on all my home machines is that none of them can resolve .local addresses for IPv6. This seems odd because they can resolve them for IPv4 and all of my home machines have both Link-Local fe80:: addresses and public 2a00:: addresses. So far I've been unable to figure out what's missing for these to work. IPv4 # ping neptune.localPING neptune.local (192.168.1.223) 56(84) bytes of data.64 bytes from neptune (192.168.1.223): icmp_seq=1 ttl=64 time=275 ms64 bytes from neptune (192.168.1.223): icmp_seq=2 ttl=64 time=197 ms IPv6 # ping -6 neptune.localping: neptune.local: Name or service not known# ping -6 2a00:xxxx:xxxx:xxxx:xxxx:xxxx:xxxx:xxxxPING 2a00:xxxx:xxxx:xxxx:xxxx:xxxx:xxxx:xxxx(2a00:xxxx:xxxx:xxxx:xxxx:xxxx:xxxx:xxxx) 56 data bytes64 bytes from 2a00:xxxx:xxxx:xxxx:xxxx:xxxx:xxxx:xxxx: icmp_seq=2 ttl=64 time=2.21 ms64 bytes from 2a00:xxxx:xxxx:xxxx:xxxx:xxxx:xxxx:xxxx: icmp_seq=3 ttl=64 time=3.13 ms Hosts entry from /etc/nsswitch.conf : hosts: files mdns4_minimal [NOTFOUND=return] dns How do you enable mDNS for IPv6 on Ubuntu and/or Debian?
To enable IPv6 for mDNS in avahi you need to change configuration on both the client and the server side of the Linux VMs. The steps are: 1) Configure avahi for IPv6, if it is not already done (Debian 10 already has that as a default): in /etc/avahi/avahi-daemon.conf set
[server]
use-ipv6=yes
2) Change the mDNS entry in /etc/nsswitch.conf from: hosts: files mdns4_minimal [NOTFOUND=return] dns to: hosts: files mdns_minimal [NOTFOUND=return] dns 3) Then restart the avahi service, either with: sudo service avahi-daemon restart or: sudo systemctl restart avahi-daemon.service See Enabling IPv6 support in Avahi (Zeroconf/Bonjour)
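A quick way to verify after the restart (a small check, not part of the original answer; "neptune" is the example hostname from the question): avahi-resolve talks to the daemon directly, while ping goes through the nsswitch mdns_minimal entry, so together they exercise both halves of the setup.
avahi-resolve -6 -n neptune.local   # should print the host's IPv6 address
ping -6 neptune.local               # should now resolve via mdns_minimal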
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/586334", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/20140/" ] }
586,443
The power of shell pipeline is so great that sometimes fails me. Example Just as an example, the pipeline echo abc > file.txtcat file.txt | sed 's/a/1/' > file.txt gives me an empty file.txt . Realizing that the shell probably calls > first, I made a change: echo abc > file.txt{cat file.txt | sed 's/a/1/'} > file.txt Again it surprises me by another empty file file.txt . An ugly way that finally works is echo abc > file.txtecho $(cat file.txt | sed 's/a/1') > file.txt which forces the shell to run a subshell first, and then redirect. Question While I'm aware of better practice of sed , which allows you to get rid of echo , cat , grep .. etc, what I am curious about here is to learn shell's grammar completely. This questions is not about how to fix the particular problem above. Q1(EDIT: off-topic) Is there a good resource for me to learn the grammar? I'm afraid that different shells could have different grammars, so Q2 Can I make shell verbose, and see clearly what it is doing every completely every single time I run a command? Q3(EDIT: off-topic) Any other advice on good practices? Thank you!
You have successfully found one of the things you should not do :-) Never redirect to the file you are working on! A1: a good resource to learn shell grammar would be the Advanced Bash-Scripting Guide, IMO. A2: For bash scripts, you can use set -x for more verbose output, but I don't know how to achieve the same at a 'run things at the prompt' level. A3: Add [solved] to your search terms. That finds you the solution to your problem instead of more of the problem you already know.
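Note that set -x also works in an interactive shell, not only in scripts. A small illustration, reusing the pipeline from the question:
set -x                         # turn tracing on: bash prints each command, prefixed with +, after expansion
cat file.txt | sed 's/a/1/' > file.txt
set +x                         # turn tracing off again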
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/586443", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/291142/" ] }
586,685
I have a file containing the below input. ---------- ID: alternatives_create_editor Function: alternatives.install Name: editor Result: None Comment: Alternative will be set for editor to /usr/bin/vim.basic with priority 100 Started: 10:45:49.115459 Duration: 0.78 ms Changes:---------- ID: hosts_1 Function: host.present Name: ip6-allnodes Result: None Comment: Host ip6-allnodes (ff02::1) already present Started: 10:45:49.127667 Duration: 1.117 ms Changes:---------- ID: sfrsyslog_fix_logrotate Function: file.managed Name: /etc/logrotate.d/rsyslog Result: None Comment: The file /etc/logrotate.d/rsyslog is set to be changed Started: 10:45:50.771278 Duration: 12.751 ms Changes:---------- My goal is to remove the whole part where Comment: contains already present. I tried, grep -B 5 -A 4 'already present' FILE1 > FILE2 . Then, diff -y FILE1 FILE2 | grep '<' | sed 's/ In this output I am getting like below, ID: alternatives_create_editor Function: alternatives.install Name: editor Result: None Comment: Alternative will be set for editor to /usr/bin/ Started: 10:45:49.115459 Duration: 0.78 ms Changes:---------- ID: sfrsyslog_fix_logrotate Function: file.managed Name: /etc/logrotate.d/rsyslog Result: None Comment: The file /etc/logrotate.d/rsyslog is set to be Started: 10:45:50.771278 Duration: 12.751 ms Changes:---------- Problem is the Comment: section is not giving whole complete line. Please help. Thanks
More straightforward is to use a single GNU awk command: awk 'BEGIN{RS=ORS="----------\n"};!/Comment:[^\n]*already present/' file It simply sets the record separator as the ten-dash lines and if the record contains Comment:[^\n]*already present , it suppresses that record. [^\n]* means "not a newline character", so that it won't suppress the record if it has already present in Changes: but not in Comment: . Output: ---------- ID: alternatives_create_editor Function: alternatives.install Name: editor Result: None Comment: Alternative will be set for editor to /usr/bin/vim.basic with priority 100 Started: 10:45:49.115459 Duration: 0.78 ms Changes:---------- ID: sfrsyslog_fix_logrotate Function: file.managed Name: /etc/logrotate.d/rsyslog Result: None Comment: The file /etc/logrotate.d/rsyslog is set to be changed Started: 10:45:50.771278 Duration: 12.751 ms Changes:---------- If you do not have GNU awk, the above may not work because POSIX does not define the behavior of a multi-character record separator. So I leave this portable alternative: awk '/Comment:.*already present/{p=1} /^-{10}$/{ if(p!=1){printf "%s",lines} p=0 lines="" } {lines=lines $0 RS}' file
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/586685", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/289962/" ] }
587,002
How can I pipe any data to audio output? For example, i want to listen to a file -- an archive, a drive backup, a program. Or I want to listen to my HDD -- I vaguely remember reading something about this being possible about 7 years ago, but can't find anything now. So, files, disk reads, even network connections -- I want to be able to listen to anything. I know that it's definitely possible with Linux. How can I do it? Using Lubuntu 20.04
I find piping things into aplay works well. journalctl | aplay doesn't sound pretty but does work surprisingly well. Here's an example from aplay(1) : aplay -c 1 -t raw -r 22050 -f mu_law foobar will play the raw file "foobar" as a 22050-Hz, mono, 8-bit, Mu-Law .au file. It can be found as part of the alsa-utils package on debian/ubuntu. Here's a 1-liner that I like which echos a small C program into gcc, and runs the compiled version, piping it to aplay. The result is a surprisingly nice 15-minute repeating song. echo "g(i,x,t,o){return((3&x&(i*((3&i>>16?\"BY}6YB6$\":\"Qj}6jQ6%\")[t%8]+51)>>o))<<4);};main(i,n,s){for(i=0;;i++)putchar(g(i,1,n=i>>14,12)+g(i,s=i>>17,n^i>>13,10)+g(i,s/3,n+((i>>11)%3),10)+g(i,s/5,8+n-((i>>10)%3),9));}"|gcc -xc -&&./a.out|aplay
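Along the same lines, a harmless experiment (a sketch, not from the original answer; turn your volume down first) is to listen to a short burst of pseudo-random data; aplay reads standard input when no file is given:
head -c 100000 /dev/urandom | aplay -t raw -f U8 -r 8000 -c 1   # roughly 12 seconds of static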
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/587002", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/405442/" ] }
587,007
I wish to find all files and folders except/excluding folders starting with 'tmp' and 'logs' under '/app/test' directory using prune Thus I tried the below command. find /app/test -type d \( ! -name logs* \) -o \( ! -name log \) -o \( ! -name tmp* \) -o\( ! -name tmp \) -prune | tee /tmp/findall.log But I see top folder included in the find result like below: /app/test/Admin/tmp/data/pm.trp/app/test/Admin/tmp/wond/iqc.logetc ... Can you please suggest?
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/587007", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/392596/" ] }
587,039
Below is my find command that searches and dumps the searched file results to /tmp/findall.log find /app/test -type d \( -name logs -o -name 'logs*' -o -name 'tmp*' \) -prune -o -print | tee /tmp/findall.log I can see stage folder in the find results as below: grep stage /tmp/findall.log | more/app/test/Server2/stage_02032020/app/test/Server2/stage_02032020/Buyers/app/test/Server2/stage_02032020/Buyers/BuyersQuote.xlsx/app/test/Server2/stage_02032020/Buyers/xml I then decided to tar the files found in the find results using the below command. find /app/test -type d \( -name logs -o -name 'logs*' -o -name 'tmp*' \) -prune -o -print0 | xargs -0 tar -cvf /app/test_Backup.tar However my tar file does not have the stage folders as was see with the find results as seen below. tar -tvf test_Backup.tar | grep stageNo results Found. Can you please suggest why are the files from the find results missing from the tar file?
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/587039", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/392596/" ] }
587,069
I know that i can enable/disable globbing with set +f and set -f . But how can i test whether it is currently enabled? I could create a file with a unique name and test if a file exists with a pattern that should match it. However, I hope there is a cleaner solution.
If you do set -f , or otherwise disable globbing:, $- will contain f : $ echo $-himBHs$ set -f$ echo $-fhimBHs$ bash -fc 'echo $-'fhBc So: [[ $- = *f* ]] Or: case $- in *f*) ... ;;esac
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/587069", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/245573/" ] }
587,257
I have a sequence of files, which are given like: File.Number.{1..10}.txt I would like to get a array or sequence of which of those files exist. A tedious way is to loop over all of them and append to an array. I am wondering if there is a one-liner that can tell me which of those files do not exist or pull out the number of the file.
(Your question says "files that do exist" in one place, and "files that don't" in another, so I wasn't sure which one you wanted.) To get the ones that do exist, you could use an extended glob in Bash: $ shopt -s nullglob$ echo File.Number.@([1-9]|10).txtFile.Number.10.txt File.Number.1.txt File.Number.7.txt File.Number.9.txt The shopt -s nullglob enables bash's nullglob option so that nothing is returned if there are no files matching the glob. Without it, a glob with no matches expands to itself. Or, more simply, a numeric range in Zsh: % echo File.Number.<1-10>.txtFile.Number.10.txt File.Number.1.txt File.Number.7.txt File.Number.9.txt It's not like the loop is that bad either, and lends to finding the files that do not exist much better than a glob: $ a=(); for f in File.Number.{1..10}.txt; do [ -f "$f" ] || a+=("$f"); done$ echo "these ${#a[@]} files didn't exist: ${a[@]}"these 6 files didn't exist: File.Number.2.txt File.Number.3.txt File.Number.4.txt File.Number.5.txt File.Number.6.txt File.Number.8.txt
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/587257", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/250695/" ] }
587,297
I want to replace occurrences of "|" EXCEPT the last one in every line of a file with a space using sed only . I want to avoid doing this: sed -e "s/[|]/ /1" -e "s/[|]/ /1" -e "s/[|]/ /1" -e "s/[|]/ /1" -e "s/[|]/ /1" -e "s/[|]/ /1" -e "s/[|]/ /1" mydata.txt File input: FLD1 |SFK TK |FLD2 |FLD4 |FLD5 |- |20200515 |NNNN |406 RCO 301FLD1 |SFK TK |FLD2 |FLD4 |FLD5 |- |20200515 |NNNN |0FLD1 |SFK TK |FLD2 |FLD4 |FLD5 |- |20200515 |NNNN |0 File output: FLD1 SFK TK FLD2 FLD4 FLD5 - 20200515 NNNN |406 RCO 301FLD1 SFK TK FLD2 FLD4 FLD5 - 20200515 NNNN |0FLD1 SFK TK FLD2 FLD4 FLD5 - 20200515 NNNN |0
sed ':a;/[|].*[|]/s/[|]/ /;ta' file /[|].*[|]/ : If line has two pipes, s/[|]/ / : Substitute the first with a space. ta : If a substitution was made, go back to :a . Output: $ sed ':a;/[|].*[|]/s/[|]/ /;ta' fileFLD1 SFK TK FLD2 FLD4 FLD5 - 20200515 NNNN |406 RCO 301FLD1 SFK TK FLD2 FLD4 FLD5 - 20200515 NNNN |0FLD1 SFK TK FLD2 FLD4 FLD5 - 20200515 NNNN |0 As @steeldriver has remarked, you can use simply | instead of [|] in a basic regular expression (BRE), as is the case above. If you add the -E flag to sed, extended regular expression (ERE) is enabled and then you need to write [|] or \| . Just for completeness, POSIX sed specification says that "Editing commands other than {...}, a, b, c, i, r, t, w, :, and # can be followed by a semicolon". Then, a compliant alternative to the above is: sed -e ':a' -e '/[|].*[|]/s/[|]/ /;t a' file
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/587297", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/412990/" ] }
587,439
I know daemons have to be sent a HUP for config changes to take effect. But I'm wondering why this is, and if it is possible to create a daemon responsive to such changes.
There are multiple reasons. One major reason is that many daemons have multiple configuration files, and any single file change might not be usable on its own — so having a daemon attempt to reload its configuration whenever one of its configuration file changes might cause more problems than it solves. From a purely implementation-related standpoint, having to watch for changes to configuration files adds more complexity to the daemon. Daemons have a central loop of some sort, checking for work to be done corresponding to the daemon’s core purpose; checking for changes to configuration files doesn’t necessarily fit nicely into that core purpose. Handling a separate signal solves both of these problems: it indicates that the user thinks the configuration is coherent, and is safe to reload, and it can be implemented asynchronously in a signal handler (typically as a basic flag change), while minimising the impact on the main loop (it reacts to the flag change). There are daemons which react to configuration changes on their own; cron for example checks its configuration files for changes every time it goes round its main loop.
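A toy sketch of that flag-in-a-signal-handler pattern, written as a shell "daemon" (the config file name is made up; real daemons do the same thing in C, but the structure is identical):
#!/bin/bash
reload=0
trap 'reload=1' HUP                  # the handler only sets a flag
load_config() { . ./daemon.conf; }   # hypothetical configuration file
load_config
while true; do
    if [ "$reload" -eq 1 ]; then
        reload=0
        load_config                  # reload only when the admin signals the config is coherent
    fi
    # ... one iteration of the daemon's real work goes here ...
    sleep 1
done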
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/587439", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/413139/" ] }
587,445
I have Debian 10 installed, here the info When I try to share my screen during a video conference, for example using Microsoft Teams or Google Hangouts, I'm not able to do it. If I try using Teams, the program crashes. If I try with Hangouts, people only see a black screen and the cursor, nothing else. Screen sharing in the Sharing section of my Settings is enabled, if you guess. Do I have to modify something in some configuration file? Edit I have installed Microsoft Teams Version 1.3.00.5153 (64-bit) and I'm using Wayland. If I try with Teams on browser instead of the app, I'm not able to share the entire screen (only black image displayed) but I'm able to share each app individually. If I use the Teams app, when I click on "share screen" the app crashes.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/587445", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/410261/" ] }
587,646
I am trying to get the duration of a bunch of files in current directory and its sub-directories all ending in .mp4. I'm trying to use ffprobe. To make things simple, I am not using find, but ls. If it only lists durations in seconds, it will be fine, since summing them all up would probably be beyond me! My current effort is like this: ls -a * |grep .mp4$|xargs ffprobe This shows Argument 'SomeVideo.mp4' provided as input filename, but 'DifferentVideo.mp4' was already specified. How can I make this work? Can ffprobe not just process one file at a time passed via pipe?
With exiftool : $ exiftool -ext mp4 -r -n -q -p '${Duration;$_ = ConvertDuration(our $total += $_)}' . | tail -n 1
4 days 22:14:59
The main difference from @don_cristi's answer to a very similar question is the -r and -ext mp4 options, which make exiftool look for files with the given extension recursively.
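If you would rather stay with ffprobe, one option (a sketch, assuming ffprobe from ffmpeg is installed) is to print each file's duration in seconds and let awk sum them:
find . -name '*.mp4' -exec ffprobe -v error -show_entries format=duration -of csv=p=0 {} \; | awk '{s+=$1} END{print s " seconds"}'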
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/587646", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/142579/" ] }
587,709
I'm managing a VPS server (I'm using Debian 10 Buster, the latest stable version), and I want to install the latest version of the essential web packages (for example: Apache 2.4.43, Bind 9.16.3), but when I use the default apt-repository, it installs a slightly old version (apache 2.4.38 and bind 9.11.5).. I've found out that the version 2.4.43 of apache2 is only available for Debian Bullseye (testing version), but I don't want to install a testing version of Debian, I prefer stable versions. In a nutshell: I want to install the "latest" version of apt packages (like apache2, bind9, postfix, etc.) without upgrading to an unstable version of Debian.
The reason Debian stable is called stable is because the software it contains, or rather, the external interfaces of all the individual pieces of software it contains, don’t change during its lifetime. A consequence of this is that, barring a few exceptions, packaged software isn’t upgraded to newer versions. So you can’t — in general — install packaged versions of newer software while remaining on Debian stable. Some packages are however made available as backports, and the apache2 package is one of them. You can install these by enabling backports and selecting that as the source of upgrades:
echo deb http://deb.debian.org/debian buster-backports main | sudo tee /etc/apt/sources.list.d/buster-backports.list
sudo apt update
sudo apt install -t buster-backports apache2
If other upgrades are already available in testing, and there’s a particularly relevant reason to upgrade, you can try filing bugs requesting a backport. Note however that you should only upgrade to backports of packages if you have a specific reason to do so: backported packages don’t receive the same security support as packages in the stable repository, and while the stable distribution is tested as a coherent whole, no such testing is done with backports.
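To see which versions are on offer before and after enabling backports, a quick check is:
apt policy apache2   # lists the candidate version and every available version with its source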
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/587709", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/413392/" ] }
587,864
I would assume that setting IFS='X' would cause bash to split the string fooXbarXbaz into three words: foo , bar , and baz . However, it only works if the string is provided to the for -loop through command substitution: $(echo fooXbarXbaz) : $ IFS='X'; for x in fooXbarXbaz; do echo Y${x}Z; doneYfoo bar bazZ$ IFS='X'; for x in $(echo fooXbarXbaz); do echo Y${x}Z; doneYfooZYbarZYbazZ Can someone explain why the first example command fails to split fooXbarXbaz into three words, while the second example is successful?
$IFS is only used for word splitting after unquoted expansions. There is no expansion in for x in fooXbarXbaz . There is however an unquoted expansion in echo Y${x}Z , which means that echo is called with the three arguments Yfoo , bar and bazZ . The echo utility prints each of its arguments with a space in-between, so you get Yfoo bar bazZ . In the second example, the string is the output of a command substitution, which is an expansion, so the string is split for the loop. The IFS variable is also used for splitting the input to the read utility.
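The read utility makes the same splitting easy to see (a small sketch):
IFS=X read -r a b c <<< "fooXbarXbaz"
echo "$b"    # prints: bar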
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/587864", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/128739/" ] }
587,916
I am learning sed, so in this context I am trying to replace 2nd occurrence of word 'line'. Therefore, I issued following command: (zet:pc:~/text) sed 's/line/LINUX/2' mytextfilethis is line 1this is line 2this is line 3this is line 4this is line 5 But from the output, you can see that 'sed' is not replacing 2nd occurrence of the world 'line'. So, am I making any mistake here?
s/../../2 replaces the second occurrence on each line. You can make it work if you read the file as one line: With GNU sed : sed -z 's/line/LINUX/2' mytextfile With normal sed : tr '\n' '\0' < mytextfile | sed 's/line/LINUX/2' | tr '\0' '\n' Note that this will produce incorrect results in the highly unlikely case that the file already contains nul bytes. To replace the first occurrence on the 2nd line, see other answers :-)
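Run against the sample file from the question, the whole-file variant changes only the second occurrence of "line" overall, which happens to sit on the second input line:
$ sed -z 's/line/LINUX/2' mytextfile
this is line 1
this is LINUX 2
this is line 3
this is line 4
this is line 5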
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/587916", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/390521/" ] }
587,930
A simple example. I'm running a process that serves http request using TCP sockets. It might A) calculate something which means CPU will be the bottleneck B) Send a large file which may cause the network to be the bottleneck or C) Complex database query with semi-random access causing a disk bottleneck Should I try to categorize each page/API call as one or more of the above types and try to balance how much of each I should have? Or will the OS do that for me? How do I decide how many threads I want? I'll use 2 numbers for hardware threads 12 and 48 (intel xeon has that many). I was thinking of having at 2/3rds of the threads be for heavy CPU (8/32), 1 thread for heavy disk (or 1 heavy thread per disk) and the remaining 3/15 be for anything else which means no trying to balance the network. Should I have more than 12/48 threads on hardware that only supports 12/48 threads? Do I want less so I don't cause the CPU to go into a slower throttling mode (I forget what it's called but I heard it happens if too much of the chip is active at once). If I have to load and resource balance my threads how would I do it?
Linux: The Linux kernel has a great implementation for this and offers many features/settings intended to manage the resources of running processes (via CPU governors, sysctl or cgroups). In such a situation, tuning those settings along with swap adjustment (if required) is recommended; basically you will be adapting the default functioning mode to your appliance. Benchmarks, stress tests and situation analysis after applying the changes are a must, especially on production servers. The performance gain can be very significant when the kernel settings are adjusted to the intended usage; on the other hand this requires testing and a good understanding of the different settings, which is time consuming for an admin. Linux uses governors to load-balance CPU resources between the running applications; many governors are available, and depending on your distro's kernel some governors may not be available (rebuilding the kernel can be done to add missing or non-upstream governors). You can check what the current governor is, change it and, more importantly in this case, tune its settings. Additional documentation: reading, guide, similar question, frequency scaling, choice of governor, the performance governor and cpufreq. SysCtl: Sysctl is a tool for examining and changing kernel parameters at runtime; adjustments can be made permanent with the config file /etc/sysctl.conf. This is an important part of this answer, as many kernel settings can be changed with sysctl; a full list of available settings can be displayed with the command sysctl -a, and details are available in this and this article. Cgroup: The kernel provides the control groups feature, called by their shorter name cgroups in this guide. Cgroups allow you to allocate resources such as CPU time, system memory, network bandwidth, or combinations of these resources among user-defined groups of tasks (processes) running on a system. You can monitor the cgroups you configure, deny cgroups access to certain resources, and even reconfigure your cgroups dynamically on a running system. The cgconfig (control group config) service can be configured to start up at boot time and reestablish your predefined cgroups, thus making them persistent across reboots. Source, further reading and question on the matter. Ram: This can be useful if the system has a limited amount of RAM; otherwise you can disable the swap to mainly use the RAM. The swap system can be adjusted per process or with the swappiness settings. If needed, the resources (RAM) can be limited per process with ulimit (also used to limit other resources). Disk: Disk I/O settings (I/O scheduler) may be changed, as well as the cluster size. Alternatives: Other tools like nice, cpulimit, cpuset, taskset or ulimit can be used as an alternative for the matter.
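A few concrete commands for the pieces mentioned above (a sketch; cpupower usually comes in a separate package, and the job/process names are placeholders):
cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor              # current governor
cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_available_governors   # what the kernel offers
sudo cpupower frequency-set -g performance                             # switch governor
sysctl vm.swappiness                                                   # read a sysctl knob
sudo sysctl -w vm.swappiness=10                                        # change it at runtime
nice -n 10 ./long_running_job                                          # start a job at lower CPU priority
taskset -c 0,1 ./myserver                                              # pin a process to CPUs 0 and 1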
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/587930", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/413621/" ] }
588,038
I don't know anything about CPUs. I have a 32-bit version of ubuntu. But I need to install 64-bit applications. I came to know that it is not possible to run 64-bit apps on 32 bit OS. So I decided to upgrade my os. But a friend of mine told me to check CPU specifications before the new upgrade. I run this command as was suggested on a website. lscpu command gives the following details Architecture: i686CPU op-mode(s): 32-bit, 64-bitByte Order: Little EndianCPU(s): 2On-line CPU(s) list: 0,1Thread(s) per core: 1Core(s) per socket: 2Socket(s): 1Vendor ID: GenuineIntelCPU family: 6Model: 23Model name: Pentium(R) Dual-Core CPU E5300 @ 2.60GHzStepping: 10CPU MHz: 1315.182CPU max MHz: 2603.0000CPU min MHz: 1203.0000BogoMIPS: 5187.07Virtualization: VT-xL1d cache: 32KL1i cache: 32KL2 cache: 2048KFlags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe nx lm constant_tsc arch_perfmon pebs bts cpuid aperfmperf pni dtes64 monitor ds_cpl vmx est tm2 ssse3 cx16 xtpr pdcm xsave lahf_lm pti tpr_shadow vnmi flexpriority dtherm In one word what does this mean? I want to know whether I can install 64-bit Ubuntu in my pc. My installed RAM is 2GB. Since my system is more than 10 years old I expect some expert advice on my CPU status. Should I buy a new pc? Or can I stick with my old one?I already checked this. But I expect some thing easier. https://unix.stackexchange.com/a/77724/413713 (I can share any information regarding my hardware, only tell me how to collect them).Thanks in advance. Sorry for bad english
Intel’s summary of your CPU’s features confirms that it supports 64-bit mode, as indicated by CPU op-mode(s): 32-bit, 64-bit in lscpu ’s output. This isn’t an Atom CPU either, so the rest of your system is, in all likelihood, capable of supporting a 64-bit operating system. You can re-install a 64-bit variant of your operating system, or you could use Ubuntu’s multiarch support: install a 64-bit kernel, add the amd64 architecture, and you will then be able to install and run 64-bit software without re-installing everything:
sudo dpkg --add-architecture amd64
sudo apt-get update
sudo apt-get install linux-image-generic:amd64
(followed by a reboot).
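After the reboot you can confirm what you are running with a few quick checks:
uname -m                             # x86_64 once the 64-bit kernel is booted
dpkg --print-architecture            # the primary dpkg architecture (still i386 here)
dpkg --print-foreign-architectures   # should now list amd64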
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/588038", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/413713/" ] }
588,046
I've been trying kioptrix-level-1 exercise at https://www.vulnhub.com/entry/kioptrix-level-1-1,22/ and wondering why smbclient can't identify the Samba version? smbclient version 4.11.5-Debian wolf@linux:~$ smbclient -VVersion 4.11.5-Debianwolf@linux:~$ e.g. wolf@linux:~$ smbclient -L 10.10.10.10Server does not support EXTENDED_SECURITY but 'client use spnego = yes' and 'client ntlmv2 auth = yes' is setAnonymous login successfulEnter WORKGROUP\wolf's password: Sharename Type Comment --------- ---- ------- IPC$ IPC IPC Service (Samba Server) ADMIN$ IPC IPC Service (Samba Server)Reconnecting with SMB1 for workgroup listing.Server does not support EXTENDED_SECURITY but 'client use spnego = yes' and 'client ntlmv2 auth = yes' is setAnonymous login successful Server Comment --------- ------- KIOPTRIX Samba Server Workgroup Master --------- ------- MYGROUP KIOPTRIXwolf@linux:~$ enum4linux attempt also didn't reveal the Samba's version number wolf@linux:/etc/samba$ enum4linux 10.10.10.10Starting enum4linux v0.8.9 ( http://labs.portcullis.co.uk/application/enum4linux/ ) on Thu May 21 00:04:57 2020 ========================== | Target Information | ========================== Target ........... 10.10.10.10RID Range ........ 500-550,1000-1050Username ......... ''Password ......... ''Known Usernames .. administrator, guest, krbtgt, domain admins, root, bin, none ====================================================== | Enumerating Workgroup/Domain on 10.10.10.10 | ====================================================== [+] Got domain/workgroup name: MYGROUP ============================================== | Nbtstat Information for 10.10.10.10 | ============================================== Looking up status of 10.10.10.10 KIOPTRIX <00> - B <ACTIVE> Workstation Service KIOPTRIX <03> - B <ACTIVE> Messenger Service KIOPTRIX <20> - B <ACTIVE> File Server Service ..__MSBROWSE__. <01> - <GROUP> B <ACTIVE> Master Browser MYGROUP <00> - <GROUP> B <ACTIVE> Domain/Workgroup Name MYGROUP <1d> - B <ACTIVE> Master Browser MYGROUP <1e> - <GROUP> B <ACTIVE> Browser Service Elections MAC Address = 00-00-00-00-00-00 ======================================= | Session Check on 10.10.10.10 | ======================================= [+] Server 10.10.10.10 allows sessions using username '', password '' ============================================= | Getting domain SID for 10.10.10.10 | ============================================= Domain Name: MYGROUPDomain Sid: (NULL SID)[+] Can't determine if host is part of domain or part of a workgroup ======================================== | OS information on 10.10.10.10 | ======================================== Use of uninitialized value $os_info in concatenation (.) or string at ./enum4linux.pl line 464.[+] Got OS info for 10.10.10.10 from smbclient: [+] Got OS info for 10.10.10.10 from srvinfo: KIOPTRIX Wk Sv PrQ Unx NT SNT Samba Server platform_id : 500 os version : 4.5 server type : 0x9a03 =============================== | Users on 10.10.10.10 | =============================== Use of uninitialized value $users in print at ./enum4linux.pl line 874.Use of uninitialized value $users in pattern match (m//) at ./enum4linux.pl line 877.Use of uninitialized value $users in print at ./enum4linux.pl line 888.Use of uninitialized value $users in pattern match (m//) at ./enum4linux.pl line 890. 
=========================================== | Share Enumeration on 10.10.10.10 | =========================================== Sharename Type Comment --------- ---- ------- IPC$ IPC IPC Service (Samba Server) ADMIN$ IPC IPC Service (Samba Server)Reconnecting with SMB1 for workgroup listing. Server Comment --------- ------- KIOPTRIX Samba Server Workgroup Master --------- ------- MYGROUP KIOPTRIX[+] Attempting to map shares on 10.10.10.10//10.10.10.10/IPC$ [E] Can't understand response:NT_STATUS_NETWORK_ACCESS_DENIED listing \*//10.10.10.10/ADMIN$ [E] Can't understand response:tree connect failed: NT_STATUS_WRONG_PASSWORD ====================================================== | Password Policy Information for 10.10.10.10 | ====================================================== [E] Unexpected error from polenum:[+] Attaching to 10.10.10.10 using a NULL share[+] Trying protocol 139/SMB... [!] Protocol failed: SMB SessionError: 0x5[+] Trying protocol 445/SMB... [!] Protocol failed: [Errno Connection error (10.10.10.10:445)] [Errno 111] Connection refused[+] Retieved partial password policy with rpcclient:Password Complexity: DisabledMinimum Password Length: 0 ================================ | Groups on 10.10.10.10 | ================================ [+] Getting builtin groups:group:[Administrators] rid:[0x220]group:[Users] rid:[0x221]group:[Guests] rid:[0x222]group:[Power Users] rid:[0x223]group:[Account Operators] rid:[0x224]group:[System Operators] rid:[0x225]group:[Print Operators] rid:[0x226]group:[Backup Operators] rid:[0x227]group:[Replicator] rid:[0x228][+] Getting builtin group memberships:Group 'Users' (RID: 545) has member: Couldn't find group UsersGroup 'Guests' (RID: 546) has member: Couldn't find group GuestsGroup 'Replicator' (RID: 552) has member: Couldn't find group ReplicatorGroup 'Account Operators' (RID: 548) has member: Couldn't find group Account OperatorsGroup 'Print Operators' (RID: 550) has member: Couldn't find group Print OperatorsGroup 'Power Users' (RID: 547) has member: Couldn't find group Power UsersGroup 'System Operators' (RID: 549) has member: Couldn't find group System OperatorsGroup 'Administrators' (RID: 544) has member: Couldn't find group AdministratorsGroup 'Backup Operators' (RID: 551) has member: Couldn't find group Backup Operators[+] Getting local groups:group:[sys] rid:[0x3ef]group:[tty] rid:[0x3f3]group:[disk] rid:[0x3f5]group:[mem] rid:[0x3f9]group:[kmem] rid:[0x3fb]group:[wheel] rid:[0x3fd]group:[man] rid:[0x407]group:[dip] rid:[0x439]group:[lock] rid:[0x455]group:[users] rid:[0x4b1]group:[slocate] rid:[0x413]group:[floppy] rid:[0x40f]group:[utmp] rid:[0x415][+] Getting local group memberships:[+] Getting domain groups:group:[Domain Admins] rid:[0x200]group:[Domain Users] rid:[0x201][+] Getting domain group memberships:Group 'Domain Users' (RID: 513) has member: Couldn't find group Domain UsersGroup 'Domain Admins' (RID: 512) has member: Couldn't find group Domain Admins I've been looking at other write up such https://blog.roskyfrosky.com/vulnhub/2017/04/01/Kioptrix1.0-vulnhub.html and found that they don't have this kind of issue. or https://blog.bladeism.com/kioptrix-level-1/ enum4linux 192.168.33.133========================== | Target Information |==========================Target ……….. 192.168.33.133RID Range …….. 500-550,1000-1050Username ……… ”Password ……… ”Known Usernames .. 
administrator, guest, krbtgt, domain admins, root, bin, none======================================================| Enumerating Workgroup/Domain on 192.168.33.133 |======================================================[+] Got domain/workgroup name: MYGROUP==============================================| Nbtstat Information for 192.168.33.133 |==============================================Looking up status of 192.168.33.133KIOPTRIX <00> – B <ACTIVE> Workstation ServiceKIOPTRIX <03> – B <ACTIVE> Messenger ServiceKIOPTRIX <20> – B <ACTIVE> File Server Service..__MSBROWSE__. <01> – <GROUP> B <ACTIVE> Master BrowserMYGROUP <00> – <GROUP> B <ACTIVE> Domain/Workgroup NameMYGROUP <1d> – B <ACTIVE> Master BrowserMYGROUP <1e> – <GROUP> B <ACTIVE> Browser Service ElectionsMAC Address = 00-00-00-00-00-00=======================================| Session Check on 192.168.33.133 |=======================================[+] Server 192.168.33.133 allows sessions using username ”, password ”=============================================| Getting domain SID for 192.168.33.133 |=============================================Domain Name: MYGROUPDomain Sid: (NULL SID)[+] Can’t determine if host is part of domain or part of a workgroup========================================| OS information on 192.168.33.133 |========================================[+] Got OS info for 192.168.33.133 from smbclient: Domain=[MYGROUP] OS=[Unix] Server=[Samba 2.2.1a][+] Got OS info for 192.168.33.133 from srvinfo:KIOPTRIX Wk Sv PrQ Unx NT SNT Samba Serverplatform_id : 500os version : 4.5server type : 0x9a03===============================| Users on 192.168.33.133 |===============================Use of uninitialized value $users in print at ./enum4linux.pl line 874.Use of uninitialized value $users in pattern match (m//) at ./enum4linux.pl line 877.Use of uninitialized value $users in print at ./enum4linux.pl line 888.Use of uninitialized value $users in pattern match (m//) at ./enum4linux.pl line 890.===========================================| Share Enumeration on 192.168.33.133 |===========================================WARNING: The “syslog” option is deprecatedDomain=[MYGROUP] OS=[Unix] Server=[Samba 2.2.1a]Domain=[MYGROUP] OS=[Unix] Server=[Samba 2.2.1a]Sharename Type Comment——— —- ——-IPC$ IPC IPC Service (Samba Server)ADMIN$ IPC IPC Service (Samba Server)Server Comment——— ——-KIOPTRIX Samba ServerWorkgroup Master——— ——-MYGROUP KIOPTRIXWORKGROUP BLADEISM[+] Attempting to map shares on 192.168.33.133//192.168.33.133/IPC$ [E] Can’t understand response:WARNING: The “syslog” option is deprecatedDomain=[MYGROUP] OS=[Unix] Server=[Samba 2.2.1a]NT_STATUS_NETWORK_ACCESS_DENIED listing \*//192.168.33.133/ADMIN$ [E] Can’t understand response:WARNING: The “syslog” option is deprecatedDomain=[MYGROUP] OS=[Unix] Server=[Samba 2.2.1a]tree connect failed: NT_STATUS_WRONG_PASSWORD
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/588046", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/409008/" ] }
588,063
I have thousands of JSON file that I want to merge in one object. Those all are not in a similar format. Let me explain in detail. Here is the first sample of JSON [ { "value 1": 1, "value 2": 2, "value 3": 3, "value 4": 4 }] and another types are not similar, something like following having few common fields and other fields [ { "value 3": 300, "value 4": 400, "value 5": 500, "value 6": 600 }] Such as I have 2 file having first sample format and one file having second example format. I am trying to using jq for merging this. jq -s '.' *.json > myfile.json It is returning the following with three different objects [ { "value 1": 1, "value 2": 2, "value 3": 3, "value 4": 4 }],[ { "value 1": 10, "value 2": 20, "value 3": 30, "value 4": 40 }],[ { "value 3": 300, "value 4": 400, "value 5": 500, "value 6": 600 }] I need to merge this into one object like following and if there any jq option to exclude those files that have specific field. Something like excluding those files which have the field "value 6" . So finally the JSON output will be [ { "value 1": 1, "value 2": 2, "value 3": 3, "value 4": 4 }, { "value 1": 10, "value 2": 20, "value 3": 30, "value 4": 40 }]
You can use inputs to apply the filter on the contents of all the JSON files together and apply the select filter. The -n flag is to ensure the output JSON is constructed from scratch from the given input. jq -n '[ inputs[] | select( has("value 6") | not ) ]' *.json By doing jq -n 'inputs[]' , all the objects in all the constituent JSON files are made available to the select function, which discards any object containing the key "value 6" . The [..] surrounding the filter puts the final resultant objects within an array. Another way would be to use a reduce() function to add the required objects in an iterative way: jq -n 'reduce inputs[] as $data (.; . + [ if $data | has("value 6") | not then $data else empty end ] )'
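A quick way to try it out with the two sample documents from the question (the file names here are made up):
printf '[{"value 1":1,"value 2":2,"value 3":3,"value 4":4}]' > a.json
printf '[{"value 3":300,"value 4":400,"value 5":500,"value 6":600}]' > b.json
jq -n '[ inputs[] | select( has("value 6") | not ) ]' a.json b.json
# => [ { "value 1": 1, "value 2": 2, "value 3": 3, "value 4": 4 } ]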
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/588063", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/413732/" ] }
588,073
I am looking for a program like cat but with syntax highlighting. As an example I would like to display the content of one of my Python script highlighted in the terminal without using a pager like we do with cat filename.py .
bat is a cat alternative with syntax highlighting and other functionality. You can see some previews on the GitHub page . This is a fairly new program and may not be available in your favorite distribution's repositories. In that case you will have to build it from source or download the .deb package.
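Typical usage once installed (note that on Debian/Ubuntu the packaged binary may be named batcat to avoid a name clash with another package):
bat filename.py                  # syntax-highlighted, pages long output
bat --paging=never filename.py   # plain cat-like behaviour, highlighting only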
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/588073", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/-1/" ] }
588,102
I have a file as below. "ID" "1" "2""00000687" 0 1"00000421" 1 0 I want to make it as below. 00000687 0 100000421 1 0 I want to remove the first line and remove double quotes from fields on any other lines. FWIW, double quotes appear only in the first column. I think cut -c would work, but cannot make it. What should I do?
tail + tr : tail -n +2 file | tr -d \" tail -n+2 prints the file starting from line two to the end. tr -d \" deletes all double quotes.
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/588102", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/328248/" ] }
588,127
I have a file with about one million lines, like this: "ID" "1" "2""00000687" 0 1"00000421" 1 0"00000421" 1 0"00000421" 1 0 with the last line repeated more than one million times. Taking inspiration from this question , I've tried some of the proposed solutions to see which one is faster. I was expecting that the solutions with only one process would have been faster than those with a pipeline, because they only use one process. But those are the results of my tests: tail -n +2 file.txt | tr -d \" $ time tail -n +2 file.txt | tr -d \" 1> /dev/nullreal 0m0,032suser 0m0,020ssys 0m0,028s sed '1d;s/"//g' file.txt $ time sed '1d;s/"//g' file.txt 1> /dev/nullreal 0m0,410suser 0m0,399ssys 0m0,011s perl -ne ' { s/"//g; print if $. > 1 }' file.txt $ time perl -ne ' { s/"//g; print if $. > 1 }' file.txt 1> /dev/nullreal 0m0,379suser 0m0,367ssys 0m0,013s I repeated the tests many times and I have always obtained similar numbers. As you can see, tail -n +2 file.txt | tr -d \" is much faster than the others. Why?
It boils down to the amount of work being done. Your tail | tr command ends up doing the following: in tail : read until a newline; output everything remaining, without caring about newlines; in tr , read, without caring about newlines, and output everything apart from ‘"’ (a fixed character). Your sed command ends up doing the following, after interpreting the given script: read until a newline, accumulating input; if this is the first line, delete it; replace all double quotes with nothing, after interpreting the regular expression; output the processed line; loop until the end of the file. Your Perl command ends up doing the following, after interpreting the given script: read until a newline, accumulating input; replace all double quotes with nothing, after interpreting the regular expression; if this is not the first line, output the processed line; loop until the end of the file. Looking for newlines ends up being expensive on large inputs.
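To reproduce a comparison like this yourself, a sketch that generates a million-line test file similar to the one described and times two of the variants:
printf '"ID" "1" "2"\n"00000687" 0 1\n' > file.txt
yes '"00000421" 1 0' | head -n 1000000 >> file.txt
time tail -n +2 file.txt | tr -d \" > /dev/null
time sed '1d;s/"//g' file.txt > /dev/null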
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/588127", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/410261/" ] }
588,143
Is it possible to rearrange awk's output? If yes, what is the right way to do it? wolf@Linux:~$ ip a s | awk '/^[0-9]/ {print $2} /inet / {print $2}'lo:127.0.0.1/8enp0s3:10.1.1.1/24enp0s8:172.16.1.1/24enp0s9:o192.168. it2?1.1/24wolf@Linux:~$ Desired Output wolf@Linux:~$ ip a s | <awk syntax here>lo: 127.0.0.1/8enp0s3: 10.1.1.1/24enp0s8: 172.16.1.1/24enp0s9: 192.168.1.1/24wolf@Linux:~$
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/588143", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/409008/" ] }
588,333
I am using latest available builds with up to date system as follows:Arch kernel 5.6.4-arch1-1, Openbox 3.6.1, NetworkManager 1.24.0-1, wpa_supplicant v2.9. I use my laptop in three geographic locations.In each of these locations there is a unique wifi network, with it's own unique SSID that I connect to.The laptop uses only WPA and WPA2 Personal protocol to connect in all three of these locations. In two of these three locations my laptop WiFi always connects quickly, seamlessly and perfectly, without any problems. In the third location, and only in the third location, my laptop continually cycles through WiFi disconnect & reconnect every 10 secs as outlined in the below steps 5 secs, searching for network 1 sec, connected to network 4 secs, disconnected go to 1 and start over again ... I am able to get a network connection to the internet in this third location, as outlined in the list above, but only for about one in every 10 seconds. This makes working in this third location impossible. In this third location the router provides perfect wifi to every other device.So the router does not appear to be the problem. I have tried deleting the network manager connection profile for this third location, countless times, using the nm-applet GUI on the tint2 taskbar , and then reconnecting again to the third locations SSID with the correct password etc.But doing this changes nothing about the problem. I have been using this command line sudo killall -STOP NetworkManager about half way through step 2, above, to sucessfully stop the cycling and remain connected, but this has stopped working. So now I have an Ethernet cable plugged into the laptop, and the laptop WiFi physical switch, on the side of the laptop switched off. It connects to the internet through the attached Ethernet cable, but it looses it's connection for 5 seconds in every 10 seconds. In other words it cycles the connection, on and off, as it does with the WiFi. Again, this also only happens in the third location. Once connected I am again using sudo killall -STOP NetworkManager after the connection is established to stop the cycling. This seems to be something to do with the laptop wifi config in the third location. Addition: I have added one cycle of output, from the system journal, while the system is connecting / disconnecting, below Command, journalctl -ef , produced this output. network is down. This is the end of last connection / disconnection cycle and the beginning of the next May 23 13:46:52 t430 dhcpcd[5050]: ps_bpf_recvbpf: Network is down preamble May 23 13:46:52 t430 kernel: iwlwifi 0000:03:00.0: Radio type=0x1-0x2-0x0May 23 13:46:52 t430 kernel: iwlwifi 0000:03:00.0: Radio type=0x1-0x2-0x0 registering ipv4 address May 23 13:46:52 t430 avahi-daemon[451]: Joining mDNS multicast group on interface wlp3s0.IPv4 with address 192.168.1.106.May 23 13:46:52 t430 avahi-daemon[451]: New relevant interface wlp3s0.IPv4 for mDNS.May 23 13:46:52 t430 avahi-daemon[451]: Registering new address record for 192.168.1.106 on wlp3s0.IPv4. 
withdrawing ipv4 address May 23 13:46:52 t430 avahi-daemon[451]: Withdrawing address record for 192.168.1.106 on wlp3s0.May 23 13:46:52 t430 avahi-daemon[451]: Leaving mDNS multicast group on interface wlp3s0.IPv4 with address 192.168.1.106.May 23 13:46:52 t430 avahi-daemon[451]: Interface wlp3s0.IPv4 no longer relevant for mDNS.May 23 13:46:52 t430 dhcpcd[492]: wlp3s0: deleting route to 192.168.1.0/24May 23 13:46:52 t430 dhcpcd[492]: wlp3s0: deleting default route via 192.168.1.1May 23 13:46:52 t430 NetworkManager[454]: <warn> [1590238012.8707] device (wlp3s0): Activation: failed for connection 'datastream5' registering ipv4 address again May 23 13:46:52 t430 avahi-daemon[451]: Joining mDNS multicast group on interface wlp3s0.IPv4 with address 192.168.1.106.May 23 13:46:52 t430 wpa_supplicant[570]: wlp3s0: Reject scan trigger since one is already pendingMay 23 13:46:52 t430 NetworkManager[454]: <info> [1590238012.8775] device (wlp3s0): state change: failed -> disconnected (reason 'none', sys-iface-state: 'managed')May 23 13:46:52 t430 avahi-daemon[451]: New relevant interface wlp3s0.IPv4 for mDNS.May 23 13:46:52 t430 avahi-daemon[451]: Registering new address record for 192.168.1.106 on wlp3s0.IPv4.May 23 13:46:52 t430 NetworkManager[454]: <info> [1590238012.8778] dhcp4 (wlp3s0): canceled DHCP transactionMay 23 13:46:52 t430 wpa_supplicant[570]: wlp3s0: CTRL-EVENT-REGDOM-CHANGE init=CORE type=WORLDMay 23 13:46:52 t430 NetworkManager[454]: <info> [1590238012.8778] dhcp4 (wlp3s0): state changed bound -> done registering ipv6 address May 23 13:46:52 t430 avahi-daemon[451]: Joining mDNS multicast group on interface wlp3s0.IPv6 with address fe80::fd81:7780:6410:e759.May 23 13:46:52 t430 avahi-daemon[451]: New relevant interface wlp3s0.IPv6 for mDNS.May 23 13:46:52 t430 avahi-daemon[451]: Registering new address record for fe80::fd81:7780:6410:e759 on wlp3s0.*.May 23 13:46:52 t430 avahi-daemon[451]: Withdrawing address record for fe80::fd81:7780:6410:e759 on wlp3s0.May 23 13:46:52 t430 NetworkManager[454]: <info> [1590238012.8822] device (wlp3s0): supplicant interface state: completed -> scanningMay 23 13:46:52 t430 avahi-daemon[451]: Leaving mDNS multicast group on interface wlp3s0.IPv6 with address fe80::fd81:7780:6410:e759.May 23 13:46:52 t430 avahi-daemon[451]: Interface wlp3s0.IPv6 no longer relevant for mDNS.`withdrawing ipv4 address`May 23 13:46:52 t430 avahi-daemon[451]: Withdrawing address record for 192.168.1.106 on wlp3s0.May 23 13:46:52 t430 avahi-daemon[451]: Leaving mDNS multicast group on interface wlp3s0.IPv4 with address 192.168.1.106.May 23 13:46:52 t430 avahi-daemon[451]: Interface wlp3s0.IPv4 no longer relevant for mDNS. new ipv4 mac address May 23 13:46:52 t430 dhcpcd[492]: wlp3s0: new hardware address: ca:b3:01:7c:20:73 disconnected May 23 13:46:56 t430 NetworkManager[454]: <info> [1590238016.0852] device (wlp3s0): supplicant interface state: scanning -> disconnected NetworkManager succeeds at something May 23 13:47:00 t430 systemd[1]: NetworkManager-dispatcher.service: Succeeded.May 23 13:47:00 t430 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 msg='unit=NetworkManager-dispatcher comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'May 23 13:47:00 t430 kernel: audit: type=1131 audit(1590238020.888:153): pid=1 uid=0 auid=4294967295 ses=4294967295 msg='unit=NetworkManager-dispatcher comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success'May 23 13:47:02 t430 NetworkManager[454]: <info> [1590238022.4312] policy: auto-activating connection 'datastream5' (4c200721-a84a-4619-9d08-319531f8c338)May 23 13:47:02 t430 NetworkManager[454]: <info> [1590238022.4322] device (wlp3s0): Activation: starting connection 'datastream5' (4c200721-a84a-4619-9d08-319531f8c338)May 23 13:47:02 t430 NetworkManager[454]: <info> [1590238022.4324] device (wlp3s0): state change: disconnected -> prepare (reason 'none', sys-iface-state: 'managed')May 23 13:47:02 t430 NetworkManager[454]: <info> [1590238022.4335] manager: NetworkManager state is now CONNECTINGMay 23 13:47:02 t430 dhcpcd[492]: wlp3s0: new hardware address: 6c:88:14:62:84:3cMay 23 13:47:02 t430 NetworkManager[454]: <info> [1590238022.4486] device (wlp3s0): set-hw-addr: reset MAC address to 6C:88:14:62:84:3C (preserve) scanning for connection May 23 13:47:02 t430 kernel: iwlwifi 0000:03:00.0: Radio type=0x1-0x2-0x0May 23 13:47:02 t430 kernel: iwlwifi 0000:03:00.0: Radio type=0x1-0x2-0x0May 23 13:47:02 t430 NetworkManager[454]: <info> [1590238022.8441] device (wlp3s0): state change: prepare -> config (reason 'none', sys-iface-state: 'managed')May 23 13:47:02 t430 NetworkManager[454]: <info> [1590238022.8450] device (wlp3s0): Activation: (wifi) access point 'datastream5' has security, but secrets are required.May 23 13:47:02 t430 NetworkManager[454]: <info> [1590238022.8451] device (wlp3s0): state change: config -> need-auth (reason 'none', sys-iface-state: 'managed')May 23 13:47:02 t430 NetworkManager[454]: <info> [1590238022.8491] device (wlp3s0): supplicant interface state: disconnected -> interface_disabledMay 23 13:47:02 t430 NetworkManager[454]: <info> [1590238022.8512] device (wlp3s0): state change: need-auth -> prepare (reason 'none', sys-iface-state: 'managed')May 23 13:47:02 t430 NetworkManager[454]: <info> [1590238022.8519] device (wlp3s0): state change: prepare -> config (reason 'none', sys-iface-state: 'managed')May 23 13:47:02 t430 NetworkManager[454]: <info> [1590238022.8525] device (wlp3s0): Activation: (wifi) connection 'datastream5' has security, and secrets exist. 
No new secrets needed.May 23 13:47:02 t430 NetworkManager[454]: <info> [1590238022.8526] Config: added 'ssid' value 'datastream5'May 23 13:47:02 t430 NetworkManager[454]: <info> [1590238022.8526] Config: added 'scan_ssid' value '1'May 23 13:47:02 t430 NetworkManager[454]: <info> [1590238022.8526] Config: added 'bgscan' value 'simple:30:-70:86400'May 23 13:47:02 t430 NetworkManager[454]: <info> [1590238022.8527] Config: added 'key_mgmt' value 'WPA-PSK WPA-PSK-SHA256 FT-PSK'May 23 13:47:02 t430 NetworkManager[454]: <info> [1590238022.8527] Config: added 'auth_alg' value 'OPEN'May 23 13:47:02 t430 NetworkManager[454]: <info> [1590238022.8527] Config: added 'psk' value '<hidden>'May 23 13:47:02 t430 NetworkManager[454]: <info> [1590238022.8794] device (wlp3s0): supplicant interface state: interface_disabled -> inactiveMay 23 13:47:02 t430 NetworkManager[454]: <info> [1590238022.8920] device (wlp3s0): supplicant interface state: inactive -> scanningMay 23 13:47:06 t430 wpa_supplicant[570]: wlp3s0: SME: Trying to authenticate with 70:4f:57:97:52:06 (SSID='datastream5' freq=2447 MHz)May 23 13:47:06 t430 kernel: wlp3s0: authenticate with 70:4f:57:97:52:06May 23 13:47:06 t430 kernel: wlp3s0: send auth to 70:4f:57:97:52:06 (try 1/3)May 23 13:47:06 t430 kernel: wlp3s0: authenticatedMay 23 13:47:06 t430 kernel: wlp3s0: associate with 70:4f:57:97:52:06 (try 1/3)May 23 13:47:06 t430 NetworkManager[454]: <info> [1590238026.1197] device (wlp3s0): supplicant interface state: scanning -> authenticatingMay 23 13:47:06 t430 wpa_supplicant[570]: wlp3s0: Trying to associate with 70:4f:57:97:52:06 (SSID='datastream5' freq=2447 MHz)May 23 13:47:06 t430 NetworkManager[454]: <info> [1590238026.1226] device (wlp3s0): supplicant interface state: authenticating -> associatingMay 23 13:47:06 t430 kernel: wlp3s0: RX AssocResp from 70:4f:57:97:52:06 (capab=0x1011 status=0 aid=3)May 23 13:47:06 t430 wpa_supplicant[570]: wlp3s0: Associated with 70:4f:57:97:52:06May 23 13:47:06 t430 wpa_supplicant[570]: wlp3s0: CTRL-EVENT-SUBNET-STATUS-UPDATE status=0May 23 13:47:06 t430 kernel: wlp3s0: associatedMay 23 13:47:06 t430 NetworkManager[454]: <info> [1590238026.1533] device (wlp3s0): supplicant interface state: associating -> 4way_handshakeMay 23 13:47:06 t430 wpa_supplicant[570]: wlp3s0: WPA: Key negotiation completed with 70:4f:57:97:52:06 [PTK=CCMP GTK=TKIP]May 23 13:47:06 t430 wpa_supplicant[570]: wlp3s0: CTRL-EVENT-CONNECTED - Connection to 70:4f:57:97:52:06 completed [id=0 id_str=]May 23 13:47:06 t430 dhcpcd[492]: wlp3s0: carrier acquiredMay 23 13:47:06 t430 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): wlp3s0: link becomes readyMay 23 13:47:06 t430 NetworkManager[454]: <info> [1590238026.1680] device (wlp3s0): supplicant interface state: 4way_handshake -> completed ######################################################################################################### May 23 13:47:06 t430 NetworkManager[454]: <info> [1590238026.1681] device (wlp3s0): Activation: (wifi) Stage 2 of 5 (Device Configure) successful. 
Connected to wireless network "datastream5"May 23 13:47:06 t430 NetworkManager[454]: <info> [1590238026.1683] device (wlp3s0): state change: config -> ip-config (reason 'none', sys-iface-state: 'managed')May 23 13:47:06 t430 NetworkManager[454]: <info> [1590238026.1688] dhcp4 (wlp3s0): activation: beginning transaction (timeout in 45 seconds)May 23 13:47:06 t430 avahi-daemon[451]: Joining mDNS multicast group on interface wlp3s0.IPv6 with address fe80::fd81:7780:6410:e759.May 23 13:47:06 t430 avahi-daemon[451]: New relevant interface wlp3s0.IPv6 for mDNS.May 23 13:47:06 t430 avahi-daemon[451]: Registering new address record for fe80::fd81:7780:6410:e759 on wlp3s0.*.May 23 13:47:06 t430 dhcpcd[492]: wlp3s0: IAID 14:62:84:3cMay 23 13:47:06 t430 dhcpcd[492]: wlp3s0: adding address fe80::c685:c3d0:8b05:a9e5May 23 13:47:06 t430 avahi-daemon[451]: Registering new address record for fe80::c685:c3d0:8b05:a9e5 on wlp3s0.*.May 23 13:47:06 t430 NetworkManager[454]: <info> [1590238026.1957] dhcp4 (wlp3s0): option dhcp_lease_time => '86400'May 23 13:47:06 t430 NetworkManager[454]: <info> [1590238026.1958] dhcp4 (wlp3s0): option domain_name_servers => '192.168.1.1 0.0.0.0'May 23 13:47:06 t430 NetworkManager[454]: <info> [1590238026.1958] dhcp4 (wlp3s0): option expiry => '1590324426'May 23 13:47:06 t430 NetworkManager[454]: <info> [1590238026.1958] dhcp4 (wlp3s0): option ip_address => '192.168.1.106'May 23 13:47:06 t430 NetworkManager[454]: <info> [1590238026.1958] dhcp4 (wlp3s0): option requested_broadcast_address => '1'May 23 13:47:06 t430 NetworkManager[454]: <info> [1590238026.1958] dhcp4 (wlp3s0): option requested_domain_name => '1'May 23 13:47:06 t430 NetworkManager[454]: <info> [1590238026.1958] dhcp4 (wlp3s0): option requested_domain_name_servers => '1'May 23 13:47:06 t430 NetworkManager[454]: <info> [1590238026.1958] dhcp4 (wlp3s0): option requested_domain_search => '1'May 23 13:47:06 t430 NetworkManager[454]: <info> [1590238026.1958] dhcp4 (wlp3s0): option requested_host_name => '1'May 23 13:47:06 t430 NetworkManager[454]: <info> [1590238026.1959] dhcp4 (wlp3s0): option requested_interface_mtu => '1'May 23 13:47:06 t430 NetworkManager[454]: <info> [1590238026.1959] dhcp4 (wlp3s0): option requested_ms_classless_static_routes => '1'May 23 13:47:06 t430 NetworkManager[454]: <info> [1590238026.1959] dhcp4 (wlp3s0): option requested_nis_domain => '1'May 23 13:47:06 t430 NetworkManager[454]: <info> [1590238026.1959] dhcp4 (wlp3s0): option requested_nis_servers => '1'May 23 13:47:06 t430 NetworkManager[454]: <info> [1590238026.1959] dhcp4 (wlp3s0): option requested_ntp_servers => '1'May 23 13:47:06 t430 NetworkManager[454]: <info> [1590238026.1959] dhcp4 (wlp3s0): option requested_rfc3442_classless_static_routes => '1'May 23 13:47:06 t430 NetworkManager[454]: <info> [1590238026.1959] dhcp4 (wlp3s0): option requested_root_path => '1'May 23 13:47:06 t430 NetworkManager[454]: <info> [1590238026.1959] dhcp4 (wlp3s0): option requested_routers => '1'May 23 13:47:06 t430 NetworkManager[454]: <info> [1590238026.1960] dhcp4 (wlp3s0): option requested_static_routes => '1'May 23 13:47:06 t430 NetworkManager[454]: <info> [1590238026.1960] dhcp4 (wlp3s0): option requested_subnet_mask => '1'May 23 13:47:06 t430 NetworkManager[454]: <info> [1590238026.1960] dhcp4 (wlp3s0): option requested_time_offset => '1'May 23 13:47:06 t430 NetworkManager[454]: <info> [1590238026.1960] dhcp4 (wlp3s0): option requested_wpad => '1'May 23 13:47:06 t430 NetworkManager[454]: <info> [1590238026.1960] dhcp4 (wlp3s0): option routers => 
'192.168.1.1'May 23 13:47:06 t430 NetworkManager[454]: <info> [1590238026.1960] dhcp4 (wlp3s0): option subnet_mask => '255.255.255.0'May 23 13:47:06 t430 NetworkManager[454]: <info> [1590238026.1960] dhcp4 (wlp3s0): state changed unknown -> boundMay 23 13:47:06 t430 NetworkManager[454]: <info> [1590238026.1975] device (wlp3s0): state change: ip-config -> ip-check (reason 'none', sys-iface-state: 'managed')May 23 13:47:06 t430 avahi-daemon[451]: Joining mDNS multicast group on interface wlp3s0.IPv4 with address 192.168.1.106.May 23 13:47:06 t430 avahi-daemon[451]: New relevant interface wlp3s0.IPv4 for mDNS.May 23 13:47:06 t430 avahi-daemon[451]: Registering new address record for 192.168.1.106 on wlp3s0.IPv4.May 23 13:47:06 t430 dbus-daemon[453]: [system] Activating via systemd: service name='org.freedesktop.nm_dispatcher' unit='dbus-org.freedesktop.nm-dispatcher.service' requested by ':1.13' (uid=0 pid=454 comm="/usr/bin/NetworkManager --no-daemon ")May 23 13:47:06 t430 systemd[1]: Condition check resulted in First Boot Wizard being skipped.May 23 13:47:06 t430 systemd[1]: Condition check resulted in File System Check on Root Device being skipped.May 23 13:47:06 t430 systemd[1]: Condition check resulted in Rebuild Dynamic Linker Cache being skipped.May 23 13:47:06 t430 systemd[1]: Condition check resulted in Store a System Token in an EFI Variable being skipped.May 23 13:47:06 t430 systemd[1]: Condition check resulted in Rebuild Hardware Database being skipped.May 23 13:47:06 t430 systemd[1]: Condition check resulted in Rebuild Journal Catalog being skipped.May 23 13:47:06 t430 systemd[1]: Condition check resulted in Commit a transient machine-id on disk being skipped.May 23 13:47:06 t430 systemd[1]: Condition check resulted in Create System Users being skipped.May 23 13:47:06 t430 systemd[1]: Condition check resulted in Update is Completed being skipped.May 23 13:47:06 t430 systemd[1]: Starting Network Manager Script Dispatcher Service...May 23 13:47:06 t430 dbus-daemon[453]: [system] Successfully activated service 'org.freedesktop.nm_dispatcher'May 23 13:47:06 t430 systemd[1]: Started Network Manager Script Dispatcher Service.May 23 13:47:06 t430 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 msg='unit=NetworkManager-dispatcher comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'May 23 13:47:06 t430 NetworkManager[454]: <info> [1590238026.2125] device (wlp3s0): state change: ip-check -> secondaries (reason 'none', sys-iface-state: 'managed')May 23 13:47:06 t430 NetworkManager[454]: <info> [1590238026.2128] device (wlp3s0): state change: secondaries -> activated (reason 'none', sys-iface-state: 'managed')May 23 13:47:06 t430 NetworkManager[454]: <info> [1590238026.2132] manager: NetworkManager state is now CONNECTED_LOCALMay 23 13:47:06 t430 NetworkManager[454]: <info> [1590238026.2143] manager: NetworkManager state is now CONNECTED_SITEMay 23 13:47:06 t430 NetworkManager[454]: <info> [1590238026.2145] policy: set 'datastream5' (wlp3s0) as default for IPv4 routing and DNSMay 23 13:47:06 t430 kernel: audit: type=1130 audit(1590238026.208:154): pid=1 uid=0 auid=4294967295 ses=4294967295 msg='unit=NetworkManager-dispatcher comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success'May 23 13:47:06 t430 dunst[1185]: WARNING: No icon found in path: 'nm-signal-75'May 23 13:47:06 t430 NetworkManager[454]: <info> [1590238026.2232] device (wlp3s0): Activation: successful, device activated.May 23 13:47:06 t430 dhcpcd[492]: wlp3s0: rebinding lease of 192.168.1.106May 23 13:47:06 t430 NetworkManager[454]: <info> [1590238026.3305] manager: NetworkManager state is now CONNECTED_GLOBALMay 23 13:47:06 t430 dhcpcd[492]: wlp3s0: leased 192.168.1.106 for 86400 secondsMay 23 13:47:06 t430 dhcpcd[492]: wlp3s0: adding route to 192.168.1.0/24May 23 13:47:06 t430 dhcpcd[492]: wlp3s0: adding default route via 192.168.1.1May 23 13:47:06 t430 systemd[1]: systemd-hostnamed.service: Succeeded.May 23 13:47:06 t430 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 msg='unit=systemd-hostnamed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'May 23 13:47:06 t430 kernel: audit: type=1131 audit(1590238026.351:155): pid=1 uid=0 auid=4294967295 ses=4294967295 msg='unit=systemd-hostnamed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'May 23 13:47:06 t430 audit: AUDIT1334 prog-id=19 op=UNLOADMay 23 13:47:06 t430 audit: AUDIT1334 prog-id=18 op=UNLOADMay 23 13:47:06 t430 kernel: audit: type=1334 audit(1590238026.391:156): prog-id=19 op=UNLOADMay 23 13:47:06 t430 kernel: audit: type=1334 audit(1590238026.391:157): prog-id=18 op=UNLOADMay 23 13:47:06 t430 wpa_supplicant[570]: wlp3s0: CTRL-EVENT-SIGNAL-CHANGE above=1 signal=-60 noise=9999 txrate=13000May 23 13:47:06 t430 dhcpcd[492]: wlp3s0: soliciting an IPv6 routerMay 23 13:47:08 t430 dhcpcd[492]: wlp3s0: Router Advertisement from fe80::724f:57ff:fe97:5206May 23 13:47:08 t430 NetworkManager[454]: <info> [1590238028.0508] dhcp6 (wlp3s0): activation: beginning transaction (timeout in 45 seconds)May 23 13:47:08 t430 NetworkManager[454]: <warn> [1590238028.0510] device (wlp3s0): failure to start DHCPv6: failed to start client: Address already in useMay 23 13:47:08 t430 NetworkManager[454]: <info> [1590238028.0510] device (wlp3s0): state change: activated -> failed (reason 'dhcp-start-failed', sys-iface-state: 'managed')May 23 13:47:08 t430 NetworkManager[454]: <info> [1590238028.0526] manager: NetworkManager state is now DISCONNECTEDMay 23 13:47:08 t430 kernel: wlp3s0: deauthenticating from 70:4f:57:97:52:06 by local choice (Reason: 3=DEAUTH_LEAVING)May 23 13:47:08 t430 dunst[1185]: WARNING: No icon found in path: 'nm-no-connection'May 23 13:47:08 t430 dhcpcd[492]: wlp3s0: soliciting a DHCPv6 leaseMay 23 13:47:08 t430 wpa_supplicant[570]: wlp3s0: CTRL-EVENT-DISCONNECTED bssid=70:4f:57:97:52:06 reason=3 locally_generated=1May 23 13:47:08 t430 dhcpcd[492]: wlp3s0: carrier lostMay 23 13:47:08 t430 avahi-daemon[451]: Interface wlp3s0.IPv6 no longer relevant for mDNS.May 23 13:47:08 t430 avahi-daemon[451]: Leaving mDNS multicast group on interface wlp3s0.IPv6 with address fe80::fd81:7780:6410:e759.May 23 13:47:08 t430 avahi-daemon[451]: Interface wlp3s0.IPv4 no longer relevant for mDNS.May 23 13:47:08 t430 avahi-daemon[451]: Leaving mDNS multicast group on interface wlp3s0.IPv4 with address 192.168.1.106.May 23 13:47:08 t430 NetworkManager[454]: <info> [1590238028.0927] device (wlp3s0): set-hw-addr: set MAC address to 66:51:DB:82:D6:15 (scanning)May 23 13:47:08 t430 avahi-daemon[451]: Withdrawing address record for fe80::c685:c3d0:8b05:a9e5 on wlp3s0.May 23 13:47:08 t430 avahi-daemon[451]: Withdrawing address record for fe80::fd81:7780:6410:e759 on wlp3s0.May 
23 13:47:08 t430 avahi-daemon[451]: Withdrawing address record for 192.168.1.106 on wlp3s0.May 23 13:47:08 t430 dhcpcd[5441]: ps_bpf_recvbpf: Network is down
@eblock's comment on the question, above, suggested looking at the system log. I did this and identified, at least in part, a working solution for the problematic third location, as described above. The network connection cycling has stopped and the connection is now stable. This is not to say that I have got to the root of the problem, but I may have. Here is the solution I have: re-booted the laptop and let it begin cycling the wi-fi network connection on and off. In a terminal, typed journalctl -ef and watched the live system journal output cycle through the connection and disconnection steps as outlined above. Output the journal to a text file with journalctl -ef > journal_output.txt , opened this in the vim text editor and identified the part relating to one cycle through connect and disconnect only, about 14 seconds of information. I have added this system journal output to the question above, as an amendment. From this output I was able to identify that either the router was not issuing an IPv6 address, or that the laptop was, for whatever reason, not able to set up an IPv6 address, and that this was causing NetworkManager to disconnect the network connection. This problem might have been caused by a recent router firmware upgrade; however, I have an identical laptop, with more or less identical software installed, which does not have this network connection issue. Also, other devices on the router LAN do not experience this cyclic connect / disconnect problem. That said, I think it may (I'm not certain on this) have been at about the time of the router firmware upgrade that this problem began. The fix: To fix this I simply disabled IPv6, for both wifi and ethernet cable connections, in the nm-applet network manager GUI tab, for the third location only. After disabling, both wifi and cable ethernet connections connect to the network in a durable, consistent way that does not cycle through connect and disconnect.
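For reference, the same change can be made from the command line with nmcli. A sketch, assuming the connection profile is named datastream5 as it appears in the logs above (older NetworkManager releases only accept ignore where newer ones accept disabled for ipv6.method):
nmcli connection modify datastream5 ipv6.method disabled
nmcli connection down datastream5 && nmcli connection up datastream5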
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/588333", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/46470/" ] }
588,435
ip a produces bunch IP Addresses and interfaces status With awk '/inet / {print $2}' , I manage to get the IP and the subnet wolf@linux:~$ ip a | awk '/inet / {print $2}'127.0.0.1/810.10.0.1/2410.10.1.1/24wolf@linux:~$ Then, I pipe it to another awk to remove the CIDR notation. wolf@linux:~$ ip a | awk '/inet / {print $2}' | awk -F / '{print $1}'127.0.0.110.10.0.110.10.1.1wolf@linux:~$ Is it possible to do this without piping the awk output to another awk? Desired Output wolf@linux:~$ ip a | <awk syntax here without additional piping>127.0.0.110.10.0.110.10.1.1wolf@linux:~$
ip a | awk '/inet / {FS="/"; $0=$2; print $1; FS=" "}' The first matching record is split into fields according to the default FS (space). Then a new FS is set. When we substitute $0=$2 , splitting is done again according to the new FS . Now $1 contains what we want, we print it. Finally we set FS for the next matching record.
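If juggling FS like that feels fragile, a possible alternative (same idea, but keeping the default field splitting and simply stripping the prefix length with sub()) is:
ip a | awk '/inet / {sub(/\/.*/, "", $2); print $2}'
Here sub() deletes everything from the first / onwards in the second field before printing it, so no second awk is needed either.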
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/588435", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/409008/" ] }
588,458
I have a vlsi code like this. instance1/instance2/instance3/and_gate_inst/and_gate/a_pininstance1/instance2/or_gate/b_pin Now can you help me to get something like below: instance1/instance2/instance3/and_gate_inst/and_gate instance1/instance2/or_gate Sorry for the confusion, above is the right question...
sed 's|/[^/]*$||' Where / matches literal / [^/]* matches zero or more non- / characters $ matches the end of the line
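For example, applied to the first line from the question:
$ printf '%s\n' 'instance1/instance2/instance3/and_gate_inst/and_gate/a_pin' | sed 's|/[^/]*$||'
instance1/instance2/instance3/and_gate_inst/and_gate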
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/588458", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/414110/" ] }
588,464
I downloaded many YouTube videos and want to process them using bash scripts. However the filenames used contain all kinds of special and non-ASCII characters. How do I handle this in a bash script? Lets say I want to create a symbolic link to each such file in a folder: # Write filenames to filelist.txt in parent folderls ./* > ../filelist.txt# Create sym links for all files in filelist.txtcounter=0while read video_name; do counter=$((counter+1)); ln -s $video_name link_name_${counter}.mp4done < ../filelist.txt The above function is not working due to the special characters in the filename. Here are some example filenames: पेट (Stomach) कम करने के लिए 5 योग आसन-3G4pEY5njYE.mp4मन शांत करने के लिए करे वृक्षासन योग _ स्वामी रामदेव-sPytQlaxoIg.mp4वृक्षासन करने का तरीका और फायदे _ Swami Ramdev-A-2d04ON9hA.mp4 Bonus: I also would like to have "leading zeros" when printing the counter variable, but that's not crucial.
Variables in the shell can contain any character, except for the NUL character, just like filenames in the filesystem. You should therefore not have any problem storing the filenames in variables, unless you read the mangled output of ls , which will possibly be modified for display purposes ( ls output is strictly for looking at). In the edited question, you additionally read the filenames from a text file with read and the default value of $IFS (which determines aspects of how read works). This would strip flanking whitespace from the lines read from the file, and may interpret the \ character specially if it occurs in the input. Also note that technically, filenames may contain newline characters, so storing them as a newline-delimited list (lines in a text file) limits the types of names that can be used. You also need to quote the expansion of variables. You have filenames with spaces in them, and without quoting the $video value, the shell would split these up in to multiple words and give these words (after additionally performing filename globbing with these as patterns) as separate arguments to ln -s . Don't use ls to generate the list of the filenames, and quote the expansions of all variables: counter=0for video in ./*; do counter=$(( counter + 1 )) ln -s -- "$video" "link_name_$counter.mp4"done Note that the above code would generate the symbolic links in the current directory. If you run this a second time, it would pick up these links and create further links to those symbolic links. It would be better to create the links in a separate directory, to be more careful with the filename globbing pattern used with the loop so that the links are avoided, or explicitly test for links in the loop and skip these. counter=0for video in ./*; do [ -L "$video" ] && continue # skip symbolic links counter=$(( counter + 1 )) ln -s -- "$video" "link_name_$counter.mp4"done To get a zero-filled counter with four digits, you may use printf -v zcounter '%.4d' "$counter" This prints the re-formatted counter directly to the zcounter variable. You would then use that variable in generating the filename. Or you could just generate the name of the symbolic link in one go in this way: counter=0for video in ./*; do [ -L "$video" ] && continue # skip symbolic links counter=$(( counter + 1 )) printf -v linkname 'link_name_%.4d.md4' "$counter" ln -s -- "$video" "$linkname"done See also: Why *not* parse `ls` (and what to do instead)? Why does my shell script choke on whitespace or other special characters? When is double-quoting necessary?
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/588464", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/103565/" ] }
588,586
When running this script: #!/usr/bin/env python3f = open("foo", "w")f.write("1"*10000000000)f.close()print("closed") I can observe the following process on my Ubuntu machine: The memory fills with 10GB.The Page Cache fills with 10GB of dirty pages. (/proc/meminfo)"closed" is printed and the script terminates.A while after, the dirty pages decrease. However, if file "foo" already exists, close() blocks until all dirty pages have been written back. What is the reason for this behavior? This is the strace if the file does NOT exist: openat(AT_FDCWD, "foo", O_WRONLY|O_CREAT|O_TRUNC|O_CLOEXEC, 0666) = 3fstat(3, {st_mode=S_IFREG|0664, st_size=0, ...}) = 0ioctl(3, TCGETS, 0x7ffd50dc76f0) = -1 ENOTTY (Inappropriate ioctl for device)lseek(3, 0, SEEK_CUR) = 0ioctl(3, TCGETS, 0x7ffd50dc76c0) = -1 ENOTTY (Inappropriate ioctl for device)lseek(3, 0, SEEK_CUR) = 0lseek(3, 0, SEEK_CUR) = 0mmap(NULL, 10000003072, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7fcd9892e000mmap(NULL, 10000003072, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7fcb4486f000write(3, "11111111111111111111111111111111"..., 10000000000) = 2147479552write(3, "11111111111111111111111111111111"..., 7852520448) = 2147479552write(3, "11111111111111111111111111111111"..., 5705040896) = 2147479552write(3, "11111111111111111111111111111111"..., 3557561344) = 2147479552write(3, "11111111111111111111111111111111"..., 1410081792) = 1410081792munmap(0x7fcb4486f000, 10000003072) = 0munmap(0x7fcd9892e000, 10000003072) = 0close(3) = 0write(1, "closed\n", 7closed) = 7rt_sigaction(SIGINT, {sa_handler=SIG_DFL, sa_mask=[], sa_flags=SA_RESTORER, sa_restorer=0x7fcfedd5cf20}, {sa_handler=0x62ffc0, sa_mask=[], sa_flags=SA_RESTORER, sa_restorer=0x7fcfedd5cf20}, 8) = 0sigaltstack(NULL, {ss_sp=0x2941be0, ss_flags=0, ss_size=8192}) = 0sigaltstack({ss_sp=NULL, ss_flags=SS_DISABLE, ss_size=0}, NULL) = 0exit_group(0) = ?+++ exited with 0 +++ This is the strace if it exists: openat(AT_FDCWD, "foo", O_WRONLY|O_CREAT|O_TRUNC|O_CLOEXEC, 0666) = 3fstat(3, {st_mode=S_IFREG|0664, st_size=0, ...}) = 0ioctl(3, TCGETS, 0x7fffa00b4fe0) = -1 ENOTTY (Inappropriate ioctl for device)lseek(3, 0, SEEK_CUR) = 0ioctl(3, TCGETS, 0x7fffa00b4fb0) = -1 ENOTTY (Inappropriate ioctl for device)lseek(3, 0, SEEK_CUR) = 0lseek(3, 0, SEEK_CUR) = 0mmap(NULL, 10000003072, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7f71de68b000mmap(NULL, 10000003072, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7f6f8a5cc000write(3, "11111111111111111111111111111111"..., 10000000000) = 2147479552write(3, "11111111111111111111111111111111"..., 7852520448) = 2147479552write(3, "11111111111111111111111111111111"..., 5705040896) = 2147479552write(3, "11111111111111111111111111111111"..., 3557561344) = 2147479552write(3, "11111111111111111111111111111111"..., 1410081792) = 1410081792munmap(0x7f6f8a5cc000, 10000003072) = 0munmap(0x7f71de68b000, 10000003072) = 0close(3#### strace will block exactly here until write-back is completed ####) = 0 write(1, "closed\n", 7closed) = 7rt_sigaction(SIGINT, {sa_handler=SIG_DFL, sa_mask=[], sa_flags=SA_RESTORER, sa_restorer=0x7f7433ab9f20}, {sa_handler=0x62ffc0, sa_mask=[], sa_flags=SA_RESTORER, sa_restorer=0x7f7433ab9f20}, 8) = 0sigaltstack(NULL, {ss_sp=0x1c68be0, ss_flags=0, ss_size=8192}) = 0sigaltstack({ss_sp=NULL, ss_flags=SS_DISABLE, ss_size=0}, NULL) = 0exit_group(0) = ?+++ exited with 0 +++ The same behaviour can be observed when simply printing and piping into a file instead of using python file-io, as well as 
when doing the same with a small equivalent C++ program printing to cout. It seems to be the actual system call that blocks.
That sounds like a reminder of the O_PONIES fiasco, which just recently had its 11th birthday. Before ext4 came, ext3 had acquired a sort of a reputation for being stable in the face of power losses. It seldom broke, it seldom lost data from files. Then, ext4 added delayed allocation of data blocks, meaning that it didn't even try to write file data to disk immediately. Normally, that's not a problem as long as the data gets there at some point, and for temporary files, it might turn out that there was no need to write the data to disk at all. But ext4 did write metadata changes, and recorded that something had changed with the file. Now, if the system crashed, the file was marked as truncated, but the writes after that weren't stored on disk (because no blocks were allocated for them). Hence, on ext4, you'd often see recently-modified files truncated to a zero length after a crash. That, of course was not exactly what most users wanted, but the argument was made that application programs that cared about their data so much, should have called fsync() , and if they actually cared about renames , they should fsync() (or at least fdatasync() ) the containing directory too. Next to no-one did that, though, partly because on ext3, an fsync() synced the whole disk, possibly including large amounts of unrelated data. (Or as close to the whole disk that the difference doesn't matter anyway.) Now, on one hand, you had ext3 which performed poorly with fsync() and on the other, ext4 that required fsync() to not lose files. Not a nice situation, considering that most application programs would care to implement filesystem-specific behavior even less than the rigid dance with calling fsync() at just the right moments. Apparently it wasn't even easy to figure out if a filesystem was mounted as ext3 or ext4 in the first place. In the end, the ext4 developers made some changes to the most common critical-seeming cases Renaming a file on top of another. On a running system, this is an atomic update and is commonly used to put a new version of a file in place. Overwriting an existing file (your case). This isn't atomic on a running system, but usually means the application wants the file replaced, not truncated. If an overwrite is botched, you'd lose the old version of the file too, so this is a bit different from creating a completely new file where a power-out would only lose the most recent data. As far as I can remember, XFS also exhibited similar zero-length files after a crash even before ext4. I never followed that, though, so I don't know what sorts of fixes they'd have done. See, e.g. this article on LWN, which mentions the fixes: ext4 and data loss (March 2009) There were other writings about that at the time, of course, but I'm not sure it's useful to link to them, as it's mostly a question of pointing fingers.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/588586", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/414224/" ] }
588,600
This came from ~/.bashrc PS1='${debian_chroot:+($debian_chroot)}\[\033[01;32m\]\u@\h\[\033[00m\]:\[\033[01;34m\]\w\[\033[00m\]\$ ' Notice the \033[01;32m I know \033[ is a Control Sequence Introducer.I know 32 is the color code for green. But, what are the 01; and m ? Which part of ANSI escape code does \033[01;32m belongs to.
The whole \033[01;32m is a CSI control sequence whose final byte m makes it an SGR (Select Graphic Rendition) sequence, the part of the ANSI/ECMA-48 escape codes that sets display attributes. Everything between the \033[ and the m is a list of numeric parameters separated by ; , so here there are two of them: 01 selects bold (increased intensity) and 32 selects a green foreground. The 01; is therefore not part of the colour code itself; it is a separate attribute that gets combined with the colour. \033[00m later in the prompt is the same kind of sequence with the single parameter 0 , which resets all attributes to the default. The \[ and \] around each sequence are not escape codes either; they are bash prompt markers telling readline that the enclosed bytes print nothing, so the prompt length is calculated correctly.
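You can see the effect of each piece directly in a terminal, for example:
$ printf '\033[32mgreen\033[00m \033[01;32mbold green\033[00m back to normal\n'
The first word is printed in plain green, the second in bold green, and the reset (parameter 0) returns the rest of the line to the default attributes.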
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/588600", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/383004/" ] }
588,629
Some applications, like ssh have a unit file that ends with @, like ssh.service and [email protected] . They contain different contents, but I cannot understand what exactly is the difference in functionality or purpose. Is it some naming convention I'm not aware of?
As others have mentioned, it's a service template. In the specific case of [email protected] , it's for invoking sshd only on-demand, in the style of classic inetd services. If you expect SSH connections to be rarely used, and want to absolutely minimize sshd 's system resource usage (e.g. in an embedded system), you could disable the regular ssh.service and instead enable ssh.socket . The socket will then automatically start up an instance of [email protected] (which runs sshd -i ) whenever an incoming connection to TCP port 22 (the standard SSH port) is detected. This will slow down the SSH login process, but will remove the need to run sshd when there are no inbound SSH connections.
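For example, on Debian/Ubuntu (where the units are named ssh.service and ssh.socket; several other distributions call them sshd.service and sshd.socket), the switch-over described above would be:
sudo systemctl disable --now ssh.service
sudo systemctl enable --now ssh.socket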
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/588629", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/244418/" ] }
588,651
I want to perform what some data analysis software call an anti-join: remove from one list those lines matching lines in another list. Here is some toy data and the expected output: $ echo -e "a\nb\nc\nd" > list1$ echo -e "c\nd\ne\nf" > list2$ antijoincommand list1 list2ab
I wouldn't use join for this because join requires input to be sorted, which is an unnecessary complication for such a simple job. You could instead use grep : $ grep -vxFf list2 list1ab Or awk : $ awk 'NR==FNR{++a[$0]} !a[$0]' list2 list1ab If the files are already sorted, an alternative to join -v 1 would be comm -23 $ comm -23 list1 list2 ab
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/588651", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/366162/" ] }
588,658
How do I configure an Ubuntu 20.04 system so it overrides the default DNS? It seems that by default there is a global and per-link DNS setting. I tried a couple of things that did not work: Edit /etc/systemd/resolved.conf with the DNS Servers Created /etc/systemd/network/enp0s3.conf with the DNS serversconfigured Removed all DNS related parameters from the DHCP request by editing /etc/dhcp/dhclient.conf All these changes (and the combinations) result in the DNS servers being prepended to the list of global DNS servers. Most 'solutions' are to either install resolvconf or replace the /etc/resolv.conf symbolic link with a file and set the DNS servers there. Both of these seem like a workaround. I would like to use the existing tooling ( systemd-resolved ) to override the DNS Servers. As suggested by @xenoid in the comments:Setting the DNS for the interface through the GUI resulted in a file /etc/NetworkManager/system-connections/enp0s3.nmconnection that contains the correct DNS servers, the output of resolvectl status includes the correct DNS servers, this however is not what I had in mind.I am looking for a solution that does the configuration using systemd-resolved , which is possible from what I can find, but it is unclear how. Since this requires a GUI installation.
Update /etc/systemd/resolved.conf [Resolve]DNS=1.1.1.1 8.8.8.8FallbackDNS=8.8.4.4 Restart system resolved: service systemd-resolved restart Run systemd-resolve --status the output should look like this: Global DNS Servers: 1.1.1.1 8.8.8.8...
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/588658", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/414242/" ] }
588,714
I am solving a Hackerrank question where the output is A 25 27 50;B 35 37 75C 75 78 80;D 99 88 76 for input A 25 27 50B 35 37 75C 75 78 80D 99 88 76. I am using ORS to do the above task. But I don't know why runtime error is coming? awk 'NR%2 == 1?ORS=";":ORS="\n"' Error coming is awkNR: cmd. line:1: Possible syntax error
Busybox awk seems to need parentheses around the last two operands. I get the same error with $ busybox awk 'NR%2 == 1?ORS=";":ORS="\n"' fileawkNR: cmd. line:1: Possible syntax error but it works with $ busybox awk 'NR%2 == 1?(ORS=";"):(ORS="\n")' fileA 25 27 50;B 35 37 75C 75 78 80;D 99 88 76
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/588714", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/412804/" ] }
588,734
What code can my program call to behave similarly to pressing Ctrl + D in a terminal? That is, to cause a read function called on STDIN to return 0 in a child process, but without having this "STDIN" file descriptor closed in the parent process? I'm trying to understand how the EOF condition is communicated to a process in Linux. It appears read returning 0 is only advisory, and you can actually carry on reading after encountering such an EOF condition. Consider the following program: #include <unistd.h>#include <stdio.h>void terminate_buf(char *buf, ssize_t len){ if(len > 0) { buf[len-1] = '\0'; } else { buf[0] = '\0'; }}int main(){ int r; char buf[1024]; while(1) { r = read(0, buf, 1024); terminate_buf(buf, r); printf("read %d bytes: %s\n", r, buf); } return 0;} If you compile ( gcc -o reader reader.c , assuming you named it reader.c) and run this program in a terminal, then press Ctrl + D , then input foobar Enter , you will see this: $ ./reader read 0 bytes: foobarread 7 bytes: foobar indicating it's totally possible to read meaningful data after an EOF event has occurred. You can press Ctrl + D multiple times on a line of its own, followed by some text, and reader will carry on reading your data as if nothing had happened (after printing "read 0 bytes: " for each press of Ctrl + D ). The file descriptor remains open. So what's going on here and how can I replicate this behaviour? How can I cause a child process to see some data after seeing EOF? Is there a way to do it using just regular file I/O (and perhaps ioctl s), or do I need to open a pty? Calling write with a count of 0 doesn't seem to work, read doesn't even return on the other end. What is EOF, really?
EOF is not a character that travels in the data stream, and read() returning 0 is not a sticky property of the file descriptor; it just means "the writer has nothing more for you at this moment". For a regular file it means you are at the current end of the file (which can still grow), for a pipe it means every write end has been closed, and for a terminal it is manufactured by the tty line discipline: pressing Ctrl+D at the start of a line commits the empty pending line, so the reader's next read() returns 0 while the descriptor stays open and later input is delivered as usual. That is exactly what your reader program shows; programs like cat stop at the first zero-length read purely by convention. To make a child process "see EOF" and then more data, put a terminal (in practice a pseudoterminal) on the child's stdin and write the terminal's EOF character (VEOF, normally 0x04, i.e. ^D) to the pty master while the input line is empty. With an ordinary pipe there is no equivalent, because read() only returns 0 once all write ends are gone, after which no further data can arrive; and a write() of 0 bytes produces nothing on the reading side, as you observed.
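As an illustration, here is a minimal sketch of the pseudoterminal approach in Python (hypothetical demo code, not taken from the original discussion; it relies on the pty slave staying in its default canonical mode, where ^D is the VEOF character):
import os, pty

master, slave = pty.openpty()              # create a pseudoterminal pair
pid = os.fork()
if pid == 0:                               # child: stdin becomes the pty slave
    os.close(master)
    os.dup2(slave, 0)
    print("read %d bytes" % len(os.read(0, 1024)))   # prints 0: the "EOF"
    print("then read %r" % os.read(0, 1024))         # data still arrives afterwards
    os._exit(0)

os.close(slave)
os.write(master, b"\x04")                  # VEOF (^D) on an empty line -> read() returns 0
os.write(master, b"foobar\n")              # later input is delivered normally
os.waitpid(pid, 0)
The same idea can be expressed in C with openpty()/forkpty() from libutil.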
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/588734", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/285049/" ] }
588,761
Any command line tools to rename files that don't already contain a string or a set of strings to then include that string? So all files in a given folder should have high in the name for "high priority", but some do and some don't. Rather than add it to all files, how may I first check to see if a file has high in the name then, if it doesn't, add it to the beginning of the name?
There is no need for a separate tool or a separate "check first" step: a plain shell loop can test each name with case and rename only the files whose name does not already contain the string, as sketched below.
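A minimal sketch, assuming the files are in the current directory and that the marker should be prepended as a high_ prefix (adjust the new name to whatever convention you use):
for f in ./*; do
    [ -f "$f" ] || continue                 # skip anything that is not a regular file
    name=${f##*/}
    case $name in
        *high*) ;;                          # name already contains "high": leave it alone
        *) mv -- "$f" "./high_$name" ;;
    esac
done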
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/588761", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/409618/" ] }
588,789
I chrooted into my testsystem with mount /dev/vg0/vm01.buster-test-disk /media/vm01.buster-test-disk/mount -t proc none /media/vm01.buster-test-disk/procmount --bind /dev /media/vm01.buster-test-disk/devmount -t sysfs sysfs /media/vm01.buster-test-disk/syschroot /media/vm01.buster-test-disk/ /bin/bash adapted the Hostname and exit hostname buster-testecho buster-test > /etc/hostnameecho "127.0.0.1 buster-test" >> /etc/hostsexit unmount umount /media/vm01.buster-test-disk/procumount /media/vm01.buster-test-disk/devumount /media/vm01.buster-test-disk/sysumount -l /media/vm01.buster-test-disk Problem now the host has its hostname set to buster-test even if I login in in another shell Why did the hostname change? And are there other things that could change outside the chroot, when doing stuff inside?
Running hostname buster-test changed the hostname in the running kernel (on Linux, in the current UTS namespace ). chroot on its own doesn’t control that at all, so the hostname change was visible outside too. When you use chroot , you’re only limiting access to a portion of the file system; anything which isn’t managed in a file system won’t be constrained to the “environment” created by chroot . This includes network setup, the date and time, user permissions, etc. To constrain such changes, you need to use namespaces (or similar technologies on non-Linux systems); on Linux, you can isolate processes by running them with unshare .
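To make this concrete, the chroot from the question could be entered inside a fresh UTS namespace, so that the hostname change stays contained. A sketch using util-linux's unshare (run as root):
unshare --uts chroot /media/vm01.buster-test-disk/ /bin/bash
hostname buster-test   # inside: only this UTS namespace sees the new name
exit                   # outside, the host's hostname is unchanged
Note that edits to files such as /etc/hostname inside the chroot still persist on the test system's disk, which is presumably what you want.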
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/588789", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/20661/" ] }
588,892
I am calling a bash script that has a number of flags in the following manner: /home/username/myscript -a -b 76 So to illustrate, the -a flag does something in myscript and the -b flag sets some parameter to 76 in myscript. How do I make these flags conditional based on some previously defined variable? For example, I only want to use the -a flag if some variable var=ON, else I don't what to use that flag. I am looking for something like: var=ON/home/username/myscript if [ "$var" == "ON" ]; then-afi-b 76 This scheme did not work when implemented, however. What can be used to accomplish the desired result?
You can build the command in an array: #!/bin/bashvar=ONcmd=( /home/username/myscript ) # create array with one elementif [ "$var" == "ON" ]; then cmd+=( -a ) # append to the arrayficmd+=( -b 76 ) # append two elements And then run it with: "${cmd[@]}" Note the quotes around the last part, and the parenthesis around the assignments above. The syntax is ugly, but it works in the face of arguments containing white space and such. (Use quotes to add arguments with spaces, e.g. cmd+=("foo bar") ) Related, with less ugly methods and ways they can fail: How can we run a command stored in a variable? Using shell variables for command options In simple cases, like that one optional argument here, you could get away with the alternate value expansion: var=xmyscript ${var:+"-a"} -b 76 Here, ${var:+foo} inserts foo if var is not empty (so var=ON , var=FALSE , var=x would all insert it), and nothing if it is empty or unset ( var= , or unset var ). Be careful with the usual quoting issues.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/588892", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/373894/" ] }
588,906
GNU awk manual on nextfile reads: NOTE: For many years, nextfile was a common extension. In September 2012, it was accepted for inclusion into the POSIX standard. See the Austin Group website . Likewise, mawk manual says: Nextfile is a gawk extension (also implemented by BWK awk), is not yetpart of the POSIX standard (as of October 2012), although it has beenaccepted for the next revision of the standard. What confuses me is that there is no mention of nextfile in the latest POSIX specification , from 2018. Following the link to the Austin Group, you find that the issue was resolved in 2012 (with even a final accepted text), but only applied in 2020(!). All in all, does it mean nextfile is an awk's feature specified by POSIX? Or will it only be so in a future POSIX version? (For practical purposes, nextfile is also to be found in BSD awk .) Two more statements are in the same situation as nextfile : fflush and delete ( delete is already specified, but is to be expanded so as to be able to delete an entire array).
You'll see that bug 607 is targetted for Issue 8, not released yet (see the issue8 Tags ). Issue 7 was released in 2008, there have been a few newer editions of issue 7, latest in 2018, but those are technical corrigenda, they don't bring new features. nextfile is not only a new feature but also breaks backward compatibility as awk '{nextfile = 1}' and awk '{nextfile}' are valid awk invocations which in the current POSIX version set and retrieve the value of a nextfile variable respectively, so it could possibly not be added as part of a technical corrigendum. What could be added (and probably should have) in a TC is to tell people that nextfile is a word reserved for future use so that people should not use it in their variable or function names, as a script that does awk '{nextfile = 1}' , though perfectly standard, does not work in many awk implementations (that's not limited to nextfile btw). You can check a HTML rendition of the awk part of the 2018 edition of Issue 7 of the Single UNIX Specification at https://pubs.opengroup.org/onlinepubs/9699919799.2018edition/utilities/awk.html (note the .2018edition part), though note that even though it is published by the Opengroup, the HTML version has no value of standard, only the PDF does (you need to register with them to get access to it). They're meant to be equivalent, though there have been several bugs in the conversion to HTML in the past which have caused sections to be missing (though they're generally fixed quickly when spotted), so when in doubt, best is to check the PDF.
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/588906", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/388654/" ] }
589,007
I have two files on my Linux machine. The first "list.txt" contains a list of objects (2649 objects) while the second "list_interactors.txt" contains a shorter list with some of the objects in the previously list (719 objects) and for each of these there are in other columns some variables associated. I would like to obain a list of all the objects (2649) with the associated variable for the specific objects in file "list_interactors". Example: file list.txt 6tyr_A_002__________7yer_2_009__________3erf_1_001__________2dr5_D_2-3__________ file list_interactors.txt 6tyr_A_002__________ 6tyr1_B QRT54R AAAAA3erf_1_001__________ 3erf2_B QAEF6R XXXXX output.txt 6tyr_A_002__________ 6tyr1_B QRT54R AAAAA7yer_2_009__________3erf_1_001__________ 3erf2_B QAEF6R XXXXX2dr5_D_2-3__________ I'm not very pratical of the programming languages. I try to use the function grep with this script: grep -f list.txt list_interactors.txt but the output is a file like the file "list_interactors.txt". Could you help me please?
$ join -a 1 <( sort list.txt ) <( sort list_interactors.txt )2dr5_D_2-3__________3erf_1_001__________ 3erf2_B QAEF6R XXXXX6tyr_A_002__________ 6tyr1_B QRT54R AAAAA7yer_2_009__________ This uses join to do a relational JOIN operation between the two files. The first field will be used as the join key by default. The -a 1 option makes join output all lines in the first file, even if there is no match in the second file (it does a "left join"). The input data to join needs to be sorted, and we do this by calling sort on each file individually in two process substitutions on the command line. You could also opt for pre-sorting the files. If your data is tab-delimited, you may want to add -t $'\t' to the start of the join command's arguments. This would make the output retain the existing tab delimiters. Redirect the output by adding >output.txt to the end of the command if you want to store it in a file.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/589007", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/412055/" ] }
589,029
When I ran the below command it prints the whole string e.g. Note="Peptidase S59%2C nucleoporin" awk '$3=="mRNA"' Nitab-v4.5_gene_models_Chr_Edwards2017.gff | head Nt01 maker mRNA 143295 155540 . + . ID=Nitab4.5_0006317g0010.1;Parent=Nitab4.5_0006317g0010;Name=Nitab4.5_0006317g0010.1;_AED=0.08;_eAED=0.08;_QI=0|0.45|0.25|1|0.90|0.75|12|0|1011;Note="Peptidase S59%2C nucleoporin"Nt01 maker mRNA 170633 173860 . + . ID=Nitab4.5_0006317g0020.1;Parent=Nitab4.5_0006317g0020;Name=Nitab4.5_0006317g0020.1;_AED=0.26;_eAED=0.26;_QI=15|0|0|0.83|0.6|0.33|6|0|424;Note="Putative S-adenosyl-L-methionine-dependent methyltransferase"Nt01 maker mRNA 156516 160996 . - . ID=Nitab4.5_0006317g0030.1;Parent=Nitab4.5_0006317g0030;Name=Nitab4.5_0006317g0030.1;_AED=0.01;_eAED=0.01;_QI=161|1|1|1|0|0.5|2|358|141;Note="Unknown"Nt01 maker mRNA 78554 80638 . - . ID=Nitab4.5_0006317g0040.1;Parent=Nitab4.5_0006317g0040;Name=Nitab4.5_0006317g0040.1;_AED=0.02;_eAED=0.02;_QI=0|0|0|1|1|1|3|0|187;Note="Heavy metal-associated domain%2C HMA"Nt01 maker mRNA 111288 129916 . - . ID=Nitab4.5_0006317g0050.1;Parent=Nitab4.5_0006317g0050;Name=Nitab4.5_0006317g0050.1;_AED=0.24;_eAED=0.24;_QI=0|0|0|0.5|1|1|2|0|72;Note="Unknown"Nt01 maker mRNA 470560 474346 . + . ID=Nitab4.5_0002367g0010.1;Parent=Nitab4.5_0002367g0010;Name=Nitab4.5_0002367g0010.1;_AED=0.11;_eAED=0.11;_QI=0|0|0|1|1|1|14|0|668;Note="Auxin response factor%2C B3 DNA binding domain%2C DNA-binding pseudobarrel domain%2C AUX/IAA protein%2C Aux/IAA-ARF-dimerisation"Nt01 maker mRNA 499946 502182 . + . ID=Nitab4.5_0002367g0020.1;Parent=Nitab4.5_0002367g0020;Name=Nitab4.5_0002367g0020.1;_AED=0.26;_eAED=0.26;_QI=0|0.5|0|0.66|0|0|3|0|258;Note="Cellulose synthase"Nt01 maker mRNA 496891 497596 . + . ID=Nitab4.5_0002367g0030.1;Parent=Nitab4.5_0002367g0030;Name=Nitab4.5_0002367g0030.1;_AED=0.33;_eAED=0.33;_QI=0|0|0|0.5|0|0.5|2|0|213;Note="Cellulose synthase"Nt01 maker mRNA 505125 506853 . - . ID=Nitab4.5_0002367g0040.1;Parent=Nitab4.5_0002367g0040;Name=Nitab4.5_0002367g0040.1;_AED=0.09;_eAED=0.09;_QI=0|0|0|1|0.5|0.66|3|0|230;Note="Zinc finger%2C RING-type%2C Zinc finger%2C RING/FYVE/PHD-type"Nt01 maker mRNA 564383 570328 . + . ID=Nitab4.5_0002367g0050.1;Parent=Nitab4.5_0002367g0050;Name=Nitab4.5_0002367g0050.1;_AED=0.08;_eAED=0.08;_QI=75|1|1|1|1|1|6|146|267;Note="SAC3/GANP/Nin1/mts3/eIF-3 p25%2C 26S proteasome non-ATPase regulatory subunit Rpn12" However, when I use this following command the string is shortened to e.g. 
Note="Peptidase awk '$3=="mRNA"' Nitab-v4.5_gene_models_Chr_Edwards2017.gff | awk '{print $9}' | head ID=Nitab4.5_0006317g0010.1;Parent=Nitab4.5_0006317g0010;Name=Nitab4.5_0006317g0010.1;_AED=0.08;_eAED=0.08;_QI=0|0.45|0.25|1|0.90|0.75|12|0|1011;Note="PeptidaseID=Nitab4.5_0006317g0020.1;Parent=Nitab4.5_0006317g0020;Name=Nitab4.5_0006317g0020.1;_AED=0.26;_eAED=0.26;_QI=15|0|0|0.83|0.6|0.33|6|0|424;Note="PutativeID=Nitab4.5_0006317g0030.1;Parent=Nitab4.5_0006317g0030;Name=Nitab4.5_0006317g0030.1;_AED=0.01;_eAED=0.01;_QI=161|1|1|1|0|0.5|2|358|141;Note="Unknown"ID=Nitab4.5_0006317g0040.1;Parent=Nitab4.5_0006317g0040;Name=Nitab4.5_0006317g0040.1;_AED=0.02;_eAED=0.02;_QI=0|0|0|1|1|1|3|0|187;Note="HeavyID=Nitab4.5_0006317g0050.1;Parent=Nitab4.5_0006317g0050;Name=Nitab4.5_0006317g0050.1;_AED=0.24;_eAED=0.24;_QI=0|0|0|0.5|1|1|2|0|72;Note="Unknown"ID=Nitab4.5_0002367g0010.1;Parent=Nitab4.5_0002367g0010;Name=Nitab4.5_0002367g0010.1;_AED=0.11;_eAED=0.11;_QI=0|0|0|1|1|1|14|0|668;Note="AuxinID=Nitab4.5_0002367g0020.1;Parent=Nitab4.5_0002367g0020;Name=Nitab4.5_0002367g0020.1;_AED=0.26;_eAED=0.26;_QI=0|0.5|0|0.66|0|0|3|0|258;Note="CelluloseID=Nitab4.5_0002367g0030.1;Parent=Nitab4.5_0002367g0030;Name=Nitab4.5_0002367g0030.1;_AED=0.33;_eAED=0.33;_QI=0|0|0|0.5|0|0.5|2|0|213;Note="CelluloseID=Nitab4.5_0002367g0040.1;Parent=Nitab4.5_0002367g0040;Name=Nitab4.5_0002367g0040.1;_AED=0.09;_eAED=0.09;_QI=0|0|0|1|0.5|0.66|3|0|230;Note="ZincID=Nitab4.5_0002367g0050.1;Parent=Nitab4.5_0002367g0050;Name=Nitab4.5_0002367g0050.1;_AED=0.08;_eAED=0.08;_QI=75|1|1|1|1|1|6|146|267;Note="SAC3/GANP/Nin1/mts3/eIF-3 As final results, I would like to retrieve Nitab4.5_0006317g0010.1,Peptidase S59%2C nucleoporin . What did I miss? Thank you in advance
GFF is a tab-separated format but you are are not using tabs. Unless you use -F'\t' or BEGIN{FS="\t"} , awk will use any whitespace as the field delimiter, and that includes spaces. Since you are cutting on spaces, $9 ends at the first space. You also don't need two commands, what you want to do is: $ awk -F'\t' '$3=="mRNA"{print $9}' file.gff ID=Nitab4.5_0006317g0010.1;Parent=Nitab4.5_0006317g0010;Name=Nitab4.5_0006317g0010.1;_AED=0.08;_eAED=0.08;_QI=0|0.45|0.25|1|0.90|0.75|12|0|1011;Note="Peptidase S59%2C nucleoporin"ID=Nitab4.5_0006317g0020.1;Parent=Nitab4.5_0006317g0020;Name=Nitab4.5_0006317g0020.1;_AED=0.26;_eAED=0.26;_QI=15|0|0|0.83|0.6|0.33|6|0|424;Note="Putative S-adenosyl-L-methionine-dependent methyltransferase"ID=Nitab4.5_0006317g0030.1;Parent=Nitab4.5_0006317g0030;Name=Nitab4.5_0006317g0030.1;_AED=0.01;_eAED=0.01;_QI=161|1|1|1|0|0.5|2|358|141;Note="Unknown"ID=Nitab4.5_0006317g0040.1;Parent=Nitab4.5_0006317g0040;Name=Nitab4.5_0006317g0040.1;_AED=0.02;_eAED=0.02;_QI=0|0|0|1|1|1|3|0|187;Note="Heavy metal-associated domain%2C HMA"ID=Nitab4.5_0006317g0050.1;Parent=Nitab4.5_0006317g0050;Name=Nitab4.5_0006317g0050.1;_AED=0.24;_eAED=0.24;_QI=0|0|0|0.5|1|1|2|0|72;Note="Unknown"ID=Nitab4.5_0002367g0010.1;Parent=Nitab4.5_0002367g0010;Name=Nitab4.5_0002367g0010.1;_AED=0.11;_eAED=0.11;_QI=0|0|0|1|1|1|14|0|668;Note="Auxin response factor%2C B3 DNA binding domain%2C DNA-binding pseudobarrel domain%2C AUX/IAA protein%2C Aux/IAA-ARF-dimerisation"ID=Nitab4.5_0002367g0020.1;Parent=Nitab4.5_0002367g0020;Name=Nitab4.5_0002367g0020.1;_AED=0.26;_eAED=0.26;_QI=0|0.5|0|0.66|0|0|3|0|258;Note="Cellulose synthase"ID=Nitab4.5_0002367g0030.1;Parent=Nitab4.5_0002367g0030;Name=Nitab4.5_0002367g0030.1;_AED=0.33;_eAED=0.33;_QI=0|0|0|0.5|0|0.5|2|0|213;Note="Cellulose synthase"ID=Nitab4.5_0002367g0040.1;Parent=Nitab4.5_0002367g0040;Name=Nitab4.5_0002367g0040.1;_AED=0.09;_eAED=0.09;_QI=0|0|0|1|0.5|0.66|3|0|230;Note="Zinc finger%2C RING-type%2C Zinc finger%2C RING/FYVE/PHD-type"ID=Nitab4.5_0002367g0050.1;Parent=Nitab4.5_0002367g0050;Name=Nitab4.5_0002367g0050.1;_AED=0.08;_eAED=0.08;_QI=75|1|1|1|1|1|6|146|267;Note="SAC3/GANP/Nin1/mts3/eIF-3 p25%2C 26S proteasome non-ATPase regulatory subunit Rpn12" If you only want to get the value of Note= , you can do: $ awk -F"\t" '$3=="mRNA"{sub(/.*Note=/,"",$9); print $9}' file.gff "Peptidase S59%2C nucleoporin""Putative S-adenosyl-L-methionine-dependent methyltransferase""Unknown""Heavy metal-associated domain%2C HMA""Unknown""Auxin response factor%2C B3 DNA binding domain%2C DNA-binding pseudobarrel domain%2C AUX/IAA protein%2C Aux/IAA-ARF-dimerisation""Cellulose synthase""Cellulose synthase""Zinc finger%2C RING-type%2C Zinc finger%2C RING/FYVE/PHD-type""SAC3/GANP/Nin1/mts3/eIF-3 p25%2C 26S proteasome non-ATPase regulatory subunit Rpn12" And, to remove the quotes at the beginning and end of the Note, while keeping any that might be part of note itself: $ awk -F"\t" '$3=="mRNA"{sub(/.*Note=/,"",$9); print $9}' file.gff | sed 's/^"//; s/"$//'Peptidase S59%2C nucleoporinPutative S-adenosyl-L-methionine-dependent methyltransferaseUnknownHeavy metal-associated domain%2C HMAUnknownAuxin response factor%2C B3 DNA binding domain%2C DNA-binding pseudobarrel domain%2C AUX/IAA protein%2C Aux/IAA-ARF-dimerisationCellulose synthaseCellulose synthaseZinc finger%2C RING-type%2C Zinc finger%2C RING/FYVE/PHD-typeSAC3/GANP/Nin1/mts3/eIF-3 p25%2C 26S proteasome non-ATPase regulatory subunit Rpn12 Finally, to get the value of both Note and ID , you could do: $ awk -F"\t" '$3=="mRNA"{n=$9; 
sub(/.*Note="/,"",n); sub(/"$/,"",n); sub(/.*ID=/,"",$9); sub(/;.*/,"",$9); print $9","n}' file.gff Nitab4.5_0006317g0010.1,Peptidase S59%2C nucleoporinNitab4.5_0006317g0020.1,Putative S-adenosyl-L-methionine-dependent methyltransferaseNitab4.5_0006317g0030.1,UnknownNitab4.5_0006317g0040.1,Heavy metal-associated domain%2C HMANitab4.5_0006317g0050.1,UnknownNitab4.5_0002367g0010.1,Auxin response factor%2C B3 DNA binding domain%2C DNA-binding pseudobarrel domain%2C AUX/IAA protein%2C Aux/IAA-ARF-dimerisationNitab4.5_0002367g0020.1,Cellulose synthaseNitab4.5_0002367g0030.1,Cellulose synthaseNitab4.5_0002367g0040.1,Zinc finger%2C RING-type%2C Zinc finger%2C RING/FYVE/PHD-typeNitab4.5_0002367g0050.1,SAC3/GANP/Nin1/mts3/eIF-3 p25%2C 26S proteasome non-ATPase regulatory subunit Rpn12 Personally, however, I would do this in perl instead: $ perl -F'\t' -lane 'if($F[2] eq "mRNA"){/ID=([^\;]+).*Note="([^"]+)/; print "$1,$2"}' file.gff Nitab4.5_0006317g0010.1,Peptidase S59%2C nucleoporinNitab4.5_0006317g0020.1,Putative S-adenosyl-L-methionine-dependent methyltransferaseNitab4.5_0006317g0030.1,UnknownNitab4.5_0006317g0040.1,Heavy metal-associated domain%2C HMANitab4.5_0006317g0050.1,UnknownNitab4.5_0002367g0010.1,Auxin response factor%2C B3 DNA binding domain%2C DNA-binding pseudobarrel domain%2C AUX/IAA protein%2C Aux/IAA-ARF-dimerisationNitab4.5_0002367g0020.1,Cellulose synthaseNitab4.5_0002367g0030.1,Cellulose synthaseNitab4.5_0002367g0040.1,Zinc finger%2C RING-type%2C Zinc finger%2C RING/FYVE/PHD-typeNitab4.5_0002367g0050.1,SAC3/GANP/Nin1/mts3/eIF-3 p25%2C 26S proteasome
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/589029", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/34872/" ] }
589,099
I am trying to convert a json structure that key1: array of values, key2: array of values ,.... to an array of objects. The size of arrays is same and each object is just aggregate of items at position x in each array. Need help converting this preferably with generic code with jq. Input { "IdentifierName": [ "A", "B", "C" ], "Code": [ 5, 8, 19 ]} Expected Output [ { "IdentifierName": "A", "Code": 5 }, { "IdentifierName": "B", "Code": 8 }, { "IdentifierName": "C", "Code": 19 }] Edit:progress so far: jq 'to_entries|map(.key) as $keys| (map(.value)|transpose) as $values |$values|map($keys, .)' The last step is to somehow index with the keys into the values that I am still unable to get right.
Answering my own question: jq 'to_entries|map(.key) as $keys| (map(.value)|transpose) as $values |$values|map([$keys, .] | transpose| map( {(.[0]): .[1]} ) | add)' Explanation: Extract keys ["IdentifierName", "Code"] and values as [ [ "A", 5 ], [ "B", 8 ], [ "C", 19 ] ] Then to index from keys to values, take json-seq of key-tuple with (each) value tuple and transpose and zip them in pairs. echo '[[ "IdentifierName", "Code" ], [ "C", 19 ]]'|jq '.|transpose| map( {(.[0]): .[1]} ) | add' Combining both gives solution. This will work for any number of elements (0 and 1 are just key and value, not first and second).
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/589099", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/221826/" ] }
589,146
I am trying to write several directories to a tape. Each directory with one tar command. So I have the following sample file/directory structure: user@host1:~/temp/original % find .../foo1./foo1/foo1.a./foo1/foo1.b./foo1/foo1.c./foo1/foo1.1./foo2./foo2/foo2.a./foo2/foo2.b./foo2/foo2.c./foo2/foo2.2./foo3./foo3/foo3.a./foo3/foo3.b./foo3/foo3.c./foo3/foo3.3 I rewind and erase the tape, which I expect it is like using a blank tape. user@host1:~/temp/original % mt -f /dev/sa0 rewinduser@host1:~/temp/original % mt -f /dev/sa0 eraseuser@host1:~/temp/original % mt -f /dev/sa0 rewinduser@host1:~/temp/original % mt -f /dev/sa0 statusDrive: sa0: <SEAGATE DAT 9SP40-000 912L> Serial Number: HN0948V---------------------------------Mode Density Blocksize bpi CompressionCurrent: 0x24:DDS-2 variable 61000 enabled (DCLZ)---------------------------------Current Driver State: at rest.---------------------------------Partition: 0 Calc File Number: 0 Calc Record Number: 0Residual: 0 Reported File Number: 0 Reported Record Number: 0Flags: BOP Then I want to write three tar files (I think they are called files when stored to tape) with three tar commands. One command for each directory (foo1, foo2 and foo3). So I do: user@host1:~/temp/original % tar cvf /dev/nsa0 foo1a foo1a foo1/foo1.aa foo1/foo1.ba foo1/foo1.ca foo1/foo1.1user@host1:~/temp/original % tar cvf /dev/nsa0 foo2a foo2a foo2/foo2.aa foo2/foo2.ba foo2/foo2.ca foo2/foo2.2user@host1:~/temp/original % tar cvf /dev/nsa0 foo3a foo3a foo3/foo3.aa foo3/foo3.ba foo3/foo3.ca foo3/foo3.3 As I have been using /dev/nsa0 I expect to have three tar files stored in the tape. Now I want to recover the three files from the tape into another directory I do: user@host1:~/temp/original % cd ../backup/user@host1:~/temp/backup % mt -f /dev/sa0 rewinduser@host1:~/temp/backup % tar xvf /dev/nsa0x foo1/x foo1/foo1.ax foo1/foo1.bx foo1/foo1.cx foo1/foo1.1user@host1:~/temp/backup % tar xvf /dev/nsa0user@host1:~/temp/backup % tar xvf /dev/nsa0x foo2/x foo2/foo2.ax foo2/foo2.bx foo2/foo2.cx foo2/foo2.2user@host1:~/temp/backup % tar xvf /dev/nsa0user@host1:~/temp/backup % tar xvf /dev/nsa0x foo3/x foo3/foo3.ax foo3/foo3.bx foo3/foo3.cx foo3/foo3.3user@host1:~/temp/backup % mt -f /dev/nsa0 statusDrive: sa0: <SEAGATE DAT 9SP40-000 912L> Serial Number: HN0948V---------------------------------Mode Density Blocksize bpi CompressionCurrent: 0x24:DDS-2 variable 61000 enabled (DCLZ)---------------------------------Current Driver State: at rest.---------------------------------Partition: 0 Calc File Number: 2 Calc Record Number: 1Residual: 0 Reported File Number: 2 Reported Record Number: 5Flags: None Why do I have to type twice tar xvf /dev/nsa0 to extract foo2 and foo3 ? If I try to add another directory at the end of the tape I do: user@host1:~/temp/original % mt -f /dev/nsa0 eomuser@host1:~/temp/original % tar cvf /dev/nsa0 foo4a foo4a foo4/foo4.aa foo4/foo4.ba foo4/foo4.ca foo4/foo4.4user@host1:~/temp/original % cd ..user@host1:~/temp % cd backup/user@host1:~/temp/backup % mt -f /dev/nsa0 rewinduser@host1:~/temp/backup % mt -f /dev/nsa0 fsf 3user@host1:~/temp/backup % tar xvf /dev/nsa0user@host1:~/temp/backup % Why foo4 is not extracted? 
As an additional test I eject the tape, reinsert it and try to extract the four directories, this is what I have to do: user@host1:~/temp/backup % mt -f /dev/nsa0 offlineuser@host1:~/temp/backup % tar xvf /dev/nsa0x foo1/x foo1/foo1.ax foo1/foo1.bx foo1/foo1.cx foo1/foo1.1user@host1:~/temp/backup % tar xvf /dev/nsa0user@host1:~/temp/backup % tar xvf /dev/nsa0x foo2/x foo2/foo2.ax foo2/foo2.bx foo2/foo2.cx foo2/foo2.2user@host1:~/temp/backup % tar xvf /dev/nsa0user@host1:~/temp/backup % tar xvf /dev/nsa0x foo3/x foo3/foo3.ax foo3/foo3.bx foo3/foo3.cx foo3/foo3.3user@host1:~/temp/backup % tar xvf /dev/nsa0user@host1:~/temp/backup % tar xvf /dev/nsa0user@host1:~/temp/backup % tar xvf /dev/nsa0x foo4/x foo4/foo4.ax foo4/foo4.bx foo4/foo4.cx foo4/foo4.4 Why do I have to repeat the tar commands, twice in the case of foo2 and foo3 and three times in the case of foo4 ? I am using FreeBSD12.1 and an IBM DDS4 (STD2401LW / Tc4200-236) SCSI Tape Drive. EDIT>Following schily's answer, I can get the tar files extracted in order. The only remaining issue would be understanding why mt eom to later add foo4 tar file still requires two mt fsf instead of just one. After reinserting the tape: user@host1:~/temp/backup % tar xvf /dev/nsa0x foo1/x foo1/foo1.ax foo1/foo1.bx foo1/foo1.cx foo1/foo1.1user@host1:~/temp/backup % mt fsfuser@host1:~/temp/backup % tar xvf /dev/nsa0x foo2/x foo2/foo2.ax foo2/foo2.bx foo2/foo2.cx foo2/foo2.2user@host1:~/temp/backup % mt fsfuser@host1:~/temp/backup % tar xvf /dev/nsa0x foo3/x foo3/foo3.ax foo3/foo3.bx foo3/foo3.cx foo3/foo3.3user@host1:~/temp/backup % mt fsfuser@host1:~/temp/backup % tar xvf /dev/nsa0user@host1:~/temp/backup % tar xvf /dev/nsa0x foo4/x foo4/foo4.ax foo4/foo4.bx foo4/foo4.cx foo4/foo4.4user@host1:~/temp/backup % EDIT>This is what mt status return right at the position that allows to extract foo4 . The commands are executed right after inserting the tape: user@host1:~/temp/backup % rm -rf *user@host1:~/temp/backup % mt statusDrive: sa0: <SEAGATE DAT 9SP40-000 912L> Serial Number: HN0948V---------------------------------Mode Density Blocksize bpi CompressionCurrent: 0x24:DDS-2 variable 61000 enabled (DCLZ)---------------------------------Current Driver State: at rest.---------------------------------Partition: 0 Calc File Number: 0 Calc Record Number: 0Residual: 0 Reported File Number: 0 Reported Record Number: 0Flags: BOPuser@host1:~/temp/backup % echo $TAPE/dev/nsa0user@host1:~/temp/backup % mt fsf 4user@host1:~/temp/backup % mt statusDrive: sa0: <SEAGATE DAT 9SP40-000 912L> Serial Number: HN0948V---------------------------------Mode Density Blocksize bpi CompressionCurrent: 0x24:DDS-2 variable 61000 enabled (DCLZ)---------------------------------Current Driver State: at rest.---------------------------------Partition: 0 Calc File Number: 4 Calc Record Number: 0Residual: 0 Reported File Number: 4 Reported Record Number: 7Flags: Noneuser@host1:~/temp/backup % tar xvx foo4/x foo4/foo4.ax foo4/foo4.bx foo4/foo4.cx foo4/foo4.4user@host1:~/temp/backup %
The behavior is related to the EOF handling of the tape driver. This handling differs between operating systems, and it may help to read the related Solaris man page: http://schillix.sourceforge.net/man/man7i/mtio.7i.html which explains a difference between the Solaris handling and the old BSD behavior. From this explanation, I would expect the old BSD behavior to cause a read after an EOF situation to skip the file mark and to return the first record from the next file on tape. This seems to be what you expect. It seems that the observed behavior on BSD is between the documented SVr4 behavior and the old BSD behavior, but I guess that there is a way to make things work on both Solaris and current BSD:

- call tar to read the first tape file; after that, the tape is positioned at the end of the first tape file, which is just before the file mark...
- call mt fsf to skip the file mark
- call tar to read the next file on tape.

From the rest of the discussion, it seems that FreeBSD writes an additional filemark when mt rewind is called after a write operation has been applied. The command mt eom will position the tape after the final double filemark, and when another write operation takes place, this happens after the double filemark, resulting in an empty tape file before that final write. A tape with three files looks this way:

data1 FILEMARK data2 FILEMARK data3 FILEMARK FILEMARK

If you would like to append a fourth tape file, you need to call mt fsf 3 to position the tape after the third filemark. If you then start writing, this overwrites the fourth filemark, and if you then rewind again, you have this tape layout:

data1 FILEMARK data2 FILEMARK data3 FILEMARK data4 FILEMARK FILEMARK
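Put together as a command sequence, the read-back and append procedures described above would look roughly like this (device names as in the question; this is a sketch of the logic, not a tested transcript):

# read all three archives back, skipping one filemark between them
mt -f /dev/nsa0 rewind
tar xvf /dev/nsa0          # foo1
mt -f /dev/nsa0 fsf        # skip the filemark left after foo1
tar xvf /dev/nsa0          # foo2
mt -f /dev/nsa0 fsf
tar xvf /dev/nsa0          # foo3

# append a fourth archive without leaving an empty tape file before it
mt -f /dev/nsa0 rewind
mt -f /dev/nsa0 fsf 3      # position just after the third filemark
tar cvf /dev/nsa0 foo4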
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/589146", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/378578/" ] }
589,236
when I run glxgears , I get following error. libGL error: No matching fbConfigs or visuals foundlibGL error: failed to load driver: swrastError: couldn't get an RGB, Double-buffered visual My system is ubuntu 16.04 as docker image - nvidia/cuda:8.0-runtime-ubuntu16.04 . The image contains VirtualGL and TurboVNC and its is started with the following parameters: docker run --runtime=nvidia --privileged -d -v /tmp/.X11-unix/X0:/tmp/.X11-unix/X0 -e USE_DISPLAY="7" my_image There is no problem if I change the base image to nvidia/cuda:10.2-runtime-ubuntu18.04 . But an application for which this container is, needs CUDA 8. I found some advice to remove library: sudo rm /usr/lib/x86_64-linux-gnu/mesa/libGL.so.1 . But it does not work. Ubuntu 16.04, CUDA 8: user@host:/opt/noVNC$ sudo ldconfig -p | grep -i libGL.so libGL.so.1 (libc6,x86-64) => /usr/lib/x86_64-linux-gnu/mesa/libGL.so.1 libGL.so (libc6,x86-64) => /usr/lib/x86_64-linux-gnu/libGL.so libGL.so (libc6,x86-64) => /usr/lib/x86_64-linux-gnu/mesa/libGL.souser@host:/usr/lib/x86_64-linux-gnu$ ll libGL* lrwxrwxrwx 1 root root 13 Jun 14 2018 libGL.so -> mesa/libGL.so lrwxrwxrwx 1 root root 32 May 25 14:14 libGLESv1_CM_nvidia.so.1 -> libGLESv1_CM_nvidia.so.440.33.01 -rw-r--r-- 1 root root 63696 Nov 12 2019 libGLESv1_CM_nvidia.so.440.33.01 lrwxrwxrwx 1 root root 29 May 25 14:14 libGLESv2_nvidia.so.2 -> libGLESv2_nvidia.so.440.33.01 -rw-r--r-- 1 root root 111416 Nov 12 2019 libGLESv2_nvidia.so.440.33.01 -rw-r--r-- 1 root root 911218 Oct 23 2015 libGLU.a lrwxrwxrwx 1 root root 15 Oct 23 2015 libGLU.so -> libGLU.so.1.3.1 lrwxrwxrwx 1 root root 15 Oct 23 2015 libGLU.so.1 -> libGLU.so.1.3.1 -rw-r--r-- 1 root root 453352 Oct 23 2015 libGLU.so.1.3.1 lrwxrwxrwx 1 root root 26 May 25 14:14 libGLX_indirect.so.0 -> libGLX_nvidia.so.440.33.01 lrwxrwxrwx 1 root root 26 May 25 14:14 libGLX_nvidia.so.0 -> libGLX_nvidia.so.440.33.01 -rw-r--r-- 1 root root 1114496 Nov 12 2019 libGLX_nvidia.so.440.33.01user@host:/usr/lib/x86_64-linux-gnu$ ll mesa -rw-r--r-- 1 root root 31 Jun 14 2018 ld.so.conf lrwxrwxrwx 1 root root 14 Jun 14 2018 libGL.so -> libGL.so.1.2.0 lrwxrwxrwx 1 root root 14 Jun 14 2018 libGL.so.1 -> libGL.so.1.2.0 -rw-r--r-- 1 root root 471680 Jun 14 2018 libGL.so.1.2.0 Ubuntu 18.04, CUDA 10: user@host:/opt/noVNC$ sudo ldconfig -p | grep -i libGL.so libGL.so.1 (libc6,x86-64) => /usr/lib/x86_64-linux-gnu/libGL.so.1user@host:/usr/lib/x86_64-linux-gnu$ ll libGL* lrwxrwxrwx 1 root root 14 May 10 2019 libGL.so.1 -> libGL.so.1.0.0 -rw-r--r-- 1 root root 567624 May 10 2019 libGL.so.1.0.0 lrwxrwxrwx 1 root root 32 May 20 16:43 libGLESv1_CM_nvidia.so.1 -> libGLESv1_CM_nvidia.so.440.33.01 -rw-r--r-- 1 root root 63696 Nov 12 2019 libGLESv1_CM_nvidia.so.440.33.01 lrwxrwxrwx 1 root root 29 May 20 16:43 libGLESv2_nvidia.so.2 -> libGLESv2_nvidia.so.440.33.01 -rw-r--r-- 1 root root 111416 Nov 12 2019 libGLESv2_nvidia.so.440.33.01 lrwxrwxrwx 1 root root 15 May 21 2016 libGLU.so.1 -> libGLU.so.1.3.1 -rw-r--r-- 1 root root 453352 May 21 2016 libGLU.so.1.3.1 lrwxrwxrwx 1 root root 15 May 10 2019 libGLX.so.0 -> libGLX.so.0.0.0 -rw-r--r-- 1 root root 68144 May 10 2019 libGLX.so.0.0.0 lrwxrwxrwx 1 root root 16 Feb 19 05:09 libGLX_indirect.so.0 -> libGLX_mesa.so.0 lrwxrwxrwx 1 root root 20 Feb 19 05:09 libGLX_mesa.so.0 -> libGLX_mesa.so.0.0.0 -rw-r--r-- 1 root root 488344 Feb 19 05:09 libGLX_mesa.so.0.0.0 lrwxrwxrwx 1 root root 26 May 20 16:43 libGLX_nvidia.so.0 -> libGLX_nvidia.so.440.33.01 -rw-r--r-- 1 root root 1114496 Nov 12 2019 libGLX_nvidia.so.440.33.01 lrwxrwxrwx 1 root 
root 22 May 10 2019 libGLdispatch.so.0 -> libGLdispatch.so.0.0.0 -rw-r--r-- 1 root root 612792 May 10 2019 libGLdispatch.so.0.0.0user@host:/usr/lib/x86_64-linux-gnu$ ll mesa ls: cannot access 'mesa': No such file or directory The host has CUDA 10.2 but I do not know if it is required and can cause a problem. I have no idea how to solve this problem. Thank you for any advice.
The two errors also appear when Running ROS with GUI in Docker using Windows Subsystem for Linux 2 (WSL2) . The error

libGL error: No matching fbConfigs or visuals found

can be fixed with:

export LIBGL_ALWAYS_INDIRECT=1

The error

libGL error: failed to load driver: swrast

can be fixed with:

sudo apt-get install -y mesa-utils libgl1-mesa-glx

Probably irrelevant side-note: For "the ROS with GUI on Docker guide" to run, you also have to install dbus.

sudo apt-get update
sudo apt-get install -y dbus

I do not think this is relevant here, since you will see the two errors in question only after having installed dbus, but I do not know the background of the question, perhaps it helps. Installing dbus would get rid of the error

D-Bus library appears to be incorrectly set up; failed to read machine uuid: Failed to open “/var/lib/dbus/machine-id” .
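If those fixes work interactively, one way to make them stick inside the container is sketched below; this is an assumption on my part (the package names are just the ones from the commands above), not something taken from the original answer:

echo 'export LIBGL_ALWAYS_INDIRECT=1' >> ~/.bashrc
sudo apt-get update
sudo apt-get install -y mesa-utils libgl1-mesa-glx dbus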
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/589236", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/388392/" ] }
589,237
How can I save a modified PATH in PATH_MOD that does not contain /usr/bin ? Output of PATH : /opt/conda/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
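One possible approach, sketched here under the assumption of a bash shell and the colon-separated PATH shown above, is either to edit the value with sed or to rebuild it element by element:

# strip /usr/bin wherever it appears in the colon-separated list
PATH_MOD=$(printf '%s\n' "$PATH" | sed -e 's|:/usr/bin:|:|g' -e 's|^/usr/bin:||' -e 's|:/usr/bin$||')

# or, in bash, rebuild the variable one directory at a time
PATH_MOD=
IFS=: read -ra dirs <<< "$PATH"
for d in "${dirs[@]}"; do
    [ "$d" = /usr/bin ] && continue
    PATH_MOD=${PATH_MOD:+$PATH_MOD:}$d
done
echo "$PATH_MOD"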
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/589237", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/414784/" ] }
589,383
Using KDE Konsole as a bash terminal, I would like to clear the history when I close the terminal (tab/application). Note that I do need the bash history while Konsole is still open (to search it); I would like to have it cleared once the terminal is closed. I often use the terminal for a long time and clear the history before I close it, and I am looking for a way to automate the clearing of the history. I found similar questions on how to clear the history or how to disable it, like How do I close a terminal without saving the history? , however I found nothing helpful for my situation. The difference here is that I do need the history file while the terminal is running; setting unset HISTFILE disables the history file after that command is run and does not clear the history file itself. Note that the history file is needed while the session is running, but when it gets closed it needs to be cleared. How can we clear the bash history when the terminal gets closed?
erase .bash_history:

cat /dev/null > .bash_history

or

> .bash_history

add a trap to .bashrc:

trap "history -c" EXIT
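A slightly more defensive variant of that trap (my own sketch, assuming HISTFILE is set, which in an interactive bash normally points at ~/.bash_history) both clears the in-memory list and explicitly truncates the history file on exit:

trap 'history -c; : > "$HISTFILE"' EXIT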
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/589383", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/414903/" ] }
589,393
I have a file

Gene stable GO_ID
AAEL025769 AAEL025769-RA GO:0005525
AAEL020629 AAEL020629-RA GO:0003677
AAEL020629 AAEL020629-RA GO:0005634
AAEL020629 AAEL020629-RA GO:0000786
AAEL020629 AAEL020629-RA GO:0046982
AAEL011255 AAEL011255-RA GO:0005525
AAEL000004 AAEL000004-RA GO:0016021
AAEL000004 AAEL000004-RA GO:0016757
AAEL000004 AAEL000004-RA GO:0005789
AAEL000004 AAEL000004-RA GO:0006506
AAEL000004 AAEL000004-RA GO:0000030
AAEL003589 AAEL003589-RA NA
AAEL026354 AAEL026354-RA NA

For some genes there are multiple GO_IDs (such as AAEL020629 and AAEL000004 in the example above). For each gene, if there are multiple GO_IDs, I want to combine them all together in a single row (separated by a comma and a space). Below is my desired output:

Gene GO_ID
AAEL025769 GO:0005525
AAEL020629 GO:0003677, GO:0005634, GO:0000786, GO:0046982
AAEL011255 GO:0005525
AAEL000004 GO:0016021, GO:0016757, GO:0005789, GO:0006506, GO:0000030
AAEL003589 NA
AAEL026354 NA

Any idea how I can do this? Thanks
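One way to do this is sketched below with awk, under the assumption of whitespace-separated columns as shown and that lines for the same gene are adjacent, as in the sample:

awk 'NR == 1 { print "Gene", "GO_ID"; next }
     $1 != prev { if (prev != "") print prev, ids; prev = $1; ids = $3; next }
     { ids = ids ", " $3 }
     END { if (prev != "") print prev, ids }' file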
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/589393", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/216256/" ] }
589,409
Is it best to uninstall -vanilla kernel or leave and modify boot setup somehow - and if so, what's the best method which isn't lost when the OS updates the extlinux.cfg file? A while ago I deployed an Alpine 3.12 edge VM to host a Wireguard server for my personal use. Recently I did an apk package update (periodic security patching, etc) and it installed a more recent version of linux-vanilla... Which broke the Wireguard support. Lots of RTNETLINK errors, modprobe wireguard wasn't happy, couldn't load the existing wireguard module from the package install. Very messy. So, I decided to sidegrade to the linux-virt kernel to get a 5.4 build with Wireguard support included. Ran apk add linux-virt , it installed 5.4.38-0-virt (4th of May 2020 build), great. But how to change the boot default...? When I updated the kernels and got Wireguard working, I manually overrode via console - then forgot that I'd not solved the boot order issue until I restarted the VM earlier today. I still can't figure out the best solution: Overriding default boot and specifying the -virt kernel works fine. However if I ever reboot the VM for whatever reason, extlinux defaults to the -vanilla kernel, which is borked Despite picking apart /boot/extlinux.cfg I'm don't really want to handtool a modified cfg that because it can be updated automatically by Alpine's update-extlinux program. the default option in extlinux.cfg points to menu.c32 , a syslinux module which renders the menu contained in the .cfg. I don't want to lose the GUI boot choice menu, but I can't see how else to update the default kernel without manually hacking the order in the config file (and later losing this whenever the OS updates the file). I'd prefer to keep this Alpine VM as close to deployed spec as possible, I very rarely touch it so have to relearn each time I do stuff with it. I'd rather not change from extlinux unless necessary, but does extlinux even offer what I want to do? Furthermore, in attempts to clean up the installed packages I ran apk del linux-vanilla which also uninstalled 87 firmware packages. Decided to reinstall and decide what to do... I don't expect to need these as this is a VM, but I'm puzzled about whether I can actually ever remove the -vanilla kernel and these associative firmware packages without harming the install. Should I leave the vanilla kernel installed for the additional firmware packages and override to the -virt kernel somehow at boot? Or as I suspect, is it OK to simply apk del linux-vanilla and the linux-virt 5.4 LTS kernel will work indefinitely with no other ill effects from the firmware packages not being installed? Having scanned the list, it doesn't appear any which would be removed would be used by my instance but I'm not sure what unintended consequences might occur as a result of removing them. FWIW, in the course of fixing the original Wireguard problem, the system has had installed/been upgraded with linux-vanilla-4.19.118-r0 linux-virt-4.19.118-r0 linux-vanilla-dev-4.19.118-r0 linux-virt-5.4.38-r0 (currently running) Forgive the relatively newbie question regarding how best to modify the , I've hunted around the internet but documentation in particular for modifying the extlinux boot screens, specifically in a compatible way for update-extlinux , seems to be lacking. Or my Google fu has deserted me...
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/589409", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/15460/" ] }
589,508
I have recently switched from bash to zsh, and now when I type

ls *

it does not simply list all files in this directory, but shows a tree of two levels. Also, rm -rf somepath/* fails with the output

zsh: no matches found:

This used to work just fine with bash. Can anyone help me to get this behaviour back? I do have oh-my-zsh installed.
ls * would have the same effect in bash. No matter what the shell is, what happens is that the shell first expands the wildcards, and then passes the result of the expansion to the command. For example, suppose the current directory contains four entries: two subdirectories dir1 and dir2 , and two regular files file1 and file2 . Then the shell expands ls * to ls dir1 dir2 file1 file2 . The ls command first lists the names of the arguments that are existing non-directories, then lists the contents of each directory in turn. $ lsdir1 dir2 file1 file2$ ls -Fdir1/ dir2/ file1 file2$ ls *file1 file2dir1:…dir2:… If ls behaved differently in bash, either you've changed the bash configuration to turn off wildcard expansions, which would turn it off everywhere, or you've changed the meaning of the ls command to suppress the listing of directories, probably with an alias. Specifically, having alias ls='ls -d' in your ~/.bashrc would have exactly the effect you describe. If that's what you did, you can copy this line to ~/.zshrc and you'll have the same effect. The fact that rm -rf somepath/* has a different effect in bash and zsh when somepath is an empty directory is a completely different matter. In bash, if somepath/* doesn't match any files, then bash leaves the wildcard pattern in the command, so rm sees the arguments -rf and somepath/* . rm tries to delete the file called * in the directory somepath , and since there's no such file, this attempt fails. Since you passed the option -f to rm , it doesn't complain about a missing file. In zsh, by default, if a wildcard doesn't match any files, zsh treats this as an error. You can change the way zsh behaves by turning off the option nomatch : setopt no_nomatch I don't recommend this because having the shell tell you when a wildcard doesn't match is usually the preferable behavior on the command line. There's a much better way to tell zsh that in this case, an empty list is ok: rm -rf somepath/*(N) N is a glob qualifier that says to expand to an empty list if the wildcard doesn't match any file.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/589508", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/253751/" ] }
589,520
I want to compress a CSV file to a .tar.gz archive. I tried like this, but it's not working. Can anyone help with this?

#!/bin/bash
LOG_FILES="/tmp/"
for file in $LOG_FILES;
do
    if [ -e "$file" ]
    then
        tar -cvzf $file.tar.gz $file;
    fi
done
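For reference, one way the loop could be written so that it actually iterates over the CSV files is sketched below; the directory and glob are assumptions, adjust them to taste:

#!/bin/bash
for file in /tmp/*.csv; do
    [ -e "$file" ] || continue
    tar -czvf "${file}.tar.gz" -C "$(dirname "$file")" "$(basename "$file")"
done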
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/589520", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/415050/" ] }
589,539
I'd like to alter the following command so that the regex matches words in /usr/share/dict/words that contain exactly 3 a's instead of at least 3 a's. cat /usr/share/dict/words | grep "a.*a.*a" | grep -v "'s$" | wc -l How do I do this?
Here's one way with [^a] (match any character other than a ) instead of . (match any character):

$ grep -E '^([^a]*a){3}[^a]*$' /usr/share/dict/cracklib-small | shuf -n 4
areaway
humanitarian
capitalizations
autonavigator

You can also write the regexp like ^[^a]*(a[^a]*){3}$ with the same results. It's also equivalent to ^[^a]*a[^a]*a[^a]*a[^a]*$ which doesn't scale when you want a different number of a's. Performance is much better though, not that it matters unless you're grepping through gigabytes of data. Instead of explicitly using the ^ and $ regexp anchor operators, you can also use the -x option which does that implicitly. See also the -i option to match case insensitively (according to locale):

grep -xiE '([^a]*a){3}[^a]*'
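To reproduce the count from the original pipeline (dropping possessives and counting), the fixed pattern slots in like this, using the same dictionary path as the question:

grep -xE '([^a]*a){3}[^a]*' /usr/share/dict/words | grep -v "'s$" | wc -l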
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/589539", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/399296/" ] }
589,683
I've used WSL Bash/Ubuntu for several years, but for some reason this problem recently appeared. DNS is unable to resolve any names, both internal and external. The first time I re-installed WSL I think it worked, for a day... but not anymore, even if I reinstall. From a fresh install of Ubuntu 18.04 from Windows Store: user@hostname:~$ cat /etc/resolv.conf# This file was automatically generated by WSL. To stop automatic generation of this file, remove this line.nameserver <DNS server from wi-fi NIC 1>nameserver <DNS server from wi-fi NIC 2>nameserver <DNS server from ethernet 2 (VPN) NIC 1>search anyconnect.localuser@hostname:~$ ping google.com -c 1ping: google.com: Name or service not knownuser@hostname:~$ ping 8.8.8.8 -c 1PING 8.8.8.8 (8.8.8.8) 56(84) bytes of data.64 bytes from 8.8.8.8: icmp_seq=1 ttl=54 time=16.1 ms--- 8.8.8.8 ping statistics ---1 packets transmitted, 1 received, 0% packet loss, time 0msrtt min/avg/max/mdev = 16.197/16.197/16.197/0.000 msuser@hostname:~$ dig +short google.comuser@hostname:~$ dig +short @8.8.8.8 google.comuser@hostname:~$ After modifying /etv/resolv.conf : user@hostname:~$ dig +short google.comuser@hostname:~$ cat /etc/resolv.confsearch <internal-domain>.localsearch anyconnect.localnameserver <DNS server from wi-fi NIC 1>nameserver <DNS server from wi-fi NIC 2>nameserver <DNS server from ethernet 2 (VPN) NIC 1>nameserver <DNS server from ethernet 2 (VPN) NIC 2>nameserver 8.8.8.8nameserver 8.8.4.4user@hostname:~$ ls -la /etc/resolv.conf-rw-r--r-- 1 root root 167 May 28 09:18 /etc/resolv.confuser@hostname:~$ ping google.com -c 1ping: google.com: Name or service not knownuser@hostname:~$ ping 8.8.8.8 -c 1PING 8.8.8.8 (8.8.8.8) 56(84) bytes of data.64 bytes from 8.8.8.8: icmp_seq=1 ttl=54 time=17.0 ms--- 8.8.8.8 ping statistics ---1 packets transmitted, 1 received, 0% packet loss, time 0msrtt min/avg/max/mdev = 17.045/17.045/17.045/0.000 ms# disconnected VPNuser@hostname:~$ dig +short google.com172.217.21.142user@hostname:~$ ping google.com -c 1PING google.com (172.217.21.142) 56(84) bytes of data.64 bytes from arn11s02-in-f14.1e100.net (172.217.21.142): icmp_seq=1 ttl=53 time=17.4 ms--- google.com ping statistics ---1 packets transmitted, 1 received, 0% packet loss, time 0msrtt min/avg/max/mdev = 17.445/17.445/17.445/0.000 msuser@hostname:~$ dig +short google.com172.217.21.142# connected VPNuser@hostname:~$ dig +short google.comuser@hostname:~$ ping google.com -c 1ping: google.com: Name or service not knownuser@hostname:~$ As you can see, as soon as I disconnect VPN I have name resolution working flawlessly. However, I stay connected to VPN throughout the day, obviously because it's required to connect to corporate resources. I'm not dependent on internal DNS on the WSL, though ideally that should work too, but I do need external DNS working. DNS works as expected locally. I can ping the DNS servers from the VPN NIC, but not the ones from the wi-fi NIC. I've tried reinstalling WSL and also tried using only Google's nameservers in /etc/resolv.conf . Have not updated WSL as apt requires DNS... Windows 10, version 1909 Ubuntu 18.04 from Windows Store Cisco AnyConnect VPN ("Allow access to local LAN when connected" is checked) Anyone have any ideas? Where to start?
Resolved. The Ubuntu subsystem (WSL) could not resolve corporate and non-corporate domains while on or off VPN. Fixed. You must create an /etc/wsl.conf file and add an entry to stop the resolv.conf file from being auto-generated on reboot. Add this block to /etc/wsl.conf:

[network]
generateResolvConf = false

Then reboot the Ubuntu subsystem by opening PowerShell as admin and running the command:

wsl --shutdown

Now, re-open the Ubuntu subsystem and use these commands in order:

cd /etc
ls

This directory should show the 'resolv.conf' file (which is a symbolic link). The link should now be red, indicating the link leads to nowhere. Delete the resolv.conf link and create a new /etc/resolv.conf file. In the new resolv.conf file, write this block:

search your.domain.com
nameserver x.x.x.x
nameserver x.x.x.x
nameserver y.y.y.y

Where X is the DNS address configured in the Cisco AnyConnect VPN adapter. Locate the Cisco VPN adapter in network settings, right click on the Cisco VPN adapter and click 'properties', now highlight IPv4 and click 'properties'. Then note the Preferred DNS and Alternate DNS and copy those into the resolv.conf file. And Y is your normal IPv4 DNS address. Now restart the subsystem again from PowerShell.

NOTE: If this did not work, that means that the resolv.conf file was blown away by the subsystem again. In order for this to work, the wsl.conf file has to be read by the system. If it is not being read, try reinstalling the subsystem or upgrading to 20.04.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/589683", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/406318/" ] }
589,710
I created a Bash script which echoes "Hello World" . I also created a test user, bob , using adduser . Nobody has permission to execute that file, as denoted by ls :

$ ls -l hello.sh
-rw-r--r-- 1 george george 19 Mai 29 13:06 hello.sh

As we can see from the above, the file's owner is george, and he has only read and write access but no execute access. But logged in as george I am able to execute the script directly:

$ . hello.sh
Hello World

To make matters worse, I log in as bob , who has only read permission, but I am still able to execute the file:

$ su bob
Password:
$ . /home/george/testdir/hello.sh
Hello World

What's going on?
In your examples, you are not executing the files, but sourcing them. Executing would be via $ ./hello.sh and for that, execution permission is necessary. In this case a sub-shell is opened in which the commands of the script file are executed. Sourcing , i.e. $ . hello.sh (with a space in between) only reads the file, and the shell from which you have called the . hello.sh command then executes the commands directly as read, i.e. without opening a sub-shell. As the file is only read, the read permission is sufficient for the operation. ( Also note that stating the script filename like that invokes a PATH search , so if there is another hello.sh in your PATH that will be sourced! Use explicit paths, as in . ./hello.sh to ensure you source "the right one". ) If you want to prevent that from happening, you have to remove the read permission, too, for any user who is not supposed to be using the script. This is reasonable anyway if you are really concerned about unauthorized use of the script, since non-authorizeded users could easily bypass the missing execution permission by simply copy-and-pasting the script content into a new file to which they could give themselves execute permissions, and as noted by Kusalananda, otherwise an unauthorized user could still comfortably use the script by calling it via sh ./hello.sh instead of ./hello.sh because this also only requires read permissions on the script file (see this answer e.g.). As a general note , keep in mind that there are subtle differences between sourcing and executing a script (see this question e.g.).
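An illustrative session showing the difference (the output is hypothetical, but it is the behaviour those permission bits produce):

$ ls -l hello.sh
-rw-r--r-- 1 george george 19 May 29 13:06 hello.sh
$ ./hello.sh          # execution: needs the x bit
bash: ./hello.sh: Permission denied
$ sh ./hello.sh       # a new interpreter only *reads* the file
Hello World
$ . ./hello.sh        # sourcing also only reads it
Hello World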
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/589710", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/322933/" ] }
589,750
I have a number of files showing up green when I run ls . I understand these are executables, and I understand that one can make a file executable with chmod . But they are .csv and .pdf files. I don't understand how one could 'execute' a comma-separated text file or a PDF. So: How can they actually be 'executable'? And how would I execute them? And what would happen when I did?
This is just a question of permissions . If a file has execute permissions, that just means users are allowed to execute it. Whether they will be successful is another matter. In order for a file to be executed, the user executing it must have the right to do so and the file needs to be a valid executable. The permissions shown by ls only affect the first part, permission, and have no bearing on the rest. For instance:

$ cat file.csv
a,silly,file
$ chmod a+x file.csv
$ ls -l file.csv
-rwxr-xr-x 1 terdon terdon 13 May 29 15:22 file.csv

This file now has execute permissions (see the 3 x in the permissions string -rwxr-xr-x ). But if I try to execute it, I will get an error:

$ ./file.csv
./file.csv: line 1: a,silly,file: command not found

That is because the shell is trying to execute the file as a shell script, and there are no valid shell commands in it, so it fails.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/589750", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/414846/" ] }
589,798
I'm trying to parse an HTML page with pup .This is a command-line HTML parser and it accepts general HTML selectors. I know I can use Python which I do have installed on my machine, but I'd like to learn how to use pup just to get practice with the command-line. The website I want to scrape from is https://ucr.fbi.gov/crime-in-the-u.s/2018/crime-in-the-u.s.-2018/topic-pages/tables/table-1 I created an html file: curl https://ucr.fbi.gov/crime-in-the-u.s/2018/crime-in-the-u.s.-2018/topic-pages/tables/table-1 > fbi2018.html How do I extract out a column of data, such as 'Population'? This is the command I originally wrote: cat fbi2018.html | grep -A1 'cell31 ' | grep -v 'cell31 ' | sed 's/text-align: right;//' | sed 's/<[/]td>//' | sed 's/--//' | sed '/^[[:space:]]*$/d' | sort -nk1,1 It actually works but it's an ugly, hacky way to do it, which is why I want to use pup. I noticed that all of the values I need from the column 'Population' have headers="cell 31 .." somewhere within the <td> tag. For example: <td id="cell211" class="odd group1 valignmentbottom numbercell" rowspan="1" colspan="1" headers="cell31 cell210">323,405,935</td> I want to extract all the values that have this particular header in its <td> tag, which in this particular example, would be 323,405,935 It seems that multiple selectors in pup doesn't work, however. So far, I can select all the td elements: cat fbi2018.html | pup 'td' But I don't know how to select headers that contain a particular query. EDIT: The output should be: 272,690,813281,421,906285,317,559287,973,924290,788,976293,656,842296,507,061299,398,484301,621,157304,059,724307,006,550309,330,219311,587,816313,873,685316,497,531318,907,401320,896,618323,405,935325,147,121327,167,434
TLDR Use this if you want whole column under 'Population' of that table: ... | pup 'div#table-data-container:nth-of-type(3) td.group1 text{}' Basic usage pup does support multiple selectors. For example, if you want to grab wanted text!! below: $ cat file.html<div> <table> <tr class='class-a'> <td id='aaa'> some text </td> <td id='bbb'> some other text. </td> </tr> <tr class='class-b'> <td id='aaa'> wanted text!! </td> <td id='bbb'> some other text. </td> </tr> </table></div>$ cat file.html | pup 'div table tr.class-b td#aaa'<td id="aaa"> wanted text!!</td> Then add text{} to get only the text: $ cat file.html | pup 'div table tr.class-b td#aaa text{}' wanted text!! So in your case it should be: $ cat fbi2018.html | pup 'td#cell211 text{}'323,405,935 Or better, you don't have to download the page, just pipe curl to pup url="https://ucr.fbi.gov/crime-in-the-u.s/2018/crime-in-the-u.s.-2018/topic-pages/tables/table-1"curl -s "$url" | pup 'td#cell211 text{}' Explanation If you want values from an entire column, then you should know the characteristic of the element you wanted to scrape. In this case 'Population' column from given link. On the page, there's 2 tables wrapped in <div id='table-data-container'>... If you use ... | pup 'div#table-data-container' , it will also grab data from the second table. You don't want that. How do pup know you want the first table? Well, there's another hint. As you can see, there's few <div> s. And your table is on 3rd div. So you can use CSS's psuedo-classes , in this case div#table-data-container:nth-of-type(3) Then, the column has unique selector as td.group1 Combine them all then pipe it to grep -v -e '^$' to get rid of blank spaces. ... | pup 'div#table-data-container:nth-of-type(3) td.group1 text{}' | grep -v -e '^$' and you will get what you wanted: 272,690,813281,421,906285,317,559...327,167,434
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/589798", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/399296/" ] }
589,822
I have an old machine with an Intel D2700DC motherboard . I use it as a home server for some side projects. I have Ubuntu 32-bit installed on it, but recently figured out that its embedded D2700DC CPU is actually a 64-bit processor. My question is: is it worth reinstalling Ubuntu 64-bit there instead of 32-bit? I have 3GB RAM and it looks like there's a limit of 4GB for this hardware. Do you think it will be faster in some ways, or what other benefits can I get from installing 64-bit? One reason I see is that Ubuntu stopped supporting 32-bit in the last major releases and I still have Ubuntu 18.04 there.
It won't be faster in any meaningful way, but the reality of 32 bit is that it's increasingly legacy, which means, in the real world, bugs are not getting found and fixed. I would install 64 bit if I were you primarily because it will be more reliable and have bugs fixed more often. It will also future-proof your machine, since you can then just update to the next OS version every now and then without having to worry about it. All FOSS projects have finite developer eyes/hours available to them, and 32 bit just isn't getting many of those eyes and hours anymore, and there are no magic coders that just fix stuff because it's broken, sadly. It's wisest to just accept this limitation as reality, since it won't really change. I know of at least one 3rd party Linux kernel builder who stopped supporting 32 bit relatively recently because of a series of 32 bit kernel bugs that were known but were not getting fixed, nor did they appear likely to ever get fixed in the future. And that's the Linux kernel project, which has thousands of contributors. It just goes downhill from there with other projects with far fewer developers. This situation will happen to more and more core and not so core software and tools as 32 bit gets removed from more and more primary GNU/Linux distribution pools. This becomes increasingly relevant as major projects like Google Chrome, Firefox, etc, start to drop, if they have not already dropped, 32 bit support, which means you'll be using insecure non-updateable software to access the internet. Note that you can in theory sort of cross grade 32 bit to 64 bit (at least on Debian, not sure about Ubuntu), I tested that on one machine to see, but it's such a pain, and takes so long, and leaves so much cruft, and requires so many manual fixes, that in the end, I decided that was not worth it, and just switched the rest of my systems to 64 bit by reinstalling. Keep in mind you can copy your main configs, and then get a package list, and reinstall the packages when you reinstall to 64 bit, it doesn't take that long, and once it's done, no more need to worry about it. Your other option is to just never upgrade your box again, and just let it run until it dies. On systems that don't interact with the internet that's not a terrible way to deal with the stuff, but you may hit a snag one day when you need to match versions of something like samba or nfs and you can't because your server box OS is too old.
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/589822", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/415303/" ] }
589,851
I have recently decided to move my personal scripts to a directory in my $HOME . I have included this directory in my $PATH variable but have had ZSH respond with "command not found" when attempting to run them. The following line is what I appended to .xprofile:

PATH="$PATH:~/.local/share/";export PATH

I have a few scripts in ~/.local/share/ but haven't been able to execute them in ZSH, while in bash it works fine. I have tried adding the line above to my .zshrc , but that has not worked.
~ is not a variable and does not behave like a variable. Shells generally don't expand ~ when it is quoted. You may use $HOME instead of ~ in any shell to make sure that you get the correct path to your home directory, without relying on the shell's special treatment of the tilde character (which is a shortcut mainly for use in interactive shells). $HOME does behave like you'd expect a variable to behave, i.e. it gets expanded to the path of your home directory when it's quoted using double quotes. Also note that it's unlikely that PATH is not already an environment variable. Exporting it is therefore not needed. In the zsh shell, to add ~/.local/share to the end of your command search path, you could also do path+=~/.local/share or path+=$HOME/.local/share The array variable path is tied to the scalar variable PATH , meaning that when you update the array path as above, the added element gets added to the end of the value $PATH .
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/589851", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/369258/" ] }
589,862
I changed my password and username for MySQL and I need to replace the connection string in all of my PHP scripts accordingly. I am having trouble with the command because many of my PHP file names contain spaces. What can I change to make this command work without getting "No such file or directory" errors? Here is the command I am using:

pattern='mysql_connect("localhost", "olduser", "oldpwd")'
replacement='mysql_connect("localhost", "newuser", "newpwd")'
find . -name "*.php" | xargs -n 1 sed -i -e 's|$pattern|$replacement|g'

I am unfamiliar with xargs and sed and I copied this code from this answer to a similar question. (I think I will put the mysql_connect statement into a PHP include file and only change it in one place next time.) I'm running Ubuntu 14.04 LTS using BASH.
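A whitespace-safe way to run this is sketched below, assuming GNU find/xargs/sed as shipped on Ubuntu 14.04; note that the single quotes around the sed expression also have to become double quotes, otherwise $pattern and $replacement are never expanded by the shell:

pattern='mysql_connect("localhost", "olduser", "oldpwd")'
replacement='mysql_connect("localhost", "newuser", "newpwd")'
find . -name '*.php' -print0 | xargs -0 sed -i "s|$pattern|$replacement|g"

# or, without xargs at all:
find . -name '*.php' -exec sed -i "s|$pattern|$replacement|g" {} +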
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/589862", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/409980/" ] }
589,899
I'm stuck in the boot and can't open any terminal. When I try to boot my laptop it shows the following error: Gave up waiting for root device. Common problems:-Boot args (cat /proc/cmdline) -Check rootdelay= (did the system wait long enough?)-Missing modules (cat /proc/modules; ls /dev)ALERT! UUID=718ed077-947d-4018-80ad-59825678e81d does not exist. Dropping to a shell!BusyBox v1.27.2 (Ubuntu 1:1.27.2-2ubuntu3.2) built-in shell (ash)Enter 'help' for a list of built-in commands.(initramfs)_ I tried to follow these steps mentioned in this link ( https://forums.linuxmint.com/viewtopic.php?t=47594 ). They changed root=UUID=9c05139c-b5bb-4683-a860a7bdf456ccda ro quiet splash toroot=/dev/sda5 (i used my UUID and /dev/sda1), but then the error becomes: -Missing modules (cat /proc/modules; ls /dev)ALERT! root=/dev/sda1 does not exist. Dropping to a shell! (I tried to find out my root partition where I installed ubuntu, but i can't run fdisk command in initramfs and also i tried running (initramfs) cat /proc/cmdlineBOOT_IMAGE=/boot/vmlinuz-5.3.0-53-generic root=UUID=718ed077-947d-4018-80ad-59825678e81d ro quiet splash and (initramfs) cat /proc/modules(shows nothing) ) This is where I changed the root=UUID=718ed077-947d-4018-80ad-59825678e81d ro quiet splash toroot=/dev/sda1 (only the line started with linux)
Your empty output from cat /proc/modules indicates you have no kernel modules loaded, so something has gone wrong with either initrd file generation when installing your current kernel, or with hardware detection as the kernel starts up. You might try running modprobe ahci to try and force loading the standard AHCI SATA driver, but I'd expect it to just tell you that the module ahci was not found. If you have other kernel versions available in your GRUB boot menu, now would be a good time to try them. If the GRUB boot menu has "Advanced options for " item, select it: it should bring up a sub-menu of all kernel versions you have installed. Typically there are two boot menu items for each kernel version: one normal boot and another for booting into recovery mode. If there is a kernel version older than 5.3.0-53 available, try the normal (non-recovery mode) option for it. If the system now boots up normally, it confirms that there was a problem with creating the initrd file during the installation of the latest kernel update. But that should be fairly easy to fix: first run sudo apt-get clean to clean up the package manager's cache: anything in that cache can just be downloaded again if necessary, and over time the cache can sometimes grow very large, so cleaning it up is a good first step. Then run df -h and make sure that any of the filesystems on your system disks is not 100% full. Finally run sudo update-initramfs -k 5.3.0-53-generic -u to re-build the initramfs file for the kernel version 5.3.0-53. If that command completes without any error messages, you should now be able to boot normally with the latest kernel version you have.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/589899", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/415341/" ] }
589,904
I want to change the format of my files from: 00:00: 03 006 to: 00:00: 03,006

sed -i 's/[0-9][0-9] [0-9][0-9][0-9]/[0-9][0-9],[0-9][0-9][0-9]/g' file

doesn't work. It finds the text, but the replacement doesn't keep the original numbers.
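The bracket expressions on the right-hand side of s/// are inserted literally; sed never re-matches them there. A sketch that keeps the matched digits by capturing them with \( \) and referring back with \1 and \2:

sed -i 's/\([0-9][0-9]\) \([0-9][0-9][0-9]\)/\1,\2/g' file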
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/589904", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/415354/" ] }
589,966
I want to install mysql on Debian, but I have an error

admin@localhost:~$ sudo apt-get install mysql-server
[sudo] password for admin:
admin is not in the sudoers file. This incident will be reported.
As mentioned by Philip Couling , your admin user isn’t allowed to become root using sudo . You need to become root in some other way ( e.g. su with the root password), and run apt install mariadb-server In current versions of Debian, mysql-server is no longer available, it’s been replaced by mariadb-server (which packages MariaDB , a fork of MySQL).
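Roughly, the full sequence could look like this (a sketch; the usermod step is optional and simply lets the admin user use sudo again afterwards, which is an assumption about what you want rather than part of the original answer):

su -                        # become root with the root password
apt update
apt install mariadb-server
usermod -aG sudo admin      # optional; takes effect at the next login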
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/589966", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/301610/" ] }
590,029
How do I remove all empty strings from a Zsh array?

a=('a' '' 'b' 'c')
# remove empty elements from array
echo ${(j./.)a}

Should output a/b/c
There is the parameter expansion ${name:#pattern} (pattern can be empty), which will work on elements of an array:

a=('a' '' 'b' 'c')
echo ${(j./.)a:#}
# If the expansion is in double quotes, add the @ flag:
echo "${(@j./.)a:#}"

man 1 zshexpn :

${name:#pattern} If the pattern matches the value of name, then substitute the empty string; otherwise, just substitute the value of name. If name is an array the matching array elements are removed (use the (M) flag to remove the non-matched elements).
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/590029", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/15170/" ] }
590,038
Consider: (Using Linux/BASH, unsure about UNIX Proper) I am expecting a 2 error when arguing a file that does not exist... grep "i am here" real-file# Returns: 0 (via: echo $?)grep "i am not here" real-file# Returns: 1grep "i am not here" not-a-file# Returns: 2 (No such file or directory)ls real-files# Returns: 0ls not-files# Returns: 2 (No such file or directory) ...Those make sense, but... cat real-files# Returns: 0cat not-files# Returns: 1 (No such file or directory) ...Shouldn't "No such file or directory" be STDERR with exit-status 2? Status 2 came with grep and ls non-files, but cat returns a 1 with the identical error message. I recognize that grep could have three results (each above), but I think ls would only have two as would cat . So, two possible results couldn't be the reason for cat because it isn't so with ls . Is this a problem in the BASH code? Do we need to call Linus and Richard? If this is correct, please help me understand why. After accepting an answer, I'd love an answer that expands on the original Question since this is Linux/BASH, not UNIX Proper: Does UNIX (ie on a Mac) do the same thing or similar things?
Let's address some of the parts bottom to top and get rid of non-important parts first: Is this a problem in the BASH code? No, cat is entirely separate binary application and unrelated to bash . In some shell configurations, as pointed out by Stephane Chazelas , cat can be a built in, but even then return status of an application is entirely separate from whether or not that application is related to shell or not. Do we need to call Linus and Richard? If this is correct, please help me understand why. No, it is not a problem, and Linus and Richard are completely unrelated here. Well, correction: unless they someday declare that exit() and errno absolutely MUST be related and for some odd reason we must follow all their technical decisions. It is entirely OK that both applications return different exit status , because POSIX specifications have no explicit restrictions or assignments that say "This non-zero exit status shall mean this and that". POSIX documentation of exit syscall states: The value of status may be 0, EXIT_SUCCESS, EXIT_FAILURE, or any other value, though only the least significant 8 bits (that is, status & 0377) shall be available to a waiting parent process. This means that only status 0 has assigned meaning, which is assigned to EXIT_SUCCESS as specified in stdlib.h specs. But this is POSIX spec, how does Linux spec compare ? Well, it's about the same: Linux exit(3) manual doesn't even specify what possible values may be. Note also that it says "may be" and not "shall be", in the sense that it is not absolutely required for application to exit with specific value , even in case of errors. Your application can encounter an error or failure and still return 0 upon exit. However, POSIX spec for each portable application does specify EXIT STATUS section, which is specific to each application. Again, there's no pattern besides 0 for success and non-zero for anything else. For instance, POSIX cat specs requires: The following exit values shall be returned:0 All input files were output successfully.>0 An error occurred. For grep we have: The following exit values shall be returned: 0 One or more lines were selected. 1 No lines were selected.>1 An error occurred. Within Linux context, cat(1) doesn't explicitly state these status values, but GNU documentation does . grep(1) manual mentions using exit code of 2, but even then acknowledges that POSIX implementation only requires greater-than-zero condition for errors and urges "...for the sake of portability, to use logic that tests for this general condition instead of strict equality with 2." It is worth mentioning that in some cases there is assumption that exit() status value equals to errno value. I couldn't find any documentation or reference that would suggest POSIX requires that so far. In fact, it's the opposite. Note, that POSIX exit spec and Linux exit(3) man page do not explicitly state that exit status has to somehow match errno. So the fact that return value of 2 in GNU grep matches ENOENT error value 2 is purely coincidental. In fact, if we consider errno.h specific integer value isn't even required to be assigned and is implementation dependent. So there could very well be Unix-like implementation that treats ENOENT as integer 2. But again - that's entirely unrelated, because exit status and errno are separate things. In conclusion : The fact that cat returns different exit code than grep is appropriate and consistent with the spec for those applications. 
Exit code meaning is not fixed, and is dependent on each individual application (unless it's a POSIX application like cat or grep , in which case, for the sake of portability, they should follow it). To quote GNU OS documentation : "The most common convention is simply 0 for success and 1 for failure. Programs that perform comparison use a different convention: they use status 1 to indicate a mismatch, and status 2 to indicate an inability to compare. Your program should follow an existing convention if an existing convention makes sense for it."
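In a script, that convention is what you branch on. A small illustrative sketch handling grep's three documented outcomes explicitly (file and pattern names are placeholders):

grep -q 'pattern' somefile
case $? in
    0) echo 'match found' ;;
    1) echo 'no match' ;;
    *) echo 'an error occurred (missing file, bad pattern, ...)' >&2 ;;
esac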
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/590038", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/315069/" ] }
590,108
Since the majority of the Linux kernel is written in the C language, so when the kernel gets loaded in Main memory, does the standard C library also get loaded along the Linux kernel? If that's the reason the programs written in C consume less memory than other program as the standard C library is already loaded and as a result are faster also (less page faults) compared to program written in other languages when run on a Linux machine?
The kernel is written in C, but it doesn’t use the C library (as dave_thompson_085 points out, it’s “ freestanding ”). Even if it did, a C library loaded along with the kernel for the kernel’s use would only be available to the kernel (unless the kernel made it explicitly accessible to user space, in some way or other), so it wouldn’t help reduce the memory requirements for programs. That said, in most cases, the earliest programs run after the kernel starts (programs in the initramfs, although they’ll use their own copy of the C library; and ultimately, init ), use the C library, so it ends up being mapped early on, and it’s highly likely that the portions of the library that are widely used will always remain in physical memory. The kernel contains implementations of many of the C library’s functions , or variants (for example, printk instead of printf ); but they don’t all follow the standard exactly. In some circumstances, the implementations of C library functions in the compiler are used instead. (Note that the vast majority of programs written in languages other than C ultimately use the C library.)
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/590108", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/288789/" ] }
590,138
I want to get the exact number when I try to find the average of a column of values. For example, this is the column of input values: 14260441425486143948014236771383676136008813907451435123142297013944611325896125124812060051217057116829811530221199310125016212479171206836 When I use the following command: ... | awk '{ sum+=$1} END { print sum/NR}' I get the following output: 1.31638e+06 . However, I want the exact number, which is 1316375.05 or even better, in this format 1,316,375.05 How can I do this with command line tools only? EDIT 1 I found the following one-liner awk command which will get me the max, min and mean: awk 'NR == 1 { max=$1; min=$1; sum=0 } { if ($1>max) max=$1; if ($1<min) min=$1; sum+=$1;} END {printf "Min: %d\tMax: %d\tAverage: %.2f\n", min, max, sum/NR}' Why is it that NR must be initialized as 1? When I delete NR == 1 , I get the wrong result. EDIT 2 I found the following awk script from Is there a way to get the min, max, median, and average of a list of numbers in a single command? . It will get the sum, count, mean, median, max, and min values of a single column of numeric data, all in one go. It reads from stdin, and prints tab-separated columns of the output on a single line. I tweaked it a bit. I noticed that it does not need NR == 1 unlike the awk command above (in my first edit). Can someone please explain why? I think it has to do with the fact that the numeric data has been sorted and placed into an array. #!/bin/shsort -n | awk ' $1 ~ /^(\-)?[0-9]*(\.[0-9]*)?$/ { a[c++] = $1; sum += $1; } END { ave = sum / c; if( (c % 2) == 1 ) { median = a[ int(c/2) ]; } else { median = ( a[c/2] + a[c/2-1] ) / 2; } {printf "Sum: %d\tCount: %d\tAverage: %.2f\tMedian: %d\tMin: %d\tMax: %d\n", sum, c, ave, median, a[0], a[c-1]} }'
... | awk '{ sum+=$1} END { print sum/NR}' By default, (GNU) awk prints numbers with up to 6 significant digits (plus the exponent part). This comes from the default value of the OFMT variable . It doesn't say that in the docs, but this only applies to non-integer valued numbers. You could change OFMT to affect all print statements, or rather, just use printf here, so it also works if the average happens to be an integer. Something like %.3f would print the numbers with three digits after the decimal point. ...| awk '{ sum+=$1} END { printf "%.3f\n", sum/NR }' See the docs for the meaning of the f and g , and the precision modifier ( .prec in the second link): https://www.gnu.org/software/gawk/manual/html_node/Control-Letters.html https://www.gnu.org/software/gawk/manual/html_node/Format-Modifiers.html awk 'NR == 1 { max=$1; min=$1; sum=0 } ...' This doesn't initialize NR . Instead, it checks if NR is equal to one, i.e. we're on the first line. ( == is comparison, = is assignment.) If so, initializes max , min and sum . Without that, max and min would start as zeroes. You could never have a negative maximum value, or a positive minimum value.
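For the grouped format 1,316,375.05 , GNU awk's printf also accepts the POSIX ' (apostrophe) flag, which inserts the locale's thousands separator. A sketch, assuming a locale such as en_US.UTF-8 is installed:

    $ LC_NUMERIC=en_US.UTF-8 gawk "BEGIN { printf \"%'.2f\n\", 1316375.05 }"
    1,316,375.05

The same flag can be used in the END block of the averaging one-liner, e.g. printf "%'.2f\n", sum/NR .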
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/590138", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/399296/" ] }
590,209
I have a local and a remote host, both running Ubuntu, with the shell set to bash. In my home directory, I have two files, file-1 and file-2 , both on the local host, and on a remote host called remote . There are some other files in each home directory, and I want to list only files matching file-* . Locally, these produce the expected result, file-1 file-2 : $ ls file-*$ bash -c 'ls file-*' But these commands return ALL files in my home directory on the remote. What's going on there? $ ssh remote bash -c 'ls file-*'$ ssh remote bash -c 'ls "file-*"'$ ssh remote bash -c 'ls file-\*'$ ssh remote bash -c 'ls "file-\*"' I know that simply ssh remote 'ls file-*' produces the expected result, but why does ssh remote bash -c 'ls ...' seem to drop the arguments passed to ls ... ? (I've also piped the output from the remotely executed ls, and it's passed along, so only the ls seems to be affected: ssh remote bash -c 'ls file-* | xargs -I {} echo "Got this: {}"' .)
The command being executed on the remote host when you use ssh remote bash -c 'ls file-*' is

bash -c ls file-*

That means bash -c executes the script ls . As positional parameters, the bash -c script gets the names on the remote host matching file-* (the first of these names will be put into $0 , so it's not really part of the positional parameters). The arguments won't be passed to the ls command, so all names in the directory are listed. ssh passes the command on to the remote host for execution with one level of quotes removed (the outer set of quotes that you use on the command line). It is the shell that you invoke ssh from that removes these quotes, and ssh does not insert new quotes to separate the arguments to the remote command (as that may interfere with the quoting used by the command). You can see this if you use ssh -v :

[...]
debug1: Sending command: bash -c ls file-*
[...]

The three other commands that you show work the same, but will only set $0 to the string file-* while not setting $1 , $2 , etc. for the bash -c shell. What you may want to do is to quote the whole command:

ssh remote 'bash -c "ls file-*"'

Which, in the ssh -v debug output, gets reported as

[...]
debug1: Sending command: bash -c "ls file-*"
[...]

In short, you will have to ensure that the string that you pass as the remote command is the command you want to run after your local shell's quote removal. You could also have used

ssh remote bash -c \"ls file-\*\"

or

ssh remote bash -c '"ls file-*"'
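The way bash -c hands the extra words to $0 , $1 , ... can be reproduced locally:

    $ bash -c 'echo "script got \$0=$0 and \$1=$1"' file-1 file-2
    script got $0=file-1 and $1=file-2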
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/590209", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/29324/" ] }
590,215
I am trying to run a Windows application in Wine but it tells me this: it looks like wine32 is missing, you should install it.as root, please execute "apt-get install wine32" So as root, I try to install wine32 but I get this: $ sudo apt install wine32Reading package lists... DoneBuilding dependency tree Reading state information... Donewine32:i386 is already the newest version (4.0-2).You might want to run 'apt --fix-broken install' to correct these.The following packages have unmet dependencies: wine32:i386 : Depends: libc6:i386 (>= 2.28) but it is not going to be installed Depends: libwine:i386 (= 4.0-2) but it is not going to be installedE: Unmet dependencies. Try 'apt --fix-broken install' with no packages (or specify a solution). attempting to install libc6:i386 and libwine:i386 give me errors too: $ sudo apt install libc6:i386Reading package lists... DoneBuilding dependency tree Reading state information... DoneYou might want to run 'apt --fix-broken install' to correct these.The following packages have unmet dependencies: libc6 : Breaks: libc6:i386 (!= 2.29-7) but 2.28-10 is to be installed libc6:i386 : Depends: libgcc1:i386 but it is not going to be installed Recommends: libidn2-0:i386 (>= 2.0.5~) but it is not going to be installed Breaks: libc6 (!= 2.28-10) but 2.29-7 is to be installed libcrypt1 : Breaks: libc6:i386 (< 2.29-4) but 2.28-10 is to be installed wine32:i386 : Depends: libwine:i386 (= 4.0-2) but it is not going to be installedE: Unmet dependencies. Try 'apt --fix-broken install' with no packages (or specify a solution). And: $ sudo apt install libwine:i386[sudo] password for optiplex: Reading package lists... DoneBuilding dependency tree Reading state information... DoneYou might want to run 'apt --fix-broken install' to correct these.The following packages have unmet dependencies: libwine:i386 : Depends: libc6:i386 (>= 2.28) but it is not going to be installed Depends: libfontconfig1:i386 (>= 2.12.6) but it is not going to be installed Depends: libfreetype6:i386 (>= 2.6.2) but it is not going to be installed Depends: libncurses6:i386 (>= 6) but it is not going to be installed Depends: libtinfo6:i386 (>= 6) but it is not going to be installed Depends: libasound2:i386 (>= 1.0.16) but it is not going to be installed Depends: libglib2.0-0:i386 (>= 2.12.0) but it is not going to be installed Depends: libgphoto2-6:i386 (>= 2.5.10) but it is not going to be installed Depends: libgphoto2-port12:i386 (>= 2.5.10) but it is not going to be installed Depends: libgstreamer-plugins-base1.0-0:i386 (>= 1.0.0) but it is not going to be installed Depends: libgstreamer1.0-0:i386 (>= 1.4.0) but it is not going to be installed Depends: liblcms2-2:i386 (>= 2.2+git20110628) but it is not going to be installed Depends: libldap-2.4-2:i386 (>= 2.4.7) but it is not going to be installed Depends: libmpg123-0:i386 (>= 1.13.7) but it is not going to be installed Depends: libopenal1:i386 (>= 1.14) but it is not going to be installed Depends: libpcap0.8:i386 (>= 0.9.8) but it is not going to be installed Depends: libpulse0:i386 (>= 0.99.1) but it is not going to be installed Depends: libudev1:i386 (>= 183) but it is not going to be installed Depends: libvkd3d1:i386 (>= 1.0) but it is not going to be installed Depends: libx11-6:i386 but it is not going to be installed Depends: libxext6:i386 but it is not going to be installed Depends: libxml2:i386 (>= 2.9.0) but it is not going to be installed Depends: ocl-icd-libopencl1:i386 but it is not going to be installed or libopencl1:i386 Depends: 
zlib1g:i386 (>= 1:1.1.4) but it is not going to be installed Recommends: libcapi20-3:i386 but it is not going to be installed Recommends: libcups2:i386 (>= 1.4.0) but it is not going to be installed Recommends: libdbus-1-3:i386 (>= 1.9.14) but it is not going to be installed Recommends: libgl1:i386 but it is not going to be installed Recommends: libglu1-mesa:i386 but it is not going to be installed or libglu1:i386 Recommends: libgnutls30:i386 (>= 3.6.5) but it is not going to be installed Recommends: libgsm1:i386 (>= 1.0.18) but it is not going to be installed Recommends: libgssapi-krb5-2:i386 (>= 1.6.dfsg.2) but it is not going to be installed Recommends: libjpeg62-turbo:i386 (>= 1.3.1) but it is not going to be installed Recommends: libkrb5-3:i386 (>= 1.6.dfsg.2) but it is not going to be installed Recommends: libodbc1:i386 (>= 2.3.1) but it is not going to be installed Recommends: libosmesa6:i386 (>= 10.2~) but it is not going to be installed Recommends: libpng16-16:i386 (>= 1.6.2-1) but it is not going to be installed Recommends: libsdl2-2.0-0:i386 (>= 2.0.9) but it is not going to be installed Recommends: libtiff5:i386 (>= 4.0.3) but it is not going to be installed Recommends: libv4l-0:i386 (>= 0.5.0) but it is not going to be installed Recommends: libvulkan1:i386 but it is not going to be installed Recommends: libxcomposite1:i386 (>= 1:0.3-1) but it is not going to be installed Recommends: libxcursor1:i386 (> 1.1.2) but it is not going to be installed Recommends: libxfixes3:i386 but it is not going to be installed Recommends: libxi6:i386 but it is not going to be installed Recommends: libxinerama1:i386 but it is not going to be installed Recommends: libxrandr2:i386 but it is not going to be installed Recommends: libxrender1:i386 but it is not going to be installed Recommends: libxslt1.1:i386 (>= 1.1.25) but it is not going to be installed Recommends: libxxf86vm1:i386 but it is not going to be installed Recommends: libgl1-mesa-dri:i386 but it is not going to be installed Recommends: libasound2-plugins:i386 but it is not going to be installed wine32:i386 : Depends: libc6:i386 (>= 2.28) but it is not going to be installed E: Unmet dependencies. Try 'apt --fix-broken install' with no packages (or specify a solution). Doing apt --fix-broken install works but does not fix libc6 or libwine after running them again. I know there are other answers and forum posts on the same thing and I have tried using their solutions but I don't get anything new. Can someone help me fix or work around this? Thank you for your time! I am running Debian 10 on an Optiplex 755 EDIT Output of apt policy libc6:{amd64,i386} : (As asked by Stephen Kitt) $ apt policy libc6:{amd64,i386}libc6: Installed: 2.29-7 Candidate: 2.29-7 Version table: *** 2.29-7 100 100 /var/lib/dpkg/status 2.28-10 500 500 http://ftp.au.debian.org/debian buster/main amd64 Packageslibc6:i386: Installed: (none) Candidate: 2.28-10 Version table: 2.28-10 500 500 http://ftp.au.debian.org/debian buster/main i386 Packages
Your setup is somewhere between Debian 10 and the forthcoming Debian 11, with package versions which are no longer available from repositories. Packages for multiple architectures ( amd64 and i386 , in this case) need to be installed in exactly the same version, and that’s no longer possible in your system’s current state. Assuming your desired state is Debian 10, you need to downgrade every package which can’t be installed: sudo apt install libc6/stable etc. Once that’s done, you’ll be able to install wine32 .
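For example, based on the apt policy output in the question (2.28-10 is the buster version of libc6), the downgrade could also be spelled out explicitly. This is only a sketch; review the changes apt proposes before confirming:

    sudo apt install libc6=2.28-10 libc6:i386=2.28-10
    # or select the release instead of the exact version:
    sudo apt install libc6/buster libc6:i386/buster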
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/590215", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/386653/" ] }
590,303
I need to find the 10 most frequent words in a .csv file.The file is structured so that each line contains comma-separated words. If the same word is repeated more than once in the same line, it should be counted as one.So, in the example below: green,blue,blue,yellow,red,yellowred,blue,green,green,green,brown green, blue and red should be counted as 2 and yellow and brown as 1 I know similar questions have been asked before, and one solution was: <file.csv tr -c '[:alnum:]' '[\n*]' | sort|uniq -c|sort -nr|head -10 But this will count the number of time a word appears in the same line, like this: 4 green 3 blue 2 yellow 2 red 1 brown and this is not actually what I need.Any help? Also I will appreciate a short explanation of the command and why does the command I found in similar questions does not do what I need.
I would probably reach for perl

Use uniq from the List::Util module to de-duplicate each row.
Use a hash to count the resulting occurrences.

For example

perl -MList::Util=uniq -F, -lnE '
    map { $h{$_}++ } uniq @F
  }{
    foreach $k (sort { $h{$b} <=> $h{$a} } keys %h) {say "$h{$k}: $k"}
' file.csv
2: red
2: green
2: blue
1: yellow
1: brown

If you have no option except the sort and uniq coreutils, then you can implement a similar algorithm with the addition of a shell loop

while IFS=, read -a words; do
  printf '%s\n' "${words[@]}" | sort -u
done < file.csv | sort | uniq -c | sort -rn
      2 red
      2 green
      2 blue
      1 yellow
      1 brown

however please refer to Why is using a shell loop to process text considered bad practice?
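If GNU awk is available, the per-line de-duplication and the counting can also be done in a single awk process; a sketch:

    awk -F, '{
            delete seen
            for (i = 1; i <= NF; i++) if (!seen[$i]++) count[$i]++
        }
        END { for (w in count) print count[w], w }' file.csv |
        sort -rn | head -n 10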
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/590303", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/413624/" ] }
590,319
UPDATE: No, it is not safe to delete these snaps. I deleted them and can no longer open three of my applications. Attempt at opening Visual Studio Code: ~$ codeinternal error, please report: running "code" failed: cannot find installed snap "code" at revision 33: missing file /snap/code/33/meta/snap.yaml The snaps in /var/lib/snapd/snaps are taking up 2.0 GB of space on my disk right now. I want to clear up space, but I'm not sure if deleting these snaps is safe (if so, can I just run sudo rm -rf * ?) This is what I see when I run snap list : code_32.snap gnome-3-28-1804_116.snap gnome-logs_93.snapcode_33.snap gnome-3-34-1804_27.snap gnome-system-monitor_135.snapcore18_1705.snap gnome-3-34-1804_33.snap gnome-system-monitor_145.snapcore18_1754.snap gnome-calculator_730.snap gtk-common-themes_1502.snapcore_8935.snap gnome-calculator_748.snap gtk-common-themes_1506.snapcore_9066.snap gnome-characters_495.snap partialdiscord_109.snap gnome-characters_539.snap spotify_36.snapgnome-3-28-1804_110.snap gnome-logs_100.snap spotify_41.snap What are the gnome, code, and core snaps? I've installed discord and spotify. Will deleting the discord and spotify snaps lead to any issues with opening those applications? I'm using Ubuntu 18.04.3 LTS.
Yes, it is safe to free up some space by deleting the snap cache in /var/lib/snapd/snaps/ when the folder grows large. Try this:

sudo apt purge snapd

This should actually remove that dir and all traces of snaps on your system. More snap versions are stored by the system after snap package updates, meaning that for each installed snap package that had updates, you could have several revisions stored on your system, thus taking up quite a bit of disk space. There is a snap option (starting with snapd version 2.34), called refresh.retain , to set the maximum number of a snap's revisions stored by the system after the next refresh, which can be set to a number between 2 and 20. You can change this from the default value of 3 to 2 by using:

sudo snap set system refresh.retain=2

But what if you want to remove all versions kept on the system for all snap packages that had updates? Follow this link for more information.
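If the goal is only to reclaim the space taken by old, now-disabled revisions (rather than removing snap support entirely), a small loop over snap list --all is commonly used; a sketch, to be used with care:

    #!/bin/sh
    # remove every snap revision that "snap list --all" marks as disabled
    set -eu
    snap list --all | awk '/disabled/ { print $1, $3 }' |
    while read -r name rev; do
        sudo snap remove "$name" --revision="$rev"
    done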
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/590319", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/415717/" ] }
590,329
I'm trying to numerically sort every column individually in a very large file. I need the command to be fast, so I'm trying to do it in an awk command. Example Input: 1,4,2,7,49,2,1,1,13,9,9,2,25,7,7,8,8 Example Output: 1,2,1,1,13,4,2,2,25,7,7,7,49,9,9,8,8 I made something that will do the job (but its not the powerful awk command I need): for i in $(seq $NumberOfColumns); do SortedMatrix=$(paste <(echo "$SortedMatrix") <(awk -F ',' -v x=$i '{print $x}' File | sort -nr) -d ,)done but it is very slow! I've tried to do it in awk and I think I'm close: SortedMatrix=$(awk -F ',' 'NR==FNR {for (i=1;i<=NF;i++) print|"sort -nr"}' File) But it doesn't output columns (just one very long column), I understand why its doing this but I don't know how to resolve it, I was thinking of using paste inside awk but I have no idea how to implement it. Does anyone know how to do this in awk? Any help or guidance will be much appreciated
You can do it in a single GNU awk: gawk -F ',' ' { for(i=1;i<=NF;i++){matrix[i][NR]=$i} } END{ for(i=1;i<=NF;i++){asort(matrix[i])} for(j=1;j<=NR;j++){ for(i=1;i<NF;i++){ printf "%s,",matrix[i][j] } print matrix[i][j] } }' file for(i=1;i<=NF;i++){matrix[i][NR]=$i} Multidimensional array (GNU extension) matrix gets populated, so that matrix[i][j] contains the number of column i , row j . for(i=1;i<=NF;i++){asort(matrix[i])} Sorts each column (GNU extension). Finally for(j=1;j<=NR;j++){ for(i=1;i<NF;i++){ printf "%s,",matrix[i][j] } print matrix[i][j]} Prints a sequence of a[1], , a[2], , ..., a[NF-1], , a[NF]\n for each line.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/590329", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/170497/" ] }
590,401
Regarding systemd , is it possible to pause or stop a running service (ensuring all data have been written) while another service is running? My scenario looks like this: I have built a sensor (based upon a Raspberry Pi Zero W and running Kali Linux). The daemon process for sensing is starting (as systemd service unit sensor.service ) right after the boot process finishes. [Unit]Description=Arbitrary sensorAfter=network.target[Service]ExecStart=/usr/local/bin/sense --daemon[Install]WantedBy=multi-user.target The sensor is collecting and storing the data locally and continuously. The data is being computed periodically and its result shall be sent away (over the network). Now, for the time of computation the sensor shall stop receiving further data to ensure all data have been written , before the computation starts. Right after the computation is done, the sensor shall start to collect further data. Yes, I could write a script by myself which handles that: systemctl stop sensor.servicesystemctl start compute.service # Terminates itselfsystemctl start sensor.service But I wonder, whether this is a scenario that can be handled by systemd for me? Especially without the need to write a script by myself. What I'm imaging is a compute.timer unit which handles the three lines above.
The script you wrote in your question has a small problem. It doesn't wait for compute.service to stop before starting sensor.service .

Strategies

Here are four strategies. I think the third one is my favorite.

Find another means of concurrency

I'm not sure that stopping sensor.service is necessary. It sounds like the reason you would do this is so that new data doesn't affect the computation. In that case, I'd take a snapshot of the data file. Then compute off of that snapshot. That snapshot won't update during the compute process, solving your concurrency problem. Using PrivateTmp= means that your /tmp is actually a fresh temporary mount which is deleted when the service stops.

# /etc/systemd/system/compute.service
[Service]
ExecStartPre=/bin/cp /var/lib/sensor/data /tmp/data_to_compute
ExecStart=/usr/bin/compute /tmp/data_to_compute
PrivateTmp=yes

Do it all in a timed script

If it really is necessary to stop sensor.service, your script idea isn't bad. I'd probably make compute.timer which calls compute.service which contains ExecStart=/usr/local/bin/compute.sh . That would contain something like this:

#!/bin/sh
systemctl stop sensor.service
/usr/bin/compute /var/lib/sensor/data
systemctl start sensor.service

Use systemd relationships

In the previous example, we called systemctl from inside a script run by systemd. Something about that just doesn't feel right. The Conflicts= relationship in systemd means that when the service is started, it will automatically stop the conflicting service. systemd doesn't have a relationship to start a process when another finishes, so we can use ExecStopPost= for that here. Therefore adding this to compute.service :

[Unit]
Conflicts=sensor.service

[Service]
ExecStart=/usr/bin/compute /var/lib/sensor/data
ExecStopPost=systemctl start sensor.service

will achieve your goal of stopping sensor.service whenever compute.timer triggers. Instead of a custom shell script, we have moved everything into the unit file. The discomfort of using systemctl from inside of systemd isn't solved, but at least we've reduced it and made it more transparent (instead of hiding it in a script).

Two timers

This is a little inelegant, but if you want to replace the ExecStopPost= line, you could have two timers. compute.timer is set to compute as frequently as you like (just like you are already doing). If you want to compute 5 minutes worth of data, then set sensor.timer to launch sensor.service 5 minutes earlier than you trigger compute.timer .

Putting it all together

Regardless of the strategy, you'll need compute.timer and compute.service , but shouldn't need to touch sensor.service . I realized that you might have been asking how to make a systemd timer. Here is how I would completely implement strategy 3:

# /etc/systemd/system/compute.timer
[Unit]
Description=Timer to trigger computations every 15 minutes

[Timer]
OnCalendar=*:0/15

[Install]
WantedBy=timers.target

compute.timer (above) will start compute.service (below) every 15 minutes.

# /etc/systemd/system/compute.service
[Unit]
Description=Runs computation (triggered by compute.timer)
Conflicts=sensor.service

[Service]
ExecStart=/usr/bin/compute /var/lib/sensor/data
ExecStopPost=systemctl start sensor.service

Then just enable the timer (and start if you don't want to reboot):

systemctl enable compute.timer
systemctl start compute.timer
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/590401", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/208465/" ] }
590,411
I use the following command to match some IDs in file 1 and retrieve data stored in referencefile . while read -r line; do awk -v pattern=$line -v RS=">" '$0 ~ pattern { printf(">%s", $0); }' referencefile;done <file1 >output I have 50 files similar to file1 stored in a directory and want to perform the above command on all those files and save the outputs as seperate files. Is there a way to achieve this in one single command like a nested loop. reference file >LD200FFFFFFFFFFFFFFFFFFFFSSSSSSSSS FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF SSSSSSSSSSSSSSS>LD400HHHHHHHHHHHHHHHHHHHHHHHHHHHHH HHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHH>LD311DDDDDDDDDDDDDDDDDDDDDDDDDDDDD>LD500TTTTTTTTTTTTTTTTTTTTTTTTTTTTT>LD100KKKKKKKKKKKKKKKKKKKKKKKKKKKKK example file 1 LD100LD200LD311 expected output1.txt >LD100KKKKKKKKKKKKKKKKKKKKKKKKKKKKK>LD200FFFFFFFFFFFFFFFFFFFFSSSSSSSSS FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF SSSSSSSSSSSSSSS>LD311DDDDDDDDDDDDDDDDDDDDDDDDDDDDD example file 2 LD500LD400 expected output2.txt >LD500TTTTTTTTTTTTTTTTTTTTTTTTTTTTT>LD400HHHHHHHHHHHHHHHHHHHHHHHHHHHHH HHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHH
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/590411", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/415518/" ] }
590,518
It seems the common solution to this is to remove /etc/initramfs-tools/conf.d/resume and run update-initramfs -u . But it is unclear how you get to that stage. Can you give a special command to GRUB to ignore the resume for a single boot? Or is there a different trick to get past the blocking?
Yes, there is "a special command", more specifically a Linux kernel boot option . A lot of kernel boot options are available for various purposes. To skip the attempt to resume from a configured suspend/resume disk/partition (usually a swap partition): Interrupt GRUB by pressing any key, then press E to edit the currently-selected boot entry, find the line starting with the word linux and add noresume as a separate word to the end of that line. This change will not persist: it will take effect for this one boot only. For a persistent change, use the instructions you mentioned in the question.
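For illustration, after the edit the line might look roughly like this (the kernel version and root device are placeholders):

    linux /vmlinuz-5.4.0-33-generic root=UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx ro quiet splash noresume

Then press Ctrl+X (or F10) to boot the edited entry.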
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/590518", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/2972/" ] }
590,533
On my Ubuntu 20 desktop, I've scheduled gedit to run a minute from now, but nothing happens. Why is that? $ echo "echo foo > at.sux" | at now + 1warning: commands will be executed using /bin/shjob 8 at Tue Jun 2 21:47:00 2020$ echo `which gedit` | at now + 1warning: commands will be executed using /bin/shjob 9 at Tue Jun 2 21:47:00 2020$ atq9 Tue Jun 2 21:47:00 2020 a dandv8 Tue Jun 2 21:47:00 2020 a dandv# A minute later$ cat at.suxfoo$ ps auxf | grep gedit # nothing but grep$ `which gedit` # launches gedit I tried echo "gedit &" | at now + 1 as suggested in the comment, but gedit is still not running a minute later.
GUI applications access the screen through a server. When you run them from the command line or the menu, the environment tells the application how to connect to the server. When you run an at command though, the environment does not include that information (you can do at -c JOBNUMBER to see the environment the application inherits) and that's why the application will start, but not be able to run. To run a GUI application you could specify the server, either calling the application with something this (your display might be different): DISPLAY=:0 application or, depending on the application: application --display :0 You might need to change the server permissions and other services might be needed and not accessible from outside the session though (things like dbus).
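Applied to the example in the question, and assuming the desktop session is running on display :0 , that would be something like:

    echo 'DISPLAY=:0 gedit' | at now + 1

If the server still rejects the connection, you may also need to point XAUTHORITY at your session's authority file (its location varies between display managers), e.g. DISPLAY=:0 XAUTHORITY=$HOME/.Xauthority gedit .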
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/590533", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/29324/" ] }
590,540
I'm using KDE plasma, and an ethernet adapter to install things on my laptop before getting wifi working. But I can't seem to get wifi to work at all. Here's some things I tried: $ iwconfigenp0s13f0u3u2 no wireless extensions.lo no wireless extensions. Sorry if my formatting is wrong. Note that enp0s13f0u3u2 is my ethernet connection/adapter (I'm pretty sure). $ ip link1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000 link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:004: enp0s13f0u3u2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP mode DEFAULT group default qlen 1000link/ether 4c:e1:73:42:1c:97 brd ff:ff:ff:ff:ff:ff For ispci: I only pasted the relevant info $ lspci00:14.3 Network controller: Intel Corporation Killer Wi-Fi 6 AX1650i 160MHz Wireless Network Adapter (201NGW) (rev 30) I kind of really need wifi on this laptop. Thanks a bunch guys, let me know if there's anything else you need me to provide. edit: I've installed network manager, wpa_supplicant, netctl, wireless_tools Also when I try to just use wifi-menu I get invalid interface specification
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/590540", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/415912/" ] }
590,699
apt is the command that is being recommended by the Linux distributions. It provides the necessary option to manage the packages. It is easier to use with its fewer but easy to remember options. As quoted in itsfoss.com There is no reason to stick with apt-get unless you are going to do specific operations that utilize more features of apt-get. apt is a subset of apt-get and apt-cache commands providing necessary commands for package management while apt-get won’t be deprecated, as a regular user, you should start using apt more often I get this error when I use apt in shell scripts whereas it does not happen when I use apt-get instead" WARNING: apt does not have a stable CLI interface. Use with caution in scripts. My questions are: Why doesn't apt have a stable CLI interface? How can I use apt with caution or safely? How can I disable this error message?
apt is the recommended command for interactive use by administrators, not for use in shell scripts. This is addressed to a large extent in the apt manpage : The apt(8) commandline is designed as an end-user tool and it may change behavior between versions. While it tries not to break backward compatibility this is not guaranteed either if a change seems beneficial for interactive use. All features of apt(8) are available in dedicated APT tools like apt-get(8) and apt-cache(8) as well. apt(8) just changes the default value of some options (see apt.conf(5) and specifically the Binary scope). So you should prefer using these commands (potentially with some additional options enabled) in your scripts as they keep backward compatibility as much as possible. Thus: apt doesn’t have a stable CLI interface to allow breaking changes, if they’re deemed beneficial. You can’t, the tool is explicitly not designed for this. Use apt-get or apt-cache in your scripts to avoid the error message.
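In practice that means writing the script with the dedicated tools, for example (htop is just a stand-in package name):

    #!/bin/sh
    # script-friendly equivalents of common interactive apt invocations
    apt-get update
    apt-get -y install htop      # instead of: apt install htop
    apt-cache policy htop        # instead of: apt policy htop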
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/590699", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/292716/" ] }
590,709
When executing wget https://docs.conda.io/projects/conda/en/4.6.0/_downloads/52a95608c49671267e40c689e0bc00ca/conda-cheatsheet.pdf I have this error: --2020-06-03 20:55:06-- https://docs.conda.io/projects/conda/en/4.6.0/_downloads/52a95608c49671267e40c689e0bc00ca/conda-cheatsheet.pdfResolving docs.conda.io (docs.conda.io)... 104.31.71.166, 104.31.70.166, 172.67.149.185, ...Connecting to docs.conda.io (docs.conda.io)|104.31.71.166|:443... connected.ERROR: cannot verify docs.conda.io's certificate, issued by ‘CN=SSL-SG1-GFRPA2,OU=Operations,O=Cloud Services,C=US’: Unable to locally verify the issuer's authority.To connect to docs.conda.io insecurely, use `--no-check-certificate'. What I have tried to solve this problem: sudo update-ca-certificates -f Export the certificate from browser when opening the url, save it in a file conda.cer , then perform openssl x509 -in conda.cer -inform der -outform pem -out conda.pem , then execute: wget --ca-certificate=conda.pem \https://docs.conda.io/projects/conda/en/4.6.0/_downloads/52a95608c49671267e40c689e0bc00ca/conda-cheatsheet.pdf => still the same error Put the file under /etc/ssl/certs , sudo cp conda.pem /etc/ssl/certs => still same error I know I can use --no-check-certificate , but this is not what i want. This problem occurs for some other websites too. Anyone know the reason? Thanks. UPDATE1 I have tried the following steps:1) sudo cp conda.crt /usr/share/ca-certificates/mozilla/ 2) sudo vi /etc/ca-certificates.conf and append mozilla/conda.crt at the end 3) run sudo update-ca-certificates -f 4) i can see symbolic link created under /etc/ssl/certs which looks like: conda.pem -> /usr/share/ca-certificates/mozilla/conda.crt However, it's still not working! UPDATE2 - Deleted. Please refer to UPDATE3 UPDATE 3 The certificates chain in the URL above contains 4 certificates. Just to ensure not missing any one, I put all the 4 certificates (namely conda1.crt , conda2.crt , conda3.crt , conda4.crt ) in /usr/share/ca-certificates/mozilla/ and repeat the steps mentioned in UPDATE1 . Symbolic links are created successfully in /etc/ssl/certs . Verification: openssl verify -no-CAfile -no-CApath -partial_chain -CAfile conda1.pem conda2.pemconda2.pem: OKopenssl verify -no-CAfile -no-CApath -partial_chain -CAfile conda2.pem conda3.pemconda3.pem: OKopenssl verify -no-CAfile -no-CApath -partial_chain -CAfile conda3.pem conda4.pemconda4.pem: OK Result: still fail with wget UPDATE4 Part of the cause is found Bluecoat service which intercepts the network is the root cause (it has problem to VM Ubuntu only though, the host machine windows works fine with ssl). Both of these works ( conda1.crt is extracted from browser which should be from the Bluecoat service): wget --ca-certificates=/etc/ssl/certs/ca-certificates.crt https://docs.conda.io/projects/conda/en/4.6.0/_downloads/52a95608c49671267e40c689e0bc00ca/conda-cheatsheet.pdfwget --ca-certificates=conda1.crt https://docs.conda.io/projects/conda/en/4.6.0/_downloads/52a95608c49671267e40c689e0bc00ca/conda-cheatsheet.pdf CURRENT STATUS I have installed conda1.crt in /etc/ssl/certs following the steps described in UPDATE1 . conda1.crt is believed to be the right one as shown in the wget step in UPDATE4 . However, even after this step, connection still failed. If I disable Bluecoat service forecely, ssl problem disappeared. However, I am required to use Bluecoat, hence any help to resolve the problem under Bluecoat is really appreciated!
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/590709", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/402658/" ] }
590,710
I can't find the answer to this anywhere. It sounds simple but I'm starting to think maybe it's not. I want sed to remove all the CATs before the STOP in this string: two CAT two four CAT CAT seven one STOP four CAT two CAT three So the output I'm hoping for will be: two two four seven one STOP four CAT CAT two CAT three There could be any number of CATs anywhere in the string. The stop marker can be anywhere too, but just one of them, and always spelled STOP. (Edit: as pointed out below my question is ambiguous - must CAT have adjacent spaces or can any chars border it? Maybe only non-alphanumeric chars are ok? Presenting my actual use case was intense (a big bash function) so I simplified, too much. Readers please bear in mind that solutions below may make different assumptions about adjacency. Thanks)
You could replace one at a time in a loop, until there are no more CAT s before the STOP :

$ echo 'two CAT two four CAT CAT seven one STOP four CAT two CAT three' | sed -e :a -e '/CAT.*STOP/s/CAT //;ta'
two two four seven one STOP four CAT two CAT three

Here :a defines a label, the s/CAT // substitution is only attempted while a CAT still occurs somewhere before STOP (the /CAT.*STOP/ address), and ta branches back to the label whenever the previous substitution succeeded, so the loop stops as soon as no CAT is left before the STOP marker.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/590710", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/316535/" ] }
591,028
Using awk , how we can add text at the bottom, only if a certain pattern was not found in the file? This is not working: awk '{if ($0 !~ /pattern/) {print}; END {print "someText"}}' file The file is small, less than 1MB.
The following should work: awk '/pattern/ {found=1} {print} END{if (!found) print "someText"}' file This will a priori print the entire file ( {print} rule without conditions) and at that time look for the pattern. If it is found, it sets a flag ( {found=1} with the condition /pattern/ , which is equivalent to $0 ~ /pattern/ ). If that flag is not set at end-of-file, the "someText" will be printed. You can either redirect the output to a new file, as in awk ' <see above> ' file > newfile or, if you have a newer version of GNU awk , use the "inplace" editing function (with care!): awk -i inplace ' <see above> ' file
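A quick check with a file that does not contain the pattern:

    $ printf 'foo\nbar\n' > file
    $ awk '/pattern/ {found=1} {print} END{if (!found) print "someText"}' file
    foo
    bar
    someText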
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/591028", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/416355/" ] }
591,352
the zerofree command finds the unallocated, non-zeroed blocks in an ext2 or ext3 file-system and fills them with zeroes

An NTFS Windows machine, with a mechanical drive, was upgraded from 7 to 10. The drive is old and I suspect that much of the free space actually has data and is not filled with zeros. Is it possible (how?) to zero out the free space, so that when an image is created, the size is minimal? Assume either a bootable USB configured with Ubuntu or SysRescueCD is available to process the HDD by mounting the NTFS partition with NTFS-3G if necessary.
Depends on the tool you are using to create the image. Usually you don't need to zero it out. For example ntfsclone (part of ntfs-3g) states this in the man page : ntfsclone will efficiently clone (copy, save, backup, restore) or rescue an NTFS filesystem to a sparse file, image, device (partition) or standard output. It works at disk sector level and copies only the used data. Unused disk space becomes zero (cloning to sparse file), encoded with control codes (saving in special image format), left unchanged (cloning to a disk/partition) or filled with zeros (cloning to standard output). So, the free space will be ignored and, if you are cloning to a file, converted to "holes" in the sparse result. Other cloning software, such as Clonezilla will use ntfsclone by default to create the partition image.
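For example, to store a minimal image of an NTFS partition and later put it back (the device name is a placeholder):

    # save only the used sectors in ntfsclone's special image format
    ntfsclone --save-image --output backup.img /dev/sdX1

    # restore it later, overwriting the target partition
    ntfsclone --restore-image --overwrite /dev/sdX1 backup.img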
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/591352", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/182280/" ] }
591,364
I installed Google Chrome via terminal in an Ubuntu based AWS instance by following: https://www.cyberciti.biz/faq/how-to-install-google-chrome-in-ubuntu-linux-12-xx-13-xx/ Problem is that I can invoke Chrome as root user but not as normal user: This is my installation directory: I manipulated access permissions using chmod trying to debug the issue but, had no luck solving it. How can I invoke Chrome as normal user in this case? Because of this issue, when I run tests via Jenkins I am getting this error: org.openqa.selenium.WebDriverException: unknown error: no chrome binary at /usr/bin/google-chrome
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/591364", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/416648/" ] }
591,378
In awk, we use $ to extract the contents of the matched lines (based on pattern) separated by the IFS awk `/LIG/{print $1, $2}' topol.top In shell, we use $ for extracting the value which we stored for a variable. for eg. for loop. for i in *; do mv -- $i $i.pdb; done (Corrected based on an answer) Is $ usage in both these contexts essentially the same or different? If different why is the difference? How to use awk and shell together with $ intermingled?
$ in shell and $ in awk only look like they're related because sometimes there can be a number and sometimes a variable after $ in shell and the same is true in awk, but they aren't in any way related, just the same character used by 2 completely different tools in 2 completely different contexts, and what $<number> and $<variable> mean in shell is completely different from what they mean in awk.

In shell you dereference a variable by prepending it with $ and the positional parameters, etc. are treated the same as variables in this regard so you can have:

$0 = the path to the current command
$1 = the first argument passed to the command
$2 = the second argument passed to the command
$# = the number of arguments passed to the command
$? = the exit status of the last command run
$var = the value of the shell variable var
etc.

In awk $ is the name of the array (conceptually) of fields that the current record is split into so you only use $ in these expressions:

$0 = the current record being processed
$1 = the first field of the current record
$2 = the second field of the current record

To dereference an awk variable var , you just use its name var , just like in C, and in fact the awk syntax is far more similar to C than it is to shell. If you ever see $var used in awk then it's not the $ that's dereferencing var , it's the use of the name var alone that's dereferencing var , and if var has a numeric value like say, 5 , then $var means the same as $5 which is the 5th field of the current record, while if var does not have a numeric value then $var means the same as $0 which is the current record:

var=0 => $var = $0 = the current record being processed
var=1 => $var = $1 = the first field of the current record
var=2 => $var = $2 = the second field of the current record
var="" => $var = $0 = the current record being processed
var="foo" => $var = $0 = the current record being processed

awk and shell are 2 completely different tools with their own syntax, semantics, and scope for variables etc. so just treat them as such and don't assume anything you see in an awk script is in any way related to anything you see in a shell script in syntax, semantics, or scope and vice-versa.
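A side-by-side illustration of the difference:

    $ echo 'a b c' | awk '{ n = 2; print $n }'    # awk: $n is "field number n"
    b
    $ n=2; echo "$n"                              # shell: $n is "the value of n"
    2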
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/591378", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/412078/" ] }
591,389
Can anyone help ? I have 2 disks spanning my main partitions. 1 is 460Gb and the other is a 1TB. I would like to remove the 1TB - I would like to use it in another machine. The volume group isn't using a lot of space anyway, I only have docker with a few containers using that disk and my docker container volumes are on a different physical disk anyway. If I just remove the disk ([physically]), it is going to cause problems right? Here is some info pvdisplay --- Physical volume --- PV Name /dev/sda3 VG Name ubuntu-vg PV Size <464.26 GiB / not usable 2.00 MiB Allocatable yes (but full) PE Size 4.00 MiB Total PE 118850 Free PE 0 Allocated PE 118850 PV UUID DA7Q8E-zJEz-2FzO-N64t-HtU3-2Z8P-UQydU4 --- Physical volume --- PV Name /dev/sdb1 VG Name ubuntu-vg PV Size 931.51 GiB / not usable 4.69 MiB Allocatable yes (but full) PE Size 4.00 MiB Total PE 238466 Free PE 0 Allocated PE 238466 PV UUID Sp6b1v-nOj2-XXdb-GZYf-1Vej-cfdr-qLB3GU LVM confuses me a little :-) Is there not just a simple case of saying, "remove yourself from the VG and assing anything you are using the remaining group member" ? Its worth noting that the 1TB was added afterwards, so assume its easier to remove ? Any help really appreciated EDIT Also some more info df -hFilesystem Size Used Avail Use% Mounted onudev 16G 0 16G 0% /devtmpfs 3.2G 1.4M 3.2G 1% /run/dev/mapper/ubuntu--vg-ubuntu--lv 1.4T 5.1G 1.3T 1% / It sames its using only 1% also output of lvs lvs LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert ubuntu-lv ubuntu-vg -wi-ao---- 1.36t EDIT pvdisplay -m --- Physical volume --- PV Name /dev/sda3 VG Name ubuntu-vg PV Size <464.26 GiB / not usable 2.00 MiB Allocatable yes (but full) PE Size 4.00 MiB Total PE 118850 Free PE 0 Allocated PE 118850 PV UUID DA7Q8E-zJEz-2FzO-N64t-HtU3-2Z8P-UQydU4 --- Physical Segments --- Physical extent 0 to 118849: Logical volume /dev/ubuntu-vg/ubuntu-lv Logical extents 0 to 118849 --- Physical volume --- PV Name /dev/sdb1 VG Name ubuntu-vg PV Size 931.51 GiB / not usable 4.69 MiB Allocatable NO PE Size 4.00 MiB Total PE 238466 Free PE 0 Allocated PE 238466 PV UUID Sp6b1v-nOj2-XXdb-GZYf-1Vej-cfdr-qLB3GU --- Physical Segments --- Physical extent 0 to 238465: Logical volume /dev/ubuntu-vg/ubuntu-lv Logical extents 118850 to 357315 EDIT Output of lsblk -fNAME FSTYPE LABEL UUID MOUNTPOINTloop0 squashfs /snap/core/9066loop2 squashfs /snap/core/9289sda├─sda1 vfat E6CC-2695 /boot/efi├─sda2 ext4 0909ad53-d6a7-48c7-b998-ac36c8f629b7 /boot└─sda3 LVM2_membe DA7Q8E-zJEz-2FzO-N64t-HtU3-2Z8P-UQydU4 └─ubuntu--vg-ubuntu--lv ext4 b64f2bf4-cd6c-4c21-9009-76faa2627a6b /sdb└─sdb1 LVM2_membe Sp6b1v-nOj2-XXdb-GZYf-1Vej-cfdr-qLB3GU └─ubuntu--vg-ubuntu--lv ext4 b64f2bf4-cd6c-4c21-9009-76faa2627a6b /sdc xfs 1a9d0e4e-5cec-49f3-9634-37021f65da38 /gluster/bricks/2 sdc above is a different drive - and not related.
Since the filesystem you'll need the disk removed from is your root filesystem, and the filesystem type is ext4 , you'll have to boot the system from some live Linux boot media first. Ubuntu Live would probably work just fine for this. Once booted from the external media, run

sudo vgchange -ay ubuntu-vg

to activate the volume group so that you'll be able to access the LVs, but don't mount the filesystem: ext2/3/4 filesystems need to be unmounted for shrinking. Then shrink the filesystem to 10G (or whatever size you wish - it can easily be extended again later, even on-line):

sudo resize2fs /dev/mapper/ubuntu--vg-ubuntu--lv 10G

Pay attention to the messages output by resize2fs - if it says the filesystem cannot be shrunk that far, specify a bigger size and try again. This is the only step that needs to be done while booted on the external media; for everything after this point, you can boot the system normally. At this point, the filesystem should have been shrunk to 10G (or whatever size you specified). The next step is to shrink the LV. It is vitally important that the new size of the LV should be exactly the same or greater than the new size of the filesystem! You don't want to cut off the tail end of the filesystem when shrinking the LV. It's safest to specify a slightly bigger size here:

sudo lvreduce -L 15G /dev/mapper/ubuntu--vg-ubuntu--lv

Now, use pvdisplay or pvs to see if LVM now considers /dev/sdb1 totally free or not. In pvdisplay , the Total PE and Free PE values for sdb1 should be equal - in pvs output, the PFree value should equal PSize respectively. If this is not the case, then it will be time to use pvmove :

sudo pvmove /dev/sdb1

After this, the sdb1 PV should definitely be totally free according to LVM and it can be reduced out of the VG.

sudo vgreduce ubuntu-vg /dev/sdb1

If you wish, you can then remove the LVM signature from the ex-PV:

sudo pvremove /dev/sdb1

But if you are going to overwrite it anyway, you can omit this step. After these steps, the shrunken filesystem will still be sized at 10G (or whatever you specified) even though the LV might be somewhat bigger than that. To fix that:

sudo resize2fs /dev/mapper/ubuntu--vg-ubuntu--lv

When extending a filesystem, you don't have to specify a size: the tool will automatically extend the filesystem to match the exact size of the innermost device containing it. In this case, the filesystem will be sized according to the size of the LV. Later, if you wish to extend the LV+filesystem, you can do it with just two commands:

sudo lvextend -L <new size> /dev/mapper/ubuntu--vg-ubuntu--lv
sudo resize2fs /dev/mapper/ubuntu--vg-ubuntu--lv

You can do this even while the filesystem is in use and mounted. Because shrinking a filesystem is harder than extending it, it might be useful to hold some amount of unallocated space in reserve at the LVM level - you will be able to use it at a moment's notice to create new LVs and/or to extend existing LVs in the same VG as needed.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/591389", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/416680/" ] }
591,393
My nvidia-smi output is as follows COVID19_002_6LU7_Protease_Top_3/ni_fda130/fda130_fix$ nvidia-smiSun Jun 7 15:00:30 2020 +-----------------------------------------------------------------------------+| NVIDIA-SMI 440.33.01 Driver Version: 440.33.01 CUDA Version: 10.2 ||-------------------------------+----------------------+----------------------+| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC || Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. ||===============================+======================+======================|| 0 Quadro K620 On | 00000000:02:00.0 On | N/A || 63% 73C P0 19W / 30W | 1253MiB / 1994MiB | 98% Default |+-------------------------------+----------------------+----------------------++-----------------------------------------------------------------------------+| Processes: GPU Memory || GPU PID Type Process name Usage ||=============================================================================|| 0 1406 G /usr/lib/xorg/Xorg 12MiB || 0 2006 G /usr/lib/xorg/Xorg 193MiB || 0 2186 G /usr/bin/gnome-shell 370MiB || 0 3007 G ...AAAAAAAAAAAACAAAAAAAAAA= --shared-files 400MiB || 0 9680 G /opt/teamviewer/tv_bin/TeamViewer 10MiB || 0 14270 G /usr/lib/rstudio/bin/rstudio 56MiB || 0 14961 G /usr/lib/rstudio/bin/rstudio 61MiB || 0 22725 G ...passed-by-fd --v8-snapshot-passed-by-fd 4MiB || 0 23617 C gmx 74MiB |+-----------------------------------------------------------------------------+ gmx is molecular dynamics simulation and is my primary process. I am not aware of some processes especially ...AAAAAAAAAAAACAAAAAAAAAA= --shared-files . What is it? and how to prevent it from running in GPU. Can I also shift /usr/bin/gnome-shell to CPU usage rather than GPU usage? I came across one such question. But it is unanswered. I also found one more thread on this topic. But it is essentially not fully answered.
Your GPU is being used for both display and compute processes; you can see which is which by looking at the “Type” column — “G” means that the process is a graphics process (using the GPU for its display), “C” means that the process is a compute process (using the GPU for computation). To move a type “G” process of the GPU, you need to stop it from displaying on the GPU, which will involve stopping the process and (if appropriate) starting it on another GPU for display purposes. As far as the ...AAAAAAAAAAAACAAAAAAAAAA= --shared-files process is concerned, you’ll have to look for it using ps to determine what it is.
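For example, to see what the unidentified entry actually is, look up its PID (3007 in the output above) with ps:

    ps -p 3007 -o pid,user,args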
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/591393", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/412078/" ] }
591,399
I'm running Debian Buster on my Thinkpad T480 ( link to manufacturer ) $ lsb_release -aNo LSB modules are available.Distributor ID: DebianDescription: Debian GNU/Linux 10 (buster)Release: 10Codename: buster I got from my office a Thinkpad docking station for my wokring computer, but would also like to use it for my Linux one (private computer). It is the following Docking station: Lenovo USB-C Hybrid Dock 135W. I searched around and tried to disable secure boot in the Thunderbolt BIOS setup. However, it still doesn't recognize my external monitors, nor USB devices. It gets charged at least. Does someone know how to make it run on Debian? I've set the Thunderbold to No Security in the Bios, without any success.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/591399", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/18180/" ] }
591,483
To begin, here is an example of a filename for a daily backup file: website-db-backup06-June-2020.tar.gz The script below works fine when run manually via the terminal. However, I am getting this cron daemon message on my email when the script is run via cron: tar: website-db-backup*: Cannot stat: No such file or directorytar: Exiting with failure status due to previous errors Here is a script I made to compress all daily backups every week: #!/bin/bash## Weekly compression for database backupsBACKUP_PATH=~/backup/web/databaseBACKUP_FILE_DATE=`date '+%d-%B-%Y'`tar -czf $BACKUP_PATH/website-db-weekly-compress$BACKUP_FILE_DATE.tar.gz \ -C $BACKUP_PATH/ website-db-backup* && rm $BACKUP_PATH/website-db-backup* Since the daily backup has date on the filename, I have to use * on the script. Can this be the reason?
The problem is the current working directory of the script. website-db-backup* does not have a path, so it is expanded relative to the current directory. You must add something like this to your script:
SOURCE_DIR_PATH='/path/to/backup_source'
cd "$SOURCE_DIR_PATH" || exit 1
In addition you should check whether there are any matching files at all before you execute tar:
shopt -s nullglob
set -- website-db-backup*
test $# -eq 0 && { echo 'ERROR: No matching files; aborting'; exit 1; }
It may not be a problem in this case, but as danielleontiev points out in the comment, it is dangerous to use ~ in a script if this script may be executed by different users. I suggest you replace it with the intended path.
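Putting those pieces together, a minimal sketch of the corrected script (the backup path is a placeholder; substitute your real, absolute path):
#!/bin/bash
# Weekly compression for database backups, safe to run from cron
BACKUP_PATH='/home/youruser/backup/web/database'   # absolute path instead of ~
BACKUP_FILE_DATE=$(date '+%d-%B-%Y')
cd "$BACKUP_PATH" || exit 1
shopt -s nullglob
set -- website-db-backup*
test $# -eq 0 && { echo 'ERROR: No matching files; aborting'; exit 1; }
tar -czf "website-db-weekly-compress$BACKUP_FILE_DATE.tar.gz" "$@" && rm -- "$@"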
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/591483", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/175353/" ] }
591,547
I have a network drive hosted on a Windows10 Machine, it mounts fine to my CentOS7 machine through the command: sudo mount -t cifs //ipaddress/sharedfoldername /mountpoint --verbose -o credentials:/credential/file/location,file_mode=0666,dir_mode=0777 The file and dir modes are for the permissions on the mount. Anyway, that mounts fine, but when I try to do an /etc/fstab mount, I get an error back. I will supply my entire fstab file contents and the exact error below. The error appears on startup, it boots to emergency mode and shows the error and gives me the option to use CTRL + D to continue. The fstab mount I am trying to get to work is: //ipaddress/sharedfoldername /mnt cifs credentials=/etc/smbcredentials,uid=1001,gid=1001,_netdev 0 0 My /etc/fstab contents: ## /etc/fstab# Created by anaconda on Thu Dec 13 09:33:55 2018## Accessible filesystems, by reference, are maintained under '/dev/disk'# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info#UUID=4f3871fe-a798-4d51-ad90-c40b095a2bd0 / ext4 defaults 1 1UUID=1bb03b6d-3a76-4979-aa63-ff3e0eb4cc5f /boot ext4 defaults 1 2UUID=f89fdb96-6dbf-4865-aa6b-1d5cc74f2d48 /home ext4 defaults 1 2UUID=86f38c73-f9e0-490b-8c96-3321f9413c0d swap swap defaults 0 0//ipaddress/sharedfoldername /mnt cifs credentials=/etc/smbcredentials,uid=1001,gid=1001,_netdev 0 0 The error appears on startup and you can find it below: You're looking at the CIFS bit, the bad mount option huge needs sorting anyway, that was there before the fstab cifs mount. Thanks @telcoM's answer response I rebooted and get the following error on startup: Then when I login after seeing the error, I get a shortcut appear in the left of my file browser, when I click it, I get this error: Unable to mount 'shared-folder-name', mount: only root can mount //ipaddress/sharedfoldername on /mountpoint MY FSTAB FILE AFTER @TELCOM'S SUGGESTION ## /etc/fstab# Created by anaconda on Tue Dec 11 14:28:31 2018## Accessible filesystems, by reference, are maintained under '/dev/disk'# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info#UUID=4d48ab0d-e1ab-4d7e-9f64-8481a7690060 / ext4 defaults 1 1UUID=a7fad550-81d7-4150-8b76-e89584e4cfdf /boot ext4 defaults 1 2UUID=0baabbc4-2dc0-4971-9d2b-c123e5ad7355 /home ext4 defaults 1 2UUID=7756eafb-382c-46b3-aae8-e44d7e2cfe06 swap swap defaults 0 0#//ipadress/sharedfoldername /mount/location cifs x-systemd.after=network-online.target,credentials=/credentials/location,vers=3.0,file_mode=0666,dir_mode=0777,uid=1001,gid=1001 0 0
The tmpfs: Bad mount option huge turns out to be a kernel bug: see this link. The "Error connecting to a socket" means the system is trying to mount the Windows share before network interfaces have been fully enabled. It should not be happening, but you could add a new systemd-style mount option to be explicit about it: x-systemd.after=network-online.target . The _netdev option used to be an old way to do the same, but apparently it does not work any more after CentOS moved to systemd in version 7.0. As I wrote in my answer to your earlier question, if you want everyone to be able to access the share, you'll need to supply the mount options file_mode=0666,dir_mode=0777 . And if you do this, then the uid=1001,gid=1001 options will probably be unnecessary, but you can still use them if you want. And to silence an ugly warning about a changed default version of the SMB protocol (since the aftermath of the WannaCry ransomware infestation back in May 2017), you'll want to add the vers=3.0 mount option too, if the share is provided by a reasonably modern version of Windows. So, the /etc/fstab entry should probably be like this (all on one line):
//ipaddress/sharedfoldername /mnt cifs x-systemd.after=network-online.target,credentials=/etc/smbcredentials,vers=3.0,file_mode=0666,dir_mode=0777,uid=1001,gid=1001 0 0
An fstab entry should always have exactly 6 fields separated by whitespace - no more and no less.
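After saving the entry you can test it without rebooting, for example:
sudo systemctl daemon-reload
sudo mount -a
and then mount | grep cifs to confirm the share is mounted. This assumes /etc/smbcredentials already exists and, ideally, is readable only by root (chmod 600 /etc/smbcredentials).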
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/591547", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/414670/" ] }
591,832
I have a file which contains records like these:
qwe.google.com IN A 1.1.1.1
qwe.google.com IN {uneven-space} A{space}1.1.1.1
qwe.google.com IN CNAME asd.google.com
I need to grep all the lines which contain IN A. As the lines contain uneven spacing in between, I am not able to do so.
You can use
grep -E "IN[[:blank:]]+A" file
[[:blank:]] matches spaces and tabs, similar to [ \t]
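If the file could also contain other record types beginning with A (AAAA records, for instance; that is an assumption about your data, not something shown in the sample), you can anchor the type so that only A records match:
grep -E 'IN[[:blank:]]+A([[:blank:]]|$)' file
For the sample lines shown, the simpler pattern above is already enough.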
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/591832", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/416391/" ] }
591,839
I have a shell script that needs to delete all of the files in a directory that start with a number. This file set has grown to contain hundreds of thousands of files that need to be deleted each day. The script contains the following lines: rm -f /my/dir/11*rm -f /my/dir/12*(( etc ))rm -f /my/dir/1*rm -f /my/dir/2* And I get the error message for every line line 1: /usr/bin/rm: Argument list too long I tried to replace the lines with ls -d /my/dir/11* | xargs rm but ls -d gives me the same error message. How can I delete these files without growing the list to contain hundreds of filename permutations?
If you want to get a relative path and pass it on to rm, you can use the find command; for your use case I'd run:
find /my/dir -iname '[0-9]*' -type f
That would return every file that starts with a number. If that list is what you want to delete, have find delete them using -delete:
find /my/dir -iname '[0-9]*' -type f -delete
Good luck!
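If you would rather keep rm itself (for example to reuse the existing script), the usual fix for "Argument list too long" is to stream the file names to rm in batches instead of expanding them all on one command line. A sketch, assuming GNU find and xargs:
find /my/dir -iname '[0-9]*' -type f -print0 | xargs -0 rm -f
The -print0 / -0 pair keeps file names containing spaces or newlines safe.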
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/591839", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/417010/" ] }
591,952
I am trying to use the following dd command to clone a remote xfs disk to another location:
ssh root@source_ip "dd if=/dev/vda2 bs=16M conv=noerror,sync status=progress " | sudo dd of=/dev/nvme1n1 bs=16M conv=noerror,sync status=progress
However, I got the following error message:
828895133696 bytes (829 GB) copied, 39610.432258 s, 20.9 MB/s
1288490188800 bytes (1.3 TB) copied, 64529.849170 s, 20.0 MB/s
dd: error writing '/dev/nvme1n1': No space left on device
The source disk has 1TB and the target disk has 1.2TB. Can anyone help to explain why I can't perform the disk-to-disk clone in this situation? Thanks! I want to try to recover the deleted files from the source disk, and I'm not sure whether dd is the only right tool for this situation.
Compare this answer (note bs=1K is used there): dd is an old, cranky program initially intended to operate on tape devices. When you tell it to read one block of 1kB, it attempts to read one block. If the read returns less than 1024 bytes, tough, that's all you get. conv=sync gets important when dd indeed reads less than it expected. In your case any time the pipe from ssh fails to deliver a full 16M block (because of bs=16M ) to the local dd in one read , the conv=sync of the latter will fill the "missing" data with zeros (NUL bytes). But the real data is not missing. What the local dd considers missing will be delivered when it tries to read the next block. In effect the local dd injects zeros in random(-ish) places of the stream. In case it's not obvious I emphasize: this corrupts the data, the resulting "clone" will be virtually useless . Using conv=noerror,sync even on the remote (reading) side may be wrong. Compare this answer: What does dd conv=sync,noerror do? You need to really know how it works to use it properly. For the local dd conv=noerror,sync you used makes little sense. Drop it or add iflag=fullblock (if supported). Frankly I'm not sure if conv=noerror,sync iflag=fullblock has any advantage over not using these options at all in a case when dd reads from a pipe.
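For what it is worth, a sketch of the same transfer without the problematic options (untested; device names taken from the question):
ssh root@source_ip "dd if=/dev/vda2 bs=16M status=progress" | sudo dd of=/dev/nvme1n1 bs=16M iflag=fullblock status=progress
iflag=fullblock (where supported) makes the local dd keep reading until it has a full 16M block from the pipe instead of padding short reads, and dropping conv=noerror,sync on both sides avoids the zero-fill corruption described above.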
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/591952", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/417113/" ] }
591,957
I have a CSV file with data like this (almost 100000 lines). I need to count how many times does a specific word with a specific date appears in the file for a range of dates. For example from 15/03/2020 to 16/04/2020 I need to count how many times the word "Sinaloa" appears which means count all "Sinaloa" with 15/03/2020, 16/03/2020, 17/03/2020, ... , 16/04/2020 I tried using grep but it only counts the first and last date. Edit: Let's take "Ciudad de Mexico" and a date such as 25/04/2020. I need to count all the "Ciudad de Mexico" from 15/03/2020 to 25/04/2020. In this case the desire output is 5. But the problem is that the final date in this case is an input from the user, so it if changes to 01/05/2020 the output should be 6. "167386","Baja California","F",54,"01/04/2020","confirmado""0d01b5","Sinaloa","F",60,"13/04/2020","confirmado""1beec8","Ciudad de México","M",47,"16/04/2020","confirmado""15fcd7","Ciudad de México","M",46,"16/04/2020","confirmado""0a5675","Sinaloa","F",34,"19/05/2020","confirmado""0e9e95","Ciudad de México","F",31,"25/04/2020","confirmado""07fa63","Ciudad de México","M",37,"01/05/2020","confirmado""0693ef","Ciudad de México","F",48,"20/03/2020","confirmado""19afc8","Baja California","F",45,"06/04/2020","confirmado""093740","Baja California","M",81,"19/04/2020","confirmado""1b3c74","México","M",57,"16/04/2020","confirmado""025cb1","Baja California","M",51,"29/04/2020","confirmado""15764f","México","M",73,"05/05/2020","confirmado""07c084","Tabasco","F",52,"23/04/2020","confirmado""1b9e29","Ciudad de México","F",47,"11/04/2020","confirmado"
Compare this answer (note bs=1K is used there): dd is an old, cranky program initially intended to operate on tape devices. When you tell it to read one block of 1kB, it attempts to read one block. If the read returns less than 1024 bytes, tough, that's all you get. conv=sync gets important when dd indeed reads less than it expected. In your case any time the pipe from ssh fails to deliver a full 16M block (because of bs=16M ) to the local dd in one read , the conv=sync of the latter will fill the "missing" data with zeros (NUL bytes). But the real data is not missing. What the local dd considers missing will be delivered when it tries to read the next block. In effect the local dd injects zeros in random(-ish) places of the stream. In case it's not obvious I emphasize: this corrupts the data, the resulting "clone" will be virtually useless . Using conv=noerror,sync even on the remote (reading) side may be wrong. Compare this answer: What does dd conv=sync,noerror do? You need to really know how it works to use it properly. For the local dd conv=noerror,sync you used makes little sense. Drop it or add iflag=fullblock (if supported). Frankly I'm not sure if conv=noerror,sync iflag=fullblock has any advantage over not using these options at all in a case when dd reads from a pipe.
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/591957", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/417119/" ] }
592,057
I have a file with lines like the following: TsM_000477300_transcript_id_TsM_000477300_gene_id_TsM_000477300,extr 29TsM_000541200_transcript_id_TsM_000541200_gene_id_TsM_000541200,extr 9,plas 7,mito 6.5,cyto_mito 4,E.R. 3,lyso 3,golg 3,E.R._golg 3TsM_000020400_transcript_id_TsM_000020400_gene_id_TsM_000020400,extr 28,cyto 1,E.R. 1,pero 1,lyso 1,cyto_pero 1TsM_000268600_transcript_id_TsM_000268600_gene_id_TsM_000268600,extr 13,plas 7,E.R. 5,lyso 3,golg 2TsM_000533800_transcript_id_TsM_000533800_gene_id_TsM_000533800,extr 31TsM_000208300_transcript_id_TsM_000208300_gene_id_TsM_000208300,extr 19,pero 5,lyso 4,plas 2,E.R. 2TsM_000379500_transcript_id_TsM_000379500_gene_id_TsM_000379500,extr 15,golg 12,lyso 3TsM_000882200_transcript_id_TsM_000882200_gene_id_TsM_000882200,extr 32TsM_001173700_transcript_id_TsM_001173700_gene_id_TsM_001173700,extr 31 The output I want is this one: TsM_000477300,extr 29TsM_000541200,extr 9,plas 7,mito 6.5,cyto_mito 4,E.R. 3,lyso 3,golg 3,E.R._golg 3TsM_000020400,extr 28,cyto 1,E.R. 1,pero 1,lyso 1,cyto_pero 1TsM_000268600,extr 13,plas 7,E.R. 5,lyso 3,golg 2TsM_000533800,extr 31TsM_000208300,extr 19,pero 5,lyso 4,plas 2,E.R. 2TsM_000379500,extr 15,golg 12,lyso 3TsM_000882200,extr 32TsM_001173700,extr 31 I've used sed -E 's/(^.+)_transcript_id_.+.,(.*$)/\1,\2/' But I can't get what I want. Here is my output: TsM_000477300,extr 29TsM_000541200,E.R._golg 3TsM_000020400,cyto_pero 1TsM_000268600,golg 2TsM_000533800,extr 31TsM_000208300,E.R. 2TsM_000379500,lyso 3TsM_000882200,extr 32TsM_001173700,extr 31 I've tried some variations but got no success and I don't no why.
The problem is that .+., greedily matches everything up to and including the last , on the line. You could modify that to [^,]+., or just [^,]+, to simulate non-greediness in a CSV context. However you likely can do something much simpler, for example:
$ sed 's/_transcript_id_[^,]*//' file
TsM_000477300,extr 29
TsM_000541200,extr 9,plas 7,mito 6.5,cyto_mito 4,E.R. 3,lyso 3,golg 3,E.R._golg 3
TsM_000020400,extr 28,cyto 1,E.R. 1,pero 1,lyso 1,cyto_pero 1
TsM_000268600,extr 13,plas 7,E.R. 5,lyso 3,golg 2
TsM_000533800,extr 31
TsM_000208300,extr 19,pero 5,lyso 4,plas 2,E.R. 2
TsM_000379500,extr 15,golg 12,lyso 3
TsM_000882200,extr 32
TsM_001173700,extr 31
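If you want to keep the shape of your original command, the same idea applied there would look something like this (a sketch, checked only against the sample lines above):
sed -E 's/(^.+)_transcript_id_[^,]*,(.*$)/\1,\2/' file
Replacing .+., with [^,]*, stops the match at the first comma after the ID block rather than at the last comma on the line.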
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/592057", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/411696/" ] }
592,068
I am wondering how to compress .cbr and .cbz files very tightly to decrease the file size.
I am wondering how to compress .cbr and .cbz files
cbr and cbz stand for Comic Book Rar and Comic Book Zip. Those files are basically JPEGs stored in an archive with a custom extension. Since they are already compressed in an archive, you cannot reduce their sizes by archive compression. What you can do instead is extract the images from the archive with 7z e file.cbz for example, and process the JPEGs to reduce the image quality (and size) with the convert command from ImageMagick. Finally, you can rebuild your archive with the 7z utility or the zip command (or alternatively the rar command if you want .cbr files). You just need to rename your .zip files to .cbz (or your .rar files to .cbr).
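A rough sketch of the whole round trip (file names and the quality value are placeholders, and this assumes the pages are JPEGs, as is typical for comic archives):
mkdir pages && cd pages
7z e ../comic.cbz                                         # extract the pages
for f in *.jpg; do convert "$f" -quality 75 "$f"; done    # re-encode each page at lower quality
zip ../comic-small.cbz *.jpg                              # zip accepts any output name, including .cbz
Lower -quality values give smaller files at the cost of visible compression artifacts.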
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/592068", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/387514/" ] }
592,511
For moving all files in sub folders into the current folder I use this script while read fdo mv "$f" .done < file_list this works great but i have to generate the file_list with find . -name *.avi > file_list what i want is to add the command directly to my while loop while read fdo mv "$f" .done < find . -name *.avi but bash tells me -bash: syntax error near unexpected token `.' What is an easy solution to pipe the find command into my while loop?
The proper way to do this is with a pipe:
find . -name '*.avi' |
while read f
do
 mv "$f" .
done
The result from the first command, find . -name '*.avi', feeds the second. This is called a pipe (from the | symbol). You can think of a pipe as a temporary file, like:
find . -name '*.avi' > file1.tmp
while read f
do
 mv "$f" .
done < file1.tmp
rm file1.tmp
As pointed out, filenames with spaces or newlines are likely to cause errors. If the only purpose is to move files, use @Terdon's commands. Also, you must quote *.avi or it will break on the second run:
find: paths must precede expression: `foo.avi'
find: possible unquoted pattern after predicate `-name'?
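If the goal is only to move the files (rather than to learn the while-loop idiom), a whitespace-safe one-liner is also possible, for example with GNU mv:
find . -name '*.avi' -exec mv -t . {} +
Here -t . names the target directory up front so that find can append many file names to a single mv invocation.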
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/592511", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/294056/" ] }
592,542
Recently I noticed that the rtkit-daemon process running on my Debian 10.4 machine may be interferring with a custom server application on that machine. So I wanted to disable that realtime kit daemon like this: $ sudo systemctl stop rtkit-daemon.service$ sudo systemctl disable rtkit-daemon.service That correctly stopped (and disabled) the rtkit-daemon process. However, after a while I noticed that it is running again, and in syslog I found the following lines: Jun 12 16:15:12 box-63 dbus-daemon[453]: [system] Activating via systemd: service name='org.freedesktop.RealtimeKit1' unit='rtkit-daemon.service' requested by ':1.6746' (uid=1000 pid=11857 comm="python pipecheck.py")Jun 12 16:15:12 box-63 systemd[1]: Starting RealtimeKit Scheduling Policy Service...Jun 12 16:15:12 box-63 dbus-daemon[453]: [system] Successfully activated service 'org.freedesktop.RealtimeKit1'Jun 12 16:15:12 box-63 systemd[1]: Started RealtimeKit Scheduling Policy Service. That python pipecheck.py is our custom application. Why does D-Bus want to start the realtime kit for our application in the first place? Anyway, so apparently the dbus-daemon has restarted the rtkit-daemon . How can i prevent that and disable the realtime kit daemon for good?
Welcome to Desktop Bus bus activation! It is a pain, and should be avoided. A Desktop Bus client (of some kind) is asking the Desktop Bus broker for communication with the org.freedesktop.RealtimeKit1 D-Bus server.When the rtkit-daemon program is running, it registers itself with the D-Bus broker as this name.When it is not running, the broker invokes D-Bus bus activation . When a client asks the D-Bus broker this in such circumstances, the broker looks at what is specfied in a org.freedesktop.RealtimeKit1.service file.(This is not a systemd service unit file, but a D-Bus configuration file living somewhere under /usr/{local/,}share/dbus-1/system-services/ , D-Bus also using the .service extension.)The broker learns from this file that on only systemd systems the org.freedesktop.RealtimeKit1 server is managed as a rtkit-daemon.service systemd service. The broker speaks to systemd over the same Desktop Bus, using an idiosyncratic and undocumented org.freedesktop.systemd1.Activator D-Bus service name, asking it to activate the rtkit-daemon.service systemd service with an ActivationRequest message.This undocumented activation function has no notion of only activating the service if enabled.It always activates the service, even if it is disabled. So, ironically, you cannot disable rtkit-daemon using the obvious method of disabling it, the disable command.You can instead: … override the org.freedesktop.RealtimeKit1.service file with one of your own that contains no ways to activate the org.freedesktop.RealtimeKit1 D-Bus server; … find out what D-Bus client is asking for org.freedesktop.RealtimeKit1 , with busctl monitor , and stop it; or … mask the rtkit-daemon.service systemd service, so that systemd itself does not know how to activate it even if the Desktop Bus broker requests its activation via the idiosyncratic and undocumented API. This is one reason that my system-control has a reset verb (which starts/stops a service according to its current enable/disable state) and my replacement dbus-daemon-launch-helper invokes system-control reset rather than system-control activate . Further reading Jonathan de Boyne Pollard (2016). Avoid Desktop Bus (D-Bus) bus activation . nosh. Softwares. Jonathan de Boyne Pollard. " reset ". system-control . nosh Guide . Jonathan de Boyne Pollard. dbus-daemon-launch-helper . nosh Guide .
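As a practical footnote to the options above (these are stock systemd and D-Bus commands, not anything specific to this answer): the masking route is
sudo systemctl mask rtkit-daemon.service
sudo systemctl stop rtkit-daemon.service
with sudo systemctl unmask rtkit-daemon.service to undo it later; and for tracking down the requesting client, running sudo busctl monitor and watching for messages that mention org.freedesktop.RealtimeKit1 is one way to see which process keeps triggering the activation.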
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/592542", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/59995/" ] }
592,543
I have data like this { "dateCreated": "2020-06-12", "status": "pending", "depositDate": "2020-06-15", "amount": 41237}{ "dateCreated": "2020-06-05", "status": "paid", "depositDate": "2020-06-08", "amount": 37839}{ "dateCreated": "2020-04-02", "status": "paid", "depositDate": "2020-04-03", "amount": 67} that's being formatted with jq like so: request-some-api | jq '.data[] | {dateCreated: .created | strftime("%Y-%m-%d"), status: .status, depositDate: .arrival_date | strftime("%Y-%m-%d"), amount: .amount,}' and I'd like to modify the .amount so that it displays values rather than the number of cents as a dollar amount with a decimal place... { "dateCreated": "2020-06-12", "status": "pending", "depositDate": "2020-06-15", "amount": $412.37}{ "dateCreated": "2020-06-05", "status": "paid", "depositDate": "2020-06-08", "amount": $378.39}{ "dateCreated": "2020-04-02", "status": "paid", "depositDate": "2020-04-03", "amount": $.67} ...but I haven't found any documentation about this? Is jq able to do this conversion? Even without the $ sign but just adding the . between dollars and cents would be helpful.
Like this:
jq '.amount = "$" + (.amount/100|tostring)' file.json
Output:
{
  "dateCreated": "2020-06-12",
  "status": "pending",
  "depositDate": "2020-06-15",
  "amount": "$412.37"
}
{
  "dateCreated": "2020-06-05",
  "status": "paid",
  "depositDate": "2020-06-08",
  "amount": "$378.39"
}
{
  "dateCreated": "2020-04-02",
  "status": "paid",
  "depositDate": "2020-04-03",
  "amount": "$0.67"
}
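If you want to fold this into the filter from your question, only the amount value changes; something along these lines (untested against your real API output):
amount: ("$" + (.amount / 100 | tostring))
The parentheses around the whole value keep jq from misparsing the pipe inside the object constructor.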
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/592543", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/115900/" ] }
592,553
I have bought a VPS service that is claimed to give 4 cores wuth total 6GHz power. I have installed CentOS 8 on this VPS and want to check if I have that power of CPU or not? This is the info I do get whit lscpu command: Architecture: x86_64CPU op-mode(s): 32-bit, 64-bitByte Order: Little EndianCPU(s): 4On-line CPU(s) list: 0-3Thread(s) per core: 1Core(s) per socket: 4Socket(s): 1NUMA node(s): 1Vendor ID: GenuineIntelCPU family: 6Model: 63Model name: Intel(R) Xeon(R) CPU E5-1650 v3 @ 3.50GHzStepping: 2CPU MHz: 3491.914BogoMIPS: 6983.82Hypervisor vendor: KVMVirtualization type: fullL1d cache: 32KL1i cache: 32KL2 cache: 4096KL3 cache: 16384KNUMA node0 CPU(s): 0-3
Like this: jq '.amount = "$" + (.amount/100|tostring)' file.json Output { "dateCreated": "2020-06-12", "status": "pending", "depositDate": "2020-06-15", "amount": "$412.37"}{ "dateCreated": "2020-06-05", "status": "paid", "depositDate": "2020-06-08", "amount": "$378.39"}{ "dateCreated": "2020-04-02", "status": "paid", "depositDate": "2020-04-03", "amount": "$0.67"}
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/592553", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/191641/" ] }
592,574
I’ve installed black arch “from live iso” and it was installed successfullyBut after reboot it stuck in black screen starting version 245.5-2-archERROR: device ‘uuid=xxxxxxxxxx‘ not found skipping fsck. mount: /new_root: can’t find UUID=xxxxxxx. You are now being dropped into an emergency shell. sh: can’t access tty; job control turned off[rootfs ]# By the way I think this picture may help to fix How can I fix this error ?
Like this: jq '.amount = "$" + (.amount/100|tostring)' file.json Output { "dateCreated": "2020-06-12", "status": "pending", "depositDate": "2020-06-15", "amount": "$412.37"}{ "dateCreated": "2020-06-05", "status": "paid", "depositDate": "2020-06-08", "amount": "$378.39"}{ "dateCreated": "2020-04-02", "status": "paid", "depositDate": "2020-04-03", "amount": "$0.67"}
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/592574", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/415143/" ] }
592,657
I've just switched to bullseye (see sources below) deb http://deb.debian.org/debian/ testing main contrib non-freedeb-src http://deb.debian.org/debian/ testing main contrib non-freedeb http://deb.debian.org/debian/ testing-updates main contrib non-freedeb-src http://deb.debian.org/debian/ testing-updates main contrib non-freedeb http://deb.debian.org/debian-security testing-security maindeb-src http://deb.debian.org/debian-security testing-security maindeb http://security.debian.org testing-security main contrib non-freedeb-src http://security.debian.org testing-security main contrib non-free The update and upgrade went fine, but full-upgrade fails due to the following error message: The following packages have unmet dependencies: libc6-dev : Breaks: libgcc-8-dev (< 8.4.0-2~) but 8.3.0-6 is to be installedE: Error, pkgProblemResolver::Resolve generated breaks, this may be caused by held packages. From what I see on the packages.debian.org, Debian testing should have libgcc-8-dev: 8.4.0-4 , so I don't see why an older version is to be installed. How can I fix this, to finalize the bullseye full-upgrade?
Installing gcc-8-base ( sudo apt install gcc-8-base ) appeared to do the trick and fixed the problem for me.
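After that, re-running the upgrade should get further, e.g. sudo apt update && sudo apt full-upgrade (just the standard commands, nothing specific to this bug).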
{ "score": 7, "source": [ "https://unix.stackexchange.com/questions/592657", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/137596/" ] }
592,692
I'd like to rename the recon.text file with its directory name. I have 1000 directories. Some help, please? 7_S4_R1_001_tri10_sha/recon.text8_S1_R1_001_tri15_sha/recon.text9_S8_R1_001_tri20_sha/recon.text10_S5_R1_001_tri25_sha/recon.text11_S3_R1_001_tri30_sha/recon.text
Installing gcc-8-base ( sudo apt install gcc-8-base ) appeared to do the trick for me and fix the problem for me.
{ "score": 7, "source": [ "https://unix.stackexchange.com/questions/592692", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/311001/" ] }
592,694
I have a Medion Akoya P6687 notebook, and I have started to use GNU/Linux in it a year and a half. I have always had problems with linux kernels, in fact only 4.19 version worked well for me. I have used other 4.x versions but they didn't work, but I'm not sure if it was because of ACPI errors. I'm stuck with linux-4.19 kernel version because other recent versions (all of the 5.x kernel versions I have tested) give me the same ACPI errors when booting. This is specifically taken from Debian and 5.6.0-2-amd64 version but Arch gives the same results. [ 30.441861] APCI Error: Aborting method \_SB.PCI0.LPCB.H_EC.ECMD due to previous error (AE_AML_LOOP_TIMEOUT) (20200110/psparse-529)[ 30.441872] APCI Error: Aborting method \_TZ.FNCL due to previous error (AE_AML_LOOP_TIMEOUT) (20200110/psparse-529)[ 30.441879] APCI Error: Aborting method \_TZ.FN00._OFF due to previous error (AE_AML_LOOP_TIMEOUT) (20200110/psparse-529)[ 30.441886] APCI Error: Aborting method \_SB.PCI0.LPCB.H_EC._REG due to previous error (AE_AML_LOOP_TIMEOUT) (20200110/psparse-529)[ 31.696214] thermal thermal_zone1: critical temperature reached (128 C), shut[ 31.948073] thermal thermal_zone1: critical temperature reached (128 C), shut[ 61.971231] APCI Error: Aborting method \_SB.PCI0.LPCB.H_EC.ECMD due to previous error (AE_AML_LOOP_TIMEOUT) (20200110/psparse-529)[ 61.971395] APCI Error: Aborting method \_TZ.FNCL due to previous error (AE_AML_LOOP_TIMEOUT) (20200110/psparse-529)[ 61.971509] APCI Error: Aborting method \_TZ.FN00._ON due to previous error (AE_AML_LOOP_TIMEOUT) (20200110/psparse-529)... (similar messages appear every 30 seconds) (posted and screenshot here ) I have tested several distros (Arch, Debian and Void Linux) and the situation is the same: kernel 4.19 works (I currently use debian with 4.19 and I tried to boot an old arch .iso with that kernel and it boots with no problems), but recent kernel versions (5.x) don't, they have the problems above with the ACPI. I can also add that, if I use the acpi=off flag, the notebook boots, but the battery and the touchpad are not detected, and in the most recent arch .iso the keyboard is not detected also. I also have updated the BIOS to the last version but the errors persist, and I don't know what can I do to fix it. If anyone can help me to find a solution I will be very grateful. Thanks. And sorry if my english is not very good.
Installing gcc-8-base ( sudo apt install gcc-8-base ) appeared to do the trick for me and fix the problem for me.
{ "score": 7, "source": [ "https://unix.stackexchange.com/questions/592694", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/417776/" ] }