source_id | question | response | metadata
---|---|---|---|
554,595 | How to iterate file which has comma separated by value ? I tried the following: $ cat file | tr ',' '\n' > /tmp/f1$ while read -r line;do echo $line;done < /tmp/f1 How can i iterate over the first line contents without the creation of temporary file? Any ideas? | First of all, avoid doing text-parsing with shell loops . It is hard to do, easy to get wrong and very hard to read. And slow. Very, very, slow. Instead, use something like awk which is specifically designed to read by "fields". For example, with this input file: foo, bar, bazoof, rab, zab You can read each comma-separated field using awk -F, to set the field separator to , : $ awk -F, '{ print "The 1st field is",$1,"the 2nd", $2,"and the 3rd", $3}' fileThe 1st field is foo the 2nd bar and the 3rd bazThe 1st field is oof the 2nd rab and the 3rd zab Even if you insist on doing it in the shell, you don't need the temp file and you don't need tr . You can tell while read to separate on commas: $ while IFS=, read -r one two three; do echo "The 1st field is $one, the 2nd $two and the 3rd $three"; done < fileThe 1st field is foo, the 2nd bar and the 3rd bazThe 1st field is oof, the 2nd rab and the 3rd zab | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/554595",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/382178/"
]
} |
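A self-contained sketch of the `while IFS=, read` approach from the answer above; the sample file path and the three field names are illustrative assumptions:

```bash
#!/usr/bin/env bash
# Build a small two-record sample file of comma-separated values.
printf '%s\n' 'foo,bar,baz' 'oof,rab,zab' > /tmp/sample.csv

# Let read split on commas directly; no temporary file and no tr needed.
while IFS=, read -r one two three; do
    echo "1st: $one  2nd: $two  3rd: $three"
done < /tmp/sample.csv
```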
554,809 | I'm running a current version of Arch Linux (KDE) on a Dell laptop. When I press the Meta key (with the Microsoft logo) it brings up the application launcher menu+. So it's not a dead key. And Ctrl works as expected. However, Ctrl + Meta does not highlight the pointer / cursor position, nor does clicking the mouse animate anything, despite having both "Mouse Click Animation" and "Track Mouse" checked in the Systems Settings / Desktop Effects configuration, and confirming that the key binding for "Track Mouse" has both "Ctrl" and "Meta" checked. I tried a variety of other finger-yoga contortions, but was unable to locate whatever magic combination would work. Is there some common application or setting that might be overriding the above? (I don't believe I'm running anything especially peculiar enough to override KDE / Plasma.0 | I use KDE version 5.6 and in order for mouse tracking to work I actually need to do the following: Go to the System settings . search "Track mouse", it should be in the "Desktop effects" submenu. Enable it by ticking the checkbox. Mouse over the Track mouse and then click on the "configure" whichshould appear as settings icon on the left. As shortcut, I have unchecked "meta" and everything else. Then I clicked "none" to set a shortcut. Then I pressed the F12 key. Then I clicked OK. If it still does not work, then in System Settings, search for "compositor" and enable it. It happened to me that it was disabled automatically because it crashed. That's it. The screenshot is an example which uses different keys. This is the settings screen: Cursor is highlighted like this: | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/554809",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/34677/"
]
} |
554,908 | Can I disable Spectre and Meltdown mitigation features in Ubuntu 18.04LTS? I want to test how much more performance I gain when I disable these two features in Linux, and if the performance is big, to make it permanently. | A number of kernel boot parameters are available to disable or fine-tune hardware vulnerability mitigations: for Spectre v1 and v2 : nospectre_v1 (x86, PowerPC), nospectre_v2 (x86, PowerPC, S/390, ARM64), spectre_v2_user=off (x86) for SSB: spec_store_bypass_disable=off (x86, PowerPC), ssbd=force-off (ARM64) for L1TF : l1tf=off (x86) for MDS : mds=off (x86) for TAA : tsx_async_abort=off for iTLB multihit : kvm.nx_huge_pages=off for SRBDS : srbds=off for retbleed: retbleed=off KPTI can be disabled with nopti (x86, PowerPC) or kpti=0 (ARM64) A meta-parameter, mitigations , was introduced in 5.2 and back-ported to 5.1.2, 5.0.16, and 4.19.43 (and perhaps others). It can be used to control all mitigations, on all architectures, as follows: mitigations=off will disable all optional CPU mitigations; mitigations=auto (the default setting) will mitigate all known CPU vulnerabilities, but leave SMT enabled (if it is already); mitigations=auto,nosmt will mitigate all known CPU vulnerabilities and disable SMT if appropriate. Some of these can be toggled at runtime; see the linked documentation for details. | {
"score": 7,
"source": [
"https://unix.stackexchange.com/questions/554908",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/149626/"
]
} |
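To try this on Ubuntu 18.04, the usual route is to check the current status under sysfs and then add the parameter to GRUB's kernel command line; a hedged sketch, assuming a GRUB-booted system with a kernel new enough to understand `mitigations=`:

```bash
# See which mitigations are currently active.
grep . /sys/devices/system/cpu/vulnerabilities/*

# In /etc/default/grub, extend the kernel command line, for example:
#   GRUB_CMDLINE_LINUX_DEFAULT="quiet splash mitigations=off"
sudo update-grub   # regenerate the GRUB configuration
sudo reboot        # the new command line takes effect on the next boot
```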
554,932 | Does the stat command offer, or does bash offer, a straightforward way to check whether FILENAME refers to a file rather than, say, a directory? Using bash and stat , here is my ugly solution: if RESPONSE="$(LC_ALL=C stat -c%F FILENAME)"\ && [ "$RESPONSE" = 'regular file'\ -o "$RESPONSE" = 'regular empty file' ]then # do something ...fi See also this related question. | The -f test will be true if the given name is the name of a regular file or a symbolic link to a regular file. The -h test will be true if the given name is the name of a symbolic link. This is documented in both man test (or man [ ), and in help test in a bash shell session. Both tests are standard tests that any POSIX test and [ utility would implement. name=somethingif [ -f "$name" ] && ! [ -h "$name" ]; then # "$name" refers to a regular filefi The same thing using stat on OpenBSD (the following bits of code also uses the standard -e test just to make sure the name exists in the filesystem before calling stat ): name=somethingif [ -e "$name" ] && [ "$(stat -f %Hp "$name")" = "10" ]; then # "$name" refers to a regular filefi (a filetype of 10 indicates a regular file). If using GNU stat : name=somethingif [ -e "$name" ] && [[ $(stat --printf=%F "$name") == "regular"*"file" ]]; then # "$name" refers to a regular filefi The pattern in this last test matches both the string regular file and regular empty file . | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/554932",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/18202/"
]
} |
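The two tests combine naturally into a small helper; a sketch in POSIX sh (the function name is purely illustrative):

```bash
# True only for regular files that are not reached through a symbolic link.
is_regular_file() {
    [ -f "$1" ] && ! [ -h "$1" ]
}

if is_regular_file /etc/hostname; then
    echo "regular file"
fi
```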
554,960 | This is standard ls -l output. user@linux:~$ ls -ltotal 0-rw-rw-r-- 1 user user 0 Nov 1 00:00 file01-rw-rw-r-- 1 user user 0 Nov 1 00:00 file02-rw-rw-r-- 1 user user 0 Nov 1 00:00 file03user@linux:~$ Would it be possible to add new line like this? -rw-rw-r-- 1 user user 0 Nov 1 00:00 file01-rw-rw-r-- 1 user user 0 Nov 1 00:00 file02-rw-rw-r-- 1 user user 0 Nov 1 00:00 file03 | ls -l | sed G (that one is a common idiom and on the sed FAQ ). Or (likely faster, probably doesn't matter for the (likely short) output of ls -l ): ls -l | paste -d '\n' - /dev/null Those insert a blank line after each line of the output of ls -l . Now, if you want an empty line after each file described by ls -l , which would be different if there are files whose name contains newline characters, you would have to do something like: for f in *; do ls -ld -- "$f" && echo; done (which would also skip the total line). Or you could use ls -ql which would make sure you get one line per file (the newline characters, like all control characters would be rendered as ? (at least in the POSIX locale)). | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/554960",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/-1/"
]
} |
555,047 | I have a directory foo with several files: .└── foo ├── a.txt └── b.txt and I want to move it into a directory with the same name: .└── foo └── foo ├── a.txt └── b.txt I'm currently creating a temporary directory bar , move foo into bar and rename bar to foo afterwards: mkdir barmv foo barmv bar foo But this feels a little cumbersome and I have to pick a name for bar that's not already taken. Is there a more elegant or straight-forward way to achieve this? I'm on macOS if that matters. | To safely create a temporary directory in the current directory, with a name that is not already taken, you can use mktemp -d like so: tmpdir=$(mktemp -d "$PWD"/tmp.XXXXXXXX) # using ./tmp.XXXXXXXX would work too The mktemp -d command will create a directory at the given path, with the X -es at the end of the pathname replaced by random alphanumeric characters. It will return the pathname of the directory that was created, and we store this value in tmpdir . 1 This tmpdir variable could then be used when following the same procedure that you are already doing, with bar replaced by "$tmpdir" : mv foo "$tmpdir"mv "$tmpdir" foounset tmpdir The unset tmpdir at the end just removes the variable. 1 Usually, one should be able to set the TMPDIR environment variable to a directory path where one wants to create temporary files or directories with mktemp , but the utility on macOS seems to work subtly differently with regards to this than the same utility on other BSD systems, and will create the directory in a totally different location. The above would however work on macOS. Using the slightly more convenient tmpdir=$(TMPDIR=$PWD mktemp -d) or even tmpdir=$(TMPDIR=. mktemp -d) would only be an issue on macOS if the default temporary directory was on another partition and the foo directory contained a lot of data (i.e. it would be slow). | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/555047",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/295479/"
]
} |
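The answer's pieces, assembled into one runnable sequence (the directory names are the ones from the question):

```bash
# Create a uniquely named temporary directory in the current directory.
tmpdir=$(mktemp -d "$PWD"/tmp.XXXXXXXX)

mv foo "$tmpdir"    # foo becomes $tmpdir/foo
mv "$tmpdir" foo    # rename the wrapper, leaving foo/foo
unset tmpdir
```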
555,080 | Let's say I have created a cgroup and attach a memory limit of 200MB to the cgroup. I then run a memory-intensive process inside the cgroup, and it uses up its limit of 200MB. As the process runs, is it possible to lower the memory consumption of the process? So can I set a limit of 100MB while the process is already running and using up 200MB? If so, how quickly does the kernel release 100MB of memory that was being used by the cgroup, and what happens to that memory if the process tries to access it? | Yes, depending on what you mean by "allocated". You can reduce or increase the amount of physical memory that the process can use. In-use memory consists of, among other things: Images from disk which are mapped to the process (e.g. shared libraries, executables) Pages from disk which are cached by the IO cache layer Anonymous memory which may or may not be backed by swap. When reducing the cgroup memory.limit_in_bytes the system will discard pages from disk cache and from disk images (e.g. executables), as these can always be reloaded if needed. If you have swap enabled, it can also page out anonymous memory. Create a cgroup for your process and set the limit # Create a cgroupmkdir /sys/fs/cgroup/memory/my_cgroup# Add the process to itecho $PID > /sys/fs/cgroup/memory/my_cgroup/cgroup.procs# Set the limit to 40MBecho $((40 * 1024 * 1024)) > /sys/fs/cgroup/memory/my_cgroup/memory.limit_in_bytes The system will immediately swap things out until it is under the limit, and it will keep doing this to keep the process under the limit. If it is not possible for the kernel to put the process or group under the limit initially (because certain memory cannot be swapped e.g. if you don't have any swap, or certain kernel pages which might not be swappable), an error will occur, and the limit will not be changed. If it is not possible to do this at some point in the future, the OOM killer will kill the process (or one of its subprocesses). You can use the oom_control key to instead suspend the processes in the group, but then you will have to resolve the issue yourself, either by raising the limit or killing one or more processes. More information There is also a memory.soft_limit_in_bytes key. This does not invoke the oom killer, rather it is used when the system is under memory pressure to determine which processes get swapped out first (and possibly at other times). From within the process or group Processes can be made aware of memory limits by coding them to query the current usage, and the hard and soft limits. They can then take action to avoid breaching the limits, for example by refraining: from allocating memory, creating child processes or accepting new connection; or by reducing usage by e.g. terminating existing connections. Processes can subscribe to notifications using the cgroups notification API. See also Kernel documentation at: https://www.kernel.org/doc/html/latest/admin-guide/cgroup-v1/memory.html#memory-thresholds | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/555080",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/292999/"
]
} |
555,099 | I have been enthusiastic to shell scripting for some months, and found that the codes I write (in bash ) do not work in some machines. This is very disappointing. I realize that I should have learned the most portable shell scripting instead. This would, of course, be a bit painful in the beginning, but I believe it will pay off eventually. I hope my shell scripts will work almost everywhere, including the Linux family and the BSD family. (But I don't care if they run on Windows.) Questions What's the most portable shell? EDIT: Please omit this question What are some serious disadvantages to use the most portable shell? EDIT: Please omit this question What else should I pay attention to if I want to write extremely portable codes? e.g. what core-utils and which versions should I use and avoid? | Your bash scripts should work on any machine that has the bash shell installed, unless you use bash features that are too new for the version of the shell on the other system, like for example, trying to use name references ( declare -n ), associative arrays ( declare -A ), or the %(...)T format of printf (or any of a number of other different things) with the default bash on macOS (which is ancient; just install a newer bash with Homebrew in that case). Your scripts may also fail if they use external tools not available on some other system, or features of tools that may not be implemented, or that are implemented differently. The most portable shell language on POSIX systems is the sh shell. This is described here: https://pubs.opengroup.org/onlinepubs/9699919799/idx/shell.html The bash shell (and others) implement sh , with extensions. Some shells, like fish or zsh , are not even trying to be sh shells at all (but zsh provides a degree of " sh -emulation" via --emulate sh ), while shells like dash does try to be more pure to the POSIX sh standard (but still with a few extensions). In general, shell aspects that relate to syntax and built in features should be "portable" as long as the script is always executed by the same shell (and version of the shell, to some degree), even if that shell happens to be the fish shell. This is the same type of "portability" that you have with Python, Perl and Ruby scripts. There should however always be an sh shell available (usually installed as /bin/sh ), which is usually bash , dash , or ksh running in some form of "compatibility mode". The most portable utilities on a POSIX system are the POSIX utilities. These are described here: https://pubs.opengroup.org/onlinepubs/9699919799/idx/utilities.html GNU utilities, the ones in coreutils , implement the POSIX utilities (which, as user schily points out in comments , does not mean that they are POSIX- compliant ), and then expands upon them with features that are mainly for convenience. This is also true for the corresponding utilities on other Unix systems. Also note that the GNU coreutils package on Linux does not contain all utilities that POSIX specifies and that utilities such as find , sed , awk , and others, which are all POSIX utilities, are packaged separately. As long as you stay with utilities that are POSIX and use their POSIX behaviour (in terms of options etc.), in scripts that use POSIX sh syntax, you are likely to be "mostly portable" across most Unix systems. But also note that there are non-POSIX utilities and non-POSIX extension to POSIX utilities that are still fairly portable, because they are commonly implemented. Examples of common non-POSIX utilities are pkill , tar , and gzip . 
Examples of non-POSIX extensions to POSIX utilities that are common to find is the -iname predicate of find and the fact that sed often can be made to understand extended regular expressions using -E . You'll learn what extensions in implementations of what tools, running on what Unices, behaves in what way, by testing by yourself (in virtual machines for example), and by reading questions and answers on this site. For example, How can I achieve portability with sed -i (in-place editing)? Also note that there are instances when the POSIX standard leaves a behaviour "unspecified", which mean that the same utility on two different Unix systems, using a POSIX option, may behave differently. An example of this is found here in Why permission denied upon symbolic link update to new target with permissions OK? The disadvantage of trying to write pure POSIX shell code is that you miss out on some really useful features. I once tried to write the POSIX equivalent tests for find that GNU's -readable predicate tests for, which wasn't easy, and trying to signal a process by name using ps and grep instead of pkill is a topic of a few too many questions on this site. Likewise, trying to "stay POSIX", you would need to stay away from really useful things like perl , and you would not be able to do much of any administrative task portably (adding users, manage backups, transfer files between systems, etc.) In general I prefer a pragmatic approach to getting things done, while understanding that my code may have to be modified to support other systems later (if it's ever used on other systems), rather than a purist approach. This does not stop me from writing scripts that I know will have as few portability issues as possible (I wouldn't use unportable semantics everywhere). But that's my personal opinion. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/555099",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/291142/"
]
} |
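As a small illustration of the "stay close to POSIX" advice, the sketch below uses only POSIX sh syntax and POSIX utilities, so it should behave the same under dash, ksh, or bash; the choice of /etc/passwd as input is just for the example:

```sh
#!/bin/sh
# Count the accounts whose login shell is /bin/sh, using no bashisms:
# no arrays, no [[ ]], no process substitution.
count=0
while IFS= read -r line; do
    case $line in
        *:/bin/sh) count=$((count + 1)) ;;
    esac
done < /etc/passwd
printf 'sh users: %d\n' "$count"
```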
555,208 | I recently purchased a i5-9600K . Which is supposed to run 6 cores and 6 threads (hyperthreading), when I take a look into /proc/cpuinfo the ht flag is on, and checking a tool like htop only shows 6 cores, as you can see in the image below. I've used other Intel and AMD processors, and usually when the product says 6 cores/6 threads the total amount is 12 , but in this case I see just 6 . Am I wrong or what could be the problem? Thank you! | If you scroll down on your CPU’s Ark page , you’ll see that it says Intel® Hyper-Threading Technology ‡ No Your CPU has six cores, but it doesn’t support hyper-threading, so your htop display is correct. The CPU specifications on Ark show the full thread count, there’s no addition or multiplication involved; see for example the Xeon E3-1245v3 for a hyper-threading-capable CPU (four cores, two threads per core, for eight threads in total). The ht moniker given to the underlying CPUID flag is somewhat misleading: in Intel’s manual (volume 3A, section 8.6), it’s described as “Indicates when set that the physical package is capable of supporting Intel Hyper-Threading Technology and/or multiple cores”. So its presence indicates that the CPU supports hyper-threads (even if they’re disabled), or contains multiple cores in the same package, or both. To determine what is really present, you need to enumerate the CPUs in the system, using firmware-provided information, and use the information given to figure out whether there are multiple logical cores, on how many physical cores, on how many sockets, etc. Depending on the CPU, a “CPU” shown in htop (and other tools) can be a thread (on a hyper-threading system), a physical core (on a non-hyper-threading system), or even a full package (on a non-hyper-threading, single-core system). The Linux kernel does all this detection for you, and you can see the result using for example lscpu . At least your CPU isn’t affected by any of the hyperthreading-related vulnerabilities! | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/555208",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/384590/"
]
} |
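A few quick ways to confirm the topology the answer describes, using standard tools:

```bash
lscpu                                  # summary: sockets, cores per socket, threads per core
lscpu -e                               # one line per logical CPU
nproc                                  # number of usable logical CPUs
grep -c '^processor' /proc/cpuinfo     # logical CPUs as the kernel sees them
```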
555,257 | I am trying to get x amount of file names printed from highest line count to lowest. ATM i have this wc -l /etc/*.conf |sort -rn | head -6 | tail -5 | and i get this 543 /etc/ltrace.conf 523 /etc/sensors3.conf 187 /etc/pnm2ppa.conf 144 /etc/ca-certificates.conf Now this would be ok but i only need the names, is there any way of removing the number of lines? | If you scroll down on your CPU’s Ark page , you’ll see that it says Intel® Hyper-Threading Technology ‡ No Your CPU has six cores, but it doesn’t support hyper-threading, so your htop display is correct. The CPU specifications on Ark show the full thread count, there’s no addition or multiplication involved; see for example the Xeon E3-1245v3 for a hyper-threading-capable CPU (four cores, two threads per core, for eight threads in total). The ht moniker given to the underlying CPUID flag is somewhat misleading: in Intel’s manual (volume 3A, section 8.6), it’s described as “Indicates when set that the physical package is capable of supporting Intel Hyper-Threading Technology and/or multiple cores”. So its presence indicates that the CPU supports hyper-threads (even if they’re disabled), or contains multiple cores in the same package, or both. To determine what is really present, you need to enumerate the CPUs in the system, using firmware-provided information, and use the information given to figure out whether there are multiple logical cores, on how many physical cores, on how many sockets, etc. Depending on the CPU, a “CPU” shown in htop (and other tools) can be a thread (on a hyper-threading system), a physical core (on a non-hyper-threading system), or even a full package (on a non-hyper-threading, single-core system). The Linux kernel does all this detection for you, and you can see the result using for example lscpu . At least your CPU isn’t affected by any of the hyperthreading-related vulnerabilities! | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/555257",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/384642/"
]
} |
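A hedged sketch answering the question in this row (it is not drawn from the recorded response): print only the second whitespace-separated field to drop the counts, assuming none of the /etc/*.conf names contain spaces:

```bash
wc -l /etc/*.conf | sort -rn | head -6 | tail -5 | awk '{print $2}'
```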
555,364 | I’m trying to use sed to print all lines until but excluding a specific pattern. I don’t understand why the following doesn’t work: sed '/PATTERN/{d;q}' file According to my understanding of sed scripting, this expression should cause the following: When a line matches /PATTERN/ , execute the group consisting of commands to d elete the pattern space (= the current line) q uit after printing the current pattern space In isolation, both /PATTERN/d and /PATTERN/q work; that is, d deletes the offending line, and q causes sed to terminate but after printing the line , as documented. But grouping the two operations together in a block seemingly causes the q to be ignored. I know that I can use Q instead of {d;q} as a GNU extension (and this works as expected!) but I’m interested in understanding why the above doesn’t work, and in what way I am misinterpreting the documentation. My actual use-case is (only slightly) more complex, since the first line of the file actually matches the pattern, and I’m skipping that (after doing some replacement): sed -e '1{s/>21/>chr21/; n}' -e '/>/{d;q}' in.fasta >out.fasta But the above, simplified case exhibits the same behaviour. | To output all lines of a file until the matching of a particular pattern (and to not output that matching line), you may use sed -n '/PATTERN/q; p;' file Here, the default output of the pattern space at the end of each cycle is disabled with -n . Instead we explicitly output each line with p . If the given pattern matches, we halt processing with q . Your actual, longer, command, which changes the name of chromosome 21 from just 21 to chr21 on the first line of a fasta file, and then proceeds to extract the DNA for that chromosome until it hits the next fasta header line, may be written as sed -n -e '1 { s/^>21/>chr21/p; d; }' \ -e '/^>/q' \ -e p <in.fasta >out.fasta or sed -n '1 { s/^>21/>chr21/p; d; }; /^>/q; p' <in.fasta >out.fasta The issue with your original expression is that the d starts a new cycle (i.e., it forces the next line to be read into the pattern space and there's a jump to the start of the script). This means q would never be executed. Note that to be syntactically correct on non-GNU systems, your original script should look like /PATTERN/ { d; q; } . Note the added ; after q (the spaces are not significant). | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/555364",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/3651/"
]
} |
555,380 | I have a raspberrypi ZeroW that I am trying to connect to a network with a hidden ssid. I know that I could add this line "scan_ssid=1" to my wpa_supplicant.conf file for setup that way, however I would like to do all of the network configuration through wpa_cli. The man page does not seem to have anything on hidden ssid's and when I run the set command it does not provide the output of all the variable options as stated in the man page I just get: "Invalid SET command - at least 2 arguments are required." tldr: connect to hidden ssid through wpa_cli only | $ wpa_cli > add_network x> set_network x ssid "hidden_ssid"> set_network x psk "secret"// ALLOW CONNECT TO HIDDEN SSID > set_network x scan_ssid 1> enable_network x> save_config> select_network x | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/555380",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/384752/"
]
} |
555,433 | I have a text file that has not been created properly. I am trying to get the people who create the text file to fix their output, but that is a long process and in the meantime I want to try repairing what I have, as a temporary workaround. The file is supposed to contain 9 fields separated by a vertical bar ( | ) delimiter. Unfortunately, the second field is also several fields separated by a vertical bar. And there's no escaping or quoting being used. So what I have has a lot more than 9 fields. I want to repair this, by taking the first field and the last 7 fields as-is, and transforming the fields in the middle into a single field, either suppressing the delimiters or replacing them with spaces. For examples: field1|field2|field3||||||field91a|DAVID|JOY|02022|4|5|6|7|8|91b|DAVID|JOY|ZYN|02022|4|5|6|7|8|9 I am expecting output as field1|field2|field3||||||field91a|DAVIDJOY|02022|4|5|6|7|8|91b|DAVIDJOYZYN|2022|4|5|6|7|8|9 How can I do this using shell-level tools? | With GNU sed , you could use: sed ':1;s/|/|/9;T;s/|//2;t1' which joins the second with the third field (deletes the second occurrence of | ) as many times as necessary until there is no more than 9 fields in the output. On an input like: 1|a|3|4|5|6|7|8|91|a|b|3|4|5|6|7|8|91|a|b|c|3|4|5|6|7|8|9 It gives: 1|a|3|4|5|6|7|8|91|ab|3|4|5|6|7|8|91|abc|3|4|5|6|7|8|9 On non-GNU systems, you could use @RakeshSharma's POSIX sed variant or perl instead: perl -F'[|]' -lae 'BEGIN {$" = ""; $, = "|"} print $F[0], "@F[1..$#F-7]", @F[-7..-1]' | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/555433",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/384800/"
]
} |
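An awk alternative for the same repair, shown as a sketch; it assumes no field legitimately contains a |, and that broken rows only ever have too many fields, never too few:

```bash
awk -F'|' -v OFS='|' '{
    mid = $2                          # start the merged middle field
    for (i = 3; i <= NF - 7; i++)     # absorb any extra fields into it
        mid = mid $i
    out = $1 OFS mid
    for (i = NF - 6; i <= NF; i++)    # keep the last 7 fields as-is
        out = out OFS $i
    print out
}' file
```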
555,622 | How can I prefix all commands in a shell without typing it each time? My use case 1: $ git init$ git add -A$ git commit$ git push Prefix should be git␣ ( ␣ is the space character) initadd -Acommitpush My use case 2: sudo docker run my-repo/my-imagesudo docker pssudo docker images Prefix should be sudo docker␣ run my-repo/my-imagepsimages It would be best if I could do something like this: $ git init$ use_prefix sudo docker> ps> images> exit$ sudo docker run my-repo/my-image | You could use a function like the following: use_prefix () { while read -ra c; do "$@" "${c[@]}" done} This will accept your prefix as arguments to the use_prefix command and then read from stdin for each command to be prefixed. Note: You will have to use ctrl+c to exit the while read loop. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/555622",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/384943/"
]
} |
555,630 | I'm running Cinnamon on a Devuan GNU/Linux 3 (Beowulf) machine - which is essentially the same as Debian 10 (Buster). My distribution uses the wicd daemon for network connection management; and I have an applet on my panel. But - an applet for NetworkManager is also present, even though I don't need it. I tried to get rid of it, but couldn't figure out how. Haven't found where its presence is configured, and it also doesn't seem to be its own separate process which I could avoid having run. What can I do to get rid of the applet? Don't want: NetworkManager, The applet to the right of the "US". Want to keep: wicd, The applet to the left of the clock. | You could use a function like the following: use_prefix () { while read -ra c; do "$@" "${c[@]}" done} This will accept your prefix as arguments to the use_prefix command and then read from stdin for each command to be prefixed. Note: You will have to use ctrl+c to exit the while read loop. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/555630",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/34868/"
]
} |
555,677 | I have the following script that works fine: !/bin/basha=12while [ $a -gt 10 ]do echo "$a" a=$(($a-1))doneecho "done" If I add line "echo something" above "do", I expect to see a syntax error in that line. It seems that [ $a -gt 10 ] is bypassed, and it becomes an infininte loop. How could that happen? | From the bash manual : while The syntax of the while command is: while test-commands ; do consequent-commands ;done Execute consequent-commands as long as test-commands has an exit status of zero. The return status is the exit status of the last command executed in consequent-commands , or zero if none was executed. Note: test-commands , plural . You can use multiple commands in the test, and so this is a perfectly valid loop, with the list of commands [ $a -gt 10 ]; echo "$a" as the test: while [ $a -gt 10 ]echo "$a"do a=$(($a-1))done While the command [ $a -gt 10 ] may or may not fail, the echo will (almost) always succeed (unless it couldn't write the text, or some other error happened), so the final exit status of the test commands will always be success, and the loop will always be run. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/555677",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/201795/"
]
} |
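A small runnable illustration of the point: only the exit status of the last command in the test list decides whether the loop continues.

```bash
a=3
while
    echo "testing with a=$a"   # runs every iteration; its status is ignored
    [ "$a" -gt 0 ]             # only this last command controls the loop
do
    a=$((a - 1))
done
echo "done"
```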
555,721 | Today I tried to get Live Kali Linux working off a USB stick. When I got to the login screen, I tried entering root as the username and toor as the password like always, but the login failed. I am out of ideas, since the official Kali site still says those should be the correct login credentials. Here are the steps for reproducing on my Lenovo y700 laptop: Download a Kali linux iso image from the official website . I tried 32-bit and 64-bit versions of the latest Kali Linux and I also tried an old 2018 image, all with the same results Burn the image onto a flash drive. For this I tried using Rufus, Etcher and the dd tool from Linux. I followed the instructions on the official website. Plug the USB into the computer, choose "Legacy Mode" from BIOS boot screen (because if I choose UEFI, then even if USB boot is enabled, it says usb boot is disabled in uefi settings). When the grub screen comes up, I can choose either kali or "advanced options". There is no option for installation. Choose the live version and it will boot into a login screen. Enter root as the username and toor as the password, and get invalid credentials as the response | I don’t know if you still need this, but to anyone who does, Kali changed the password structure from root/toor to kali/kali. This was driving me crazy as it was always root/toor before. See: https://www.kali.org/news/kali-default-non-root-user/ | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/555721",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/320185/"
]
} |
555,746 | Because I spend most of my life in the IPython shell, I have a bad habit of prepending terminal commands with exclamation points. Usually, this just leads to an error, but sometimes, it causes something bad to happen. Can I effectively disable the ! functionality in my terminal? And would this risk interfering with any scripts? | In interactive shells, by default, ! is used for history expansion: the shell will look for a command matching the text following ! , and execute that. As indicated in this answer to Can't use exclamation mark (!) in bash? shells allow history expansion to be disabled: Bash: set +H Zsh: set -K If you never want to use history expansion driven by exclamation marks, you can add these to your shell’s startup scripts. These won’t cause ! to be ignored, but they will avoid running commands from your history. !command will complain that !command doesn’t exist. Shell scripts aren’t affected by these settings, they get their own shells with a non-interactive configuration (which doesn’t include history expansion by default). | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/555746",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/181903/"
]
} |
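To make the bash setting persistent, one line in the per-user startup file is enough:

```bash
echo 'set +H' >> ~/.bashrc   # future interactive shells
set +H                       # and the current one
```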
555,761 | I'm having some trouble trying to create an awk script that checks and possibly corrects every line in a text file. Consider the example: $ cat employee.txt"100","Thomas","Sales","5000""200","Jason","Technology","5500""300","Mayla","Technology","7000""400","Nisha","Marketing","9500""500","Randy","Technology","6000""501","Ritu","Accounting","5400" As you can see, some of the lines appear to be broken at the wrong point. The pattern should follow as such: $ cat employee.txt"100","Thomas","Sales","5000""200","Jason","Technology","5500""300","Mayla","Technology","7000""400","Nisha","Marketing","9500""500","Randy","Technology","6000""501","Ritu","Accounting","5400" So I was wondering if there was a way in Awk to determine if the pattern isn't being followed, such as by verifying the number of commas in every line, and then backspacing the broken lines. I receive files like this containing hundreds or thousands of rows, so manual work to fix all the broken lines is tedious. I'm creating a control file to load data into a table using SQLLDR, but I get errors because the text file contains broken lines. So my solution is to fix every line by script. Any thoughts? Script doesn't have to be in Awk. | $ awk -F, 'FNR == 1 { nf = NF } { while (NF < nf || !/[^,]"$/) { line = $0; getline; $0 = line $0 }; print }' file"100","Thomas","Sales","5000""200","Jason","Technology","5500""300","Mayla","Technology","7000""400","Nisha","Marketing","9500""500","Randy","Technology","6000""501","Ritu","Accounting","5400" This uses awk and assumes that the 1st line has the correct number of fields and that no field may contain embedded commas. It further assumes that no line will ever have too many fields, i.e. that a line may have extra newlines, but that no line is joined up with the next/previous line. When a line with the wrong number of fields (or a line that does not end with a " character, which means that the last field was split) is found, the current line is saved in the variable line and the next line is read. The current line is then updated as the concatenation of line and the line just read. This continues (in the case of multiple consecutive split lines) until we end up with something that has the correct number of fields. The reconstructed line is then printed. NF is a special awk variable holding the number of fields in the current record (a record is a line by default). This number is updated automatically when $0 (the current record) is assigned to or when a new record is read. The nf variable is our own variable that is set to the "correct number of fields" from the first line. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/555761",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/385073/"
]
} |
555,770 | This is not a homework assignment. I am new to bash and trying to gather some data from our logging. I am reading lines from a file. All of the lines look like this: [info] 1 - 12-04 15:33:37.542 : Finished createWalletRandom, total time 9898ms I need to parse out the milliseconds (which I will get min, max, average). I can get 9898ms and I need to get rid of the ms for the math to work. Trying this line below, doesn't change anything: MILLI_SECONDS=${RAW_MILLI_SECONDS%??} And trying this below, generates an error get_wallet_times.sh: line 23: -2: substring expression < 0 : MILLI_SECONDS=${RAW_MILLI_SECONDS::-2} Here is my code: while read ONE_LINE;do echo $ONE_LINE RAW_MILLI_SECONDS="$(cut -d' ' -f13 <<<"$ONE_LINE")" echo $RAW_MILLI_SECONDS MILLI_SECONDS=${RAW_MILLI_SECONDS::-2} MILLI_SECONDS=${RAW_MILLI_SECONDS%??} echo ${MILLI_SECONDS} LINE_COUNT=$((LINE_COUNT+1)) FILE_SUM=$((FILE_SUM+MILLI_SECONDS))done < logfile.txt This is on macOS, in case its a bash issue specific to mac. Please let me know if there is anything thing else you need. ThnxMatt | $ awk -F, 'FNR == 1 { nf = NF } { while (NF < nf || !/[^,]"$/) { line = $0; getline; $0 = line $0 }; print }' file"100","Thomas","Sales","5000""200","Jason","Technology","5500""300","Mayla","Technology","7000""400","Nisha","Marketing","9500""500","Randy","Technology","6000""501","Ritu","Accounting","5400" This uses awk and assumes that the 1st line has the correct number of fields and that no field may contain embedded commas. It further assumes that no line will ever have too many fields, i.e. that a line may have extra newlines, but that no line is joined up with the next/previous line. When a line with the wrong number of fields (or a line that does not end with a " character, which means that the last field was split) is found, the current line is saved in the variable line and the next line is read. The current line is then updated as the concatenation of line and the line just read. This continues (in the case of multiple consecutive split lines) until we end up with something that has the correct number of fields. The reconstructed line is then printed. NF is a special awk variable holding the number of fields in the current record (a record is a line by default). This number is updated automatically when $0 (the current record) is assigned to or when a new record is read. The nf variable is our own variable that is set to the "correct number of fields" from the first line. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/555770",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/364914/"
]
} |
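A hedged sketch answering the question in this row (it is not drawn from the recorded response): let awk find the field ending in ms, strip the suffix, and track count, min, max and average in one pass:

```bash
awk '{
    for (i = 1; i <= NF; i++)
        if ($i ~ /^[0-9]+ms$/) {
            sub(/ms$/, "", $i)               # drop the literal "ms" suffix
            sum += $i; n++
            if (n == 1 || $i < min) min = $i
            if ($i > max) max = $i
        }
} END {
    if (n) printf "count=%d min=%d max=%d avg=%.1f\n", n, min, max, sum / n
}' logfile.txt
```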
555,948 | Say I have a text file text.txt and I want to replace a (multi-line) string that is contained in before.txt with another string that is contained in after.txt , how do I do that? (I dont want to use regular expression I literally want to replace the string contained in before.txt with the text contained in after.txt in text.txt . I was hoping to use the methods proposed in: https://unix.stackexchange.com/a/26289/288916 I tried: perl -i -p0e 's/`cat before.txt`/`cat after.txt`/se' text.txt But unless I am a complete idiot and messed something trivial up I cannot simply extend it to loading a string to be found from a file with cat. Perhaps something is going wrong with the escaping. The file before.txt contains symbols such as /[]" . Thanks @ilkkachu, I tried: perl -i -0 -pe '$b = `cat before.txt`; $a = `cat after.txt`; s/\Q$b\E/$a/s\' text.txt , but it is still not working correctly. I got it working in one instance by making sure the string in before exactly matches the whole lines in which the strings was to be replaced. But it does not work for instance to replace a string that is not found at the start of the line.Example: text.txt file containing: Here is some text. before.txt contains: text after.txt contains: whatever No chance is made. | perl -i -p0e 's/`cat before.txt`/`cat after.txt`/se' text.txt Here, you have backticks inside single-quotes, so they are not processed by the shell, but Perl sees them as-is. Then again, Perl also supports backticks as a form of quoting, but it doesn't work inside s/// . Having a multi-line pattern is not an issue, as long as you use -0 or -0777 on the Perl command line. ( -0 will have it separate lines with the NUL character, which a text file won't have, and -0777 would read the whole file in one go.) You could do it with double-quotes, but that would expand the contents of the files directly in the Perl script, and any special characters would be taken as part of the script. A single slash would end the s/// operator and cause issues. Instead, have Perl read the files: perl -i -0 -pe '$b = `cat before.txt`; $a = `cat after.txt`; s/$b/$a/s' text.txt Here, contents of before.txt would still be taken as a regular expression. If you want to stop that, use s/\Q$b\E/$a/s instead. I don't think you want the e flag to s/// either, it would make the replacement taken as a Perl expression (which again only really matters if you have the shell expand the file contents to the Perl command line). In your later example you have some text.\n in text.txt and text\n in before.txt , where the \n represents a newline as usual. When the files are loaded in Perl, they're taken as-is, so the final newline in before.txt counts. The other file has a dot before the newline, the other doesn't, so they don't match. You can remove a possible trailing newline with chomp $b; after loading the files. You can remove a possible trailing newline with e.g. $b =~ s/\n$//; : $ perl -0 -pe '$b = `cat before.txt`; $a = `cat after.txt`; $b =~ s/\n$//; $a =~ s/\n$//; s/$b/$a/s' text.txtHere is some whatever. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/555948",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/288916/"
]
} |
556,070 | I want to find a directory x inside a particular subdirectory y using the macOS terminal, but I do not know the directories preceding y . This command - find / -type d -name "x" works for finding x , but there are many directories named x across the system so I need to find the x which is under the directory y . I tried - find / -type d -name "/y/x" or find / -type d -name "y/x" or find / -type d -name "../y/x" but these do not show me the desired result. | Using the -path primary: find / -path '*y/x' -path pattern True if the pathname being examined matches pattern. Special shell pattern matching characters (``['', ``]'', ``*'', and ``?'') may be used as part of pattern. These characters may be matched explicitly by escaping them with a backslash (``\''). Slashes (``/'') are treated as normal characters and do not have to be matched explicitly. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/556070",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/257542/"
]
} |
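A slightly tighter variant anchors y as a whole path component (so a directory merely ending in y, such as apply/x, does not match) and silences errors from unreadable directories:

```bash
find / -type d -path '*/y/x' 2>/dev/null
```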
556,307 | We want to automate the process of removing old directories from /var/spool/abrt/ . We have RHEL machines - version 7.x. The known way is to do the following # systemctl stop abrtd# systemctl stop abrt-oops And we can remove all those directories and files with following rm command: # abrt-cli rm /var/spool/abrt/* And then start the services # systemctl start abrtd# systemctl start abrt-oops We want to simplify the deletion process as the following -- it will delete the directories that are older than 10 days from /var/spool/abrt/ find /var/spool/abrt/ -type d -ctime +10 -exec rm -rf {} \; Is it a good alternative to purge the /var/spool/abrt/ directory? | Here is my suggestion: 1) Create a shell script /home/yael/purgeabrt.sh $ cat purgeabrt.sh#!/bin/bashset -efunction cleanup(){ systemctl start abrtd systemctl start abrt-oops}trap cleanup EXITsystemctl stop abrtdsystemctl stop abrt-oopsfind /var/spool/abrt/ -type d -ctime +10 -exec abrt-cli rm {} \;cleanup 2) Run the script as root : sudo crontab -e Add the line: */5 * * * * bash /home/yael/purgeabrt.sh in order to execute the cron job every 5 minutes. Edit: set -e will terminate the execution of the script if a command exits with a non-zero status. trap cleanup EXIT will catch signals that may be thrown to the script and executes the cleanup code. Note: The call to cleanup in the scripts last line is probably unnecessary (redundant) but improves readability of the code. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/556307",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/237298/"
]
} |
556,364 | I want to run a shell script while ignoring some of the commands, because they need privileges. Commands like insmod . So I filter the script with this, and it works (those commands are replaced by true ): sed -e 's/command1/true/g' -e 's/command2/true/g' -e 's/command3/true/g' ... -e 's/commandN/true/g' Is there a more concise way of expressing that? With simple caracters, to change 'a', 'b', and 'm' into an 'x' I could do something like: sed -e 's/[abm]/x/g' or tr abm x (or its tr abm '[x*]' POSIX equivalent). But with strings? | If your sed is able to use the non-standard -E option to understand extended regular expressions, you may use sed -E 's/string1|string2|string3/true/g' The alternation with | is an extended regular expression feature not supported by basic regular expression of the type that sed usually supports. The sed command above would replace any of the three given string even if they occurred as substrings of other strings (such as string1 in string10 ). To prevent that, also match word boundaries on either side: sed -E 's/\<(string1|string2|string3)\>/true/g' On macOS, you may want to use [[:<:]] and [[:>:]] in place of \< and \> to specify word boundaries. As an aside, your s/[a|b|m]/x/g expression (now corrected in the question) would not only replace a , b and m with x but would also replace any | in the text with x . This is the same as s/[abm|]/x/g or y/abm|/xxxx/ , or the tr commands tr 'abm|' 'xxxx' and tr 'abm|' '[x*4]' . | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/556364",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/354526/"
]
} |
556,380 | I am using the command sudo git clone git://git.moodle.org/moodle.git to download moodle. After partial downloading, it just halts without any valid reason. How can I suspend the job at such a stage and resume it later? | If your sed is able to use the non-standard -E option to understand extended regular expressions, you may use sed -E 's/string1|string2|string3/true/g' The alternation with | is an extended regular expression feature not supported by basic regular expression of the type that sed usually supports. The sed command above would replace any of the three given string even if they occurred as substrings of other strings (such as string1 in string10 ). To prevent that, also match word boundaries on either side: sed -E 's/\<(string1|string2|string3)\>/true/g' On macOS, you may want to use [[:<:]] and [[:>:]] in place of \< and \> to specify word boundaries. As an aside, your s/[a|b|m]/x/g expression (now corrected in the question) would not only replace a , b and m with x but would also replace any | in the text with x . This is the same as s/[abm|]/x/g or y/abm|/xxxx/ , or the tr commands tr 'abm|' 'xxxx' and tr 'abm|' '[x*4]' . | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/556380",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/385605/"
]
} |
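The recorded response does not cover this row's question, and git clone cannot resume an interrupted transfer; as a hedged workaround sketch, a shallow clone followed by progressive deepening keeps each transfer small, so a dropped connection costs much less:

```bash
git clone --depth 1 git://git.moodle.org/moodle.git   # small initial transfer
cd moodle
git fetch --depth=1000                                # deepen the history in affordable steps
git fetch --unshallow                                 # finally retrieve everything that remains
```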
556,427 | I'm in a directory with a couple thousand files, but the files I want to filter all have the following syntax: *.imputed.*_info I want to use awk to filter out the records in each file where the 5th column of data has a value > 0.50 and I was able to do that with: awk '{if($5 >= .5) {print}}' filename . That too worked. I then tried to loop through all 500 or so files and concatenate records from each that match this criteria. I tried the following but I am not getting the syntax right. touch snplist.txtfor name in *.imputed.*_info; do snps="awk '{if($5 >= .5) {print}}' $name" cat snplist.txt "$snps" > snplist.txtdone | Your code overwrites the output file in each iteration. You also do not actually call awk . What you want to do is something like awk '$5 >= 0.5' ./*.imputed.*_info >snplist.txt This would call awk with all your files at once, and it would go through them one by one, in the order that the shell expands the globbing pattern. If the 5th column of any line in a file is greater or equal to 0.5, that line would be outputted (into snplist.txt ). This works since the default action, if no action ( {...} block) is associated with a condition, is to output the current line. In cases where you have a large number of files (many thousands), this may generate an "Argument list too long" error. In that case, you may want to loop: for filename in ./*.imputed.*_info; do awk '$5 >= 0.5' "$filename"done >snplist.txt Note that the result of awk does not need to be stored in a variable. Here, it's just outputted and the loop (and therefore all commands inside the loop) is redirected into snplist.txt . For many thousands of files, this would be quite slow since awk would need to be invoked for each of them individually. To speed things up, in the cases where you have too many files for a single invocation of awk , you may consider using xargs like so: printf '%s\0' ./*.imputed.*_info | xargs -0 awk '$5 >= 0.5' >snplist.txt This would create a list of filenames with printf and pass them off to xargs as a nul-terminated list. The xargs utility would take these and start awk with as many of them as possible at once, in batches. The output of the whole pipeline would be redirected to snplist.txt . This xargs alternative is assuming that you are using a Unix, like Linux, which has an xargs command that implements the non-standard -0 option to read nul-terminated input. It also assumes that you are using a shell, like bash , that has a built-in printf utility ( ksh , the default shell on OpenBSD, would not work here as it has no such built-in utility). For the zsh shell (i.e. not bash ): autoload -U zargszargs -- ./*.imputed.*_info -- awk '$5 >= 0.5' >snplist.txt This uses zargs , which is basically a reimplementation of xargs as a loadable zsh shell function. See zargs --help (after loading the function) and the zshcontrib(1) manual for further information about that. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/556427",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/224116/"
]
} |
556,441 | What specifically can Linux do when it has swap that it can't without swap? For this question I want to focus on the difference between for example, a Linux PC with 32 GB RAM and no swap vs. a near identical Linux PC with 16 GB RAM with 16 GB swap. Note I am not interested in the "yes, but you could see X improvement if you add swap to the 32 GB PC" . That's off-topic for this question. I first encountered the opinion that adding swap can be better than adding RAM in comments to an earlier problem . I have of course read through this: Do I need swap space if I have more than enough amount of RAM? and... Answers are mostly focussed on adding swap, for example discussing disk caching where adding RAM would of course also extend the disk cache. There is some mention of defragmentation only being possible with swap, but I can't find evidence to back this up. I see some reference to MAP_NORESERVE for mmap , but this seems a very specific and obscure risk only associated with OOM situations and possibly only private mmap. Swap is often seen as a cheap way to extend memory or improve performance. But when mass producing embedded Linux devices this is turned on its head... ... In that case swap will wear flash memory, causing it to fail years before the end of warranty. Where doubling the RAM is a couple of extra dollars on the device. Note that's eMMC flash NOT an SSD! . Typically eMMC flash does not have wearleveling technology meaning it wears MUCH faster than SSDs There does seem to be a lot of hotly contested opinion on this matter . I am really looking for dry facts on capabilities, not "should you / shouldn't you" opinions. What can be done with swap which would not also be done by adding RAM? | Hibernation (or suspend to disk). Real hibernation powers off the system completely, so contents of RAM are lost, and you have to save the state to some persistent storage. AKA Swap. Unlike Windows with hiberfil.sys and pagefile.sys , Linux uses swap space for both over-committed memory and hibernation. On the other hand, hibernation seems a bit finicky to get to work well on Linux. Whether you "can" actually hibernate is a different thing. ¯\_(ツ)_/¯ | {
"score": 7,
"source": [
"https://unix.stackexchange.com/questions/556441",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/20140/"
]
} |
556,503 | What does the word " rolling " in " 1:13.0.1.9-2.rolling.el7 " below signify? ================================================================================================= Package Arch Version Repository Size=================================================================================================Installing: java-latest-openjdk x86_64 1:13.0.1.9-2.rolling.el7 epel 207 kInstalling for dependencies: java-latest-openjdk-headless x86_64 1:13.0.1.9-2.rolling.el7 epel 40 MTransaction Summary================================================================================================= | .rolling was added to the OpenJDK release number in Fedora, RHEL, and descendants to avoid file conflicts with the versioned OpenJDK packages ( java-11-openjdk etc.). The latest package sometimes matches a versioned OpenJDK package, and when that happens, both packages ship files in the same locations; adding .rolling to the latest package’s release avoids this. See #1647298 for details. One possible meaning for “rolling” here is that it’s a continuously-updated package, which might change major versions within a release of the distribution. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/556503",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/2298/"
]
} |
556,545 | I'm doing some stuff with audio files, most but not all of which are mp3 files. Now I want to run some commands on only the files which are not mp3 files, or only those which don't have a .mp3 extension. I consider myself pretty good at regular expressions, but not so much at file globbing, which is subtly different in unexpected ways. I looked around and learned from other SO & SE answers that Bash has "extended globbing" that allows me to do this: file ../foo/bar/*.!(mp3) But some of my filenames have dots in them besides the one forming the filename extension: ../foo/bar/Naked_Scientists_Show_19.10.15.mp3../foo/bar/YWCS_ep504-111519-pt1_5ej4_41cc9320.mp3_42827d48daefaa81ec09202e67fa8461_24419113.mp3../foo/bar/eLife_Podcast_19.09.26.mp3../foo/bar/gdn.sci.080428.bg.science_weekly.mp3 It seems the glob matches from the first dot onward, rather than from the last dot. I looked at the documentation but it seems they are far less powerful than regexes. But I didn't really grok everything as I don't spend that much time on *nix shells. Have I missed some way that I can still do this with Bash globbing? If not, a way to achieve the same thing with find or some other tool would still be worth knowing. | *.!(mp3) matches on foo.bar.mp3 because that's foo. followed by bar.mp3 which is not mp3 . You want !(*.mp3) here, which matches anything that doesn't end in .mp3 . If you want to match files whose name contains at least one . (other than a leading one which would make them a hidden file) but don't end in .mp3 , you could do !(*.mp3|!(*.*)) . In any case, note that unless your bash was built with --enable-extended-glob-default , you'll need to shopt -s extglob for that ksh glob operator to be available. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/556545",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/6648/"
]
} |
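A short demonstration of the corrected pattern, plus a find equivalent that avoids extended globbing entirely (the directory path is the one from the question):

```bash
shopt -s extglob              # needed unless bash was built with extglob on by default
file ../foo/bar/!(*.mp3)      # every name that does not end in .mp3

# The same selection without extglob:
find ../foo/bar -maxdepth 1 -type f ! -name '*.mp3' -exec file {} +
```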
556,677 | I purchased a Human Machine Interface (Exor Esmart04). Running on Linux 3.10.12, however this Linux is stripped down and does not have a C compiler. Another problem is the disk space: I've tried to install GCC on it but I do not have enough disk space for this, does anyone have other solutions or other C compilers which require less disk space? | Usually, for an embedded device, one doesn't compile software directly on it. It's more comfortable to do what is called cross-compilation which is, in short, compiling using your regular PC to another architecture than x86. You said you're new to Linux; just for your information, you're facing a huge problem: cross-compiling to embedded devices is not an easy job. I researched your HMI system and noticed some results that are talking about Yocto. Yocto is, in short, a whole framework to build firmware for embedded devices. Since your HMI massively uses Open Source projects (Linux, probably busybox, etc.) the manufacturer must provide you a way to rebuild all the open source components by yourself.Usually, what you need to do that is the BSP ( Board Support Package ).Hardware manufacturer usually ship it: Using buildroot project that allows you to rebuild your whole firmware from scratch. Using yocto meta that, added to a fresh copy of the corresponding yocto project, will allow you to rebuild your whole firmware too. More rarely, a bunch of crappy scripts and pre-built compiler. So, if I was you, I would: Contact the manufacturer support to ask for the stuff to rebuild the firmware as implied by the use of Open Source. In parallel, search Google for "your HMI + yocto", "your HMI + buildroot", etc. After Googling even more, I found out a Yocto meta on github . You can check the machines implemented by this meta upon the directory conf/machine of the meta. There's currently five machines defined under the following codenames: us01-kit us02-kit us03-kit usom01 usom02 So I suggest that you dig into this. This is probably the way you can build software by yourself.You can also check this page on the github account that may give you some more clues. | {
"score": 7,
"source": [
"https://unix.stackexchange.com/questions/556677",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/385559/"
]
} |
556,748 | I want to observe a socket status periodically, so I need to check the socket status by command. Currently I list all listening sockets by ss and filter them by grep . ss -l | grep -q /run/php/php7.0-fpm.sock Is there better way to check socket's status? | You can get some information by trying to connect, pass nothing and accept nothing before disconnecting. socat -u OPEN:/dev/null UNIX-CONNECT:/run/php/php7.0-fpm.sock There are at least four possible outcomes: If the socket does not exist then the error will be No such file or directory and the exit status will be 1 . If you have no write access to the socket then the error will be Permission denied and the exit status will be 1 . In this case you cannot tell if there's a process listening. If you have write access to the socket and there is no listening process then the error will be Connection refused and the exit status will be 1 . If you have write access to the socket and there is a process listening then the connection will be established. The command will send nothing (like cat /dev/null ), it will not try to receive anything (because of -u ), so it will exit almost immediately. The exit status will be 0 . The connection gets established, briefly but still. The listening process may be configured to accept just one connection, serve it and exit; or to accept one connection at a time. In such case the probing connection will saturate the limit; this is undesirable. However in practice I expect vast majority of listening processes to be able to serve multiple connections and gracefully deal with clients who disconnect ruthlessly. Notes: You need to parse stderr to tell apart cases that generate exit status 1 . The procedure tells nothing about what process is listening. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/556748",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/44001/"
]
} |
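Building on the answer above, if the check needs to run unattended it helps to turn the four possible outcomes into distinct messages (or exit codes) by inspecting socat's stderr. A rough sketch; the matched error strings are the ones quoted in the answer and may vary between socat versions:

    sock=/run/php/php7.0-fpm.sock
    if err=$(socat -u OPEN:/dev/null UNIX-CONNECT:"$sock" 2>&1); then
        echo "a process is listening on $sock"
    else
        case $err in
            *"Connection refused"*) echo "socket exists but nothing is listening" ;;
            *"No such file"*)       echo "socket does not exist" ;;
            *"Permission denied"*)  echo "no write access; cannot tell" ;;
            *)                      echo "unexpected error: $err" ;;
        esac
    fi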
556,851 | I have test.json file with different lengths of rows. Some fictitious example: { "a" : 123, "b": "sd", "c": 45, "d": 1, "e": "" }{ "a": 5, "b": "bfgg", "c": "x4c", "d": 31, "e": "" } I want to just keep just b for every line: { "b": "sd"}{ "b": "bfgg"} | With a proper jq tool: jq -c '{"b": .b}' test.json The output: {"b":"sd"}{"b":"bfgg"} | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/556851",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/386018/"
]
} |
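For the entry above, jq also accepts a shorthand when picking an existing key into a new object, which avoids repeating the name:

    jq -c '{b}' test.json      # same output as jq -c '{"b": .b}'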
556,900 | I have a markdown document myfile.md with a list of English sentences in which some the first letters are lowercased and some are uppercased. All English sentences start with standard English letters; no special characters are used: x X x I need a function by that logic: If any first-in-line English letter is lowercase, uppercase it So to change the file to be looked like this: X X X What I have tried 1) tr I thought to try to do so with tr with regex, based on 'tr '[:lower:]' '[:upper:]' myfile.md but neither I found a way to combine regex in tr , nor I found a way to process data inside a file with tr . Rather, I only found a way to transform text in shell prompt as with: echo x | tr '[:lower:]' '[:upper:]' X 2) sed sed 's/^[a-z]*/[A-Z]/' myfile.mdsed -r 's/^[a-z]*/[A-Z]/' myfile.md But after executing either, myfile.md still contains x X x instead: X X X My question How could I use the described logic from shell, without using any CLUI text editors such as nano or vim ? | Use the \U function in GNU sed. s/^\([a-z]\)/\U\1/ so this captures a single character at the start of the line if it is lowercase, and upper cases it. As the \U leaves other things alone, this can be simplified to s/\(.\)/\U\1/ as the . will match the first character (if any) on the line. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/556900",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/-1/"
]
} |
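One practical detail for the entry above: sed writes to standard output by default, which is part of why myfile.md appeared unchanged. With GNU sed (which the \U answer already assumes) the file can be edited in place:

    sed -i 's/\(.\)/\U\1/' myfile.md       # edit the file in place
    sed -i.bak 's/\(.\)/\U\1/' myfile.md   # same, but keep a backup copy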
556,946 | I am getting these warnings every time I update my initramfs image(-s) with update-initramfs on my Dell PowerEdge T20 server running GNU/Linux Debian Buster 10. 0 . Is there a fix? W: Possible missing firmware /lib/firmware/i915/bxt_dmc_ver1_07.bin for module i915W: Possible missing firmware /lib/firmware/i915/skl_dmc_ver1_27.bin for module i915W: Possible missing firmware /lib/firmware/i915/kbl_dmc_ver1_04.bin for module i915W: Possible missing firmware /lib/firmware/i915/cnl_dmc_ver1_07.bin for module i915W: Possible missing firmware /lib/firmware/i915/glk_dmc_ver1_04.bin for module i915W: Possible missing firmware /lib/firmware/i915/kbl_guc_ver9_39.bin for module i915W: Possible missing firmware /lib/firmware/i915/bxt_guc_ver9_29.bin for module i915W: Possible missing firmware /lib/firmware/i915/skl_guc_ver9_33.bin for module i915W: Possible missing firmware /lib/firmware/i915/kbl_huc_ver02_00_1810.bin for module i915W: Possible missing firmware /lib/firmware/i915/bxt_huc_ver01_07_1398.bin for module i915W: Possible missing firmware /lib/firmware/i915/skl_huc_ver01_07_1398.bin for module i915 | For a general solution, apt-file is your way to solve the Possible missing firmware... warning. E.g.: apt-file search bxt_dmcfirmware-misc-nonfree: /lib/firmware/i915/bxt_dmc_ver1.binfirmware-misc-nonfree: /lib/firmware/i915/bxt_dmc_ver1_07.bin Showing that the package firmware-misc-nonfree provides the missing firmware. Installing the firmware-linux package solves the problem because firmware-linux depends on firmware-linux-nonfree which depends on firmware-misc-nonfree . Detailed instructions: Add non-free to your /etc/apt/sources.list : deb http://deb.debian.org/debian buster main contrib non-freedeb http://deb.debian.org/debian-security/ buster/updates main contrib non-freedeb http://deb.debian.org/debian buster-updates main contrib non-free Install apt-file : sudo apt updatesudo apt install apt-filesudo apt-file update Debian: apt-file | {
"score": 8,
"source": [
"https://unix.stackexchange.com/questions/556946",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/126755/"
]
} |
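The numbered instructions in the entry above stop after installing apt-file; based on the answer's own statement that firmware-misc-nonfree provides the missing files, the remaining steps would presumably be to install that package (or the firmware-linux metapackage) and regenerate the initramfs:

    sudo apt install firmware-misc-nonfree   # or: sudo apt install firmware-linux
    sudo update-initramfs -u -k all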
557,128 | Over the past few years, various live kernel patching techniques have become popular among sysadmins who strive to ensure the highest possible uptimes for their systems. For that process to be possible, a human being prepares custom patches, which are then typically distributed to paying customers, and - sometimes - free of charge to home users. Why is it not possible to automatically create these patches using the difference between the source codes of the running kernel version and the latest available? As I understand it, server kernels, which could profit the most from this, typically only undergo major changes once every a couple of years, and otherwise only receive major bugfix and security updates, seemingly making this even easier. Likewise, if stability was the concern, it would seem quite simple to set up a system where volunteers running machines of relatively low importance would build their patches first, and automatically report back on how well they work. Yet, none of this happens. What am I missing there that makes this the case? | We like to think of running programs like the static source code that creates them. But they are really continually changing. Likewise the kernel in memory is NOT the same as the kernel on disk. To quote Dijkstra in his letter " goto considered harmful "... My first remark is that, although the programmer's activity ends when he has constructed a correct program, the process taking place under control of his program is the true subject matter of his activity, for it is this process that has to accomplish the desired effect; it is this process that in its dynamic behavior has to satisfy the desired specifications. Yet, once the program has been made, the "making' of the corresponding process is delegated to the machine. My second remark is that our intellectual powers are rather geared to master static relations and that our powers to visualize processes evolving in time are relatively poorly developed. For that reason we should do (as wise programmers aware of our limitations) our utmost to shorten the conceptual gap between the static program and the dynamic process , to make the correspondence between the program (spread out in text space) and the process (spread out in time) as trivial as possible From this I would infer that it is a bad idea to have a program or kernel in memory that isn't the result of loading the kernel from disk. If nothing else you want to know that you can reboot and end up with the same kernel as you are running now. As a sys-admin you want to know you got a bonafide regular kernel at the end not some Frankenstein's monster because your live kernel had subtle differences to the one they patched. And live patching is very hard indeed. It is technically impossible to automatically generate live patches It's important to understand that program code effectively rewrites itself. int X = 10;void run(){ X=5;} In this code example X=10 is never executed as code. The number 10 is placed in the location "X" by the compiler. When the 3rd line is executed at run time it replaces the value at location "X". It literally overwrites the value, meaning the number 10 disappears from the running program code entirely. Now we try to live patch this with: int X = 20;void run(){ X=15;} What should X be patched to 20 or 15? Should we patch it at all or just leave it? We are not just changing code here we are changing dynamically generated values. 
You might think that, because they are dynamically generated, you might not need to change them; but if we don't change them, how do we know that 5 or 10 is still a valid value in the new code? This cannot be done automatically! In short, there are techniques with associated tools that can create live patches, but using them and testing the result requires experts. Releasing these tools and expecting home users to understand how to use them is a good way for a lot of home users to screw up their systems. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/557128",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/386246/"
]
} |
557,285 | I normally use pidof to get the pid of a process, and KILL -SIGTERM <pid> to terminate it. The pipe should give the output of one command as an input to another. So why doesn't the following command work? pidof firefox | kill -SIGTERM | The pipe should give the output of one command as an input to another. That's correct, but kill doesn't take any input on standard input. Instead you need to provide it as a command line argument: kill -SIGTERM "$(pidof firefox)" or: pidof firefox | xargs kill $( is command expansion inside the shell, whereas xargs is external. However, these approaches have a number of corner cases, like what to do if there are multiple pids, no pids, etc -- this is why pkill exists: pkill -TERM firefox | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/557285",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/346765/"
]
} |
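One of the corner cases mentioned in the answer above is "no pids": if firefox is not running, pidof prints nothing and kill then runs without arguments. With GNU xargs, the -r (--no-run-if-empty) flag covers that case:

    pidof firefox | xargs -r kill           # do nothing if there is no pid
    pidof firefox | xargs -r kill -TERM     # same, with the signal spelled out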
557,361 | I am looking for a sh equivalent of the following (yes, really.) #!/bin/bashexe1='filepath'n=1var=exe${n}echo ${!var} Where the echo should output filepath .I want to use plain sh. I have already played around with the code a lot, but I didn't manage to get the output right so far. I want to achieve an array-like structure, I'd just want to iterate over the variable name for a known number of variables using a loop. For the array I found Arrays in a POSIX compliant shell but I'm still looking for the answer to the question for curiosity ;) | An equivalent sh script to what's presented in the question: #!/bin/shexe1='filepath'n=1var=exe$neval "echo \"\$$var\"" # or: eval 'echo "$'"$var"'"' The eval line will evaluate the given string as a command in the current environment. The given string is echo \"\$$var\" , which, after variables have been expanded etc., would become echo "$exe1" . This is passed to eval for re-evaluation, and the command prints the string filepath . You also mention arrays. It is true that the POSIX sh shell does not have named arrays in the same sense that e.g. bash has, but it still has the list of positional parameters. With it, you can do most things that you'd need to do with a list in the shell. For example, set -- 'filepath1' 'filepath2' 'filepath3'for name in "$@"; do # or: for name do echo "$name"done which is the same as just for name in 'filepath1' 'filepath2' 'filepath3'do echo "$name"done Since I'm usure of what it is you're trying to achieve, I can't come up with a more interesting example. But here you have an array-like structure that you iterate over. Why you'd want to iterate over names of variables is still not clear. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/557361",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/350571/"
]
} |
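Since the question in the entry above mentions iterating over numbered variable names with a loop, here is a minimal POSIX sh sketch combining the same eval trick with a counter; the names exe1..exe3 are just placeholders:

    #!/bin/sh
    exe1='filepath1' exe2='filepath2' exe3='filepath3'
    n=1
    while [ "$n" -le 3 ]; do
        eval "val=\${exe$n}"   # expands to val=${exe1}, val=${exe2}, ...
        echo "$val"
        n=$((n + 1))
    done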
557,800 | Normally, to zip a directory, I can do something like this: zip -r archive.zip directory/ However, if I try to remove the extension from archive.zip like this: zip -r archive directory/ It implicitly appends the .zip extension to the output. Is there a way to do this without creating a .zip and then renaming it? I'm using this version of zip on Ubuntu 18.04: Copyright (c) 1990-2008 Info-ZIP - Type 'zip "-L"' for software license.This is Zip 3.0 (July 5th 2008), by Info-ZIP. | The -A ( --adjust-sfx ) option causes zip to treat the given archive name as-is: zip -Ar archive directory/ This works even when archive isn’t created as a self-extracting archive. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/557800",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/279720/"
]
} |
557,822 | I thought this would do the trick: find src -type f -regextype egrep -regex '.*(?<!\.d)\.ts' But it doesn't seem to be matching anything. I think that should work, but I guess this "egrep" flavor doesn't support negative backreferences unless I didn't escape something properly. For reference, % find src -type fsrc/code-frame.d.ts # <-- I want to filter this outsrc/foo.tssrc/index.ts Is there another quick way to filter out .d.ts files from my search results? % find --versionfind (GNU findutils) 4.7.0-git | I don't believe egrep supports that syntax (your expression is a Perl compatible regular expression). But you don't need to use regular expressions in your example, just have multiple -name tests and apply a ! negation as appropriate: find src -type f -name '*.ts' ! -name '*.d.ts' | {
"score": 7,
"source": [
"https://unix.stackexchange.com/questions/557822",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/45556/"
]
} |
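If the regex flavour from the question above is preferred over -name tests, the same effect can be had without lookbehind by negating a second -regex test (GNU find; note that -regex matches the whole path):

    find src -type f -regextype egrep -regex '.*\.ts' ! -regex '.*\.d\.ts'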
557,861 | There is a way I can remove the rightmost column: # Preview files on the rightmost column?# And collapse (shrink) the last column if there is nothing to preview?set preview_files falseset preview_directories falseset collapse_preview true But how do I remove the leftmost column which shows the parent directory? I know that I can use set viewmode=multipane to get single column. But in that case, when I use two tabs, I get two columns for two tabs, in the same screen. I want a screen per tab, but I want each screen to be composed of single column. How do I achieve that? | You can use the column ratios to hide the left-most column: set column_ratios 0 | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/557861",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/282376/"
]
} |
557,888 | On a Linux machine that runs systemd, is there any way to see what or who issued a shutdown or reboot? | Examine the system logs of the previous boot with sudo journalctl -b -1 -e . Examine /var/log/auth.log . Are you sure it's not one of "power interruption/spike", "CPU overheat", .... On MY system (Ubuntu 16.04,6), sudo journalctl | grep shutdownJan 29 12:58:07 bat sudo[14365]: walt : TTY=pts/0 ; PWD=/home/walt ; USER=root ; COMMAND=/sbin/shutdown nowFeb 12 11:23:59 bat systemd[1]: Stopped Ubuntu core (all-snaps) system shutdown helper setup service.Feb 19 09:35:18 bat ureadahead[437]: ureadahead:lxqt-session_system-shutdown.png: Ignored relative pathFeb 19 09:35:18 bat ureadahead[437]: ureadahead:gshutdown_gshutdown.png: Ignored relative pathFeb 19 09:35:18 bat ureadahead[437]: ureadahead:mate-gnome-main-menu-applet_system-shutdown.png: Ignored relative pathFeb 27 16:45:40 bat systemd-shutdown[1]: Sending SIGTERM to remaining processes...Mar 05 17:53:27 bat systemd-shutdown[1]: Sending SIGTERM to remaining processes...Mar 15 09:57:45 bat systemd[1]: Stopped Ubuntu core (all-snaps) system shutdown helper setup service.Mar 21 17:40:30 bat systemd[1]: Stopped Ubuntu core (all-snaps) system shutdown helper setup service.Apr 15 18:16:37 bat systemd[1]: Stopped Ubuntu core (all-snaps) system shutdown helper setup service.... The first line shows when user walt did a sudo shutdown now . | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/557888",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/166088/"
]
} |
557,894 | To block a user after a preset number of failed login attempts, one would use a module like pam_tally2 or pam_faillock . What is the difference between these two? Which one should I use when? | Since Linux-PAM 1.4.0 (8th June, 2020) pam_tally and pam_tally2 were deprecated and pam_faillock was introduced, version 1.5.0 (10th November, 2020) removed pam_tally and pam_tally2 If your distro provides pam_faillock use that one, if not use pam_tally2 Source: https://github.com/linux-pam/linux-pam/tags | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/557894",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/68186/"
]
} |
557,906 | using Samba between a linux server and a windows 10 pc client.From windows I go to \\mylinuxserver and go into my home folder and see .cache/.gconf/.gvfs/.mozilla/and so onDesktop/Downloads/my_folders_i_care_about/ I don't want to see all the dot folders and files like .cache .I don't want to see .anything in windows explorer when navigating into a samba share. How can I stop dot files and dot folders from being visible? My smb.conf is currently this for sharing out home directories and any other folder; this is under SLES11 with samba 3.6 but I will be using RHEL 7.6 eventually. [homes] comment = Home Directories valid users = %S, %D%w%S browseable = No read only = No inherit acls = Yes create mask = 660 directory mask = 770[data] path = /data create mask = 660 directory mask = 770 inherit acls = Yes read only = No | There are a couple of ways to do this. If you just want to hide these files (they will still be accessible, if the user(s) know what their names are), add this parameter: hide files = /.*/ To make them completely invisible to the Samba user, do this: veto files = /.*/ FYI - these settings must be put in the section that defines each share; they are not global parameters. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/557906",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/154426/"
]
} |
558,014 | If I want find command to stop after finding a certain number of matches, how do I do that? Background is that I have too many files in a folder, I need to put them into separate folders randomly like: find -max-matches 1000 -exec mv {} /path/to/collection1 \+; find -max-matches 1000 -exec mv {} /path/to/collection2 \+; is this possible to do with find alone? If not, what would be the simplest way to do this? | As you're not using find for very much other than walking the directory tree, I'd suggest instead using the shell directly to do this. See variations for both zsh and bash below. Using the zsh shell mv ./**/*(-.D[1,1000]) /path/to/collection1 # move first 1000 filesmv ./**/*(-.D[1,1000]) /path/to/collection2 # move next 1000 files The globbing pattern ./**/*(-.D[1,1000]) would match all regular files (or symbolic links to such files) in or under the current directory, and then return the 1000 first of these. The -. restricts the match to regular files or symbolic links to these, while D acts like dotglob in bash (matches hidden names). This is assuming that the generated command would not grow too big through expanding the globbing pattern when calling mv . The above is quite inefficient as it would expand the glob for each collection. You may therefore want to store the pathnames in an array and then move slices of that: pathnames=( ./**/*(-.D) )mv $pathnames[1,1000] /path/to/collection1mv $pathnames[1001,2000] /path/to/collection2 To randomise the pathnames array when you create it (you mentioned wanting to move random files): pathnames=( ./**/*(-.Doe['REPLY=$RANDOM']) ) You could do a similar thing in bash (except you can't easily shuffle the result of a glob match in bash , apart for possibly feeding the results through shuf , so I'll skip that bit): shopt -s globstar dotglob nullglobpathnames=()for pathname in ./**/*; do [[ -f $pathname ]] && pathnames+=( "$pathname" )donemv "${pathnames[@]:0:1000}" /path/to/collection1mv "${pathnames[@]:1000:1000}" /path/to/collection2mv "${pathnames[@]:2000:1000}" /path/to/collection3 | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/558014",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/27695/"
]
} |
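To answer the "with find alone?" part of the question above: not with find by itself, but a short pipeline gets close. A sketch assuming GNU findutils/coreutils (find -print0, shuf -z, head -z, xargs -0, mv -t), and assuming the collection directories live outside the tree being searched so already-moved files are not found again:

    # first batch of up to 1000 regular files, chosen at random
    find . -type f -print0 | shuf -z | head -z -n 1000 |
        xargs -r0 mv -t /path/to/collection1
    # re-running the same pipeline picks from the files that remain
    find . -type f -print0 | shuf -z | head -z -n 1000 |
        xargs -r0 mv -t /path/to/collection2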
558,111 | Question Let's say I just entered this command to get a count of how many lines contain a particular string: me@machine $ command-producing-multi-line-output | grep -i "needle" | wc -l Now how could I quickly replace "needle" with some other word? Current Inefficient Solution Right now I: Press Up on the keyboard to load the last command. Press Left or Ctrl + Left until I reach "needle" . Press Backspace or Ctrl + w to delete the word. Type or paste in the new word. Hit Enter . This seems pretty inefficient. Attempted Non-working Solutions I've tried to research history shortcuts like !!:sg/needle/new-needle ; but there are a few problems with this: You can't press Up to put !!:sg/needle/new-needle back on the command line. Doing this will just show what it expands to (running right back into the original problem). Repeating this with yet another new needle requires you to replace both needle and new-needle (i.e. !!:sg/new-needle-from-before/yet-another-new-needle ). You need to type the entire needle instead of using something like !:4 or $4 (i.e. !!:sg/!:4/new-needle or !!:sg/$4/new-needle ) to save time/keystrokes on however long the needle was. I've also found things like !:^-3 "new-needle" !:5-$ but that also has issues: It expands in history so it can't be re-used quickly. Even if it didn't expand, you run into the original problem of needing to replace a word in the middle of a command chain. I believe there has to be some super fast way to do what I want to do, and that some linux gurus out there know it. I would be very grateful for any input or suggestions on this. EDIT: Background Just a bit of background, I work a lot with OpenStack on the command line and I find myself often needing to replace a parameter in several places within a long command chain of piped commands. The way our OpenStack environments are configured, several engineers share a single stack user on any number of servers, and there are several clusters within multiple environments. So a function declared within .bashrc or .profile file isn't really possible. I'm hoping for something portable that can be used quickly with no or very minimal setup required. Otherwise I may just need to resort to using a solution outside of the shell entirely (such as clipboard replacement in AutoHotKey/Keyboard Maestro/AutoKey, etc.). | I've run into a similar situation and present my solution here just in case it's a useful pattern. Once I realize that I'm repeatedly changing one piece of data that's annoying to replace interactively, I'll stop and write a little while loop: $ command-producing-multi-line-output | grep -i "needle" | wc -l0$ command-producing-multi-line-output | grep -i "haystack" | wc -l0$ while read needle; do Up-Arrow to the previous command-producing-multi-line-output line and replace "haystack" with "$needle" command-producing-multi-line-output | grep -i "$needle" | wc -l; donesomething0something else0gold1 (ending with Control + D ) Or the slightly fancier variation: $ while read needle; doprintf 'For: %s\n' "$needle"command-producing-multi-line-output | grep -i "$needle" | wc -l; done If I see a future for this command-line beyond "today", I'll write it into a function or script. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/558111",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/253840/"
]
} |
558,124 | So, I understand the difference between the three ideas in the title. atime -- access time = last time file opened mtime -- modified time = last time file contents was modified ctime -- changed time = last time file inode was modified So, presumably when I type something like find ~/Documents -name '*.py' -type f -mtime 14 will find all match all files ending with .py which were modified in the last 2 weeks. Nothing shows up... So I try find ~/Documents -name '*.py' -type f -atime 1400 which should match anything opened within the last 1400 days (ending with .py and having type file) and still nothing. Am I misunderstanding the documentation? Does it mean exactly 1400 days, for example? A relevant post: find's mtime and ctime options | Yes, -mtime 14 means exactly 14. See the top of that section in the GNU find manual (labelled "TESTS") where it says "Numeric arguments can be specified as [...]": Numeric arguments can be specified as+n for greater than n,-n for less than n,n for exactly n. Note that "less than" means " strictly less than", so -mtime -14 means "last modified at the current time of day, 13 days ago or less" and -mtime +14 means "last modified at the current time of day, 15 days ago or more". | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/558124",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/284384/"
]
} |
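Applying the answer above to the original commands, the "within the last N days" versions need a leading minus sign:

    find ~/Documents -name '*.py' -type f -mtime -14     # modified within the last 2 weeks
    find ~/Documents -name '*.py' -type f -atime -1400   # accessed within the last 1400 days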
558,149 | Somewhat related to how to pipe audio output to mic input and https://askubuntu.com/questions/602593/whats-a-good-soundflower-replacement-for-ubuntu but those both involve "recording". I want to play a movie trailer or just any video on a screenshare in Zoom and have the audio that I hear also go to attendees/viewers. I installed pavucontrol and do have the ability to list "monitors" on the inputs tab, but those "monitor of" inputs don't show up as input/mics in zoom. I feel like I need the ability to re-tag these "monitor of" inputs to not be "monitor" types so that they show up as an input source in Zoom. How can I get the output to be an input in Zoom? | We had exactly the same problem with Zoom in Linux (Ubuntu, Lubuntu and Gentoo). The solution turned out to be as follows, and does not require using PulseAudio Volume Control: Launch the application for playing audio or video. Pause the audio/video if necessary. Click on 'Mute' in Zoom, to mute the mic (see NOTE below). Click on 'Share' at the middle bottom of the Zoom window, select the application (or 'Desktop') and either a) click 'Advanced' if you only want to share audio, or b) click on 'Share computer sound' and on 'Optimise Screen Sharing for Video Clip' if you want to share a video. Click the blue 'Share' button. Unpause/start the audio/video playing in the audio/video application. NOTE: We find we get much better audio quality by muting the mic before sharing audio/video. After starting the audio/video stream, it is then possible to unmute the mic. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/558149",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/27902/"
]
} |
558,210 | On CentOS 8, when I attempt to change the LANG variable to Italian it does not change. Here is an example of what I am doing. LANG=it_IT.UTF-8 Then if I do ls --help I still shows the results in English. Thank you. | Several things are required for that to work: the it_IT.UTF-8 locale has to be available on the system. Check locale -a | grep it the Italian translation for the corresponding application has to be available (for GNU ls , typically something like /usr/share/locale/it/LC_MESSAGES/coreutils.mo on a GNU system like CentOS). the LANG variable has to be exported to the environment for ls to be able to access it ( export LANG ). for the message language setting specifically, on GNU systems, $LANGUAGE takes precedence over the LANG and LC_* variables (unless LC_MESSAGE , LANG or LC_ALL is set to the C/POSIX locale). So if you have LANGUAGE=fr:en:it , you'll get French messages if available even if you set everything else to it_IT.UTF-8 . for the message language setting, LC_MESSAGES takes precedence over LANG and LC_ALL takes precedence over everything else (except LANGUAGE unless it's C / POSIX as seen above). The output of locale should give you a summary of the current settings. env -i | grep -e LANG -e LC_ should give you a list of locale-related environment variables that are currently set. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/558210",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/386191/"
]
} |
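A quick way to walk through the checklist in the answer above from a shell; the localedef line is only needed if the locale turns out not to be installed, and assumes a glibc-based system:

    locale -a | grep -i '^it_IT'                      # is the locale available?
    # sudo localedef -i it_IT -f UTF-8 it_IT.UTF-8    # generate it if missing (assumption: glibc)
    export LANG=it_IT.UTF-8                           # exported, not just set
    unset LANGUAGE LC_ALL LC_MESSAGES                 # make sure nothing overrides it
    ls --help | head -n 2                             # should now print Italian text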
558,220 | I recently discovered on a machine with RHEL6: ls -lbi 917921 -rw-r-----. 1 alex pivotal 5245 Dec 17 20:36 application.yml917922 -rw-r-----. 1 alex pivotal 2972 Dec 17 20:36 application11.yml917939 -rw-r-----. 1 alex pivotal 3047 Dec 17 20:36 application11.yml917932 -rw-r-----. 1 alex pivotal 2197 Dec 17 20:36 applicationall.yml I was wondering how something like this can be achieved ? | I was able to reproduce that behavior. See for example: ls -lib268947 -rw-r--r-- 1 root root 8 Dez 20 12:32 app268944 -rw-r--r-- 1 root root 24 Dez 20 12:33 aрр This is on my system ( Linux debian 4.9.0-7-amd64 #1 SMP Debian 4.9.110-3+deb9u2 (2018-08-13) x86_64 GNU/Linux ). I have a UTF-8 locale and the character p in the above output is not the same, but it looks similar. In the first line it's a LATIN SMALL LETTER P and in the second line a CYRILLIC SMALL LETTER ER (see https://unicode.org/cldr/utility/confusables.jsp?a=p&r=None ). This is just an example, it could be every character in the filename, even the dot. When I use a UTF-8 locale, my shell gives the above output. But if I use a locale that has not all unicode characters for example the default locale c , then the output looks as follows (you can change the local by setting LC_ALL ): LC_ALL=c ls -lib268947 -rw-r--r-- 1 root root 8 Dec 20 12:32 app268944 -rw-r--r-- 1 root root 24 Dec 20 12:33 a\321\200\321\200 This is because the CYRILLIC SMALL LETTER ER is not present in ASCII. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/558220",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/312758/"
]
} |
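A follow-up trick for the entry above: to spot which names contain non-ASCII bytes without changing your whole session locale, run just the listing in the C locale, or filter the expanded names through grep. A small sketch:

    printf '%s\n' * | LC_ALL=C grep -n '[^ -~]'   # names containing any non-ASCII byte
    LC_ALL=C ls -b                                # GNU ls: show escape sequences instead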
558,235 | I have backed up my data previously using rsync -avz on my late Linux Mint installation. Now when I try to backup my data with the same command on Windows 10 Ubuntu subsystem, the rsync just ignores my destination files and tries to copy all the files from source to destination. I also get 2 different error messages on multiple files: rsync: chgrp "/mnt/p/file" failed: Operation not permitted (1) and rsync: mkstemp "/mnt/p/file" failed: Operation not permitted (1) I don't really care about permissions, all I want to do is backup my data to external hard drive. I tried to google a solution for this, but none of these helped: sudo chown -R user:user /mnt/p # No effect rsync -rlptgoD --chmod=ugo=rwX # Throws same errors and does not copy any files at all rsync -avz --no-o --no-g --no-perms # Throws errors "failed to set times on" EDIT: I tried one more provided option: rsync -rtDvz , it throws rsync: failed to set times on "/mnt/p/file": Operation not permitted (1) on every file. The transfer also takes super long time, since it tries to modify all the files even though it is not able. I also noticed one new thing, when I run the command sudo chown -R user /mnt/p and check the file permissions with ls -l p , it shows that all the file permissions are root root . For some reason I can't change the file permissions on my external hard drive files. Oh yeah and I should have told you that I am trying to backup data to external USB stick which is Fat32 file system... Sorry! I don't always realize what stuff is important and what is not when asking questions. | The rsync man page says that -a is the same as -rlptgoD . I recommend making this replacement in your rsync command, i.e. rsync -rlptgoDvz , and then removing individual options which break under NTFS. I expect you will at least need to remove -p (permissions), -g (group), -o (owner), and maybe also -l (symbolic links). Removing all of these would leave you with: rsync -rtDvz | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/558235",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/353098/"
]
} |
558,246 | Is there a difference between a space and a blank? Are tabs, spaces, blanks considered characters? | There is no such thing as "blank", in this context. All you have are characters, and some characters that don't actually print anything visible to you in normal text. However, everything is expressed in terms of characters, yes. There are quite a few non-printing characters in ASCII, you can find a full list here: https://web.itu.edu.tr/sgunduz/courses/mikroisl/ascii.html . The ones you are likely to encounter in text files are the various whitespace characters which are: Space: Tab: \t Newline: \n Carriage return: \r And, less commonly: Bell: \a Backspace: \b Vertical tab: \v Form feed: \f You also have the NULL ( \0 ) which is non-printing but doesn't appear in text files, as well as the special escape ( \e or ^[ ) and Control-Z ( ^Z ) characters but, again, not really found in text files. Relevant links https://en.wikipedia.org/wiki/Control_character https://www.asciitable.com/ So, a "blank" can be a space or a tab or another whitespace character. Or, if you are working with Unicode and not ASCII, you have various other weird things as well. But no matter what you have, they will be characters. When you see whitespace in text, the computer sees some character. A "blank" is never the absence of a character, it is always the presence of a non-printing character. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/558246",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/359294/"
]
} |
558,262 | Is there any difference between doing i.e. dd bs=4M if=archlinux.iso of=/dev/sdx status=progress oflag=sync or doing cp archlinux.iso /dev/sdx && sync , and reason to use one over the other? (aside from the pretty progress bar in dd) | One difference is efficiency, and thus speed. For example, you could get the bytes one by one and copy them to the device, with cat if it had the idealized implementation or in older systems, for example BSD4 : cat archlinux.iso > /dev/sdx In these implementations cat will move each byte independently. That is a slow process, although in practice there will be buffers involved. Note that modern cat implementations will read blocks (see below). With dd and a good block size it will be faster. With cp it depends on the buffer size used by cp (not under your control) and other buffers on the way. The efficiency lies between the idealized implementation of cat and dd with the optimum block size. In practice though modern cat and cp will ask the system for the preferred block size : st_blksize . Note that this doesn't have to be the optimum block size . An analogy: it is like pouring the contents of a glass into another glass. idealized cat would do it one drop at a time. dd will use a spoon, and you define exactly how big the spoon is (system limits apply) cp and modern cat will use its own spoon ( stat -f -c %s filename will tell you how big it is). | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/558262",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/338440/"
]
} |
558,347 | What I have is a CSV file to this effect: +------------+--------------+| Category I | Sub-Category |+------------+--------------+| 1144 | 128 || 1144 | 128 || 1000 | 100 || 1001 | 100 || 1002 | 100 || 1002 | 100 || 1011 | 102 || 1011 | 102 || 1011 | 102 || 1011 | 102 || 1011 | 102 || 1011 | 102 || 1013 | 103 || 1013 | 103 || 1013 | 103 || 1013 | 103 || 1013 | 103 || 1013 | 103 || 1013 | 103 |+------------+--------------+ I wish to concatenate the first and second columns above to form a third, new column with a new arbitrary heading, to this effect: +-------------+--------------+-----------------------+| Category ID | Sub-Category | Arbitrary New Heading |+-------------+--------------+-----------------------+| 1144 | 128 | 1144128 || 1144 | 128 | 1144128 || 1000 | 100 | 1000100 || 1001 | 100 | 1001100 || 1002 | 100 | 1002100 || 1002 | 100 | 1002100 || 1011 | 102 | 1011102 || 1011 | 102 | 1011102 || 1011 | 102 | 1011102 || 1011 | 102 | 1011102 || 1011 | 102 | 1011102 || 1011 | 102 | 1011102 || 1013 | 103 | 1013103 || 1013 | 103 | 1013103 || 1013 | 103 | 1013103 || 1013 | 103 | 1013103 || 1013 | 103 | 1013103 || 1013 | 103 | 1013103 || 1013 | 103 | 1013103 |+-------------+--------------+-----------------------+ My usual go-to utility, csvkit does not have the means to achieve this, afaik - see https://github.com/wireservice/csvkit/issues/930 . What is a simple solution not requiring advanced programming knowledge, which can achieve this? I'm vaguely aware of awk and sed as potential solutions, but I don't want to limit the enquiry to those just in case there is a better (i.e. simpler) solution. The solution must be efficient for very large files, i.e containing 120,000+ lines. Edit: I have included the sample data for the convenience of those wanting to take a crack at it; download here: https://www.dropbox.com/s/achtyxg7qi1629k/category-subcat-test.csv?dl=0 | Using Miller ( https://github.com/johnkerl/miller ) and this example input file Category ID,Sub-Category1001,1281002,1271002,1261004,122 and running mlr --csv put -S '$fieldName=${Category ID}." ".${Sub-Category}' input.csv >output.csv you will have +-------------+--------------+-----------+| Category ID | Sub-Category | fieldName |+-------------+--------------+-----------+| 1001 | 128 | 1001 128 || 1002 | 127 | 1002 127 || 1002 | 126 | 1002 126 || 1004 | 122 | 1004 122 |+-------------+--------------+-----------+ And you could run also csvsql, it works, in this way csvsql -I --query 'select *,("Category ID" || " " || "Sub-Category") fieldname from input' input.csv >output.csv | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/558347",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/7826/"
]
} |
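For the entry above, plain awk is another "simple, no advanced programming" route, and it matches the question's expected output (the two values concatenated with no space). The sketch below assumes the fields contain no quoted commas, since awk is doing naive comma splitting:

    awk 'BEGIN { FS = OFS = "," }
         NR == 1 { print $0, "Arbitrary New Heading"; next }
                 { print $0, $1 $2 }' input.csv > output.csv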
558,373 | I referred to this answer to change my default shell How to change default shell to ZSH - chsh says "invalid shell" After adding zsh to /etc/shells and doing sudo chsh -s "$(command -v zsh)" "${USER}" I ran echo $SHELL (which gave no output btw so I thought that it must have ran successfully) and it gave me /bin/bash I closed my shell and opened up a new shell session and zsh showed up so I thought it must have been fixed with a restart, after that when I ran echo $SHELL in the zsh session it still gave me the same output /bin/bash``$SHELL --version also gives me the bash version not the zsh one everytime I do ctrl+alt+t I do get a zsh session but this echo $SHELL output is making me suspicious. UPDATE: I am running Ubuntu Budgie 18.04.3 and I got /bin/zsh from echo $0 | The SHELL environment variable is only set when you perform a full login, e.g. by logging out and logging in again, or by using su - "$USER" or ssh "$USER@localhost" or some other command that performs a full login. It is usually the login program that sets this variable based on what the user's login shell is in the passwd database. This is called as part of the login process by whatever process is accepting a login request from a user (e.g. via sshd or gdm etc.) From the login(1) manual on Ubuntu: Your user and group ID will be set according to their values in the /etc/passwd file. The value for $HOME , $SHELL , $PATH , $LOGNAME , and $MAIL are set according to the appropriate fields in the password entry. Ulimit, umask and nice values may also be set according to entries in the GECOS field. Just starting a new shell session will not set this variable's value, not even if the shell is started as a "login shell". | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/558373",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/387284/"
]
} |
558,377 | I have a scenario where i want to sum the multiple column The data in the file is : ID|NAME|SAL|COST|PER|TAG1|A|10|10|20|10|1|B|10|15|20|10|1|C|10|17|25|80|1|D|115|110|20|100|1|E|10|10|10|10| i want sum of COLUMN - SAL | COST | PER | TAG one i did with simple command but how to do by creating function awk '{FS="|"}{s+=$3}END{print s}' file.txt The function should be parameterised so that when i pass column name it should calculate sum for that column The sum column's may differ. their may be requirement like only two column sum needed then it should take the two column names and process the sum for that | The SHELL environment variable is only set when you perform a full login, e.g. by logging out and logging in again, or by using su - "$USER" or ssh "$USER@localhost" or some other command that performs a full login. It is usually the login program that sets this variable based on what the user's login shell is in the passwd database. This is called as part of the login process by whatever process is accepting a login request from a user (e.g. via sshd or gdm etc.) From the login(1) manual on Ubuntu: Your user and group ID will be set according to their values in the /etc/passwd file. The value for $HOME , $SHELL , $PATH , $LOGNAME , and $MAIL are set according to the appropriate fields in the password entry. Ulimit, umask and nice values may also be set according to entries in the GECOS field. Just starting a new shell session will not set this variable's value, not even if the shell is started as a "login shell". | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/558377",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/387285/"
]
} |
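The question in the entry above (a parameterised sum over named, pipe-separated columns) can be sketched with an awk call wrapped in a small shell function. sum_columns is just a hypothetical name; the column names passed as arguments are assumed to appear verbatim in the header line:

    sum_columns() {
        file=$1; shift
        awk -F'|' -v cols="$*" '
            NR == 1 { for (i = 1; i <= NF; i++) idx[$i] = i
                      n = split(cols, want, " "); next }
            { for (j = 1; j <= n; j++) sum[want[j]] += $(idx[want[j]]) }
            END { for (j = 1; j <= n; j++) printf "%s=%s\n", want[j], sum[want[j]] }
        ' "$file"
    }
    sum_columns file.txt SAL COST PER TAG    # or any subset, e.g.: sum_columns file.txt SAL COST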
558,432 | This is the json file that we want to edit Example 1 { "topics": [{"topic": "hfgt_kejd_tdsfd”}], "version": 1} Example 2 { "topics": [{"topic": "hfgt_kj_fsrfdfd”}], "version": 1} We want to replace the third word in the line topics with other word ( by sed or perl one liner ) regarding to example 1 , Expected results when we want to replace hfgt_kejd_tdsfd with test_1 { "topics": [{"topic": "test_1”}], "version": 1} example 3 more /tmp/file.json{ "topics": [{"topic": "TOPIC_GGGG”}], "version": 1}# sed 's/\(topic\": "\)[a-z_]*/\1test_1/' /tmp/file.json{ "topics": [{"topic": "test_1TOPIC_GGGG”}], "version": 1} | Using jq : $ jq '.topics[0].topic |= "test_1"' file.json{ "topics": [ { "topic": "test_1" } ], "version": 1} This reads the JSON document and modifies the value of the topic entry of the first element of the array topics to the string test_1 . If you have your value in a variable (encoded in UTF-8): $ val='Some value with "double quotes"'$ jq --arg string "$val" '.topics[0].topic |= $string' file.json{ "topics": [ { "topic": "Some value with \"double quotes\"" } ], "version": 1} Using Perl: $ perl -MJSON -0777 -e '$h=decode_json(<>); $h->{topics}[0]{topic}="test_1"; print encode_json($h), "\n"' file.json{"topics":[{"topic":"test_1"}],"version":1} With a variable: $ val='Some value with "double quotes"'$ STRING=$val perl -MJSON -0777 -e '$string = $ENV{STRING}; utf8::decode $string; $h=decode_json(<>); $h->{topics}[0]{topic}=$string; print encode_json($h), "\n"' file.json{"topics":[{"topic":"Some value with \"double quotes\""}],"version":1} Both of these uses the Perl JSON module to decode the JSON document, change the value that needs changing, and then output the re-encoded data structure. The error handling is left as an exercise. For the second piece of code, the value to be inserted is passed as an environment variable, STRING , into the Perl code. This is due to reading the JSON document from file in "slurp" mode with -0777 . | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/558432",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/237298/"
]
} |
558,539 | I'm following along a rather good text about SSH certificates written for RHEL 6 and openssh-5.3p1-94.el6 (which makes it about 10 years old) while trying to mimic the examples on my OpenBSD-current system. One of the examples shows creating a host CA key and then signing the host's RSA key: ssh-keygen -s ~/.ssh/ca_host_key -I host_name -h -Z host_name.example.com -V -1w:+54w5d /etc/ssh/ssh_host_rsa.pubEnter passphrase:Signed host key /root/.ssh/ssh_host_rsa-cert.pub: id "host_name" serial 0 for host_name.example.com valid from 2015-05-15T13:52:29 to 2016-06-08T13:52:29 When I tried this on OpenBSD, I don't get the for host_name.example.com output. The text says that The -Z option restricts this certificate to a specific host within the domain. ... and I'm a bit confused about this as the OpenBSD manual for ssh-keygen(1) never mentions a -Z option at all. I'm also confused about ssh-keygen accepting this undocumented option without complaining. Looking at the source code for ssh-keygen , the -Z option is accepted but seems to have something to do with a "format cipher" (or possibly "cipher format") rather than with a hostname (provided that I'm looking at the correct code): case 'Z': openssh_format_cipher = optarg; break; Looking at older versions of the code, it has always had something to do with "format cipher". The release notes for OpenSSH do not mention -Z . Question: Is ssh-keygen on RHEL 6 (I'm not sure the release is relevant, but the equivalent documentation about SSH certificates for RHEL 7 or RHEL 8 seems not to be available) patched with RedHat-only patches that makes -Z act differently? | tl;dr; with newer versions of OpenSSH, you should use the -n option instead of -Z to set the principals (eg. hostname or user). Looking at the source code for ssh-keygen , the -Z option is accepted but seems to have something to do with a "format cipher" Yes, and the reason why you don't get an error is because that openssh_format_cipher variable is not used when creating a certificate, but only when generating a key with a passphrase. If you generate a key with ssh-keygen -f ./path -Z some_garbage and set a passphrase you will get an error. Is ssh-keygen on RHEL 6 ... patched with RedHat-only patches that makes -Z act differently? Yes, it used to be. You can see in the openssh-5.3p1-ssh-certificates.patch from here : + case 'Z':+ cert_principals = optarg;+ break; case 'p': That patch is no longer used in newer rpms. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/558539",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/116858/"
]
} |
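Following the tl;dr in the answer above, the RHEL example from the question would translate to stock OpenSSH roughly as below (same key paths as in the question; -n takes the principal list that the patched -Z used to take):

    ssh-keygen -s ~/.ssh/ca_host_key -I host_name -h \
        -n host_name.example.com -V -1w:+54w5d /etc/ssh/ssh_host_rsa.pub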
558,590 | Based on the /etc/shadow(5) documentation for the second (password) field: encrypted password If the password field contains some string that is not a valid result of crypt(3), for instance ! or *, the user will not be able to use a unix password to log in (but the user may log in the system by other means). My question is whether there is a Linux command to disable the user's password, i.e. set a "*" or a "!" in the password field. | You are looking for passwd -l user . From man passwd : Options: [...] -l, --lock lock the password of the named account. This option disables a password by changing it to a value which matches no possible encrypted value (it adds a '!' at the beginning of the password). | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/558590",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/27308/"
]
} |
558,636 | For example, the \c format does not work.I'm input printf "ABC\ctest" to bash console and result; ABC\ctest Considering the \c format's property, the expected output should be in the form of just ABC . Nor did I find a proper source that detailed the use of the printf command on bash. Also, as in the above example, the properties on the man page specified for the printf command do not work correctly. Please show me a source on bash that explains the printf command in detail. Because I'm so confused right now. | printf 'ABC\ctest' You've hit upon one of the unspecified parts of the printf command, whose behaviour varies from implementation to implementation. You're putting the \c in the wrong place. If you read the Single Unix Specification 's description of printf carefully, you will see that \c is not listed in the list of escape sequences that are defined for the format string (the first argument to the command). Rather, it is defined as an extra escape sequence that is recognized when given in an argument string that is to be formatted by the %b format specifier . In other words: printf '%b\n' 'ABC\ctest' has a well specified behaviour. The \c causes everything remaining (including the newline in the format string) to be ignored. printf '%s\n' 'ABC\ctest' has a well specified behaviour. The \c is not an escape sequence in the first place. printf '\c' does not have a well specified behaviour. The SUS is simply silent upon what \c is, not listing it as an escape sequence but also not saying in its File Format Notation section that such a sequence is never an escape sequence. How different shells behave in response to this non-conformant format string varies quite significantly. Here are the Debian Almquist, Bourne Again, FreeBSD Almquist, Korn '93, and Z shells' reactions (with % s showing where no newlines have been emitted): % dash -c "printf 'ABC\ctest\n'" ABC\ctest% bash -c "printf 'ABC\ctest\n'" ABC\ctest% sh -c "printf 'ABC\ctest\n'" ABC%% ksh93 -c "printf 'ABC\ctest\n'" ABCest% zsh -c "printf 'ABC\ctest\n'" ABC%% The builds of the MirBSD Korn and PD Korn shells that I have do not have a printf built-in command. The FreeBSD non-built-in printf does this: % /usr/bin/printf 'ABC\ctest\n' ABC%% Adding to the fun is that the doco for the various shells is sometimes highly misleading and even sometimes downright erroneous. For examples: The Z shell doco only recently started to give a correct description of what \c does, and lists it (via its doco for echo ) as an escape sequence allowed in format strings. (It was incorrect until 2017 , the doco neither agreeing with the SUS nor describing what the Z shell actually did.) The Korn '93 shell doco gives a description that is in line with the SUS , but it is not (as can be seen in the aforegiven) what it actually does when \c is in the format specifier. It, too, documents \c as an escape sequence for the format string. Its behaviour is clearly a bug. The doco for the Bourne Again shell and the doco for the Debian Almquist shell give a description of \c that matches the SUS and explicitly list it in relation to %b (in the case of the Bourne Again shell more clearly than it does now up until 2016 ) and not in a general list of escape sequences for printf format specifiers. These shells do not provide this as an extension to the standard. The FreeBSD Almquist shell doco defers to the FreeBSD external printf command manual, whose description of \c is in line with the SUS . 
It explicitly lists it as an escape sequence allowed in format strings, and its actual behaviour is as documented in the user manual. The FreeBSD Almquist shell and (recent) Z shell are the only shells, here, that both document allowing \c as an escape sequence in format strings (an extension to what is defined by the standard) and actually behave as they are documented. Ironic further reading Why is printf better than echo? | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/558636",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/364572/"
]
} |
558,773 | One might think that echo foo >acat a | rev >a would leave a containing oof ; but instead it is left empty. Why? How would one otherwise apply rev to a ? | There's an app for that! The sponge command from moreutils is designed for precisely this. If you are running Linux, it is likely already installed, if not search your operating system's repositories for sponge or moreutils . Then, you can do: echo foo >acat a | rev | sponge a Or, avoiding the UUoC : rev a | sponge a The reason for this behavior is down to the order in which your commands are run. The > a is actually the very first thing executed and > file empties the file. For example: $ echo "foo" > file$ cat filefoo$ > file$ cat file$ So, when you run cat a | rev >a what actually happens is that the > a is run first, emptying the file, so when the cat a is executed the file is already empty. This is precisely why sponge was written (from man sponge , emphasis mine): sponge reads standard input and writes it out to the specified file. Unlike a shell redirect, sponge soaks up all its input before writing the output file. This allows constructing pipelines that read from and write to the same file. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/558773",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/134690/"
]
} |
558,896 | I know that this may sound as "not as intended by designer" but I have real life situation where a bash script I am modifying needs to call another bash script that I am not allowed to modify. That "unmodifiable bash script" starts with: source `dirname $0`/setenv.sh (that setenv.sh also starts with SCR2PATH=source "$( cd "$(dirname "$0")" ; pwd -P )" and is also unmodifiable) Is there a trick in which I can fool the child script with a different $0 than that of the calling script? | Maybe this will do: bash -c '. /path/to/unmodifiable' /fake/path args ... Obviously, /path/to/unmodifiable should be a shell script; this silly trick won't work with other kind of executables. This also won't work with zsh (but will work with other shells like dash , ksh , etc). If you have to source instead of call the unmodifiable script, you can use the same trick from a wrapper calling your outer script, or have it re-call itself: $ cat unmodifiable#! /bin/shprintf '{%s} ' "$0" "$@"echo$ cat becoming#! /bin/sh[ "$MYSELF" ] || MYSELF=$0 exec sh -c '. "$MYSELF"' "I'm the mountain!" "$@". ./unmodifiable$ chmod 755 becoming$ ./becoming 1 2 3{I'm the mountain!} {1} {2} {3} | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/558896",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/382805/"
]
} |
559,129 | I would like to set an environment variable so that it is set when I launch a specific Flatpak application, and only set for this application. How do I go about doing this in a permanent manner? | You can do this via the flatpak override command. To set only one environment variable you can use this syntax: flatpak override --env=VARIABLE_NAME=VARIABLE_VALUE full.application.Name To set multiple environment variables you can use this syntax: flatpak override --env=VARIABLE_NAME_ONE=VARIABLE_VALUE_ONE --env=VARIABLE_NAME_TWO=VARIABLE_VALUE_TWO full.application.Name This will set it globally and therefore requires you to run the command as root. If you want to do this for your current user, you can add the --user parameter to the command, like so: flatpak override --user --env=VARIABLE_NAME=VARIABLE_VALUE full.application.Name Source and further reading: http://docs.flatpak.org/en/latest/flatpak-command-reference.html#flatpak-override | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/559129",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/324625/"
]
} |
559,201 | Why doesn't su work but sudo -i does? When I want to become root via su , the system doesn't accept my password, but via sudo -i it works. su: Authentication failure | Set/change the root password: sudo passwd root Then you will be able to run the su command. (On Ubuntu, the root account has no password set by default, so su cannot authenticate you as root until one is set.) | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/559201",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/386382/"
]
} |
559,236 | I am able to filter out jobs which got stuck in our queueing system with: > qjobs | grep "racon" 5240703 racon-3/utg001564l-racon-3.fasta H 1 1 0 10.0 0.0 150 :03 5241418 racon-3/utg002276l-racon-3.fasta H 1 1 0 10.0 0.0 150 :02 5241902 racon-3/utg002759l-racon-3.fasta H 1 1 0 10.0 0.0 150 :03 5242060 racon-3/utg002919l-racon-3.fasta H 1 1 0 10.0 0.0 150 :04 5242273 racon-3/utg003133l-racon-3.fasta H 1 1 0 10.0 0.0 150 :03 5242412 racon-3/utg003270l-racon-3.fasta H 1 1 0 10.0 0.0 150 :04 5242466 racon-3/utg003325l-racon-3.fasta H 1 1 0 10.0 0.0 150 :03 However, qjobs | grep "racon" | cut -d " " -f2 did not return e.g. racon-3/utg003325l-racon-3.fasta . What did I miss? | Every space counts towards the field number, even leading and consecutive ones. Hence, you need to use -f9 instead of -f2 . Alternatively, you can use awk '{ print $2 }' in place of the cut command entirely. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/559236",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/34872/"
]
} |
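A small demonstration of the point made in the answer above: for cut every single space is a delimiter (so leading and repeated spaces create empty fields), while awk splits on runs of whitespace:
line='  5240703  racon-3/utg001564l-racon-3.fasta'
printf '%s\n' "$line" | cut -d ' ' -f2      # empty: field 2 lies between the two leading spaces
printf '%s\n' "$line" | cut -d ' ' -f3      # 5240703
printf '%s\n' "$line" | awk '{ print $2 }'  # racon-3/utg001564l-racon-3.fasta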
559,323 | I'm trying to revive my rusty shell scripting skills, and I've run into a problem with case statements. My goal in the program below is to evaluate whether a user-supplied string begins with a capital or lowercase letter: # practicing case statementsecho "enter a string"read yourstringecho -e "your string is $yourstring\n"case "$yourstring" in [A-Z]* ) echo "your string begins with a Capital Letter" ;; [a-z]* ) echo "your string begins with a lowercase letter" ;; *) echo "your string did not begin with an English letter" ;;esacmyvar=nopecase $myvar in N*) echo "begins with CAPITAL 'N'" ;; n*) echo "begins with lowercase 'n'" ;; *) echo "hahahaha" ;;esac When I enter a string beginning with a lowercase letter (e.g., "mystring" with no quotes), the case statement matches my input to the first case and informs me that the string begins with a capital letter. I wrote the second case statement to see if I was making some obvious syntax or logic error (perhaps I still am), but I don't have the same problem. The second case structure correctly tells me that the string held by $myvar begins with a lowercase letter. I have tried using quotes to enclose $yourstring in the first line of the case statement, and I've tried it without quotes. I read about the 'shopt' options and verified that 'nocasematch' was off. (For good measure, I toggled it on and tried again, but I still didn't get the correct result from my first case statement.) I've also tried running the script with sh and bash, but the output is the same. (I call the shell explicitly with "sh ./case1.sh" and "bash ./case1.sh" because I did not set the execution bit. Duplicating the file and setting the execution bit on the new file did not change the output.) Though I don't understand all of the output from running the shell with the '-x' debug option, the output shows the shell progressing from the first "case" line to execution of the command following the first pattern. I interpret this to mean that the first pattern was a match for the input string, but I am uncertain why. When I switch the order of the first two patterns (and corresponding commands), the case statement succeeds for lowercase letters but incorrectly reports "MYSTRING" as beginning with lowercase letters. Since anything alphabetic is detected as matching whichever pattern appears first, I think I have a logical error...but I'm not sure what. I found a post by "pludi" on unix.com in which it is advised that "the tests for lowercase and upper case characters were [a-z] and [A-Z]. This no longer works in certain locales and/or Linux distros." (see https://www.unix.com/shell-programming-and-scripting-128929-example-switch-case-bash.html ) Sure enough, replacing the character ranges with [[:upper:]] and [[:lower:]] resolved the problem. I'm on Fedora 31, and my locale output is as follows: LANG=en_US.UTF-8LC_CTYPE="en_US.UTF-8" LC_NUMERIC="en_US.UTF-8" LC_TIME="en_US.UTF-8" LC_COLLATE="en_US.UTF-8" LC_MONETARY="en_US.UTF-8" LC_MESSAGES="en_US.UTF-8" LC_PAPER="en_US.UTF-8" LC_NAME="en_US.UTF-8" LC_ADDRESS="en_US.UTF-8" LC_TELEPHONE="en_US.UTF-8" LC_MEASUREMENT="en_US.UTF-8" LC_IDENTIFICATION="en_US.UTF-8" LC_ALL= I'd like to know whether I'm not understanding character ranges, or not understanding how pattern matching works in case statements, or if the underlying shell capabilities changed (and why?). If anyone has the patience, I would greatly appreciate an explanation; I'm also happy to read relevant documentation. Thanks! 
| A simple answer, one which no doubt others can supersede. The character set ordering is now different depending on which locale is in use. The concept of locale was introduced to support different nationalities and their different languages. As you can see from the output of locale there are several different areas now addressed - not just collation. In your case it's US, and for sorting and collation purposes the alphabet is either AaBbCc...Zz or A=a, B=b, C=c, etc.(I forget which, and I'm not at a computer where I can verify one over the other). Locales are very complicated, and in certain locales there can be characters that are invisible as far as sorting and collation are concerned. The same character can sort differently depending on which locale is in use. As you've found, the correct way to identify lowercase characters is with [[:lower:]] ; this will include accented characters where necessary, and even lowercase characters in different alphabets (Greek, Cyrillic, etc.). If you want the classic ordering you can revert per application or even per command by setting LC_ALL=C . For a contrived example, grep some_pattern | LC_ALL=C sort | nl | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/559323",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/388061/"
]
} |
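A compact test of the behaviour described in the answer above; whether the first case prints "upper" depends on the shell and C library (recent bash versions default to ASCII-style ranges, which hides the locale effect), so treat the comments as hedged:
str=mystring
case $str in ([A-Z]*) echo upper ;; ([a-z]*) echo lower ;; esac
# may print "upper" under en_US.UTF-8 when ranges follow locale collation (the problem in the question)
case $str in ([[:upper:]]*) echo upper ;; ([[:lower:]]*) echo lower ;; esac
# prints "lower" regardless of locale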
559,407 | Most Linux guides consist of pages like"you need to run command_1 , then command_2 , then command_3 " etc. Since I don't want to waste my time running all of them manually, I'd rather create a script command_1command_2command_3 and run it once. But, more often than not, some commands will fail, and I will have no idea, which commands have failed. Also, usually all the rest commands make no sense if something failed earlier. So a better script would be something like (command_1 && echo OK command_1 || (echo FAILED command_1; false) )&& (command_2 && echo OK command_2 || (echo FAILED command_2; false) )&& (command_3 && echo OK command_3 || (echo FAILED command_3; false) )&& echo DONE || echo FAILED But it requires to write too much boilerplate code, repeat each command 3 times, and there is too high chance, that I mistype some of the braces. Is there a more convenient way of doing what the last script does? In particular: run commands sequentially break if any command fails write, what command has failed, if any Allows normal interactions with commands: prints all output, and allows input from keyboard, if command asks anything. Answers summary (2 January 2020) There are 2 types of solutions: Those, that allow to copy-paste commands from guide without modifications, but they don't print the failed command in the end. So, if failed command produced a very long output, you will have to scroll a lot of lines up, to see, what command has failed. (All top answers) Those, that print the failed command in the last line, but require you to modify commands after copy-pasting them, either by adding quotations (answer by John), or by adding try statements and splitting chained commands into separate ones (answer by Jasen). You rock folks, but I'll leave this question opened for a while. Maybe someone knows a solution that satisfies both needs (print failed command on the last line & allow copy-pasting of commands without their modifications). | One option would be to put the commands in a bash script, and start it with set -e . This will cause the script to terminate early if any command exits with non-zero exit status. See also this question on stack overflow: https://stackoverflow.com/q/19622198/828193 To print the error, you could use trap 'do_something' ERR Where do_something is a command you would create to show the error. Here is an example of a script to see how it works: #!/bin/bashset -etrap 'echo "******* FAILED *******" 1>&2' ERRecho 'Command that succeeds' # this command worksls non_existent_file # this should failecho 'Unreachable command' # and this is never called # due to set -e And this is the output: $ ./test.sh Command that succeedsls: cannot access 'non_existent_file': No such file or directory******* FAILED ******* Also, as mentioned by @jick , keep in mind that the exit status of a pipeline is by default the exit status of the final command in it. This means that if a non-final command in the pipeline fails, that won't be caught by set -e . To fix this problem if you are concerned with it, you can use set -o pipefail As suggested my @glenn jackman and @Monty Harder , using a function as the handler can make the script more readable, since it avoids nested quoting. 
Since we are using a function anyway now, I removed set -e entirely, and used exit 1 in the handler, which could also make it more readable for some:
#!/bin/bash
error_handler() {
    echo "******* FAILED *******" 1>&2
    exit 1
}
trap error_handler ERR
echo 'Command that succeeds'  # this command works
ls non_existent_file          # this should fail
echo 'Unreachable command'    # and this is never called
                              # due to the exit in the handler
The output is identical to the above, though the exit status of the script is different. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/559407",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/144004/"
]
} |
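A short follow-up to the pipefail remark in the answer above; without it, a failure on the left-hand side of a pipeline slips past set -e:
#!/bin/bash
set -e
false | true            # pipeline status is that of "true", so execution continues
echo "still running"
set -o pipefail
false | true            # now the pipeline reports failure and set -e aborts here
echo "never reached"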
559,413 | If there's only 1 user on a system with sudo permission, can another program that ran by the user get root permission without the user knowing it if it has the sudo password? | One option would be to put the commands in a bash script, and start it with set -e . This will cause the script to terminate early if any command exits with non-zero exit status. See also this question on stack overflow: https://stackoverflow.com/q/19622198/828193 To print the error, you could use trap 'do_something' ERR Where do_something is a command you would create to show the error. Here is an example of a script to see how it works: #!/bin/bashset -etrap 'echo "******* FAILED *******" 1>&2' ERRecho 'Command that succeeds' # this command worksls non_existent_file # this should failecho 'Unreachable command' # and this is never called # due to set -e And this is the output: $ ./test.sh Command that succeedsls: cannot access 'non_existent_file': No such file or directory******* FAILED ******* Also, as mentioned by @jick , keep in mind that the exit status of a pipeline is by default the exit status of the final command in it. This means that if a non-final command in the pipeline fails, that won't be caught by set -e . To fix this problem if you are concerned with it, you can use set -o pipefail As suggested my @glenn jackman and @Monty Harder , using a function as the handler can make the script more readable, since it avoids nested quoting. Since we are using a function anyway now, I removed set -e entirely, and used exit 1 in the handler, which could also make it more readable for some: #!/bin/basherror_handler() { echo "******* FAILED *******" 1>&2 exit 1}trap error_handler ERRecho 'Command that succeeds' # this command worksls non_existent_file # this should failecho 'Unreachable command' # and this is never called # due to the exit in the handler The output is identical as above, though the exit status of the script is different. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/559413",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/388150/"
]
} |
559,437 | I'm trying to get the most recent version of Git installed onto my Debian Buster machine, and I'm running into trouble. The most recent version of Git on stable is 2.20. I found that the testing branch has the right version, but I'm not having any success with backports. I've added
deb http://deb.debian.org/debian/ buster-backports main contrib
deb-src http://deb.debian.org/debian/ buster-backports main contrib
to /etc/apt/sources.list and done sudo apt-get update , but every time I run sudo apt-get -t buster-backports install git I end up with 2.20 again. I've also tried using apt-get to remove git and then install it, but no luck. Any advice? Thanks! | Since February 2020, a new-enough version of git is available in Buster backports (2.30.2 since June 2021); to install that, run
sudo apt install -t buster-backports git
Readers who haven't already enabled Buster backports will need to run
echo deb http://deb.debian.org/debian buster-backports main | sudo tee /etc/apt/sources.list.d/buster-backports.list
sudo apt update
first. The rest of the answer is obsolete with respect to the actual question, but can be applied generally for other packages (at least, for the current release of Debian, which is no longer Buster). To get version 2.24 or later, in the absence of a backport I recommended two approaches: ask for a backport, or build the 2.24 source package. To ask for a backport, file a wishlist bug on git using reportbug . Backports have been made available in the past, so there's a decent chance someone will provide one if you explain why you want it. To build a newer package from source, run
sudo apt-get install devscripts dpkg-dev build-essential
sudo apt-get build-dep git
dget https://deb.debian.org/debian/pool/main/g/git/git_2.24.1-1.dsc
cd git-2.24.1
dpkg-buildpackage -us -uc
You can replace git_2.24.1-1.dsc and git-2.24.1 with whatever is appropriate for the version you wish to install; see the Debian package tracker to find out which versions are available as source packages. This will install the necessary build dependencies and build the packages. You can then install the ones you need using sudo dpkg -i . It's not worth upgrading all your distribution to testing, just to get a newer version of git ... | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/559437",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/388176/"
]
} |
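Two quick checks that complement the answer above: confirming which versions apt actually offers, and installing the locally built packages (the .deb name below is only an example; use whatever dpkg-buildpackage produced in the parent directory):
apt-cache policy git                     # candidate version and per-repository availability
apt list -a git                          # the same information, as a list
sudo dpkg -i ../git_2.24.1-1_amd64.deb   # example file name; adjust to your build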
559,526 | I want to download a file ( https://discovery.ucl.ac.uk/1575442/1/Palmisanoetal.zip ) from a public server via the R command temp <- tempfile() utils::download.file(db_url, temp, method = 'curl') This does not work on my Ubuntu 18.04.3 LTS (Bionic Beaver) system. I get the following error: curl: (60) SSL certificate problem: unable to get local issuer certificateMore details here: https://curl.haxx.se/docs/sslcerts.htmlcurl failed to verify the legitimacy of the server and therefore could notestablish a secure connection to it. To learn more about this situation andhow to fix it, please visit the web page mentioned above.Error in utils::download.file(db_url, temp, method = "curl") : 'curl' call had nonzero exit status I get the same error on the command line with curl ( curl https://discovery.ucl.ac.uk/1575442/1/Palmisanoetal.zip ). I did some experiments and googling and realized that I can access the file without any problems with my Browser (Chromium). My system/curl seems to lack a CA certificate that my browser has. I tried to determine which certificate this server is using with openssl s_client -showcerts -servername discovery.ucl.ac.uk -connect discovery.ucl.ac.uk:443 and added the result (QuoVadis EV SSL ICA G3) to my /etc/ssl/certs/ca-certificates.crt file. This did not solve the problem. I don't want to solve this with the curl --insecure flag. I also don't have any control over https://discovery.ucl.ac.uk . I just want to access the file with R. | Curl is failing because that site is incorrectly configured Certificates are used to sign other certificates, forming chains. A CA has a root certificate , which is trusted by operating systems and browsers. This root certificate is most commonly used to sign one or several intermediate certificates , which in turn are used to sign leaf certificates (that can not sign other certificates), which are what websites use. Browsers and operating systems tend to carry only the root certificates, but to verify a leaf certificate (and establish a secure connection), a client needs the entire chain of certificates. In practice, that means that a website must not just supply its leaf certificate, it must also supply the used intermediate certificate. And discovery.ucl.ac.uk fails to do that. I'll show you. 
Finding the problem openssl is a X509 / SSL swiss army knife that proves very useful here: % openssl s_client -connect discovery.ucl.ac.uk:443 -servername discovery.ucl.ac.uk -showcertsCONNECTED(00000003)depth=0 jurisdictionC = GB, businessCategory = Government Entity, serialNumber = November-15-77, C = GB, ST = London, L = London, O = University College London, CN = discovery.ucl.ac.ukverify error:num=20:unable to get local issuer certificateverify return:1depth=0 jurisdictionC = GB, businessCategory = Government Entity, serialNumber = November-15-77, C = GB, ST = London, L = London, O = University College London, CN = discovery.ucl.ac.ukverify error:num=21:unable to verify the first certificateverify return:1140212799304832:error:141A318A:SSL routines:tls_process_ske_dhe:dh key too small:../ssl/statem/statem_clnt.c:2150:---Certificate chain 0 s:jurisdictionC = GB, businessCategory = Government Entity, serialNumber = November-15-77, C = GB, ST = London, L = London, O = University College London, CN = discovery.ucl.ac.uk i:C = BM, O = QuoVadis Limited, CN = QuoVadis EV SSL ICA G3-----BEGIN CERTIFICATE-----MIIH4DCCBcigAwIBAgIUWsWTIuklFQIki5zk7SzvkyYF4MswDQYJKoZIhvcNAQELBQAwSTELMAkGA1UEBhMCQk0xGTAXBgNVBAoMEFF1b1ZhZGlzIExpbWl0ZWQxHzAdBgNVBAMMFlF1b1ZhZGlzIEVWIFNTTCBJQ0EgRzMwHhcNMTkwOTExMTAyNDExWhcNMjEwOTExMTAzNDAwWjCBuzETMBEGCysGAQQBgjc8AgEDEwJHQjEaMBgGA1UEDwwRR292ZXJubWVudCBFbnRpdHkxFzAVBgNVBAUTDk5vdmVtYmVyLTE1LTc3MQswCQYDVQQGEwJHQjEPMA0GA1UECAwGTG9uZG9uMQ8wDQYDVQQHDAZMb25kb24xIjAgBgNVBAoMGVVuaXZlcnNpdHkgQ29sbGVnZSBMb25kb24xHDAaBgNVBAMME2Rpc2NvdmVyeS51Y2wuYWMudWswggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQCvh4j4ub+jjytAuayjz1jXpFooMEgg09Opvruzy1Vkz8KT7VYFurfQpp7xO0kDJV9bz4U6vVUmqd9R2NaJDs0TtpKjyDFwNq1XR2+39L6JlJuIxdGRUMNLh1geNfBB7QJHac0IxwstH/mXU9H4eU1JyS8TuVnpCbDZnSqCaQ08hl4137FGrloSL+EHqErzrmz8NzNd725EKSG1/XP8d8O1FJDaAyvES2JfJWuhrcwa6WPPQdCu2cI4GzMRzPes3aD+IjJl8tGVep5ketM+Kgsrn9tjiZhFcSOcxO0apRAAAYOA6NBoZvPCLr16CGQSJM/0e2N2PM/PUh14db39Me79AgMBAAGjggNLMIIDRzAMBgNVHRMBAf8EAjAAMB8GA1UdIwQYMBaAFOWEVNCQSZ84uvLJ4SoIxU6foEg/MHgGCCsGAQUFBwEBBGwwajA5BggrBgEFBQcwAoYtaHR0cDovL3RydXN0LnF1b3ZhZGlzZ2xvYmFsLmNvbS9xdmV2c3NsZzMuY3J0MC0GCCsGAQUFBzABhiFodHRwOi8vZXYub2NzcC5xdW92YWRpc2dsb2JhbC5jb20wMQYDVR0RBCowKIITZGlzY292ZXJ5LnVjbC5hYy51a4IRZXByaW50cy51Y2wuYWMudWswWgYDVR0gBFMwUTBGBgwrBgEEAb5YAAJkAQIwNjA0BggrBgEFBQcCARYoaHR0cDovL3d3dy5xdW92YWRpc2dsb2JhbC5jb20vcmVwb3NpdG9yeTAHBgVngQwBATAdBgNVHSUEFjAUBggrBgEFBQcDAgYIKwYBBQUHAwEwPAYDVR0fBDUwMzAxoC+gLYYraHR0cDovL2NybC5xdW92YWRpc2dsb2JhbC5jb20vcXZldnNzbGczLmNybDAdBgNVHQ4EFgQU0+IV/WaITVrZeCsIddZvFZSkuUswDgYDVR0PAQH/BAQDAgWgMIIBfwYKKwYBBAHWeQIEAgSCAW8EggFrAWkAdgC72d+8H4pxtZOUI5eqkntHOFeVCqtS6BqQlmQ2jh7RhQAAAW0f40mRAAAEAwBHMEUCIQDYLCvmTrDxh16qE30yqTirA3A+Xv6TZlpUssZxI+ApqgIgSGicwtcECtcjsSnKmEwUVv6hQnvksG7dH5AqPZ7jbQ0AdwBWFAaaL9fC7NP14b1Esj7HRna5vJkRXMDvlJhV1onQ3QAAAW0f40m4AAAEAwBIMEYCIQCPhcwTIoiYCt6Esw49b7bcvRyREX29fRuaX36wJxQ6TAIhAJyPt8r3g++LxWdb/sWRfF7Jn4zlyA7iUWFTF84dwK5xAHYAb1N2rDHwMRnYmQCkURX/dxUcEdkCwQApBo2yCJo32RMAAAFtH+NKoAAABAMARzBFAiB/85erYq3OelUTEYpdJdIK//2NAUG6EtuDCR/UspBmnQIhANbyKv+L+b02o5YIRqRKJ49LJEyJFyRxHrRM8lH9qRk8MA0GCSqGSIb3DQEBCwUAA4ICAQAjJurMYSd9KFvcOcMZNO1DLsKytJ3N6SIkHXphJ2fpXD4sfBHxxG37r7a3hWi7vqNb4PTL8VIixKw+u/Si0tknJIyHsVf64eI4tfMDkPDJGxMgr9qEsNukwVXgsnerqXYQRAcgYsnMLEdrgo+7SW7caTnm/adfqrc6r9Ar4fHRidr9p7RuEM/eRCCmBqswHI7hpsE6miKLh1aXqF6I6JiSCApz3X7mJ4OiLVFNGKw8rZHGEJUsLQBWIW0qZPjrzNG3M/LF5chVhS9D7HcUtXEFP7smNPdNGgbVTtfY3+sXpFFdhED5ooRJCkX2/JfylXN3LT8v0iNI04HNQ1/fS27k9Q5QBahEBsvSzh88OdHP/2jyyQwiGqNH9Q+UGGrYBW50OJB13ztobAeEWITPwI40nf3wU3qoCvM/nvJu
8kO0lD3kD4AyLqWnOYvwgjCzgVe2zuLI9F/BZiZnmXaiJq2SSzgTmIzv/HB0zSHFBWQpgZpacZok7AhZ3vzpbOdJfgcSOCe/W6+drLyA5wTzV3m4+taU5eKvnI9NN5XbiUHXmqLElHVZYakpDAJkT20Uud5uIGHGwiHFYvyHgHlNBxa77Bn2gYxKtH95y3o/C0SaHauNL7ghuyZVxNRWsKcVWlZ+1/TrOlEp00nTFyoWqxbFgwVP9WarCRDX/rZ/Yzr/sQ==-----END CERTIFICATE--------Server certificatesubject=jurisdictionC = GB, businessCategory = Government Entity, serialNumber = November-15-77, C = GB, ST = London, L = London, O = University College London, CN = discovery.ucl.ac.ukissuer=C = BM, O = QuoVadis Limited, CN = QuoVadis EV SSL ICA G3---No client certificate CA names sent---SSL handshake has read 2653 bytes and written 318 bytesVerification error: unable to verify the first certificate---New, (NONE), Cipher is (NONE)Server public key is 2048 bitSecure Renegotiation IS supportedCompression: NONEExpansion: NONENo ALPN negotiatedSSL-Session: Protocol : TLSv1.2 Cipher : 0000 Session-ID: 0BEE74506F0378851356FE55F7EA41ACE0E5C5C065C19C8EE24F5A1607BAD1FC Session-ID-ctx: Master-Key: PSK identity: None PSK identity hint: None SRP username: None Start Time: 1578589105 Timeout : 7200 (sec) Verify return code: 21 (unable to verify the first certificate) Extended master secret: no--- Relevant for us here is the part after Certificate chain . It shows only one certificate. Feeding that -----BEGIN CERTIFICATE----- block through openssl x509 -text -noout presents the certificate in a more readable form: Certificate: Data: Version: 3 (0x2) Serial Number: 5a:c5:93:22:e9:25:15:02:24:8b:9c:e4:ed:2c:ef:93:26:05:e0:cb Signature Algorithm: sha256WithRSAEncryption Issuer: C = BM, O = QuoVadis Limited, CN = QuoVadis EV SSL ICA G3 Validity Not Before: Sep 11 10:24:11 2019 GMT Not After : Sep 11 10:34:00 2021 GMT Subject: jurisdictionC = GB, businessCategory = Government Entity, serialNumber = November-15-77, C = GB, ST = London, L = London, O = University College London, CN = discovery.ucl.ac.uk Subject Public Key Info: Public Key Algorithm: rsaEncryption RSA Public-Key: (2048 bit) Modulus: 00:af:87:88:f8:b9:bf:a3:8f:2b:40:b9:ac:a3:cf: 58:d7:a4:5a:28:30:48:20:d3:d3:a9:be:bb:b3:cb: 55:64:cf:c2:93:ed:56:05:ba:b7:d0:a6:9e:f1:3b: 49:03:25:5f:5b:cf:85:3a:bd:55:26:a9:df:51:d8: d6:89:0e:cd:13:b6:92:a3:c8:31:70:36:ad:57:47: 6f:b7:f4:be:89:94:9b:88:c5:d1:91:50:c3:4b:87: 58:1e:35:f0:41:ed:02:47:69:cd:08:c7:0b:2d:1f: f9:97:53:d1:f8:79:4d:49:c9:2f:13:b9:59:e9:09: b0:d9:9d:2a:82:69:0d:3c:86:5e:35:df:b1:46:ae: 5a:12:2f:e1:07:a8:4a:f3:ae:6c:fc:37:33:5d:ef: 6e:44:29:21:b5:fd:73:fc:77:c3:b5:14:90:da:03: 2b:c4:4b:62:5f:25:6b:a1:ad:cc:1a:e9:63:cf:41: d0:ae:d9:c2:38:1b:33:11:cc:f7:ac:dd:a0:fe:22: 32:65:f2:d1:95:7a:9e:64:7a:d3:3e:2a:0b:2b:9f: db:63:89:98:45:71:23:9c:c4:ed:1a:a5:10:00:01: 83:80:e8:d0:68:66:f3:c2:2e:bd:7a:08:64:12:24: cf:f4:7b:63:76:3c:cf:cf:52:1d:78:75:bd:fd:31: ee:fd Exponent: 65537 (0x10001) X509v3 extensions: X509v3 Basic Constraints: critical CA:FALSE X509v3 Authority Key Identifier: keyid:E5:84:54:D0:90:49:9F:38:BA:F2:C9:E1:2A:08:C5:4E:9F:A0:48:3F Authority Information Access: CA Issuers - URI:http://trust.quovadisglobal.com/qvevsslg3.crt OCSP - URI:http://ev.ocsp.quovadisglobal.com X509v3 Subject Alternative Name: DNS:discovery.ucl.ac.uk, DNS:eprints.ucl.ac.uk X509v3 Certificate Policies: Policy: 1.3.6.1.4.1.8024.0.2.100.1.2 CPS: http://www.quovadisglobal.com/repository Policy: 2.23.140.1.1 X509v3 Extended Key Usage: TLS Web Client Authentication, TLS Web Server Authentication X509v3 CRL Distribution Points: Full Name: URI:http://crl.quovadisglobal.com/qvevsslg3.crl X509v3 Subject Key Identifier: 
D3:E2:15:FD:66:88:4D:5A:D9:78:2B:08:75:D6:6F:15:94:A4:B9:4B X509v3 Key Usage: critical Digital Signature, Key Encipherment CT Precertificate SCTs: Signed Certificate Timestamp: Version : v1 (0x0) Log ID : BB:D9:DF:BC:1F:8A:71:B5:93:94:23:97:AA:92:7B:47: 38:57:95:0A:AB:52:E8:1A:90:96:64:36:8E:1E:D1:85 Timestamp : Sep 11 10:34:12.241 2019 GMT Extensions: none Signature : ecdsa-with-SHA256 30:45:02:21:00:D8:2C:2B:E6:4E:B0:F1:87:5E:AA:13: 7D:32:A9:38:AB:03:70:3E:5E:FE:93:66:5A:54:B2:C6: 71:23:E0:29:AA:02:20:48:68:9C:C2:D7:04:0A:D7:23: B1:29:CA:98:4C:14:56:FE:A1:42:7B:E4:B0:6E:DD:1F: 90:2A:3D:9E:E3:6D:0D Signed Certificate Timestamp: Version : v1 (0x0) Log ID : 56:14:06:9A:2F:D7:C2:EC:D3:F5:E1:BD:44:B2:3E:C7: 46:76:B9:BC:99:11:5C:C0:EF:94:98:55:D6:89:D0:DD Timestamp : Sep 11 10:34:12.280 2019 GMT Extensions: none Signature : ecdsa-with-SHA256 30:46:02:21:00:8F:85:CC:13:22:88:98:0A:DE:84:B3: 0E:3D:6F:B6:DC:BD:1C:91:11:7D:BD:7D:1B:9A:5F:7E: B0:27:14:3A:4C:02:21:00:9C:8F:B7:CA:F7:83:EF:8B: C5:67:5B:FE:C5:91:7C:5E:C9:9F:8C:E5:C8:0E:E2:51: 61:53:17:CE:1D:C0:AE:71 Signed Certificate Timestamp: Version : v1 (0x0) Log ID : 6F:53:76:AC:31:F0:31:19:D8:99:00:A4:51:15:FF:77: 15:1C:11:D9:02:C1:00:29:06:8D:B2:08:9A:37:D9:13 Timestamp : Sep 11 10:34:12.512 2019 GMT Extensions: none Signature : ecdsa-with-SHA256 30:45:02:20:7F:F3:97:AB:62:AD:CE:7A:55:13:11:8A: 5D:25:D2:0A:FF:FD:8D:01:41:BA:12:DB:83:09:1F:D4: B2:90:66:9D:02:21:00:D6:F2:2A:FF:8B:F9:BD:36:A3: 96:08:46:A4:4A:27:8F:4B:24:4C:89:17:24:71:1E:B4: 4C:F2:51:FD:A9:19:3C Signature Algorithm: sha256WithRSAEncryption 23:26:ea:cc:61:27:7d:28:5b:dc:39:c3:19:34:ed:43:2e:c2: b2:b4:9d:cd:e9:22:24:1d:7a:61:27:67:e9:5c:3e:2c:7c:11: f1:c4:6d:fb:af:b6:b7:85:68:bb:be:a3:5b:e0:f4:cb:f1:52: 22:c4:ac:3e:bb:f4:a2:d2:d9:27:24:8c:87:b1:57:fa:e1:e2: 38:b5:f3:03:90:f0:c9:1b:13:20:af:da:84:b0:db:a4:c1:55: e0:b2:77:ab:a9:76:10:44:07:20:62:c9:cc:2c:47:6b:82:8f: bb:49:6e:dc:69:39:e6:fd:a7:5f:aa:b7:3a:af:d0:2b:e1:f1: d1:89:da:fd:a7:b4:6e:10:cf:de:44:20:a6:06:ab:30:1c:8e: e1:a6:c1:3a:9a:22:8b:87:56:97:a8:5e:88:e8:98:92:08:0a: 73:dd:7e:e6:27:83:a2:2d:51:4d:18:ac:3c:ad:91:c6:10:95: 2c:2d:00:56:21:6d:2a:64:f8:eb:cc:d1:b7:33:f2:c5:e5:c8: 55:85:2f:43:ec:77:14:b5:71:05:3f:bb:26:34:f7:4d:1a:06: d5:4e:d7:d8:df:eb:17:a4:51:5d:84:40:f9:a2:84:49:0a:45: f6:fc:97:f2:95:73:77:2d:3f:2f:d2:23:48:d3:81:cd:43:5f: df:4b:6e:e4:f5:0e:50:05:a8:44:06:cb:d2:ce:1f:3c:39:d1: cf:ff:68:f2:c9:0c:22:1a:a3:47:f5:0f:94:18:6a:d8:05:6e: 74:38:90:75:df:3b:68:6c:07:84:58:84:cf:c0:8e:34:9d:fd: f0:53:7a:a8:0a:f3:3f:9e:f2:6e:f2:43:b4:94:3d:e4:0f:80: 32:2e:a5:a7:39:8b:f0:82:30:b3:81:57:b6:ce:e2:c8:f4:5f: c1:66:26:67:99:76:a2:26:ad:92:4b:38:13:98:8c:ef:fc:70: 74:cd:21:c5:05:64:29:81:9a:5a:71:9a:24:ec:08:59:de:fc: e9:6c:e7:49:7e:07:12:38:27:bf:5b:af:9d:ac:bc:80:e7:04: f3:57:79:b8:fa:d6:94:e5:e2:af:9c:8f:4d:37:95:db:89:41: d7:9a:a2:c4:94:75:59:61:a9:29:0c:02:64:4f:6d:14:b9:de: 6e:20:61:c6:c2:21:c5:62:fc:87:80:79:4d:07:16:bb:ec:19: f6:81:8c:4a:b4:7f:79:cb:7a:3f:0b:44:9a:1d:ab:8d:2f:b8: 21:bb:26:55:c4:d4:56:b0:a7:15:5a:56:7e:d7:f4:eb:3a:51: 29:d3:49:d3:17:2a:16:ab:16:c5:83:05:4f:f5:66:ab:09:10: d7:fe:b6:7f:63:3a:ff:b1 Particularly relevant are these lines: Issuer: C = BM, O = QuoVadis Limited, CN = QuoVadis EV SSL ICA G3Subject: jurisdictionC = GB [...] CN = discovery.ucl.ac.uk This shows that the provided certificate is a leaf certificate, for discovery.ucl.ac.uk , and that it is signed by some certificate (or rather entity) named QuoVadis EV SSL ICA G3 . 
It will become clear later that this is not a root certificate (for now, the lack of CA in the name is a hint; and ICA commonly means intermediate certificate authority). The certificate @little_dog suggested you download is the missing intermediate certificate (NOT the root certificate!). You can see that from the following lines in his answer: Issuer: C = BM, O = QuoVadis Limited, CN = QuoVadis Root CA 2 G3Subject: C = BM, O = QuoVadis Limited, CN = QuoVadis EV SSL ICA G3 That certificate is the QuoVadis EV SSL ICA G3 referenced by the leaf certificate above! But this certificate is not a root certificate. Root certificates are signed by themselves , but this certificate is signed by QuoVadis Root CA 2 G3 . Which, by the way, has CA in its name. So, where do we get the root certificate? Ideally, it should be in your browser or OS. For Debian at least (and probably Ubuntu as well), we can check with this monstrosity: % awk -v cmd='openssl x509 -noout -subject' '/BEGIN/{close(cmd)};{print | cmd}' < /etc/ssl/certs/ca-certificates.crt | grep 'QuoVadis Root CA 2 G3'subject=C = BM, O = QuoVadis Limited, CN = QuoVadis Root CA 2 G3 The first part of the command produces the certificate subjects ("names") of all system-trusted CA certificates, which we then search for the relevant QuoVadis root certificate. On my system it finds this, so the root certificate is present. To recap Root cert QuoVadis Root CA 2 G3 (on your system) signs intermediate cert QuoVadis EV SSL ICA G3 (missing) signs leaf cert discovery.ucl.ac.uk (provided by web server) Where should the intermediate cert come from? The answer to that is simple: the web server should provide it as well. Then the client can check the whole chain up until the root certificate (which comes from its trust store). Getting it fixed @little_dog's answer had you download the intermediate, and install that in your trust store, effectively turning that intermediate into a root cert for your system. That will work for this particular problem, for now, but there are drawbacks: It will only solve this very particular problem on your particular machine. Download from another misconfigured web server? Same problem. Download from this site on another machine? Same problem. Intermediates are usually shorter-lived than root certs. At some point in the future, your manually-installed intermediate will expire, and then it will stop working. Intermediates are there for a reason. In the case of a CA compromise, intermediates can be compromised as well. A CA will then revoke those intermediates, and create new ones and re-issue leaf certs. But because you manually trusted your intermediate, it won't be revoked and your system might end up trusting servers it shouldn't. The real solution is getting the website fixed. Try reporting it to the discovery.ucl.ac.uk webmasters. Any decent web server admin should know exactly what's up when you report to them that the webserver isn't serving the intermediate CA certificate. If they need more information, this answer has plenty :) There are also dozens of online services that will check any web server you specify and report a list of potential security issues and configuration problems. I tried a handful, and they all complained about the missing intermediate certificate. A few popular ones include: SSL Labs SSL server test DigiCert SSL Installation Diagnostics Tool Why No Padlock? But it worked in Chrome? The story becomes more complicated here. 
There's a mechanism called Authority Information Access (AIA) that allows HTTP clients to query the CA for the intermediate certificate. You can see URLs provided for it in the textual certificate output earlier in this answer. But not every client implements AIA fetching . Internet Explorer and Safari do. Chrome relies on the OS to do this (so yes on some platforms, no on others). Android does not. Firefox does not, because of privacy concerns . Curl and wget do not, as far as I can tell. Complicating things further, browsers can cache intermediate certificates they encountered, so if you visit a website that correctly sends the QuoVadis EV SSL ICA G3 intermediate with your browser, that certificate may be cached, and then website that otherwise wouldn't work suddenly would. Finally, browsers/OSes could come with (some) intermediate certificates pre-loaded, which would also hide this issue. At least Firefox is exploring this option. None of these things can be relied on though; plenty of clients don't do AIA fetching or pre-loading. So until these mechanisms become mandatory and universally supported, web servers will still need to include all the certificates to complete the chain. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/559526",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/103721/"
]
} |
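To reproduce the diagnosis from the answer above locally, the chain can be checked by hand with openssl; leaf.pem and intermediate.pem are whatever files you saved the respective certificates into:
openssl verify leaf.pem                               # fails: only root CAs are in the system store, the intermediate is missing
openssl verify -untrusted intermediate.pem leaf.pem   # succeeds once the intermediate is supplied out of band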
559,535 | The Wikipedia LTO article says that every LTO drive can read out the memory chip of a tape via 13.56 MHz NFC. I expect here to find serial numbers, tape properties and usage data. How can I read this data with free and open-source software on a Linux system? | Curl is failing because that site is incorrectly configured Certificates are used to sign other certificates, forming chains. A CA has a root certificate , which is trusted by operating systems and browsers. This root certificate is most commonly used to sign one or several intermediate certificates , which in turn are used to sign leaf certificates (that can not sign other certificates), which are what websites use. Browsers and operating systems tend to carry only the root certificates, but to verify a leaf certificate (and establish a secure connection), a client needs the entire chain of certificates. In practice, that means that a website must not just supply its leaf certificate, it must also supply the used intermediate certificate. And discovery.ucl.ac.uk fails to do that. I'll show you. Finding the problem openssl is a X509 / SSL swiss army knife that proves very useful here: % openssl s_client -connect discovery.ucl.ac.uk:443 -servername discovery.ucl.ac.uk -showcertsCONNECTED(00000003)depth=0 jurisdictionC = GB, businessCategory = Government Entity, serialNumber = November-15-77, C = GB, ST = London, L = London, O = University College London, CN = discovery.ucl.ac.ukverify error:num=20:unable to get local issuer certificateverify return:1depth=0 jurisdictionC = GB, businessCategory = Government Entity, serialNumber = November-15-77, C = GB, ST = London, L = London, O = University College London, CN = discovery.ucl.ac.ukverify error:num=21:unable to verify the first certificateverify return:1140212799304832:error:141A318A:SSL routines:tls_process_ske_dhe:dh key too small:../ssl/statem/statem_clnt.c:2150:---Certificate chain 0 s:jurisdictionC = GB, businessCategory = Government Entity, serialNumber = November-15-77, C = GB, ST = London, L = London, O = University College London, CN = discovery.ucl.ac.uk i:C = BM, O = QuoVadis Limited, CN = QuoVadis EV SSL ICA G3-----BEGIN 
CERTIFICATE-----MIIH4DCCBcigAwIBAgIUWsWTIuklFQIki5zk7SzvkyYF4MswDQYJKoZIhvcNAQELBQAwSTELMAkGA1UEBhMCQk0xGTAXBgNVBAoMEFF1b1ZhZGlzIExpbWl0ZWQxHzAdBgNVBAMMFlF1b1ZhZGlzIEVWIFNTTCBJQ0EgRzMwHhcNMTkwOTExMTAyNDExWhcNMjEwOTExMTAzNDAwWjCBuzETMBEGCysGAQQBgjc8AgEDEwJHQjEaMBgGA1UEDwwRR292ZXJubWVudCBFbnRpdHkxFzAVBgNVBAUTDk5vdmVtYmVyLTE1LTc3MQswCQYDVQQGEwJHQjEPMA0GA1UECAwGTG9uZG9uMQ8wDQYDVQQHDAZMb25kb24xIjAgBgNVBAoMGVVuaXZlcnNpdHkgQ29sbGVnZSBMb25kb24xHDAaBgNVBAMME2Rpc2NvdmVyeS51Y2wuYWMudWswggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQCvh4j4ub+jjytAuayjz1jXpFooMEgg09Opvruzy1Vkz8KT7VYFurfQpp7xO0kDJV9bz4U6vVUmqd9R2NaJDs0TtpKjyDFwNq1XR2+39L6JlJuIxdGRUMNLh1geNfBB7QJHac0IxwstH/mXU9H4eU1JyS8TuVnpCbDZnSqCaQ08hl4137FGrloSL+EHqErzrmz8NzNd725EKSG1/XP8d8O1FJDaAyvES2JfJWuhrcwa6WPPQdCu2cI4GzMRzPes3aD+IjJl8tGVep5ketM+Kgsrn9tjiZhFcSOcxO0apRAAAYOA6NBoZvPCLr16CGQSJM/0e2N2PM/PUh14db39Me79AgMBAAGjggNLMIIDRzAMBgNVHRMBAf8EAjAAMB8GA1UdIwQYMBaAFOWEVNCQSZ84uvLJ4SoIxU6foEg/MHgGCCsGAQUFBwEBBGwwajA5BggrBgEFBQcwAoYtaHR0cDovL3RydXN0LnF1b3ZhZGlzZ2xvYmFsLmNvbS9xdmV2c3NsZzMuY3J0MC0GCCsGAQUFBzABhiFodHRwOi8vZXYub2NzcC5xdW92YWRpc2dsb2JhbC5jb20wMQYDVR0RBCowKIITZGlzY292ZXJ5LnVjbC5hYy51a4IRZXByaW50cy51Y2wuYWMudWswWgYDVR0gBFMwUTBGBgwrBgEEAb5YAAJkAQIwNjA0BggrBgEFBQcCARYoaHR0cDovL3d3dy5xdW92YWRpc2dsb2JhbC5jb20vcmVwb3NpdG9yeTAHBgVngQwBATAdBgNVHSUEFjAUBggrBgEFBQcDAgYIKwYBBQUHAwEwPAYDVR0fBDUwMzAxoC+gLYYraHR0cDovL2NybC5xdW92YWRpc2dsb2JhbC5jb20vcXZldnNzbGczLmNybDAdBgNVHQ4EFgQU0+IV/WaITVrZeCsIddZvFZSkuUswDgYDVR0PAQH/BAQDAgWgMIIBfwYKKwYBBAHWeQIEAgSCAW8EggFrAWkAdgC72d+8H4pxtZOUI5eqkntHOFeVCqtS6BqQlmQ2jh7RhQAAAW0f40mRAAAEAwBHMEUCIQDYLCvmTrDxh16qE30yqTirA3A+Xv6TZlpUssZxI+ApqgIgSGicwtcECtcjsSnKmEwUVv6hQnvksG7dH5AqPZ7jbQ0AdwBWFAaaL9fC7NP14b1Esj7HRna5vJkRXMDvlJhV1onQ3QAAAW0f40m4AAAEAwBIMEYCIQCPhcwTIoiYCt6Esw49b7bcvRyREX29fRuaX36wJxQ6TAIhAJyPt8r3g++LxWdb/sWRfF7Jn4zlyA7iUWFTF84dwK5xAHYAb1N2rDHwMRnYmQCkURX/dxUcEdkCwQApBo2yCJo32RMAAAFtH+NKoAAABAMARzBFAiB/85erYq3OelUTEYpdJdIK//2NAUG6EtuDCR/UspBmnQIhANbyKv+L+b02o5YIRqRKJ49LJEyJFyRxHrRM8lH9qRk8MA0GCSqGSIb3DQEBCwUAA4ICAQAjJurMYSd9KFvcOcMZNO1DLsKytJ3N6SIkHXphJ2fpXD4sfBHxxG37r7a3hWi7vqNb4PTL8VIixKw+u/Si0tknJIyHsVf64eI4tfMDkPDJGxMgr9qEsNukwVXgsnerqXYQRAcgYsnMLEdrgo+7SW7caTnm/adfqrc6r9Ar4fHRidr9p7RuEM/eRCCmBqswHI7hpsE6miKLh1aXqF6I6JiSCApz3X7mJ4OiLVFNGKw8rZHGEJUsLQBWIW0qZPjrzNG3M/LF5chVhS9D7HcUtXEFP7smNPdNGgbVTtfY3+sXpFFdhED5ooRJCkX2/JfylXN3LT8v0iNI04HNQ1/fS27k9Q5QBahEBsvSzh88OdHP/2jyyQwiGqNH9Q+UGGrYBW50OJB13ztobAeEWITPwI40nf3wU3qoCvM/nvJu8kO0lD3kD4AyLqWnOYvwgjCzgVe2zuLI9F/BZiZnmXaiJq2SSzgTmIzv/HB0zSHFBWQpgZpacZok7AhZ3vzpbOdJfgcSOCe/W6+drLyA5wTzV3m4+taU5eKvnI9NN5XbiUHXmqLElHVZYakpDAJkT20Uud5uIGHGwiHFYvyHgHlNBxa77Bn2gYxKtH95y3o/C0SaHauNL7ghuyZVxNRWsKcVWlZ+1/TrOlEp00nTFyoWqxbFgwVP9WarCRDX/rZ/Yzr/sQ==-----END CERTIFICATE--------Server certificatesubject=jurisdictionC = GB, businessCategory = Government Entity, serialNumber = November-15-77, C = GB, ST = London, L = London, O = University College London, CN = discovery.ucl.ac.ukissuer=C = BM, O = QuoVadis Limited, CN = QuoVadis EV SSL ICA G3---No client certificate CA names sent---SSL handshake has read 2653 bytes and written 318 bytesVerification error: unable to verify the first certificate---New, (NONE), Cipher is (NONE)Server public key is 2048 bitSecure Renegotiation IS supportedCompression: NONEExpansion: NONENo ALPN negotiatedSSL-Session: Protocol : TLSv1.2 Cipher : 0000 Session-ID: 0BEE74506F0378851356FE55F7EA41ACE0E5C5C065C19C8EE24F5A1607BAD1FC Session-ID-ctx: Master-Key: PSK identity: None PSK identity hint: None SRP username: None Start Time: 1578589105 Timeout : 
7200 (sec) Verify return code: 21 (unable to verify the first certificate) Extended master secret: no--- Relevant for us here is the part after Certificate chain . It shows only one certificate. Feeding that -----BEGIN CERTIFICATE----- block through openssl x509 -text -noout presents the certificate in a more readable form: Certificate: Data: Version: 3 (0x2) Serial Number: 5a:c5:93:22:e9:25:15:02:24:8b:9c:e4:ed:2c:ef:93:26:05:e0:cb Signature Algorithm: sha256WithRSAEncryption Issuer: C = BM, O = QuoVadis Limited, CN = QuoVadis EV SSL ICA G3 Validity Not Before: Sep 11 10:24:11 2019 GMT Not After : Sep 11 10:34:00 2021 GMT Subject: jurisdictionC = GB, businessCategory = Government Entity, serialNumber = November-15-77, C = GB, ST = London, L = London, O = University College London, CN = discovery.ucl.ac.uk Subject Public Key Info: Public Key Algorithm: rsaEncryption RSA Public-Key: (2048 bit) Modulus: 00:af:87:88:f8:b9:bf:a3:8f:2b:40:b9:ac:a3:cf: 58:d7:a4:5a:28:30:48:20:d3:d3:a9:be:bb:b3:cb: 55:64:cf:c2:93:ed:56:05:ba:b7:d0:a6:9e:f1:3b: 49:03:25:5f:5b:cf:85:3a:bd:55:26:a9:df:51:d8: d6:89:0e:cd:13:b6:92:a3:c8:31:70:36:ad:57:47: 6f:b7:f4:be:89:94:9b:88:c5:d1:91:50:c3:4b:87: 58:1e:35:f0:41:ed:02:47:69:cd:08:c7:0b:2d:1f: f9:97:53:d1:f8:79:4d:49:c9:2f:13:b9:59:e9:09: b0:d9:9d:2a:82:69:0d:3c:86:5e:35:df:b1:46:ae: 5a:12:2f:e1:07:a8:4a:f3:ae:6c:fc:37:33:5d:ef: 6e:44:29:21:b5:fd:73:fc:77:c3:b5:14:90:da:03: 2b:c4:4b:62:5f:25:6b:a1:ad:cc:1a:e9:63:cf:41: d0:ae:d9:c2:38:1b:33:11:cc:f7:ac:dd:a0:fe:22: 32:65:f2:d1:95:7a:9e:64:7a:d3:3e:2a:0b:2b:9f: db:63:89:98:45:71:23:9c:c4:ed:1a:a5:10:00:01: 83:80:e8:d0:68:66:f3:c2:2e:bd:7a:08:64:12:24: cf:f4:7b:63:76:3c:cf:cf:52:1d:78:75:bd:fd:31: ee:fd Exponent: 65537 (0x10001) X509v3 extensions: X509v3 Basic Constraints: critical CA:FALSE X509v3 Authority Key Identifier: keyid:E5:84:54:D0:90:49:9F:38:BA:F2:C9:E1:2A:08:C5:4E:9F:A0:48:3F Authority Information Access: CA Issuers - URI:http://trust.quovadisglobal.com/qvevsslg3.crt OCSP - URI:http://ev.ocsp.quovadisglobal.com X509v3 Subject Alternative Name: DNS:discovery.ucl.ac.uk, DNS:eprints.ucl.ac.uk X509v3 Certificate Policies: Policy: 1.3.6.1.4.1.8024.0.2.100.1.2 CPS: http://www.quovadisglobal.com/repository Policy: 2.23.140.1.1 X509v3 Extended Key Usage: TLS Web Client Authentication, TLS Web Server Authentication X509v3 CRL Distribution Points: Full Name: URI:http://crl.quovadisglobal.com/qvevsslg3.crl X509v3 Subject Key Identifier: D3:E2:15:FD:66:88:4D:5A:D9:78:2B:08:75:D6:6F:15:94:A4:B9:4B X509v3 Key Usage: critical Digital Signature, Key Encipherment CT Precertificate SCTs: Signed Certificate Timestamp: Version : v1 (0x0) Log ID : BB:D9:DF:BC:1F:8A:71:B5:93:94:23:97:AA:92:7B:47: 38:57:95:0A:AB:52:E8:1A:90:96:64:36:8E:1E:D1:85 Timestamp : Sep 11 10:34:12.241 2019 GMT Extensions: none Signature : ecdsa-with-SHA256 30:45:02:21:00:D8:2C:2B:E6:4E:B0:F1:87:5E:AA:13: 7D:32:A9:38:AB:03:70:3E:5E:FE:93:66:5A:54:B2:C6: 71:23:E0:29:AA:02:20:48:68:9C:C2:D7:04:0A:D7:23: B1:29:CA:98:4C:14:56:FE:A1:42:7B:E4:B0:6E:DD:1F: 90:2A:3D:9E:E3:6D:0D Signed Certificate Timestamp: Version : v1 (0x0) Log ID : 56:14:06:9A:2F:D7:C2:EC:D3:F5:E1:BD:44:B2:3E:C7: 46:76:B9:BC:99:11:5C:C0:EF:94:98:55:D6:89:D0:DD Timestamp : Sep 11 10:34:12.280 2019 GMT Extensions: none Signature : ecdsa-with-SHA256 30:46:02:21:00:8F:85:CC:13:22:88:98:0A:DE:84:B3: 0E:3D:6F:B6:DC:BD:1C:91:11:7D:BD:7D:1B:9A:5F:7E: B0:27:14:3A:4C:02:21:00:9C:8F:B7:CA:F7:83:EF:8B: C5:67:5B:FE:C5:91:7C:5E:C9:9F:8C:E5:C8:0E:E2:51: 61:53:17:CE:1D:C0:AE:71 Signed Certificate Timestamp: 
Version : v1 (0x0) Log ID : 6F:53:76:AC:31:F0:31:19:D8:99:00:A4:51:15:FF:77: 15:1C:11:D9:02:C1:00:29:06:8D:B2:08:9A:37:D9:13 Timestamp : Sep 11 10:34:12.512 2019 GMT Extensions: none Signature : ecdsa-with-SHA256 30:45:02:20:7F:F3:97:AB:62:AD:CE:7A:55:13:11:8A: 5D:25:D2:0A:FF:FD:8D:01:41:BA:12:DB:83:09:1F:D4: B2:90:66:9D:02:21:00:D6:F2:2A:FF:8B:F9:BD:36:A3: 96:08:46:A4:4A:27:8F:4B:24:4C:89:17:24:71:1E:B4: 4C:F2:51:FD:A9:19:3C Signature Algorithm: sha256WithRSAEncryption 23:26:ea:cc:61:27:7d:28:5b:dc:39:c3:19:34:ed:43:2e:c2: b2:b4:9d:cd:e9:22:24:1d:7a:61:27:67:e9:5c:3e:2c:7c:11: f1:c4:6d:fb:af:b6:b7:85:68:bb:be:a3:5b:e0:f4:cb:f1:52: 22:c4:ac:3e:bb:f4:a2:d2:d9:27:24:8c:87:b1:57:fa:e1:e2: 38:b5:f3:03:90:f0:c9:1b:13:20:af:da:84:b0:db:a4:c1:55: e0:b2:77:ab:a9:76:10:44:07:20:62:c9:cc:2c:47:6b:82:8f: bb:49:6e:dc:69:39:e6:fd:a7:5f:aa:b7:3a:af:d0:2b:e1:f1: d1:89:da:fd:a7:b4:6e:10:cf:de:44:20:a6:06:ab:30:1c:8e: e1:a6:c1:3a:9a:22:8b:87:56:97:a8:5e:88:e8:98:92:08:0a: 73:dd:7e:e6:27:83:a2:2d:51:4d:18:ac:3c:ad:91:c6:10:95: 2c:2d:00:56:21:6d:2a:64:f8:eb:cc:d1:b7:33:f2:c5:e5:c8: 55:85:2f:43:ec:77:14:b5:71:05:3f:bb:26:34:f7:4d:1a:06: d5:4e:d7:d8:df:eb:17:a4:51:5d:84:40:f9:a2:84:49:0a:45: f6:fc:97:f2:95:73:77:2d:3f:2f:d2:23:48:d3:81:cd:43:5f: df:4b:6e:e4:f5:0e:50:05:a8:44:06:cb:d2:ce:1f:3c:39:d1: cf:ff:68:f2:c9:0c:22:1a:a3:47:f5:0f:94:18:6a:d8:05:6e: 74:38:90:75:df:3b:68:6c:07:84:58:84:cf:c0:8e:34:9d:fd: f0:53:7a:a8:0a:f3:3f:9e:f2:6e:f2:43:b4:94:3d:e4:0f:80: 32:2e:a5:a7:39:8b:f0:82:30:b3:81:57:b6:ce:e2:c8:f4:5f: c1:66:26:67:99:76:a2:26:ad:92:4b:38:13:98:8c:ef:fc:70: 74:cd:21:c5:05:64:29:81:9a:5a:71:9a:24:ec:08:59:de:fc: e9:6c:e7:49:7e:07:12:38:27:bf:5b:af:9d:ac:bc:80:e7:04: f3:57:79:b8:fa:d6:94:e5:e2:af:9c:8f:4d:37:95:db:89:41: d7:9a:a2:c4:94:75:59:61:a9:29:0c:02:64:4f:6d:14:b9:de: 6e:20:61:c6:c2:21:c5:62:fc:87:80:79:4d:07:16:bb:ec:19: f6:81:8c:4a:b4:7f:79:cb:7a:3f:0b:44:9a:1d:ab:8d:2f:b8: 21:bb:26:55:c4:d4:56:b0:a7:15:5a:56:7e:d7:f4:eb:3a:51: 29:d3:49:d3:17:2a:16:ab:16:c5:83:05:4f:f5:66:ab:09:10: d7:fe:b6:7f:63:3a:ff:b1 Particularly relevant are these lines: Issuer: C = BM, O = QuoVadis Limited, CN = QuoVadis EV SSL ICA G3Subject: jurisdictionC = GB [...] CN = discovery.ucl.ac.uk This shows that the provided certificate is a leaf certificate, for discovery.ucl.ac.uk , and that it is signed by some certificate (or rather entity) named QuoVadis EV SSL ICA G3 . It will become clear later that this is not a root certificate (for now, the lack of CA in the name is a hint; and ICA commonly means intermediate certificate authority). The certificate @little_dog suggested you download is the missing intermediate certificate (NOT the root certificate!). You can see that from the following lines in his answer: Issuer: C = BM, O = QuoVadis Limited, CN = QuoVadis Root CA 2 G3Subject: C = BM, O = QuoVadis Limited, CN = QuoVadis EV SSL ICA G3 That certificate is the QuoVadis EV SSL ICA G3 referenced by the leaf certificate above! But this certificate is not a root certificate. Root certificates are signed by themselves , but this certificate is signed by QuoVadis Root CA 2 G3 . Which, by the way, has CA in its name. So, where do we get the root certificate? Ideally, it should be in your browser or OS. 
For Debian at least (and probably Ubuntu as well), we can check with this monstrosity: % awk -v cmd='openssl x509 -noout -subject' '/BEGIN/{close(cmd)};{print | cmd}' < /etc/ssl/certs/ca-certificates.crt | grep 'QuoVadis Root CA 2 G3'subject=C = BM, O = QuoVadis Limited, CN = QuoVadis Root CA 2 G3 The first part of the command produces the certificate subjects ("names") of all system-trusted CA certificates, which we then search for the relevant QuoVadis root certificate. On my system it finds this, so the root certificate is present. To recap Root cert QuoVadis Root CA 2 G3 (on your system) signs intermediate cert QuoVadis EV SSL ICA G3 (missing) signs leaf cert discovery.ucl.ac.uk (provided by web server) Where should the intermediate cert come from? The answer to that is simple: the web server should provide it as well. Then the client can check the whole chain up until the root certificate (which comes from its trust store). Getting it fixed @little_dog's answer had you download the intermediate, and install that in your trust store, effectively turning that intermediate into a root cert for your system. That will work for this particular problem, for now, but there are drawbacks: It will only solve this very particular problem on your particular machine. Download from another misconfigured web server? Same problem. Download from this site on another machine? Same problem. Intermediates are usually shorter-lived than root certs. At some point in the future, your manually-installed intermediate will expire, and then it will stop working. Intermediates are there for a reason. In the case of a CA compromise, intermediates can be compromised as well. A CA will then revoke those intermediates, and create new ones and re-issue leaf certs. But because you manually trusted your intermediate, it won't be revoked and your system might end up trusting servers it shouldn't. The real solution is getting the website fixed. Try reporting it to the discovery.ucl.ac.uk webmasters. Any decent web server admin should know exactly what's up when you report to them that the webserver isn't serving the intermediate CA certificate. If they need more information, this answer has plenty :) There are also dozens of online services that will check any web server you specify and report a list of potential security issues and configuration problems. I tried a handful, and they all complained about the missing intermediate certificate. A few popular ones include: SSL Labs SSL server test DigiCert SSL Installation Diagnostics Tool Why No Padlock? But it worked in Chrome? The story becomes more complicated here. There's a mechanism called Authority Information Access (AIA) that allows HTTP clients to query the CA for the intermediate certificate. You can see URLs provided for it in the textual certificate output earlier in this answer. But not every client implements AIA fetching . Internet Explorer and Safari do. Chrome relies on the OS to do this (so yes on some platforms, no on others). Android does not. Firefox does not, because of privacy concerns . Curl and wget do not, as far as I can tell. Complicating things further, browsers can cache intermediate certificates they encountered, so if you visit a website that correctly sends the QuoVadis EV SSL ICA G3 intermediate with your browser, that certificate may be cached, and then website that otherwise wouldn't work suddenly would. Finally, browsers/OSes could come with (some) intermediate certificates pre-loaded, which would also hide this issue. 
At least Firefox is exploring this option. None of these things can be relied on though; plenty of clients don't do AIA fetching or pre-loading. So until these mechanisms become mandatory and universally supported, web servers will still need to include all the certificates to complete the chain. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/559535",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/26440/"
]
} |
559,864 | (This is on MacOS with zsh 5.7.1) Here is how I load custom functions in zsh:
# Custom functions
fpath=($HOME/.zfunc $fpath)
autoload -Uz mackupbackup
autoload -Uz tac
autoload -Uz airplane
autoload -Uz wakeMyDesktop
Each function is its own file in the ~/.zfunc directory. Note this directory is symlinked into a different directory thanks to mackup . I wrote a new function to copy the current commit hash to the clipboard. I created a file in my $fpath called ghash , wrote the function, added a new autoload line in my .zshrc and executed source ~/.zshrc . Here's the function:
# copy the commit hash of the given git reference, or HEAD if none is given
ref=$1
if [[ $ref ]]; then
  git rev-parse $1 | pbcopy
else
  git rev-parse HEAD | pbcopy
fi
After sourcing .zshrc , the function became available and it worked, but I wanted to add a line to print a confirmation that it worked: echo "Copied $(pbpaste) to clipboard" So I added that line, saved the file, then I sourced .zshrc again. I ran the function again, but its behaviour didn't change! I thought I'd done something wrong, so I kept making changes to the function and sourcing .zshrc to no effect. All in all, I re-sourced .zshrc 22 times, by which point that operation took 37 seconds to complete... Then I realised maybe it wasn't reloading the function, so I ran zsh to start a fresh instance (which took about 1 second), and the function started working as expected! Anyone know why source picked up my new function, but didn't update it when the function changed? Bonus question: why'd it take longer to run source ~/.zshrc each time I ran it? | Sourcing an rc file rarely if ever works in practice, because people rarely write them to be idempotent. A case in point is your own, where you are prepending the same directory to the fpath path every time, which of course means that searching that path takes a little longer each time. No doubt this isn't the only place where you are doing that sort of thing, moreover. You also do not understand autoloading correctly. As the doco states, autoloading of a function without a body happens the first time that the function is executed . Obviously, if the function is already loaded, and thus has a body, it does not get loaded again. You need to unfunction the function before autoload ing it again. The sample .zshrc in the Z shell source contains an freload() function that does this very thing for all of the functions named as its arguments. It also does typeset -U path cdpath fpath manpath , notice. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/559864",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/388553/"
]
} |
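A sketch of the freload idea mentioned in the answer above (the sample .zshrc shipped with zsh contains a fuller version); it drops the loaded body and re-marks each named function for autoloading:
freload() {
  while (( $# )); do
    unfunction $1 2> /dev/null   # forget the currently loaded body, if any
    autoload -Uz $1              # load it again from $fpath on next call
    shift
  done
}
# usage: freload ghash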
559,918 | How can I add a virtual monitor with Nvidia proprietary driver? Previously I used an Intel card with this solution, which worked fine: Add VIRTUAL output to Xorg . Now I want to switch to new hardware, without an Intel card.The solution mentioned in VNC-Server as a virtual X11 monitor to expand screen doesn't work in my case. When I want to add the mode to an output, xrandr throws an error. xrandr --newmode test 173.00 1920 2048 2248 2576 1080 1083 1088 1120 -hsync +vsyncxrandr --addmode "DP-1" testX Error of failed request: BadMatch (invalid parameter attributes) Major opcode of failed request: 140 (RANDR) Minor opcode of failed request: 18 (RRAddOutputMode) Serial number of failed request: 41 Current serial number in output stream: 42 Basic data: Hardware: HP ZBook 15 G3, configured for discrete graphics (Optimus doesn't work!) Software: Debian 10.0.2; Kernel: 4.19.0, Nvidia-Driver-Module: xserver-xorg-video-nvidia-legacy-390xx If you ask, why I am doing this: I use a software to transfer the virtual screen to another machine via ethernet to achieve dual monitors with two notebooks. ( https://github.com/B-LechCode/sooScreenShare ) Update:There is now a proposed solution which works on my machine, but it's unable to add new modes like 1920x1200. Maybe someone has an idea? | I have a solution that works for me, though it's missing the ability to choose completely arbitrary resolutions. To be clear, this is just for the proprietary nvidia driver; the open-source nouveau driver works differently, as do other video card drivers. The short version is: Use the ConnectedMonitor nvidia xorg.conf Screen option to activate the extra output in addition to your main monitor. Here's the long version: Run xrandr --query to get the names of your primary output and the unconnected one you plan to use for the virtual screen. For example, I get the following output: LVDS-0 connected primary 1440x900+0+0 (normal left inverted right x axis y axis) 331mm x 207mm [various monitor modes elided]DP-0 disconnected (normal left inverted right x axis y axis)DP-1 disconnected (normal left inverted right x axis y axis)DP-2 disconnected (normal left inverted right x axis y axis)DP-3 disconnected (normal left inverted right x axis y axis)DP-4 disconnected (normal left inverted right x axis y axis)DP-5 disconnected (normal left inverted right x axis y axis) So in my case, the laptop screen is LVDS-0 and I have DP-0 through DP-5 available. Like you, I'll choose DP-1 for the virtual screen. You will need to add an xorg.conf Screen configuration, as well as a Device section for the screen to use. That can be anywhere xorg will find it. I put mine in /etc/X11/xorg.conf.d/30-virtscreen.conf . In that file, a minimal setup is: Section "Device" Identifier "nvidiagpu" Driver "nvidia"EndSectionSection "Screen" Identifier "nvidiascreen" Device "nvidiagpu" Option "ConnectedMonitor" "LVDS-0,DP-1"EndSection This tells the driver to use the DP-1 output even if it doesn't detect a monitor connected to it. Note that you have to list your laptop monitor (or a real, physical monitor) too, if you want to use it! If you only list the virtual output, the driver will not activate any other outputs, even if it detects monitors connected to them. Now restart X. You should see two active monitors with xrandr and other display-querying programs. On my system the newly-activated virtual output has a variety of resolutions available. I can select any of them (e.g. 
via xrandr --output DP-1 --mode 1600x900 ) and the virtual output will resize itself. I cannot, however, add new modes (e.g. if I wanted a 1920x1080 resolution). That still gives me the "invalid parameter attributes" error. Fortunately, I can live with the modes available to me. With luck, you'll have something useful preset for you, too. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/559918",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/388600/"
]
} |
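Once the virtual output is active as described above, it can be positioned relative to the physical panel with plain xrandr (output names follow the earlier xrandr --query listing):
xrandr --output DP-1 --mode 1600x900 --right-of LVDS-0   # extend the desktop to the right of the laptop panel
xrandr --output DP-1 --off                                # disable the virtual output again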
559,936 | I've heard that we can avoid the issue of "too many TIME_WAIT" with the help of IP_TRANSPARENT in TCP/IP connection. In this way, it seems that a RST is used to disconnect, instead of FIN and ACK . But I don't quite understand how to achieve this. | I have a solution that works for me, though it's missing the ability to choose completely arbitrary resolutions. To be clear, this is just for the proprietary nvidia driver; the open-source nouveau driver works differently, as do other video card drivers. The short version is: Use the ConnectedMonitor nvidia xorg.conf Screen option to activate the extra output in addition to your main monitor. Here's the long version: Run xrandr --query to get the names of your primary output and the unconnected one you plan to use for the virtual screen. For example, I get the following output: LVDS-0 connected primary 1440x900+0+0 (normal left inverted right x axis y axis) 331mm x 207mm [various monitor modes elided]DP-0 disconnected (normal left inverted right x axis y axis)DP-1 disconnected (normal left inverted right x axis y axis)DP-2 disconnected (normal left inverted right x axis y axis)DP-3 disconnected (normal left inverted right x axis y axis)DP-4 disconnected (normal left inverted right x axis y axis)DP-5 disconnected (normal left inverted right x axis y axis) So in my case, the laptop screen is LVDS-0 and I have DP-0 through DP-5 available. Like you, I'll choose DP-1 for the virtual screen. You will need to add an xorg.conf Screen configuration, as well as a Device section for the screen to use. That can be anywhere xorg will find it. I put mine in /etc/X11/xorg.conf.d/30-virtscreen.conf . In that file, a minimal setup is: Section "Device" Identifier "nvidiagpu" Driver "nvidia"EndSectionSection "Screen" Identifier "nvidiascreen" Device "nvidiagpu" Option "ConnectedMonitor" "LVDS-0,DP-1"EndSection This tells the driver to use the DP-1 output even if it doesn't detect a monitor connected to it. Note that you have to list your laptop monitor (or a real, physical monitor) too, if you want to use it! If you only list the virtual output, the driver will not activate any other outputs, even if it detects monitors connected to them. Now restart X. You should see two active monitors with xrandr and other display-querying programs. On my system the newly-activated virtual output has a variety of resolutions available. I can select any of them (e.g. via xrandr --output DP-1 --mode 1600x900 ) and the virtual output will resize itself. I cannot, however, add new modes (e.g. if I wanted a 1920x1080 resolution). That still gives me the "invalid parameter attributes" error. Fortunately, I can live with the modes available to me. With luck, you'll have something useful preset for you, too. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/559936",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/145824/"
]
} |
559,942 | I have many files, which I want to merge using common Ids in Column 1. File1: MYORGANISM_I_05140.t1 Atypical/PIKK/FRAPMYORGANISM_I_06518.t1 CAMK/MLCKMYORGANISM_I_00854.t1 TK-assoc/SH2/SH2-RMYORGANISM_I_12755.t1 TK-assoc/SH2/Unique File2: MYORGANISM_I_05140.t1 VALUES to be takenMYORGANISM_I_12766.t1 what FILE3: MYORGANISM_I_16941.t1 OKMYORGANISM_I_93484.t1 LET IT BE I want to merge many file and add '-NA-' if value is missing, my desired Output: MYORGANISM_I_05140.t1 Atypical/PIKK/FRAP VALUES to be taken -NA-MYORGANISM_I_06518.t1 CAMK/MLCK -NA- -NA-MYORGANISM_I_00854.t1 TK-assoc/SH2/SH2-R -NA- -NA-MYORGANISM_I_12755.t1 TK-assoc/SH2/Unique -NA- -NA-MYORGANISM_I_12766.t1 -NA- what -NA-MYORGANISM_I_16941.t1 -NA- -NA- OKMYORGANISM_I_93484.t1 -NA- -NA- LET IT BE | I have a solution that works for me, though it's missing the ability to choose completely arbitrary resolutions. To be clear, this is just for the proprietary nvidia driver; the open-source nouveau driver works differently, as do other video card drivers. The short version is: Use the ConnectedMonitor nvidia xorg.conf Screen option to activate the extra output in addition to your main monitor. Here's the long version: Run xrandr --query to get the names of your primary output and the unconnected one you plan to use for the virtual screen. For example, I get the following output: LVDS-0 connected primary 1440x900+0+0 (normal left inverted right x axis y axis) 331mm x 207mm [various monitor modes elided]DP-0 disconnected (normal left inverted right x axis y axis)DP-1 disconnected (normal left inverted right x axis y axis)DP-2 disconnected (normal left inverted right x axis y axis)DP-3 disconnected (normal left inverted right x axis y axis)DP-4 disconnected (normal left inverted right x axis y axis)DP-5 disconnected (normal left inverted right x axis y axis) So in my case, the laptop screen is LVDS-0 and I have DP-0 through DP-5 available. Like you, I'll choose DP-1 for the virtual screen. You will need to add an xorg.conf Screen configuration, as well as a Device section for the screen to use. That can be anywhere xorg will find it. I put mine in /etc/X11/xorg.conf.d/30-virtscreen.conf . In that file, a minimal setup is: Section "Device" Identifier "nvidiagpu" Driver "nvidia"EndSectionSection "Screen" Identifier "nvidiascreen" Device "nvidiagpu" Option "ConnectedMonitor" "LVDS-0,DP-1"EndSection This tells the driver to use the DP-1 output even if it doesn't detect a monitor connected to it. Note that you have to list your laptop monitor (or a real, physical monitor) too, if you want to use it! If you only list the virtual output, the driver will not activate any other outputs, even if it detects monitors connected to them. Now restart X. You should see two active monitors with xrandr and other display-querying programs. On my system the newly-activated virtual output has a variety of resolutions available. I can select any of them (e.g. via xrandr --output DP-1 --mode 1600x900 ) and the virtual output will resize itself. I cannot, however, add new modes (e.g. if I wanted a 1920x1080 resolution). That still gives me the "invalid parameter attributes" error. Fortunately, I can live with the modes available to me. With luck, you'll have something useful preset for you, too. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/559942",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/367359/"
]
} |
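For the merge asked about in question 559,942 above, a minimal awk sketch (assumptions: whitespace-separated input with the ID in the first column; the order of IDs in the output is not preserved, so pipe through sort if that matters):
awk -v OFS='\t' -v NA='-NA-' '
FNR == 1 { nf++ }                          # nf = index of the file currently being read
{
    id = $1
    rest = $0
    sub(/^[^ \t]+[ \t]+/, "", rest)        # everything after the ID column
    ids[id] = 1
    val[id, nf] = rest
}
END {
    for (id in ids) {
        line = id
        for (f = 1; f <= nf; f++)
            line = line OFS (((id, f) in val) ? val[id, f] : NA)
        print line
    }
}' File1 File2 FILE3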
559,952 | I would like to append the following to my ~/.bash_profile: # ---------------# PERSONNAL EDITS# anaconda3. /opt/anaconda3/etc/profile.d/conda.sh To do so, I use the script: #!/bin/bashapp_str=\'\n# ---------------\n# PERSONNAL EDITS\n\n# anaconda3\n. /opt/anaconda3/etc/profile.d/conda.sh'echo -e $app_str >> ~/.bash_profile However, the result is the following: <lines above...> # --------------- # PERSONNAL EDITS # anaconda3 . /opt/anaconda3/etc/profile.d/conda.sh There is a spacing in front of each line appended to the file. How do I remove the spacing? | I have a solution that works for me, though it's missing the ability to choose completely arbitrary resolutions. To be clear, this is just for the proprietary nvidia driver; the open-source nouveau driver works differently, as do other video card drivers. The short version is: Use the ConnectedMonitor nvidia xorg.conf Screen option to activate the extra output in addition to your main monitor. Here's the long version: Run xrandr --query to get the names of your primary output and the unconnected one you plan to use for the virtual screen. For example, I get the following output: LVDS-0 connected primary 1440x900+0+0 (normal left inverted right x axis y axis) 331mm x 207mm [various monitor modes elided]DP-0 disconnected (normal left inverted right x axis y axis)DP-1 disconnected (normal left inverted right x axis y axis)DP-2 disconnected (normal left inverted right x axis y axis)DP-3 disconnected (normal left inverted right x axis y axis)DP-4 disconnected (normal left inverted right x axis y axis)DP-5 disconnected (normal left inverted right x axis y axis) So in my case, the laptop screen is LVDS-0 and I have DP-0 through DP-5 available. Like you, I'll choose DP-1 for the virtual screen. You will need to add an xorg.conf Screen configuration, as well as a Device section for the screen to use. That can be anywhere xorg will find it. I put mine in /etc/X11/xorg.conf.d/30-virtscreen.conf . In that file, a minimal setup is: Section "Device" Identifier "nvidiagpu" Driver "nvidia"EndSectionSection "Screen" Identifier "nvidiascreen" Device "nvidiagpu" Option "ConnectedMonitor" "LVDS-0,DP-1"EndSection This tells the driver to use the DP-1 output even if it doesn't detect a monitor connected to it. Note that you have to list your laptop monitor (or a real, physical monitor) too, if you want to use it! If you only list the virtual output, the driver will not activate any other outputs, even if it detects monitors connected to them. Now restart X. You should see two active monitors with xrandr and other display-querying programs. On my system the newly-activated virtual output has a variety of resolutions available. I can select any of them (e.g. via xrandr --output DP-1 --mode 1600x900 ) and the virtual output will resize itself. I cannot, however, add new modes (e.g. if I wanted a 1920x1080 resolution). That still gives me the "invalid parameter attributes" error. Fortunately, I can live with the modes available to me. With luck, you'll have something useful preset for you, too. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/559952",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/388137/"
]
} |
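For the append described in question 559,952 above, a sketch that side-steps echo -e and unquoted expansion entirely by appending a quoted here-document, so no escape processing or word splitting is involved (the block text is copied from the question, including its comment spelling):
cat >> ~/.bash_profile <<'EOF'

# ---------------
# PERSONNAL EDITS

# anaconda3
. /opt/anaconda3/etc/profile.d/conda.sh
EOF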
559,985 | I want to tar a directory and write the result to stdout , then pipe it to a compression program, like this: tar -cvf - /tmp/source-dir | lzip -o /media/my-usb/result.lz - I have been using pipe all the time with commands which output several lines of text. Now I am wondering what would happen when I pipes a (fast) command with very large output such as tar and a very slow compression command followed? Will tar wait for its output to be consumed by lzip ? Or it just does as fast as it can then outputs everything to RAM? It will be a disaster in low RAM system if the latter is true. | When the data producer ( tar ) tries to write to the pipe too quickly for the consumer ( lzip ) to have time to read all of it, it will block until lzip has had time to read what tar is writing. There is a small buffer associated with the pipe, but its size is likely to be smaller than the size of most tar archives. There is no risk of filling up your system's RAM with your pipeline. "Blocking" simply means that when tar does a call to the write() library function (or equivalent), the call won't return until the data has been delivered to the pipe buffer, which could take a bit of time if lzip is slow to read from that same buffer. You should be able to see this in top where tar would slow down and sleep a lot compared to lzip (assuming tar is in fact quicker than lzip ). You would therefore not fill up a significant amount of RAM with your pipeline. To do that (if you wanted to), you could use something like pv in the middle, with some large buffer (here, a gigabyte): tar -cvf - /tmp/source-dir | pv --buffer-size 1G | lzip -o /media/my-usb/result.lz - This would still block tar whenever pv blocks. pv would block when its buffer is full and it can't write to lzip . The reverse situation works in a similar way, i.e. if you have a slow left-hand side of a pipe writing to a fast right-hand side, the consumer on the right would block on read() until there is data to be read from the pipe. This (data I/O) is the only thing that synchronises the processes taking part in a pipeline. Apart from reading and writing (and occasionally blocking while waiting for someone else to read or write), they would run independently of each other. | {
"score": 7,
"source": [
"https://unix.stackexchange.com/questions/559985",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/352053/"
]
} |
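Building on the pv suggestion in the answer above: even without a large buffer, putting pv in the middle makes the back-pressure visible, because the displayed transfer rate is capped by how fast lzip drains the pipe (-b bytes, -r current rate, -a average rate, -t elapsed time):
tar -cf - /tmp/source-dir | pv -brat | lzip -o /media/my-usb/result.lz -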
560,064 | I have set ulimit in /etc/security/limits.conf . When I log in into my desktop environment as user testuser normally (using slim login manager), everything works fine. When I log in as user testuser via Xephyr (from my other session as another user), everything works fine except chromium browser. This is the error I get in dmesg : Chrome_ChildIOT (2472): VmData 4310827008 exceed data ulimit 4294967296. Update limits or use boot option ignore_rlimit_data. And chromium is unusable (it starts, but waits indefinitely to load any page) All other programs except chromium have correct limits set. I have verified this using: find /proc/ -maxdepth 1 -user testuser -exec cat {}/limits \; | grep 'Max data size' all PIDs have Max data size set to unlimited: Max data size unlimited unlimited bytes except chromium processes: Max data size 4294967296 4294967296 bytes Max data size 17179869184 17179869184 bytes Max data size 17179869184 17179869184 bytes Max data size 17179869184 17179869184 bytes Max data size 17179869184 17179869184 bytes Max data size 17179869184 17179869184 bytes Max data size 17179869184 17179869184 bytes Max data size 17179869184 17179869184 bytes I would like to understand: 1) why does chromium have different limits than all other programs ? 2) where do the "default" limits come from (where does chromium take the limit 4294967296 from ? 3) how can I change these default limits once and for all, globally, for all processes regardless whether they use pam or not ? | Why does chromium have different limits than all other programs? Chromium may look like a simple application but it is not, first there is the multi-threading which makes chromium run multiple process for different tasks (extensions, tab, core web-engine, etc) then the virtualisation, chromium use many sandbox to isolate the browsing activities which make it use more ressources than other application, also the used web-engine is not a light one... add to that the different needed libraries that are required to run and other heavy ressources functions... some related documentations are available here , here and this article have some useful infos. Where do the "default" limits come from (where does chromium take the limit 4294967296 from? 4294967296 bytes (4096 MB, or 4GB limit) chromium have by design a 4GB limit this is hard coded, more infos about this is available here and here How can I change these default limits once and for all, globally, for all processes regardless whether they use pam or not? Not an easy task but you are doing it right for most usual process, now for complicated process like chromium you need to customize your config for each "special" app. 
For chromium there are some command parameters that can be used to customise/enable/disable features, you can try to use some of them to make chromium suit your needs, here are some interesting switch: Those switch can be used with a command line like this /usr/bin/chromium --single-process --single-process--aggressive-tab-discard--aggressive-cache-discard--ui-compositor-memory-limit-when-visible-mb--disk-cache-dir # Use a specific disk cache location, rather than one derived from the UserDatadir.--disk-cache-size # Forces the maximum disk space to be used by the disk cache, in bytes.--force-gpu-mem-available-mb # Sets the total amount of memory that may be allocated for GPU resources--gpu-program-cache-size-kb # Sets the maximum size of the in-memory gpu program cache, in kb--mem-pressure-system-reserved-kb # Some platforms typically have very little 'free' memory, but plenty is available in buffers+cached. For such platforms, configure this amount as the portion of buffers+cached memory that should be treated as unavailable. If this switch is not used, a simple pressure heuristic based purely on free memory will be used.--renderer-process-limit # Overrides the default/calculated limit to the number of renderer processes. Very high values for this setting can lead to high memory/resource usage or instability. You can also run chromium with a script that update the ulimit for it (note that values lower than 4GB may crash the browser...) ulimit -Sv 4352000000 #4.2GB/usr/bin/chromium# or 0.42GB, it works but the browser may crash#ulimit -Sv 435200000 #0.42GB#/usr/bin/chromium | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/560064",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/155832/"
]
} |
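A hedged helper for the limit-inspection step discussed above — show each running chromium process together with its data/address-space limits (the process name chromium is an assumption; on some distributions it is chromium-browser):
for pid in $(pgrep -u "$USER" chromium); do
    printf '== PID %s ==\n' "$pid"
    grep -E 'Max (data size|address space)' "/proc/$pid/limits"
done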
560,065 | How can I manually download a snap package? Preferably as a non-root user with wget ? For example, the Snapcraft page for Chromium is here: https://snapcraft.io/chromium How can I find the URL(s) at which Chromium's snap files can be downloaded? | snap packages are not meant to be downloaded manually so it's quite tricky. I found this on the Ubuntu side of StackExchange. As a non-root user you can use curl to retrieve all the information about a package like so : curl -H 'Snap-Device-Series: 16' http://api.snapcraft.io/v2/snaps/info/chromium >> chromium.info If you want another package you just have to replace chromium by another pacakge name. The previous command will copy all the information about the package into a chromium.info file. If a JSON processor like jq is installed on your system you can pipe the result of curl to jq with curl -H 'Snap-Device-Series: 16' http://api.snapcraft.io/v2/snaps/info/chromium | jq to ease your reading. The result will contain many entries for various channels and architectures, look for the one that fits you best. You will find something like { "channel": { "architecture": "arm64", "name": "edge", "released-at": "2019-12-21T08:18:39.959452+00:00", "risk": "edge", "track": "latest" }, "created-at": "2019-12-21T08:16:39.600827+00:00", "download": { "deltas": [], "sha3-384": "92c0824bfc8c136a2b8179fcdd14647f7174dd3103397e107b0100decc1ac8b29eb22fbba61949a4e1fdf1a282f2a8e0", "size": 144859136, "url": "https://api.snapcraft.io/api/v1/snaps/download/XKEcBqPM06H1Z7zGOdG5fbICuf8NWK5R_985.snap" }, "revision": 985, "type": "app", "version": "80.0.3987.16"}, and now you can wget the given URL to download your package file. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/560065",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/65444/"
]
} |
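Continuing the answer above, the download URL can also be extracted non-interactively. A sketch assuming the JSON layout shown there; the channel-map field name and the stable/amd64 filter are assumptions, so inspect the raw curl output if the shape differs:
url=$(curl -s -H 'Snap-Device-Series: 16' 'http://api.snapcraft.io/v2/snaps/info/chromium' |
      jq -r '[."channel-map"[] | select(.channel.risk == "stable" and .channel.architecture == "amd64")][0].download.url')
wget "$url"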
560,074 | I have a scenario where I want to store values of command run output to array I have a dat set like below ID|SAL|COST|PER|TAG1"|"10.1"|"12.22"|"10.1"|"A" 2"|"10.223"|"12.333"|"10.1"|"B" after running the below command I am getting sum of column output as awk -F"|" 'NR==1{for(i=1;i<=NF;i++){if ($i ~ /SAL|COST/){print i;}}}' demo.txt23 But the issue is I want the generated output need to be stored in arrayI am using below approach not working using mapfile -t mapfile -t array < <(awk -F"|" 'NR==1{for(i=1;i<=NF;i++){if ($i ~ /SAL|COST/){print i;}}}' demo.txt) when I print echo "${array[@]}" it gives me only one output value not all the value output:3 **need all the output values need to be printed. when I echo array ** 23 | snap packages are not meant to be downloaded manually so it's quite tricky. I found this on the Ubuntu side of StackExchange. As a non-root user you can use curl to retrieve all the information about a package like so : curl -H 'Snap-Device-Series: 16' http://api.snapcraft.io/v2/snaps/info/chromium >> chromium.info If you want another package you just have to replace chromium by another pacakge name. The previous command will copy all the information about the package into a chromium.info file. If a JSON processor like jq is installed on your system you can pipe the result of curl to jq with curl -H 'Snap-Device-Series: 16' http://api.snapcraft.io/v2/snaps/info/chromium | jq to ease your reading. The result will contain many entries for various channels and architectures, look for the one that fits you best. You will find something like { "channel": { "architecture": "arm64", "name": "edge", "released-at": "2019-12-21T08:18:39.959452+00:00", "risk": "edge", "track": "latest" }, "created-at": "2019-12-21T08:16:39.600827+00:00", "download": { "deltas": [], "sha3-384": "92c0824bfc8c136a2b8179fcdd14647f7174dd3103397e107b0100decc1ac8b29eb22fbba61949a4e1fdf1a282f2a8e0", "size": 144859136, "url": "https://api.snapcraft.io/api/v1/snaps/download/XKEcBqPM06H1Z7zGOdG5fbICuf8NWK5R_985.snap" }, "revision": 985, "type": "app", "version": "80.0.3987.16"}, and now you can wget the given URL to download your package file. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/560074",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/388004/"
]
} |
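For the array capture in question 560,074 above, a sketch plus the detail that most often explains the symptom: expanding an array without an index, as in echo $array, yields only element 0, so use "${array[@]}" or declare -p to see everything:
mapfile -t cols < <(awk -F'|' 'NR==1 { for (i = 1; i <= NF; i++) if ($i ~ /SAL|COST/) print i }' demo.txt)
declare -p cols                        # e.g. declare -a cols=([0]="2" [1]="3")
printf 'column: %s\n' "${cols[@]}"     # one line per captured value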
560,167 | I am trying to prevent the user from shutting down or rebooting without physically removing a flash card from the machine. To do that, I have written a SystemD service, removeflash.service: [Unit]Description=Prompt user to remove flash card[Service]ExecStop=/usr/lib/systemd/flashshutdown.shType=oneshotRemainAfterExit=yes[Install]WantedBy=multi-user.targetRequires=rsyslog.service The flashshutdown.sh is a bash script, as follows: #! /bin/bash#while [ -e /dev/flash ] ; do echo "Please remove flash card" logger "GHB: Please remove flash card" sleep 5done I didn't expect the echo to do anything at shutdown, and it doesn't. However, I was hoping the logger command would work. Without the Requires=rsyslog.service, my service exited after the rsyslog.service was shut down; I inserted the requirement to prevent that happening, but the only difference is that the shutdown of removeflash.service is earlier in the shutdown sequence. By a bit of luck, the purpose of the service is rescued by the fact that systemd itself outputs to the console a message that it is running a job for Prompt user to remove flash card. What is the proper way to issue a message to the console? | Standard output and error of services under service management — be it s6, runit, perp, daemontools, nosh service management, or systemd — is not the console. It is a pipe connected to some form of log writer. For a systemd service you need a TTYPath=/dev/console and a StandardOutput=tty in the .INI file to change this, StandardInput=tty if you want to read (but you do not) as well as write. Witness systemd's pre-supplied debug-shell.service . This is a general principle that is not systemd specific. Dæmon context involves (amongst other things) not having a controlling terminal and not having open file descriptors for terminals, and under proper service management (such as all of the daemontools family) this is where one starts from , the state that a service process begins in when the supervisor/service manager forks it. So to use the console the service has to explicitly open it. In systemd, the aforementioned TTYPath and StandardInput settings cause the forked child process to open the console before it executes the service program proper. This is hidden inside systemd and you do not really get to see it. In the run program of a similar nosh service, the run program explicitly uses some of the nosh toolset chain-loading tools to do the same thing before executing the main program ( emergency-login in this case): % cat /etc/service-bundles/services/emergency-login@console/service/run #!/bin/nosh#Emergency super-user login on consolesetsidvc-get-tty consoleopen-controlling-ttyvc-reset-tty --hard-resetline-banner "Emergency mode log-in."emergency-login% Ironically, you do not need the logger command, or any syslog dependencies. There is no point in writing this interactive prompt to a log . But you really should run this service unprivileged, on principle. It does not need superuser privileges, for anything that it does. On another principle, don't make your script use #!/bin/bash unless you really are going to use Bashisms. One of the greatest speedups to system bootstrap/shutdown in the past couple of decades on Debian Linux and Ubuntu Linux was the switch of /bin/sh from the Bourne Again shell to the Debian Almquist shell. 
If you are going to write a script as simple as this, keep it POSIX-conformant and use #!/bin/sh anyway , even if you are not using Debian/Ubuntu, and on Debian/Ubuntu you'll get the Debian Almquist shell benefit as a bonus. Moreover, if you decide to have more than a glass TTY message, with a tool like dialog , you will need to set the TERM environment variable so that your programs can look up the right escape and control sequences to emit in the terminfo database. Again, witness debug-shell.service . (In the aforegiven run program, for comparison, the vc-get-tty tool sets TERM .) Similarly, you will want script errors to be logged. So standard error should be left pointing at the journal with StandardError=journal . Here's a nosh service run program that illustrates the equivalent of this, and also shows dropping user privileges for a program that really does not need them, which in a systemd .INI file would be User=daemon : % cat /etc/service-bundles/services/monitor-fsck-progress/service/run #!/bin/nosh#local socket used for monitor-fsck-progresslocal-stream-socket-listen --systemd-compatibility --backlog 2 --mode 0644 /run/fsck.progresssetsidsetlogin -- daemonvc-get-tty consolefdmove -c 4 2open-controlling-ttyfdmove 2 4setuidgid -- daemon./service% The program run by ./service in this case presents a full-screen TUI on the console, whilst its errors are sent to the logging service. This is the stuff that one needs to do, under service managers in general, in order to run such programs as services, talking to the console. Of course, any such full-screen TUI program will conflict with systemd's "A stop job is running", also written to the console. But that is your problem. ☺ Further reading https://unix.stackexchange.com/a/468457/5132 https://unix.stackexchange.com/a/250965/5132 https://unix.stackexchange.com/a/499148/5132 https://unix.stackexchange.com/a/233855/5132 whiptail or dialog | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/560167",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/388724/"
]
} |
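A compact way to apply the settings described in the answer above to the unit from the question, as a drop-in (the path follows the removeflash.service name used there; User= could be added to the same drop-in to drop privileges, as the answer suggests):
mkdir -p /etc/systemd/system/removeflash.service.d
cat > /etc/systemd/system/removeflash.service.d/console.conf <<'EOF'
[Service]
TTYPath=/dev/console
StandardOutput=tty
StandardError=journal
EOF
systemctl daemon-reload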
560,168 | I joined to ask this question as it did not seem like a stack overflow question. Question: Node and npm running on nvm installed and working, but nothing (that I have tried) works as I get errors. I suspect the errors are at the OS level (permissions or a file issue) but I am not sure how to go about solving these. Could you tell me either what is wrong (if there is enough info below) or where to look for a solution (if the info below is not enough or incorrect). I have been on Manjaro for a while and I am still a beginner on Linux. I had been using NVM to manage multiple node.js versions and a while ago it stopped working. I only use my computer sometimes and I had assumed it was some node issue but today I went in and completely removed nvm/node then reinstalled them. Then I tried to start a standard NUXT app and it failed so I think it is at the OS level. I have NVM installed, I then install the LTS node (12.14) then I try a new nuxt project: npx create-nuxt-app test Answer all the questions, instillation starts...Lots of errors - but it finishes...Example from errors: ../lib/kerberos.cc: In static member function ‘static Nan::NAN_METHOD_RETURN_TYPE Kerberos::AuthGSSServerStep(Nan::NAN_METHOD_ARGS_TYPE)’:../lib/kerberos.cc:802:44: error: no matching function for call to ‘v8::Value::ToObject()’ 802 | Local<Object> object = info[0]->ToObject(); | ^In file included from /home/un/.cache/node-gyp/12.14.0/include/node/node.h:63, from ../lib/kerberos.h:4, from ../lib/kerberos.cc:1:/home/un/.cache/node-gyp/12.14.0/include/node/v8.h:2576:44: note: candidate: ‘v8::MaybeLocal<v8::Object> v8::Value::ToObject(v8::Local<v8::Context>) const’ 2576 | V8_WARN_UNUSED_RESULT MaybeLocal<Object> ToObject( | ^~~~~~~~/home/un/.cache/node-gyp/12.14.0/include/node/v8.h:2576:44: note: candidate expects 1 argument, 0 provided node-gyp seems to be a common item in the errors, and the npm page says it is "for compiling native addon modules for Node.js" which is why I was thinking it was an OS level issue. so the app is set up, but when I try to run npm run dev I get the following: > [email protected] dev /home/un/test> nuxtsh: /home/un//test/node_modules/.bin/nuxt: Permission deniednpm ERR! code ELIFECYCLEnpm ERR! errno 126npm ERR! [email protected] dev: `nuxt`npm ERR! Exit status 126npm ERR! npm ERR! Failed at the [email protected] dev script.npm ERR! This is probably not a problem with npm. There is likely additional logging output above.npm ERR! A complete log of this run can be found in:npm ERR! /home/un/.npm/_logs/2020-01-03T15_35_00_168Z-debug.log Note for those not familiar with node/npm/NUXT: This is just the standard instillation process. There is no custom code from me here, this is what lots of people use to start a project all the time so I can't understand why it is not working (especially when it used to be working). I was wondering if the 'permission denied' means it is a user access issue but I am not sure how to check. I also get some errors about files been newer when doing a system update. I would appreciate any help. If you don't have a solution then at least some advice on what might be the issue or where to look for solutions. Let me know if you want any additional info. Also not sure what other tags to add? 
file-errors, instillation, update-issues Edit:File permissions: Node in .nvm/versions/v12.14.0 is -rwxr-xr-x while npm and npx are lrwxrwxrwx which link to the actual npm with -rwxr-xr-x but the actual npx is -rw-r--r-- (no executable for the user) but I have never changed these and like I said, it used to work.Every folder in node_modules has drwxr-xr-x, I looked in one folder and the js files are -rw-r--r-- (but I assume as they are JavaScript they wont need to be executed... Edit2:I just noticed that there is no .bin folder in my node_modules folder and there is no nuxt folder at all, but I would think that this would be a file not found error instead of 'Permission denied'. I then tried chmod 775 -R node_modules and ran build again. This time it created the .bin file but still failed on webpack (node_modules/.bin/webpack: Permission denied) although this link was lrwxrwxrwx and the original file is -rwxrwxr-x While this made things change, I am still unable to start the project. I also think this is not a normal way to deal with this. if it was, the website would say this was a requirement. | Standard output and error of services under service management — be it s6, runit, perp, daemontools, nosh service management, or systemd — is not the console. It is a pipe connected to some form of log writer. For a systemd service you need a TTYPath=/dev/console and a StandardOutput=tty in the .INI file to change this, StandardInput=tty if you want to read (but you do not) as well as write. Witness systemd's pre-supplied debug-shell.service . This is a general principle that is not systemd specific. Dæmon context involves (amongst other things) not having a controlling terminal and not having open file descriptors for terminals, and under proper service management (such as all of the daemontools family) this is where one starts from , the state that a service process begins in when the supervisor/service manager forks it. So to use the console the service has to explicitly open it. In systemd, the aforementioned TTYPath and StandardInput settings cause the forked child process to open the console before it executes the service program proper. This is hidden inside systemd and you do not really get to see it. In the run program of a similar nosh service, the run program explicitly uses some of the nosh toolset chain-loading tools to do the same thing before executing the main program ( emergency-login in this case): % cat /etc/service-bundles/services/emergency-login@console/service/run #!/bin/nosh#Emergency super-user login on consolesetsidvc-get-tty consoleopen-controlling-ttyvc-reset-tty --hard-resetline-banner "Emergency mode log-in."emergency-login% Ironically, you do not need the logger command, or any syslog dependencies. There is no point in writing this interactive prompt to a log . But you really should run this service unprivileged, on principle. It does not need superuser privileges, for anything that it does. On another principle, don't make your script use #!/bin/bash unless you really are going to use Bashisms. One of the greatest speedups to system bootstrap/shutdown in the past couple of decades on Debian Linux and Ubuntu Linux was the switch of /bin/sh from the Bourne Again shell to the Debian Almquist shell. If you are going to write a script as simple as this, keep it POSIX-conformant and use #!/bin/sh anyway , even if you are not using Debian/Ubuntu, and on Debian/Ubuntu you'll get the Debian Almquist shell benefit as a bonus. 
Moreover, if you decide to have more than a glass TTY message, with a tool like dialog , you will need to set the TERM environment variable so that your programs can look up the right escape and control sequences to emit in the terminfo database. Again, witness debug-shell.service . (In the aforegiven run program, for comparison, the vc-get-tty tool sets TERM .) Similarly, you will want script errors to be logged. So standard error should be left pointing at the journal with StandardError=journal . Here's a nosh service run program that illustrates the equivalent of this, and also shows dropping user privileges for a program that really does not need them, which in a systemd .INI file would be User=daemon : % cat /etc/service-bundles/services/monitor-fsck-progress/service/run #!/bin/nosh#local socket used for monitor-fsck-progresslocal-stream-socket-listen --systemd-compatibility --backlog 2 --mode 0644 /run/fsck.progresssetsidsetlogin -- daemonvc-get-tty consolefdmove -c 4 2open-controlling-ttyfdmove 2 4setuidgid -- daemon./service% The program run by ./service in this case presents a full-screen TUI on the console, whilst its errors are sent to the logging service. This is the stuff that one needs to do, under service managers in general, in order to run such programs as services, talking to the console. Of course, any such full-screen TUI program will conflict with systemd's "A stop job is running", also written to the console. But that is your problem. ☺ Further reading https://unix.stackexchange.com/a/468457/5132 https://unix.stackexchange.com/a/250965/5132 https://unix.stackexchange.com/a/499148/5132 https://unix.stackexchange.com/a/233855/5132 whiptail or dialog | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/560168",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/388809/"
]
} |
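One hedged check for the 'Permission denied' in question 560,168 above: when a file has its execute bits set yet still cannot be run, the filesystem holding it is sometimes mounted noexec, which is cheap to rule out before changing modes by hand:
findmnt -T ~/test/node_modules -o TARGET,FSTYPE,OPTIONS    # look for noexec in the OPTIONS column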
560,194 | I have an input file with these fields: ENST00000456328.2 1657 1350.015 0 0 I am trying awk to remove the number after the decimal and print the rest as it is awk -F[.] '{print $1"\t"$2"\t"$3}{next;}' But it doesn't work, as it gives an output like this: ENST00000456328 2 1657 1350 015 0 0 Can someone help. regards. | Assuming the input is tab-delimited and that you'd like to keep it that way, you can remove the version numbers from the Ensembl stable IDs with $ awk 'BEGIN { OFS=FS="\t" } { sub("\\..*", "", $1); print }' fileENST00000456328 1657 1350.015 0 0 This applies a substitution to the first tab-delimited field (only) that removes everything after the first dot. Similarly with sed : $ sed 's/\.[^[:blank:]]*//' fileENST00000456328 1657 1350.015 0 0 This removes any non-blank characters after the first dot on each line. You could also use \.[[:digit:]]* as the pattern, which would explicitly match digits instead of non-blanks. If you have non-versioned Ensembl IDs, or IDs from another database, in your data, then you may want to make sure that you match a versioned Ensembl ID before modifying the line. With awk , this may be done with $ awk 'BEGIN { OFS=FS="\t" } /^ENS[^[:blank:]]*\./ { sub("\\..*", "", $1) } { print }' fileENST00000456328 1657 1350.015 0 0 The print is now in a separate block from the block that does the modification to the first field. This is so that lines that all lines, modified or not, are printed. The whole { print } block may be replaced by the shorter 1 , if you are short on time or space for typing. And with sed : $ sed '/^ENS[^[:blank:]]*\./s/\.[^[:blank:]]*//' fileENST00000456328 1657 1350.015 0 0 The sed code already prints all lines, whether modified or not, so no other modification has to be made (whereas in the awk code, the outputting of the result had to be slightly justified compared with the first awk variation). In these last two variants, we match a versioned Ensembl ID at the start of a line with the regular expression ^ENS[^[:blank:]]*\. before attempting to do any modifications. None of the variations above cares or need to care about the rest of the data on the line. Each line may contain additional fields, and these will be passed on unmodified. Using a dot as the field delimiter is inspired, but will lead to issues as more data on the line contains dots. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/560194",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/315919/"
]
} |
560,220 | I have many files in a directory that have names containing ? , and I want to remove these ? characters. Could you please help me with that? | Assuming the input is tab-delimited and that you'd like to keep it that way, you can remove the version numbers from the Ensembl stable IDs with $ awk 'BEGIN { OFS=FS="\t" } { sub("\\..*", "", $1); print }' fileENST00000456328 1657 1350.015 0 0 This applies a substitution to the first tab-delimited field (only) that removes everything after the first dot. Similarly with sed : $ sed 's/\.[^[:blank:]]*//' fileENST00000456328 1657 1350.015 0 0 This removes any non-blank characters after the first dot on each line. You could also use \.[[:digit:]]* as the pattern, which would explicitly match digits instead of non-blanks. If you have non-versioned Ensembl IDs, or IDs from another database, in your data, then you may want to make sure that you match a versioned Ensembl ID before modifying the line. With awk , this may be done with $ awk 'BEGIN { OFS=FS="\t" } /^ENS[^[:blank:]]*\./ { sub("\\..*", "", $1) } { print }' fileENST00000456328 1657 1350.015 0 0 The print is now in a separate block from the block that does the modification to the first field. This is so that lines that all lines, modified or not, are printed. The whole { print } block may be replaced by the shorter 1 , if you are short on time or space for typing. And with sed : $ sed '/^ENS[^[:blank:]]*\./s/\.[^[:blank:]]*//' fileENST00000456328 1657 1350.015 0 0 The sed code already prints all lines, whether modified or not, so no other modification has to be made (whereas in the awk code, the outputting of the result had to be slightly justified compared with the first awk variation). In these last two variants, we match a versioned Ensembl ID at the start of a line with the regular expression ^ENS[^[:blank:]]*\. before attempting to do any modifications. None of the variations above cares or need to care about the rest of the data on the line. Each line may contain additional fields, and these will be passed on unmodified. Using a dot as the field delimiter is inspired, but will lead to issues as more data on the line contains dots. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/560220",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/299545/"
]
} |
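For question 560,220 above (stripping ? from file names), a small bash sketch; mv -i is used so nothing is overwritten silently if two names collide after the change:
for f in ./*'?'*; do
    [ -e "$f" ] || continue             # nothing matched: the literal pattern came back
    mv -i -- "$f" "${f//'?'/}"
done
# if perl's rename(1) is installed, a one-liner does the same: rename 's/\?//g' ./*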
560,254 | I'm getting an error with a very simple play, - name: add to environment lineinfile: path: /etc/environment line: "{{ item }}" loop: - "foo=1" - "bar=2" I simply want to add those lines to that file if they don't exist. The error I get is, fatal: [10.1.38.15]: FAILED! => {"msg": "The task includes an option with an undefined variable. The error was: 'item' is undefined\n\nThe error appears to have been in '/home/ecarroll/cp/ansible/roles/sandbox/tasks/main.yml': line 6, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n dest: /\n- name: add to environment\n ^ here\n"} | You have mis-indented your loop directive. It's not an argument to lineinfile ; it's a task setting: - name: add to environment lineinfile: path: /etc/environment line: "{{ item }}" loop: - "foo=1" - "bar=2" | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/560254",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/3285/"
]
} |
560,275 | I was trying to upgrade debian 9 to 10 but when I tried to run sudo apt-get update, sudo apt-get upgrade and sudo apt-get full-upgrade, they all got this error message: optiplex@optiplex:~$ sudo apt-get upgradeReading package lists... DoneBuilding dependency tree Reading state information... DoneYou might want to run 'apt --fix-broken install' to correct these.The following packages have unmet dependencies: linux-image-generic-lts-xenial : Depends: linux-firmware but it is not installedE: Unmet dependencies. Try 'apt --fix-broken install' with no packages (or specify a solution). I tried running apt --fix-broken install but then i just got this error message: optiplex@optiplex:~$ sudo apt --fix-broken installReading package lists... DoneBuilding dependency tree Reading state information... DoneCorrecting dependencies... DoneThe following package was automatically installed and is no longer required: linux-image-4.9.0-8-amd64Use 'sudo apt autoremove' to remove it.The following additional packages will be installed: linux-firmwareThe following NEW packages will be installed: linux-firmware0 upgraded, 1 newly installed, 0 to remove and 10 not upgraded.3 not fully installed or removed.Need to get 0 B/33.9 MB of archives.After this operation, 127 MB of additional disk space will be used.Do you want to continue? [Y/n] yWARNING: The following packages cannot be authenticated! linux-firmwareInstall these packages without verification? [y/N] y(Reading database ... 514688 files and directories currently installed.)Preparing to unpack .../linux-firmware_1.127.24_all.deb ...Unpacking linux-firmware (1.127.24) ...dpkg: error processing archive /var/cache/apt/archives/linux-firmware_1.127.24_all.deb (--unpack): trying to overwrite '/lib/firmware/cis/PE-200.cis', which is also in package firmware-linux-free 3.4dpkg-deb: error: subprocess paste was killed by signal (Broken pipe)Errors were encountered while processing: /var/cache/apt/archives/linux-firmware_1.127.24_all.debE: Sub-process /usr/bin/dpkg returned an error code (1) Could anybody help me with this? EDIT: I was going through the instructions given by GAD3R but I have a slightly different error message this time whenever I trie to do anything with apt: dpkg: error processing package snapd (--configure): subprocess installed post-installation script returned error exit status 1Errors were encountered while processing: snapdE: Sub-process /usr/bin/dpkg returned an error code (1) EDIT 2: I got as far as doing apt-get upgrade but I keep getting errors: ... 
Selecting previously unselected package x11proto-dev.Preparing to unpack .../513-x11proto-dev_2018.4-4_all.deb ...Unpacking x11proto-dev (2018.4-4) ...Preparing to unpack .../514-xbrlapi_5.6-10_amd64.deb ...Unpacking xbrlapi (5.6-10) over (5.4-7+deb9u1) ...Preparing to unpack .../515-xscreensaver-data_5.42+dfsg1-1_amd64.deb ...Unpacking xscreensaver-data (5.42+dfsg1-1) over (5.36-1) ...Preparing to unpack .../516-xscreensaver-gl_5.42+dfsg1-1_amd64.deb ...Unpacking xscreensaver-gl (5.42+dfsg1-1) over (5.36-1) ...Preparing to unpack .../517-xserver-xephyr_2%3a1.20.4-1_amd64.deb ...Unpacking xserver-xephyr (2:1.20.4-1) over (2:1.19.2-1+deb9u5) ...Preparing to unpack .../518-xterm_344-1_amd64.deb ...Unpacking xterm (344-1) over (327-2) ...Errors were encountered while processing: /tmp/apt-dpkg-install-3w5XWy/270-libel-api-java_3.0.0-2_all.deb /tmp/apt-dpkg-install-3w5XWy/303-libjsp-api-java_2.3.4-2_all.deb /tmp/apt-dpkg-install-3w5XWy/361-libwebsocket-api-java_1.1-1_all.deb /tmp/apt-dpkg-install-3w5XWy/433-plymouth_0.9.4-1.1_amd64.debE: Sub-process /usr/bin/dpkg returned an error code (1) I am running Debian 9on an Optiplex 755 PC Thank you for your time! Nikolai. | To upgrade debian 9 to 10 you should have only the following lines in your /etc/apt/sources.list : deb http://deb.debian.org/debian buster maindeb http://deb.debian.org/debian-security/ buster/updates maindeb http://deb.debian.org/debian buster-updates main Disable the third party repository under /etc/apt/sources.list.d/ directory. In your case you have a ubuntu-xenial repository enabled ( which provide the linux-image-generic-lts-xenial package) it will break your system. Then run : sudo apt updatesudo apt install linux-image-amd64sudo apt upgradesudo apt dist-upgrade As said @Stephen Kitt , the linux-firmware_1.127.24_all.deb belong to Ubuntu Trusty which cause the error code (1) , it should be removed : apt purge linux-firmware . To solve the following error ( post-removal script): Errors were encountered while processing:/var/cache/apt/archives/linux-firmware_1.127.24_all.debE: Sub-process /usr/bin/dpkg returned an error code (1) Edit the /var/lib/dpkg/info/linux-firmware.postrm file and replace its content with: #!/bin/bash/bin/true To solve the following error ( post-installation script): subprocess installed post-installation script returned error exit status 1Errors were encountered while processing:snapdE: Sub-process /usr/bin/dpkg returned an error code (1) edit the /var/lib/dpkg/info/snapd.postinst as follows: #!/bin/bash/bin/true Update : Backup the /var/lib/dpkg/status and /var/lib/dpkg/status-old then replace status file by status-old : sudo cp /var/lib/dpkg/status /var/lib/dpkg/status.bak1sudo cp /var/lib/dpkg/status-old /var/lib/dpkg/status-old.bak1sudo mv /var/lib/dpkg/status-old /var/lib/dpkg/status Then run : sudo dpkg --configure -asudo apt cleansudo apt autocleansudo apt update sudo apt upgrade | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/560275",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/386653/"
]
} |
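A quick pre-flight sketch to go with the upgrade answer above — list every apt source that still points at stretch or at a foreign Ubuntu suite before starting, since those are what typically break the upgrade:
grep -rnE 'stretch|xenial|trusty' /etc/apt/sources.list /etc/apt/sources.list.d/ 2>/dev/null
apt-cache policy | grep -B1 'o=Ubuntu'    # any o=Ubuntu origin means an Ubuntu repository is still mixed in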
560,277 | I don't often have to mess with -f so I am a little bit at a loss as to why it is trying to remove so many packages, but I do believe allot of these packages I very much use, apache2, aptitude, cinnamon to mention just a few :O Currently I am unable to install packages, hence the trying of f You might want to run 'apt --fix-broken install' to correct these.The following packages have unmet dependencies. google-cloud-sdk : Depends: python-crcmod but it is not going to be installed Depends: python-google-compute-engine but it is not going to be installed zoom : Depends: ibusE: Unmet dependencies. Try 'apt --fix-broken install' with no packages (or specify a solution). Remove packages hutber@hutber:~$ sudo apt-get install -f[sudo] password for hutber: Reading package lists... DoneBuilding dependency tree Reading state information... DoneCorrecting dependencies... DoneThe following packages were automatically installed and are no longer required: apache2-bin apache2-data apache2-utils breeze-icon-theme cabextract chromium-codecs-ffmpeg-extra dbconfig-common dbconfig-mysql exo-utils fish fish-common git-man ipxe-qemu ipxe-qemu-256k-compat-efi-roms kded5 kdenlive-data kinit kio liba52-0.7.4 libapr1 libaprutil1 libaprutil1-dbd-sqlite3 libaprutil1-ldap libaribb24-0 libavresample-ffmpeg2 libbasicusageenvironment1 libcddb2 libdbusmenu-qt5-2 libdca0 libde265-0 libdirectfb-1.7-7 libdvbpsi10 libdvdcss2 libdvdnav4 libdvdread4 libebml4v5 libebur128-1 libenca0 liberror-perl libexo-1-0 libexo-common libexo-helpers libfaad2 libfdk-aac1 libfdt1 libfm-data libfm-gtk-data libgksu2-0 libgnome-keyring-common libgnome-keyring0 libgroupsock8 libgtop-2.0-10 libibverbs1 libiptcdata0 libiscsi7 libjbig-dev libjpeg-dev libjpeg-turbo8-dev libjpeg8-dev libjs-jquery libjs-sphinxdoc libjs-underscore libkate1 libkf5attica5 libkf5bookmarks-data libkf5bookmarks5 libkf5doctools5 libkf5filemetadata-data libkf5filemetadata3 libkf5globalaccel-data libkf5globalaccel5 libkf5kiofilewidgets5 libkf5kiontlm5 libkf5newstuff-data libkf5newstuff5 libkf5newstuffcore5 libkf5notifications-data libkf5notifications5 libkf5notifyconfig-data libkf5notifyconfig5 libkf5solid5 libkf5solid5-data libkf5sonnet5-data libkf5sonnetcore5 libkf5sonnetui5 libkf5textwidgets-data libkf5textwidgets5 libkf5wallet-bin libkf5wallet-data libkf5wallet5 libkf5xmlgui-bin libkf5xmlgui-data libkf5xmlgui5 libkwalletbackend5-5 liblensfun-data-v1 liblilv-0-0 liblivemedia62 liblua5.2-0 libluajit-5.1-2 libluajit-5.1-common libmad0 libmatroska6v5 libmbedcrypto1 libmbedtls10 libmbedx509-0 libmediainfo0v5 libmicrodns0 libmjpegutils-2.1-0 libmlt-data libmms0 libmpcdec6 libmpeg2-4 libmpeg2encpp-2.1-0 libmplex2-2.1-0 libmspack0 libnfs11 libnl-route-3-200 libofa0 libopenmpt-modplug1 libpcre16-3 libpcre2-32-0 libpcre32-3 libphonon4qt5-4 libplacebo4 libpng-dev libprotobuf-lite10 libpython3.7-minimal libqt4-dbus libqt4-declarative libqt4-network libqt4-script libqt4-sql libqt4-xml libqt4-xmlpatterns libqt5opengl5 libqt5positioning5 libqt5printsupport5 libqt5qml5 libqt5quick5 libqt5sensors5 libqt5texttospeech5 libqt5webchannel5 libqtcore4 libqtdbus4 libqtgui4 librados2 librbd1 librdmacm1 libresid-builder0c2a librtaudio6 libsdl-image1.2 libserd-0-0 libsidplay1v5 libsidplay2 libsord-0-0 libsoundtouch1 libsox-fmt-alsa libsox-fmt-base libsox3 libspandsp2 libsratom-0-0 libsrtp2-1 libssh2-1 libssl-dev libtagc0 libtiffxx5 libtinyxml2-6 libunshield0 libupnp6 libusageenvironment3 libusbredirparser1 libv4l2rds0 libva-wayland2 libvlc-bin libvlc-dev libvlc5 libvlccore9 libvo-aacenc0 
libvorbisidec1 libvulkan1 libwildmidi-config libwildmidi2 libxcb-xtest0 libxenstore3.0 libxfce4ui-1-0 libxfce4ui-common libxfce4util-common libxfce4util7 libxfconf-0-2 libxnvctrl0 libzbar0 libzen0v5 lxmenu-data mediainfo openrazer-driver-dkms oxygen-icon-theme phonon4qt5 phonon4qt5-backend-vlc php-cli-prompt php-composer-ca-bundle php-composer-semver php-composer-spdx-licenses php-phpseclib php-psr-log php-symfony-console php-symfony-debug php-symfony-filesystem php-symfony-finder php-symfony-polyfill-mbstring php-symfony-process php7.0-common php7.2-json php7.2-opcache php7.2-readline python-gobject python3-daemonize python3-distutils python3-lib2to3 python3-netifaces python3-notify2 python3-pyudev python3.7-minimal qdbus qml-module-qtgraphicaleffects qml-module-qtquick-controls qml-module-qtquick-dialogs qml-module-qtquick-layouts qml-module-qtquick-privatewidgets qml-module-qtquick-window2 qml-module-qtquick2 qtchooser qtcore4-l10n rawtherapee-data screen-resolution-extra seabios shtool smartmontools sox thunar-data unshield vlc-bin vlc-data vlc-l10n vlc-plugin-base vlc-plugin-qt vlc-plugin-video-output xautomation xfconf xfe-themesUse 'sudo apt autoremove' to remove them.The following additional packages will be installed: libutempter0 lynx lynx-common xtermSuggested packages: xfonts-cyrillicThe following packages will be REMOVED adobe-flashplugin apache2 appstream aptitude apturl apturl-common audio-recorder avahi-utils baobab blueberry bluetooth bluez bluez-cups bluez-obexd bluez-tools bolt brltty caribou casper cheese cheese-common cifs-utils cinnamon cinnamon-common cinnamon-control-center cinnamon-control-center-dbg cinnamon-dbg cinnamon-desktop-data cinnamon-screensaver cinnamon-session cinnamon-settings-daemon cjs colord composer cups-browsed cups-pk-helper dconf-cli dnsmasq-base dnsutils dosfstools ecryptfs-utils evolution-data-server evolution-data-server-common file-roller firefox flatpak fwupd gcr gdb gdebi gedit gedit-common geoclue-2.0 gimp gir1.2-appindicator3-0.1 gir1.2-cinnamondesktop-3.0 gir1.2-clutter-1.0 gir1.2-clutter-gst-3.0 gir1.2-flatpak-1.0 gir1.2-gkbd-3.0 gir1.2-gnomebluetooth-1.0 gir1.2-gnomedesktop-3.0 gir1.2-gtkclutter-1.0 gir1.2-gtksource-3.0 gir1.2-ibus-1.0 gir1.2-keybinder-3.0 gir1.2-mate-desktop gir1.2-mate-panel gir1.2-matedesktop-2.0 gir1.2-matepanelapplet-4.0 gir1.2-meta-muffin-0.0 gir1.2-nemo-3.0 gir1.2-networkmanager-1.0 gir1.2-nma-1.0 gir1.2-peas-1.0 gir1.2-rb-3.0 gir1.2-soup-2.4 gir1.2-timezonemap-1.0 gir1.2-webkit-3.0 gir1.2-webkit2-4.0 gir1.2-wnck-3.0 gir1.2-xplayer-1.0 gir1.2-xplayer-plparser-1.0 git gkbd-capplet gksu gmusicbrowser gnome-bluetooth gnome-calculator gnome-calendar gnome-disk-utility gnome-font-viewer gnome-keyring gnome-logs gnome-online-accounts gnome-orca gnome-power-manager gnome-screenshot gnome-session-bin gnome-session-canberra gnome-settings-daemon gnome-settings-daemon-schemas gnome-system-monitor gnome-terminal gnome-themes-extra gnome-themes-standard gnome-video-effects google-chrome-stable gparted gsmartcontrol gstreamer1.0-clutter-3.0 gstreamer1.0-libav gstreamer1.0-packagekit gstreamer1.0-plugins-bad gstreamer1.0-plugins-base-apps gstreamer1.0-plugins-good gstreamer1.0-plugins-ugly gstreamer1.0-tools gstreamer1.0-vaapi gtk2-engines gtk2-engines-murrine gtk2-engines-pixbuf gtkhash gtkhash-common gucharmap gufw gvfs-backends handbrake hardinfo hexchat hplip insync-nemo ippusbxd iputils-ping iputils-tracepath irqbalance jsonlint kdenlive kerneloops keyutils libapache2-mod-php7.2 libappstream-glib8 libass5 libbabeltrace1 
libblockdev-crypto2 libcacard0 libchamplain-0.12-0 libchamplain-gtk-0.12-0 libcheese-gtk25 libcheese8 libcinnamon-desktop-dbg libcinnamon-desktop4 libcjs-dbg libcjs0f libcolorhug2 libcscreensaver0 libcups2-dev libcupsimage2-dev libdazzle-1.0-0 libdmapsharing-3.0-2 libdw1 libebackend-1.2-10 libebook-1.2-19 libebook-contacts-1.2-2 libecal-1.2-19 libecryptfs1 libedata-book-1.2-25 libedata-cal-1.2-28 libedataserver-1.2-23 libedataserverui-1.2-2 libflatpak0 libfluidsynth1 libfm-extra4 libfm-gtk4 libfm4 libfox-1.6-0 libfwupd2 libgcab-1.0-0 libgdata22 libgegl-0.3-0 libgeoclue-2-0 libgeocode-glib0 libgimp2.0 libglade2-0 libgnome-bluetooth13 libgnome-desktop-3-17 libgoa-1.0-0b libgoa-1.0-common libgoa-backend-1.0-1 libgpod-common libgpod4 libgrilo-0.3-0 libgspell-1-1 libgssdp-1.0-3 libgstreamer-plugins-bad1.0-0 libgtk-3-bin libgtkmm-2.4-1v5 libgtkmm-3.0-1v5 libgtkspell0 libgucharmap-2-90-7 libgupnp-1.0-4 libgupnp-igd-1.0-4 libgweather-3-15 libgweather-common libgxps2 libhal1-flash libibus-1.0-5 libimobiledevice-utils libjavascriptcoregtk-3.0-0 libkeybinder-3.0-0 liblensfun1 liblightdm-gobject-1-0 liblvm2app2.2 liblzma-dev libmate-desktop-2-17 libmate-panel-applet-4-1 libmateweather-common libmateweather1 libmenu-cache-bin libmenu-cache3 libmetacity1 libmlt++3 libmlt6 libmovit8 libmuffin0 libmusicbrainz5-2 libmusicbrainz5cc2v5 libnautilus-extension1a libnet-libidn-perl libnice10 libnm-glib4 libnm-util2 libnotify-bin libostree-1-1 libpcre3-dev libpcrecpp0v5 libpeas-1.0-python2loader libpoppler-glib8 libpq5 libproxy1-plugin-gsettings libpython3.7-stdlib libqt4-opengl libqt5webkit5 libreoffice-avmedia-backend-gstreamer libreoffice-calc libreoffice-draw libreoffice-gnome libreoffice-gtk3 libreoffice-impress libreoffice-ogltrans librhythmbox-core10 libsnapd-glib1 libspeechd2 libspice-server1 libthunarx-2-0 libtiff-dev libtiff5-dev libtimezonemap1 libunity-protocol-private0 libunity9 libunwind8 libvisio-0.1-1 libvolume-key1 libwayland-egl1-mesa libwebkit2gtk-4.0-37 libwebkitgtk-3.0-0 libwmf0.2-7 libwmf0.2-7-gtk libwnck-3-0 libxen-4.9 libxplayer-plparser18 libxplayer0 libxreaderdocument3 libxreaderview3 libyelp0 libzeitgeist-2.0-0 lightdm ltrace lupin-casper lvm2 mate-desktop mate-desktop-common mate-panel mate-polkit melt metacity metacity-common mint-meta-cinnamon mint-meta-codecs mint-meta-core mintbackup mintdrivers mintinstall mintlocale mintmenu mintreport mintstick mintsystem mintupdate mintwelcome mousetweaks mozo mplayer muffin muffin-common muffin-dbg nautilus-data nemo nemo-data nemo-dbg nemo-emblems nemo-fileroller nemo-preview nemo-share net-tools netplan.io network-manager-gnome network-manager-openvpn network-manager-openvpn-gnome network-manager-pptp-gnome nplan nvidia-prime-applet nvidia-settings obex-data-server obs-studio onboard openrazer-daemon openrazer-meta openssh-client orca packagekit-tools pavucontrol pcmanfm php-cli php-gettext php-json-schema php-pear php-xml php7.0-gd php7.2-cli php7.2-dev php7.2-xml phpmyadmin pinentry-gnome3 pinta pix pix-data pix-dbg pkg-config plymouth-label polo-file-manager polychromatic postgresql postgresql-10 postgresql-client-10 postgresql-contrib printer-driver-postscript-hp pritunl-client-electron pulseaudio-module-bluetooth python-appindicator python-apsw python-cairo python-dbus python-glade2 python-gtk2 python-nemo python3-numpy python3-openrazer python3.7 qemu-block-extra qemu-kvm qemu-system-common qemu-system-x86 qemu-utils qt5-style-plugins qwinff rawtherapee redshift redshift-gtk rhythmbox rhythmbox-data rhythmbox-plugin-tray-icon 
rhythmbox-plugins seahorse session-migration sessioninstaller simple-scan slack-desktop slick-greeter snapd sopcast-player speech-dispatcher speech-dispatcher-audio-plugins speech-dispatcher-espeak-ng squashfs-tools ssh-askpass-gnome strace synaptic system-config-printer system-config-printer-common system-config-printer-gnome system-config-printer-udev system-tools-backends teamviewer thermald thunar thunar-archive-plugin thunar-gtkhash thunar-media-tags-plugin thunderbird-gnome-support timeshift tomboy transmission-gtk ubuntu-minimal ubuntu-system-adjustments v4l-utils v4l2loopback-utils vim-tiny virtualbox-6.0 vlc vlc-plugin-notify wget wmctrl xdg-desktop-portal xdg-desktop-portal-gtk xdg-user-dirs-gtk xed xed-common xed-dbg xfe xplayer xplayer-common xplayer-dbg xplayer-plugins xreader xreader-dbg xserver-xephyr xserver-xorg-input-wacom xserver-xorg-video-intel xserver-xorg-video-qxl xviewer xviewer-dbg xviewer-plugins xwayland yelp zenity zoomThe following NEW packages will be installed libutempter0 lynx lynx-common xterm0 to upgrade, 4 to newly install, 447 to remove and 8 not to upgrade.1 not fully installed or removed.Need to get 1,650 kB of archives.After this operation, 2,355 MB disk space will be freed.Do you want to continue? [Y/n] I am running Linux Mint 19.1 Cinnamon, kernel: 4.18.0-18-generic | To upgrade debian 9 to 10 you should have only the following lines in your /etc/apt/sources.list : deb http://deb.debian.org/debian buster maindeb http://deb.debian.org/debian-security/ buster/updates maindeb http://deb.debian.org/debian buster-updates main Disable the third party repository under /etc/apt/sources.list.d/ directory. In your case you have a ubuntu-xenial repository enabled ( which provide the linux-image-generic-lts-xenial package) it will break your system. Then run : sudo apt updatesudo apt install linux-image-amd64sudo apt upgradesudo apt dist-upgrade As said @Stephen Kitt , the linux-firmware_1.127.24_all.deb belong to Ubuntu Trusty which cause the error code (1) , it should be removed : apt purge linux-firmware . To solve the following error ( post-removal script): Errors were encountered while processing:/var/cache/apt/archives/linux-firmware_1.127.24_all.debE: Sub-process /usr/bin/dpkg returned an error code (1) Edit the /var/lib/dpkg/info/linux-firmware.postrm file and replace its content with: #!/bin/bash/bin/true To solve the following error ( post-installation script): subprocess installed post-installation script returned error exit status 1Errors were encountered while processing:snapdE: Sub-process /usr/bin/dpkg returned an error code (1) edit the /var/lib/dpkg/info/snapd.postinst as follows: #!/bin/bash/bin/true Update : Backup the /var/lib/dpkg/status and /var/lib/dpkg/status-old then replace status file by status-old : sudo cp /var/lib/dpkg/status /var/lib/dpkg/status.bak1sudo cp /var/lib/dpkg/status-old /var/lib/dpkg/status-old.bak1sudo mv /var/lib/dpkg/status-old /var/lib/dpkg/status Then run : sudo dpkg --configure -asudo apt cleansudo apt autocleansudo apt update sudo apt upgrade | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/560277",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/36220/"
]
} |
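A note on the answer above: its individual commands can be collected into one recovery sequence. This is only a sketch of that sequence, assuming the same stray Ubuntu packages (linux-firmware, snapd) are the ones blocking dpkg on your system — substitute whatever package names dpkg actually complains about:
# restore the previous dpkg status database (back up both copies first)
sudo cp /var/lib/dpkg/status /var/lib/dpkg/status.bak1
sudo cp /var/lib/dpkg/status-old /var/lib/dpkg/status-old.bak1
sudo mv /var/lib/dpkg/status-old /var/lib/dpkg/status
# neutralise the failing maintainer scripts named in the answer
printf '#!/bin/bash\n/bin/true\n' | sudo tee /var/lib/dpkg/info/linux-firmware.postrm
printf '#!/bin/bash\n/bin/true\n' | sudo tee /var/lib/dpkg/info/snapd.postinst
# let dpkg finish, then resume the buster upgrade
sudo dpkg --configure -a
sudo apt clean && sudo apt autoclean
sudo apt update && sudo apt upgrade && sudo apt dist-upgrade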
560,318 | I have read that CentOS 7.7 upgrades Python to version 3.6on CentOS Linux 7.7 released and here is how to update it . Can anyone explain me why in my Centos 7.7 server I have Python 2.7.5 which is EOL instead of Python 3.6 and yum doesn't offer me possibility to upgrade to Python 3.6? [root@cpanel ~]# hostnamectl Static hostname: hidden(myserver hostname) Icon name: computer-server Chassis: server Machine ID: ade4e1c7a3534397a3f75bdf9eee8e4d Boot ID: 6870183871774c68a23a0c04230d1408 Operating System: CentOS Linux 7 (Core) CPE OS Name: cpe:/o:centos:centos:7 Kernel: Linux 3.10.0-514.26.2.el7.x86_64 Architecture: x86-64 - [root@cpanel ~]# cat /etc/os-releaseNAME="CentOS Linux"VERSION="7 (Core)"ID="centos"ID_LIKE="rhel fedora"VERSION_ID="7"PRETTY_NAME="CentOS Linux 7 (Core)"ANSI_COLOR="0;31"CPE_NAME="cpe:/o:centos:centos:7"HOME_URL="https://www.centos.org/"BUG_REPORT_URL="https://bugs.centos.org/"CENTOS_MANTISBT_PROJECT="CentOS-7"CENTOS_MANTISBT_PROJECT_VERSION="7"REDHAT_SUPPORT_PRODUCT="centos"REDHAT_SUPPORT_PRODUCT_VERSION="7" - [root@cpanel ~]# python -VPython 2.7.5 - # cat /etc/yum.conf [main]exclude=courier* dovecot* exim* filesystem httpd* mod_ssl* mydns* nsd* p0f php* proftpd* pure-ftpd* spamassassin* squirrelmail*tolerant=1errorlevel=1cachedir=/var/cache/yum/$basearch/$releaseverkeepcache=0debuglevel=2logfile=/var/log/yum.logexactarch=1obsoletes=1gpgcheck=1plugins=1installonly_limit=5bugtracker_url=http://bugs.centos.org/set_project.php?project_id=23&ref=http://bugs.centos.org/bug_report_page.php?category=yumdistroverpkg=centos-release - [root@cpanel ~]# yum upgrade pythonLoaded plugins: fastestmirror, universal-hooksLoading mirror speeds from cached hostfile * EA4: 104.254.183.20 * cpanel-addons-production-feed: 104.254.183.20 * cpanel-plugins: 104.254.183.20 * base: mirror.tzulo.com * epel: mirror.steadfastnet.com * extras: mirror.den01.meanservers.net * updates: mirror.sesp.northwestern.eduNo packages marked for update | Python 3 is available in the python3 package: yum install python3 The interpreter is also python3 , python will still run the Python 2 interpreter. Python 2 has been declared EOL by the PSF, but Red Hat still provides support for Python 2 in RHEL , and CentOS should continue to benefit from that support. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/560318",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/72071/"
]
} |
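For illustration only, a minimal session showing what the answer describes on a CentOS 7.7 host — the platform python stays at 2.7 and Python 3.6 is installed alongside it under a separate name:
sudo yum install -y python3   # installs python3 and its libraries from the base repos
python -V                     # still Python 2.7.5 (system interpreter, unchanged)
python3 -V                    # Python 3.6.x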
560,330 | I want to install a Game Client thing called Steam ( here ). It downloads a .deb package but needs root or admin password to install. I need a way to install and get Steam working without admin or root password. Are there any terminal commands I can utilize to do this? I am running Deepin 15.11 with the latest everything. | Python 3 is available in the python3 package: yum install python3 The interpreter is also python3 , python will still run the Python 2 interpreter. Python 2 has been declared EOL by the PSF, but Red Hat still provides support for Python 2 in RHEL , and CentOS should continue to benefit from that support. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/560330",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/388945/"
]
} |
560,423 | To identify files with the hyphen symbol - in file names such as test-19.1.txt , the find command combined with a regular expression does not appear to match. The command find . -maxdepth 1 -regextype posix-egrep -regex '.*/[a-z0-9\-\.]+\.txt' -exec echo {} \; is run in a bash shell and no such file is discovered. If the hyphen is removed from the filename, the regular expression matches. The same regular expression when tested with regexr.com is successful. | To include a hyphen in a character class it must be at the first or last position From find manual "the type of regular expression used by find and locate is almost identical to that used in GNU Emacs" and from Emacs manual : [ ... ] To include a ‘-’ , write ‘-’ as the first or last character of the set, or put it after a range. Thus, ‘[]-]’ matches both ‘]’ and ‘-’ . So your regex should be '.*/[a-z0-9.-]+\.txt' In POSIX BRE & ERE the same rule applies The <hyphen-minus> character shall be treated as itself if it occurs first (after an initial '^' , if any) or last in the list, or as an ending range point in a range expression. As examples, the expressions "[-ac]" and "[ac-]" are equivalent and match any of the characters 'a' , 'c' , or '-' ; "[^-ac]" and "[^ac-]" are equivalent and match any characters except 'a' , 'c' , or '-' ; the expression "[%--]" matches any of the characters between '%' and '-' inclusive; the expression "[--@]" matches any of the characters between '-' and '@' inclusive; and the expression "[a--@]" is either invalid or equivalent to '@' , because the letter 'a' follows the symbol '-' in the POSIX locale. To use a <hyphen-minus> as the starting range point, it shall either come first in the bracket expression or be specified as a collating symbol; for example, "[][.-.]-0]" , which matches either a <right-square-bracket> or any character or collating element that collates between <hyphen-minus> and 0, inclusive. If a bracket expression specifies both '-' and ']' , the ']' shall be placed first (after the '^' , if any) and the '-' last within the bracket expression. Regular Expressions In fact most regex variants has the same rule for matching hyphen The hyphen can be included right after the opening bracket, or right before the closing bracket, or right after the negating caret. Both [-x] and [x-] match an x or a hyphen. [^-x] and [^x-] match any character that is not an x or a hyphen. This works in all flavors discussed in this tutorial. Hyphens at other positions in character classes where they can’t form a range may be interpreted as literals or as errors. Regex flavors are quite inconsistent about this. Character Classes or Character Sets | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/560423",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/389009/"
]
} |
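A quick way to verify the corrected bracket expression from the answer, using throwaway files in a temporary directory (the file names here are made up purely for the test):
tmp=$(mktemp -d)
touch "$tmp/test-19.1.txt" "$tmp/plain.txt" "$tmp/skip.TXT"
# hyphen placed last in the class so it is matched literally, not as a range
find "$tmp" -maxdepth 1 -regextype posix-egrep -regex '.*/[a-z0-9.-]+\.txt'
rm -r "$tmp"
Only test-19.1.txt and plain.txt should be printed; the uppercase .TXT file is left out because the match is case-sensitive.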
560,467 | I've been trying to find a consistent shell command to adjust the volume on my laptop. I was recommended to try (for muting/unmuting): pactl set-sink-mute 0 toggle and it didn't work, I got the error Failed to get sink information: No such entity After some more looking, I found out that changing 0 to 1 in the command worked. I think this is because pactl assigns my sound card a number when it starts up, and that number changed when I restarted my laptop. That was fine, but when I restarted my laptop the audio keys aren't working again. I tried the working command in the shell and got the "No such entity" error again. If I changed 1 back to 0 (i.e. the original command), it works again. This is confusing to me, because I think I only have one sound card. In any case, if the number assigned to the card isn't consistent, is there a consistent way to refer to that card and adjust its volume? | A laptop may have only one audio card, but can have more than one Pulseaudio sink for audio playback. To see a list of available sinks: pactl list short sinks Sink index numbers are assigned during boot, and the order of sinks can change between boots. To ensure the mute command works on the correct sink use the symbolic name instead of the index number. For example: The sinks on my system are listed as: $ pactl list short sinks0 alsa_output.pci-0000_00_1b.0.analog-stereo module-alsa-card.c s16le 2ch 44100Hz RUNNING1 alsa_output.pci-0000_01_00.1.hdmi-stereo-extra1 module-alsa-card.c s16le 2ch 44100Hz SUSPENDED The device that is RUNNING is the one to be muted: symbolic name = alsa_output.pci-0000_00_1b.0.analog-stereo so the command to toggle mute state on that device is: $ pactl set-sink-mute alsa_output.pci-0000_00_1b.0.analog-stereo toggle | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/560467",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/284679/"
]
} |
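Building on the answer, a small sketch that looks up whichever sink is currently RUNNING and toggles that one, so the command keeps working even when index numbers change between boots (it falls back to the default sink if nothing is playing):
#!/bin/sh
# pick the symbolic name of the sink whose state column says RUNNING
sink=$(pactl list short sinks | awk '$NF == "RUNNING" {print $2; exit}')
# nothing running: fall back to PulseAudio's default-sink alias
[ -n "$sink" ] || sink=@DEFAULT_SINK@
pactl set-sink-mute "$sink" toggle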
560,545 | VIDEO = https://www.youtube.com/watch?v=Hy-yntM2qvk&feature=youtu.be Basically the audio (not matter what application) flickers and stutters. I have a video with the KDE desktop (because it has more information than the GNOME desktop in 19.10), that shows strange behavior in the audio settings, alsamixer and pavucontrol. I've tried a lot of solutions, but never occurred to me that I could get help if I showed directly the problem. I didn't have any problems with older distributions, the only thing that happened was that in any sound settings, the sound icon or volume would flicker visually, but never affected practically. Now, every distro has flickery settings and the audio is choppy and stutters. And I've realized that if in those older (or stable) distributions I connected and HDMI monitor, the problem appears. I can't imagine what would happen if I tried a distribution with the problem and then connected and HDMI monitor. Maybe I would hear just crackling. This issue renders the audio unusable, at least with hardware connected to jacks, because blutetooth works fine. Apparently, it has something to do with something called S/PDIF, because it turns on and off while the audio cracks. Also, Pavucontrol frequently says "Establishing connection to pulseaudio" while this happens. With KDE, there is a constant message of changing devices, that says "Built-in-Audio" (has seen in the video). [?] = Is unknown is the distro has that problem [A] = The distro always has that problem. The audio config always flickers and the sound always stutters [H] = The distro has that problem, but only when an HDMI monitor is connected. The audio doesn't stutter but the configuration flickers.Distros Ubuntu 19.04 = [?] Manjaro (early 2019) = [?] Linux Mint 19.2 = [H] Arco Linux (late 2019) = [A] Elementary OS = [?] Arch Linux (late 2019) = [?] Zorin OS 15 = [H] Linux Mint 19.3 = [H] Manjaro (late 2019) = [A] Ubuntu 19.10 = [A] Debian Buster = [H] Fedora 31 = [A] Ubuntu 18.04 = [H] KDE Neon = [H] My post in Ask Fedora Some time ago, I made a post on Ask Fedora about the problem, there is a lot of information there. Maybe it will also give useful information. However, I think you can ignore most of it and just see what happens in the video. Take in consideration that although in that post I say that some distributions don't have the problem, later I discovered that if I connected and HDMI monitor, the problem will appear. Here it is: # This is something that has given me a lot of stress. It’s quite a long problem, so I’ll be very thankful if you could help. What is the basic description of the problem? I cannot hear music and audio normally. On every application and site, would it be Rhythmbox, Audacious, YouTube, etc, the audio is choppy. What I mean by that is that the audio cuts itself every two or three seconds, and the sound icon on the bar of GNOME disappears for the moment in which the audio goes off. Is very infuriating. And the worst thing, it happens on almost any other Linux distro! Info #1) About the audio specifications, lspci -nnk | grep -A2 Audioshows: 00:1f.3 Audio device [0403]: Intel Corporation 200 Series PCH HD Audio [8086:a2f0]DeviceName: Onboard - SoundSubsystem: Biostar Microtech Int’l Corp Device [1565:824d] #2) Trying to investigate the problem, I installed a plugin in GNOME that let me choose the audio output, and it showed lots of entries labeled «Dummy output». When I clicked on one of them, the shell crashed #3) List of systems I have tried, and what I deduce from it. 
The [✗] means that the error is present, while the [✓] means that it is absent. [✓] Ubuntu 19.04 [✓] Manjaro GNOME (early 2019) [✓] Linux Mint 19.2 [✓] Zorin OS [?] Elementary OS [✗] Arco Linux (current) [✗] Ubuntu 19.10 [✓] Debian 10 XFCE [✓] Debian 10 Cinnamon [✗] Manjaro XFCE (current) [✗] Manjaro Cinnamon (current) [✗] Fedora 31 And what do I deduce from that? All the distributions that have the [✗] are either newer releases (like Ubuntu 19.10) or rolling-release (like Manjaro or Arco Linux), while the ones with the [✓] are either released before June (Ubuntu 19.04, Mint 19.2 and the version of Manjaro I was using in May) or use old packages (Debian 10), so that, for me, means that maybe there was an update on alsa or pulseaudio that bugged some things. But there is one factor that could be crucial, and it is that both Linux Mint and Ubuntu enable third-party drivers, so that can change some things. #4) I don’t think it is a problem related to the headphones, because the speakers I tried also were buggy, and the problem persist even if i unplug the headphones. I think it is a software-related issue, more on that later. #5) The level of choppyness increases with the volume. At lower volumes, there is less stops in the audio, while at maximun volume, the audio is practically unusable. That only happens with the system volume, not the physical volume, so that’s another point for the software theory. #6) When executing alsamixer in the terminal, I only can choose the input and output volumes, not any other options. If I press F6, and select HD Intel PCH, I can see the other options. More on that below. #7) Disabling the «Auto-mute» option doesn’t change anything. This could be two things, either: it isn’t related to the problem, or, the system uses the «default» configuration, the one that doesn’t have the other options. #8) Adding load-module module-udev-detect tsched=0 to /etc/pulse/default.pa doesn’t work. #9) Executing echo "options snd-hda-intel model=generic" | sudo tee -a /etc/modprobe.d/alsa-base.conf doesn’t work, but now it lets me choose between HDMI and Built-in Audio #10) Opening the «Sound» tab in «Settings» shows something very interesting. Below the audio dispositive, there is a bar that shows the intensity of the sound. In the microphone section (the headset has a microphone) the bar changes according to the level of sound it is receiving. However, the output section doesn’t show any change. Please, i’ll be really, really thankful if someone has the knowledge to identify and solve this problem. I can’t work normally without sound! Thanks, thanks, in advance. I maybe will come up with more details later. | It really is terrible that problems that were resolved back in 2008 are still haunting us in 2020 - 12 years on :(. I'm on Ubuntu 20.04.1 LTS. To get rid of choppy / stuttering / skipping audio when listening to music, simply follow post #6 here . The top of that answer reckons the method described therein is obsolete but it worked perfectly on my Dell Precision M6700 with this audio: $ lspci | egrep -i audio00:1b.0 Audio device: Intel Corporation 7 Series/C216 Chipset Family High Definition Audio Controller (rev 04) Maybe it worked on my laptop being it's an older model. UPDATE : I just realized that I did not do the right thing in case the link above dies at some point in the future. 
The solution is to edit /etc/pulse/daemon.conf and ensure the following is added/uncommented in the file: high-priority = yesnice-level = -15default-sample-rate = 48000default-fragments = 8default-fragment-size-msec = 10 I had uncommented just the lines starting with default-... prior to this edit, but I found that occasionally I'd get the odd stutter/skip. I haven't had any issue since adding high-priority and nice-level as noted in later posts in the thread linked above. Hopefully this is the last you'll see of me regarding this issue. UPDATE - 2021/01/03 : Despite all these changes, I was still occasionally getting stuttering audio after laptop has been running for an extended period. I'm now trying the low latency kernel as supplied by Ubuntu. The only issue I've had so far is that I cannot access my ZFS formatted USB drive - I installed the low latency kernel manually and not via the HWE method as discussed in the linked article. I might change to the HWE method to see if I can access ZFS and then I'll be fully content. There's a really good discussion about the benefits of using a low latency kernel where audio is concerned . Also, see this article that details how to go about installing a low latency kernel on Ubuntu . NOTE: Since the articles I've linked to in this article are all from Stack Exchange, I figured there's no need to duplicate the content. UPDATE - 2021/05/18 : Okay - the stuttering has returned with a vengence - despite all the changes. Feels like the machine has perfected its AI on how to become a major PITA! I'm currently trying the changes below as suggested by this article [Solved] Mint 13 Mate 32 bit, Sound Skips from back in 2012 - just 6 days shy of 9 years today. Hopefully this turns out to be the ultimate fix and hence my last edit: In /etc/pulse/default.pa , find and change: from: load-module module-udev-detect to: load-module module-udev-detect tsched=0 In /etc/pulse/daemon.conf , find, uncomment, then change: from: ;realtime-scheduling = yes...;default-fragments = 4;default-fragment-size-msec = 25 Note : Only edit the 3 lines shown above. The ... is there to indicate that the first line and the other 2 aren't close together. to: realtime-scheduling = yes...default-fragments = 8default-fragment-size-msec = 5 UPDATE - 2022/12/06 : I finally bit the bullet and decided to ditch PulseAudio in favor of PipeWire in Nov 2021. However, although the problem seemed to be less severe, it was still there. But just 3 days ago, I stumbled upon this GitHub issue where the OP stated that commenting one line in the default /etc/pipewire.conf file and adding 4 new lines, his skipping, stuttering, and choppy audio issues simply faded away. I did the same and what do you know - I've been listening to loads of music for the last 3 days and it is truly resolved. So, here's what I did: Switch to PipeWire as shown here Copy default pipewire.conf to /etc/pipewire/ sudo mkdir /etc/pipewiresudo cp /usr/share/pipewire/pipewire.conf /etc/pipewire/sudo vi /etc/pipewire/pipewire.conf In section context.properties , comment out default.clock.rate Add the following 4 lines beneath the line commented out in step 3 above: default.clock.rate = 19200default.clock.quantum = 512default.clock.min-quantum = 32default.clock.max-quantum = 4096 Save the file and Restart Pipewire: systemctl --user restart pipewire-media-session pipewire-pulse pipewire Enjoy clean music! 
This works for me on the following laptop:
$ inxi -SM
System:    Host: akk-m6700.anroet.com Kernel: 5.15.0-53-lowlatency x86_64 bits: 64 Desktop: Gnome 3.36.9 Distro: Ubuntu 20.04.5 LTS (Focal Fossa)
Machine:   Type: Laptop System: Dell product: Precision M6700 v: 01 serial: <superuser/root required> Mobo: Dell model: 0JWMFY v: A00 serial: <superuser/root required> UEFI: Dell v: A20 date: 11/30/2018 | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/560545",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/389079/"
]
} |
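For reference, the PulseAudio part of the answer above comes down to a handful of daemon.conf lines. A sketch that appends them, assuming the stock file still has these keys commented out (the values are the ones the answer settled on for that particular laptop, not universal recommendations):
sudo cp /etc/pulse/daemon.conf /etc/pulse/daemon.conf.bak
sudo tee -a /etc/pulse/daemon.conf >/dev/null <<'EOF'
high-priority = yes
nice-level = -15
default-sample-rate = 48000
default-fragments = 8
default-fragment-size-msec = 10
EOF
# restart the user's PulseAudio instance so the settings take effect
pulseaudio -k && pulseaudio --start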
560,697 | I know that it is possible to reverse "$@" using an array: arr=( "$@" ) And using this answer , reverse the array. But that requires a shell that has arrays. It is also possible using tac : set -- $( printf '%s\n' "$@" | tac ) But that breaks if the parameters have spaces, tabs or newlines (assuming the default value of $IFS ) or contain wildcard characters (unless globbing is disabled beforehand) and removes empty elements, and requires the GNU tac command (using tail -r is slightly more portable outside of GNU systems but with some implementations fails on large input). Is there a way to reverse shell positional arguments portably, without using an array, and that works even if arguments contain whitespaces or newlines or wildcards or are possibly empty? | Portably, no arrays required (only positional parameters) and works with spaces and newlines: flag=''; for a in "$@"; do set -- "$a" ${flag-"$@"}; unset flag; done Example: $ set -- one "two 22" "three> 333" four$ printf '<%s>' "$@"; echo<one><two 22><three333><four>$ flag=''; for a in "$@"; do set -- "$a" ${flag-"$@"}; unset flag; done$ printf '<%s>' "$@"; echo<four><three333><two 22><one> The value of flag controls the expansion of ${flag-"$@"} . When flag is set, it expands to the value of flag (even if it is empty). So, when flag is flag='' , ${flag....} expands to an empty value and it gets removed by the shell as it is unquoted. When the flag gets unset, the value of ${flag-"$@"} gets expanded to the value at the right side of the - , that's the expansion of "$@" , so it becomes all the positional arguments (quoted, no empty value will get erased). Additionally, the variable flag ends up erased (unset) not affecting following code. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/560697",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/-1/"
]
} |
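A small test harness for the one-liner above; the argument list is deliberately awkward (an empty string, embedded spaces, a glob character) to show that nothing gets split, dropped or expanded:
#!/bin/sh
set -- 'one' '' 'two 22' '*' 'last'
# reverse "$@" in place, no arrays required
flag=''; for a in "$@"; do set -- "$a" ${flag-"$@"}; unset flag; done
# print each argument delimited on its own line
for a in "$@"; do printf '<%s>\n' "$a"; done
Expected output, in order: <last>, <*>, <two 22>, an empty <>, and <one>.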
I am trying to write a script to get the version of my distro so that I can pass it to a variable. The following command is what I wrote to achieve the result: lsb_release -ar | grep -i release | cut -s -f2 The unwanted output:
No LSB modules are available.
18.04
As you can see, the No LSB modules are available message is the unwanted part. Since I prefer my script to be portable across servers, I don't want to install any extra packages besides utilizing the lsb_release -a command. | That message is sent to standard error, so redirecting that to /dev/null will get rid of it (along with any other error message produced by lsb_release ): lsb_release -ar 2>/dev/null | grep -i release | cut -s -f2 | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/560720",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/250915/"
]
} |
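If a server turns up without lsb_release at all, one fallback — an alternative to the answer, not part of it — is to read /etc/os-release, which most current distributions ship:
#!/bin/sh
if command -v lsb_release >/dev/null 2>&1; then
    version=$(lsb_release -ar 2>/dev/null | grep -i release | cut -s -f2)
else
    # sourcing the file lets the shell strip the quotes around VERSION_ID
    version=$(. /etc/os-release && printf '%s\n' "$VERSION_ID")
fi
echo "$version"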
I'm trying to use a cron job to see when the battery gets lower than a given threshold, and then to send a battery critical notification. However, when I make the cron job execute a script every minute, and make the script send me a notification, it doesn't work. To make sure that it's not a permissions issue with the script or something causing the cron job to not run, I made the script create a file instead, and it worked. This is the crontab entry: * * * * * /home/aravk/test.sh And, to simplify the problem, these are the contents of test.sh :
#!/bin/sh
/usr/bin/dunstify hi
No notification shows up, however. The script does work when I execute it manually. I also tried setting the DISPLAY environment variable to :0 by changing the crontab entry to * * * * * export DISPLAY=:0 && /home/aravk/test.sh , but it still didn't work. How do I send notifications from a script executed by a cron job? I'm on Arch Linux, if it's relevant. | I added this to my crontab and all my notifications work (currently tested with zenity and notify-send ):
DISPLAY=":0.0"
XAUTHORITY="/home/me/.Xauthority"
XDG_RUNTIME_DIR="/run/user/1000"
DBUS_SESSION_BUS_ADDRESS=unix:path=/run/user/1000/bus | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/560724",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/389251/"
]
} |
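Folding the answer back into the asker's original battery-check use case gives roughly the sketch below; the threshold, the BAT0 path and dunstify itself are assumptions that need adjusting per machine:
#!/bin/sh
# environment a GUI notification needs when started from cron
export DISPLAY=:0
export XAUTHORITY="$HOME/.Xauthority"
export XDG_RUNTIME_DIR="/run/user/$(id -u)"
export DBUS_SESSION_BUS_ADDRESS="unix:path=$XDG_RUNTIME_DIR/bus"
THRESHOLD=15
CAPACITY=$(cat /sys/class/power_supply/BAT0/capacity 2>/dev/null) || exit 0
STATUS=$(cat /sys/class/power_supply/BAT0/status 2>/dev/null)
if [ "$STATUS" = "Discharging" ] && [ "$CAPACITY" -le "$THRESHOLD" ]; then
    /usr/bin/dunstify -u critical "Battery low" "${CAPACITY}% remaining"
fi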
I recently created a test file because I wanted to play around with logrotate. I was searching for a way to create a file of any size I want. I found out that this is my solution: dd if=/dev/zero of=/path bs=1M count=30 status=progress I also found this: dd if=/dev/urandom of=/path bs=1M count=30 status=progress The first example creates a file filled with zeros, the second one with random data. Both of the files have the same size of 30M. Can anyone explain why it takes longer to create the file with random data than with zeros? Because they both have the same amount of data... Thanks in advance :) | As you can see from your output, both methods are quite fast. However, there is a distinct difference between the sources of your data. /dev/zero is a pseudo-file which simply generates a stream of zeroes, which is a rather trivial task. /dev/urandom actually accesses the kernel's pool of entropy to generate random numbers and therefore has much more I/O and process-call overhead than simply producing the same fixed value over and over, as is the case for /dev/zero . That is the reason why reading from /dev/urandom can never be as fast as reading from /dev/zero . If you are interested, the Wikipedia article on /dev/random can serve as a starting point for further reading. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/560796",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/357855/"
]
} |
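Since the file in the question is only filler for a logrotate test, its contents do not matter; a short comparison sketch — actual timings will vary by machine and filesystem:
# allocation alone is enough on ext4/xfs/btrfs, no data has to be written
fallocate -l 30M /tmp/file-alloc
# classic dd from /dev/zero: still writes 30M of zero bytes
dd if=/dev/zero of=/tmp/file-zero bs=1M count=30 status=progress
# dd from /dev/urandom: same amount of I/O plus the cost of generating the random bytes
dd if=/dev/urandom of=/tmp/file-rand bs=1M count=30 status=progress
ls -lh /tmp/file-*    # all three files are 30M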
560,800 | I have transferred the whole 64 gb sd-card to a 128 gb sd card. the 64gb sd card is from a Raspberry Pi 4 and contained 5 partitions. I used this command: dd if=/dev/mmcblk0p of=/dev/sdb after the process finished I inserted the 128gb sd card and the Raspberry is working fine. However, there are now 64gb of "unallocated" disk space on the sd card. My guess was to expand the partition, which I attempted using gparted. But I cannot resize the partition as the option to do so is greyed out. I also tried this command resize2fs /dev/mmcblk0p7 - mmcblk0p7 being the root partition - which has this output: The filesystem is already 15500800 (4k) blocks long. Nothing to do! How can I expand the root partition to use the as yet unallocated diskspace? #fdisk -lDisk /dev/mmcblk0: 119.1 GiB, 127865454592 bytes, 249737216 sectorsUnits: sectors of 1 * 512 = 512 bytesSector size (logical/physical): 512 bytes / 512 bytesI/O size (minimum/optimal): 512 bytes / 512 bytesDisklabel type: dosDisk identifier: 0xb134d0fdDevice Boot Start End Sectors Size Id Type/dev/mmcblk0p1 8192 137215 129024 63M e W95 FAT16 (LBA)/dev/mmcblk0p2 137216 124735487 124598272 59.4G 5 Extended/dev/mmcblk0p5 139264 204797 65534 32M 83 Linux/dev/mmcblk0p6 204800 729085 524286 256M c W95 FAT32 (LBA)/dev/mmcblk0p7 729088 124735487 124006400 59.1G 83 Linux | As you can see from your output, both methods are quite fast. However, there is a distinct difference between the sources of your data. /dev/zero is a pseudo-file which simply generates a stream of zeroes, which is a rather trivial task /dev/urandom actually accesses the kernel's pool of entropy to generate random numbers and therefore has much more I/O and process call overhead than simply producing the same fixed value all over as would be the case for /dev/zero . That is the reason why reading from /dev/urandom can never be as fast as reading from /dev/zero . If you are interested, the Wikipedia article on /dev/random can serve as a starting point for further reading. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/560800",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/314346/"
]
} |
Suppose there is some server running on remotehost.some.where:4444, and I have software that insists on connecting to localhost:5555. I can forward this with ssh as ssh -L 5555:remotehost.some.where:4444 myuser@localhost But this requires an unnecessary ssh connection to localhost, which the addition of -N would not prevent. How can I do this port forwarding without the login, possibly with another tool? | The easiest solution is to run some sort of TCP proxy. You can use socat , for example: socat tcp-listen:5555,bind=127.0.0.1,fork tcp:remotehost.some.where:4444 While this is running, connections to port 5555 on your local host will be forwarded to port 4444 on remotehost.some.where . The command I'm using here only listens on 127.0.0.1 . If you actually want to accept connections from other hosts on port 5555 , you can drop the bind=127.0.0.1 option. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/560806",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/140503/"
]
} |
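A quick way to sanity-check the socat forward from the answer, assuming nothing else already listens on port 5555 (host and ports are the ones from the question):
# start the proxy in the background
socat tcp-listen:5555,bind=127.0.0.1,reuseaddr,fork tcp:remotehost.some.where:4444 &
PROXY_PID=$!
# confirm the listener exists, then try a test connection
ss -tln | grep ':5555'
nc -vz 127.0.0.1 5555
# stop the proxy when finished
kill "$PROXY_PID"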
560,871 | I read topics about how to get bc to print the first zero, but that is not exactly what I want. I want more... I want a function that returns floating point numbers with eight decimal digits. I am open to any solutions, using awk or whatever to be fair. An example will illustrate what I mean: hypothenuse () { local a=${1} local b=${2} echo "This is a ${a} and b ${b}" local x=`echo "scale=8; $a^2" | bc -l` local y=`echo "scale=8; $b^2" | bc -l` echo "This is x ${x} and y ${y}"# local sum=`awk -v k=$x -v k=$y 'BEGIN {print (k + l)}'`# echo "This is sum ${sum}" local c=`echo "scale=8; sqrt($a^2 + $b^2)" | bc -l` echo "This is c ${c}"} Sometime, a and b are 0.00000000 , and I need to keep all these 0s when c is returned. Currently, when this happens, this code give back the following output: This is a 0.00000000 and b 0.00000000This is x 0 and y 0This is c 0 And I would like it to print This is a 0.00000000 and b 0.00000000This is x 0.00000000 and y 0.00000000This is c 0.00000000 Help will be much appreciated! | You can externalize formatting this way, using printf : printf "%0.8f" ${x} Example: x=3printf "%0.8f\n" ${x}3.00000000 Note: printf output depends on your locale settings. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/560871",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/389372/"
]
} |
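Applying the printf answer to the asker's function gives roughly the sketch below — bc still does the arithmetic, printf only fixes the presentation, so 0 comes back as 0.00000000 (this assumes a locale whose decimal separator is '.', as the answer's locale note warns):
hypotenuse () {
    a=$1
    b=$2
    printf 'This is a %0.8f and b %0.8f\n' "$a" "$b"
    c=$(echo "scale=8; sqrt($a^2 + $b^2)" | bc -l)
    # re-format bc's output to a fixed eight decimal places
    printf 'This is c %0.8f\n' "$c"
}
hypotenuse 0 0    # -> This is c 0.00000000
hypotenuse 3 4    # -> This is c 5.00000000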
560,920 | I am trying to write a script that will build multiple cppUtes lib.a files for different sanitizers with a bash script. When trying to pass the compiler flags as variables to cmake I am unable to correctly format them. Please see the brief example below. #!/bin/shset -euo pipefailCOMMON_OPTS='-fno-common -g -fno-omit-frame-pointer'ASAN_OPT='-fsanitize=address'ASAN_C_FLAGS="-DCMAKE_C_FLAGS=\"$ASAN_OPT $COMMON_OPTS\""echo "cmake ../ $ASAN_C_FLAGS"cmake ../ $ASAN_C_FLAGSmake The output of echo is what I expect, and what works for setting flags when not running in a bash script: cmake ../ -DCMAKE_C_FLAGS="-fsanitize=address -fno-common -g -fno-omit-frame-pointer" However, when I run the script cmake is not interpreting the flags correctly. They do not appear in the cmake flags that are displayed during the configuration: CppUTest CFLAGS: -include "/home/ubuntu/cpputest/include/CppUTest/MemoryLeakDetectorMallocMacros.h" -Wall -Wextra -pedantic -Wshadow -Wswitch-default -Wswitch-enum -Wconversion -Wsign-conversion -Wno-padded -Wno-long-long -Wstrict-prototypes and finally, I get the following compilation error from make: c++: error: unrecognized argument to '-fsanitize=' option: 'address -g -fno-omit-frame-pointer' Any help would be kindly appreciated. | You can externalize formatting this way, using printf : printf "%0.8f" ${x} Example: x=3printf "%0.8f\n" ${x}3.00000000 Note: printf output depends on your locale settings. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/560920",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/385954/"
]
} |
561,095 | I have one SSD on which Windows 10 is installed, and one hard drive initially partitioned entirely in NTFS - but I shrunk the NTFS using the Debian installer and installed Debian on the newly freed space. Oddly enough, the installer told me that it did not detect Windows and asked me to confirm to write GRUB to the MBR. I refused, and asked to install GRUB onto the hard drive on which Debian has just been installed (which is different from the SSD on which Windows is installed). I have disabled secure boot in my BIOS, but even so the computer boots automatically on Windows without showing GRUB, and Windows boot manager is the only one detected by my BIOS in the boot manager. I do not have any option in my BIOS to trust/choose an EFI file, as is often pointed to in the forums I have checked. I would like to boot automatically on Windows unless I keep SHIFT pressed to boot on Debian (I know how to do that as soon as GRUB is showing at startup... Which it does not). What should I do? My computer is a Lenovo Legion Y520. | If your BIOS has a boot option that says literally "Windows Boot Manager", that is a pretty strong indication that your Windows has been installed in UEFI style.The fact that the Debian installer even offers to write GRUB to the MBR indicates the Debian installer has been booted in legacy BIOS style. A 16-bit legacy BIOS bootloader cannot chainload an UEFI bootloader (without first transitioning into either 32- or 64-bit mode and setting up an UEFI environment, which kind of defeats the purpose of being in the legacy 16-bit mode in the first place). Usually, the boot mode (BIOS vs. UEFI) you use to boot the OS installer will automatically determine which mode the new OS to be installed will use. An OS installer running in UEFI mode could technically install a BIOS-based bootloader instead of UEFI-based one, but the opposite is generally not possible as activating the legacy BIOS compatibility requires disabling the UEFI Runtime Services interface, which is needed to write the boot settings into system NVRAM (e.g. that "Windows Boot Manager" text in the BIOS settings) - and that is a requirement to complete the installation of an UEFI bootloader. It looks like your laptop may currently prefer booting in legacy BIOS mode over UEFI, if the boot media has both options available, and the Debian 10 installation media does indeed have both options. So it might have booted the Debian installer in BIOS mode, and thus been unable to complete the installation of an UEFI bootloader in the standard way. When in BIOS mode, the installer also won't tell you that installing an UEFI bootloader requires having an ESP (EFI System Partition) on the disk you want to install the UEFI bootloader onto. If you did not opt to create one on your HDD, there was no valid place to install the UEFI bootloader. There is also the problem that some UEFI firmware implementations are buggy and/or Windows-centric. As the Debian Wiki says: Many UEFI firmware implementations are unfortunately buggy, as mentioned earlier. Despite the specification for boot entries and boot order being quite clear about how things should work, there are lots of systems in the wild which get it wrong. Some systems simply ignore valid requests to add new boot entries. Others will accept those requests, but will refuse to use them unless they describe themselves as "Windows" or similar. 
There are lots of other similar bugs out there, suggesting that many system vendors have done very little testing beyond "does it work with Windows?" Fortunately, the system vendors occasionally get firmware bugs fixed. So, as the first step, see if Lenovo has a an updated firmware ("a BIOS update") available for your model, and install it if there is one. That may make installing a dual-boot configuration easier. As the second step, you probably should disable the legacy BIOS compatibility feature, if you can. If you find a "BIOS" setting that allows to force the system to UEFI only, select that setting; or there is a setting that mentions CSM ("Compatibility Support Module"), disable it. Now it should be easier to get the Debian installer to boot in UEFI mode, just like your existing Windows installation does. That will make it install the correct type of boot loader. As the third step, be aware of the requirement to have an ESP (EFI System Partition). It is essentially a small FAT32 partition (256M is plenty for Debian 10 alone) which in Debian should be mounted to /boot/efi . If you use MBR partitioning, it should have a special partition type code 0xef ; if using GPT partitioning, the partitioner option to mark a partition as "bootable" and/or "ESP" should do the right thing. Having an ESP on your HDD will allow you to move the HDD to another system and boot your existing Debian installation from it, if you want to do so later. The alternative to creating a separate ESP on HDD for Debian is to select the Windows ESP on the SSD when setting up your partitioning, choose to not format it, but mount it with its existing filesystem as /boot/efi . The standardized directory structure on the ESP is designed to handle the bootloaders of multiple OSs on the same ESP. The UEFI bootloader of Debian 10 should fit nicely into the standard Windows 10 ESP with room to spare, if you choose this option. But you may have to go into "advanced/manual" partitioning options to do this. If you still have problems after this, I'd recommend reading Roderick W. Smith's text on challenges with UEFI bootloaders. It's written for the rEFInd bootloader, but many things described in it are applicable to the UEFI version of GRUB also. Once you've got both your OSs booting in UEFI style: If your SSD is NVMe type, then it might appear in Linux as /dev/nvme0n1 (and its partitions as /dev/nvme0n1pN , N = partition number). Make sure Linux sees your SSD, then run update-grub as root. If it doesn't say it detected Windows, run blkid as root: it should report the UUIDs of all your filesystems and partitions. Find the filesystem UUID of the Windows ESP on the SSD. If the Windows ESP uses the standard FAT32 filesystem type, it should be listed in the form UUID="xxxx-xxxx" (it's actually a FAT32 filesystem serial number). Once you know the UUID, you can configure a custom GRUB menu entry for Windows, by adding these lines to the end of /etc/grub.d/40_custom : menuentry 'Whatever you want the Windows 10 boot menu entry to say' { insmod part_gpt insmod fat search --no-floppy --fs-uuid --set=root xxxx-xxxx chainloader /EFI/Microsoft/Boot/bootmgfw.efi} That should be the bare-bones UEFI chainloader entry for Windows. Just replace xxxx-xxxx with the actual Windows ESP filesystem UUID. After editing the file, run update-grub as root. Then set the default boot option to debian and you should have your boot menu. 
It is possible that Lenovo has chosen to go above and beyond the UEFI specification requirements and implemented NTFS filesystem support in its UEFI firmware, and has provided a Windows installation that uses NTFS on the ESP. In that case, replace insmod fat with insmod ntfs , and the filesystem UUID will have a different format: in blkid output it will look something like UUID="A268B58668B559AD" . Note: check out the efibootmgr command. You can use it to manipulate the firmware boot settings from within Linux, without entering the BIOS settings menus. In Windows, bcdedit can do the same, although its user interface is more awkward in my opinion. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/561095",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/389566/"
]
} |
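Following up the efibootmgr pointer at the end of the answer, a few commands that help diagnose this kind of dual-boot setup; the entry numbers in the last line are examples and must be read off your own efibootmgr output first:
# the directory only exists when the running kernel was booted via UEFI
[ -d /sys/firmware/efi ] && echo "UEFI boot" || echo "legacy BIOS boot"
# list firmware boot entries, boot order and the devices they point at
sudo efibootmgr -v
# example only: put entry 0003 (debian) ahead of 0001 (Windows Boot Manager)
sudo efibootmgr -o 0003,0001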