source_id (int64, 1 to 74.7M) | question (stringlengths 0 to 40.2k) | response (stringlengths 0 to 111k) | metadata (dict)
---|---|---|---|
620,981 | I would like to get >chr05_pilon_pilon.12.1 but unfortunately the below command does not remove the t echo '>chr05_pilon_pilon.12.t1' | sed '/^\\>chr[0-9][0-9]_pilon_pilon/ s/\(.*\)t/\1/g'>chr05_pilon_pilon.12.t1 What did I miss? | Whenever you work with regular expressions, you should remember that "less is more". I mean you should always try to use the simplest and shortest pattern that matches your data. Don't try to match everything, only go for the part you actually need. In this case, you have >chr05_pilon_pilon.12.t1 and all you want to do is remove the last t after the last . . So don't try to match from the beginning, you don't care about that and it will only make your regular expression more complicated and easier to get wrong, as you did. Here are a few alternatives, depending on what you actually need: Remove all non-numerical characters after the last . on lines starting with > : $ echo '>chr05_pilon_pilon.12.t1' | sed -E 's/^(>.*)\.[^0-9]*/\1./' >chr05_pilon_pilon.12.1 Remove the last t on lines starting with > : $ echo '>chr05_pilon_pilon.12.t1' | sed -E 's/^(>.*)t/\1/' >chr05_pilon_pilon.12.1 As above, but only if the t is immediately after a . $ echo '>chr05_pilon_pilon.12.t1' | sed -E 's/^(>.*\.)t/\1/' >chr05_pilon_pilon.12.1 Remove the last t that comes after a . but only on lines starting with > then chr followed by exactly two numbers and pilon_pilon : $ echo '>chr05_pilon_pilon.12.t1' | sed -E 's/^(>chr[0-9][0-9]_pilon_pilon.*\.)t/\1/' >chr05_pilon_pilon.12.1 Finally, assuming you might also have X , Y and M or MT chromosomes, you might want to extend the above to match on those as well $ printf '>chrX_pilon_pilon.12.t1\n>chr05_pilon_pilon.12.t1\n>chrMT_pilon_pilon.12.t1\n' | sed -E 's/^(>chr([0-9XYM]{1,2}|MT)_pilon_pilon.*\.)t/\1/' >chrX_pilon_pilon.12.1 >chr05_pilon_pilon.12.1 >chrMT_pilon_pilon.12.1 | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/620981",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/34872/"
]
} |
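A minimal check of the recommended pattern above, assuming a sed with -E support (GNU or BSD); the second header line is hypothetical:
$ printf '>chr05_pilon_pilon.12.t1\n>chr11_pilon_pilon.3.t2\n' | sed -E 's/^(>.*\.)t/\1/'
>chr05_pilon_pilon.12.1
>chr11_pilon_pilon.3.2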
621,124 | Hello I have this in my ~/.bash_profile export GOPATH="$HOME/go_projects"export GOBIN="$GOPATH/bin"program(){ $GOBIN/program $1} so I'm able to do program "-p hello_world -tSu" . Is there any way to run the program and custom flags without using the quotation marks? if I do just program -p hello_world -tSu it'll only use the -p flag and everything after the space will be ignored. | Within your program shell function, use "$@" to refer to the list of all command line arguments given to the function. With the quotes, each command line argument given to program would additionally be individually quoted (you generally want this). program () { "$GOBIN"/program "$@"} You would then call program like so: program -p hello_world -tSu or, if you want to pass hello world instead of hello_world , program -p 'hello world' -tSu Using $1 refers to only the first command line argument (and $2 would refer to the second, etc.), as you have noticed. The value of $1 would additionally be split on white-spaces and each generated string would undergo filename globbing, since the expansion is unquoted. This would make it impossible to correctly pass an argument that contains spaces or filename globbing patterns to the function. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/621124",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/443247/"
]
} |
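A small sketch illustrating the difference (the function names here are hypothetical):
with_first() { printf '<%s>\n' $1; }    # unquoted $1: first argument only, then word-split
with_all() { printf '<%s>\n' "$@"; }    # every argument, with quoting preserved
$ with_first -p 'hello world' -tSu
<-p>
$ with_all -p 'hello world' -tSu
<-p>
<hello world>
<-tSu>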
621,180 | I have several utility programs that do not have their own directory and are just a single executable. Typically I put the binary in /usr/local/bin. A problem I have is how to manage preference settings. One idea is to use environment variables and require the user to define such variables, for example, in their bash.rc. I am a little reluctant, however, to clutter up the bash.rc with miscellaneous preference settings for a minor program. Is there a Standard (or standard recommendation), that defines some place or method that is appropriate for storing preferences for small utility programs that do not have their own directory? | Small utilities for interactive desktop use would be expected to follow the XDG Base Directory Specification and keep their config files under $XDG_CONFIG_HOME or (if that is empty or unset) default to $HOME/.config The picture is a little less clear for non-GUI tools, since they might run on systems which are headless or which don't otherwise adhere to XDG/freedesktop standards. However, there's no obvious drawback to using $XDG_CONFIG_HOME if set or $HOME/.config if not, and it should be relatively unsurprising everywhere. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/621180",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/47542/"
]
} |
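A minimal sketch of how a small tool might resolve its configuration path per that spec (the name mytool is hypothetical):
config_dir="${XDG_CONFIG_HOME:-$HOME/.config}/mytool"   # spec fallback when unset or empty
mkdir -p "$config_dir"
config_file="$config_dir/config"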
621,336 | I use awk a fair bit for parsing logs; I have never seen anything like this:I have six file containing a number of lines; I want the ones containing "100", and to choose which columns to print me:~/tmp> grep 100 *.dl.tst outputs what I expect: 100 139M 100 139M 0 0 6376k 0 0:00:22 0:00:22 --:--:-- 6539k100 139M 100 139M 0 0 6677k 0 0:00:21 0:00:21 --:--:-- 6579k100 139M 100 139M 0 0 6022k 0 0:00:23 0:00:23 --:--:-- 6093k100 139M 100 139M 0 0 13.9M 0 0:00:10 0:00:10 --:--:-- 14.3M100 139M 100 139M 0 0 14.3M 0 0:00:09 0:00:09 --:--:-- 14.7M100 139M 100 139M 0 0 13.2M 0 0:00:10 0:00:10 --:--:-- 13.3M as does: me:~/tmp> grep 100 *.dl.tst|awk '{print$0}'100 139M 100 139M 0 0 6376k 0 0:00:22 0:00:22 --:--:-- 6539k100 139M 100 139M 0 0 6677k 0 0:00:21 0:00:21 --:--:-- 6579k100 139M 100 139M 0 0 6022k 0 0:00:23 0:00:23 --:--:-- 6093k100 139M 100 139M 0 0 13.9M 0 0:00:10 0:00:10 --:--:-- 14.3M100 139M 100 139M 0 0 14.3M 0 0:00:09 0:00:09 --:--:-- 14.7M100 139M 100 139M 0 0 13.2M 0 0:00:10 0:00:10 --:--:-- 13.3M Why then does $1 become the file name: me:~/tmp> grep 100 *.dl.tst|awk '{print$1}'shpr002.20201124_141036.dl.tst:shpr003.20201124_141036.dl.tst:shpr004.20201124_141036.dl.tst:hipr002.20201124_141036.dl.tst:hipr003.20201124_141036.dl.tst:hipr004.20201124_141036.dl.tst: And $2 : me:~/tmp> grep 100 *.dl.tst|awk '{print$2}'000000 I logged out and back in in case my shell (bash) was screwed up; no change... what am I doing wrong? Output from grep 100 *.dl.tst | awk '{print$1}' | head -n1 | od -c (some of the alpha characters have been substituted by x ; the list above had been edited/obfuscated) 0000000 x s h p r 0 0 2 x x x . x x x .0000020 x x x x . c o m . 2 0 2 0 - 1 10000040 - 2 4 _ 1 4 1 0 3 6 . d l . t s0000060 t : \r \n0000064 | Those files contain the output from curl downloading files, and curl updates its progress information during downloads by outputting a carriage return (commonly represented as \r , the escape used to produce it in a number of contexts), which causes the cursor to return to the start of the line. When you run grep 100 *.dl.tst , each line that’s output starts with the file name, but that’s followed by multiple updates which return the cursor to the start of the line, so you don’t see the file name — it’s overwritten by subsequent output. In more detail, the output looks like shpr002.20201124_141036.dl.tst: followed by a carriage return, followed by the first progress output from curl , 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 followed by a carriage return, etc., until the percentage reaches 100. Because all this is only separated by carriage returns, not line feeds, it counts as a single line, and grep matches that in its entirety. The same effect explains the output of grep 100 *.dl.tst|awk '{print$0}' . When you ask AWK to output $1 , it outputs the first field, and now you can see it: it contains the file name, a colon, a carriage return, and that’s it — the start of curl ’s output then starts with a space (to leave room for the percentage count), which is a field separator. When you ask it to output $2 , it outputs the second field, which is the first percentage count, 0 : shpr002.20201124_141036.dl.tst:\r 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0<-- Field 1 --> ! ! ! ! ... $2 $3 $4 $5 ... | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/621336",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/428183/"
]
} |
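To see the hidden structure for yourself, translate the carriage returns into line feeds so each overwritten update lands on its own line; a sketch (output illustrative, following the description in the answer):
$ grep 100 *.dl.tst | tr '\r' '\n' | head -n 2
shpr002.20201124_141036.dl.tst:
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0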
621,506 | I would like to delete each line in data.txt which contains one of the parameters of the second column in keys.txt file. keys.txt 2 aa2 bb2 cc2 dd data.txt 1 aa It is great1 aa I want to delete this line1 kk Really ?1 bb Yes, I think so.1 bb Why ?1 kk Because I don't like the current situation1 ll I want to change1 cc Indeed it's a need1 cc Sorry1 zz Ok ! Desired output 1 kk Really ?1 kk Because I don't like the current situation1 ll I want to change1 zz Ok ! I tried with the following awk program: awk ' NR == FNR {pattern[$0]; next} { for (var in pattern) { if ($0 ~ var) { getline next } } print >> GoodFile.txt }' keys.txt | You are already close, only missing a few minor points: You need to add data.txt as argument to your awk call, otherwise that file will not be processed. You are currently registering the entire line in keys.txt to your removal database, so you should restrict that to the second field ( $2 instead of $0 ). You are using if ($0 ~ var) to check if a line in data.txt should be excluded. Here too, you should only compare the second field of the line, and you should use the exact match ( == ) instead of regular expression match to guard against situations in which your keys can contain characters that are special to regular expressions. You print from awk , which you actually don't need to. You can redirect the output instead. So, with slight modifications: awk 'NR==FNR{pattern[$2];next} !($2 in pattern)' keys.txt data.txt > GoodFile.txt This will register the second column of each line in keys.txt in the array pattern , but do nothing else for that file. For data.txt , it will reach the point where the !($2 in pattern) condition is evaluated for each line. If the condition evaluates to "true" (i.e. the second column of the line is not among the indices of the array pattern ), the current line will be printed. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/621506",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/442259/"
]
} |
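Running the corrected one-liner against the keys.txt and data.txt from the question produces the desired output:
$ awk 'NR==FNR{pattern[$2];next} !($2 in pattern)' keys.txt data.txt
1 kk Really ?
1 kk Because I don't like the current situation
1 ll I want to change
1 zz Ok !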
621,523 | In How does Linux “kill” a process? it is explained that Linux kills a process by returning its memory to the pool. On a single-core machine, how does it actually do this? It must require CPU time to kill a process, and if that process is doing some extremely long running computation without yielding, how does Linux gain control of the processor for long enough to kill off that process? | The kernel gains control quite frequently in normal operations: whenever a process calls a system call, and whenever an interrupt occurs. Interrupts happen when hardware wants the CPU’s attention, or when the CPU wants the kernel’s attention, and one particular piece of hardware can be programmed to request attention periodically (the timer). Thus the kernel can ensure that, as long as the system doesn’t lock up so hard that interrupts are no longer generated, it will be invoked periodically. As a result, if that process is doing some extremely long running computation without yielding isn’t a concern: Linux is a preemptive multitasking operating system, i.e. it multitasks without requiring running programs’ cooperation. When it comes to killing processes, the kernel is involved anyway. If a process wants to kill another process, it has to call the kernel to do so, so the kernel is in control. If the kernel decides to kill a process ( e.g. the OOM killer, or because the process tried to do something it’s not allowed to do, such as accessing unmapped memory), it’s also in control. Note that the kernel can be configured to not control a subset of a system’s CPUs itself (using the deprecated isolcpus kernel parameter), or to not schedule tasks on certains CPUs itself (using cpusets without load balancing, which are fully integrated in cgroup v1 and cgroup v2 ); but at least one CPU in the system must always be fully managed by the kernel. It can also be configured to reduce the number of timer interrupts which are generated, depending on what a given CPU is being used for. There’s also not much distinction between single-CPU (single-core, etc.) systems and multi-CPU systems, the same concerns apply to both as far as kernel control is concerned: each CPU needs to call into the kernel periodically if it is to be used for multitasking under kernel control. | {
"score": 7,
"source": [
"https://unix.stackexchange.com/questions/621523",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/134585/"
]
} |
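If you want to observe the periodic timer interrupts described above on a running system, /proc/interrupts exposes the counters; a sketch (row labels vary by architecture; LOC is the local APIC timer on x86):
$ grep 'LOC:' /proc/interrupts
$ watch -n1 'grep LOC: /proc/interrupts'   # counters keep rising even while one task spins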
621,542 | I am running Linux Mint 20, Cinnamon edition. I installed osdclock. When I run osd_clock it displays in the bottom left corner. If I run osd_clock -t it runs in the top left corner. I can run it at all 4 corners. I can also offset it using -o, but that only moves it along the vertical line. I cannot seem to move it along the horizontal line... But, is there any way to run it in the center of the screen? Here is the man page https://manpages.debian.org/testing/osdclock/osd_clock.1.en.html Anyone familiar with that program? Cheers. | | {
"score": 7,
"source": [
"https://unix.stackexchange.com/questions/621542",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/443288/"
]
} |
621,586 | I am trying to download an update for a piece of software, and my package manager says that the key is invalid and thus warns me. W: Failed to fetch https://deb.torproject.org/torproject.org/dists/buster/InRelease The following signatures were invalid: EXPKEYSIG 74A941BA219EC810 deb.torproject.org archive signing key Then the output after listing the key in GPG. pub rsa2048/0xEE8CBC9E886DDD89 2009-09-04 [SC] [expires: 2022-08-05] Key fingerprint = A3C4 F0F9 79CA A22C DBA8 F512 EE8C BC9E 886D DD89uid [ unknown] deb.torproject.org archive signing keysub rsa2048/0x74A941BA219EC810 2009-09-04 [S] [expires: 2020-11-23] Key fingerprint = 2265 EB4C B2BF 88D9 00AE 8D1B 74A9 41BA 219E C810 As you can see, the subkey has expired recent to writing this post. I went to the developer's website and the signing key is unchanged. How do I continue the software update without skipping the signing process? | From 2019.www.torproject.org/docs/debian.html.en , you can run these commands to add the key to the trusted apt keys, I only added sudo : curl https://deb.torproject.org/torproject.org/A3C4F0F979CAA22CDBA8F512EE8CBC9E886DDD89.asc | gpg --importgpg --export A3C4F0F979CAA22CDBA8F512EE8CBC9E886DDD89 | sudo apt-key add - After that sudo apt-key list (or gpg --list-keys ) should list the updated key: pub rsa2048 2009-09-04 [SC] [expires: 2024-11-17] A3C4 F0F9 79CA A22C DBA8 F512 EE8C BC9E 886D DD89uid [ unknown] deb.torproject.org archive signing keysub rsa2048 2009-09-04 [S] [expires: 2022-06-11] Now you can install the keyring package if you wish to keep the key current: sudo apt updatesudo apt install deb.torproject.org-keyring The deb.torproject.org-keyring package contains the current version of the archive signing key (/etc/apt/trusted.gpg.d/deb.torproject.org-keyring.gpg) to validate the authenticity of tor packages. If you install the package, you'll automatically update the key next time you run sudo apt update; sudo apt upgrade whenever there is an updated version of the key available (assuming the currently installed key is not expired to fetch the package via apt). | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/621586",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/443650/"
]
} |
621,632 | I want to delete all directories in the download folder. /content/download/documents//content/download/music//content/download/videos//content/download/pictures/ I have tried removing them using rm but it's not working. rm -rf '/content/download/*/' | Wildcard * doesn't expand inside quotes (neither within single nor double quotes), so you need to write it outside the quotes: rm -rf '/content/download/'*/ However, quotes are necessary only when the path or filename contains whitespace/newlines or other characters that are special to the shell, to prevent the shell from interpreting them. With the trailing / , */ will expand to all files of type directory after symlink resolution, so it will also include symlinks to directories. The expansion will be something like /content/download/dirlink/ for those. What happens for those depends on the rm implementation. With the ones typically found on Linux-based systems, that will remove the contents (recursively) of the target directory of the symlink, but not the symlink nor the directory itself. Also note that it won't remove hidden directories. If your shell is bash, you can read more in its manual, in particular: Filename Expansion Pattern Matching Shell Quoting | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/621632",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/381208/"
]
} |
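A quick way to see the difference without risking any deletions is to substitute echo for rm; a sketch using the directories from the question:
$ echo '/content/download/*/'   # quoted: the * stays literal and matches nothing
/content/download/*/
$ echo /content/download/*/     # unquoted: the shell expands the glob
/content/download/documents/ /content/download/music/ /content/download/pictures/ /content/download/videos/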
621,771 | I have a fresh install of Fedora 33, with BTRFS. While installing it I created separate partitions for / and /home . But now the system (df, gparted) thinks that I have the same partition mounted in both: $ df -h.../dev/nvme0n1p2 850G 36G 814G 5% /tmpfs 32G 34M 32G 1% /tmp/dev/nvme0n1p2 850G 36G 814G 5% /home When I add a large file to /home , I see the used space increasing in both.The weird thing (to me) is that when I look at / I don't see directories from /home . What happened? Does anyone know if this is safe, i.e. can writing to the user's directory overwrite or mess up system files and vice versa? | When I add a large file to /home, I see the used space increasing in both. This is how btrfs works. You have one partition formatted to btrfs and the filesystem itself is divided into multiple (in case of Fedora two) subvolumes. All the subvolumes share the same space, that's why you see both / and /home having same 814G free space and that's why creating a new file in /home also increases used space in / . But there's no reason to be worried, it's still two separate directories and you can't overwrite data on / when writing to /home or vice versa. While installing it I created separate partitions for / and /home If you used the manual partition tool and selected btrfs (which is now default) you created subvolumes, not partitions. If you want separate partitions, you need to switch the partitioning scheme from Btrfs to Standard Partition : | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/621771",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/443845/"
]
} |
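To inspect this layout yourself, you can list the subvolumes and check the mount options; a sketch (the subvolume names root and home are Fedora's defaults):
$ sudo btrfs subvolume list /
$ findmnt -t btrfs   # the OPTIONS column shows subvol=/root for / and subvol=/home for /home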
621,942 | When setting up a RAID-1 Ubuntu system (i.e. where / and /boot are on RAID-1 mirrors) it's unclear to me what Ubuntu's answer is to making the EFI System Partition (ESP, i.e. /boot/efi ) redundant, as well. The Fedora solution, i.e. just putting it on a superblock 1.0 RAID-1 , apparently isn't supported at all and thus makes grub-install fail. There seems to be some support for letting the Ubuntu installer create 2 ESPs and install the files to both of them. But according to this recent bug report it's still unclear how this scheme is supported by regular package updates: https://bugs.launchpad.net/ubuntu/+source/grub2/+bug/1876974 ( see also ) So how do I have to set up the ESP's on the both disks (when aiming for RAID-1 setup with - say - Ubuntu 20.04 LTS) to make them redundant and keep them in-sync for later Ubuntu updates? The objective here is to still be able to boot that Ubuntu system, i.e. even when one disk dies. How does my /etc/fstab (or other relevant) configuration files have to look for such a setup? For example, when the first ESP is mounted under /boot/efi where has the second one to be mounted in order to be recognized by Ubuntu package post-install scripts? And what are the necessary grub-install / dpkg-reconfigure /reinstall commands to fix the ESP setup after an installer failed to set up the ESPs correctly? | Ubuntu's solution to a redundant ESP is to just to create and mount two of them, and reconfigure grub , instead of creating one on a superblock 1.0 RAID-1. The name of the second mount point doesn't matter. Since a single ESP is usually mounted under /boot/efi , mounting the second ESP under something like /boot/eficopy would be natural. Both ESPs have to be mounted automatically via /etc/fstab in case a grub package update happens. It's important that both ESPs have the right GPT type (i.e. C12A7328-F81F-11D2-BA4B-00A0C93EC93B ). Sizing them 200 MiB each is sufficient. The initial setup then requires reconfiguring grub: dpkg-reconfigure grub-efi-amd64 The grub reconfigure script then checks for all partitions with the ESP GPT type and allows the user to select both. After that change future package updates/re-installs will update both ESPs. Note that (as of 2020), the reconfigure only works for grub-efi-amd64 and not for grub-efi-amd64-signed (where reconfigure doesn't prompt for anything). Thus, one might need to install the right grub first, e.g.: apt-get install grub-efi-amd64apt-get remove grub-efi-amd64-signed | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/621942",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/1131/"
]
} |
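For illustration, a hypothetical /etc/fstab fragment mounting both ESPs automatically (the UUIDs are placeholders; substitute the ones blkid reports for your partitions):
UUID=AAAA-1111 /boot/efi vfat umask=0077 0 1
UUID=BBBB-2222 /boot/eficopy vfat umask=0077 0 1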
622,057 | Say a song is recorded somewhere on the planet with a computer, and that stream of data is stored. If this is stored using Audacity , you can set it to .aup format, which is a very simple XML file. Why do we need anything else than a few data points in a file? Why would we use .mp3 or anything? In fact, encoding this increases the size for a few local samples. I think the issue could be size for images for example, but I can't say that for audio files. | Why linux uses non free codecs That's the easy part: because most of the audio/music is in non-free formats like MP3 we need non-free codecs to decode it. Why do we need anything else than a few data-points in a file? Audio CD quality has sampling rate 44.1 kHz and 16bit words so you need 605 MiB for 1 hour of stereo audio (44100 * 60 * 60 * 16 * 2). That's quite a lot of data points :-). That's why lossy compression exists and you need something (= codec) to decode these formats/data. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/622057",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/390930/"
]
} |
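The arithmetic behind that 605 MiB figure, spelled out with 16 bits = 2 bytes per sample, 2 channels and 3600 seconds:
$ echo $((44100 * 2 * 2 * 3600))   # bytes for one hour of CD-quality stereo
635040000
$ echo $((44100 * 2 * 2 * 3600 / 1024 / 1024))   # in MiB
605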
622,161 | I feel like I am missing something very simple and I was wondering if it is possible to show the last few lines of the last 4 modified files. I tried something like this tail | ls -lt | head -5 but I think I should iterate over the ls -lt result and apply tail to it and I am not sure how to do it. Any help is appreciated | Before I start, it's generally considered bad practice to use the output of ls as input for something else; a common flaw being that it may not work as intended for files containing white space/newlines in their name. With that limitation in mind, you will probably find that ls | something will work OK most of the time. You are heading in the right direction with your command, here is one solution with the above caveat about ls limitations: ls -t | head -5 | xargs tail This will throw a non-fatal error if there are subdirectories in your listing. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/622161",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/186975/"
]
} |
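If your file names may contain spaces (though still no newlines), a slightly more robust variant with GNU xargs, limited to the four files the question asks for, would be:
$ ls -t | head -n 4 | xargs -d '\n' tail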
622,168 | I have 35 files and directory in my home at the first level (without descending into subdirectories).But the results are "faked" and report 36 because they include the . special directory.How can I exclude the "."? I have tried the switch -not -path but it doesn't work. find . -not -path '*/\.*' -not -path /\. -maxdepth 1|wc -l36find $HOME -not -path '*/\.*' -maxdepth 1|wc -l36 | find includes the directories from which it starts; exclude those from the output, without decoration, to get the result you’re after: find . -maxdepth 1 ! -path . | wc -lfind "$HOME" -maxdepth 1 ! -path "$HOME" | wc -l (as long as $HOME doesn’t contain any characters which would be interpreted as pattern characters, i.e. * , ? , [] ). You can also match against the . name only, by appending it (if necessary) to the path you’re interested in: find . -maxdepth 1 ! -name . | wc -lfind "$HOME"/. -maxdepth 1 ! -name . | wc -l To accurately count the files found by find , you should count something other than the file names (which can include newlines), e.g. if your find supports -printf : find . -maxdepth 1 ! -name . -printf 1 | wc -c or, with POSIX find , by using the // trick: find .//. -maxdepth 1 ! -name . | grep -c // .. isn’t a problem with find , it doesn’t include it unless you add it explicitly ( find .. ...). ls won’t show . and .. by default; if you want to see hidden files, but not . and .. , use the -A option instead of -a . With GNU ls , you can ask it to quote special characters, which will allow you to accurately count files: ls -q1A . | wc -l | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/622168",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/80389/"
]
} |
622,185 | As usual, I can inspect the contents of syslog entries in this way: cat /var/log/syslog | grep myentry I need to append all myentry rows to a specific file. Of course just redirecting the output of the command above to the file will not work, because it will append all the rows, even if they were already added last time. The first solution that comes to mind is to cycle among all the rows in syslog until I find the last row of the target file. Then I can append all the following ones. Doing this periodically (i.e. using a cronjob or even easier a timed cycle in bash) should do the trick. Is there something smarter or more elegant to do the same job? EDIT I add what terdon requested: Example of my syslog: Jan 17 13:03:18 stm32mp1-abc local2.info chat[15782]: CONNECTJan 17 13:03:18 stm32mp1-abc local2.info chat[15782]: -- got itJan 17 13:03:18 stm32mp1-abc user.info myentry[14300]: ReadyJan 17 13:03:18 stm32mp1-abc local2.info chat[15782]: send (^M)Jan 17 13:03:18 stm32mp1-abc user.info myentry[14300]: Init completeJan 17 13:03:18 stm32mp1-abc daemon.info pppd[14362]: Serial connection established.Jan 17 13:03:18 stm32mp1-abc daemon.info pppd[14362]: Using interface ppp0 Example of the existing file I want to append to: Jan 17 13:03:18 stm32mp1-abc user.info myentry[14300]: Ready Final output I expect from those two files: Jan 17 13:03:18 stm32mp1-abc user.info myentry[14300]: ReadyJan 17 13:03:18 stm32mp1-abc user.info myentry[14300]: Init complete UPDATE Ok, it seems I need to be very specific with the example, regardless my description. So new examples: syslog Jan 17 13:03:18 stm32mp1-abc local2.info chat[15782]: CONNECTJan 17 13:03:18 stm32mp1-abc local2.info chat[15782]: -- got itJan 17 13:03:18 stm32mp1-abc user.info myentry[14300]: ReadyJan 17 13:03:18 stm32mp1-abc local2.info chat[15782]: send (^M)Jan 17 13:03:18 stm32mp1-abc user.info myentry[14300]: Init completeJan 17 13:03:18 stm32mp1-abc daemon.info pppd[14362]: Serial connection established.Jan 17 13:03:18 stm32mp1-abc user.info myentry[14300]: Start operationsJan 17 13:03:18 stm32mp1-abc daemon.info pppd[14362]: Using interface ppp0 current output Jan 17 13:03:18 stm32mp1-abc user.info myentry[14300]: Init complete new output Jan 17 13:03:18 stm32mp1-abc user.info myentry[14300]: Init completeJan 17 13:03:18 stm32mp1-abc user.info myentry[14300]: Start operations It should be clear: if the output file is empty, append all the syslog/myentries rows if the output file is not empty, append all the next syslog/myentries rows (from the matching one) if the output file is not empty and there's no matching, still append all the next syslog/myentries rows (after the last timestamp) As said, I'm able to do this using the brute force: cycle all rows, check criteria and append if needed. I'm looking for an easier solution, like the proposal to split the syslog entries automatically, but I didn't find a way to do it. | | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/622185",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/169815/"
]
} |
622,195 | I want to find the lines that contain any word three times. For this, I thought it would be best to use the grep command. This was my attempt. grep '\(.*\)\{3\}' myfile.txt | Using the standard word definition, GNU Grep, 3 or more occurrences of any word . grep -E '(\W|^)(\w+)\W(.*\<\2\>){2}' file GNU Grep, only 3 occurrences of any word . grep -E '(\W|^)(\w+)\W(.*\<\2\>){2}' file | grep -Ev '(\W|^)(\w+)\W(.*\<\2\>){3}' POSIX Awk, only 3 occurences of any word . awk -F '[^_[:alnum:]]+' '{ # Field separator is non-word sequences split("", cnt) # Delete array cnt for (i=1; i<=NF; i++) cnt[$i]++ # Count number of occurrences of each word for (i in cnt) { if (cnt[i]==3) { # If a word appears exactly 3 times print # Print the line break } }}' file For 3 or more occurences, simply change == to >= . Equivalent golfed one-liner: awk -F '[^_[:alnum:]]+' '{split("",c);for(i=1;i<=NF;i++)c[$i]++;for(i in c)if(c[i]==3){print;next;}}' file GNU Awk, only 3 occurrences of the word ab . gawk 'gsub(/\<ab\>/,"&")==3' file For 3 or more occurences, simply change == to >= . Reading material \2 is a back-reference . \w \W \< \> special expressions in GNU Grep . The [:alnum:] POSIX character class . | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/622195",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/444094/"
]
} |
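A quick sanity check of the gawk variant, which prints a line only when gsub reports exactly three whole-word substitutions:
$ printf 'ab ab ab here\nab ab only twice\n' | gawk 'gsub(/\<ab\>/,"&")==3'
ab ab ab here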
622,283 | According to https://www.geeksforgeeks.org/rev-command-in-linux-with-examples/ rev command in Linux is used to reverse the lines characterwise. e.g. wolf@linux:~$ revHello World!!dlroW olleH What is the example of application of rev in real life? Why do we need reversed string? | The non-standard rev utility is useful in situations where it's easier to express or do an operation from one direction of a string, but it's the reverse of what you have. For example, to get the last tab-delimited field from lines of text using cut (assuming the text arrives on standard input): rev | cut -f 1 | rev Since there is no way to express "the last field" to cut , it's easier to reverse the lines and get the first field instead. One could obviously argue that using awk -F '\t' '{ print $NF }' would be a better solution, but we don't always think about the best solutions first. The (currently) accepted answer to How to cut (select) a field from text line counting from the end? uses this approach with rev , while the runner-up answer shows alternative approaches. Another example is to insert commas into large integers so that 12345678 becomes 12,345,678 (original digits in groups of three, from the right): echo '12345678' | rev | sed -e 's/.../&,/g' -e 's/,$//' | rev See also What do you use string reversal for? over on the SoftwareEngineering SE site for more examples. | {
"score": 7,
"source": [
"https://unix.stackexchange.com/questions/622283",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/409008/"
]
} |
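A small demonstration of the rev | cut | rev idiom on a tab-delimited line (tab is cut's default delimiter):
$ printf 'one\ttwo\tthree\n' | rev | cut -f 1 | rev
three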
622,296 | The command df . can show us which device we are on. For example, me@ubuntu1804:~$ df .Filesystem 1K-blocks Used Available Use% Mounted on/dev/sdb1 61664044 8510340 49991644 15% /home Now I want to get the string /dev/sdb1 . I tried like this but it didn't work: df . | read a; read a b; echo "$a" , this command gave me an empty output. But df . | (read a; read a b; echo "$a") will work as expected. I'm kind of confused now. I know that (read a; read a b; echo "$a") is a subshell, but I don't know why I have to make a subshell here. As my understanding, x|y will redirect the output of x to the input of y . Why read a; read a b; echo $a can't get the input but a subshell can? | The main problem here is grouping the commands correctly. Subshells are a secondary issue. x|y will redirect the output of x to the input of y Yes, but x | y; z isn't going to redirect the output of x to both y and z . In df . | read a; read a b; echo "$a" , the pipeline only connects df . and read a , the other commands have no connection to that pipeline. You have to group the read s together: df . | { read a; read a b; } or df . | (read a; read a b) for the pipeline to be connected to both of them. However, now comes the subshell issue: commands in a pipeline are run in a subshell, so setting a variable in them doesn't affect the parent shell. So the echo command has to be in the same subshell as the read s. So: df . | { read a; read a b; echo "$a"; } . Now whether you use ( ... ) or { ...; } makes no particular difference here since the commands in a pipeline are run in subshells anyway. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/622296",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/145824/"
]
} |
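As a side note, in a non-interactive bash script (where job control is off) you can instead enable the lastpipe option, which runs the final element of a pipeline in the current shell so the variables survive; a sketch:
#!/bin/bash
shopt -s lastpipe
df . | { read -r header; read -r device rest; }
echo "$device"   # e.g. /dev/sdb1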
622,344 | I want to add trailing spaces to each field. My file look like: Input file: A|B|C|D Field 1 length in output file would be: 1 Field 2 length in output file would be: 3 Field 3 length in output file would be: 4 Field 4 length in output file would be: 6 Desired output: AB C D How to achieve this in shell? Kindly assist | With awk : awk -F'|' '{printf "%-1.1s%-3.3s%-4.4s%-6.6s\n", $1, $2, $3, $4}' < input > output Would do right space padding and truncation. Depending on the awk implementation, that would be length in terms of bytes or characters (making a difference for multi-byte characters). In any case, not based on the display width of those characters (like for double-width or 0-width characters, or TAB which don't have a display width of 1 on terminals). Examples: $ echo 'A|B|C|D' | awk -F'|' '{printf "%-1.1s%-3.3s%-4.4s%-6.6s\n", $1, $2, $3, $4}'AB C D (all of those A B C D graphemes are each made of one character, each made of one byte in any locale and each is single-width). $ echo 'A|B|Ç|D' | gawk -F'|' '{printf "%-1.1s%-3.3s%-4.4s%-6.6s\n", $1, $2, $3, $4}'AB Ç D$ echo 'A|B|Ç|D' | mawk -F'|' '{printf "%-1.1s%-3.3s%-4.4s%-6.6s\n", $1, $2, $3, $4}'AB Ç D (2 byte, 1-width Ç character in UTF-8) $ echo $'A|B|C\u0327|D' | gawk -F'|' '{printf "%-1.1s%-3.3s%-4.4s%-6.6s\n", $1, $2, $3, $4}'AB Ç D$ echo $'A|B|C\u0327|D' | mawk -F'|' '{printf "%-1.1s%-3.3s%-4.4s%-6.6s\n", $1, $2, $3, $4}'AB Ç D 1-byte, 1-width C combined with 0-width, 2 bytes (in UTF-8) combining cedilla to form a 1-width, 2-characters, 3-bytes Ç grapheme, the decomposed version of the pre-composed U+00C7 Ç character in the previous example. To take into consideration the display width of characters, with some expand implementations (though not GNU expand ) and assuming the input doesn't contain TAB characters and none of the input fields exceed their allocated width in the first place, you could do: <input sed $'s/|/|\t/g;s/$/|\t/' | expand -t3,8,14,22 | sed 's/| //g' >output Which on the output of printf '%s\n' 'A|B|C|D' $'A|B|\uc7|D' $'A|B|C\u327|D' should give: AB C DAB Ç DAB Ç D | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/622344",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/439394/"
]
} |
622,349 | There is $releasever which can be used to identify the version. But I host a repo containing both Fedora and CentOS rpms. If there is a variable that holds the distro name/id, I can then use a uniform yum repo conf. | | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/622349",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/77353/"
]
} |
622,534 | So I would like to do grep -ril in a folder. I would want it to return only the top folder which gets a match. To illustrate: /tmp/: grep -ril "hello" returns: tmp1/banan/filetmp1/banan2/filetmp2/ape/filetmp2/ape2/file Expected result: tmp1tmp2 | Directories don't match patterns for content; files do. What you seem to be asking is how to get the directories of files that match the pattern. Strip off the path past the first component, and ensure the result is presented as unique values in sorted order, as you have specified grep -ril "hello" | sed 's!/.*$!!' | sort -u Replace the sort with awk '!h[$0]++' if you don't want to change the order of results | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/622534",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/324368/"
]
} |
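An equivalent sketch using cut instead of sed to keep only the first path component:
$ grep -ril "hello" | cut -d/ -f1 | sort -u
tmp1
tmp2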
622,537 | if(-z "$file1" && "file2") { print "file1 and file2 are empty";} else { print "execute";} When I write this, when the files are empty it prints execute and when files are not empty it prints file1 and file2 are empty . When the condition is true it is supposed to print file1 and file2 are empty , am I right? or wrong? | | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/622537",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/444001/"
]
} |
622,752 | Sorry for a noob question but, well, I am a noob, so... Given two files, say file1 with content, say, text1 and file2 with content text2 , I want to create a new file file3 with content text1newtextinbetweentext2 . I would expect some command like cat file1 (dontknowwhat "newtextinbetween") file2 > file3 Is there some dontknowwhat that would do what I want? If not, what is the optimal way to do it? | Another way to do this is to use process substitution: cat file1 <(echo "newtextinbetween") file2 > file3 Edit: If you don't what echo to add a line break use echo -n instead. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/622752",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/444771/"
]
} |
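If you prefer to avoid echo's portability quirks (e.g. around -n for suppressing the newline), printf piped into cat works too, with - standing for standard input:
$ printf '%s' 'newtextinbetween' | cat file1 - file2 > file3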
622,768 | The bash man page says the following about the read builtin: The exit status is zero, unless end-of-file is encountered This recently bit me because I had the -e option set and was using the following code: read -rd '' json <<EOF{ "foo":"bar"}EOF I just don't understand why it would be desirable to exit non successfully in this scenario. In what situation would this be useful? | read reads a record (line by default, but ksh93/bash/zsh allow other delimiters with -d , even NUL with zsh/bash) and returns success as long as a full record has been read. read returns non-zero when it finds EOF while the record delimiter has still not been encountered. That allows you do do things like while IFS= read -r line; do ...done < text-file Or with zsh/bash while IFS= read -rd '' nul_delimited_record; do ...done < null-delimited-list And that loop to exit after the last record has been read. You can still check if there was more data after the last full record with [ -n "$nul_delimited_record" ] . In your case, read 's input doesn't contain any record as it doesn't contain any NUL character. In bash , it's not possible to embed a NUL inside a here document. So read fails because it hasn't managed to read a full record. It stills stores what it has read until EOF (after IFS processing) in the json variable. In any case, using read without setting $IFS rarely makes sense. For more details, see Understanding "IFS= read -r line" . | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/622768",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/237982/"
]
} |
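A small demonstration that read returns non-zero at end-of-file while still populating the variable with whatever it managed to read:
$ printf 'no trailing newline' | { read -r line; echo "exit=$? line=$line"; }
exit=1 line=no trailing newline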
622,902 | We have a shell script that -- for various reasons -- wraps a vendor's application. We have system administrators and application owners who have mixed levels of familiarity with systemd. As a result, in situations where the application has failed (systemctl indicates as much), some end users (including “root” system administrators) might start an application “directly” with the wrapper script instead of using systemctl restart . This can cause issues during reboots, because systemd does not call the proper shutdown script -- because as far as it's concerned, the application was already stopped. To help guide the transition to systemd, I want to update the wrapper script to determine whether it is being called by systemd or by an end-user; if it's being called outside systemd, I want to print a message to the caller, telling them to use systemctl. How can I determine, within a shell script, whether it is being called by systemd or not? You may assume: a bash shell for the wrapper script the wrapper script successfully starts and stops the application the systemd service works as expected An example of the systemd service could be: [Unit]Description=Vendor's Application After=network-online.target[Service]ExecStart=/path/to/wrapper startExecStop=/path/to/wrapper stopType=forking[Install]WantedBy=multi-user.target I am not interested in Detecting the init system , since I already know it's systemd. | From Lucas Werkmeister 's informative answer on Server Fault : With systemd versions 231 and later, there's a JOURNAL_STREAM variable that is set for services whose stdout or stderr is connected to the journal. With systemd versions 232 and later, there's an INVOCATION_ID variable that is set. If you don't want to rely on those variables, or for systemd versions before 231, you can check if the parent PID is equal to 1: if [[ $PPID -ne 1 ]]then echo "Don't call me directly; instead, call 'systemctl start/stop service-name'" exit 1fi >&2 | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/622902",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/117549/"
]
} |
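A sketch of the same guard using the INVOCATION_ID variable instead of the PPID check (requires systemd 232 or later):
if [[ -z "${INVOCATION_ID-}" ]]; then
    echo "Don't call me directly; instead, call 'systemctl start/stop service-name'" >&2
    exit 1
fi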
622,924 | I saved a backup from ssd A to a .img file with the dd command; now I want to clone ssd A to ssd B. Can I do it directly from the .img file, something like: dd if=/dev/backup.img of=/dev/sdc bs=1 status=progress Will this be the same as doing: dd if=/dev/sdb of=/dev/sdc bs=1 status=progress Where Disk A = sdb and Disk B = sdc. I already did dd if=/dev/sdc of=/dev/image.img I would prefer to clone it from the .img file so I don't mess something up or do the opposite, so I want to know: are those two methods 100% the same in their results? | | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/622924",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/444941/"
]
} |
623,001 | I need to find out how many services are listening to my interfaces (ipv4 only, not localhost) $ ifconfigens192: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500 inet 10.129.56.137 netmask 255.255.0.0 broadcast 10.129.255.255 inet6 dead:beef::250:56ff:feb9:8c07 prefixlen 64 scopeid 0x0<global> inet6 fe80::250:56ff:feb9:8c07 prefixlen 64 scopeid 0x20<link> ether 00:50:56:b9:8c:07 txqueuelen 1000 (Ethernet) RX packets 3644 bytes 330312 (330.3 KB) RX errors 0 dropped 0 overruns 0 frame 0 TX packets 3198 bytes 679711 (679.7 KB) TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0lo: flags=73<UP,LOOPBACK,RUNNING> mtu 65536 inet 127.0.0.1 netmask 255.0.0.0 inet6 ::1 prefixlen 128 scopeid 0x10<host> loop txqueuelen 1000 (Local Loopback) RX packets 15304 bytes 895847 (895.8 KB) RX errors 0 dropped 0 overruns 0 frame 0 TX packets 15304 bytes 895847 (895.8 KB) TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0$ nmap 10.129.56.137Starting Nmap 7.60 ( https://nmap.org ) at 2020-12-05 05:23 UTCNmap scan report for 10.129.56.137Host is up (0.000086s latency).Not shown: 991 closed portsPORT STATE SERVICE21/tcp open ftp22/tcp open ssh80/tcp open http110/tcp open pop3139/tcp open netbios-ssn143/tcp open imap445/tcp open microsoft-ds993/tcp open imaps995/tcp open pop3sNmap done: 1 IP address (1 host up) scanned in 10.57 seconds I thought the answer was 9 but there must be a way to find the correct answer.Cheers in advance! | From man netstat: This program is mostly obsolete. Replacement for netstat is ss. At this point, I think this will be the best option: ss -l -4 | grep -v "127\.0\.0" | grep "LISTEN" | wc -l Where: -l : show only listening services -4 : show only ipv4 -grep -v "127.0.0" : exclude all localhost results -grep "LISTEN" : better filtering only listening services wc -l : count results | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/623001",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/445018/"
]
} |
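With a newer iproute2 you can drop the header via -H and let awk filter and count in one pass; a sketch (field positions can shift between versions; here $5 is the local address when both -t and -u are given):
$ ss -H -l -4 -t -u | awk '$5 !~ /^127\./ {c++} END {print c+0}'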
623,046 | I have a log file with the following structure Example: 1522693524403 entity1,sometext1522693541466 entity2,sometext1522693547273 entity1,sometext... Now I would like to replace the time from epoch milliseconds to DD.MM.YYYY HH:MM:SS in all of the log files with a bash command on a Debian System. I tried different solutions provided here and on other websites but it did not really work for me. Can anyone help me please? Cheers fastboot | If you use a shell script reading line by line, or an awk script calling system date , it would be very slow, too many processes. You have to use a simple awk, Perl, Python, or anything, script. All languages have standard datetime functions for convertions between formats. Here and here are some good references. If you want to use GNU awk time functions and strftime() , all you need for your case is to select the epoch substring excluding the milliseconds: $ awk '{$1 = strftime("%F %T", substr($1,1,10))} 1' file2018-04-02 21:25:24 entity1,sometext2018-04-02 21:25:41 entity2,sometext2018-04-02 21:25:47 entity1,sometext Or to print the milliseconds together: $ awk '{$1 = strftime("%F %T", substr($1,1,10)) "." substr($1,11)} 1' file2018-04-02 21:25:24.403 entity1,sometext2018-04-02 21:25:41.466 entity2,sometext2018-04-02 21:25:47.273 entity1,sometext Or to print the day-month-Year format: $ awk '{$1 = strftime("%d-%m-%Y %T", substr($1,1,10))} 1' file02-04-2018 21:25:24 entity1,sometext02-04-2018 21:25:41 entity2,sometext02-04-2018 21:25:47 entity1,sometext | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/623046",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/445060/"
]
} |
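For a one-off conversion of a single timestamp into the DD.MM.YYYY format the question asks for, GNU date also works once the milliseconds are dropped (pinning TZ for a reproducible result):
$ TZ=UTC date -d @1522693524 '+%d.%m.%Y %H:%M:%S'
02.04.2018 18:25:24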
623,057 | What’s the difference between df -h and df -kh ? I am trying both these commands in my terminal, however I don’t see any visible difference so wanted to understand. | There is effectively no difference. The -h option to df selects "human readable" output, meaning that the sizes of things will be scaled to appropriate amounts to give nice small readable values, such as 2.1G, or 806M. The -k option does something similar, but scales the sizes to kilobytes only, so you'll get e.g. 2165680 and 824550 instead of 2.1G and 806M. Since these options are conflicting with each other (you can't both have the sizes in kilobytes and in "human readable" format), the last of option specified will "win". The combination of these options that you use, -kh (which is the same as -k -h ), means that you'll get the effect of using only -h . There is therefore no difference between df -h and df -kh . Compare this behavior with conflicting options to other utilities, such as the -C , the -1 ("minus one"), and the -l ("minus ell") option to ls , and what happens if you use all in one order or the other. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/623057",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/222504/"
]
} |
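You can confirm the last-option-wins behaviour directly from the prompt:
$ df -h . ; df -kh .   # identical output: the trailing -h overrides -k
$ df -hk .             # reversed order: -k wins, sizes in 1K blocks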
623,071 | I have a script that uses find to search for files and loops over the result for further processing. for file in $(find ~ -type f -size +1G); do filename=$(basename $file); echo $file; done When I run the find command directly, it outputs matches directly. But within this script, the loop first waits for the find command to completely finish and afterwards loops over the result. Is it possible to have the loop run over a match as soon as the find command finds any match? It would be possible by piping the output, but I want to do more complex stuff in the loop afterwards than simple one-line stuff. | | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/623071",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/29091/"
]
} |
623,080 | My apologies for the silly/simple question - yet after searching the web and SE, I cannot find an answer for this specific issue. Question: How does one change the owner and group (system-wide) only for files owned by a specific owner? Use-case: We have a number of RasPis running as various servers and use rsync to back them up. When we're unfortunate enough to have to perform a restore, the owner and group of all 'user' files is pi:pi , rather than the original owner adminuser:adminuser , for example. Without hunting the files owned by pi , is where a way to accomplish the owner/group reassignment? Edit:This is the rsync command: sudo rsync -azh -e 'ssh -pNNNN' --stats --delete --exclude-from="${exc_path}" "${src_path}" "${dst_addr}:${dst_path}" | You're not using -numeric-ids and/or -fake-super for your backups (and restores). If you modify your rsync command a little you'll get the mappings saved and restored correctly. In these examples, the -M tells rsync to apply the next option, i.e. the fakery, on the remote side of the connection. An extra side effect is you don't need the remote side (where the backups are stored) to run as root This pushes the backups from the client to the backups server sudo rsync -azh -e 'ssh -pNNNN' --stats --delete --numeric-ids -M--fake-super --exclude-from="${exc_path}" "${src_path}" "${dst_addr}:${dst_path}" This would pull backups from the client (i.e. restore) sudo rsync -azh -e 'ssh -pNNNN' --stats --delete --numeric-ids -M--fake-super --exclude-from="${exc_path}" "${dst_addr}:${dst_path}" "${src_path}" And this, run on the backups server, would push the backups to the client (i.e. restore) sudo rsync -azh -e 'ssh -pNNNN' --stats --delete --numeric-ids --fake-super "${dst_path}" "${src_host}:${src_path}" | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/623080",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/250848/"
]
} |
623,215 | I use bash and the up arrow key sometimes to be able to quickly get the previous commands I used. What is irritating is sometimes when I do this I get a [A instead of a previous command. After doing some research online it looks like this is a key code representing the up arrow key being sent to the computer. I can't seem to find any answers to this online. How can I stop this from happening in the future? | The sequence is actually Escape [ A , and it's part of a set that was adopted by Ecma in 1976 as standard ECMA-48 , being supported by ANSI as a separate but almost identical standard for a number of years (later withdrawn) and also ratified by ISO/IEC 6429 on the way. The upshot of this multiple standardisation is that although they are frequently referenced as ANSI escape codes they should properly be called ECMA-48 control functions * . The usual reason for seeing [A on the screen instead of the action will be that the inital Escape code has been absorbed unexpectedly. I cannot reproduce this on my keyboard unless I first press Ctrl V , which tells the terminal line driver to process the next character as a literal. So, we can then get this sequence Ctrl V Escape [ A , producing the visible output [A . You'll notice that if you press the sequence of characters Escape [ A in quick succession the cursor will indeed go upwards. However, if you pause after the first character you'll fail to get a cursor movement, and this is because the Escape character has a timeout associated with it. On slow serial lines to UNIX systems this used to be a real problem, and the closest equivalent is a slow or intermittent network connection, with a brief lag during this sequence transmission. Now to your question, how to prevent this. There isn't much you can do if you're on an intermittent network connection, except maybe use one of the alternate sequences such as Escape k ... k ... k that are available during command editing in "vi mode" ( set -o vi ). * Much like JavaScript should be called ECMAScript, I suppose | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/623215",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/37617/"
]
} |
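To see the raw bytes your terminal sends for the key, run a program that displays control characters and press Up followed by Enter :

$ cat -v
^[[A

Here ^[ is how cat renders the Escape byte; od -c would show the same three characters as 033 [ A . That is exactly the sequence that, when split by a laggy connection, leaves a stray [A on your command line.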
623,375 | when I export a gpg private or public key, and specify armored as a switch, I get a plain-text key, however, the GnuPG website seems to state that these keys are actually encrypted. What's the point in calling it armored if it's just plain text? I don't get it. The key is exported in a binary format, but this can be inconvenient when the key is to be sent though email or published on a web page. GnuPG therefore supports a command-line option --armor that causes output to be generated in an ASCII-armored format similar to uuencoded documents. In general, any output from GnuPG, e.g., keys, encrypted documents, and signatures, can be ASCII-armored by adding the --armor option. https://www.gnupg.org/gph/en/manual/x56.html | gpg --export outputs binary data. This cannot directly be displayed as text, but in base16 (i.e. hexadecimal) encoding, it looks something like: f53e9c4b013d3c6554c3161116face55f11db56dab1a941fe3a6e5ad246d4eb7 gpg --export --armor outputs base64 encoded data, alongside a plaintext header + footer: -----BEGIN PGP PUBLIC KEY BLOCK-----9T6cSwE9PGVUwxYRFvrOVfEdtW2rGpQf46blrSRtTrc=-----END PGP PUBLIC KEY BLOCK----- This is a more conventional format used when sharing keys in email and other text-oriented mediums. This is used because binary data can't be transmitted as ASCII text. Furthermore, base64 is much more compact than other traditional methods of representing binary data as text such as base16 (i.e. hexadecimal). NOTE: The examples have been simplified for pedagogical reasons. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/623375",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/445394/"
]
} |
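You can verify that the armor is pure encoding, not encryption, by round-tripping it ( KEYID is a placeholder for your key):

$ gpg --export KEYID > key.bin
$ gpg --export --armor KEYID | gpg --dearmor | cmp - key.bin

cmp exiting silently shows that de-armoring the ASCII form gives back the byte-identical binary export.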
623,388 | This does what I expect (put --- when column2 changes value): $ (echo 'a,,b';echo 'b,,a';echo 'c,a,b') | perl -a '-F,' -pe 'BEGIN{$last="---\n";}{local$_=$F[1];if(($last)ne$_){print"---\n";$last=$_;}}'---a,,bb,,a---c,a,b This does not: $ (echo 'a b';echo 'b a';echo 'c a b') | perl -a '-F ' -pe 'BEGIN{$last="---\n";}{local$_=$F[1];if(($last)ne$_){print"---\n";$last=$_;}}'---a bb ac a b | A likely explanation, going by what perlrun documents for -F : the pattern given to -Fpattern may not contain literal whitespace. If it is not already delimited by // , "" or '' , perl wraps it in single quotes before compiling it, so your '-F ' (a lone space) ends up as a split pattern that never matches the input. $F[1] is then undefined on every line; after the first line, $last is also undefined, the two never differ again, and you only see the initial --- . The exact misbehaviour varies between perl versions, but the fix is the same: write the separator without literal whitespace, e.g. -F'\s+' (or -F'\s' ), and the column-change detection behaves exactly as it does with -F, . | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/623388",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/2972/"
]
} |
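A quick check of that fix (same script, only the separator spelled without a literal space):

$ printf 'a b\nb a\nc a b\n' | perl -a -F'\s+' -pe 'BEGIN{$last="---\n";}{local$_=$F[1];if(($last)ne$_){print"---\n";$last=$_;}}'
---
a b
---
b a
c a b

which matches the behaviour of the comma-separated version.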
623,409 | I'm trying to find a string (Name2) in a file and match the version (*abc-xyz-0-196) in the next line and update the ID line with an incremented value. How can this be done? The file looks like this: Name1: version: *abc-xyz-0-197 ID: 1 primary_data: somedata: Name2: version: *abc-xyz-0-196 ID: 3 primary_data: somedata: Name3: version: *abc-xyz-0-192 ID: 6 primary_data: somedata: Output has to be: (for Name2 & version: *abc-xyz-0-196) Name1: version: *abc-xyz-0-197 ID: 1 primary_data: somedata: Name2: version: *abc-xyz-0-196 ID: **4** primary_data: somedata: Name3: version: *abc-xyz-0-192 ID: 6 primary_data: somedata: | This is straightforward in awk if you track a little state: remember that the wanted name was seen, drop that state if the following version: line does not match, and otherwise increment the number on the next ID: line. A compact version (the variable names name , ver and hit are just illustrative): awk -v name='Name2:' -v ver='*abc-xyz-0-196' '$1 == name {hit=1} hit && $1 == "version:" && $2 != ver {hit=0} hit && $1 == "ID:" {sub(/[0-9]+$/, $2 + 1); hit=0} {print}' file The sub() rewrites only the trailing number, so the original indentation is preserved, and only the ID belonging to the matching name/version pair changes (here 3 becomes 4). Redirect the output to a new file and move it back over the original to edit in place. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/623409",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/445422/"
]
} |
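Laid out readably, with a run against the sample (the file name file.yml is illustrative):

$ awk -v name='Name2:' -v ver='*abc-xyz-0-196' '
    $1 == name                            { hit = 1 }            # wanted name seen
    hit && $1 == "version:" && $2 != ver  { hit = 0 }            # wrong version: forget it
    hit && $1 == "ID:"                    { sub(/[0-9]+$/, $2 + 1); hit = 0 }  # bump the number
    { print }' file.yml

Only the ID under the matching name/version pair becomes ID: 4 ; every other line passes through untouched.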
623,488 | I have multiple results in the format 10.3.2.1.in-addr.arpa name hostname I want to remove the middle part ".in-addr.arpa" and reverse the IP address to, for example, 1.2.3.10. Is that possible with a simple bash one liner? Thanks in advance; I've been trying to do this for hours and am stuck. | With perl : perl -pe 's/(\d+)\.(\d+)\.(\d+)\.(\d+)\.in-addr\.arpa/$4.$3.$2.$1/g' < input Which is a bit less verbose and a bit more legible than the standard sed equivalent: d='\([0-9]\{1,\}\)'LC_ALL=C sed "s/$d\.$d\.$d\.$d\.in-addr\.arpa/\4.\3.\2.\1/g" < input Those replace all occurrences of <d1>.<d2>.<d3>.<d4>.in-addr.arpa with <d4>.<d3>.<d2>.<d1> (where <dX> is any sequence of one or more decimal digits) leaving everything else untouched. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/623488",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/445527/"
]
} |
623,516 | I've grown fond of scrot as a simple screenshot utility, but it lacks one thing I would greatly appreciate--a way to copy your capture and have it in your clipboard automatically. I've added a line to .bash_aliases that automatically puts it in the folder I desire, and also have it always run in selection mode, but there seems to be no flag for copying the result after capturing. Is there any way to do this? .bash_aliases entry: alias scrot='scrot -s ~/Pictures/%b%d::%H%M%S.png' | Create a script file that you'll be able to easily execute: #!/bin/shscrot -e 'xclip -selection clipboard -t image/png -i $f' -t will instruct xclip it will be dealing with an image file; -i will tell xclip where the file is; $f is scrot 's variable for the recent screenshot file saved. You'll need xclip installed, but it should be readily available on your distro. I'm using KDE Plasma now and had to resort to this approach in order to achieve Cinnamon's ready "screenshot to clipboard" hotkey. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/623516",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/445547/"
]
} |
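To confirm the capture actually reached the clipboard (an X11 session is assumed here):

$ xclip -selection clipboard -t TARGETS -o      # should list image/png among the targets
$ xclip -selection clipboard -t image/png -o > /tmp/check.png

The second command writes the clipboard image back out so you can open it and verify it is the screenshot you just took.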
623,535 | I have a file called ~/.config/systemd/user/ssh-agent.service with contents: [Unit]Description=SSH key agent[Service]Type=simpleEnvironment=SSH_AUTH_SOCK=%t/ssh-agent.socketEnvironment=SYSTEMD_LOG_LEVEL=debugExecStart=/usr/bin/ssh-agent -D -a $SSH_AUTH_SOCK[Install]WantedBy=default.target I can start the service and use the ssh-agent, but if I stop the service like systemctl --user stop ssh-agent , the service fails: Dec 08 17:13:11 box systemd[571]: Stopping SSH key agent...Dec 08 17:13:11 box systemd[571]: ssh-agent.service: Main process exited, code=exited, status=2/INVALIDARGUMENTDec 08 17:13:11 box systemd[571]: ssh-agent.service: Failed with result 'exit-code'.Dec 08 17:13:11 box systemd[571]: Stopped SSH key agent. The unit is marked as being in the "failed" state, I can't find out why. Setting KillMode=none makes the error disappear, but of course the agent is not killed. Any idea on what might be causing the failure? I've also tried setting Type=forking and removing the -D from the ssh-agent invocation. | When systemd stops the unit it sends SIGTERM, and ssh-agent 's signal handler cleans up its socket and then exits with status 2 rather than 0 - which is exactly the status=2/INVALIDARGUMENT in your log (INVALIDARGUMENT is just systemd's generic name for exit code 2). The agent is shutting down perfectly normally; systemd merely treats any non-zero exit as a failure. Assuming that is what your OpenSSH build does, the clean fix is to declare exit status 2 a success by adding SuccessExitStatus=2 to the [Service] section. KillMode=none only hides the symptom because systemd then never signals the process at all, which is also why the agent was left running. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/623535",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/282463/"
]
} |
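A minimal sketch of the amended unit, assuming the exit-status-2-on-SIGTERM diagnosis above applies to your OpenSSH build:

[Unit]
Description=SSH key agent
[Service]
Type=simple
Environment=SSH_AUTH_SOCK=%t/ssh-agent.socket
ExecStart=/usr/bin/ssh-agent -D -a $SSH_AUTH_SOCK
SuccessExitStatus=2
[Install]
WantedBy=default.target

After systemctl --user daemon-reload and a restart, stopping the service should leave the unit inactive (dead) instead of failed.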
623,544 | I'd like to create a temporary file that will be read by multiple scripts after its creation, but I don't have an easy way of monitoring when the last script finishes reading this temporary file to delete it (it may be a different script each time). I'd like to know if there's a standard way of solving this problem with command-line tools that will autodelete this file when it passes a specific interval of time without being read by any program, is it possible? Or the only way to solve this problem would be to figure out a way of knowing when the last script finishes reading this file for deleting it? | There is no standard tool that watches who has a file open for reading and unlinks it after the last reader; the usual approaches are either explicit cooperation between the scripts (e.g. a reader counter, with the last reader deleting the file) or time-based cleanup. For the time-based variant you can lean on the file's access time: a cron job or systemd timer that runs find on that one path with -amin +N -delete removes it once no program has read it for N minutes. One caveat: filesystems mounted with relatime (the Linux default) or noatime do not update atime on every read, so test the behaviour on your system, or mount the file's directory with strictatime . If approximate timing is acceptable this is by far the simplest answer; if it must be exact, the reader-counting protocol is the only reliable one. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/623544",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/444310/"
]
} |
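A concrete sketch of the time-based variant (the path and intervals are illustrative): a crontab entry such as

*/5 * * * * find /tmp/shared-data.tmp -amin +10 -delete 2>/dev/null

checks every five minutes and removes the file once nothing has read it for ten minutes. Verify first that reads really update the access time on your mount: compare stat -c '%x' /tmp/shared-data.tmp before and after a cat of the file.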
623,558 | When I run bluetoothctl info it shows me information about the COWIN E9 headset I have connected, Device REDACTED (public) Name: COWIN E9 Alias: COWIN E9 Class: 0x00240418 Icon: audio-card Paired: yes Trusted: yes Blocked: no Connected: yes LegacyPairing: no UUID: Vendor specific (REDACTED) UUID: Serial Port (REDACTED) UUID: Headset (REDACTED) UUID: Audio Sink (REDACTED) UUID: A/V Remote Control Target (REDACTED) UUID: Advanced Audio Distribu.. (REDACTED) UUID: A/V Remote Control (REDACTED) UUID: Handsfree (REDACTED) UUID: PnP Information (REDACTED) UUID: Generic Access Profile (REDACTED) UUID: Generic Attribute Profile (REDACTED) UUID: Battery Service (REDACTED) UUID: Google (REDACTED) Modalias: bluetooth:REDACTED Why does this headset have a UUID: Google , what are these UUIDs for? Why does one headset need so many unique identifiers? Are these provided by the bluetooth controller on the headset? | Those UUIDs do not identify the headset at all - they identify the services/profiles it advertises, and bluetoothctl is listing one line per service found in the device's SDP/GATT records. Serial Port, Headset, Handsfree, Audio Sink plus Advanced Audio Distribution (A2DP), A/V Remote Control (AVRCP), PnP Information, Generic Access/Attribute Profile and Battery Service are all standard Bluetooth profiles with well-known UUIDs; a device needs one entry per service it implements, which is why one headset lists so many. The Vendor specific entry and the Google one are custom 128-bit UUIDs registered by a manufacturer for proprietary features (the Google one is most likely Fast Pair support). So yes, they come from the headset itself - its controller/firmware publishes them - but they are service identifiers shared by every device implementing the same profile, not unique identifiers of your particular unit. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/623558",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/3285/"
]
} |
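The "so many similar-looking UUIDs" part is easy to see: a standard 16-bit service UUID such as 0x110B (Audio Sink, i.e. A2DP) is expanded into the 128-bit Bluetooth Base UUID 00000000-0000-1000-8000-00805F9B34FB, so in a shell:

$ printf '0000%04x-0000-1000-8000-00805f9b34fb\n' 0x110b
0000110b-0000-1000-8000-00805f9b34fb

All the standard profiles in the list differ only in those four hex digits; only the vendor-specific entries use a fully custom 128-bit value.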
623,874 | I have a bunch of files and for each row there is a unique value I'm trying to obscure with a hash. However there are 3M rows across the files and a rough calculation of the time needed to complete the process is hilariously long at 32 days. for y in files*; do cat $y | while read z; do KEY=$(echo $z | awk '{ print $1 }' | tr -d '"') HASH=$(echo $KEY | sha1sum | awk '{ print $1 }') sed -i -e "s/$KEY/$HASH/g" $y donedone To improve this process's speed I assume I'm going to have to introduce some concurrency. A hasty attempt based off https://unix.stackexchange.com/a/216475 led me to N=4(for y in gta*; do cat $y | while read z; do (i=i%N)); ((i++==0)); wait ((GTA=$(echo $z | awk '{ print $1 }' | tr -d '"') HASH=$(echo $GTA | sha1sum | awk '{ print $1 }') sed -i -e "s/$KEY/$HASH/g) & donedone) Which performs no better. Example input "2000000000" : ["200000", "2000000000"]"2000000001" : ["200000", "2000000001"] Example output "e8bb6adbb44a2f4c795da6986c8f008d05938fac" : ["200000", "e8bb6adbb44a2f4c795da6986c8f008d05938fac"]"aaac41fe0491d5855591b849453a58c206d424df" : ["200000", "aaac41fe0491d5855591b849453a58c206d424df"] Perhaps I should read the lines concurrently then perform the hash-replace on each line? | FWIW I think this is the fastest way you could do it in a shell script: $ cat tst.sh#!/usr/bin/env bashfor file in "$@"; do while IFS='"' read -ra a; do sha=$(printf '%s' "${a[1]}" | sha1sum) sha="${sha%% *}" printf '%s"%s"%s"%s"%s"%s"%s\n' "${a[0]}" "$sha" "${a[2]}" "${a[3]}" "${a[4]}" "$sha" "${a[6]}" done < "$file"done $ ./tst.sh file "e8bb6adbb44a2f4c795da6986c8f008d05938fac" : ["200000", "e8bb6adbb44a2f4c795da6986c8f008d05938fac"]"aaac41fe0491d5855591b849453a58c206d424df" : ["200000", "aaac41fe0491d5855591b849453a58c206d424df"] but as I mentioned in the comments you'd be better off for speed of execution using a tool with sha1sum functionality built in, e.g. python. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/623874",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/106511/"
]
} |
623,881 | We have 100+ GB files on a Linux machine, and while trying to perform gzip using below command, gzip is taking minimum 1-2 hours to complete: gzip file.txt Is there a way we can make gzip to run fast with the same level of compression happening when we use gzip? CPU: Intel(R) Core(TM) i3-2350M CPU @2.30 GHz | If you are using gzip, you use mostly one processor core (well, some parts of the task, like reading and writing data are kernel tasks and kernel will use another core). Have a look at some multicore-capable gzip replacements, e.g. MiGz ( https://github.com/linkedin/migz ) or Pigz ( https://zlib.net/pigz/ , for some longer explanation see also e.g. https://medium.com/ngs-sh/pigz-a-faster-alternative-to-gzip-for-big-files-d5909e46d659 ). | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/623881",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/379007/"
]
} |
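A quick sketch of the drop-in usage (the thread count is illustrative):

$ pigz -p "$(nproc)" file.txt

produces file.txt.gz at the same default compression level (-6) as gzip, and the result is an ordinary gzip stream, so gzip -d / zcat on any machine can decompress it. Note that decompression itself is only mildly parallel, as the gzip format is inherently serial to inflate.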
623,895 | What I want is to be able to consistently tell apart multiple USB sound cards, identify them by the USB port they're plugged in and use that knowledge to play a sound on a specific sound card in my Java program. So far I'm stuck on the first part - identify sound cards by the USB port. First thing I did is to follow advice in this question and use the Udev rules to assign names to sound cards with the script from this site These are the Udev rules I added KERNEL=="controlC[0-9]*", DRIVERS=="usb", PROGRAM="/usr/bin/alsa_name.pl %k", NAME="snd/%c{1}"KERNEL=="hwC[D0-9]*", DRIVERS=="usb", PROGRAM="/usr/bin/alsa_name.pl %k", NAME="snd/%c{1}"KERNEL=="midiC[D0-9]*", DRIVERS=="usb", PROGRAM="/usr/bin/alsa_name.pl %k", NAME="snd/%c{1}"KERNEL=="pcmC[D0-9cp]*", DRIVERS=="usb", PROGRAM="/usr/bin/alsa_name.pl %k", NAME="snd/%c{1}" and these are the contents of alsa_name.pl use strict;use warnings;#my $alsaname = $ARGV[0]; #udev called us with this argument (%k)my $physdevpath = $ENV{PHYSDEVPATH}; #udev put this in our environmentmy $alsanum = "cucu";#you can find the physdevpath of a device with "udevinfo -a -p $(udevinfo -q path -n /dev/snd/pcmC0D0c)"##$physdevpath =~ s/.*\/([^\/]*)/$1/; #eliminate until last slash (/)$physdevpath =~ s/([^:]*):.*/$1/; #eliminate from colon (:) to end_of_line#if($physdevpath eq "1-1.3.1"){ $alsanum="11";}if($physdevpath eq "1-1.3.2"){ $alsanum="12";}if($physdevpath eq "1-1.3.3"){ $alsanum="13";}if($physdevpath eq "1-1.3.4"){ $alsanum="14";}#if($alsanum ne "cucu"){ $alsaname=~ s/(.*)C([0-9]+)(.*)/$1C$alsanum$3/;}#print $alsaname;exit 0; Now, when I plug my USB sound card and look at /var/log/syslog I see that it doesn't exactly work: NAME="snd/%c{1}" ignored, kernel device nodes cannot be renamed; please fix it in /etc/udev/rules.d/99-com.rules:16 I tried to modify my Udev rules based on this repository which provides an Udev rule: SUBSYSTEM!="sound", GOTO="my_usb_audio_end"ACTION!="add", GOTO="my_usb_audio_end"DEVPATH=="/devices/platform/soc/3f980000.usb/usb1/1-1/1-1.2/1-1.2:1.0/sound/card?", ATTR{id}="SPEAKER"DEVPATH=="/devices/platform/soc/3f980000.usb/usb1/1-1/1-1.3/1-1.3:1.0/sound/card?", ATTR{id}="HEADSET"LABEL="my_usb_audio_end" So I used my previous script and modified my rule: KERNEL=="pcmC[D0-9cp]*", DRIVERS=="usb", PROGRAM="/usr/bin/alsa_name.pl %k", ATTR{id}="snd/%c{1} but now syslog tells me: error opening ATTR{some_very_long_id} for writing: Permission denied I also tried this answer and did KERNEL=="pcmC[D0-9cp]*", DRIVERS=="usb", PROGRAM="/usr/bin/alsa_name.pl %k", SYMLINK+="snd/%c{1} I don't see any errors in syslog , which I suppose is good, but when I list playback devices with aplay -l , all I see is card 1: Device [USB Audio Device], device 0: USB Audio [USB Audio]Subdevices: 1/1Subdevice #0: subdevice #0 and nothing changes, regardless of which USB port I plug it in. I also see no useful/distinguishable info in my Java program using AudioSystem.getMixerInfo() Is my approach correct and I'm just missing some detail, or is this a completely wrong direction? | Your instinct (derive a stable ALSA card id from the USB port) is right, but the rules are aimed at the wrong devices. First, NAME= is a non-starter: kernel device nodes such as /dev/snd/pcmC0D0c cannot be renamed, which is exactly what the syslog message says. Second, ATTR{id} is only writable on the sound card device itself ( /sys/class/sound/card?/id ), not on its pcm*/control* children - writing it there is what produced the permission denied. Third, a SYMLINK+= only adds an extra path under /dev ; ALSA and aplay -l never consult it, which is why nothing appeared to change. So match the card device and key on the physical port, along the lines of (hedged - the KERNELS value depends on your hub topology; find it with udevadm info -ap /sys/class/sound/card1 ): SUBSYSTEM=="sound", KERNEL=="card?", KERNELS=="1-1.3.2", ATTR{id}="PORT12" with one rule per port. After replugging, the card should appear under that id in /proc/asound/ and in aplay -l , and your Java code can then pick the mixer whose name contains the id string. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/623895",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/445863/"
]
} |
623,969 | How do I replace the blank lines in tab delimited text file with the content of the row above on a Linux machine? For example: 101 abc group1765 efg group2345 hij group4456 gfd group9762 ert group7554 fgt group11 Expected Output: 101 abc group1765 efg group2345 hij group4345 hij group4456 gfd group9762 ert group7762 ert group7762 ert group7554 fgt group11 | In awk (note that this one will print any empty lines that come before the first non-empty one): $ awk '{ if(! NF){$0=last}else{last=$0;}}1' file101 abc group1765 efg group2 345 hij group4 345 hij group4 456 gfd group9 762 ert group7 762 ert group7 762 ert group7 554 fgt group11 Explanation : NF holds the number of fields. If the line is empty, there are no fields so the variable will be 0 . if(! NF){$0=last} : if the number of fields is 0 (empty line), set the current line ( $0 ) to the value of the variable last . else{last=$0;} : if there are fields, so this line is not empty, set last to hold the contents of this line. 1 : the lone one at the end is an awk trick: when something evaluates to true (1 or any other integer greater than 0 is always true, since 0 is false) awk will print the current line. So that 1 is equivalent to print $0 . $ awk '! NF ? $0=last : last=$0;' file101 abc group1765 efg group2 345 hij group4 345 hij group4 456 gfd group9 762 ert group7 762 ert group7 762 ert group7 554 fgt group11 Explanation This is the same idea as above, but written in a more concise way. We are using the ternary operator . Since one of the two conditions will always be true (either NF is true or it is not true, so the ternary operator will always return true), both outcomes result in the line being printed (except for cases where the line is empty and no non-empty lines have been seen or if a line consists of nothing but 0 ). However, if NF is not set, we set $0 to last and if it is set, we set last to $0 . The result is the output we want. Since the above will not print lines that are just 0 , you can use this instead if that is a problem for you: awk '{! NF ? $0=last : last=$0};1' file | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/623969",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/312065/"
]
} |
624,086 | The following script searches files with the suffix .tex in a directory (i.e. TeX files), for the string \RequireLuaTeX , i.e. LuaTeX files in that directory, and creates a Bash array from the results. It then runs the command latexmk on the files in that array. I'd like to exclude a list of user defined files from this array, possibly declared as an array thus: excludedfiles=(foo.tex bar.tex baz.tex) I'm writing to solicit suggestions for clean ways to do this. I quite like the approach of putting everything in an array. For one thing, it makes it easy to list the files before running commands on them. But I'm willing to consider other approaches. #!/bin/bash ## Get LuaTeX filenames mapfile -t -d "" filenames < <(grep -Z -rL "\RequireLuaTeX" *.tex)## Run `latexmk` on PDFTeX files.for filename in "${filenames[@]}"do base="${filename%.*}" rm -f "$base".pdf latexmk -pdf -shell-escape -interaction=nonstopmode "$base".texdone BACKGROUND AND COMMENTS: TeX users may be confused by my question. So I'm explaining here what I was trying to do, and how I miswrote the question. I'm not changing it, because the change would invalidate the existing answers and create confusion. I have a collection of LaTeX files. The older ones use PDFLaTeX. The newer ones mostly use LuaLaTeX. This question is about the PDFLaTeX ones. What I'm trying to do in my script is a) Create a list of PDFLaTeX files. My LuaLaTeX files contain the string "\RequireLuaTeX" in them. Therefore, files which do not contain that string are PDFLaTeX files. So, I am trying to create a list of LaTeX files which do not contain the string "\RequireLuaTeX" in them. b) Run PDFLaTeX on them using latexmk . My question has the following error. I wrote: The following script searches files with the suffix .tex in a directory (i.e. TeX files), for the string \RequireLuaTeX , i.e. LuaTeX files in that directory, and creates a Bash array from the results. In fact I want files which do not contain that string, because as explained above, those correspond to my PDFLaTeX files. | The -L flag to Grep lists files not matching a pattern. You want -l instead. Also, Grep needs to see double-backslash to match a single backslash. Since you are in Bash, let us get hold of some useful constructs. #!/bin/bash -shopt -s globstar extglobmapfile -t -d "" filenames < <(grep -Zl '\\RequireLuaTeX' ./**/!(foo|bar|baz).tex)rm -f "${filenames[@]/%.tex/.pdf}"latexmk -pdf -shell-escape -interaction=nonstopmode "${filenames[@]}" **/!(foo|bar|baz).tex expands to all files in the current directory tree that end in .tex but whose basename is not foo.tex , bar.tex nor baz.tex . Both globstar and extglob are required for this operation. "${filenames[@]/%.tex/.pdf}" expands to all elements of the array, substituting each trailing .tex by .pdf . Since Latexmk can be given multiple files as arguments, we could skip for-loops. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/624086",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/4671/"
]
} |
624,581 | Trying to make a system update to upgrade Tensorflow: sudo pacman -Syu I am asked: :: python-gast03 and python-gast are in conflict. Remove python-gast? [y/N] I say No: error: unresolvable package conflicts detected error: failed to prepare transaction (conflicting dependencies) :: python-gast03 and python-gast are in conflict I then try to remove the oldest of the packages: sudo pacman -R python-gast03 and I get: error: target not found: python-gast03 So, where does this conflict come from if the oldest package is not even present? | I had the same issue when updating my system. sudo pacman -Syu I tried removing python-gast. sudo pacman -R python-gast I was told that python-tensorflow-opt-cuda was dependent on that package. So, I updated it. sudo pacman -S python-tensorflow-opt-cuda It replaced gast with gast03 at that point. Then, I could do a system update. sudo pacman -Syu Everything worked as expected after that. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/624581",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/117921/"
]
} |
624,757 | When we run sysctl -p on our RHEL 7.2 server1 we get sysctl -pfs.file-max = 500000vm.swappiness = 10vm.vfs_cache_pressure = 50sysctl: cannot stat /proc/sys/pcie_aspm: No such file or directorynet.core.somaxconn = 1024# ls /proc/sys/pcie_aspmls: cannot access /proc/sys/pcie_aspm: No such file or directory but when we run sysctl -p on another server (server2), we get good results without error: sysctl -pfs.file-max = 500000vm.swappiness = 10vm.vfs_cache_pressure = 50net.core.somaxconn = 1024 The file /proc/sys/pcie_aspm does not exist on this server (server2) either, so why does sysctl -p fail on server1? | As revealed in the comments, there’s a pcie_aspm=off line in one of the files which sysctl -p reads. This causes sysctl to attempt to write to /proc/sys/pcie_aspm ; if that doesn’t exist (and it won’t, it’s not a valid sysctl entry , it’s a kernel boot parameter ), you’ll get the error shown in your question. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/624757",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/237298/"
]
} |
624,855 | Given a file that contains multiple lines, some with = at the very end. I wish to join each line ending with = with the next line. Any other newlines should remain untouched. I have been unable to do this, because sed seems to operate on a line-by-line basis and thus always "adds" a newline back. Example input: AppleBanana milkshakeCherry =Pie Should become: AppleBanana milkshakeCherry Pie I am totally open to using tools other than sed / awk . | Using awk : $ awk '{ORS = sub(/=$/,"") ? "" : "\n"} 1' fileAppleBanana milkshakeCherry Pie Using a conditional expression, we set ORS (output record separator, default: newline) to either the empty string or the newline. sub() is true when a replacement has been done at the end of the line, removing an existing = , otherwise it's false. In the first case we set ORS to "" , or else to "\n" . 1 means print the line (using the ORS value selected for every line). Alternatively, we could use GNU sed and zero separation, assuming the file is not huge and small enough for the memory: sed -z 's/=\n//g' file sed reads the whole file as one line, and globally replaces =\n with nothing. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/624855",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/153478/"
]
} |
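If the file is too large to slurp, a label loop in GNU sed streams it instead (a sketch):

$ sed ':a; /=$/ { N; s/=\n//; ba }' file

Whenever the pattern space ends in = , it appends the next line, deletes the =<newline> pair and loops, so runs of consecutive continued lines collapse correctly; a trailing = on the very last line is left as-is, since there is nothing left to join.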
624,875 | Bash and Zsh's HEREDOC seems to act like a file, instead of string, and if I hope to do something like foo() { ruby << 'EOF' 3.times do puts "Ruby is getting the argument #{ARGV[0]}" endEOF } is there a way to pass in an argument to the Ruby program? It will be best not to interpolate the $1 into the Ruby code, so that's why I am using 'EOF' instead of EOF , as interpolating into the Ruby code can be messy. There is one way to use the HEREDOC as a string, by the following method: foo() { ruby -e "$(cat << 'EOF' 3.times do puts "Ruby is getting the argument #{ARGV[0]}" endEOF)" $1 } and it works (although a little bit hacky). But is there a way then to use the HEREDOC's usual way of treating it as a file and be able to supply an argument to Ruby? | On systems with /dev/fd/ n , you can always do: foo() { ruby /dev/fd/3 "$@" 3<< 'EOF' 3.times do puts "Ruby is getting the argument #{ARGV[0]}" endEOF} Here using a fd above 2 so your ruby script can still use stdin/stdout/stderr unaffected. If your system is one of the rare few that still don't support /dev/fd/ n , you can do: foo() { ruby - "$@" << 'EOF' 3.times do puts "Ruby is getting the argument #{ARGV[0]}" endEOF} (where ruby itself interprets - as meaning stdin ). But that means that the ruby inline script's stdin is now that heredoc, so that script won't be able to query the user via the original stdin unless you provide that stream some other way. Heredoc is a feature that came with the Bourne shell in the late 70s. It's a redirection operator, it's meant to redirect some file descriptor ( 0 aka stdin by default) to some fixed content. Originally, that was implemented via a temporary file, though some shells, including bash 5.1+ use (sometimes) pipes instead. Also note that in ruby -e code -- arbitrary-argument , like in sed -e code or perl -e code but unlike in sh -c code , python -c code , you do need that -- to mark the end of options as otherwise if arbitrary-argument started with - , it would be treated as an option to ruby . We don't need it in ruby /dev/fd/3 arbitrary-argument nor ruby - arbitrary-argument as options are not expected after that non-option argument that is - or /dev/fd/3 . | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/624875",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/19342/"
]
} |
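The /dev/fd trick is not specific to ruby; the same shape works for any interpreter that can take a script path, sketched here with sh itself:

foo() {
  sh /dev/fd/3 "$@" 3<< 'EOF'
printf 'first argument: %s\n' "$1"
EOF
}

$ foo hello
first argument: hello

stdin, stdout and stderr all stay untouched, which is the main point of using fd 3.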
624,912 | lvs is a good command to show us the file system size, as lvs LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert lv_root vgW -wi-ao---- 100.00g lv_swap vgW -wi-ao---- 7.72g lv_var vgW -wi-ao---- 100.87g I am trying to capture the root file system size and until now I have created the following syntax lvs | awk '$1 ~ /root/' | awk '{print $NF}' | sed s'/\./ /g' | awk '{print $1}' and it prints ( expected output is 100 ) 100 but I want to improve my syntax to be better any suggestions ? | lvs shows information about LVM volumes. It can give you the size of the block device that contains the root filesystem (not the size of that filesystem) if that filesystem happens to be on a LVM volume and if you know which one (and no, the name of the logical volume does not have to have "root" in it). size=$( lvs --unit b --nosuffix --no-headings --config 'log{prefix=""}' -o size vgW/lv_root) To find out the size of the block device that contains the root filesystem, whether it's a LVM volume or disk partition or NBD/loop/md... device (but note that the root file system doesn't have to backed by a block device like for network filesystems, zfs, btrfs...), on Linux, I'd use lsblk instead: size=$( lsblk -Jbo size,mountpoint | jq '.blockdevices[]|select(.mountpoint=="/").size') To find the size of the / filesystem, you could use df (assuming GNU df ) or findmnt : size=$(findmnt -bno size /) size=$(df -B1 --output=size / | awk 'NR==2{print $1}') Those give you the size in bytes, that is, with the most precision. If you want the size rounded down to an integer number of gigabytes, just divide by 1000000000: gigabytes=$((size / 1000000000)) Or to get it in gibibytes (note that lvs uses 1024 based suffixes, where g means gibibyte, not gigabyte), use: gibibytes=$((size / 1024 / 1024 / 1024)) Though all commands above also allow specifying a different unit (but beware of gigabyte vs gibibyte and that most will give you floating point numbers and may do the rounding differently and use different characters for the decimal radix depending on the locale). | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/624912",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/237298/"
]
} |
624,922 | When I use ctrl + alt + F7 and switch to the X11 server, I can see the desktop UI. When I switch to ctrl + alt +[ F1 - F6 ], I can see the virtual terminal. Now how do I access the virtual terminal 7 where I can input commands, even though X is running in parallel? | You can’t, your X server is running there. It takes over the virtual terminal. Your terminals 1 through 6 are running a getty variant, and that’s what starts the login process and then the shell running in the virtual terminal. But you can’t have a getty -based session and a display server in the same virtual terminal. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/624922",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/446881/"
]
} |
624,941 | I essentially know what the problem is, in that I need to use \[...\] as a way to escape (non-printing?) characters, and allow bash to correctly calculate the width of my prompt. However, I cannot iron out all the problems and have been using trial and error as I don't quite understand where exactly I need all my \[...\] placed. STARTCOLOR='\[\e[0;31m\]'ENDCOLOR='\[\e[0m\]'BACKGROUND='\[\e[47m\]'export PS1="$STARTCOLOR$BACKGROUND\u@\h \[\t\]$ENDCOLOR\w>\$?\$\]" That is what I am using. The only remaining issue is that if I scroll through previous commands with the arrow keys for too long, the \w>\$?\$ part of my PS1 disappears. It also happens if I move back with the arrow keys after going forward through previous commands. | The problem is that you are using the non-printing markers for something that gets printed out ( \t - the timestamp) STARTCOLOR='\[\e[0;31m\]'ENDCOLOR='\[\e[0m\]'BACKGROUND='\[\e[47m\]'export PS1="$STARTCOLOR$BACKGROUND\u@\h \t$ENDCOLOR\w>\$?\$ " The \[ ... \] is only for surrounding non-printing character sequences, such as colour codes. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/624941",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/259020/"
]
} |
625,063 | I sometimes find that man7.org has man pages that are not available locally on my Linux distribution. An example is veth(4) that provides information on veth network devices. A quick inspection using apropos returns no results, although the ip (iproute2) is installed: parallels@debian-gnu-linux-vm:~$ apropos vethveth: nothing appropriate. How can I obtain these very useful man pages on network devices (e.g. veth(4) )? More generally, how can I obtain the complete man7.org database? parallels@debian-gnu-linux-vm:~$ cat /etc/os-release PRETTY_NAME="Debian GNU/Linux 9 (stretch)"NAME="Debian GNU/Linux"VERSION_ID="9"VERSION="9 (stretch)"VERSION_CODENAME=stretchID=debianHOME_URL="https://www.debian.org/"SUPPORT_URL="https://www.debian.org/support"BUG_REPORT_URL="https://bugs.debian.org/" | I’m not aware of a single source for all the man pages on man7.org, but there is a list of all the source projects used to build the web site; if you install all the corresponding packages, you’ll have all the relevant man pages. In particular, you should install the manpages and manpages-dev packages; these contain the man pages maintained as part of the man-pages project . Note that the web site reflects the current state (or latest release) of each source project; this will be newer in many cases than the versions available in Debian 9 or 10. In particular, you won’t find the veth man page in Debian 9: it was added in version 4.14 of the man-pages project, but Debian 9 has version 4.10. Version 4.16 is available in Stretch backports. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/625063",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/128739/"
]
} |
625,128 | i=0while read M; do ((cd $DEST/cfg/puppet/modules/$M || exit 1 [ -d .git ] && echo -n "$M: " && git pull) 2>&1 | prep $(printf %04d: $i) puppet/modules/$M) & i=$[i+1] [ $[ i % 20 ] = 0 ] && waitdone < $(dirname "$0")/modules-puppet.txt Can someone please explain what the line [ $[ i % 20 ] = 0 ] && wait does in the above bash snippet? | The code spawns a number of background tasks in a loop. Each of these tasks run commands related to git and puppet . The jobs may be spawned very fast and to not overwhelm the system, the code only runs 20 of them before waiting for all the currently running background tasks to finish. It is the call to wait that makes the script wait for all background tasks to finish before continuing, spawning another 20 jobs. The arithmetic test that precedes the call to wait will be true for each value of $i evenly divisible by 20, i.e. for $i = 20 , $i = 40 , etc. The syntax used for the arithmetic expansions, $[ ... ] , is obsolete bash syntax that nowadays would have been written $(( ... )) (which is portable). The % operator is the ordinary modulus operator. Apart from the use of the obsolete syntax, the shell also has a possible issue with quoting. It's the variable expansions $DEST and $M , and also $i , that lack quoting, as does the two command substitutions. If any of these contain or generate characters present in $IFS (space, tab, newline, by default), you may expect the script to fail or at least to misbehave. The code also lacks a final wait after the loop, to properly wait for any of the last few jobs started by the loop. This would not be needed if it can be guaranteed that the loop will always run n*20 times (for some whole number n ). | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/625128",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/2298/"
]
} |
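On bash 4.3 or newer there is a smoother throttle than the batch-of-20-then-wait pattern: wait -n returns as soon as any one background job finishes. A sketch (do_module is a hypothetical stand-in for the git/puppet work):

while read -r M; do
  # block while 20 jobs are already running; resume when any one exits
  while [ "$(jobs -pr | wc -l)" -ge 20 ]; do wait -n; done
  do_module "$M" &
done < modules.txt
wait   # also covers the last partial batch, which the original script misses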
625,213 | How do I change to a near-identical path with a different low-level parent? If you’re working in for instance ~/foobar/ foo /data/images/2020/01/14/0001/ and need to get to the same path in bar instead of foo , how can you get there without typing out cd ~/foobar/ bar /data/images/2020/01/14/0001/ ? Surely there’s some elegant and/or kludgy solution. | In some shells, e.g. ksh and zsh , doing cd word1 word2 would change to a directory given by changing the first occurrence of word1 in the pathname of the current directory to word2 . For example, in the zsh shell: $ pwd/usr/local/sbin$ cd sbin bin/usr/local/bin$ In other shells that support the non-standard ${variable/pattern/replacement} parameter substitution originally found in ksh93 , you may use ${PWD/word1/word2} to create the pathname of the directory to change into: $ pwd/usr/local/sbin$ cd "${PWD/sbin/bin}"$ pwd/usr/local/bin In those shells ( bash , for example), you could even create your own naive cd function to handle two arguments in the way that ksh and zsh does it, like so: cd () { if [ "$#" -eq 2 ] && [[ $1 != -* ]]; then command cd -- "${PWD/$1/$2}" && printf 'New wd: %s\n' "$PWD" else command cd "$@" fi} The [ "$#" -eq 2 ] detects when the special cd behavior should be triggered (when there are exactly two command line arguments), but we test with [[ $1 != -* ]] to not trigger the special behavior if you use an option with cd . Using command cd instead of cd inside the function avoids calling the function recursively. Testing that in bash : $ cd /usr/local/sbin$ cd sbin binNew wd: /usr/local/bin$ cd local ''New wd: /usr/bin$ cd bin sbinNew wd: /usr/sbin$ cd sbin local/sbinNew wd: /usr/local/sbin$ cd '*/' /New wd: /sbin Notice that the last command replaces using a pattern matching up to and including the last / ; the pattern must be quoted. To disallow patterns and to always treat the first argument as a word, use command cd "${PWD/"$1"/$2}" in the function (notice the quoting of $1 ). To additionally force the replacement to only affect a complete directory name, use command cd "${PWD/"/$1/"/"/$2/"}" .Artificially inserting / before and after both arguments would avoid matching substrings of directory names, but would make it incompatible with the way this works in zsh and ksh and it would no longer allow you to make substitutions in the last part of the directory path as there is no / at the end (you can only provide a certain level of hand-holding before the extra "help" starts to be a hindrance). This would make cd foo bar work with the example that is in the question, though. You would otherwise have to make sure not to match foo in foobar in some other way, for example with cd foo/ bar/ . | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/625213",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/337540/"
]
} |
625,223 | I have Windows and Arch Linux installed in my system. I plan to increase the size of my root partition by shrinking the home partition using GParted live USB. But there is a swap partition between my root and home partition. I thought of shrinking the home partition and adding space to the swap and then shrink the swap then add it to the root since the unallocated space must be adjacent to the one being resized. I am not sure whether this is going to work. | It can work, but you do not need the two-step shuffle through swap, because GParted can move partitions as well as resize them. From the live USB, first right-click the swap partition and choose Swapoff (GParted cannot touch active swap). Then shrink home from its left edge, move the swap partition to the right into the freed space, and finally grow root into the gap that is now adjacent to it. Moving a partition, and shrinking one from its start, rewrites large amounts of data: it is slow and an interruption can destroy the filesystem, so back everything up first and keep the machine on reliable power. Two follow-ups to check afterwards: if swap ends up recreated instead of moved, it gets a new UUID, so /etc/fstab (and a resume= kernel parameter, if you hibernate) must be updated; and as long as you do not touch the Windows partitions, Windows itself is unaffected. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/625223",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/447164/"
]
} |
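If the swap partition does end up recreated along the way, the cleanup from the installed system is short (the device name is illustrative, and fstab conventions vary by distro):

# mkswap /dev/sda3        # prints the new UUID
# put that UUID into the swap line of /etc/fstab, e.g.  UUID=...  none  swap  sw  0  0
# swapon -a
# swapon --show           # confirm swap is active again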
625,416 | I'm using CentOS and I was reading about the substitution command, s/// , in the vi editor. I wanted to test some of the examples I saw: :%s/old/new/g Substitutes old with new throughout the file:.,$s/old/new/g Substitutes old with new from the current cursor position to the end of the file The above examples worked as expected for me, but the following containing the caret symbol ( ^ ) didn't work: :^,.s/old/new/g Substitutes old with new from the beginning of the file to the current cursor position I tried it, but it didn't work, so is caret not working or am I typing the command incorrectly? | In the vi editor, as well as in both ex and ed (as found on BSD systems), the ^ addresses the previous line. This means that the command ^d would delete the previous line, ^m. would swap this line with the previous, and that ^,.s/old/new/g would substitute all strings matching old with new on the previous line and on this line. The vim editor, being an extended re-implementation of the original vi and ex editors, commonly installed on Linux systems under the names vim , vi , and ex , does not have this way of addressing the previous line, and will respond with " E492: Not an editing command " if you try to use it. You may use - or -1 in its place: -,.s/old/new/g Using - or -1 in place of ^ also works in ed , ex and in vi on non-GNU systems. The POSIX standard says the following about this in relation to the ed editor: Historically, ed accepted the ^ character as an address, in which case it was identical to the <hyphen-minus> character. POSIX.1-2017 does not require or prohibit this behavior. There is a similar wording for the vi and ex editors ( ex is vi "in line editor mode"): Historically, ex and vi accepted the ^ character as both an address and as a flag offset for commands. In both cases it was identical to the - character. POSIX.1-2017 does not require or prohibit this behavior. Note that the text that you seem to be quoting says that ^,. addresses all lines from the top of the file to the current line. This is not correct. It only addresses the previous and the current line, and only does so in "historically accurate" implementations of vi (and ex and ed ). To address all lines from the start of the editing buffer to the current line, use 1,. . The ^ -instead-of- 1 typo could possibly come from thinking that "since $ is the end-of-line anchor in regular expressions, and also the address of the last line in the editing buffer, ^ , being the start-of-line anchor in regular expressions, must therefore (by symmetry) be the first line of the editing buffer". Just to provide another piece of trivia: The ^ address can also not be used in the GNU implementation of the ed editor. As in any other implementation of ed , - or -1 may still be used as an alternative. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/625416",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/447343/"
]
} |
625,529 | Basically, I am exploring Gentoo , and I would like to be able to perform some kind of graphics rendering (open a jpg, or draw basic shapes, or even set the colors of individual pixels if I have to). I do not have any desktop or window manager, and I would rather not need one, but that is exactly the question. What is the most lightweight/simplest way to render graphics? | You can display graphics using the Linux frame-buffer interface without X11 or Wayland at all. The fbida package includes the fbi image viewer, which can run directly on the virtual console. You can't get much more light-weight than that. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/625529",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/447457/"
]
} |
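For even quicker experiments you can write to the framebuffer device directly - run this on a text VT, and expect it to scribble over the whole screen:

# cat /dev/urandom > /dev/fb0

fills the display with noise (switching VTs with chvt or Ctrl+Alt+F-keys redraws the console). fbset -i , or /sys/class/graphics/fb0/virtual_size together with bits_per_pixel , gives the geometry you need to compute byte offsets for targeted per-pixel writes.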
625,568 | #!/bin/shTMP=$(cd Folder && ls)for name in $TMP; do if [[ "${name}" != *"a"* -a ${name} == *"b"* ]] ;then echo $name fidone I was trying to output the names that has no 'a' in it but 'b' instead .This is the error that I got : ./aufg7.sh: 6: [[: not found./aufg7.sh: 6: [[: not found./aufg7.sh: 6: [[: not found./aufg7.sh: 6: [[: not found./aufg7.sh: 6: [[: not found./aufg7.sh: 6: [[: not found What am I doing wrong here ? | Your main issue is that you're using a pattern match in [[ ... ]] in a script executed by /bin/sh which does not support these types of tests. See also the question entitled Shell script throws a not found error when run from a sh file. But if entered manually the commands work The other issue is that you are combining the output of ls into a single string before splitting it into words on spaces, tabs and newlines, and applying filename globbing to the generated words. You then loop over the result of this. Instead: #!/bin/shcd Folder || exit 1for name in *b*; do [ -e "$name" ] || [ -L "$name" ] || continue case $name in (*a*) ;; (*) printf '%s\n' "$name"; esacdone This loops over all files in the Folder directory that have the letter b in them, and then prints the ones of these that do not contain an a . The test for the a character is done using case ... esac . The *a* pattern matches an a character in the filename if there is one, in which case nothing happens. The printf statements is in the "default case" which gets executed if there is no a in the string. The tests with -e and -L are there to make sure that we catch the edge case where the loop pattern doesn't match anything. In that case, the pattern will remain unexpanded, so to make sure that we can distinguish between an unexpanded pattern and an actual existing file, we test with -e . The -L test is to catch the further edge case of a symbolic link that has the same name as the loop pattern, but that does not point to anywhere valid. The bash code below does not need this because it uses the nullglob shell option instead (unavailable in /bin/sh ). Running the loop over the expansion of the *b* pattern instead of whatever ls outputs has the benefit that this also is able to deal with filenames containing spaces, tabs, newlines and filename globbing characters (the filenames are never put into a single string that we then try to iterate over). In the bash shell, you could do the same with a [[ ... ]] pattern matching test, like you tried to do yourself: #!/bin/bashcd Folder || exit 1shopt -s nullglobfor name in *b*; do [[ $name != *a* ]] && printf '%s\n' "$name"done See also: Why does my shell script choke on whitespace or other special characters? When is double-quoting necessary? Why *not* parse `ls` (and what to do instead)? Why is printf better than echo? | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/625568",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/439619/"
]
} |
625,819 | Goals Replace the text "scripts: {" with the following string "scripts": { "watch": "tsc -w", in a json file. Attempts I created two variables for source and destination strings: First attempt SRC='"scripts": {'DST='"scripts": { "watch": "tsc -w",' And ran the following command: sed "s/$SRC/$DST/" foo.json This has failed. Second attempt This time I escaped double quotes for the source and destination strings: SRC="\"scripts\": {"DST="\"scripts\": { \"watch\": \"tsc -w\", \"dev\": \"nodemon dist/index.js\"," And ran the same command as above, which failed. Third and fourth attempts I tried the variables defined as above with the following command: sed 's/'"$SRC"'/'"$DST"'/' foo.json This has failed. All these attempts yielded the error unterminated 's' command What has gone wrong? | Assuming your JSON document looks something like { "scripts": { "other-key": "some value" }} ... and you'd like to insert some other key-value pair into the .scripts object. Then you may use jq to do this: $ jq '.scripts.watch |= "tsc -w"' file.json{ "scripts": { "other-key": "some value", "watch": "tsc -w" }} or, $ jq '.scripts += { watch: "tsc -w" }' file.json{ "scripts": { "other-key": "some value", "watch": "tsc -w" }} Both of these would replace an already existing .scripts.watch entry. Note that the order of the key-value pairs within .scripts is not important (as it's not an array). Redirect the output to a new file if you want to save it. To add multiple key-value pairs to the same object: $ jq '.scripts += { watch: "tsc -w", dev: "nodemon dist/index.js" }' file.json{ "scripts": { "other-key": "some value", "watch": "tsc -w", "dev": "nodemon dist/index.js" }} In combination with jo to create the JSON that needs to be added to the .scripts object: $ jq --argjson new "$( jo watch='tsc -w' dev='nodemon dist/index.js' )" '.scripts += $new' file.json{ "scripts": { "other-key": "some value", "watch": "tsc -w", "dev": "nodemon dist/index.js" }} sed is good for parsing line-oriented text. JSON does not come in newline-delimited records, and sed does not know about the quoting and character encoding rules etc. of JSON. To properly parse and modify a structured data set like this (or XML, or YAML, or even CSV under some circumstances), you should use a proper parser. As an added benefit of using jq in this instance, you get a bit of code that is easily modified to suit your needs, and that is equally easy to modify to support a change in the input data structure. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/625819",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/390517/"
]
} |
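As for the literal error: in s/.../.../ the next / ends each section, so either a newline that crept into $DST (the replacement in the Goals section spans lines) or the / in dist/index.js terminates the command early - that is where unterminated 's' command comes from. If you ever must patch text with sed, pick a delimiter that occurs in neither string, e.g.

$ sed "s|$SRC|$DST|" foo.json

but for JSON the jq route above is the one that won't silently corrupt the file.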
626,065 | I have a question: do all Linux distros share the same boot files, GRUB files and kernel files - the parts essential to run them - with only the ISO image of the distro being different? I have Fedora installed on my system; can I replace it with Manjaro by changing the GRUB entry? How safe is it? | Various distributions of course have different packages of pretty much everything. Nevertheless, three components are typically rather well isolated from each other: bootloader, kernel, userspace programs. Bootloader needs to be able to boot various kernels, otherwise its usability would be quite limited. The kernel doesn't really depend much on the userspace, since it is providing a basic environment for the userspace to run in. The userspace has some dependence on the kernel, but usually not for the basic tasks. It may be requiring various kernel functionality for certain aspects (even rather substantial), but it is often possible to use kernel from another distribution from "around the same time" (couple of months of difference should not matter, unless the userspace is using some bleeding edge features). If you just want to do piece-wise bootstrap of new distribution: run dist-A install dist-B-without-kernel (e.g. in chroot ) boot B-with-kernel-A install kernel-B boot dist-B-with-kernel-B it should work just fine (source: "been there, done that" ). Depending on your particular usecase (kernel features needed by the userspace), you may even be able to run happily a "foreign" kernel without issues. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/626065",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/441027/"
]
} |
626,070 | I am trying to mirror the directory and file structure of a particular directory. However, I want the mirrored files to have no size. So for example, if I had the following directory tree: original_folder├── images│ ├── image1.jpg (2 MB)│ ├── image2.jpg (3 MB)├── videos│ ├── video1.mp4 (300 MB)│ ├── video2.mp4 (400 MB) I want the following output: mirrored_folder├── images│ ├── image1.jpg (0 b)│ ├── image2.jpg (0 b)├── videos│ ├── video1.mp4 (0 b)│ ├── video2.mp4 (0 b) I tried using the following command in original_folder : cd original_folderfind . -name '*' -exec touch ../mirrored_folder/'{}' \; However, this command tries to execute touch ../mirrored_folder/./images/image1.jpg . Note the dot which messes the command up. How can I achieve what I'm trying to do? | GNU cp (from coreutils ) can do this: cp -r --attributes-only original_folder/* mirrored_folder/ From man cp : --attributes-only don't copy the file data, just the attributes -R , -r , --recursive copy directories recursively Using the find command, as the OP says xe is on macOS and its cp command has no --attributes-only option: find original_folder/ -type d -exec \ sh -c 'mkdir -p "mirrored_folder/${1#*/}"' _ {} \; \-o -type f -exec \ sh -c 'touch "mirrored_folder/${1#*/}"' _ {} \; Note that the find solution creates fresh directories and files, unlike the cp solution, which keeps their attributes (default: mode, ownership, timestamps). | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/626070",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/262730/"
]
} |
626,143 | I have a new machine running Debian sid on which I generated a new ssh key-pair. I wanted to find a convenient way to copy this new key-pair to various other machines using my old Ubuntu machine and its key-pair. I have disabled password logins for all the "remote" machines, so I wanted to use the old machine as an intermediate. While researching this, I found the exact situation given as an example in the manual page for ssh-copy-id . I followed the example to access a pi zero running pihole , but got the error in the post title. To sum up my steps from that example, where debian is the machine with the new key-pair, sarp.lan is the machine with the old key-pair and pihole is the "remote" machine, I did: However, running ssh -v pihole , I do see the output debug1: Server accepts key: /home/sarp/.ssh/id_rsa RSA SHA256:V74Y4EhlszaIzco6oxOtl86ALj/U8rhXO2XUpEftZLU agent I read through various posts on this topic, but none of the solutions worked for me. Here are some details/things I have tried: Permissions are correct for .ssh/ I am not running gnome-keyring-daemon: echo $SSH_AUTH_SOCK returns /tmp/ssh-a8Ol5O0XY9Fv/agent.1326 , and I don't see the keyring daemon running through ps aux . ssh-add -l correctly displays the two keys as can be seen in the first picture (one from the old machine, the other from the new machine) I just copied the .ssh/config from the old machine, so the hostname/username/etc. should be fine. Let me know if I should provide additional useful info, and apologies if it is something very obvious, but what am I missing here? | Make sure the permissions of the key directory and keys are correct on the client. The ~/.ssh directory should only have execute, read and write permissions for the user. If not then change them: User can execute, read and write chmod 700 ~/.ssh For the private keys, such as id_rsa, user can read and write chmod 600 ~/.ssh/id_rsa For the public keys, user can read and write, others can read chmod 644 ~/.ssh/*.pub | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/626143",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/134933/"
]
} |
626,241 | I am writing a simple shell script and when I check my script at https://www.shellcheck.net it gives me an error at line 14 Line 14: sysrc ifconfig_"${Bridge}"="addm ${NIC}" ^-- SC2140: Word is of the form "A"B"C" (B indicated). Did you mean "ABC" or "A\"B\"C"? https://github.com/koalaman/shellcheck/wiki/SC2140 In fact, I didn't understand how to correct it #!/bin/shSetup() { # Determine interface automatically NIC="$(ifconfig -l | awk '{print $1}')" # Enabling the Bridge Bridge="$(ifconfig bridge create)" # Next, add the local interface as member of the bridge. # for the bridge to forward packets, # all member interfaces and the bridge need to be up: ifconfig "${Bridge}" addm "${NIC}" up # /etc/rc.conf sysrc cloned_interfaces="${Bridge}" sysrc ifconfig_"${Bridge}"="addm ${NIC}" # Create bhyve startup script touch /usr/local/etc/rc.d/bhyve chmod +x /usr/local/etc/rc.d/bhyve cat << 'EOF' >> /usr/local/etc/rc.d/bhyve#!/bin/sh# PROVIDE: bhyve# REQUIRE: DAEMON# KEYWORD: shutdown. /etc/rc.subrname=bhyvestart_cmd="${name}"_startbhyve_start() {}load_rc_config "${name}"run_rc_command "$1"EOF sysrc bhyve_enable="YES"} | The single string ifconfig_"${Bridge}"="addm ${NIC}" is the same as "ifconfig_$Bridge=addm $NIC" (the curly braces aren't needed and the whole string can be quoted by a single set of double quotes). Since you used double quotes to quote two separate parts of the same string, ShellCheck wondered whether you possibly meant for the "inner pair" of quotes to be literal and actually part of the string, i.e. whether you meant to write ifconfig_"${Bridge}\"=\"addm ${NIC}" . Since you didn't, it would be better to rewrite the string as I showed before, just to make it clear that it's one single string with no embedded quotes. Note that you have made no error in your code with regard to the quoting here, and that ShellCheck is simply inquiring about your intention because this is (arguably) a common error when you do want literal double quotes inside a string. If you feel strongly about your way of quoting the string, then you may disable the ShellCheck warning with a directive in a comment before the affected line: # shellcheck disable=SC2140sysrc ifconfig_"${Bridge}"="addm ${NIC}" This basically means "I know what I'm doing and rule SC2140 does not apply here, thank you very much." | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/626241",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/448110/"
]
} |
626,248 | Is there a way to see what the result of a find . -exec somecommand {} \; would be with substitutions, without actually running the commands? Like a dry run (or test run or print)? For example, suppose I have the following file structure: /a/1.txt/a/2.txt/a/b/3.txt Is there a way to test find . -type f -exec rm {} \; from within the a directory such that the output would be printed to stdout but not executed, such as: rm 1.txtrm 2.txtrm b/3.txt Update Note: rm is just an example command, I'm interested in the general case | You can run echo rm instead of rm : find . -type f -exec echo rm {} \; Also, find has a -delete option to delete files it finds | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/626248",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/6250/"
]
} |
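For the general case the question asks about, a sketch that prints each candidate command with proper shell quoting instead of running it; bash's printf %q is assumed, so this is not plain POSIX sh:
find . -type f -exec bash -c 'for f; do printf "rm %q\n" "$f"; done' bash {} +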
626,297 | I'd like to count the number of elements inside a folder. I thought of function lsc { /bin/ls -l $1 | sed 1d | wc -l } but then I remembered how awk people reduced those kind of pipes, getting rid of superfluous grep s and sed s. So, how would you do that in awk ? | There is no need for ls , sed , wc or awk . If you simply want to count how many names a pattern expands to, then you can do that with set -- *echo "$#" The set command sets the positional parameters ( $1 , $2 , etc.) to the names matching the * pattern. This automatically sets the special variable $# to the number of set positional parameters, i.e. the number of names matching the given pattern. In bash or in a shell that has named arrays, you can use names=(*)echo "${#names[@]}" This works similarly, but sets the elements of the names array to the names resulting from the expansion of the * pattern. The variable expansion ${#names[@]} will be the number of elements in the names array. An issue with this is that if the pattern doesn't match anything, it will remain unexpanded, so you get a count of 1 (even though the directory is empty). To fix this in the bash shell, set the nullglob shell option with shopt -s nullglob . By setting this shell option, patterns that do not match anything will be removed completely. In bash , if you additionally want to count hidden names, set the dotglob shell option with shopt -s dotglob . Your function could look something like this in bash : lsc () ( shopt -s nullglob set -- "$1"/* echo "$#") Note the use of ( ... ) for the function body to avoid setting nullglob in the calling shell. Or, for /bin/sh : lsc () { set -- "$1"/* if [ -e "$1" ] || [ -L "$1" ]; then echo "$#" else echo 0 fi} The if statement here makes sure that the first positional parameter is the name of an actual file and not an unexpanded pattern (due to not matching anything). The -e ("exists") must be true for us to trust the number in $# . If it isn't true, then we additionally check whether the name refers to a symbolic link with the -L test. If this is true, we know that the first thing that the pattern expanded to was a "dead" symbolic link (a symbolic link pointing to a non-existent file), and we trust $# to be correct. If both tests fail, we know that we didn't match anything and therefore output 0 . | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/626297",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/22046/"
]
} |
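A quick way to try the bash version of lsc from the answer above against a directory with a known number of entries:
mkdir -p /tmp/lsc-demo
touch /tmp/lsc-demo/a /tmp/lsc-demo/b
lsc /tmp/lsc-demo   # prints 2 (hidden names are only counted if dotglob is set)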
626,451 | X11 window managers historically have a notion of screens - each screen has a distinct set of windows and you can switch between them using the same physical display. I'm recording a screencast so I would really like to have a secondary, smaller X11 screen on which a handful of windows will be displayed, while keeping the content of my main screen intact and hidden. So I would like to have a virtual screen in a window, which will contain other windows. Then I can simply grab this window for my screencast. How do I do that? I would prefer a native X11 approach (maybe there are window managers which do that with ease?) Maybe there's a way I can declare a virtual monitor for X11 server to use, that ends up displayed as a window? Failing that, I guess I could use Xvfb or VNC, but obviously it is harder to set up. Maybe some other popular approaches are there? | Xephyr if your distro ships it. Xephyr or its predecessor Xnest. Run Xephyr :1 ; it starts displaying a window. Then run DISPLAY=:1 rxvt or DISPLAY=:1 xfwm4 , so the terminal would appear in the Xephyr display, or have the window manager manage windows in the Xephyr display. The -screen parameter controls how big the Xephyr window is, e.g. Xephyr :1 -screen 1024x768 . | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/626451",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/57502/"
]
} |
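A small launcher sketch building on the answer above; the window manager (xfwm4) and terminal (xterm) are only example choices, and the sleep is a crude way to wait for the nested server to come up:
#!/bin/sh
Xephyr :1 -screen 1280x720 &   # nested X server shown as a 1280x720 window
sleep 1                        # give the server a moment to start
DISPLAY=:1 xfwm4 &             # a window manager for the nested display
DISPLAY=:1 xterm               # any client you want to screencast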
626,495 | I have a file called fold.txt . It has two values in each line separated by a space. If I say that the first value represents column A and the second value after the space is column B, then how can I add all the values of column A and all the values of column B and show the sum of each column individually? I am expecting something like this: $ cat fold.txt100 500200 300700 100 Output: Total count Column A = 1000Total count column B = 900 | With awk : awk '{ sum_A +=$1; sum_B+=$2; };END{ print "Total count Column A = " sum_A +0; print "Total count column B = " sum_B +0;}' infile In the awk language, which is a tool for text-processing purposes, $1 represents the first column's value, $2 the second column's value, $3 the third, and so on. There is also a special one, NF , which represents the last column Id, and accordingly $NF is the last column's value (so you can replace $2 above with $NF too; and since NF is the last column Id, the value of the variable tells you how many columns you have; its value updates for each line awk reads from the input). To handle the edge case where the input file is empty and still get numeric output, we add 0 to the result, forcing awk to output a numeric result. Columns (or fields) in awk are distinguished by the FS variable ( F ield S eparator), which defaults to Space/Tabs. If you want columns split on a different character, you can redefine it with the -F option for awk like in: awk -F'<character-here>' '...' infile or within the BEGIN{...} block with FS : awk 'BEGIN{ FS="<character-here>"; }; { ... }' infile For example, for an input file like below (now it's comma instead of space): 100,500200,300700,100 you can write your awk code as follows: awk -F',' '{ sum_A +=$1; sum_B+=$2; };END{ print "Total count Column A = " sum_A +0; print "Total count column B = " sum_B +0;}' infile Or within the BEGIN block: awk 'BEGIN{ FS=","; }; { sum_A +=$1; sum_B+=$2; };END{ print "Total count Column A = " sum_A +0; print "Total count column B = " sum_B +0;}' infile Going a little bit further, to sum all N columns of your input file on the following sample: 100,500,140,400200,300,640,200700,100,400,130 We talked about NF in the first paragraph (the NF value tells you how many columns you have, updated per line): awk -F',' '{ for (i=1; i<=NF; i++) sum[i]+=$i; };END{ for (colId in sum) { printf ("Total count Column: %d= %d\n", colId, sum[colId] ); };}' infile The only new thing here is that we used an awk array to address the same column Id taken from the value of i and add the values $i into that array (the indices/keys of this array are column Ids); then in the END{...} block we loop over our array on the keys it has seen, printing the column Id first and then its sum next to it. You will see output like below: Total count Column: 1= 1000Total count Column: 2= 900Total count Column: 3= 1180Total count Column: 4= 730 | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/626495",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/365156/"
]
} |
626,514 | I want to make uppercase all driver names that are from countries starting with " United ". For example: From 20 [United Kingdom] Nigel Mansell 188 31 to 20 [United Kingdom] NIGEL MANSELL 188 31 The command that I am using: cat f1.txt | sed -r 's/[^ ]"United"\s+[A-Z]+[a-z]*]\s+[A-Z]+[a-z]*\s+[A-Z]+[a-z]*-?[A-Z]?+[a-z]?*/\U&/g' Full list: Rank Country Driver Races Wins1 [United Kingdom] Lewis Hamilton 264 943 [Spain] Fernando Alonso 311 328 [United Kingdom] Jenson Button 306 1511 [Netherlands] Max Verstappen 116 917 [United Kingdom] David Coulthard 246 1320 [United Kingdom] Nigel Mansell 188 3126 [United Kingdom] Jackie Stewart 100 2727 [United Kingdom] Damon Hill 115 2228 [Spain] Carlos Sainz Jr. 115 032 [United Kingdom] Graham Hill 177 1437 [United Kingdom] Jim Clark 72 2538 [Poland] Robert Kubica 97 141 [South Africa] Jody Scheckter 112 1042 [New Zealand] Denny Hulme 112 847 [Switzerland] Clay Regazzoni 131 549 [Sweden] Ronnie Peterson 123 1050 [New Zealand] Bruce McLaren 102 451 [Russian Federation] Daniil Kvyat 107 052 [United Kingdom] Eddie Irvine 147 454 [United Kingdom] Stirling Moss 72 1658 [United Kingdom] John Surtees 111 659 [United States] Mario Andretti 128 1260 [United Kingdom] James Hunt 92 1063 [United Kingdom] John Watson 152 564 [Thailand] Alexander Albon 35 069 [United States] Dan Gurney 86 471 [United Kingdom] Mike Hawthorn 48 376 [United Kingdom] Lando Norris 35 078 [United Kingdom] Paul di Resta 59 080 [United States] Richie Ginther 52 185 [United States] Phil Hill 51 386 [United Kingdom] Martin Brundle 158 087 [United Kingdom] Johnny Herbert 161 389 [Sweden] Stefan Johansson 79 090 [New Zealand] Chris Amon 97 094 [United Kingdom] Tony Brooks 41 695 [Venezuela] Pastor Maldonado 95 199 [United Kingdom] Derek Warwick 147 0100 [United States] Eddie Cheever 132 0101 [Switzerland] Jo Siffert 97 2103 [Russian Federation] Vitaly Petrov 57 0104 [United Kingdom] Peter Revson 30 2113 [United Kingdom] Peter Collins 36 3114 [United Kingdom] Innes Ireland 52 1119 [Sweden] Jo Bonnier 106 1120 [Spain] Pedro de la Rosa 105 0124 [United Kingdom] Mark Blundell 61 0125 [United States] Harry Schell 63 0127 [Sweden] Gunnar Nilsson 31 1128 [Spain] Jaime Alguersuari 46 0130 [United States] Jim Rathmann 12 1132 [United Kingdom] Mike Hailwood 51 0133 [Switzerland] Sebastien Buemi 55 0135 [United Kingdom] Mike Spence 36 0136 [South Africa] Tony Maggs 26 0140 [United States] Masten Gregory 40 0142 [United States] Sam Hanks 9 1143 [United Kingdom] Piers Courage 27 0145 [United States] Bill Vukovich 5 2147 [United Kingdom] Tom Pryce 42 0148 [United Kingdom] Roy Salvadori 48 0149 [United States] Jimmy Bryan 9 1153 [Sweden] Marcus Ericsson 97 0159 [Switzerland] Marc Surer 82 0160 [Netherlands] Jos Verstappen 106 0161 [United Kingdom] Stuart Lewis-Evans 14 0167 [United Kingdom] Mike Parkes 6 0168 [United States] Rodger Ward 12 1174 [United Kingdom] Jonathan Palmer 84 0176 [Sweden] Reine Wisell 23 0179 [United Kingdom] Jackie Oliver 50 0180 [United States] Johnnie Parsons 10 1181 [United Kingdom] Peter Arundell 13 0185 [United States] Tony Bettenhausen 13 0186 [United Kingdom] Cliff Allison 16 0187 [United Kingdom] Richard Attwood 17 0188 [United Kingdom] Peter Gethin 30 1191 [Switzerland] Rudi Fischer 7 0192 [United States] Johnny Thomson 9 0194 [New Zealand] Howden Ganley 36 0199 [United States] Troy Ruttman 8 1200 [United States] Lee Wallard 2 1 | You are overdoing it, take advantage of your file structure to keep it simple! 
If we find the string [United in a line, uppercasing everything from the closing bracket to the end of the line gives the result you are after. Translating this into Sed language, sed '/\[United/s/].*/\U&/' file Note that the above is specific to GNU Sed. If that is not available but you are on a POSIX system, you can use Ex (or see αғsнιη's Awk version ) with a similar syntax: printf '%s\n' 'g/\[United/s/].*/\U&/' '%p' | ex file To save the changes to the file instead of printing the results, change %p to x . | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/626514",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/448240/"
]
} |
626,637 | I'm new here and new to bash/linux. My teacher gave me an assignment to allow a script to be run only when you're "really" root and not when you're using sudo. After two hours of searching and trying I'm beginning to think he's trolling me. Allowing only root is easy, but how do I exclude users that run it with sudo? This is what I have: if [[ $EUID -ne 0 ]]; then echo "You must be root to run this script." exitfi | The only way I could think of is to check one of the SUDO_* environment variables set by sudo: #!/usr/bin/env shif [ "$(id -u)" -eq 0 ]then if [ -n "$SUDO_USER" ] then printf "This script has to run as root (not sudo)\n" >&2 exit 1 fi printf "OK, script run as root (not sudo)\n"else printf "This script has to run as root\n" >&2 exit 1fi Notice that of course this solution is not future proof as you cannot stop anyone from setting a variable before running the script: $ suPassword:# SUDO_USER=whatever ./root.shThis script has to run as root (not sudo)# ./root.shOK, script run as root (not sudo) | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/626637",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/448526/"
]
} |
626,690 | I have this script, #!/bin/shguess=$(echo $RANDOM | cut -b 1-2)read -p "Im thinking of a number, can you guess what it is? " numbercase "$number" in "$guess") echo "\nCongratulation number guessed corectly!" exit 0 ;; *) echo "\nIncorrect number guessed, try again? [yes or no]" read yesorno case "$yesorno" in "yes") sh guess.sh ;; "no") echo "\nHave a nice day!" exit 0 ;; *) echo "Invalid input" exit 1 ;; esac ;;esac The variable $guess was supposed to return a 2-digit number, but returns null. Running the game with sh guess.sh and pressing return returns congrats instead of the correct number being guessed. Where am I going wrong? | Use bash instead of sh guess=$(echo $RANDOM | cut -b 1-2) ^-----^ SC3028: In POSIX sh, RANDOM is undefined. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/626690",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/387728/"
]
} |
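If the script really must stay POSIX sh, a hedged sketch of a $RANDOM substitute using awk's rand(); srand() seeds from the clock, so the value only changes once per second:
guess=$(awk 'BEGIN { srand(); printf "%d", 10 + rand() * 90 }')   # a number from 10 to 99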
626,722 | I want to remove the album art that is embedded in a bunch of MP3s (thousands) and cannot find a command-line way to do this. I can add art via lame and I can add or remove pretty much any other tags with id3tag but I can’t find a way to do something like: for file in **/*.mp3 do <remove image command> $file; done Anyone know what I might put in for <remove image command> ? | There does not appear to be a good solution, and the best that I came up with was using ffmpeg to quickly create a new file. The command that @awesome14 provided did not work for me on my system (mostly it did, but it generated many errors and that resulted in songs that were not copied). This is the command I came up with. for song in **/*.mp3 do NAME=$(echo ${song%/*} | sed -e 's|[/ ]|-|g') ffmpeg -y -i $song -vn -c copy /path/NOART/"$NAME-"${song##*/}; done This works with a bash5 or zsh shell. **/*.mp3 Every file matching .mp3 in every directory under the current echo ${song%/*} | sed -e 's|[/ ]|-|g' converts all slashes and spaces in the path portion (not in the file name) to dashes -vn -c copy Do not copy video (video no) and otherwise copy the file unmodified /path/NOART/"$NAME-"${song##*/} save to the path with the filename set to the NAME variable and the base name of the $song. Output filename will look like "10Cc-Look-Hear-Dressed To Kill.mp3". This has the additional advantage of not removing all the metadata in the song, only stripping the "video" which in this case is the album cover art. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/626722",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/61289/"
]
} |
626,807 | Is there a way to do: output | grep "string1" | grep "string2" BUT with awk, WITHOUT PIPE? Something like: output | awk '/string1/ | /string2/ {print $XY}' Result should be a subset of the matches, if that makes sense. | The default action with awk is to print, so the equivalent of output | grep string1 | grep string2 is output | awk '/string1/ && /string2/' e.g. $ cat tstfoobarfoobarbarfoofoothisbarbazotherstuff$ cat tst | awk '/foo/ && /bar/'foobarbarfoofoothisbarbaz | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/626807",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/118321/"
]
} |
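To also pick out a field, as the $XY placeholder in the question suggests, the condition and the action combine into one awk program:
output | awk '/string1/ && /string2/ { print $2 }'   # second field of matching lines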
626,812 | I am using the following command to output a list of servers with associated IPs. For another part of my script, I need this output to be formatted in a particular manner. With an incrementing line number above each line. Example below paste <(aws ec2 describe-instances --query 'Reservations[*].Instances[*].Tags[*].{Name:Value}' --output text) \ <(aws ec2 describe-instances --query 'Reservations[*].Instances[*].{PrivateIP:PrivateIpAddress}' --output text) | awk 'ORS="\n\n"' >> $TMP1 Which outputs ( in a tmp file ) : Dev Server 111.11.11.11Test Server 222.22.22.22 However how can I append numbers to each blank line like so? Example 1Dev Server 111.11.11.112Test Server 222.22.22.22 | Use awk : $ cat FILEDev Server 111.11.11.11Test Server 222.22.22.22 $ awk '{ if ($0 ~ /^$/) { print ++counter } else { print $0 }}' FILE1Dev Server 111.11.11.112Test Server 222.22.22.22 | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/626812",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/386071/"
]
} |
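A terser equivalent of the answer above: NF is 0 on blank lines, so they are replaced by a running counter, and the trailing 1 prints every other line unchanged:
awk '!NF { print ++n; next } 1' FILE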
626,829 | Every time I log in to my user account "bob" I have to use these commands (with sudo or in the root account) to connect to the WiFi: wpa_supplicant -B -D wext -i wlan0 -c /etc/wpa_supplicant.confdhclient wlan0 wpa_supplicant -B -D wext -i wlan0 -c /etc/wpa_supplicant.conf gives me this result : Successfully initialized wpa_supplicantCould not set interface wlan0 flags (UP): Operation not permittedWEXT: Could not set interface 'wlan0' UPwlan0: Failed to initialized driver interface And for dhclient wlan0 I get: RTNETLINK answers: Operation is unreachable I am doing this on a Raspberry PI 4 with Debian 10 Codename: buster. I have systemd. How do I set up my environment so that every time I boot up, or I log in with "bob" or even root, my system connects to the WiFi? I was thinking of using the commands I just showed and put them in .profile but I cannot run them with the "bob" account. | Use awk : $ cat FILEDev Server 111.11.11.11Test Server 222.22.22.22 $ awk '{ if ($0 ~ /^$/) { print ++counter } else { print $0 }}' FILE1Dev Server 111.11.11.112Test Server 222.22.22.22 | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/626829",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/31175/"
]
} |
626,866 | I get this message every time I install a new package in KDE neon via the terminal. Is this normal and should I ignore it, or should I fix it? Reading package lists... DoneBuilding dependency tree Reading state information... DoneStarting pkgProblemResolver with broken count: 0Starting 2 pkgProblemResolver with broken count: 0DoneThe following NEW packages will be installed: tree0 upgraded, 1 newly installed, 0 to remove and 0 not upgraded.Need to get 0 B/43.0 kB of archives.After this operation, 115 kB of additional disk space will be used.Selecting previously unselected package tree.(Reading database ... 280095 files and directories currently installed.)Preparing to unpack .../tree_1.8.0-1_amd64.deb ...Unpacking tree (1.8.0-1) ...Setting up tree (1.8.0-1) ...Processing triggers for man-db (2.9.1-1) ...Not building database; man-db/auto-update is not 'true'. | The warning is just that, a warning; it means that mandb isn’t run when relevant packages are installed, and the result of that is that the manual page index caches aren’t updated. The technical reason for the warning is the absence of /var/lib/man-db/auto-update . I’m not sure what would cause that. To restore the man-db trigger, restore that file: sudo touch /var/lib/man-db/auto-update You will no longer see the warning, and the caches will be updated. You can update the caches yourself: sudo mandb -pq | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/626866",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/439207/"
]
} |
626,894 | The Fedora TeXLive packages do not correctly compile one of my documents. Others cannot reproduce this issue, so I'm testing if a separate TeXLive installation solves the problem. The Fedora packages are installed to /usr/bin/latex ; the separate TeXLive installation is located in /usr/local/texlive/2020/bin/x86_64-linux . Now I need to set the PATH. Using gedit ~/.profile , I have added the following lines to the (previously blank) file: PATH=/usr/local/texlive/2020/bin/x86_64-linux:$PATH; export PATHMANPATH=/usr/local/texlive/2020/texmf-dist/doc/man:$MANPATH; export MANPATHINFOPATH=/usr/local/texlive/2020/texmf-dist/doc/info:$INFOPATH; export INFOPATH This, as far as I can tell, is exactly what TeXLive told me to do. However, which latex still returns /usr/bin/latex , not the expected /usr/local path. Where have I gone wrong? | The warning is just that, a warning; it means that mandb isn’t run when relevant packages are installed, and the result of that is that the manual page index caches aren’t updated. The technical reason for the warning is the absence of /var/lib/man-db/auto-update . I’m not sure what would cause that. To restore the man-db trigger, restore that file: sudo touch /var/lib/man-db/auto-update You will no longer see the warning, and the caches will be updated. You can update the caches yourself: sudo mandb -pq | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/626894",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/117016/"
]
} |
626,994 | I use an external HPC system via ssh. Today I tried to install "ASE" a Python code for dealing with atoms. I followed instructions to modify my .bashrc file but kept getting ModuleNotFoundError: No module named 'ase' So I executed a source command for my .bashrc file, thinking that would be necessary to get the changes to the .bashrc file recognized (unfortunately, I don't remember the exact command). Now when I try to execute any kind of command (even after logging out and logging back in), I get: ###################################################################################### (<-- normal welcome message that I always get on login up to here)-bash: /usr/bin/whoami: Argument list too long-bash: /usr/bin/cut: Argument list too long-bash: /usr/bin/logger: Argument list too longme@n01:~> I have looked around online for a solution, but don't see any examples of this particular situation. Most people who get the same error still seem to be able to access their files. Can anyone help? I can't login as root because this is a system I'm accessing via ssh. I can't access my .bashrc or .bash_profile files without getting the error. | If I'm interpreting your text correctly, then you are quite possibly sourcing the ~/.bashrc recursively, either from itself, or it and ~/.bash_profile are sourcing each other indefinitely (it's not clear from the question). The effect of this would likely be that one or several environment variables are growing out of proportion, which would lead to the error message that you quote. To fix this, you will have to access your account without starting the bash shell. You can do that with, for example, ssh -t user@host /bin/sh (where user@host is your username on and the host's address). This starts the /bin/sh shell rather than your default login shell. The /bin/sh shell does not normally source the ~/.bashrc file, so you would not have the same issue with this shell. You could pick any other shell, but the /bin/sh shell is more or less guaranteed to exist. This would allow you to log into the account, into a possibly unfamiliar but fully functional shell, to fix the issue, which, again, seems to be related to recursively sourcing the ~/.bashrc file in one way or another. I have not addressed the issue you had with Python. That issue may be something that you may want to ask a separate question about, after making sure that your local sysadmin team can't help you with it first. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/626994",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/448904/"
]
} |
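Once logged in via the plain /bin/sh, a one-liner sketch to spot a runaway environment variable by printing the largest values first:
env | awk -F= '{ print length($0) - length($1) - 1, $1 }' | sort -rn | head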
627,027 | I've got a Docker image which generates log-like files when errors occur. I've mounted the directory it writes to to my host machine with a bind mount. However, the created files are owned by root . Though my user account has root privileges, it is tedious to run chown and chgrp after every run of the container in order to inspect the files. Is there a way to have the container set the owner and group of the files to that of the user who ran the container? For some context, here's a toy example I created: Dockerfile FROM debianWORKDIR /rootVOLUME /root/outputCOPY run.sh /root/ENTRYPOINT ["./run.sh"] run.sh #!/bin/bashecho hello > output/dump My execution command is docker run -v $PWD/output:/root/output test | The files are created by the user that runs within the container. If your containerized command runs as root , then all files will be created as root. If you want your files to be created as another user, run the container as this other user, e.g. docker run -v "$(pwd)/output":/root/output -u $(whoami) test Note: Depending on your container, this might not work out of the box (e.g., because, within the container, you need to open a privileged port or your script is only accessible by a given (super)user). | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/627027",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/413026/"
]
} |
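A variant of the run command that maps the caller's numeric UID/GID into the container; this also works when the username does not exist inside the image, subject to the same caveat the answer notes about the containerized command needing access to its working directory:
docker run -v "$PWD/output":/root/output -u "$(id -u):$(id -g)" test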
627,169 | I have a file on the Linux server that I want to move (not copy) to my local computer. However, I did something like as below after reading commands from stack overflow, however it just copies the file and doesn't move it. scp -r [email protected]:/home/obs/folder/test.txt /home/yuan/folder/ Any help is highly appreciated. | "Moving" is essentially copying and then deleting the source file. If you want to "move" a file over the network, you have to do just that.This is always preferable, since should the network connection fail, you can retry copying, without losing any data (should your files have been transferred improperly, but deleted afterwards). Once the files are transferred, you have to delete the source files on the server, for instance with: ssh [email protected] 'rm /home/obs/folder/test.txt' The option -r you used in your example, is for copying files recursively, which suggests you want to copy directories of files over the network. I suppose you want to move all files over the network (transfer and then delete all files inside that source directory). When copying or "moving" files between two machines, I suggest using rsync . It will only transfer new and changed files, and skip identical files already at the destination. It has an option to remove source files in one go, after they've been transferred, which should mimic the behavior you presumably expect when "moving" a file from one machine to the other: rsync -aPEmivvz --remove-source-files [email protected]:/home/obs/folder /home/yuan And for just one file you'd use: rsync -aPEmivvz --remove-source-files [email protected]:/home/obs/folder/file.txt /home/yuan/folder/ Using the --remove-source-files option simply deletes the file(s) after they've been transferred. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/627169",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/422902/"
]
} |
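A cautious pattern worth adding: rehearse the transfer with rsync's dry-run flag before letting it delete anything at the source:
rsync -aPn --remove-source-files [email protected]:/home/obs/folder/ /home/yuan/folder/   # -n = dry run, nothing is copied or removed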
627,351 | I have a CSV file, file.csv , containing date and time like this: id0,2020-12-12T07:18:26,7fid1,2017-04-28T19:59:00,80id2,2017-04-28T03:14:35,e4id3,2020-12-12T23:45:09,ffid4,2020-12-12T09:12:34,a1id5,2017-04-28T00:31:54,65id6,2020-12-12T20:13:47,45id7,2017-04-28T21:04:30,7f I would like to split the file based on the date in column 2. Using the above example, it should create 2 files: file_1.csvid1,2017-04-28T19:59:00,80id2,2017-04-28T03:14:35,e4id5,2017-04-28T00:31:54,65id7,2017-04-28T21:04:30,7f and file_2.csvid0,2020-12-12T07:18:26,7fid3,2020-12-12T23:45:09,ffid4,2020-12-12T09:12:34,a1id6,2020-12-12T20:13:47,45 I tried to use sort and awk to do the job but it splits the file into 8 files based on the date and time. sort -k2 -t, file.csv | awk -F, '!($2 in col) {col[$2]=++i} {print > ("file_" i ".csv")}' How can I split the file based on the date only (not date and time)? | How about: awk -F', ' ' { date = substr($2,1,10) } !(date in outfile) { outfile[date] = "file_" (++numout) ".csv" } { print > outfile[date] }' file.csv If it's a large file with many unique dates, you may need to prevent "too many open files" errors with: { print >> outfile[date]; close(outfile[date]) } | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/627351",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/379553/"
]
} |
627,429 | In my zsh shell, I am dynamically changing the prompt depending on whether I am inside a git repository or not. I am using the following git command to check: if $(git rev-parse --is-inside-work-tree >/dev/null 2>&1); then... now I also want to distinguish whether the current directory is being ignored by git . So I have added one more check to my if statement: if $(git rev-parse --is-inside-work-tree >/dev/null 2>&1) && ! $(git check-ignore . >/dev/null 2>&1); then... This works fine, but I was wondering whether I could simplify this into one git command. Since the prompt is refreshed on every ENTER , it tends to slow down the shell noticeably on some slower machines. UPDATE The accepted solution from @Stephen Kitt works great, except in the following situation: I am using a repository across filesystems. Let's say git resides at /.git (because I want to track my config files in /etc ), but I also want to track some files in /var/foo , which is a different partition/filesystem. When I am located at / and execute the following command, everything works as expected, and I get return code 1 (because /var/foo is being tracked): # git check-ignore -q /var/foo But when I am located anywhere in /var , the same command fails with error code 128 and the following error message: # git check-ignore -q /var/foofatal: not a git repository (or any parent up to mount point /)Stopping at filesystem boundary (GIT_DISCOVERY_ACROSS_FILESYSTEM not set). But I think this is only a problem with the check-ignore command. Otherwise git seems to work fine across filesystems. I can track files in /var/foo fine. The expected behavior should be that git check-ignore -q /var/foo returns 1 , and git check-ignore -q /var/bar returns 0 , if it is not being tracked. How can I fix this problem? | git check-ignore . will fail with exit code 128 if . isn’t in a git repository (or any other error occurs), and with exit code 1 only if the path isn’t ignored. So you can check for the latter only: git check-ignore -q . 2>/dev/null; if [ "$?" -ne "1" ]; then ... Inside the then , you’re handling the case where . is ignored or not in a git repository. To make this work across file system boundaries, set GIT_DISCOVERY_ACROSS_FILESYSTEM to true : GIT_DISCOVERY_ACROSS_FILESYSTEM=true git check-ignore -q . 2>/dev/null; if [ "$?" -ne "1" ]; then ... | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/627429",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/155832/"
]
} |
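Putting the answer's pieces together, a sketch of a single helper the prompt code can call; success means inside a repository and not ignored:
in_tracked_dir() {
    GIT_DISCOVERY_ACROSS_FILESYSTEM=true git check-ignore -q . 2>/dev/null
    [ "$?" -eq 1 ]
}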
627,438 | I am trying to implement this helper bash function: ores_simple_push(){( set -eo pipefail git add . git add -A args=("$@") if [[ ${#args[@]} -lt 1 ]]; then args+=('squash-this-commit') fi git commit -am "'${args[@]}'" || { echo; } git push)} with: git commit -am 'some stuff here' , I really don't like having to enter in quotes, so I am looking to do: ores_simple_push my git commit message here gets put into a single string so that would become: git commit -am 'my git commit message here gets put into a single string' Is there a sane way to do this? | In Korn/POSIX-like shells, while "$@" expands to all the positional parameters, separated (in list contexts), "$*" expands to the concatenation of the positional parameters with the first character (byte with some shells) of $IFS ¹ or with SPC if $IFS is unset or with nothing if $IFS is set to the empty string. And in ksh / zsh / bash / yash (the Bourne-like shells with array support), that's the same for "${array[@]}" vs "${array[*]}" . In zsh , "$array" is the same as "${array[*]}" while in ksh / bash , it's the same as "${array[0]}" . In yash , that's the same as "${array[@]}" . In zsh , you can join the elements of an array with arbitrary separators with the j parameter expansion flag: "${(j[ ])array}" to join on space for instance. It's not limited to single character/byte strings, you can also do "${(j[ and ])array}" for instance and use the p parameter expansion flag to be able to use escape sequences or variables in the separator specification (like "${(pj[\t])array}" to join on TABs and "${(pj[$var])array}" to join with the contents of $var ). See also the F shortcut flag (same as pj[\n] ) to join with line-feed. So here: ores_simple_push() ( set -o errexit -o pipefail git add . git add -A args=("$@") if [[ ${#args[@]} -lt 1 ]]; then args+=('squash-this-commit') fi IFS=' ' git commit -am "${args[*]}" || true git push) Or just POSIXly: ores_simple_push() ( set -o errexit git add . git add -A [ "$#" -gt 0 ] || set squash-this-commit IFS=' ' git commit -am "$*" || true git push) With some shells (including bash, ksh93, mksh and bosh but not dash, zsh nor yash), you can also use "${*-squash-this-commit}" here. For completeness, in bash , to join arrays with arbitrary strings (for the equivalent of zsh's joined=${(ps[$sep])array} ), you can do: IFS=; joined="${array[*]/#/$sep}"; joined=${joined#"$sep"} (that's assuming $sep is valid text in the locale; if not, there's a chance the second step fails if the contents of $sep ends up forming valid text when concatenated with the rest). ¹ As a historical note, in the Bourne shell, they were joined with SPC regardless of the value of $IFS | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/627438",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/113238/"
]
} |
627,460 | In all my python scripts, I'd like to replace the 2 consecutive lines import matplotlib as mplmpl.style.use(mpl_plt_default_template) with just the line plt.style.use(mpl_plt_default_template) So far, I've found a way to replace 2 consecutive lines with an expression applied to each of these lines, but not how to replace the 2 lines with a single line altogether. Regarding my unsuccessful approach it applied the following sed-string: sed '/import matplotlib as mpl$/N;//s/mpl.style.use(mpl_plt_default_template)/plt.style.use(mpl_plt_default_template)/g' Note that the first line ( import matplotlib as mpl ) can also occur at other places in the file and should be left unchanged there , so the goal is to perform a replacement only if both lines are found one following the other in the order given. EDIT on additional scope involving find : The ultimate goal is to replace these 2 lines in several textfiles found via the find - command using a pipeline similar to the following manner: find /path/to/dir/ -type f -exec sed 'old-lines/s/single-new-line' {} \; System-specifics: OS: Lubuntu 20.04 LTS | To edit a file, use a scriptable text editor, such as Ed or Ex (both POSIX editors).The syntax is very similar. printf '%s\n' '/^import matplotlib as mpl$/d' 's/mpl/plt' 'w' 'q' | ed -s file printf '%s\n' '/^import matplotlib as mpl$/d' 's/mpl/plt' 'x' | ex file printf '%s\n' supplies commands to the editor. /^import matplotlib as mpl$/d deletes the first line matching the pattern. s/mpl/plt performs the substitution on the next line. w and q or x save the changes. If you really want Sed, sed '/^import matplotlib as mpl$/N; s/.*\nmpl/plt/' file Addressing your expanded question: find /path/to/dir/ -type f -exec sh -c ' printf "%s\n" "/^import matplotlib as mpl\$/d" "s/mpl/plt" "w" "q" | ed -s "$1"' sh {} \; | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/627460",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/437566/"
]
} |
627,463 | On Ubuntu, the command add-apt-repository automatically updates the package index after adding a repository, but this feature isn't integrated in Debian Buster. How do I configure add-apt-repository to automatically update the package index on Debian? | In the version available in Debian 10, the -u option will update the package cache: sudo add-apt-repository -u ... (This isn’t documented in the man page.) You can change the default by editing /usr/bin/add-apt-repository : change the default in parser.add_option("-u", "--update", action="store_true", dest="update", default=False, help=_("Update package cache after adding")) | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/627463",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/153195/"
]
} |
627,498 | My understanding of cp src dest is: If src is a file and dest doesn't exist, but its parent directory does, dest is created. If src is a file and dest is a file, dest is overwritten with the contents of src . If src is a file and dest is a directory, dest/src is created. I'm trying to avoid case #3. If I write a script assuming that dest won't be a directory, but it turns out at runtime that it is, I don't want cp to put a file in the wrong place and continue silently. Instead, I want it to either: Delete the whole dest directory and replace it with the desired file. Error out without doing anything. I'd also prefer for this to happen atomically. Using a separate command like test -d to check if dest is a directory would open up an opportunity for TOCTTOU problems. Can this be done with cp ? If not, can it be done with any other similarly ubiquitous command? I'm interested in both portable solutions and solutions that rely on non-standard extensions, as long as they're reasonably common. | A redirection would do that. It behaves as cp 's -T, --no-target-directory (it exits with error if dst is a directory): $ cat src > dst | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/627498",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/7421/"
]
} |
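Side by side, two ways to make case 3 an error; the first line assumes GNU coreutils, the second is the portable one:
cp -T src dest   # GNU cp: fails instead of creating dest/src when dest is a directory
cat src > dest   # POSIX shells: the redirection typically fails with "Is a directory"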
627,513 | For the last three days I have been experiencing random freezes. If I am watching YouTube when this happens, audio keeps playing but the screen is frozen and the keyboard and cursor do not do anything. I tried looking in sudo journalctl and this is what I found: led 04 10:44:02 arch-thinkpad kernel: i915 0000:00:02.0: [drm] *ERROR* Atomic update failure on pipe C (start=113031 end=113032) time 340 us, min 1073, max 1079, scanline start 1062, end 1085led 04 11:09:15 arch-thinkpad kernel: i915 0000:00:02.0: [drm] *ERROR* Atomic update failure on pipe C (start=203838 end=203839) time 273 us, min 1073, max 1079, scanline start 1072, end 1090led 04 11:15:47 arch-thinkpad kernel: i915 0000:00:02.0: [drm] *ERROR* Atomic update failure on pipe C (start=227329 end=227330) time 278 us, min 1073, max 1079, scanline start 1066, end 1085 uname -a returns: Linux arch-thinkpad 5.10.4-arch2-1 #1 SMP PREEMPT Fri, 01 Jan 2021 05:29:53 +0000 x86_64 GNU/Linux I use: i3wm, picom, pulseaudio. I have a Lenovo X390 Yoga with an Intel processor. How can I diagnose and solve this problem? EDIT: Upgrading the Linux kernel to 5.10.16 solved my problem. Still, I will accept the answer of @Sylvain POULAIN for its comprehensive view of the problem and for offering an alternative solution. | 5.10.15 doesn't solve this problem. I still have the same error. Intel bugs have been really annoying since kernel > 4.19.85 (November 2019!) As a workaround, the i915 GuC needs to be enabled as mentioned in the Arch Linux Wiki: https://wiki.archlinux.org/index.php/Intel_graphics#Enable_GuC_/_HuC_firmware_loading and loaded before other modules. To summarize: Add the guc parameter to the kernel parameters by editing /etc/default/grub GRUB_CMDLINE_LINUX="i915.enable_guc=2" Add the guc option to the i915 module by adding an /etc/modprobe.d/i915.conf file with: options i915 enable_guc=2 Add i915 to /etc/mkinitcpio.conf : MODULES=(i915) Rebuild the kernel initramfs (needs a reboot after a successful build): # mkinitcpio -P Remove xf86-video-intel (the driver is already in the kernel): # pacman -Rscn xf86-video-intel | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/627513",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/429595/"
]
} |
627,518 | I am trying to install Linux on my Lenovo IdeaPad 1 11ADA05. I have prepared the installer on an SD card. It boots into the SD card fine, but when I try to install the distro, it can't find my SSD. My SSD model is an eMMC card 64gb Ramexal SSD. I have tried multiple different distros, including Arch, Manjaro, Ubuntu, Mint, Kubuntu, and GNU Guix. Here's the output of fdisk -l Disk /dev/loop0: 81.81 MiB, 85786624 bytes, 167552 sectorsUnits: sectors of 1 * 512 = 512 bytesSector size (logical/physical): 512 bytes / 512 bytesI/O size (minimum/optimal): 512 bytes / 512 bytesDisk /dev/loop1: 537.95 MiB, 564084736 bytes, 1101728 sectorsUnits: sectors of 1 * 512 = 512 bytesSector size (logical/physical): 512 bytes / 512 bytesI/O size (minimum/optimal): 512 bytes / 512 bytesDisk /dev/loop2: 1.31 GiB, 1404850176 bytes, 2743848 sectorsUnits: sectors of 1 * 512 = 512 bytesSector size (logical/physical): 512 bytes / 512 bytesI/O size (minimum/optimal): 512 bytes / 512 bytesDisk /dev/loop3: 656.67 MiB, 688570368 bytes, 1344864 sectorsUnits: sectors of 1 * 512 = 512 bytesSector size (logical/physical): 512 bytes / 512 bytesI/O size (minimum/optimal): 512 bytes / 512 bytesDisk /dev/sda: 29.72 GiB, 31914983424 bytes, 62333952 sectorsDisk model: MassStorageClassUnits: sectors of 1 * 512 = 512 bytesSector size (logical/physical): 512 bytes / 512 bytesI/O size (minimum/optimal): 512 bytes / 512 bytesDisklabel type: dosDisk identifier: 0x00000000Device Boot Start End Sectors Size Id Type/dev/sda1 * 64 5496075 5496012 2.6G 0 Empty/dev/sda2 5496076 5504267 8192 4M ef EFI (FAT-12/16/32) Is there any way to get my SSD to work or am I stuck on Windows? | 5.10.15 doesn't solve this problem. I still have the same error. Intel bugs have been really annoying since kernel > 4.19.85 (November 2019!) As a workaround, the i915 GuC needs to be enabled as mentioned in the Arch Linux Wiki: https://wiki.archlinux.org/index.php/Intel_graphics#Enable_GuC_/_HuC_firmware_loading and loaded before other modules. To summarize: Add the guc parameter to the kernel parameters by editing /etc/default/grub GRUB_CMDLINE_LINUX="i915.enable_guc=2" Add the guc option to the i915 module by adding an /etc/modprobe.d/i915.conf file with: options i915 enable_guc=2 Add i915 to /etc/mkinitcpio.conf : MODULES=(i915) Rebuild the kernel initramfs (needs a reboot after a successful build): # mkinitcpio -P Remove xf86-video-intel (the driver is already in the kernel): # pacman -Rscn xf86-video-intel | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/627518",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/427024/"
]
} |
627,521 | I am using Linux Mint 20. I am using a vpn with a kill switch ( protonvpn-cli ks --on ). So, if the vpn connection drops for some reason, the network gets disconnected. When the network gets disconnected, my youtube-dl download stops permanently with the error ERROR: Unable to download JSON metadata: <urlopen error [Errno -2] Name or service not known> (caused by URLError(gaierror(-2, 'Name or service not known'))) The issue is, I want youtube-dl to pause instead of closing, and resume when the connection is back. I checked Retry when connection disconnect not working but I do not think it is relevant to my problem. My config file looks like --abort-on-error--no-warnings--console-title--batch-file='batch-file.txt'--socket-timeout 10--retries 10--continue--fragment-retries 10 As I use batch files, I do not want to start the process from the beginning. I just want to pause the youtube-dl process till I get connected again and then continue the process. How can I do that? Update 1: So far, what I have found is, to pause a process we can do something like: $ kill -STOP 16143 To resume a process we can do something like: $ kill -CONT 16143 I am not sure, but I think we can know if my network is up or not by pinging 1 2 : #!/bin/bashHOSTS="cyberciti.biz theos.in router"COUNT=4for myHost in $HOSTSdo count=$(ping -c $COUNT $myHost | grep 'received' | awk -F',' '{ print $2 }' | awk '{ print $1 }') if [ $count -eq 0 ]; then # 100% failed echo "Host : $myHost is down (ping failed) at $(date)" fidone However, it does not seem like an efficient solution. Linux: execute a command when network connection is restored suggested using ifplugd or using /etc/network/if-up.d/ . There is another question and a blog post which mention using /etc/NetworkManager/dispatcher.d . As I am using Linux Mint, I think any solution revolving around NetworkManager will be easier for me. | 5.10.15 doesn't solve this problem. I still have the same error. Intel bugs have been really annoying since kernel > 4.19.85 (November 2019!) As a workaround, the i915 GuC needs to be enabled as mentioned in the Arch Linux Wiki: https://wiki.archlinux.org/index.php/Intel_graphics#Enable_GuC_/_HuC_firmware_loading and loaded before other modules. To summarize: Add the guc parameter to the kernel parameters by editing /etc/default/grub GRUB_CMDLINE_LINUX="i915.enable_guc=2" Add the guc option to the i915 module by adding an /etc/modprobe.d/i915.conf file with: options i915 enable_guc=2 Add i915 to /etc/mkinitcpio.conf : MODULES=(i915) Rebuild the kernel initramfs (needs a reboot after a successful build): # mkinitcpio -P Remove xf86-video-intel (the driver is already in the kernel): # pacman -Rscn xf86-video-intel | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/627521",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/206574/"
]
} |
627,523 | I'm trying to run a bitbake command to build an image, but I see the following errors ERROR: No space left on device or exceeds fs.inotify.max_user_watches?ERROR: To check max_user_watches: sysctl -n fs.inotify.max_user_watches.ERROR: To modify max_user_watches: sysctl -n -w fs.inotify.max_user_watches=<value>.ERROR: Root privilege is required to modify max_user_watches. I ran a script to determine how many inotify watches each process holds, and I get the following: INOTIFY WATCHER COUNT PID CMD---------------------------------------- 11978 15732 /snap/sublime-text/97/opt/sublime_text/plugin_host 15717 --auto-shell-env 11978 15717 /snap/sublime-text/97/opt/sublime_text/sublime_text 51 10165 /usr/lib/unity-settings-daemon/unity-settings-daemon 12 1759 /usr/lib/gvfs/gvfsd-trash --spawner :1.6 /org/gtk/gvfs/exec_spaw/0... Running the following command returns the configured max watch count, which is greater than 11978, and I'm still seeing the same error. $ sysctl -n fs.inotify.max_user_watches12288 Is there anything else I should be looking into? | 5.10.15 doesn't solve this problem. I still have the same error. Intel bugs have been really annoying since kernel > 4.19.85 (November 2019!) As a workaround, the i915 GuC needs to be enabled as mentioned in the Arch Linux Wiki: https://wiki.archlinux.org/index.php/Intel_graphics#Enable_GuC_/_HuC_firmware_loading and loaded before other modules. To summarize: Add the guc parameter to the kernel parameters by editing /etc/default/grub GRUB_CMDLINE_LINUX="i915.enable_guc=2" Add the guc option to the i915 module by adding an /etc/modprobe.d/i915.conf file with: options i915 enable_guc=2 Add i915 to /etc/mkinitcpio.conf : MODULES=(i915) Rebuild the kernel initramfs (needs a reboot after a successful build): # mkinitcpio -P Remove xf86-video-intel (the driver is already in the kernel): # pacman -Rscn xf86-video-intel | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/627523",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/443333/"
]
} |
627,540 | I have the following directory structure: /home/ |__dirA/ |__ dir_to_zip/ |__ file1.txt |__ file2.txt |__ ... |__dirB/ |__ dir_to_save_in/ |__dirC/ backup_zip_script.sh backup_zip_script.sh contains (among other things) the command (src): tar -czvf /home/dirB/dir_to_save_in/backup.tar.gz /home/dirA/dir_to_zip That is: I wish to zip up the directory ~/dirA/dir_to_zip , save the zipped archive in ~/dirB/dir_to_save_in/ , and do this using a script that lives in ~/dirC/ . The tar command above sort of works, but unzipping the saved archive yields the full directory structure: /home/ |__dirA/ |__ dir_to_zip/ |__ file1.txt |__ file2.txt |__ ... When I just want: dir_to_zip/ |__ file1.txt |__ file2.txt |__ ... How do I zip a directory in another location (such that I'm forced to specify the full absolute path), and yet zip only that directory and not all the levels before it? I could also simply use cd /home/dirA/ && tar... ...but I'd prefer to see if other solutions exist as well. | As mentioned in the comment, use the -C option to "change to the target directory" if your tar implementation supports it: tar -czvf /home/dirB/dir_to_save_in/backup.tar.gz -C /home/dirA dir_to_zip This option can also be used when extracting the archive, e.g. tar -xzvf /home/dirB/dir_to_save_in/backup.tar.gz -C /home/dirA to extract the archive's content into /home/dirA . | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/627540",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/370005/"
]
} |
627,557 | This is what I get by doing parted -l : $ sudo parted -lModel: ATA TOSHIBA DT01ACA1 (scsi)Disk /dev/sda: 1000GBSector size (logical/physical): 512B/4096BPartition Table: msdosDisk Flags:Number Start End Size Type File system Flags1 1049kB 512MB 511MB primary ext4 boot2 513MB 1000GB 1000GB extended5 513MB 1000GB 1000GB logical btrfs I want to partition my hard drive for dual-booting. The hard drive I am now using has a GNU/Linux distro installed (Parrot OS). Is there a way that I can partition the hard disk (I think /dev/sda ) without losing its data? Such that I can install MS Windows in the new partition? | As mentioned in the comment, use the -C option to "change to the target directory" if your tar implementation supports it: tar -czvf /home/dirB/dir_to_save_in/backup.tar.gz -C /home/dirA dir_to_zip This option can also be used when extracting the archive, e.g. tar -xzvf /home/dirB/dir_to_save_in/backup.tar.gz -C /home/dirA to extract the archive's content into /home/dirA . | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/627557",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/449482/"
]
} |
627,558 | I'm running Ubuntu 19.04. I recently needed to install a browser other than chromium or Firefox in order to play a flash video. I've now ended up with a much more complicated problem. Every time I try to do something with dpkg, I get an error like: dpkg: error: dpkg frontend lock is locked by another process To try and fix this, I have tried commands such as: sudo dpkg -l | grep ^..r to figure out what the offending process is, but there's nothing there. I've also sudo rm ed a bunch of folders like /var/lib/apt/lists/lock . No luck, and I have still not been able to install any packages. I cannot think of a reason behind this, except: I changed my sources.list file recently; and downloading the Chrome (the non-free) browser. I have no idea what the connection would be in either case though. Any ideas what I would be able to do to fix this? | When Ubuntu starts, the auto-update service is executed automatically; that's why you received the error. The best practice is to let the auto-update complete its task. If you need to interrupt it, you can do: sudo pkill aptsudo pkill dpkgsudo dpkg --configure -asudo apt update | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/627558",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/335584/"
]
} |
627,564 | I have stdout with lots of blocks of text that look something like this: % QUESTIONWho played drums for The Beatles?% QUESTIONWho playedguitarfor The Beatles?% QUESTIONWho playedbass for The Beatles? The idea here is that the file is divided into "chunks" where each chunk begins with the line % QUESTION . I'd like to write a script that will print the nth chunk of this data. For example, issuing nthchunk 3 should print Who playedbass for The Beatles? How could I go about doing this? | With the awk implementations that support a regexp as their record separator ( RS ) such as GNU awk , you could do: awk -v n=3 -v RS='(\n+|^)% QUESTION\n' 'NR == n+1 {print; exit}' < questions.txt | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/627564",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/92703/"
]
} |
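For awk implementations without regexp RS support, a portable sketch that counts the % QUESTION headers instead; the NF test skips the blank separator lines:
awk -v n=3 '
    /^% QUESTION$/ { c++; next }
    c == n && NF   { print }
    c >  n         { exit }' questions.txt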
627,576 | I have my favorite applications as an autostart and I want one to be automatically placed in another workspace. How can I do that? Are there any tips, tables with ready-made solutions? The only thing that came to my mind is a shortcut that I would create with this tutorial How to move a window to another workspace in Xfce? Ctrl+Alt+Shift+↑ only, but how do I enter it here then? | With the awk implementations that support a regexp as their record separator ( RS ) such as GNU awk , you could do: awk -v n=3 -v RS='(\n+|^)% QUESTION\n' 'NR == n+1 {print; exit}' < questions.txt | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/627576",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/449510/"
]
} |
627,635 | Trying to upgrade nodejs on ubuntu 20.10. Ran the official installation instructions: curl -sL https://deb.nodesource.com/setup_14.x | sudo -E bash -sudo apt-get install -y nodejs Got the following error: The following packages will be upgraded: nodejs1 upgraded, 0 newly installed, 0 to remove and 5 not upgraded.1 not fully installed or removed.Need to get 0 B/24.5 MB of archives.After this operation, 119 MB of additional disk space will be used.(Reading database ... 277425 files and directories currently installed.)Preparing to unpack .../nodejs_14.15.4-deb-1nodesource1_amd64.deb ...Unpacking nodejs (14.15.4-deb-1nodesource1) over (12.18.2~dfsg-1ubuntu2) ...dpkg: error processing archive /var/cache/apt/archives/nodejs_14.15.4-deb-1nodesource1_amd64.deb (--unpack): trying to overwrite '/usr/share/doc/nodejs/api/dgram.json.gz', which is also in package nodejs-doc 12.18.2~dfsg-1ubuntu2dpkg-deb: error: paste subprocess was killed by signal (Broken pipe)Errors were encountered while processing: /var/cache/apt/archives/nodejs_14.15.4-deb-1nodesource1_amd64.debE: Sub-process /usr/bin/dpkg returned an error code (1) I've looked at other StackOverflow answers recommending that I either try to uninstall nodejs-doc (the conflicting dependency) or to run the following command: sudo dpkg -i --force-overwrite /usr/share/doc/nodejs/api/dgram.json.gz Neither seemed to work. In the case of the above command, it said that the file needed to be a deb package - and anyway, I'm a little skeptical about that strategy as it could break my setup. When attempting to remove nodejs-doc, I got the following output: You might want to run 'apt --fix-broken install' to correct these.The following packages have unmet dependencies: nodejs : Depends: libnode72 (= 12.18.2~dfsg-1ubuntu2) but it is not going to be installed Recommends: nodejs-doc but it is not going to be installed I've also tried running the recommended apt --fix-broken install but it doesn't seem to help. | You need to enable the universe repository which provides the missing dependencies libnode72 (= 12.18.2~dfsg-1ubuntu2) and nodejs-doc (12.18.2~dfsg-1ubuntu2) in Ubuntu 20.10 : sudo add-apt-repository universesudo apt install libnode72 nodejs-doc dpkg -i --force-overwrite should point to the .deb file: sudo dpkg -i --force-overwrite /var/cache/apt/archives/nodejs_14.15.4-deb-1nodesource1_amd64.deb | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/627635",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/449568/"
]
} |
627,689 | How to build a debian package from a bash script and a systemd service? The systemd service will control the script by starting/stopping it, ready to use once the .deb is installed successfully. From a web search there are some easy examples to convert only a single file (python, shell, ruby ... script) to .deb. | Here’s a minimal source package which will install a shell script and an associated service. The tree is as follows: minpackage ├── debian │ ├── changelog │ ├── control │ ├── install │ ├── minpackage.service │ ├── rules │ └── source │ └── format └── script script is your script, with permissions 755; debian/minpackage.service is your service. debian/changelog needs to look something like minpackage (1.0) unstable; urgency=medium * Initial release. -- GAD3R <[email protected]> Tue, 05 Jan 2021 21:08:35 +0100 debian/control should contain Source: minpackage Section: admin Priority: optional Maintainer: GAD3R <[email protected]> Build-Depends: debhelper-compat (= 13) Standards-Version: 4.5.1 Rules-Requires-Root: no Package: minpackage Architecture: all Depends: ${misc:Depends} Description: My super package debian/rules should contain #!/usr/bin/make -f %: dh $@ (with a real Tab before dh). The remaining files can be created as follows: mkdir -p debian/source echo "3.0 (native)" > debian/source/format echo script usr/bin > debian/install To build the package, run dpkg-buildpackage -uc -us in the minpackage directory. This will create minpackage_1.0_all.deb in the parent directory. It will also take care of the systemd maintainer scripts for you, so the service will automatically be enabled when the package is installed, and support the various override mechanisms available in Debian.
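A quick sanity check after the build, assuming the package name above: sudo apt install ./minpackage_1.0_all.deb installs it with dependency resolution, dpkg -L minpackage lists where the files landed (the script should be in /usr/bin), and systemctl status minpackage.service confirms the service was enabled and started. | {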
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/627689",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/153195/"
]
} |
627,691 | i am on my kali linux latest version at the time i am making this post, and i am having an error: whenever i try to open the "HOME" folder on my desktop or the "Trash Can" on my desktop, it shows me an error which says: The folder could not be opened Failed to execute child process "/usr/lib/x86_64-linux-gnu/xfce4/exo-2/exo-helper-2" (no such file or directory) i have a full recorded video on my gdrive which you can see from Here please see the problem and tell me how i can fix it, & why it occurred Thanks | That error means the Xfce desktop is trying to open the folder through exo, but the exo-helper-2 binary is missing from disk; on Kali (as on Debian) that file is shipped by the exo-utils package, which has probably been removed or left half-configured by an upgrade.
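You can verify the owner and reinstall it; a sketch, assuming a standard Kali/Debian layout: dpkg -S /usr/lib/x86_64-linux-gnu/xfce4/exo-2/exo-helper-2 should report exo-utils, and sudo apt update && sudo apt install --reinstall exo-utils libexo-2-0 puts the helper back. Log out and back in (or restart the session) afterwards so xfdesktop picks it up. | {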
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/627691",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/449603/"
]
} |
627,704 | Context $ bash --version GNU bash, version 4.4.19(1)-release (x86_64-redhat-linux-gnu) Copyright (C) 2016 Free Software Foundation, Inc. License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html> This is free software; you are free to change and redistribute it. There is NO WARRANTY, to the extent permitted by law. $ which read /usr/bin/read $ which read /usr/bin/read Can someone explain why Example 1 below works and Example 2 does not? Example 1 - bare read works This: declare data data="pig,cow,horse,rattlesnake," declare -a my_array IFS=',' read -r -a my_array <<< "$data" for item in "${my_array[@]}"; do echo "$item"; done Produces: pig cow horse rattlesnake Example 2 - /usr/bin/read fails This produces no output: declare data data="pig,cow,horse,rattlesnake," declare -a my_array IFS=',' /usr/bin/read -r -a my_array <<< "$data" for item in "${my_array[@]}"; do echo "$item"; done | read is a shell builtin, i.e. a command that is provided by the shell itself rather than by an external program. For more information about shell builtins, see What is the difference between a builtin command and one that is not? read needs to be a builtin because it modifies the state of the shell; specifically, it sets variables containing the output. It's impossible for an external command to set variables of the shell that calls it. See also Why is cd not a program? Some systems also have an external command called read, for debatable compliance reasons. The external command can't do the builtin's whole job: it can read a line of input, but it can't set shell variables to what it read, so the external command can only be used to discard a line of input, not to process it. which read doesn't tell you that a builtin exists because that's not its job. which itself is an external command in bash and other Bourne-style shells (excluding zsh), so it only reports information about external commands. There's very rarely any good reason to call which. The command to find out what a command name stands for is type. bash-5.0$ type read read is a shell builtin
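For completeness, type -a read lists every available form in lookup order (the builtin first, then /usr/bin/read), which is why a bare read always runs the builtin. The external one is still usable when you merely want to consume and discard a line, e.g. seq 3 | { /usr/bin/read x; cat; } should print only 2 and 3, assuming the external read reads no further than the first line as POSIX requires. | {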
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/627704",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/449613/"
]
} |
627,709 | I want to write a shell command that replaces all newlines from all paragraphs in stdout that match a specific regex with spaces. Here, I'm defining a paragraph to be any text bookended by two or more new lines. Specifically, I want to locate all paragraphs of text that do not begin with ( or $ and strip these paragraphs of all newlines. For example, running my script on Aliquam erat volutpat. Nunc ( eleifend leo vitae magna. In (i)yd erat non orci commodo lobortis. Proin $ neque massa, cursus ut, gravida ut, lobortis eget, lacus. Sed diam. Hello world. (Nullam tristique diamnon turpis. Hello $again! $foobar should result in Aliquam erat volutpat. Nunc ( eleifend leo vitae magna. In (i)yd erat non orci commodo lobortis. Proin $ neque massa, cursus ut, gravida ut, lobortis eget, lacus. Sed diam. Hello world. (Nullam tristique diamnon turpis. Hello $again! $foobar Is this possible? I don't mind if there's collateral damage like adding extra newlines (but I'm also curious if it can be done without collateral damage!). | As extra blank lines don't matter gawk 'BEGIN {RS=""} !/^[$(]/ {gsub("\n"," ")} {print;print "\n"}' Explanation. RS="" sets gawk into paragraph mode. !/^[$(]/ matches paragraphs that don't start with ( or $ . gsub("\n"," ") changes newlines into spaces. print;print "\n" outputs the data and a newline.
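Note that paragraph mode itself (RS="") is standard awk rather than a GNU extension, so this isn't gawk-specific. If the extra blank lines do matter after all, a small variant that emits a single blank line between paragraphs: awk 'BEGIN {RS=""; ORS="\n\n"} !/^[$(]/ {gsub(/\n/," ")} 1' file (the trailing 1 is the idiomatic "print the record" pattern). | {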
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/627709",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/92703/"
]
} |