I have a scenario where I want to create a folder structure like the one below. How can I write a for loop in such a way that it would create this structure:

ABC [Parent folder]
-> A1 [child folder]
-> B1 [child folder]
-> C1 [child folder]
-> W1 [child folder]
-> W2 [child folder]
-> W3 [child folder]
-> V1 [child folder]
-> V2 [child folder]
-> V3 [child folder]

I will pass in a file like this:

ABC|A1|B1|C1
ABC|W1|W2|W3
ABC|V1|V2|V3

Note: the above file content should be taken as input and the folders should be created.
# ABC - MAIN PARENT DIRECTORY
# REMAINING AFTER ABC are child folders
Using bash, the canonical way:

while IFS='|' read -r maindir subdir1 subdir2 subdir3; do
  mkdir -p "$maindir/$subdir1" "$maindir/$subdir2" "$maindir/$subdir3"
done < file

Output:

$ tree ABC
ABC
├── A1
├── B1
├── C1
├── V1
├── V2
├── V3
├── W1
├── W2
└── W3
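If the number of child folders per line can vary, a small variation of the same idea reads the remainder of each line into an array. This is only a sketch and assumes bash (arrays and here-strings are not POSIX sh):

# read the parent, then split the rest of the line on '|' into an array
while IFS='|' read -r maindir rest; do
    IFS='|' read -ra subdirs <<< "$rest"
    for d in "${subdirs[@]}"; do
        mkdir -p "$maindir/$d"
    done
done < file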
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/561250", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/388916/" ] }
561,262
I have some problems with VL805. I tried to update the firmware as suggested here https://askubuntu.com/questions/1162391/usb-3-0-pci-card . [ 2.407086] usb 3-1.1: New USB device found, idVendor=090c, idProduct=1000, bcdDevice=11.00[ 2.407087] usb 3-1.1: New USB device strings: Mfr=1, Product=2, SerialNumber=0[ 2.407088] usb 3-1.1: Product: USB DISK[ 2.407088] usb 3-1.1: Manufacturer: SMI Corporation ... [ 3.415543] usb 3-1-port1: cannot reset (err = -22)[ 3.415547] usb 3-1-port1: cannot reset (err = -22)[ 3.415549] usb 3-1-port1: cannot reset (err = -22)[ 3.415551] usb 3-1-port1: cannot reset (err = -22)[ 3.415552] usb 3-1-port1: cannot reset (err = -22)[ 3.415553] usb 3-1-port1: Cannot enable. Maybe the USB cable is bad? ... The card works from BIOS (it can boot from a device plugged into the card). I have disabled iommu in BIOS since there appears to be a known issue with this card and AMD iommu. FYI, CPU virtualization is also turned off. Yet I see [ 1.041848] AMD-Vi: IOMMU performance counters supported[ 1.042086] iommu: Adding device 0000:00:01.0 to group 0 ... [ 1.044209] AMD-Vi: Found IOMMU at 0000:00:00.2 cap 0x40 Do I have to disable the iommu driver also? Uname: Linux 5.0.0-37-lowlatency #40~18.04.1-Ubuntu SMP PREEMPT Thu Nov 14 12:51:01 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux Machine info: efi: EFI v2.60 by American Megatrendsefi: ACPI 2.0=0xdd225000 ACPI=0xdd225000 SMBIOS=0xde48b000 ESRT=0xda62cf98 MEMATTR=0xda62b698 secureboot: Secure boot could not be determined (mode 0)SMBIOS 2.8 present.DMI: Micro-Star International Co., Ltd. MS-7B79/X470 GAMING PRO (MS-7B79), BIOS 1.10 03/29/2018CPU0: AMD Ryzen 7 2700X Eight-Core Processor (family: 0x17, model: 0x8, stepping: 0x2) lspci: 00:00.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) Root Complex 00:00.2 IOMMU: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) I/O Memory Management Unit 00:01.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) PCIe Dummy Host Bridge 00:01.3 PCI bridge: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) PCIe GPP Bridge 00:02.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) PCIe Dummy Host Bridge00:03.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) PCIe Dummy Host Bridge00:03.1 PCI bridge: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) PCIe GPP Bridge00:04.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) PCIe Dummy Host Bridge00:07.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) PCIe Dummy Host Bridge00:07.1 PCI bridge: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) Internal PCIe GPP Bridge 0 to Bus B00:08.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) PCIe Dummy Host Bridge00:08.1 PCI bridge: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) Internal PCIe GPP Bridge 0 to Bus B00:14.0 SMBus: Advanced Micro Devices, Inc. [AMD] FCH SMBus Controller (rev 59)00:14.3 ISA bridge: Advanced Micro Devices, Inc. [AMD] FCH LPC Bridge (rev 51)00:18.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) Data Fabric: Device 18h; Function 000:18.1 Host bridge: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) Data Fabric: Device 18h; Function 100:18.2 Host bridge: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) Data Fabric: Device 18h; Function 200:18.3 Host bridge: Advanced Micro Devices, Inc. 
[AMD] Family 17h (Models 00h-0fh) Data Fabric: Device 18h; Function 300:18.4 Host bridge: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) Data Fabric: Device 18h; Function 400:18.5 Host bridge: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) Data Fabric: Device 18h; Function 500:18.6 Host bridge: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) Data Fabric: Device 18h; Function 600:18.7 Host bridge: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) Data Fabric: Device 18h; Function 703:00.0 USB controller: Advanced Micro Devices, Inc. [AMD] Device 43d0 (rev 01)03:00.1 SATA controller: Advanced Micro Devices, Inc. [AMD] Device 43c8 (rev 01)03:00.2 PCI bridge: Advanced Micro Devices, Inc. [AMD] Device 43c6 (rev 01)16:00.0 PCI bridge: Advanced Micro Devices, Inc. [AMD] Device 43c7 (rev 01)16:01.0 PCI bridge: Advanced Micro Devices, Inc. [AMD] Device 43c7 (rev 01)16:02.0 PCI bridge: Advanced Micro Devices, Inc. [AMD] Device 43c7 (rev 01)16:03.0 PCI bridge: Advanced Micro Devices, Inc. [AMD] Device 43c7 (rev 01)16:04.0 PCI bridge: Advanced Micro Devices, Inc. [AMD] Device 43c7 (rev 01)16:08.0 PCI bridge: Advanced Micro Devices, Inc. [AMD] Device 43c7 (rev 01)18:00.0 Ethernet controller: Realtek Semiconductor Co., Ltd. RTL8111/8168/8411 PCI Express Gigabit Ethernet Controller (rev 15)1a:00.0 USB controller: VIA Technologies, Inc. VL805 USB 3.0 Host Controller (rev 01)1c:00.0 USB controller: ASMedia Technology Inc. ASM1143 USB 3.1 Host Controller1d:00.0 VGA compatible controller: NVIDIA Corporation GP106 [GeForce GTX 1060 6GB] (rev a1)1d:00.1 Audio device: NVIDIA Corporation GP106 High Definition Audio Controller (rev a1)1e:00.0 Non-Essential Instrumentation [1300]: Advanced Micro Devices, Inc. [AMD] Device 145a1e:00.2 Encryption controller: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) Platform Security Processor1e:00.3 USB controller: Advanced Micro Devices, Inc. [AMD] USB 3.0 Host controller1f:00.0 Non-Essential Instrumentation [1300]: Advanced Micro Devices, Inc. [AMD] Device 14551f:00.2 SATA controller: Advanced Micro Devices, Inc. [AMD] FCH SATA Controller [AHCI mode] (rev 51) lsusb: Bus 008 Device 003: ID 0451:8140 Texas Instruments, Inc. Bus 008 Device 002: ID 0451:8140 Texas Instruments, Inc. Bus 008 Device 001: ID 1d6b:0003 Linux Foundation 3.0 root hubBus 007 Device 007: ID 0451:ca01 Texas Instruments, Inc. Bus 007 Device 006: ID 0451:8142 Texas Instruments, Inc. TUSB8041 4-Port HubBus 007 Device 005: ID 0582:01e1 Roland Corp. Bus 007 Device 004: ID 258a:0001 Bus 007 Device 003: ID 1038:1720 SteelSeries ApS Bus 007 Device 002: ID 0451:8142 Texas Instruments, Inc. TUSB8041 4-Port HubBus 007 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hubBus 006 Device 001: ID 1d6b:0003 Linux Foundation 3.0 root hubBus 005 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hubBus 004 Device 001: ID 1d6b:0003 Linux Foundation 3.0 root hubBus 003 Device 003: ID 090c:1000 Silicon Motion, Inc. - Taiwan (formerly Feiya Technology Corp.) Flash DriveBus 003 Device 002: ID 2109:3431 VIA Labs, Inc. HubBus 003 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hubBus 002 Device 001: ID 1d6b:0003 Linux Foundation 3.0 root hubBus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
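The question ends by asking whether the IOMMU driver also needs to be disabled; at the kernel level this is normally done with a boot parameter rather than only in the BIOS. A minimal sketch, assuming GRUB as the bootloader on Ubuntu 18.04 (the defaults for that distribution):

# /etc/default/grub: append amd_iommu=off to the existing kernel command line
GRUB_CMDLINE_LINUX_DEFAULT="quiet splash amd_iommu=off"

# regenerate the GRUB configuration, then reboot
sudo update-grub

Whether this resolves the VL805 reset errors is not established here; it only takes the IOMMU out of the picture for testing.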
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/561262", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/22458/" ] }
561,281
I use the command line heavily and over the years have migrated from bash to zsh as a daily driver shell. I usually use a slightly customized oh-my-zsh environment, but some systems are on prezto; the differences are not large. The most productive plugins I've been using for zsh are zsh-syntax-highlighting and history-substring-search , and lately I've been using the very powerful fzf plugins for pulling up history. Now, I'm finding one of the biggest pain points that remain for me in the command line is command argument reordering. Quite often I try to run a command command very/long/filesystem/path/to/argumentA another/filesystem/or/network/path/argumentB and realize I've got the order backwards. Another even more common situation is when we do any "manual deployment" workflow: First you compare the new stuff with the real stuff, e.g. diff /opt/app/static/www/a.html /home/user/docs/dev/src/a.htmlcp /home/user/docs/dev/src/a.html /opt/app/static/www/a.html Ok, ok, last example (this one has several steps), no more I promise. Perfect real world example right here. Let's get cracking with some file listing with sweet human sizes: find /pool/backups -type f -print0 | xargs -0 ls -lh > filelisting I want to size sort and pick some out interactively: sort -rhk5 filelisting | fzf -m > /these/are/the_chosen Nice, that works! Oh but I need just the paths now, but don't want to re-run find : cut -d ' ' -f 10 /these/are/the_chosen The output is garbage, because we encountered a setback with ls -lh getting frisky with spaces. But I've got a strategy: Join the contiguous spaces. Let's opt for tr -s to squeeze space chars, no need for a regex here. Though, tr requires stdin: cat /these/are/the_chosen | tr -s ' ' | cut -d ' ' -f 9- > filelist By this point, we're feeling the pain of cutting and pasting arguments around in commands. My choices here are always between awkward alternatives: I can move the long path or i can move the commands. With either move, I have to either reach over for the mouse to copy it, or i have to type it again in the new spot. I can't win. With a command line, even navigating around is cumbersome, and extremely so without word hopping hotkeys set up. I can't even use my mouse to jump around rapidly on a shell prompt! (Hey, does anyone know of a shell that supports mouse events?) So, this is the inefficiency that I want to abolish. I want to eliminate the friction of grabbing a shell object (such as a valid path string) and move it freely left and right while I prototype out monster pipelines. If I had that feature I could spam that 5 times to shove the path to the left of the cut & flags, then construct the rest of the pipeline organically. I believe the lack of line editing power is what the issue is here. In the very first example where I want to transpose the first 2 args, I can create a trivial shell script that perhaps I'd call cpto that inverts the arguments and delegates to the cp command. But I don't want to have to do that, and it would not help me in the general case, like in the third example. I'd like to be able to reorder the arguments that I've entered using a simple key combination, like I can do for various types of lists if I'm in Vim with plugins like sideways . Does such a plugin exist for zsh? Does such a plugin exist for any other shells? If not, how difficult would it be to implement for zsh? I think that the zsh-syntax-highlighting plugin proves that it should be possible to tokenize arguments. 
Indeed the shell knows how to fetch individual arguments from history: https://stackoverflow.com/a/21439621/340947 The pain point is so severe and common that I'm liable to write a simple script to bind to a hotkey that grabs the last entry in history and swaps the last 2 args for me, and runs that. But that would not be as ideal as having a line editor operation so that the swap can be done interactively rather than committing to run the command. Perhaps an improvement on that could be injecting !:0 !:2 !:1 (which zsh nicely auto expands for me) but there are plenty of problems with this also: (1) it won't work without already having attempted to run the wrong command. More than half the time I want to swap args after catching myself after having typed an incorrect command, and (2) often there are flags that were used which that snippet would fail to account for. I've also seen the approach shown here which is fine but remains tremendously unsatisfying as the keystrokes need to be repeated a lot for long paths, and the Ctrl+Y behavior only recalls the most recent item that was cut, rather than hold a stack of them. It's good to know, but practically useless to me. For completeness' sake, the tactic taken now is to use whatever suitable key combo to delete words to erase the shorter of the commands to reorder, reposition the cursor, use the mouse to copy the deleted argument from terminal output, and paste it back in. Ordinary folk don't bat an eye at this but it makes me die a little every time I do it because I cannot stop thinking about how easy it would be for the computer to do this task for me, and the injustice that I feel having to reach my hand over to the mouse.
In zsh , by default all the widgets that operate on words including the transpose-words one bound by default to Alt+T in emacs mode work on words that are defined as sequences of alnum+ $WORDCHARS characters. The default value of $WORDCHARS has *?_-.[]~=/&;!#$%^(){}<> , so includes / , so should be fine for you to transpose paths as long as those paths don't include characters outside of that. That won't work for paths that contain things like : , @ , , ... or are quoted though. But you could use the select-word-style framework to change the definition of word on-demand. If you add: autoload -U select-word-stylezle -N select-word-stylebindkey '\ez' select-word-style to you ~/.zshrc , then upon pressing Alt+Z , you'll get the choice: Word styles (hit return for more detail):(b)ash (n)ormal (s)hell (w)hitespace (d)efault (q)uit(B), (N), (S), (W) as above with subword matching? After pressing "return for more detail" : (b)ash: Word characters are alphanumerics only(n)ormal: Word characters are alphanumerics plus $WORDCHARS(s)hell: Words are command arguments using shell syntax(w)hitespace: Words are whitespace-delimited(d)efault: Use default, no special handling (usually same as `n')(q)uit: Quit without setting a new style so pressing S would allow you to transpose two shell words (so including those containing quoted spaces or command substitutions...) with Alt+T (or delete one with Ctrl+W , move back one with Alt+B , etc). See info zsh select-word-style for details (assuming the zsh documentation has been installed on your system ( zsh-doc package on Debian and derivatives)). You'll find a section there that looks like it has been especially written for you which you can adapt to specify how you want transpose-words to behave whenever the cursor is on a filename or in-between words, etc: Here are some examples of use of the word-context style to extend the context. zstyle ':zle:*' word-context \ "*/*" filename "[[:space:]]" whitespace zstyle ':zle:transpose-words:whitespace' word-style shell zstyle ':zle:transpose-words:filename' word-style normal zstyle ':zle:transpose-words:filename' word-chars '' This provides two different ways of using transpose-words depending on whether the cursor is on whitespace between words or on a filename, here any word containing a /. On whitespace, complete arguments as defined by standard shell rules will be transposed. In a filename, only alphanumerics will be transposed. Elsewhere, words will be transposed using the default style for :zle:transpose-words . For instance, with: autoload -U select-word-stylezle -N select-word-stylebindkey '\ez' select-word-styleselect-word-style normalzstyle :zle:transpose-words word-style shell transpose-words would work with shell words always while all other word widgets would use the normal definition of word , and you could still use Alt+Z to change it (for widgets other than transpose-words ).
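For readability, here are the ~/.zshrc additions from the answer above collected into one block (the key binding '\ez', i.e. Alt+Z, is the one suggested there; adjust to taste):

# make select-word-style available and bind it to Alt+Z
autoload -U select-word-style
zle -N select-word-style
bindkey '\ez' select-word-style

# keep the normal word definition everywhere, but make transpose-words
# (Alt+T) operate on whole shell words
select-word-style normal
zstyle :zle:transpose-words word-style shell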
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/561281", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/12497/" ] }
561,322
Here is the issue: I would like to count the number of jobs I have in the HPC, but it is not one of the readily provided features. So I made this simple script:

squeue -u user_name | wc -l

where squeue prints all the jobs like the following:

> squeue -u user_name
  JOBID PARTITION NAME USER ST TIME NODES NODELIST(REASON)
  8840441 theory cteq fxm PD 0:00 1 (Resources)
  8840442 theory cteq fxm PD 0:00 1 (Priority)
  8840443 theory cteq fxm PD 0:00 1 (Priority)
  8840444 theory cteq fxm PD 0:00 1 (Priority)

which would be piped to wc and the number of lines would be counted. However, the first line is not a job entry. How may I instruct wc to skip the first line when counting? Or should I just take the output of wc and subtract one from it? Thanks in advance!
There are many, many ways to do this; the first I thought of was:

squeue -u user_name | tail -n +2 | wc -l

From the man page for tail:

-n, --lines=[+]NUM
       output the last NUM lines, instead of the last 10; or use -n +NUM to
       output starting with line NUM

So for you, -n +2 should skip the first line. You can also use the short form of tail:

tail +2
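An alternative, if the cluster's squeue supports it, is to suppress the header at the source instead of trimming it afterwards; squeue has a --noheader (-h) option for exactly this (check squeue --help on your site, as this assumes a reasonably current Slurm):

# -h / --noheader omits the header line, so wc -l counts only jobs
squeue -h -u user_name | wc -l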
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/561322", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/376814/" ] }
561,565
The error is showing itself like this:

Jan 11 16:39:52 pop-os org.gnome.Nautilus[1514]: [00007fa4fc465ce0] vdpau_chroma filter error: video mixer rendering failure: An invalid handle value was provided.
Jan 11 16:39:52 pop-os org.gnome.Nautilus[1514]: [00007fa4fc465ce0] vdpau_chroma filter error: video mixer features failure: An invalid handle value was provided.
Jan 11 16:39:52 pop-os org.gnome.Nautilus[1514]: [00007fa4fc465ce0] vdpau_chroma filter error: video mixer attributes failure: An invalid handle value was provided.
Jan 11 16:39:52 pop-os org.gnome.Nautilus[1514]: [00007fa4fc465ce0] vdpau_chroma filter error: video mixer rendering failure: An invalid handle value was provided.
Jan 11 16:39:52 pop-os org.gnome.Nautilus[1514]: [00007fa4fc465ce0] vdpau_chroma filter error: video mixer features failure: An invalid handle value was provided.
Jan 11 16:39:52 pop-os org.gnome.Nautilus[1514]: [00007fa4fc465ce0] vdpau_chroma filter error: video mixer attributes failure: An invalid handle value was provided.
Jan 11 16:39:52 pop-os org.gnome.Nautilus[1514]: [00007fa4fc465ce0] vdpau_chroma filter error:

It has consumed my whole SSD.
I think the error is caused by VLC. Try using another Media Player.
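Since the immediate pain point is the log flooding the SSD, it can also help to check and cap the log size while investigating. A sketch using journalctl, assuming the messages are going to the systemd journal (the 200M figure is just an example):

# see how much space the journal currently uses
journalctl --disk-usage

# shrink the stored journal to at most 200 MB right now
sudo journalctl --vacuum-size=200M

# to make the cap permanent, set SystemMaxUse=200M in /etc/systemd/journald.conf
# and restart systemd-journald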
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/561565", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/389701/" ] }
561,577
I want to print all columns from the nth to the last column of a line.

Input string in file:

vddp vddpi vss cb0 cb1 cb2 cb3 ct0 ct1 ct2 ct3

Command:

cat <file> | awk ' { for (i=3; i<=NF; i++) print $i }'

Current output:

cb0
cb1
cb2
cb3
ct0
ct1
ct2
ct3

Desired output:

cb0 cb1 cb2 cb3 ct0 ct1 ct2 ct3

I am trying the awk iteration, but cannot get the desired output.
awk -v n=4 '{ for (i=n; i<=NF; i++) printf "%s%s", $i, (i<NF ? OFS : ORS)}' input

This takes n as the starting field and loops from that number through the last field NF. For each iteration it prints the current value; if that is not the last value on the line it prints OFS after it (a space), and if it is the last value on the line it prints ORS after it (a newline).

$ echo 'vddp vddpi vss cb0 cb1 cb2 cb3 ct0 ct1 ct2 ct3' |
> awk -v n=4 '{ for (i=n; i<=NF; i++) printf "%s%s", $i, (i<NF ? OFS : ORS)}'
cb0 cb1 cb2 cb3 ct0 ct1 ct2 ct3
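If the fields are separated by single spaces, as in the sample line, the same result can be had without a loop; this is a sketch and assumes there are no runs of multiple spaces in the input:

# print fields 4 through the end, keeping them on one line
cut -d ' ' -f4- file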
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/561577", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/93151/" ] }
561,594
I have a simple Bash script I am running to parallelize and automate the execution of a program written in Sage MATH:

#!/bin/bash
for i in {1..500}; do
    echo Spinning up threads...
    echo Round $i
    for j in {1..8}; do
        ../sage ./loader.sage.py &
    done
    wait
done 2>/dev/null

I would like to add a timeout so that on each thread, after 5 seconds, ../sage ./loader.sage.py & will time out, kill the thread, and continue execution. How would I go about doing this? Apologies in advance if this is a noob question; I can't seem to get the syntax right. I am running this in Ubuntu WSL. The program I am calling is written in Python and run through the Sage MATH interpreter, which liaises with Singular.
Using GNU Parallel: parallel --timeout 5 -j 8 -N0 ../sage ./loader.sage.py ::: {1..4000} 2>/dev/null This will execute ../sage ./loader.sage.py 4000 times, 8 jobs at a time, each with a timeout of 5 seconds From the parallel man page: --timeout duration Time out for command. If the command runs for longer than duration seconds it will get killed as per --termseq. Note: This command replaces your entire loop.
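If installing GNU Parallel is not an option, coreutils' timeout can be dropped into the original loop with minimal changes; a sketch (5 seconds is the limit asked for, and the rest of the script is left as it was):

#!/bin/bash
for i in {1..500}; do
    echo Spinning up threads...
    echo Round $i
    for j in {1..8}; do
        # kill the child process if it runs longer than 5 seconds
        timeout 5 ../sage ./loader.sage.py &
    done
    wait
done 2>/dev/null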
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/561594", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/390010/" ] }
561,600
I have around 50 very large CSV files with thousands of lines each. I only want to keep the first 200 lines of each of them, and I'm okay with the generated files overwriting the original ones. What command should I use to do this?
Assuming that the current directory contains all CSV files and that they all have a .csv filename suffix:

for file in ./*.csv; do
    head -n 200 "$file" >"$file.200"
done

This outputs the first 200 lines of each CSV file to a new file using head and a redirection. The new file's name is the same as the old but with .200 appended to the end of the name. There is no check to see if the new filename already exists or not.

If you want to replace the originals:

for file in ./*.csv; do
    head -n 200 "$file" >"$file.200" && mv "$file.200" "$file"
done

The && at the end of the head command makes it so that the mv won't be run if there was some issue with running head.

If your CSV files are scattered in subdirectories under the current directory, then use shopt -s globstar and then replace the pattern ./*.csv in the loop with ./**/*.csv. This will locate any CSV file in or below the current directory and perform the operation on each. The ** globbing pattern matches "recursively" down into subdirectories, but only if the globstar shell option is set.

For CSV files containing data with embedded newlines, the above will not work properly as you may possibly truncate a record. Instead, you would have to use some CSV-aware tool to do the job for you. The following uses CSVkit, a set of command-line tools for parsing and in general working with CSV files, together with jq, a tool for working with JSON files. There is no tool in CSVkit that can truncate a CSV file at a particular point, but we can convert the CSV files to JSON and use jq to only output the first 200 records:

for file in ./*.csv; do
    csvjson -H "$file" | jq -r '.[:200][] | map(values) | @csv' >"$file.200" && mv "$file.200" "$file"
done

Given some CSV file like the below short example,

a,b,c
1,2,3
"hello, world",2 3,4
"hello
there","my good
man",nice weather for ducks

the csvjson command would produce

[
  { "a": "a", "b": "b", "c": "c" },
  { "a": "1", "b": "2", "c": "3" },
  { "a": "hello, world", "b": "2 3", "c": "4" },
  { "a": "hello\nthere", "b": "my good\nman", "c": "nice weather for ducks" }
]

The jq tool would then take this, and for each object in the array (restricted to the first 200 objects), extract the values as an array and format it as CSV. It's probably possible to do this transformation directly with csvpy, another tool in CSVkit, but as my Python skills are non-existent, I will not attempt to come up with a solution that does that.
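A further in-place option, if GNU sed is available (the -i flag is a GNU/BSD extension, not POSIX), is to delete everything after line 200 directly, with the same caveat about embedded newlines:

# GNU sed: drop lines 201 to the end of each file, in place
sed -i '201,$d' ./*.csv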
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/561600", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/45317/" ] }
561,603
In Linux or more particular EXT4 the initial size of directory file is 4kB.But if a large enough number of files are stored within the directory the size of the directory file will increase due to the increase of the internal "file list".However, how many files are needed for this to happen? I have been unable to find a resource that can answer this question.
The format of ext4 directory entries is documented in the kernel . There are two possibilities. For linear directories, each entry occupies eight bytes, plus the file name (zero-terminated), rounded up to four bytes. So n file entries occupy 8 × n bytes plus the lengths of all the file names individually rounded up to four (including the terminating zero). Directories always include . and .. which occupy twelve bytes each. Each linear directory can also have a twelve-byte checksum. The last entry in a block has its record length extended to cover the remaining room in the current block, so that directory entries never straddle two file system blocks. For hash tree directories, the first data block in each directory has a 40-byte root entry (which includes file entries for . and .. ), and each subsequent data block has an 18-byte node. Nodes occupy eight bytes each, and file entries use the same data structure as in a linear directory, ultimately as a linear array. So the amount of space consumed by a directory is harder to compute: each file occupies eight bytes plus the length of its name, rounded up to four bytes, and the tree structure consumes 40 bytes for the first block plus 18 bytes per extra block, and eight bytes per node. If you want to quickly see a directory increase in size, fill it with files with lengthy file names — file names can be up to 254 bytes in length, plus the terminating zero byte, occupying 264 bytes in total, so 16 such entries in either type of directory will require more than 4096 bytes. To determine whether a directory is linear or hashed, examine its inode, e.g. using debugfs : debugfs: show_inode_info /path/to/directoryInode: 7329 Type: directory Mode: 0755 Flags: 0x1000Generation: 2283115506 Version: 0x00000001... The flags will show 0x1000 set if the directory is hashed, unset otherwise.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/561603", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/38562/" ] }
561,751
I came across an ip like this the other day 255.255.255.1/16 I thought the /16 referred to the number of subnetting addresses the IP could generate. However, I suspect I may be horribly wrong and felt that the Linux/Unix experts on this forum could assist.
The value after the slash, i.e. the 24 in your example 192.168.1.0/24 , uses CIDR notation toindicate the number of bits available for network addressing as distinct from host addressing. For IPv4 each IP address is 32 bits, and the network address for a /24 network has 24 bits, so the host addressing for such a network would be 32 - 24 = 8 bits. Let's look at this a bit more closely. Take an example address 192.168.1.0/24 . This says that 24 bits of the 32 are for the network address. Each octet is 8 bits so it become trivially easy to see that this means that 192.168.1 is the network address, and the remainder is for the host. Eight bits gives 2 8 addresses, i.e. 256. The lowest ( 0 ) is unavailable and the highest ( 255 ) is reserved for local network broadcasts, so that leaves room for 254 host addresses, all beginning with 192.168.1 . Now take another example address 192.168.0.0/16 . Here we have 16 bits of the 32 for network addressing, leaving 16 bits for the hosts on the network. We have 2 16 = 65536 host addresses but, as before, two are reserved ( 192.168.0.0 and 192.168.255.255 ) so you have 65534 available addresses for hosts on this network, all starting 192.168 . This is all very easy; where it gets exciting is when the subnet field is not a multiple of eight. For example, you could have a network 192.168.1.128/26 . The same rules apply though; you have 26 bits for the network address and 6 bits for the hosts on that network. 2 6 is 64 and two are reserved so you can have 62 hosts on such a network. Using the ipcalc tool you can see that the valid IP addresses on this network would be 192.168.1.129 to 192.168.1.190 : ipcalc 192.168.1.128/26Address: 192.168.1.128 11000000.10101000.00000001.10 000000Netmask: 255.255.255.192 = 26 11111111.11111111.11111111.11 000000Wildcard: 0.0.0.63 00000000.00000000.00000000.00 111111=>Network: 192.168.1.128/26 11000000.10101000.00000001.10 000000HostMin: 192.168.1.129 11000000.10101000.00000001.10 000001HostMax: 192.168.1.190 11000000.10101000.00000001.10 111110Broadcast: 192.168.1.191 11000000.10101000.00000001.10 111111Hosts/Net: 62 Class C, Private Internet
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/561751", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/390143/" ] }
561,772
This awk expression prints inet 34.45 as expected on OpenBSD: echo "inet 34.45" | awk '/inet [0-9]+\./ { print }' However, when I replace the + with a bound {1,3} , I am not getting any match: echo "inet 34.45" | awk '/inet [0-9]{1,3}\./ { print }' Both expressions work as expected on Linux with gawk. The gawk man page mentions that what it calls interval expressions were not originally supported by awk but later added to POSIX for consistency with egrep. The awk man page on OpenBSD mentions no such thing and just refers to the man page of re_format, which specifies bounds as usual. Is this a bug or some undocumented limitation of OpenBSD awk?
That restriction is precisely documented. From: http://man.openbsd.org/awk.1#STANDARDS STANDARDS The awk utility is compliant with the IEEE Std 1003.1-2008 (“POSIX.1”) specification, except awk does not support {n,m} pattern matching.
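Given that restriction, a workaround on OpenBSD awk is to spell the bound out with optional single characters, which the ERE support it does have will accept; a sketch equivalent to {1,3} in this case:

echo "inet 34.45" | awk '/inet [0-9][0-9]?[0-9]?\./ { print }'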
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/561772", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/14529/" ] }
561,827
I'm attempting to write a bash function that gets the UUID of a VirtualBox VM. I'm pretty new to awk so I'm trying to focus on learning how to solve the problem using it. I'm aware that I can use sed or even cut to solve this. My "raw" output from the VBoxManage list vms is as follows: $ VBoxManage list vms"FreeBSD" {1aac7062-bd59-47ee-9261-2f6aa8d9ef53}"Windows 10" {64942de7-beb9-418c-9f52-5befcb6f577b}"High Sierra" {07f73e1a-a0c4-4190-ade1-79a2e432b4d6}"Test Machine" {9d0953a7-ca2a-4667-8c5b-1a9f550b2956} My desired output is to just get the UUID of a particular VM. Using "Test Machine" for this case, I'm looking for 9d0953a7-ca2a-4667-8c5b-1a9f550b2956 (without the brackets { and } ). After quite a bit of searching and testing, I've come up with $ VBoxManage list vms | awk '/Test Machine/{ sub("{" ,""); sub("}", ""); print $3 }'9d0953a7-ca2a-4667-8c5b-1a9f550b2956 It works, but I have to use to sub commands to extract it. My question is, is there a way to simplify the substitution portion of the action with an or type operator so I don't have to use two sub commands? For example, if I try awk '/Test Machine/{ sub("{" || "}", ""); print $3' it doesn't work - it prints the whole field including the brackets. {9d0953a7-ca2a-4667-8c5b-1a9f550b2956} Is there a better way of extracting that string?
-F sets the field separator in awk. Here we are using two field separators (either { or }):

VBoxManage list vms | awk -F"[{}]" '/Test Machine/{print $2}'
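To answer the literal question about combining the two sub() calls: gsub() with a bracket expression removes both braces in one go, so the original approach can also be written as the following sketch:

# one gsub strips both '{' and '}' from the line before printing field 3
VBoxManage list vms | awk '/Test Machine/{ gsub(/[{}]/, ""); print $3 }'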
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/561827", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/107777/" ] }
561,859
I have a build script that executes a long command that produces lots of output which went like this: ./compile In order to troubleshoot compilation performance, I want to use ts (from moreutils ) which prefixes each output line with a timestamp. So I updated my script like this: bash -c "./compile | ts '[%Y-%m-%d %H:%M:%S]'" This works, but now the exit value is always 0 , event when compile fails (I think, because ts exits without an error). How can I update my script to return compile 's exit code while using ts ?
Since you’re using Bash, you can use its $PIPESTATUS which is an array containing the different exit codes from the commands in a pipeline: bash -c './compile | ts "[%Y-%m-%d %H:%M:%S]"; exit "${PIPESTATUS[0]}"' zsh has a similar feature but uses the $pipestatus array instead (also remember zsh arrays are numbered from 1 , not 0 ).
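Another option in Bash is the pipefail option, which makes a pipeline's exit status that of the rightmost command to fail, so a failing compile is propagated even though ts succeeds; a sketch of the same wrapper:

bash -o pipefail -c './compile | ts "[%Y-%m-%d %H:%M:%S]"'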
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/561859", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/58344/" ] }
561,877
Preamble Suppose I have an encrypted partition of 1TB on machine Acontaining the home directories of a dozen users, let's call thispartition sda2 .I want to backup that partition on a remote computer B on a daily basis.To keep it secure and simple, the backup on B must be an exact image copyof sda2 . I know I can create a local image of sda2 with the dd command,and even pipe it to B through ssh : $ dd if=/dev/sda2 | ssh B dd of=/backups/A.sda2.image The problem with this approach is the sheer size of the partition.1TB of data doesn't pass through the network easily and this puts a limiton the frequency of backup operations---less than once in a month to be realistic. An incremental backup tool is needed at this point. rsync , seems to me, is the solution to the previous difficulty. However I fail when I try to test it because rsync treats /dev/sda2 as an special file and the command: $ rsync /dev/sda2 B:/backups/A.sda2.image doesn't do what I want. Question Is there a way to trick rsync to treat /dev/sda2 a regular file? Note I'm not asking if there is an rsync option to do this (if there is such an option that would be great, but that's only half of the story)I want to know if there is something like a mount command or system call that would allow me to create, for instance, a regular file /mnt/sda2.live_image with the raw contents of /dev/sda2 , so thatother applications can read or write directly on sda2 through sda2.live_image . Any help is much appreciated.
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/561877", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/301613/" ] }
561,927
Is it possible to get file modification time from shell using only POSIX features? Ideally in unix timestamp (seconds). Everything I was able to find was using stat(1) but that does not seem to be defined in POSIX. Is it possible?
The only POSIX CLI (shell and utilities) interface to the lstat() and stat() system calls is ls I'm afraid and its output is not post-processable reliably. A trick could be to use pax -x ustar (both the pax command and its ustar format are specified) to generate a tar file with the file in it and extract the timestamp from the file. echo "$((0$(pax -x ustar -wd -- "$file" | dd 2> /dev/null bs=4 skip=34 count=3 | tr -d '\0')))" The mtime being stored at offset 136 as an octal number. It's encumbered by all the limitations of US-tar format though. Your best bet portably would be to use perl or python : perl -MPOSIX -le ' for (@ARGV) { if (@s = lstat$_) {print $s[9]} else {warn "$_: $!\n"} }' -- "$file" (note that it doesn't include the nanosecond as available on many modern systems. You may be able to get it via the Time::HiRes module for instance, but that assumes it's installed and it's from a recent version).
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/561927", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/112194/" ] }
561,942
Maybe a silly question, but a remote server was down for an extended period today. When it came back up, I realized the downtime was apparently due to a system upgrade, from Debian Buster (stable) to Bullseye (testing). I'm a bit confused, because I am the only superuser on this server, and I have not scheduled any kind of update in some time. I don't run production machines on testing, and I didn't intentionally set the system up to automatically upgrade operating system versions. I do periodically use apt to update and upgrade individual packages, but I certainly did not call for a full release upgrade. Any obvious configuration settings I might have made to trigger this - for example, changing my apt sources unintentionally etc.? I don't want any more unexpected updates. (It's a headless Minecraft server, for what it's worth, and downtime is bad when people want to play.)
There are a two things which could cause this. You can find out which by using: cat /etc/apt/sources.list /etc/apt/sources.list.d/* Using a suite instead of a codename: (1) deb http://ftp.debian.org/debian/ buster main(2) deb http://ftp.debian.org/debian/ stable main Line (1) and (2) are equivalent today, but they won't always be. One day, stable will point to bullseye . When that happens, your machine will automatically change too. If you want control, then use codename buster . Check for the testing suite. That switched from buster to bullseye on 6 July 2019. Multiple distributions: (1) deb http://ftp.debian.org/debian/ buster main(2) deb http://ftp.debian.org/debian/ bullseye main(3) deb http://ftp.debian.org/debian/ testing main If you have something like what's above, then Debian may see several versions of each package. The latest version of a package will be selected unless you've set APT::Default-Release in /etc/apt/apt.conf or explicitly pinned priorities in /etc/apt/preferences.d/ . The next question is why did your sources.list have a strange entry? It could be that you've added a line because you wanted the latest version of a package which was only available in bullseye. In that case you may have added the line, apt update then apt install -t testing some-package . But the problem is that unless you delete that line and do another apt update , or add a APT::Default-Release , you are primed for an upgrade to testing , Another option is 3rd party software. It's common for software which doesn't exist in Debian's official archive to give you a *.deb installer. I've seen *.deb archives include a custom /etc/apt/sources.list.d/*.list so that you get updates. It wouldn't be hard for them to say "well I need version X of this dependency, and I know it exists in bullseye, so I'll create a line to add a bullseye repo". It would by sloppy of them, but not impossible. So how to recover? There are three options at this point: 1: Complete the upgrade - Easiest/quickest 2: Downgrade - Hardest/Least likely to succeed 3: Re-install - Most reliable/Most downtime To Complete the upgrade , first obviously fix the strange line in your /etc/apt/sources.list[.d/] . Then: # Make everything 'bullseye'sudo sed -i \ -e 's/buster/bullseye/g' \ -e 's/unstable/bullseye/g' \ -e 's/stable/bullseye/g' \ -e 's/testing/bullseye/g' \ -e 's/sid/bullseye/g' \ /etc/apt/sources.list \ /etc/apt/sources.list.d/*# Upgradesudo apt updatesudo apt upgradesudo apt dist-upgradesudo apt --fix-broken installsudo apt autoremove Toggle between upgrade , dist-upgrade , --fix-broken install and autoremove until apt finishes successfully everywhere. To downgrade (and this is likely to fail, I can't stress that enough): First back everything up. Then, create /etc/apt/preferences.d/buster : Package: *Pin: release n=busterPin-Priority: 1001 Then upgrade like we did in step 1 sudo sed -i \ -e 's/bullseye/buster/g' \ -e 's/unstable/buster/g' \ -e 's/stable/buster/g' \ -e 's/testing/buster/g' \ -e 's/sid/buster/g' \ /etc/apt/sources.list \ /etc/apt/sources.list.d/*# Upgradesudo apt updatesudo apt upgradesudo apt dist-upgradesudo apt --fix-broken installsudo apt autoremove Toggle between upgrade , dist-upgrade , --fix-broken install and autoremove until apt finishes successfully everywhere. When you are happy, delete /etc/apt/preferences.d/buster
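As a concrete example of the default-release setting mentioned above, if you do need to keep an extra suite in sources.list, pinning a default release keeps routine upgrades on buster; a sketch (the file name under apt.conf.d is arbitrary):

# /etc/apt/apt.conf.d/99default-release
APT::Default-Release "buster";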
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/561942", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/70651/" ] }
562,022
I've already extracted the line numbers needed from the input file (i.e 1 through N). I now need to be able to use the input file with this 1-through-N to output to a new file.The pattern matching has already occurred in a previous function which extracted the line numbers from the input file. Now I need to take this range of numbers using the input file and output that range into a new file. I don't need to do any pattern matching, just need to do 1-through-N to a new file. I'm currently using a bash script with Awk extracting the necessary information to generate the ranged data from the input file. Now I need to take that rage and create a new file (using the input file) from 1 to N. I've tried sed -i '1,'"$N" input.txt > input.log to go from 1 to N using my input file input.txt and outputting to input.log file but I get the following error: sed: -e expression #1, char 3: missing command The line of code above is within a bash script.
head seems like the best tool for the job: head -n "$N" input.txt > input.log With sed , you need to specify a command; based on your approach, that would be the p command and the -n option so it doesn’t print the pattern space by default: sed -n "1,${N}p" input.txt > input.log but sed "$N q" input.txt > input.log would be more efficient, only reading the first N lines and stopping (which is also what head does) instead of reading the complete file (which is what sed "1,${N}p" does). These approaches only produce the same output for N greater than or equal to 1; if it’s less than 1, the behaviours will differ.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/562022", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/390378/" ] }
562,041
I have a tab-delimited file like shown below, and would like to merge the rows based on matches in any of the columns. The number of columns are usually 2, but could vary in some cases and be 3. input: AMAZON NILE ALASKA NILEHELLO MYMANGROVE AMAZONMY NAMEIS NAME desired output: AMAZON NILE ALASKA MANGROVEHELLO MY NAME IS How could one go about this with awk ? Will this work for the below file also?input: apple_bin2file strawberry_24filesmango2files strawberry_39filesapple_bin8file strawberry_39filesdastool_bin6files strawberry_40filesapple_bin6file strawberry_40filesorange_bin004file dastool_bin004filesorange_bin005file dastool_bin005filesapple_bin3file dastool_bin3filesapple_bin5file dastool_bin5filesapple_bin6file dastool_bin6filesapple_bin7file dastool_bin7filesapple_bin8file mango2files expected output in tab-delimited format: apple_bin2file strawberry_24filesmango2files strawberry_39files apple_bin8filedastool_bin6files strawberry_40files apple_bin6fileorange_bin004file dastool_bin004filesorange_bin005file dastool_bin005filesapple_bin3file dastool_bin3filesapple_bin5file dastool_bin5filesapple_bin7file dastool_bin7files Sorry to those who answered, I updated the input files!
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/562041", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/390393/" ] }
562,121
I'm trying to sort a file based on a particular position but that does not work, here is the data and output. ~/scratch$ cat id_researchers_2018_sample id - 884209 , researchers - 1id - 896781 , researchers - 4id - 901026 , researchers - 15id - 904091 , researchers - 1id - 905525 , researchers - 1id - 908660 , researchers - 5id - 908876 , researchers - 7id - 910480 , researchers - 10id - 916197 , researchers - 1~/scratch$ sort -k 28,5 id_researchers_2018_sample id - 884209 , researchers - 1id - 896781 , researchers - 4id - 901026 , researchers - 15id - 904091 , researchers - 1id - 905525 , researchers - 1id - 908660 , researchers - 5id - 908876 , researchers - 7id - 910480 , researchers - 10id - 916197 , researchers - 1 I'm wanting to sort this by the numbers in the last column, like this: id - 884209 , researchers - 1id - 904091 , researchers - 1id - 905525 , researchers - 1id - 916197 , researchers - 1id - 896781 , researchers - 4id - 908660 , researchers - 5id - 908876 , researchers - 7id - 910480 , researchers - 10id - 901026 , researchers - 15
You are intending to sort by column 7 numerically. This can be done with either $ sort -n -k 7 fileid - 884209 , researchers - 1id - 904091 , researchers - 1id - 905525 , researchers - 1id - 916197 , researchers - 1id - 896781 , researchers - 4id - 908660 , researchers - 5id - 908876 , researchers - 7id - 910480 , researchers - 10id - 901026 , researchers - 15 or with $ sort -k 7n fileid - 884209 , researchers - 1id - 904091 , researchers - 1id - 905525 , researchers - 1id - 916197 , researchers - 1id - 896781 , researchers - 4id - 908660 , researchers - 5id - 908876 , researchers - 7id - 910480 , researchers - 10id - 901026 , researchers - 15 These are equivalent. The -n option specifies numerical sorting (as opposed to lexicographical sorting). In the second example above, the n is added as a specifier/modifier to the 7th column specifically. The specification of the sorting key column, -k 7 , will make sort sort the lines on column 7 onwards (the line from column 7 to the end). In this case, since column 7 is last, it mean just this column. If this had mattered, you may have wanted to use -k 7,7 instead ("from column 7 to 7"). If two keys compare equal, sort will use the complete line as the sorting key, which is why we got the result we get for the first four lines in your example. If you had wanted to do a secondary sort on the second column, you would have used sort -n -k 7,7 -k 2,2 , or sort -k 7,7n -k 2,2n (specifying the type of comparison separately for each column). Again, if the 7th and the 2nd columns compare the same between two lines, sort would have used a lexicographical comparison of the complete lines. To sort numerically on character position 29, which corresponds to the first digit of the numerical values at the end of each line in your example data: $ sort -k 1.29n fileid - 884209 , researchers - 1id - 904091 , researchers - 1id - 905525 , researchers - 1id - 916197 , researchers - 1id - 896781 , researchers - 4id - 908660 , researchers - 5id - 908876 , researchers - 7id - 910480 , researchers - 10id - 901026 , researchers - 15 The -k 1.29n means "sort on the key given by the 29th character of the 1st field (onwards, to the end of the line), numerically". The -k 7,7n used in the text above just happens to be equivalent to -k 7.1,7.1n .
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/562121", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/65920/" ] }
562,164
summary: A given system has lots text files with names ~= [type of file].[8-digit date] . To search these files, I like (and wanna keep) using this idiom: find /path/ -name 'file.nnnn*' -print | xargs -e fgrep -nH -e 'text I seek' (where nnnn == 4-digit year) ... and in the past decade I also made find glob across years like find /path/ -name 'file.201[89]*' -print | xargs ... ... but now I can't make find glob across 2019 and 2020 with find /path/ -name 'file.20{19,20}*' -print | xargs ... ... although that "curly-brace globbing" (correct term?) works fine with ls ! Is there a {concise, elegant} way to tell find what I want, without instead doing post- find cleanup (i.e., what I'm doing now) à la find /path/ -name 'file.*' -print | grep -e '\.2019\|\.2020' | xargs ... ? FWIW, I'd prefer a solution that works with xargs . details: I work on a system with lotsa conventions which long precede me and which I cannot change. One of those is, it has lotsa text files with names ~= [type of file].[8-digit date] , e.g., woohoo_log.20191230 . When searching within these files for some given text, I typically (as in, almost always) use the find ... grep idiom (often using Emacs' M-x find-grep ). (FWIW, this is a Linux system with $ find --versionfind (GNU findutils) 4.4.2...$ bash --versionGNU bash, version 4.3.30(1)-release (x86_64-pc-linux-gnu) and I currently lack status to change either of those, if I wanted to.) I often kinda know the year range of the matter-at-hand, and so will try to constrain what find returns (to speed processing), with (e.g.) find /path/ -type f -name 'file.nnnn*' -print | xargs -e fgrep -nH -e 'text I seek' where nnnn == 4-digit year. This WFM, and I like (and wanna keep) using the above idiom ... especially since I can also use it to search across years like find /path/ -type f -name 'file.201[89]*' -print | xargs ... But this new decade seems to be breaking that idiom, and (to me at least) most oddly. (I wasn't here when the last decade changed.) Suppose I choose text that I know is in a file from 2019 && a file from 2020 (as in, I can open the files and see the text). If I currently do find /path/ -name 'file.20{19,20}*' -print | xargs ... grep unexpectedly/annoyingly finishes with no matches found , because $ find /path/ -name 'file.20{19,20}*' -print | wc -l0 But if I do find /path/ -type f -name 'file.*' -print | grep -e '\.2019\|\.2020' | xargs ... grep returns the expected results. Which is nice, but ... ummm ... that's just ugly, esp since this "curly-brace glob" (please correct me if this usage is incorrect or otherwise deprecated) works from ls ! I.e., this shows me the files in the relevant year range (i.e., 2019..2020) ls -al /path/file.20{19,20}* Hence I'd like to know: Am I just not giving find the right glob for this usecase? What do I need to tell find to make it do what ls is capably/correctly doing? Is this a problem with xargs ? If so, I can live with a find ... -exec solution, but ... my brain works better with xargs , so I'd prefer to stay with that if possible. (Call me feebleminded, but -exec 's syntax makes my brain hurt .)
With zsh , you could use recursive globbing and its <x-y> glob operator which matches on ranges of decimal numbers: grep -nHFe 'text I seek' /path/**/file.<2019-2020>*(D-.) (the (D) to also look into hidden ( D ot) dirs as find would; presumably you can omit it if you don't want them, and -. is to restrict to regular file ( . ) identified after symlink resolution ( - )). Note that it would also match on file.00002020 (as that's a decimal number between 2019 and 2020) and like in your approach on file.20201234 as its file.2020 which matches file.<2019-2020> followed by 1234 which matches * . The standard (POSIX sh and utilities) way to do it would be with: find /path \( -name 'file.2019*' -o -name 'file.2020*' \) -type f \ -exec grep -Fne 'text I seek' /dev/null {} + (where adding /dev/null gets you the same effect as GNU grep 's -H to force the file name to be displayed) Note that the output of find -print is not compatible with the expected input format of xargs . With GNU utilities, you can use find -print0 and xargs -r0 , but that's not needed as find -exec ... {} + has the same behaviour, is shorter and more portable.
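If your find is GNU find (which the find --version output in the question suggests), an ERE alternation can also express the two-year range in a single test; note this is GNU-specific, not POSIX:

find /path -regextype posix-extended -type f \
     -regex '.*/file\.(2019|2020).*' \
     -exec grep -Fne 'text I seek' /dev/null {} +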
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/562164", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/38638/" ] }
562,168
Should I make MBR or GPT for bootable USB for the installation of centOS 8? Does it depend on the partitioning type that my existing Windows is? Will it make serious error during installation if the USB is wrong partitioning type?
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/562168", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/390519/" ] }
562,173
These two files are artefacts of Office 2011 that are preventing removal the 'Contents' directory. How do I delete these? sh-3.2# ls -l -atotal 21496drwx------+ 15 Beef staff 480 Jan 14 23:32 .drwxr-xr-x+ 19 Beef staff 608 Jan 14 23:40 ..-rw-r--r--@ 1 Beef staff 10244 Jan 10 10:36 .DS_Store-rw-r--r-- 1 Beef staff 0 Dec 25 23:13 .localizeddrwxrwxrwx 47 Beef staff 1504 Jan 2 00:01 CSC119 - 1228drwxrwxrwx 7 Beef staff 224 Jan 1 22:50 Chris 1228drwxrwxr-x@ 3 Beef staff 96 Aug 25 2010 Contentsdrwxrwxrwx@ 5 Beef staff 1 sh-3.2# cd Contentssh-3.2# ls -l -atotal 0drwxrwxr-x@ 3 Beef staff 96 Aug 25 2010 .drwx------+ 15 Beef staff 480 Jan 14 23:32 .. Specifically the these to above '.' and '..' files in the Contents dir.I am using a bash shell on my Mac terminal. I am in root. Thanks! sh-3.2# ls -a -ltotal 21496drwx------+ 15 Beef staff 480 Jan 14 23:32 .drwxr-xr-x+ 19 Beef staff 608 Jan 15 00:19 ..-rw-r--r--@ 1 Beef staff 10244 Jan 10 10:36 .DS_Store-rw-r--r-- 1 Beef staff 0 Dec 25 23:13 .localizeddrwxrwxrwx 47 Beef staff 1504 Jan 2 00:01 CSC119 - 1228drwxrwxrwx 7 Beef staff 224 Jan 1 22:50 Chris 1228drwxrwxr-x@ 3 Beef staff 96 Aug 25 2010 Contentsdrwxrwxrwx@ 5 Beef staff 160 Dec 28 12:09 Lily 1228drwxrwxrwx 29 Beef staff 928 Jan 3 21:48 Old Dell Laptoplrwxr-xr-x 1 Beef staff 29 Dec 26 02:51 Relocated Items -> /Users/Shared/Relocated Items-rwxrwxrwx@ 1 Beef staff 1554944 Jan 8 2019 SanDisk Flashback.pdfdrwxrwxrwx 4 Beef staff 128 Jan 8 2019 SanDiskSecureAccess-rwxrwxrwx 1 Beef staff 8600360 Nov 4 2016 SanDiskSecureAccessV3.01_win.exedrwxr-xr-x@ 9 Beef staff 288 Jan 10 11:41 Scanned Docsdrwxrwxrwx 4 Beef staff 128 Dec 19 10:04 googlesh-3.2# rmdir Contentsrmdir: Contents: Directory not emptysh-3.2# rmdir -p Contentsrmdir: Contents: Directory not empty How do I get rid of this dir? sh-3.2# ls -a -ltotal 21496drwx------+ 15 Beef staff 480 Jan 14 23:32 .drwxr-xr-x+ 19 Beef staff 608 Jan 15 00:19 ..-rw-r--r--@ 1 Beef staff 10244 Jan 10 10:36 .DS_Store-rw-r--r-- 1 Beef staff 0 Dec 25 23:13 .localizeddrwxrwxrwx 47 Beef staff 1504 Jan 2 00:01 CSC119 - 1228drwxrwxrwx 7 Beef staff 224 Jan 1 22:50 Chris 1228drwxrwxr-x@ 3 Beef staff 96 Aug 25 2010 Contentsdrwxrwxrwx@ 5 Beef staff 160 Dec 28 12:09 Lily 1228drwxrwxrwx 29 Beef staff 928 Jan 3 21:48 Old Dell Laptoplrwxr-xr-x 1 Beef staff 29 Dec 26 02:51 Relocated Items -> /Users/Shared/Relocated Items-rwxrwxrwx@ 1 Beef staff 1554944 Jan 8 2019 SanDisk Flashback.pdfdrwxrwxrwx 4 Beef staff 128 Jan 8 2019 SanDiskSecureAccess-rwxrwxrwx 1 Beef staff 8600360 Nov 4 2016 SanDiskSecureAccessV3.01_win.exedrwxr-xr-x@ 9 Beef staff 288 Jan 10 11:41 Scanned Docsdrwxrwxrwx 4 Beef staff 128 Dec 19 10:04 googlesh-3.2# sudo rm -r -f Contentsrm: Contents: Directory not empty It still won't delete.
Your directory looks empty, but the ls output indicates that there is a file in there since the link count for the directory is 3 rather than 2 (an empty directory on an APFS filesystem should have a link count of 2). This implies that your filesystem has managed to get itself into an inconsistent state and that you should probably run fsck on it. On macOS, this is done by running the disk first aid via the "Disk Utility" app. If the issue is with your main boot disk (often called "Macintosh HD", at least by default), then you should do this from recovery mode. If this is the same issue that I encountered , then it will most likely be solved by first making sure that you are on a recent version of macOS (somewhere past March 2019) before running the first aid on the disk in recovery mode. You get into recovery mode by pressing Cmd+R as soon as you reboot the machine. Once in recovery mode, you will be presented with the opportunity to run "Disk Utility" from a menu.
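For reference, the same check can be run from Terminal instead of the Disk Utility app; a sketch (the volume path is an example, list yours first, and the boot volume is still best repaired from recovery mode):

# list volumes to find the right identifier
diskutil list

# verify, then repair if problems are reported
diskutil verifyVolume /Volumes/YourDisk
diskutil repairVolume /Volumes/YourDisk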
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/562173", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/390465/" ] }
562,347
What is the difference between the od, hd, hexdump and xxd commands? They are all commands for dumping files, and they can all dump in various formats such as hexadecimal, octal or binary. Why create different programs?
Unix, of which Linux is just one flavour, has a long and rich history. It has not been developed by a single company or group, nor following a master plan, and has evolved by adaption to many niches. You can find many examples where multiple tools cover similar or the same functionality. They have been implemented by different people at different times for similar purposes; check their manpages for hints. Thanks to the rise of Open Source in general, and the possibilities of the information age, we can enjoy the benefit of many of these tools being generally available to our use. The attempt to merge them into one will result in one more being available. Enjoy; these are amazing times! A selection for further reading: https://en.wikipedia.org/wiki/History_of_Unix http://www.catb.org/esr/writings/taoup/html/historychapter.html http://www.catb.org/esr/writings/cathedral-bazaar/ https://www.levenez.com/unix/
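To make the overlap concrete, here is the same three-byte input dumped by each tool; the byte values are what matters, and the exact column spacing varies a little between implementations:
$ printf 'ABC' | od -An -tx1
 41 42 43
$ printf 'ABC' | xxd
00000000: 4142 43                                  ABC
$ printf 'ABC' | hexdump -C
00000000  41 42 43                                          |ABC|
00000003
All of them report bytes 0x41 0x42 0x43; they differ mainly in default grouping, offset display, and whether an ASCII column is shown.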
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/562347", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/282650/" ] }
562,376
Example: file1
Speed: 50.00 Temperature: 120.00
Speed: 51.00 Temperature: 121.00
Speed: 52.00 Temperature: 122.00
file2
50.00 120.00
51.00 121.00
52.00 122.00
I want to produce file2 from file1.
awk '{print $2, $4}' file1 > file2
awk splits each line on whitespace by default, so $2 and $4 are the numeric values that follow the Speed: and Temperature: labels; printing them writes the two-column output to file2.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/562376", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/243091/" ] }
562,461
I have a small script which simply shall go through my Downloads folder and then sort the files according to the extension. How can I make this cleaner/better ? I would like to simply maintain a list of extensions and corresponding directories and have the command run with e.g. a for loop, so I don't have to add a new line every time I want to add an extension. script as it is now: #!/bin/shLOCKFILE=/tmp/.hiddensync.lockif [ -e $LOCKFILE ] then echo "Lockfile exists, process currently running." echo "If no processes exist, remove $LOCKFILE to clear." echo "Exiting..." exitfitouch $LOCKFILEtimestamp=`date +%Y-%m-%d::%H:%M:%s`echo "Process started at: $timestamp" >> $LOCKFILE## Move files to various subfolders based on extensionsfind ~/Downloads -maxdepth 1 -name "*.pdf" -print0 | xargs -0 -I % mv % ~/Downloads/PDF/find ~/Downloads -maxdepth 1 -name "*.opm" -print0 | xargs -0 -I % mv % ~/Downloads/OPM/find ~/Downloads -maxdepth 1 -name "*.yml" -print0 | xargs -0 -I % mv % ~/Downloads/YML/find ~/Downloads -maxdepth 1 -name "*.css" -print0 | xargs -0 -I % mv % ~/Downloads/CSS/find ~/Downloads -maxdepth 1 -name "*.tar.gz" -print0 | xargs -0 -I % mv % ~/Downloads/archives/find ~/Downloads -maxdepth 1 -name "*.zip" -print0 | xargs -0 -I % mv % ~/Downloads/archives/find ~/Downloads -maxdepth 1 -name "*.jpg" -print0 | xargs -0 -I % mv % ~/Downloads/Pictures/find ~/Downloads -maxdepth 1 -name "*.png" -print0 | xargs -0 -I % mv % ~/Downloads/Pictures/find ~/Downloads -maxdepth 1 -name "*.tiff" -print0 | xargs -0 -I % mv % ~/Downloads/Pictures/find ~/Downloads -maxdepth 1 -name "*.pm" -print0 | xargs -0 -I % mv % ~/Downloads/Perl/find ~/Downloads -maxdepth 1 -name "*.xls*" -print0 | xargs -0 -I % mv % ~/Downloads/Excel/find ~/Downloads -maxdepth 1 -name "*.doc*" -print0 | xargs -0 -I % mv % ~/Downloads/Word/echo "Task Finished, removing lock file now at `date +%Y-%m-%d::%H:%M:%s`"rm $LOCKFILE
When there are multiple extensions for a destination, you could put more logic into the find directives: find ~/Downloads -maxdepth 1 \( -name "*.tar.gz" -o -name "*.zip" \) -print0 | xargs -0 -I % mv % ~/Downloads/archives/ And you don't need to pipe to xargs: find ~/Downloads -maxdepth 1 \( -name "*.tar.gz" -o -name "*.zip" \) -exec mv -t ~/Downloads/archives/ {} + Since you have -maxdepth 1 , do you really need find ? shopt -s nullglobcd ~/Downloadsmv -t archives/ *.tar.gz *.zipmv -t Pictures/ *.jpg *.png *.tiff# etc This approach will emit some errors if there are no files to move. You can get around that with something like: shopt -s nullglobmovefiles() { local dest=$1 shift if (( $# > 0 )); then mkdir -p "$dest" mv -t "$dest" "$@" fi}cd ~/Downloadsmovefiles PDF/ *.pdfmovefiles OPM/ *.opmmovefiles YML/ *.ymlmovefiles CSS/ *.cssmovefiles archives/ *.zip *.tar.gzmovefiles Pictures/ *.jpg *.png *.tiffmovefiles Perl/ *.pmmovefiles Excel/ *.xls*movefiles Word/ *.doc* Notes: without nullglob, if no files match a pattern, then the function will receive the pattern as a string. for example, if there are no pdf files, the shell will execute movefiles PDF/ "*.pdf" with nullglob, if there are no matches, then the shell removes the pattern from the command: movefiles PDF/ this is why I check the number of arguments: if no files match, then after shifting, $# is zero and hence there's nothing to move.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/562461", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/390799/" ] }
562,463
Given a string composed of 0 s and 1 s, my goal is to replace 0 by 1 and vice-versa. Example: Input 111111100000000000000 Intended output 000000011111111111111 I tried, unsuccessfully, the following sed command echo '111111100000000000000' | sed -e 's/0/1/g ; s/1/0/g'000000000000000000000 What am I missing?
You can use tr for this, its main purpose is character translation: echo 111111100000000000000 | tr 01 10 Your sed command replaces all 0s with 1s, resulting in a string containing only 1s (the original 1s and all the replaced 0s), and then replaces all 1s with 0s, resulting in a string containing only 0s. On long streams, tr is faster than sed ; for a 100MiB file: $ time tr 10 01 < bigfileof01s > /dev/nulltr 10 01 < bigfileof01s > /dev/null 0.07s user 0.03s system 98% cpu 0.100 total$ time sed y/10/01/ < bigfileof01s > /dev/nullsed y/10/01/ < bigfileof01s > /dev/null 3.91s user 0.11s system 99% cpu 4.036 total
{ "score": 7, "source": [ "https://unix.stackexchange.com/questions/562463", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/40484/" ] }
562,560
I am on Ubuntu, I am trying to install rkhunter. I've tried apt-get install rkhunter success But then, I did rkhunter --update I kept getting Invalid WEB_CMD configuration option: Relative pathname: "/bin/false"
I had the same problem but found the following fix: Open /etc/rkhunter.conf. Uncomment (remove the # to the left) and change the following three variables:
MIRRORS_MODE=1 ---> MIRRORS_MODE=0
UPDATE_MIRRORS=0 ---> UPDATE_MIRRORS=1
WEB_CMD="/bin/false" ---> WEB_CMD=""
--versioncheck and --update should now work. I believe the well-written comments in /etc/rkhunter.conf explain each variable clearly, but, in the tl;dr spirit, here's my quick interpretation of what is happening: The default MIRRORS_MODE=1 tells rkhunter to only use local mirrors, but you have to define them in the mirrors file for this setting to work. Switching to MIRRORS_MODE=0 allows rkhunter to use any mirror. The default UPDATE_MIRRORS=0 only allows the mirrors file to be updated manually. Switching to UPDATE_MIRRORS=1 allows rkhunter to update the file during the --update operation. The default WEB_CMD="/bin/false" purposely blocks rkhunter from connecting to mirrors for security reasons. Switching to WEB_CMD="" re-enables rkhunter's ability to do mirror updates. However, considering this function was purposely disabled for security reasons, it seems like it may be best to update rkhunter using the package manager. That's what I plan to do. Hope this was helpful.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/562560", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/118753/" ] }
562,571
I have an HDD whose physical sector size is 4096 bytes and whose logical sector size is 512 bytes. It is a SATA disk. I'd like Linux to use 4 KiB as the logical sector size as well, not the 512-byte one. How can I achieve this? Is it possible to switch this disk to operate only in 4 KiB mode? How can I be sure that all the partitions I create are aligned to 4 KiB? Do I have to manually calculate the start and end sector numbers of each partition to get 4 KiB alignment? I'm using Linux and sometimes Windows. I mainly create partitions using Linux fdisk, not the Windows one. Maybe using "fdisk -b 4096" is enough? Hm... probably not, because how would Linux know which sector size a given disk uses?
Unless you use options to force a legacy MS-DOS compatible mode, or use expert mode to specify exact LBA block numbers for the beginning and end of partitions, most modern partitioning tools (Linux and otherwise) will align partitions to multiples of 1MB by default. This is also what modern Windows does, and it guarantees compatibility with both 4kB sector size and various SSD and SAN storage devices which might require alignment to larger powers of two for optimal performance. You can use lsblk -t to check the alignment offsets of each partition. If the value in the ALIGNMENT column is zero, then as far as the kernel knows, the partition should be optimally aligned. To switch the HDD sector size, you would first need to verify that your HDD supports the reconfiguration of the Logical Sector Size. Changing the Logical Sector Size will most likely make all existing data on the disk unusable, requiring you to completely repartition the disk and recreate any filesystems from scratch. The hdparm --set-sector-size 4096 /dev/sdX would be the "standard" way to change the sector size, but if there's a vendor-specific tool for it, I would generally prefer to use it instead - just in case a particular disk requires vendor-specific special steps. On NVMe SSDs, nvme id-ns -H /dev/nvmeXnY will tell (among other things) the sector size(s) supported by the SDD, the LBA Format number associated with each sector size, and the currently-used sector size. If you wish to change the sector size, and the desired size is actually supported, you can use nvme format --lbaf=<number> /dev/nvmeXnY to reformat a particular NVMe namespace to a different sector size.
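If you want to check what the kernel currently reports for a disk (here /dev/sdb is only an example device name), lsblk and sysfs show both values:
lsblk -o NAME,LOG-SEC,PHY-SEC
cat /sys/block/sdb/queue/logical_block_size
cat /sys/block/sdb/queue/physical_block_size
fdisk -l /dev/sdb also prints the same "Sector size (logical/physical)" information in its header.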
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/562571", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/254327/" ] }
562,574
I have the following date string: 2020-01-17T06:41:48.000Z. I would like to convert it to an epoch-milliseconds value (something like 15810232300) in the macOS shell. How can I achieve that?
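One approach, sketched here on the assumption that you are using the stock BSD date that ships with macOS (adjust as needed): strip the fractional-seconds suffix, parse the rest as UTC, and multiply the resulting epoch seconds by 1000:
ts="2020-01-17T06:41:48.000Z"
s=$(date -j -u -f "%Y-%m-%dT%H:%M:%S" "${ts%.*}" +%s)
echo $((s * 1000))
Here -j tells BSD date not to set the clock, -u treats the input as UTC (the trailing Z), -f gives the parse format, and ${ts%.*} removes the ".000Z" part. If you have GNU coreutils installed (for example via Homebrew), gdate can do it in one step: gdate -u -d "2020-01-17T06:41:48.000Z" +%s%3N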
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/562574", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/215717/" ] }
562,611
I'm really struggling to understand how logrotate works when running a command within a shell file of my own, and how it doesn't. The command in question is: rclone -L -vv --log-file "/home/mike/tmp/qqq.log" sync "/media/mike/W10 D drive/My Documents/" remote:MyDocuments_M17A_from_Linux What I would like: I would like this qqq.log file to be checked quite often, say every 10 min., to see whether it has exceeded a given size, say 1 MB. And if it has, to rotate the file. rclone with the -vv option produces copious output, deliberately (i.e. to try to understand to get logrotate working for this use case). There's a moderately helpful tutorial here , but it still leaves me in the dark: How are you meant to configure rotation of logs from your own non-system processes? Are you meant to stick lines at the bottom of /etc/logrotate.conf ? This is what I have done (under "#system-specific logs may be configured here"): /home/mike/tmp/qqq.log { notifempty size 1M daily create 0664 root root rotate 3} later: I now realise you can also put individual files in /etc/logrotate.d - no great complexity there. Is it the case that by default logrotate only runs once a day, by virtue of the file /etc/cron.daily/logrotate ? Does that mean that if I want much more frequent checking I should perhaps set up a cron job to do this, running (in this example) every 10 minutes? Are you meant to run logrotate <config file> as root? I ask this question because I have tried both as root and as user. User didn't seem to work. A link to a beginner's guide to this sort of setup, which can't be that uncommon, would be helpful. I have searched but most of what I've found doesn't give you a step-by-step guide.
logrotate can be run as an ordinary user, without administrative privileges, to rotate logs for that user. # Create a local per-user configuration filecat >.logrotate.conf <<'X'/home/mike/tmp/qqq.log { notifempty missingok size 1M rotate 3}X# Run logrotate with that configuration file/usr/sbin/logrotate -v -s .logrotate.state .logrotate.conf I've removed your daily criterion because you wanted purely a size-based check, and this would have limited any possible action to just once a day (the first time each day that logrotate is run, as it happens). I've replaced create with missingok so that it's up to your actual rclone job to create the output file rather than logrotate . Then put the logrotate command into your user's crontab file: # Capture any existing crontab entriescrontab -l >.crontab# Append ours to the listecho '0 * * * * /usr/sbin/logrotate -s .logrotate.state .logrotate.conf >>.crontab.log 2>&1' >>.crontab# Reload crontabcrontab .crontab Using this example, output from the command will be written to .crontab.log , and you'll probably want a logrotate entry to cycle or reset it monthly.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/562611", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/220752/" ] }
562,725
Create files:
touch a1.txt a2.txt a3.txt
touch s1.mp3 s2.mp3 s3.mp3
Then I run find . -name "*.txt" -or -type f -print and it shows only s1.mp3 s2.mp3 s3.mp3. Why isn't it showing the .txt files?
Because of the precedence of the operators: the implicit AND ( -a ) between -type f and -print has higher precedence than the OR ( -o ); your command is similar to find . \( -name "*.txt" \) -or \( -type f -print \) while you probably want find . \( -name "*.txt" -or -type f \) -print to print all the files.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/562725", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/383549/" ] }
562,771
I am creating a website with Wordpress 5.2 on a CentOS Linux 7.7.1908 node. PHP version is 7.x. I have asked assistance to the creator of the theme that I am using. The creator asks me admin access to the WP console in order to see the issue I am encountering and solve it. Can I trust giving WP Admin access to a stranger? Can this login be exploited for hacking the machine?
Admin access in Wordpress gives complete control over the Wordpress settings, content, users, etc. (including exporting it all, e.g., via a backup). I believe it also allows execution of arbitrary code as whatever user Wordpress runs under (likely the web server user), e.g., via installing a Wordpress extension. I believe a Wordpress theme contains of PHP code, though. So (unless you carefully audited the theme) you've already allowed that developer to run arbitrary code on your machine. Of course, if this is a publicly-available theme, the risk is lower (as it isn't targeted at you and detection is more likely). In a lot of cases, you can greatly reduce the risk by setting up a temporary Wordpress instance, e.g., with a VM. You set up the minimal necessary to see the problem. Do not copy over your data (so, e.g., your user database can't be compromised); do not re-use your existing site/domain (to prevent attacks on your users e.g., via JavaScript). You can set up strict firewall rules for the VM (on the hypervisor). After the developer is done, you delete the VM, so you don't even care if somehow the system was compromised. Potentially, you can just send the VM image to the developer, who can then reproduce the problem locally. (If you're not confident of running a VM yourself, you can get one relatively cheaply from any of the many cloud providers.)
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/562771", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/128945/" ] }
562,870
#!/bin/shecho "Noise $1"echo "Enhancement $2"for snr in 0 5 10 15 20 25do python evaluate.py --noise $1 --snr 25 --iterations 1250 --enhancement $2done If $2 is not specified, I don't want to pass the --enhancement $2 argument to my python script. How would I do that?
Modifying your original script: #!/bin/shecho "Noise $1"echo "Enhancement $2"for snr in 0 5 10 15 20 25do python evaluate.py --noise "$1" --snr "$snr" --iterations 1250 ${2:+--enhancement "$2"}done The standard parameter expansion ${var:+word} will expand to word if the variable var is set and not empty. In the code above, we use it to add --enhancement "$2" to the command if $2 is available and not empty. I've also taken the liberty to assume that what you are giving to --snr as an option-argument should be the loop variable's value. My personal touch on the code (mostly just using printf rather than echo , avoiding long lines, and giving the code a bit more air): #!/bin/shprintf 'Noise %s\n' "$1"printf 'Enhancement %s\n' "$2"for snr in 0 5 10 15 20 25; do python evaluate.py \ --noise "$1" \ --snr "$snr" \ --iterations 1250 \ ${2:+--enhancement "$2"}done As mosvy points out in comments below: If your /bin/sh happens to be the dash shell, or some other shell that does not properly reset IFS as a new shell session starts (this is required by POSIX), and if you have, for one reason or other, exported IFS and given it a non-default value, then you may want to use unset IFS at the top of the above script. Do that whenever you have fixed all other issues that exporting IFS doubtlessly have raised (don't export IFS ).
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/562870", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/386283/" ] }
562,880
I'm maintaining a linux server which manages 100 TPS API traffics.Initially, it had only 8GB memory. We recently doubled the memory and doubled the JVM memory consumption upper limit as well. Still the free memory is very low. free -m total used free shared buff/cache availableMem: 15885 9909 156 15 5819 5584Swap: 18431 127 18304 As you can see, free memory is 156MB. When the cache memory is cleared, free memory increases. But still it drops down to same level within few hours. Is this the normal behavior or do I need to increase memory again?
It's a normal behaviour. Any unused memory will be used for cache, to improve overall performances. That memory will be freed automatically if it's needed for something else. You don't have to ever drop caches except to check this specific behaviour and show it to somebody. If the memory were to be kept as free, then your disk I/O activity would likely go up and your application would probably become slower. Also I believe java naturally uses more memory when it sees more memory available, and requires a lot of parameters to prevent it from doing this. If some monitoring tool alarms on memory usage because of this, you should correct the monitoring tool to give a correct report of the actual memory available (eg: here free tells you 5584mb available beside the 156mb free). As a side note, the fact that 157mb of swap are in use isn't a problem as long as it's not swapping in and swapping out (this can be checked with vmstat ). That means 157mb more memory was made available for real activity. That's still something to keep an eye on.
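If you want to confirm that the box is not actively swapping, a simple check (standard procps tooling, nothing specific to your setup) is to run vmstat 5 for a few minutes and watch the si and so columns: sustained non-zero swap-in/swap-out there would indicate real memory pressure, while zeros mean the 127 MB sitting in swap is just parked, inactive data.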
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/562880", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/349901/" ] }
562,919
I packed and compressed a folder to a .tar.gz archive.After unpacking it was nearly twice as big. du -sh /path/to/old/folder = 263Mdu -sh /path/to/extracted/folder = 420M I searched a lot and found out that tar is actually causing this issue by adding metadata or doing other weird stuff with it. I made a diff on 2 files inside the folder, as well as a md5sum. There is absolutely no diff and the checksum is the exact same value. Yet, one file is as twice as big as the original one. root@server:~# du -sh /path/to/old/folder/subfolder/file.mcapm /path/to/extracted/folder/subfolder/file.mcapm1.1M /path/to/old/folder/subfolder/file.mcapm2.4M /path/to/extracted/folder/subfolder/file.mcapmroot@server:~# diff /path/to/old/folder/subfolder/file.mcapm /path/to/extracted/folder/subfolder/file.mcapmroot@server:~# root@server:~# md5sum /path/to/old/folder/subfolder/file.mcapmroot@server:~# f11787a7dd9dcaa510bb63eeaad3f2adroot@server:~# md5sum /path/to/extracted/folder/subfolder/file.mcapmroot@server:~# f11787a7dd9dcaa510bb63eeaad3f2ad I am not searching for different methods, but for a way to reduce the size of those files again to their original size. How can I achieve that?
[this answer is assuming GNU tar and GNU cp] There is absolutely no diff and the checksum is the exact same value. Yet, one file is as twice as big as the original one. 1.1M /path/to/old/folder/subfolder/file.mcapm2.4M /path/to/extracted/folder/subfolder/file.mcapm That .mcapm file is probably sparse . Use the -S ( --sparse ) tar option when creating the archive. Example: $ dd if=/dev/null seek=100 of=dummy...$ mkdir extracted$ tar -zcf dummy.tgz dummy$ tar -C extracted -zxf dummy.tgz$ du -sh dummy extracted/dummy0 dummy52K extracted/dummy$ tar -S -zcf dummy.tgz dummy$ tar -C extracted -zxf dummy.tgz$ du -sh dummy extracted/dummy0 dummy0 extracted/dummy You can also "re-sparse" a file afterwards with cp --sparse=always : $ dd if=/dev/zero of=junk count=100...$ du -sh junk52K junk$ cp --sparse=always junk junk.sparse && mv junk.sparse junk$ du -sh junk0 junk
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/562919", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/391216/" ] }
562,932
My home desktop system is Ubuntu 18.04.1 with kernel regularly updated, currently 5.3.0. From time to time, mostly when browsing but not necessarily, the system becomes slow on IO:- hdd LED constantly on- system slow on all disk request. E.g. console login or ls ~/ takes minutes- system fast on other things (mouse moves, virtual console switching)- iotop shows multiple apps 99% waiting for IO- iostat shows high wrqm, low wrkb/s after a few minutes the system goes into a complete freeze, I only can make a hard reboot What can I do to investigate the problem better?What scheduler would you recommend?If it's a single app killing my hdd, is there a way to disallow it to do so? Update :The disk is HDD, i.e. a spinning disk. The apps showing IO waits are just all doing IO really. No swapping, there is enough memory. No relevant lines in syslog, I'll see /var/log/messages on the next occurrence
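There is not enough information here for a definite diagnosis, so treat the following as a hedged checklist of the usual suspects rather than a fix (all commands are standard Linux tools; package names may differ on Ubuntu):
Check the kernel log for disk errors while a stall is happening: dmesg -T | grep -iE 'ata|i/o error|reset' . Repeated ATA resets or I/O errors point to a failing drive or cable.
Check the drive's SMART health with sudo smartctl -a /dev/sda (from the smartmontools package); reallocated or pending sector counts are the values to watch.
Identify which process is actually issuing the writes with sudo iotop -oPa (the accumulated view); high wrqm with low write throughput often means many small scattered writes, which a spinning disk handles badly.
If a single application is responsible, you can deprioritise its I/O with ionice -c3 -p <PID> (idle class).
On an HDD, bfq or mq-deadline are usually reasonable scheduler choices; the current one is shown by cat /sys/block/sda/queue/scheduler and can be changed by echoing a name into that file.
If the machine still hard-freezes with a healthy disk, capture journalctl -b -1 (the previous boot's log) right after the reboot and look at the minutes before the freeze.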
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/562932", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/170566/" ] }
562,985
I'm trying to get the output from ls /dev to match 'tty' that ends with numbers between 1-4. So from: tty5tty4tty2tty6tty1 Should match: tty4tty2tty1 The regexp "\s([tty]+[0-4])\s" works in RegExr . I've tried using this with grep: ls /dev | grep -E \s([tty]+[0-4])\sls /dev | grep -E \s([tty]\+\[0-4])\sls /dev | grep -Ex \s([tty]+[0-4])\sls /dev | grep -P \s([tty]+[0-4])\s as I've read in other posts, still I can't make it work.
The reason it isn't matching is that you are looking for whitespace ( \s ) before the string tty and at the end of your match. That never happens here, since ls will print one entry per line. Note that ls is not the same as ls | command : when the output of ls is piped, that activates the -1 option, causing ls to print only one entry per line. It will work as expected if you just remove those \s : ls /dev | grep -E '([tty]+[0-4])' However, that will also match all sorts of things you don't want. That regex isn't what you need at all. The [ ] make a character class . The expression [tty]+ is equivalent to [ty]+ and will match one or more t or y . This means it will match t , or tttttttttttttttt , or tytytytytytytytytyt , or any other combination of one or both of those letters. Also, the parentheses are pointless here; they make a capture group but you're not using it. What you want is this:
$ ls /dev | grep '^tty[0-4]$'
tty0
tty1
tty2
tty3
tty4
Note how I added the $ there. That's so the expression matches only tty followed by a single digit from the bracket expression, anchored to the end of the line ( $ ). Note that [0-4] (carried over from your own attempt) also matches tty0; use [1-4] if you really want only 1 to 4. Of course, the safe way of doing this, which avoids all of the dangers of parsing ls , is to use globs instead:
$ ls /dev/tty[0-4]
/dev/tty0 /dev/tty1 /dev/tty2 /dev/tty3 /dev/tty4
or just
$ echo /dev/tty[0-4]
/dev/tty0 /dev/tty1 /dev/tty2 /dev/tty3 /dev/tty4
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/562985", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/391284/" ] }
563,173
I'm learning awk today, but I cannot get even the simplest scripts to work.
#!/usr/bin/env -S awk -f
BEGIN { }
{ }
END { }
this outputs BEGIN: command not found or even
#!/usr/bin/env -S awk -f
{}
this outputs {}: command not found When I launch $ /usr/bin/env -S awk -f , the awk executable does run and displays its default output. And $ awk --version says it's awk version 5.0.1, on nixos 19.09. I need to use /usr/bin/env , because nixos files do not follow the traditional FHS directory hierarchy. I suspect I'm missing something obvious, but looking at awk tutorials and SO questions has not given me any clue so far. EDIT: the command line I use to launch the script is ls -l | . testawk.sh
Sourcing is not the same as executing. Specifically, sourcing expects a list of commands that can be executed in the current shell. The following is from bash's help . : .: . filename [arguments] Execute commands from a file in the current shell. Read and execute commands from FILENAME in the current shell. The entries in $PATH are used to find the directory containing FILENAME. If any ARGUMENTS are supplied, they become the positional parameters when FILENAME is executed. So, when you run . file , your shell will read the file and execute each command it finds. However, this means that the shebang line is ignored and treated like a regular comment. Therefore, your shell and not awk, was attempting to execute BEGIN . To avoid this, you should execute the script instead of sourcing it. If, for some reason, you just have to source it, write an awk command in the script: awk 'BEGIN { }{ }END { }' Then, you can do ls | . ./a.awk Although I can't really think of why you would ever want to. As an aside, you should be aware that . (or source , in bash) looks for file names in your $PATH by default. So, if you run . foo , and have a foo file in the current directory and a foo file in any directory in your $PATH , then the file that will be sourced is the one in your $PATH and not the one in your current directory. To avoid this, always use full paths when sourcing: . ./foo .
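In your case that means executing the script, so the #! line is honoured, instead of sourcing it (assuming testawk.sh is in the current directory):
chmod +x testawk.sh
ls -l | ./testawk.sh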
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/563173", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/4175/" ] }
563,203
I want to write a CGI, which must read a specified number of bytes from STDIN. My idea is to do it this way: dd bs=$CONTENT_LENGTH count=1 But I was wondering, if the block size is limited by anything else but the RAM. $ dd bs=1000000000000dd: memory exhausted by input buffer of size 1000000000000 bytes (931 GiB) The manual page of GNU's coreutils does not specify any limit.
The POSIX specifications for dd don’t specify a maximum explicitly, but there are some limits: the datatype used to store the value given can be expected to be size_t , since that’s the type of the number of bytes to read given to the read function ; read is also specified to have a limit of SSIZE_MAX ; under Linux, read only transfers up to 2,147,479,552 bytes anyway. On a 64-bit platform, size_t is 64 bits in length; in addition, it’s unsigned, so dd will fail when given values greater than 2 64 – 1: $ dd if=/dev/zero of=/dev/null bs=18446744073709551616dd: invalid number: ‘18446744073709551616’ On Linux on 64-bit x86, SSIZE_MAX is 0x7fffffffffffffffL (run echo SSIZE_MAX | gcc -include limits.h -E - to check), and that’s the input limit: $ dd if=/dev/zero of=/dev/null bs=9223372036854775808dd: invalid number: ‘9223372036854775808’: Value too large for defined data type$ dd if=/dev/zero of=/dev/null bs=9223372036854775807dd: memory exhausted by input buffer of size 9223372036854775807 bytes (8.0 EiB) Once you find a value which is accepted, the next limit is the amount of memory which can be allocated, since dd needs to allocate a buffer before it can read into it. Once you find a value which can be allocated, you’ll hit the read limit (on Linux and other systems with similar limits), unless you use GNU dd and specify iflag=fullblock : $ dd if=/dev/zero of=ddtest bs=4294967296 count=10+1 records in0+1 records out2147479552 bytes (2.1 GB, 2.0 GiB) copied, 38.3037 s, 56.1 MB/s ( dd copied just under 2 31 bytes, i.e. the Linux limit mentioned above, not even half of what I asked for). As explained in the Q&A linked above, you’ll need fullblock to reliably copy all the input data in any case, for any value of bs greater than 1.
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/563203", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/7167/" ] }
563,287
I'm trying to dump the env from a systemd service unit and systemctl show-environment doesn't do what I want. Is there any way to systemctl to show me what the environment looks like inside my service?
If your service is running, you can use systemctl status <name>.service to identify the PID(s) of the service process(es), and then use sudo strings /proc/<PID>/environ to look at the actual environment of the process.
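The environ file is NUL-separated, so another readable form, and a one-liner that skips the manual PID lookup on reasonably recent systemd (the unit name myservice.service is just a placeholder), is:
sudo tr '\0' '\n' < /proc/$(systemctl show -p MainPID --value myservice.service)/environ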
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/563287", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/147473/" ] }
563,324
The aws command returns a json string to stdout inside a bash shell. $ aws ssm get-parameter --name /mysite/dev/email{ "Parameter": { "Name": "/mysite/dev/email", "Type": "String", "Value": "[email protected]", "Version": 1 }} I want to return the value of "Value" without the quotes wrapping it. I used a combination of awk and sed linux programs. Basically, awk searches each line in the input and allows you to return a line that has a matching value. Since awk scans a file line by line, allowing you to split the line into fields and perform action on matched pattern, I did this first: $ aws ssm get-parameter --name /mysite/dev/email | awk -F: '/Value/ {print $2}'"[email protected]", As you can see, I used : as a delimited via the F flag. I matched the line with the string "Value" and then printed the second part of the delimited string, which is the value "[email protected]", But the quotes and comma are unwelcomed. So I had to use sed with the -E regex flag to replace the quotes and comma: $ aws ssm get-parameter --name /mysite/dev/email | awk -F: '/Value/ {print $2}' | sed -E 's/"|",//g' [email protected] I got the desired result, but I would prefer to just use awk, instead of having to pipe awk to sed. Is it possible?
For a one-liner, you can just remove all commas and quotes first. awk '/Value/ { gsub(/[",]/,""); print $2}' A better translation of your awk | sed pipeline would be awk '/Value/ { gsub(/[",]/,"",$2); print $2}' to just alter the values in the second field.
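As an aside, if jq happens to be installed, parsing the JSON structurally is more robust than splitting on delimiters: aws ssm get-parameter --name /mysite/dev/email | jq -r '.Parameter.Value' . The AWS CLI can also do it without any pipe at all via --query Parameter.Value --output text (both of these are sketches; adjust to your setup).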
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/563324", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/173692/" ] }
563,350
I am currently using Ubuntu 18.04.3 LTS, and I randomly got a login loop upon restarting my PC. The content of my /var/log/lightdm/lightdm.log is the following:
Greeter connected version 1.27.0 api=1 resettable=false
Greeter start authentication for malek
Session pid=1767: Started with service 'lightdm', username 'malek'
Session pid=1767: Got 1 message(s) from PAM
Prompt greeter with 1 message(s)
Seat seat0 changes active session to
Seat seat0 changes active session to 4
I did select the save session on restart option, which I only started doing two days ago, and I never encountered this problem before. My home directory and all of the files inside of it belong to malek. I am completely stumped, and I don't want to re-install Linux (unless there's a way to do so without losing my users). Thank you for your help.
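Without more logs this can only be a checklist of the usual causes of a lightdm login loop, so treat it as hedged guidance rather than a definite fix. Switch to a text console with Ctrl+Alt+F3, log in as malek, and check: that the root filesystem is not full ( df -h / ), since a full disk makes the session exit immediately; that ~/.Xauthority is owned by you and not by root ( ls -l ~/.Xauthority ), and if it is root-owned, sudo chown malek:malek ~/.Xauthority or simply delete the file so it gets recreated; and the tail of ~/.xsession-errors and /var/log/lightdm/x-0.log for the real error from the session that keeps failing. Since you only recently enabled the "save session" option, also try disabling it, or clearing any saved-session files under ~/.config (the exact path depends on the desktop environment), in case a broken saved session is what the greeter keeps trying to restore. None of these checks require reinstalling, and they leave your user and home directory untouched.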
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/563350", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/391592/" ] }
563,427
I am trying to understand the use of toe command. From the man pages it's difficult for me to figure out what the command does. Plus I cannot find any examples on the internet. from the manpage: toe - table of (terminfo) entries [..] lists all available terminal types by primary name with descriptions Can someone try to provide a simple explanation with examples?
toe lists the terminal descriptions known to Terminfo on the system; by default it only lists descriptions stored in its default directory, rather than all the locations it knows about ( e.g. /etc/terminfo on Debian-based systems), so toe often produces no output. To see something useful, run toe -ha This will list all the Terminfo database entries, with a header showing where they come from: $ toe -ha##/etc/terminfo:###/lib/terminfo:#hurd The GNU Hurd console serverwsvt25m NetBSD wscons in 25 line DEC VT220 mode with Metawsvt25 NetBSD wscons in 25 line DEC VT220 modelinux linux console etc. Each line starts with a value which can be used with the TERM variable so that Terminfo-compatible programs will use the corresponding terminal description. You might recognise xterm and its variants in the list...
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/563427", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/62628/" ] }
563,469
I recently noticed that in normal mode when I type Ctrl-i (command for jumps ) it is "confused" for the TAB key. In particular, I have this mapping: nnoremap <Tab> :tabnext<Enter>
Terminal I/O applications just see the composed characters sent by the terminal , and cannot distinguish amongst specific key chords , while GUI applications can, because GUIs tend to operate in terms of key press and release messages. Most terminals, and most terminal emulators, send a ␉ (U+0009) character down the (virtual) wire to the host system when either ⇥ Tab or ⎈ Control + I are pressed. This is not vim . This is how terminals work, and how the emulators that emulate them work too. Similarly, and oft forgotten nowadays I observe, these terminals and terminal emulators send a ␛ (U+001B) character down the (virtual) wire to the host system when either ⎋ Esc or ⎈ Control + [ are pressed.
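You can verify this yourself in any terminal: run a byte dumper with no arguments, for example xxd or od -c , press both key combinations, then Enter and Ctrl+D to end input. Both ⇥ Tab and ⎈ Control + I show up as the same single tab byte (0x09), so there is nothing left for vim to distinguish.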
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/563469", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/377583/" ] }
563,583
I am planning to use a hyphen ( - ) in a variable name, test-ing=3.0, but I am unable to print the value with $test-ing. I know a hyphen is not valid in shell variable names; is there any way to print the variable's value without changing the variable name?
Assuming an environment variable, since test-ing is not a valid shell variable name, you can use printenv : % env foo-bar=baz printenv foo-barbaz Or Perl: % env foo-bar=baz perl -e 'print $ENV{"foo-bar"}'baz Or other tools like Python, etc.
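To make the Python route from that last line concrete (a direct equivalent of the Perl example):
env foo-bar=baz python3 -c 'import os; print(os.environ["foo-bar"])'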
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/563583", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/376375/" ] }
563,593
I have a directory d1/ which contains two sub-directories, d2/ and d3/. Directory d2/ contains a number of files. I want to search for a file from the d3/ directory by matching a pattern, in any of the directories, using the grep command. Thanks
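A sketch, since the question leaves some room for interpretation: grep can search a whole directory tree itself, so to list every file under d1/ (which covers both d2/ and d3/) whose contents match a pattern, use grep -rl 'pattern' d1/ ( -r recurses, -l prints only the names of matching files). To restrict the search to d3/, point it there instead: grep -rl 'pattern' d1/d3/ . If you are matching against file names rather than file contents, find d1/ -name 'pattern*' is the tool for that rather than grep.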
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/563593", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/388129/" ] }
563,688
I used findmnt --verify to verify some changes I did to /etc/fstab . However to my surprise it gave me the warning / [W] cannot detect on-disk filesystem type for my root partition. A quick check with df -T / shows Filesystem Type 1K-blocks Used Available Use% Mounted on/dev/sdb2 ext4 116778960 107259980 3543840 97% / Is the on-disk filesystem type something different from the filesystem type of the partition? If not, is the warning legit or a missing feature/bug? ( findmnt --version gives findmnt from util-linux 2.31.1 )
Is the on-disk filesystem type something different from the filesystem type of the partition? No. is the warning legit or a missing feature/bug? Legit but somewhat misleading. It could be findmnt: /dev/sdb2: Permission denied . Regular users cannot read /dev/sdb2 directly, so the tool cannot verify if the device holds a filesystem that matches the corresponding fstab entry. Run sudo findmnt --verify . The tool will be allowed to examine /dev/sdb2 .
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/563688", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/322052/" ] }
563,723
I've been looking around and haven't found what I'm trying.I have to say I'm petty poor with grep, sed and awk though. I have an alias: alias upgradable='apt list --upgradable' and it gets me what I need: thunderbird/bionic-updates,bionic-security 1:68.4.1+build1-0ubuntu0.18.04.1 amd64 [upgradable from: 1:68.2.2+build1-0ubuntu0.18.04.1]thunderbird-gnome-support/bionic-updates,bionic-security 1:68.4.1+build1-0ubuntu0.18.04.1 amd64 [upgradable from: 1:68.2.2+build1-0ubuntu0.18.04.1] however I'd like to get only the first word, the header of it.Tried lots of stuff but all failed. How do I have to proceed ?
To print everything before the first / you can use cut (note the outer double quotes, so the alias body is quoted correctly):
alias upgradable="apt list --upgradable | cut -d/ -f1"
or awk :
alias upgradable="apt list --upgradable | awk -F'/' '{print \$1}'"
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/563723", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/333564/" ] }
563,821
I have a directory full of backup folders, with names in the format yyyy-MM-dd--HH:mm:ss If I enter a date in the same format, is there a way to get the first directory that sorts after it, and copy that folder to somewhere else? e.g. If my list of backups looks like this: 2019-12-04--16:12:562019-12-09--13:36:532020-01-23--13:24:132020-01-23--13:47:03 and I enter 2020-01-05--00:00:00 , I want to restore 2020-01-23--13:24:13
for dir in ????-??-??--??:??:??/; do if [[ $dir > "2020-01-05--00:00:00" ]]; then printf '%s\n' "$dir" # process "$dir" here break fidone The above script will loop through the directories in the current directory whose names matches the pattern ????-??-??--??:??:?? . For each directory, it is compared to the string 2020-01-05--00:00:00 . If it sorts after that string lexicographically, the name of the directory is printed and the loop is exited. This works since the list resulting from a pathname expansion is sorted according to the current collating order (just like ls sorts the list by default). To copy that directory to elsewhere, replace the comment with something like rsync -av "$dir" /somewhere/elsewhere The following is a script that takes the particular string from its first command line argument and does the same thing: #!/bin/bashfor dir in ????-??-??--??:??:??/; do if [[ $dir > "$1" ]]; then printf '%s\n' "$dir" # process "$dir" here break fidone Testing this with the directories that you list: $ ls -ltotal 10drwxr-xr-x 2 myself wheel 512 Jan 24 11:14 2019-12-04--16:12:56drwxr-xr-x 2 myself wheel 512 Jan 24 11:14 2019-12-09--13:36:53drwxr-xr-x 2 myself wheel 512 Jan 24 11:14 2020-01-23--13:24:13drwxr-xr-x 2 myself wheel 512 Jan 24 11:14 2020-01-23--13:47:03-rw-r--r-- 1 myself wheel 119 Jan 24 11:23 script.sh $ ./script.sh "2020-01-05--00:00:00"2020-01-23--13:24:13/
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/563821", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/373158/" ] }
563,901
So I have this fasta (biology) file that looks like this: >m64093_191209_130050/133911/ccs_64TTCAGGCTGTGTTCCATTTGATTTAAAATCAAATAATTTCATTCGCGTCAGAACACCTGGTTTCACGACCATAAATAATTTACCAGTGAATCGAGGCTCAATTATAGATCCTCGGACGCGAGTTCTCGGTTGACGAGTGGGATTCGAATTATTTTTCACCGAAAATTTTAGTCGACGAGTTCAGATAAATTTGTTCGGGATAAAATCATCTGAGTAGGTCGGGCTTCTGAATTTCGTATTCTTGCGAGCAATGAATTTTAAATAATCATCGGACATACCAATTTTTGGAACAATAATGTTCCGAACATCCCGAAAATATAGGAAGAGCCCGGATAGATAAAAATAAACAC Each line is max 70 chars long. Usually, if I want to format it to max 50 characters long, I use: fold -50 input.fasta > output.fasta # also tried -b and -w args But somehow this is not working. The file looks exactly the same as many others I've seen. The output now looks like this: >m64093_191209_130050/133911/ccs_64TTCAGGCTGTGTTCCATTTGATTTAAAATCAAATAATTTCATTCGCGTCAGAACACCTGGTTTCACGACCATAAATAATTTACCAGTGAATCGAGGCTCAATTATAGATCCTCGGACGCGAGTTCTCGGTTGACGAGTGGGATTCGAATTATTTTTCACCGAAAATTTTAGTCGACGAGTTCAGATAAATTTGTTCGGGATAAAATCATCTGAGTAGGTCGGGCTTCTGAATTTCGTATTCTTGCGAGCAATGAATTTTAAATAATCATCGGACATACCAATTTTTGGAACAATAATGTTCCGAACATCCCGAAAATATAGGAAGAGCCC It cuts overhanging 20 characters and correctly places them bellow, but then it's not joining the next line and cutting it on max 50 chars as it should. I went back to previous fasta files I created and the fold command still works normally. The problem persists if I copy a segment of the new file and past it in another file. I think there might be an encoding problem that I'm not aware of. Can anyone help? Cheers, EDIT: Great answers, thanks!!
Your issue has nothing to do with the encoding of your file. The fold utility is quite primitive and breaks lines at particular lengths, but it does not join lines. You may also want to be careful with retaining the fasta header lines as they are (i.e., not fold these). awk -v W=50 ' /^>/ { if (seq != "") print seq; print; seq = ""; next } { seq = seq $1 while (length(seq) > W) { print substr(seq, 1,W) seq = substr(seq, 1+W) } } END { if (seq != "") print seq }' file.fa This awk command would reformat your sequence to 50 characters, leaving the header lines intact. The width, 50, is adjustable through the W variable and may be set to any positive integer. The first block in the code handles header lines and will output the accumulated sequence bit from the previous sequence, if there is any left to output, before passing along the header line unmodified to the output. The second block accumulates a line of sequence and will possibly output the accumulated sequence if it's long enough, in appropriate chunks. The last block ( END ) outputs any left-over sequence when reaching the end of input. Running this on a file consisting of two copies of your sequence will produce >m64093_191209_130050/133911/ccs_64TTCAGGCTGTGTTCCATTTGATTTAAAATCAAATAATTTCATTCGCGTCAGAACACCTGGTTTCACGACCATAAATAATTTACCAGTGAATCGAGGCTCAATTATAGATCCTCGGACGCGAGTTCTCGGTTGACGAGTGGGATTCGAATTATTTTTCACCGAAAATTTTAGTCGACGAGTTCAGATAAATTTGTTCGGGATAAAATCATCTGAGTAGGTCGGGCTTCTGAATTTCGTATTCTTGCGAGCAATGAATTTTAAATAATCATCGGACATACCAATTTTTGGAACAATAATGTTCCGAACATCCCGAAAATATAGGAAGAGCCCGGATAGATAAAAATAAACAC>m64093_191209_130050/133911/ccs_64TTCAGGCTGTGTTCCATTTGATTTAAAATCAAATAATTTCATTCGCGTCAGAACACCTGGTTTCACGACCATAAATAATTTACCAGTGAATCGAGGCTCAATTATAGATCCTCGGACGCGAGTTCTCGGTTGACGAGTGGGATTCGAATTATTTTTCACCGAAAATTTTAGTCGACGAGTTCAGATAAATTTGTTCGGGATAAAATCATCTGAGTAGGTCGGGCTTCTGAATTTCGTATTCTTGCGAGCAATGAATTTTAAATAATCATCGGACATACCAATTTTTGGAACAATAATGTTCCGAACATCCCGAAAATATAGGAAGAGCCCGGATAGATAAAAATAAACAC Changing W to 30 gives >m64093_191209_130050/133911/ccs_64TTCAGGCTGTGTTCCATTTGATTTAAAATCAAATAATTTCATTCGCGTCAGAACACCTGGTTTCACGACCATAAATAATTTACCAGTGAATCGAGGCTCAATTATAGATCCTCGGACGCGAGTTCTCGGTTGACGAGTGGGATTCGAATTATTTTTCACCGAAAATTTTAGTCGACGAGTTCAGATAAATTTGTTCGGGATAAAATCATCTGAGTAGGTCGGGCTTCTGAATTTCGTATTCTTGCGAGCAATGAATTTTAAATAATCATCGGACATACCAATTTTTGGAACAATAATGTTCCGAACATCCCGAAAATATAGGAAGAGCCCGGATAGATAAAAATAAACAC>m64093_191209_130050/133911/ccs_64TTCAGGCTGTGTTCCATTTGATTTAAAATCAAATAATTTCATTCGCGTCAGAACACCTGGTTTCACGACCATAAATAATTTACCAGTGAATCGAGGCTCAATTATAGATCCTCGGACGCGAGTTCTCGGTTGACGAGTGGGATTCGAATTATTTTTCACCGAAAATTTTAGTCGACGAGTTCAGATAAATTTGTTCGGGATAAAATCATCTGAGTAGGTCGGGCTTCTGAATTTCGTATTCTTGCGAGCAATGAATTTTAAATAATCATCGGACATACCAATTTTTGGAACAATAATGTTCCGAACATCCCGAAAATATAGGAAGAGCCCGGATAGATAAAAATAAACAC You may also be interested in the FASTX-Toolkit from CSHL. I've never use this myself, but it seems to include a "FASTA Formatter (changes the width of sequences line in a FASTA file)". The latest release of the tools are from 2014 (quite old), so you may want to compile them yourself from source rather than using one of the provided precompiled binaries, unless your particular Unix distribution provides a package (check your package repository).
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/563901", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/286592/" ] }
564,013
I wrote a small script to automate the connection of my bluetooth headphones to my linux machine. #!/bin/bashbluetoothctlwait ${!}connect XX:XX:XX:XX:XX:XX #headphone MAC addresswait ${!}exit The script opens bluetoothctl but doesn't run any of the following commands.
You can use the bluetoothctl command in a shell script as follows:
bluetoothctl -- command
or:
echo -e "command\n" | bluetoothctl
e.g.:
bluetoothctl -- connect XX:XX:XX:XX:XX:XX
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/564013", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/343929/" ] }
564,024
I have the below data as input:
A 1,2
B 3,2,5
C 6,7
D 1,3,5,8
How can I get the below output using AWK?
A 1
A 2
B 3
B 2
B 5
C 6
C 7
D 1
D 3
D 5
D 8
$ awk -F '[ ,]' '{ for (i = 2; i <= NF; ++i) print $1, $i }' fileA 1A 2B 3B 2B 5C 6C 7D 1D 3D 5D 8 This treats the lines as consisting of fields delimited by either spaces or commas. For each line, the awk program iterates over the second field onwards to the end of the line. For each field, it outputs the first field on the line together with the current field.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/564024", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/370670/" ] }
564,073
My wife and I downloaded PDF museum passes and printed them on the same PostScript laser printer. To my surprise, hers looked much better! On the left is the printout from her Mac, and on the right the one sent from my Debian Linux machine. I am using evince 3.14.1, libcairo2 1.15.8, cups 2.1.3, and the Internet Printing Protocol to talk to the H-P LaserJet Pro 400 MFP M425dn (any other factor that could matter?). Apparently, the printer is somehow not executing the PostScript content, which should yield a high-resolution rendering. But I am not aware of requiring that the content be pre-rendered on my desktop nor would I seek that. What have I got set up wrong and how can I increase the output quality?
Solved it. The pages were being rendered on the computer because I had chosen a LaserJet driver from the Hewlett-Packard section of the CUPS printer management interface, implemented using Gutenprint. This worked fine for B&W text but poorly for color areas. After still not seeing a "PostScript" option in the configurator, I chose "Raw", and this works: now the rendering happens in the printer, making the paper output look proper with both graphics and text. Thanks to all for your help!
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/564073", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/65468/" ] }
564,109
Is it legal to print null bytes using awk's printf function according to POSIX? The POSIX standard of awk does not seem to explicitly mention it either way. Real world implementations differ in how they behave: +$ gawk 'BEGIN { x = sprintf("\000"); print(length(x)); }'1+$ busybox awk 'BEGIN { x = sprintf("\000"); print(length(x)); }'0+$ and +$ gawk 'BEGIN { printf("\000"); }' | xxd00000000: 00 .+$ busybox awk 'BEGIN { printf("\000"); }' | xxd+$ Is this specified somewhere in the standard? If yes, is the behaviour required for variables ( x = sprintf("\000") ) and printf ( printf("\000") ) same?
There are at least 4 relevant pieces of text in the POSIX.2018 specification of awk : Emphasis (bold text) is mine in all the quoted text below: Input files to the awk program from any of the following sources shall be text files That means that if the input contains NUL characters (which would make it non-text as per the POSIX definition of text), then the behaviour is unspecified. \ddd : A <backslash> character followed by the longest sequence of one, two, or three octal-digit characters (01234567). If all of the digits are 0 (that is, representation of the NUL character), the behavior is undefined . So \000 results in undefined behaviour. About regexp matching: However, in all awk ERE matching, the use of one or more NUL characters in the pattern, input record, or text string produces undefined results About printf / sprintf : 7. For the c conversion specifier character: if the argument has a numeric value, the character whose encoding is that value shall be output. If the value is zero or is not the encoding of any character in the character set, the behavior is undefined . So, that's another way to get a NUL character that leads to undefined behaviour. So, to sum up, in awk , POSIX tells us you can't use the NUL character portably, whether it's for input, output or to store in its variables. gawk (since at least 2.10 in 1989 which is the earliest version I could find where NUL support is documented ) and @ThomasDickey's mawk (since version 20140914 ) are two implementations that can deal with NUL.
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/564109", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/112194/" ] }
564,364
What exactly does the "keyword" option of the set command do in the bash shell? OPTION: set -o keyword OR set -k A simple example is enough for me to understand.
Usually, to inject an environment variable for a command, you place an assignment to a name before the command like this: VARIABLE=value bash -c 'echo "$1 $VARIABLE"' bash hello This runs bash -c 'echo "$VARIABLE"' bash hello and sets VARIABLE to the string value in the environment of that command. The command, in this case will, print hello value (with hello coming from the in-line script's first command line argument, and value coming from the environment variable). With set -o keyword (or set -k ) in bash (this is a non-standard shell option), the assignment is allowed to occur anywhere on the command line: $ set -k$ bash -c 'echo "$1 $VARIABLE"' bash hello VARIABLE=valuehello value$ bash -c 'echo "$1 $VARIABLE"' bash VARIABLE=value hellohello value$ bash -c 'echo "$1 $VARIABLE"' VARIABLE=value bash hellohello value$ bash -c VARIABLE=value 'echo "$1 $VARIABLE"' bash hellohello value$ bash VARIABLE=value -c 'echo "$1 $VARIABLE"' bash hellohello value I've never seen this option used "in the wild" and I'm assuming that it's only used sparingly under very specific circumstances, as it radically changes the way commands are parsed.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/564364", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/364572/" ] }
564,559
I am trying to increment a build number by 1 using command line. Here is the content of my test file: SOME_DUMMY_VALUE = -1;CURRENT_PROJECT_VERSION = 4;SOME_SECOND_DUMMY_VALUE = -1;CURRENT_PROJECT_VERSION = 4; The result I want to obtain is the following: SOME_DUMMY_VALUE = -1;CURRENT_PROJECT_VERSION = 5;SOME_SECOND_DUMMY_VALUE = -1;CURRENT_PROJECT_VERSION = 5; I am trying to use something like: sed -i -E "s/CURRENT_PROJECT_VERSION = (\d+);/CURRENT_PROJECT_VERSION = \1~;/" test.txt I am not experienced in bash scripting and I don't know how I can increment the number by one. (I am using MacOS but the sed command is almost the same as on Linux)
awk -F '= ' '/CURRENT_PROJECT_VERSION/{$2=$2+1";"}1' OFS='= ' input > output Tests cat fileSOME_DUMMY_VALUE = -1;CURRENT_PROJECT_VERSION = 4;SOME_SECOND_DUMMY_VALUE = -1;CURRENT_PROJECT_VERSION = 4;awk -F '= ' '/CURRENT_PROJECT_VERSION/{$2=$2+1";"}1' OFS='= ' fileSOME_DUMMY_VALUE = -1;CURRENT_PROJECT_VERSION = 5;SOME_SECOND_DUMMY_VALUE = -1;CURRENT_PROJECT_VERSION = 5;
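Since awk has no in-place option like sed -i, write to a temporary file and move it back over the original, e.g. awk -F '= ' '/CURRENT_PROJECT_VERSION/{$2=$2+1";"}1' OFS='= ' test.txt > test.txt.new && mv test.txt.new test.txt . (GNU awk 4.1+ also offers gawk -i inplace, if that happens to be the awk you have.)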
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/564559", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/392601/" ] }
564,582
I am new to Linux and I have a quick question about opening text files from my terminal. I have tried many times to open a text file using commands such as xdg-open <location>./filename and none of them seems to work; perhaps my syntax is wrong? I receive errors like # Option “-x” is deprecated and might be removed in a later version of gnome-terminal.# # Use “-- ” to terminate the options and put the command line to execute after it.# -- xdg-open Random_File.sh --: command not found I thought that I might have a permission issue, but read, write and execute permissions are all set on my text document.
There are a few solutions:

vi <filename>
vim <filename>
nano <filename>
cat <filename>

vi and vim are text editors; anything you can do in vi can be done in vim, but both have a big learning curve for beginners. nano is also a text editor but is much more user friendly than the former (disclaimer: personal opinion); that being said, it may not be installed on your system by default. Lastly, cat just displays the contents of your file on the command line, so you cannot edit with this command.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/564582", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/392464/" ] }
564,593
I have the below line code in ksh: echo -e "$SUBJECT"|/usr/sbin/sendmail -f [email protected] -t [email protected] but I'm still getting the $SUBJECT in the body, and not as the real subject. what's wrong with it?
It's sometimes easier to send all the headers in the echo, e.g.

{
  echo From: xxxx
  echo To: yyyy
  echo Subject: Foobar
  echo
  echo This is the message
} | /usr/lib/sendmail -t
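The point is that the subject has to travel as a Subject: header on sendmail's standard input, not as text echoed into the body; with -t, sendmail also reads the recipients from the To: header. As a sketch of the same idea with printf and your existing variable (the addresses here are placeholders):

printf 'From: me@example.com\nTo: you@example.com\nSubject: %s\n\nThis is the message body.\n' "$SUBJECT" | /usr/sbin/sendmail -t

The blank line produced by \n\n separates the headers from the body, which is what makes the Subject line show up as a real subject.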
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/564593", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/365645/" ] }
564,600
Is there a way to rollback to an older Fedora version from current version without reinstalling the OS? If yes I'd like to know how to do it, specifically from Fedora 31 to Fedora 30, if no, then that's also fine, will just have to do it manually again. Just want to know so that I don't have to go through setting up everything again if there's simply a way to downgrade. I don't suppose sudo dnf system-upgrade download --releasever=30 will work since that was used from Fedora 29 to Fedora 30.
The short answer is yes. Here's the exact syntax: dnf install system-upgrade --releasever=31 --allowerasing Note - this was from 32 to 31. The long answer is as follows. It grabs 300+ odd packages: (345/345): mutter328-libs-3.28.4-4.fc31.x86_64.rpm 3.5 MB/s | 2.0 MB 00:00 If you have a gpg key, it will ask for permission to import it: Importing GPG key 0x3C3359C4: Userid : "Fedora (31) <[email protected]>" Fingerprint: 7D22 D586 7F2A 4236 474B F7B8 50CB 390B 3C33 59C4 From : /etc/pki/rpm-gpg/RPM-GPG-KEY-fedora-31-x86_64Is this ok [y/N]: y Once it has downloaded all the packages, it runs 600+ checks: Verifying : libtracker-control-2.3.4-1.fc32.x86_64 684/687Verifying : libtracker-miner-2.3.4-1.fc32.x86_64 685/687Verifying : python3-dasbus-0.2-2.fc32.noarch 686/687Verifying : python3-nftables-1:0.9.3-3.fc32.x86_64 687/687 Finally it displays the summary of changes: Downgraded: abrt-2.14.2-2.fc31.x86_64 abrt-addon-ccpp-2.14.2-2.fc31.x86_64 abrt-addon-kerneloops-2.14.2-2.fc31.x86_64 abrt-addon-pstoreoops-2.14.2-2.fc31.x86_64... xdg-desktop-portal-gtk-1.4.0-1.fc31.x86_64 yum-4.2.21-1.fc31.noarchInstalled: libreoffice-draw-1:6.3.6.2-3.fc31.x86_64 mutter328-libs-3.28.4-4.fc31.x86_64 python-unversioned-command-3.7.7-1.fc31.noarch python3-asn1crypto-0.24.0-7.fc31.noarch python3-dnf-plugin-system-upgrade-4.0.10-1.fc31.noarch python3-dnf-plugins-extras-common-4.0.10-1.fc31.noarch python3-pydbus-0.6.0-9.fc31.noarchRemoved: libtracker-control-2.3.4-1.fc32.x86_64 libtracker-miner-2.3.4-1.fc32.x86_64 python3-dasbus-0.2-2.fc32.noarch python3-nftables-1:0.9.3-3.fc32.x86_64 One thing it does not do is change the release notification 2020-06-18 09:36:34 localhost:/tmp #cat /etc/redhat-releaseFedora release 32 (Thirty Two)2020-06-18 09:38:21 localhost:/tmp #cat /etc/fedora-releaseFedora release 32 (Thirty Two)
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/564600", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/365518/" ] }
564,603
I currently have this in my .bashrc since I use git status and git diff often. I would like to be able to read other entries that may be passed in as options like -s with git diff . How can I do that in a function rather than an alias ? I only know that $# will give me the number of arguments passed in but how do I paste all of them after say status on line 48?

 42 # =========================================================
 43 # Git
 44 # =========================================================
 45 g () {
 46   case $1 in
 47     "s")
 48       git status
 49       ;;
 50     "d")
 51       git diff
 52       ;;
 53   esac
 54 }
"$@" will be replaced with all the arguments, correctly quoted, so after shifting to remove the sub-command shortcut: g () { cmd=$1 shift case "$cmd" in s) git status "$@" ;; d) git diff "$@" ;; esac} Instead of doing this though, I suggest using git aliases; to set the above up: git alias s statusgit alias d diff or, if you don’t have git-alias (typically in git-extras ), git config --global alias.s statusgit config --global alias.d diff (you only need to do this once, the aliases are stored in ~/.gitconfig ). Then in your shell, alias g=git And you’ll find g s etc. work as you’d expect, including with arguments.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/564603", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/313163/" ] }
564,619
According to the docs on rg , I'm supposed to run $ sudo yum-config-manager --add-repo=https://copr.fedorainfracloud.org/coprs/carlwgeorge/ripgrep/repo/epel-7/carlwgeorge-ripgrep-epel-7.repo$ sudo yum install ripgrep How do I get that first yum-config-manager into ansible?
Better yet, try using something like this:

- name: Ripgrep Repo
  get_url:
    url: https://copr.fedorainfracloud.org/coprs/carlwgeorge/ripgrep/repo/epel-7/carlwgeorge-ripgrep-epel-7.repo
    dest: /etc/yum.repos.d/copr_ripgrep.repo

This is probably the most "Ansible" way to solve the problem if you don't care to install yum-config-manager .

Old Answer

You shouldn't actually need yum-config-manager to complete this task. All that command does in this context is put that remote file in /etc/yum.repos.d/ . After that, yum will be able to pull packages from that repository. Something like,

sudo wget -O /etc/yum.repos.d/copr_ripgrep.repo https://copr.fedorainfracloud.org/coprs/carlwgeorge/ripgrep/repo/epel-7/carlwgeorge-ripgrep-epel-7.repo

...should do the trick. curl would work as well if wget isn't available. Alternatively, you could install yum-config-manager first and then use it as you mentioned.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/564619", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/3285/" ] }
564,630
I have a string which is expected to look like this:

final old-version=1.2.3-old new-version=1.2.4

I'm trying to extract the version numbers from the string and create two variables old-version and new-version where:

old-version=1.2.3-old
new-version=1.2.4

This is what I've come up with:

x='final old-version=1.2.3-old new-version=1.2.4'
echo $x | awk '{print $2}' | cut -d'=' -f2

Using this I get the value of old-version . However, this way falls apart quickly if for some reason I get a string like

final new-version=1.2.4 old-version=1.2.3-old

Is there a better / cleaner / more reliable way of extracting my substrings and their values?
One reasonably robust approach is to loop over the whitespace-separated fields and match on the key, so it no longer matters in which order the fields arrive. Note that shell variable names cannot contain hyphens, so old_version and new_version are used for the results:

x='final old-version=1.2.3-old new-version=1.2.4'

for field in $x; do
    case $field in
        old-version=*) old_version=${field#old-version=} ;;
        new-version=*) new_version=${field#new-version=} ;;
    esac
done

printf 'old_version=%s\n' "$old_version"
printf 'new_version=%s\n' "$new_version"

This prints

old_version=1.2.3-old
new_version=1.2.4

and gives the same result for the reordered string final new-version=1.2.4 old-version=1.2.3-old . It only uses POSIX parameter expansion ( ${field#pattern} strips the key= prefix), so it works in plain sh as well as bash.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/564630", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/392662/" ] }
564,662
Using awk , I'm trying to print the fourth field of an input line ( $4 ) also at the beginning of the line. I have tried the following code, but it prints the string above the line:

awk -F"[--]" '{print $4 ;print$0}' file

The string is random so I can't use the string name. The input line looks as follows:

/example-origin-live/ngrp:tennis-320-fd1d9b92-69e2-446c-a3e6-45b33a55efc9

With the code above I get:

320
/example-origin-live/ngrp:tennis-320-fd1d9b92-69e2-446c-a3e6-45b33a55efc9

I need the output to look like:

320/example-origin-live/ngrp:tennis-320-fd1d9b92-69e2-446c-a3e6-45b33a55efc9

Thanks for help!
Separate print statements will print separate records (lines) by default. Use a single print , concatenating $4 and $0 :

% awk -F- '{print $4 $0}' input
320/example-origin-live/ngrp:tennis-320-fd1d9b92-69e2-446c-a3e6-45b33a55efc9
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/564662", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/247485/" ] }
564,735
I always have /proc/sys/kernel/panic set up to 0 . Looking at description of this option in kernel.org we can see: panic: The value in this file represents the number of seconds the kernel waits before rebooting on a panic. When you use the software watchdog, the recommended setting is 60. From here one can conclude that 0 is 0 seconds waiting before reboot - immediate reboot. But proc MAN page states the following: /proc/sys/kernel/panic This file gives read/write access to the kernel variable panic_timeout. If this is zero, the kernel will loop on a panic; if nonzero, it indicates that the kernel should autore‐ boot after this number of seconds. When you use the software watchdog device driver, the recommended setting is 60. Here 0 means antipodal thing - never reboot. So why such a trusted source gives such a misleading info? Or maybe the MAN page is inaccurate? P.S. just from a hint in panic_on_oops section (if you happen to read this) you can guess that MAN page is right. Or if you are technically skilled enough to investigate something in kernel source code.
The authoritative source is the implementation in the kernel, so let's look at that first. The panic entry in sysctl corresponds to a kernel variable called panic_timeout . This is a signed integer , used to control behaviour on panic as follows:

if panic_timeout is strictly positive, the kernel waits after a panic, for panic_timeout seconds;
if panic_timeout is non-zero, the kernel reboots after a panic (after waiting, if appropriate);
if the kernel hasn't rebooted, it prints a message and loops forever.

So the manpage is correct, and the kernel's own documentation was incomplete; but sysctl/kernel.rst now documents panic in more detail. This was fixed in version 5.7-rc1 of the kernel .
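For completeness, reading and changing the value from user space works like any other sysctl; the commands below are standard, only the chosen timeout of 60 seconds is an example:

# read the current value (0 means: loop forever after a panic)
sysctl kernel.panic

# make the kernel reboot 60 seconds after a panic
sudo sysctl -w kernel.panic=60

# the same setting via procfs
echo 60 | sudo tee /proc/sys/kernel/panic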
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/564735", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/165555/" ] }
564,760
In Analyzing Scripts webpage of TLDP , the following code is provided for analysis:

export SUM=0
for f in $(find src -name "*.java"); do
    export SUM=$(($SUM + $(wc -l $f | awk '{ print $1 }')))
done
echo $SUM

I understand that it calculates the sum of the number of lines of all *.java files in the directory src . What I do not understand is the reason for using the export keyword, which is described thus : The export command makes available variables to all child processes of the running script or shell. Since SUM is never accessed by a child process, is there any reason for exporting it?
I understand that it calculates the sum of the number of lines of all *.java files in the directory src. This is not necessarily completely true. It calculates the sum of the number of lines of all *.java files in the directory tree rooted at src (i.e. src and all its child directories). But it will fail for any file paths containing whitespace or when there are directory names ending with .java . Since SUM is never accessed by a child process, is there any reason for exporting it? No. I would probably write the snippet of code like this, making it filename-safe in the process: find src -type f -name '*.java' -exec wc -l {} \; | awk '{ s += $1 } END { print s }' A better solution would probably be this: find src -type f -name '*.java' -exec cat {} + | wc -l
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/564760", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/388654/" ] }
564,957
I want to obtain the ASCII number of a character, so I have the following:

VAR="a"
NUM=$(printf "%d" "'$VAR")
echo $NUM

What does '$ mean in this context? Can someone point me to a documentation to understand the syntax? I don't understand if its part of $(...) or printf or bash .
'$ doesn't mean anything special. With %d in printf , it tries to evaluate the argument as an integer expression. 'a is taken to be the char a , or the integer 97. You'd get the same result even if you didn't use variable expansion:

$ printf %d\\n "'a'"
97
$ printf %d\\n "'0'"
48
$ printf %d\\n "'"$'\1'
1

From the bash documentation on printf (emphasis mine):

    Arguments to non-string format specifiers are treated as C language constants, except that a leading plus or minus sign is allowed, and if the leading character is a single or double quote, the value is the ASCII value of the following character.

Any characters left are ignored, as noted in the comments.
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/564957", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/311038/" ] }
564,998
$ printf "hi"
hi$ printf "hi\n"
hi
$ printf "hi\\n"
hi

Why doesn't the last line print hi\n ?
This is nothing to do with printf , and everything to do with the argument that you have given to printf . In a double-quoted string, the shell turns \\ into \ . So the argument that you have given to printf is actually hi\n , which of course printf then performs its own escape sequence processing on. In a double-quoted string, the escaping done through \ by the shell is specifically limited to affecting the ␊, \ , ` , $ , and " characters. You will find that \n gets passed to printf as-is. So the argument that you have given to printf is actually hi\n again . Be careful about putting escape sequences into the format string for printf . Only some have defined meanings in the Single Unix Specification . \n is defined, but \c is actually not, for example. Further reading https://unix.stackexchange.com/a/359510/5132 POSIX Shell: inside of double-quotes, are there cases where `\` fails to escape `$`, ```, `"`, `\` or `<newline>`? Why is a single backslash shown when using quotes Echo new line and string beginning \t Why does dash expand \\\\ differently to bash? https://unix.stackexchange.com/a/558665/5132
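One way to see for yourself which argument the shell actually hands over is to print it back verbatim with %s, which (unlike the format string itself) does not interpret escape sequences in its argument; this is just an illustrative sketch of the quoting rules described above:

printf '%s\n' "hi\\n"    # prints: hi\n   (double quotes turn \\ into \)
printf '%s\n' 'hi\\n'    # prints: hi\\n  (single quotes pass both backslashes through)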
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/564998", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/151111/" ] }
565,012
I have some template files which I'm currently processing with envsubst, which works great.

<?php
$config['db_host'] = '${DB_HOST}';
$config['db_port'] = '${DB_PORT}';
$config['url'] = 'http://${WEB_HOST}/${WEB_PATH}';
// Please do NOT change this value
$config['maxSize'] = 25;

What I'm trying to find is a way to scan the file with a bash script and generate a list of all the environment variables that need to be set, so I can then dump them into a .env file like so:

DB_HOST=
DB_PORT=
WEB_HOST=
WEB_PATH=

I think it's possible with sed, however all of the examples I've found after 30 minutes of googling have been about how to replace variables inline, and nothing about just printing out the matches.
Your criteria seems to be: match any number of uppercase letters or underscores contained between ${ and } . gawk can work for this on it's own, or grep can simplify the pattern matching part (but will need extra formatting afterwards).

GNU awk :

gawk -v 'RS=[$]{' -F '}' '$1 ~ /^[A-Z_]+$/ && !a[$1]++ {printf "%s=\n", $1}' FILE

GNU awk can accept a regex for the record separator, so by assigning RS=[$]{ , it will split the input FILE up into records wherever the pattern ${ appears
field separator set to } – now the first field of each record can be checked to see if it matches your other criteria: nothing other than one-or-more of [A-Z_]
using && !a[$1]++ will remove duplicates
the print statement adds an equals sign = to the end of each line – to match your desired output
also note: the first part of a file will always be counted as the first record – even if it didn't begin with ${ – this means that if your file began with [A-Z_]+} (unlikely) – those uppercase letters/underscores would "match" and be printed on the first line of output

grep + formatting

grep is perhaps easier to understand (thanks to it's -o / --only-matching option):

grep -o '${[A-Z_]\+}' FILE

but this doesn't format the output: a pipe through sed could do that: eg.

grep -o '${[A-Z_]\+}' FILE | sed 's/${\(.*\)}/\1=/'

this doesn't remove duplicates: pipe output through sort -u to do that, or alternatively pipe once through awk:

grep -o '${[A-Z_]\+}' FILE | awk -F '[{}]' '!a[$0]++{printf "%s=\n", $2}'
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/565012", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/229729/" ] }
565,019
btrfs send and receive can be used to transfer terabytes of data, but these commands don't produce helpful progress output (even with -v ). How can I check if they succeeded? For example, if I create a new subvolume called source , write 1 GB of random data into it, and make it read-only so that it can be sent: # btrfs subvolume create source# head -c 1G < /dev/urandom > source/data# btrfs property set source ro true Then, create a copy of the new subvolume using btrfs send and receive , but interrupt the process before it completes: # mkdir destination# btrfs send source | btrfs receive destinationAt subvol sourceAt subvol source^C btrfs subvolume list will not indicate that anything has gone wrong: # btrfs subvolume list .ID 1216 gen 370739 top level 5 path sourceID 1219 gen 371244 top level 5 path destination/source The new subvolume can be browsed normally, although clearly its data is corrupt: # exa -lT - ├── destination - │ └── source251M │ └── random_data - └── source1.1G └── random_data btrfs subvolume show destination/source does not warn us that the subvolume is incomplete. It does show that destination/source has a different UUID to source , and it looks as though destination/source 's Received UUID will be set to source 's UUID if and only if btrfs receive ran to completion. Does the presence of the Received UUID guarantee that a subvolume created by btrfs receive is a complete and unmodified copy of the subvolume with that UUID on another filesystem? This part of man btrfs-send suggests not, and seems to imply that using destination/source in the above example as the parent of a future snapshot of source would fail to detect and repair the corruption as well. However, I'm still not completely clear on the purpose of send -c and whether this advice also applies to send -p . In the incremental mode (options -p and -c ), previously sent snapshots that are available on both the sending and receiving side can be used to reduce the amount of information that has to be sent to reconstruct the sent snapshot on a different filesystem. The -p <parent> option can be omitted when -c <clone-src> options are given, in which case btrfs send will determine a suitable parent from among the clone sources. You must not specify clone sources unless you guarantee that these snapshots are exactly in the same state on both sides—both for the sender and the receiver. From what I can tell, snap-sync , buttersink and other similar tools deal with this problem by redirecting the output of btrfs send to a series of files, and transferring them using a reliable method like rsync rather than a simple pipe. Is that the right approach to take, if I want to develop my own incremental backup solution without relying on third-party software that isn't packaged by my distro?
TL;DR: If Received UUID and the readonly flag is set, then it's quite unlikely that something went wrong, unless carelessness or malice is involved. Like @timakro already said in his answer, Received UUID is not set until the transfer is complete. Neither is the readonly flag. This, combined with the fact that every command in the stream is checksummed (and that, as far as I can understand, sent metadata also includes checksums) makes it quite unlikely that you will end up with a corrupt snapshot on the receiving side with readonly and Received UUID set. If any of them are unset, btrfs will refuse to use that snapshot as a reference for a future btrfs receive . What could corrupt the received snapshot would be intentional corruption, if receiving a specially crafted stream, or if some process or user changed the contents of the received snapshot while it was received. From the btrfs-receive manpage:

BUGS

btrfs receive sets the subvolume read-only after it completes successfully. However, while the receive is in progress, users who have write access to files or directories in the receiving path can add, remove, or modify files, in which case the resulting read-only subvolume will not be an exact copy of the sent subvolume.

If the intention is to create an exact copy, the receiving path should be protected from access by users until the receive operation has completed and the subvolume is set to read-only.

Additionally, receive does not currently do a very good job of validating that an incremental send stream actually makes sense, and it is thus possible for a specially crafted send stream to create a subvolume with reflinks to arbitrary files in the same filesystem. Because of this, users are advised to not use btrfs receive on send streams from untrusted sources, and to protect trusted streams when sending them across untrusted networks.

It's also worth noting that it's possible to disable the readonly flag on a subvolume, modify things, and then enable it again. If this has been done on either side, all guarantees are thrown out of the window. Note that piping the output to a file and transferring that file does not provide any protection from the above. Personally I see absolutely no reason why it would be insecure to pipe the output of btrfs send directly to ssh . The benefit of storing the stream in files intermediately is that it makes it possible to resume an interrupted transfer on an unreliable connection, but it does not provide any guarantee in the way of data integrity. A good (though not fool-proof) way to verify that the received snapshot matches the sent snapshot is to use rsync -avcn --del path/to/sent/snapshot/ user@remote:path/to/received/snapshot/ .
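A minimal sanity check on the receiving side therefore boils down to looking at exactly those two properties; both commands are standard btrfs-progs, only the path is a placeholder:

# the read-only flag must be set on the received snapshot
btrfs property get -ts /path/to/received/snapshot ro

# and the Received UUID line must be present and non-empty
btrfs subvolume show /path/to/received/snapshot | grep -i 'received uuid'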
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/565019", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/39765/" ] }
565,202
I'm running Arch Linux and KDE and I am exploring how mime-types are behaving (and misbehaving) on my system. Consider this mime-type file association. The following console output shows that I have the mime-type xml definition file and there is an association between .pub files and the mime-type application/vnd.ms-publisher . $ xdg-mime query filetype ~/.ssh/id_rsa_test.pubapplication/vnd.ms-publisher$ less mimeapps.list[Added Associations]application/vnd.ms-publisher=org.kde.kate.desktop;# find /usr -name vnd.ms-publisher.xml/usr/share/mime/application/vnd.ms-publisher.xml# less /usr/share/mime/application/vnd.ms-publisher.xml<?xml version="1.0" encoding="utf-8"?><mime-type xmlns="http://www.freedesktop.org/standards/shared-mime-info" type="application/vnd.ms-publisher"><!--Created automatically by update-mime-database. DO NOT EDIT!--><sub-class-of type="application/x-ole-storage"/><glob pattern="*.pub"/></mime-type> (I do not like the fact that ms-publisher is associated with public keys on my Linux system, but that's the topic of another question.) It would appear from the above that all is in order. Next I decided to add an association for Kate (text editor) to handle .pub public key files . I created this using KDE System Settings > Applications > File Associations. This screen shot shows what I did. When I clicked "Apply" the progress dialog appears briefly and the action seems to have succeeded. However, upon revisiting that same dialog, the Kate association I just added is gone. The box under "Application Preference Order" is empty. My question is: what is causing this file association to not be saved, and how can I fix it? Checking journalctl -r I found the following messages (in reverse order). All lines start with a prefix simlarl to Jan 31 17:24:18 laptop systemsettings5[20318] but I removed most of those to save space. 
Jan 31 17:24:19 laptop systemsettings5[20318]: Mimetype Comment Dirty: old= "Kindle book document" m_comment= "Amazon KF8 ebook format"Jan 31 17:24:19 laptop systemsettings5[20318]: Mimetype Comment Dirty: old= "ODB database" m_comment= "OpenDocument Database"Jan 31 17:24:19 laptop systemsettings5[20318]: kf5.kservice.services: KMimeTypeTrader: mimeType "application/vnd.ms-publisher" not foundJan 31 17:24:19 laptop systemsettings5[20318]: kf5.kservice.services: KMimeTypeTrader: mimeType "application/vnd.ms-publisher" not foundJan 31 17:24:19 laptop systemsettings5[20318]: ("services", "servicetypes", "xdgdata-mime", "apps")...Jan 31 17:24:18 laptop systemsettings5[20318]: "application/vnd.ms-publisher" hasDefinitionFile: falsekf5.kservice.sycoca: Service type not found: "audio/x-xm"kf5.kservice.sycoca: Service type not found: "application/vnd.oasis.opendocument.presentation-template"kf5.kservice.sycoca: Service type not found: "text/x-rst"kf5.kservice.sycoca: Service type not found: "application/pdf"kf5.kservice.sycoca: Service type not found: "application/x-bzip"kf5.kservice.sycoca: Service type not found: "application/x-cue"kf5.kservice.sycoca: Service type not found: "image/x-rgb"kf5.kservice.sycoca: Service type not found: "application/x-gzpdf"kf5.kservice.sycoca: Service type not found: "application/x-cmakecache"kf5.kservice.sycoca: Service type not found: "image/x-sigma-x3f"kf5.kservice.sycoca: Service type not found: "application/x-tellico"kf5.kservice.sycoca: Service type not found: "application/vnd.oasis.opendocument.spreadsheet-flat-xml"kf5.kservice.sycoca: Service type not found: "application/x-mswrite"kf5.kservice.sycoca: Service type not found: "application/x-t602"kf5.kservice.sycoca: Service type not found: "image/x-nikon-nef"kf5.kservice.sycoca: Service type not found: "video/x-flic"kf5.kservice.sycoca: Service type not found: "x-content/video-vcd"kf5.kservice.sycoca: Service type not found: "audio/flac"kf5.kservice.sycoca: Service type not found: "application/xspf+xml"kf5.kservice.sycoca: Service type not found: "image/svg+xml"kf5.kservice.sycoca: Service type not found: "application/x-tar"kf5.kservice.sycoca: Service type not found: "image/x-xpixmap"kf5.kservice.sycoca: Service type not found: "application/vnd.sun.xml.calc"kf5.kservice.sycoca: Service type not found: "application/gzip"kf5.kservice.sycoca: Service type not found: "application/x-zip-compressed-fb2"kf5.kservice.sycoca: Service type not found: "application/x-compressed-tar"kf5.kservice.sycoca: Service type not found: "audio/x-wavpack"kf5.kservice.sycoca: Service type not found: "video/vnd.rn-realvideo"kf5.kservice.sycoca: Service type not found: "image/x-pic"kf5.kservice.sycoca: Service type not found: "application/vnd.sun.xml.draw"kf5.kservice.sycoca: Service type not found: "audio/x-pn-realaudio-plugin"kf5.kservice.sycoca: Service type not found: "application/x-kexi-connectiondata"kf5.kservice.sycoca: Service type not found: "application/x-mobipocket-ebook"kf5.kservice.sycoca: Service type not found: "audio/ac3"kf5.kservice.sycoca: Service type not found: "application/vnd.openofficeorg.extension"kf5.kservice.sycoca: Service type not found: "image/x-win-bitmap"kf5.kservice.sycoca: Service type not found: "application/vnd.kde.okular-archive"kf5.kservice.sycoca: Service type not found: "application/x-zstd-compressed-tar"kf5.kservice.sycoca: Service type not found: "audio/mpeg"kf5.kservice.sycoca: Service type not found: "video/mlt-playlist"kf5.kservice.sycoca: Service type not found: 
"image/x-kde-raw"kf5.kservice.sycoca: Service type not found: "application/x-7z-compressed"kf5.kservice.sycoca: Service type not found: "audio/vnd.rn-realaudio"kf5.kservice.sycoca: Service type not found: "image/x-panasonic-rw"kf5.kservice.sycoca: Service type not found: "text/x-patch"kf5.kservice.sycoca: Service type not found: "application/x-kdenlivetitle"kf5.kservice.sycoca: Service type not found: "application/vnd.lotus-1-2-3"kf5.kservice.sycoca: Service type not found: "x-content/blank-cd"kf5.kservice.sycoca: Service type not found: "application/vnd.ms-asf"kf5.kservice.sycoca: Service type not found: "video/quicktime"kf5.kservice.sycoca: Service type not found: "image/vnd.djvu"kf5.kservice.sycoca: Service type not found: "video/x-anim"kf5.kservice.sycoca: Service type not found: "text/plain"kf5.kservice.sycoca: Service type not found: "application/x-java-keystore"kf5.kservice.sycoca: Service type not found: "application/x-archive"kf5.kservice.sycoca: Service type not found: "application/x-sv4crc"kf5.kservice.sycoca: Service type not found: "application/vnd.appimage"kf5.kservice.sycoca: Service type not found: "application/vnd.visio"kf5.kservice.sycoca: Service type not found: "image/x-tga"kf5.kservice.sycoca: Service type not found: "application/x-zoom"kf5.kservice.sycoca: Service type not found: "image/heif"kf5.kservice.sycoca: Service type not found: "image/rle"kf5.kservice.sycoca: Service type not found: "text/csv"kf5.kservice.sycoca: Service type not found: "application/vnd.ms-cab-compressed"kf5.kservice.sycoca: Service type not found: "application/vnd.lotus-wordpro"kf5.kservice.sycoca: Service type not found: "application/x-xar"kf5.kservice.sycoca: Service type not found: "audio/aac"kf5.kservice.sycoca: Service type not found: "application/vnd.openxmlformats-officedocument.presentationml.template"kf5.kservice.sycoca: Service type not found: "image/x-icns"kf5.kservice.sycoca: Service type not found: "application/vnd.oasis.opendocument.presentation"kf5.kservice.sycoca: Service type not found: "audio/x-tta"kf5.kservice.sycoca: Service type not found: "application/x-cbt"kf5.kservice.sycoca: Service type not found: "image/tiff"kf5.kservice.sycoca: Service type not found: "application/ogg"kf5.kservice.sycoca: Service type not found: "application/vnd.ms-wpl"kf5.kservice.sycoca: Service type not found: "image/x-pentax-pef"kf5.kservice.sycoca: Service type not found: "image/x-olympus-orf"kf5.kservice.sycoca: Service type not found: "application/vnd.ms-excel"kf5.kservice.sycoca: Service type not found: "application/pgp-keys"kf5.kservice.sycoca: Service type not found: "image/x-jng"kf5.kservice.sycoca: Service type not found: "application/x-lz4-compressed-tar"kf5.kservice.sycoca: Service type not found: "application/vnd.oasis.opendocument.text-master"kf5.kservice.sycoca: Service type not found: "application/vnd.sun.xml.impress.template"kf5.kservice.sycoca: Service type not found: "application/x-font-pcf"kf5.kservice.sycoca: Service type not found: "application/xhtml+xml"kf5.kservice.sycoca: Service type not found: "application/vnd.oasis.opendocument.text"kf5.kservice.sycoca: Service type not found: "application/x-java"kf5.kservice.sycoca: Service type not found: "image/x-sgi"kf5.kservice.sycoca: Service type not found: "audio/basic"kf5.kservice.sycoca: Service type not found: "application/x-executable"kf5.kservice.sycoca: Service type not found: "text/spreadsheet"kf5.kservice.sycoca: Service type not found: "application/vnd.oasis.opendocument.spreadsheet-template"kf5.kservice.sycoca: 
Service type not found: "audio/x-ms-wma"kf5.kservice.sycoca: Service type not found: "image/x-fuji-raf"kf5.kservice.sycoca: Service type not found: "application/x-compress"kf5.kservice.sycoca: Service type not found: "audio/vnd.dts"kf5.kservice.sycoca: Service type not found: "image/fits"kf5.kservice.sycoca: Service type not found: "application/x-xz"kf5.kservice.sycoca: Service type not found: "application/vnd.oasis.opendocument.formula-template"kf5.kservice.sycoca: Service type not found: "image/gif"kf5.kservice.sycoca: Service type not found: "audio/x-ms-asx"kf5.kservice.sycoca: Service type not found: "video/x-mng"kf5.kservice.sycoca: Service type not found: "image/x-gimp-gbr"kf5.kservice.sycoca: Service type not found: "application/vnd.oasis.opendocument.chart-template"kf5.kservice.sycoca: Service type not found: "application/vnd.openxmlformats-officedocument.wordprocessingml.document"kf5.kservice.sycoca: Service type not found: "application/x-bzpdf"kf5.kservice.sycoca: Service type not found: "image/png"kf5.kservice.sycoca: Service type not found: "application/x-gzdvi"kf5.kservice.sycoca: Service type not found: "application/mxf"kf5.kservice.sycoca: Service type not found: "application/x-wpg"kf5.kservice.sycoca: Service type not found: "image/x-xwindowdump"kf5.kservice.sycoca: Service type not found: "image/x-dcraw"kf5.kservice.sycoca: Service type not found: "audio/x-mpegurl"kf5.kservice.sycoca: Service type not found: "x-content/audio-player"kf5.kservice.sycoca: Service type not found: "application/vnd.oasis.opendocument.text-web"kf5.kservice.sycoca: Service type not found: "x-content/blank-dvd"kf5.kservice.sycoca: Service type not found: "image/cgm"kf5.kservice.sycoca: Service type not found: "application/x-fictionbook+xml"kf5.kservice.sycoca: Service type not found: "application/vnd.palm"kf5.kservice.sycoca: Service type not found: "video/webm"kf5.kservice.sycoca: Service type not found: "image/wmf"kf5.kservice.sycoca: Service type not found: "text/tab-separated-values"kf5.kservice.sycoca: Service type not found: "application/x-pagemaker"kf5.kservice.sycoca: Service type not found: "application/vnd.comicbook-rar"kf5.kservice.sycoca: Service type not found: "image/openraster"kf5.kservice.sycoca: Service type not found: "application/illustrator"kf5.kservice.sycoca: Service type not found: "application/vnd.ms-publisher"kf5.kservice.sycoca: Service type not found: "application/msword"kf5.kservice.sycoca: Service type not found: "application/x-krita"kf5.kservice.sycoca: Service type not found: "application/x-dvi"kf5.kservice.sycoca: Service type not found: "image/x-portable-bitmap"kf5.kservice.sycoca: Service type not found: "audio/AMR"kf5.kservice.sycoca: Service type not found: "application/x-cpio"kf5.kservice.sycoca: Service type not found: "image/webp"kf5.kservice.sycoca: Service type not found: "application/vnd.sun.xml.writer"kf5.kservice.sycoca: Service type not found: "text/css"kf5.kservice.sycoca: Service type not found: "image/x-adobe-dng"kf5.kservice.sycoca: Service type not found: "image/x-eps"kf5.kservice.sycoca: Service type not found: "application/vnd.sun.xml.draw.template"kf5.kservice.sycoca: Service type not found: "image/x-compressed-xcf"kf5.kservice.sycoca: Service type not found: "application/x-bzip-compressed-tar"kf5.kservice.sycoca: Service type not found: "application/x-quattropro"kf5.kservice.sycoca: Service type not found: "application/x-ms-dos-executable"kf5.kservice.sycoca: Service type not found: "application/vnd.ms-access"kf5.kservice.sycoca: Service type 
not found: "application/vnd.ms-powerpoint"kf5.kservice.sycoca: Service type not found: "application/x-sv4cpio"kf5.kservice.sycoca: Service type not found: "audio/mp4"kf5.kservice.sycoca: Service type not found: "application/vnd.oasis.opendocument.chart"kf5.kservice.sycoca: Service type not found: "application/x-lrzip-compressed-tar"kf5.kservice.sycoca: Service type not found: "application/vnd.comicbook+zip"kf5.kservice.sycoca: Service type not found: "application/vnd.sun.xml.writer.global"kf5.kservice.sycoca: Service type not found: "application/vnd.apple.mpegurl"kf5.kservice.sycoca: Service type not found: "application/x-xojpp"kf5.kservice.sycoca: Service type not found: "application/x-bzdvi"kf5.kservice.sycoca: Service type not found: "image/x-gimp-pat"kf5.kservice.sycoca: Service type not found: "image/x-gimp-gih"kf5.kservice.sycoca: Service type not found: "application/vnd.sun.xml.math"kf5.kservice.sycoca: Service type not found: "image/vnd.zbrush.pcx"kf5.kservice.sycoca: Service type not found: "video/x-flv"kf5.kservice.sycoca: Service type not found: "x-content/audio-cdda"kf5.kservice.sycoca: Service type not found: "image/jpeg"kf5.kservice.sycoca: Service type not found: "application/vnd.sun.xml.calc.template"kf5.kservice.sycoca: Service type not found: "image/x-sony-arw"kf5.kservice.sycoca: Service type not found: "image/emf"kf5.kservice.sycoca: Service type not found: "image/x-sony-srf"kf5.kservice.sycoca: Service type not found: "image/x-panasonic-rw2"kf5.kservice.sycoca: Service type not found: "application/prs.plucker"kf5.kservice.sycoca: Service type not found: "application/vnd.oasis.opendocument.graphics-flat-xml"kf5.kservice.sycoca: Service type not found: "video/dv"kf5.kservice.sycoca: Service type not found: "application/x-trash"kf5.kservice.sycoca: Service type not found: "application/pgp-encrypted"kf5.kservice.sycoca: Service type not found: "image/x-dds"kf5.kservice.sycoca: Service type not found: "image/x-xcursor"kf5.kservice.sycoca: Service type not found: "audio/midi"kf5.kservice.sycoca: Service type not found: "image/x-kodak-dcr"kf5.kservice.sycoca: Service type not found: "application/vnd.rn-realmedia"kf5.kservice.sycoca: Service type not found: "application/smil+xml"kf5.kservice.sycoca: Service type not found: "application/x-font-bdf"kf5.kservice.sycoca: Service type not found: "application/octet-stream"kf5.kservice.sycoca: Service type not found: "application/x-k3b"kf5.kservice.sycoca: Service type not found: "audio/x-it"kf5.kservice.sycoca: Service type not found: "application/vnd.openxmlformats-officedocument.spreadsheetml.sheet"kf5.kservice.sycoca: Service type not found: "application/x-bzpostscript"kf5.kservice.sycoca: Service type not found: "application/vnd.amazon.mobi8-ebook"kf5.kservice.sycoca: Service type not found: "application/vnd.rar"kf5.kservice.sycoca: Service type not found: "application/vnd.sun.xml.impress"kf5.kservice.sycoca: Service type not found: "audio/x-musepack"kf5.kservice.sycoca: Service type not found: "image/x-sun-raster"kf5.kservice.sycoca: Service type not found: "application/vnd.oasis.opendocument.graphics-template"kf5.kservice.sycoca: Service type not found: "application/pgp-signature"kf5.kservice.sycoca: Service type not found: "application/zip"kf5.kservice.sycoca: Service type not found: "application/x-cd-image"kf5.kservice.sycoca: Service type not found: "application/x-rpm"kf5.kservice.sycoca: Service type not found: "application/mathml+xml"kf5.kservice.sycoca: Service type not found: "image/x-xcf"kf5.kservice.sycoca: Service 
type not found: "video/x-nsv"kf5.kservice.sycoca: Service type not found: "audio/x-scpls"kf5.kservice.sycoca: Service type not found: "audio/x-speex"kf5.kservice.sycoca: Service type not found: "application/vnd.oasis.opendocument.presentation-flat-xml"kf5.kservice.sycoca: Service type not found: "application/x-shorten"kf5.kservice.sycoca: Service type not found: "audio/x-wav"kf5.kservice.sycoca: Service type not found: "image/x-canon-cr2"kf5.kservice.sycoca: Service type not found: "application/epub+zip"kf5.kservice.sycoca: Service type not found: "image/x-photo-cd"kf5.kservice.sycoca: Service type not found: "audio/x-adpcm"kf5.kservice.sycoca: Service type not found: "font/ttf"kf5.kservice.sycoca: Service type not found: "application/vnd.stardivision.writer"kf5.kservice.sycoca: Service type not found: "image/jp2"kf5.kservice.sycoca: Service type not found: "x-content/video-svcd"kf5.kservice.sycoca: Service type not found: "application/javascript"kf5.kservice.sycoca: Service type not found: "image/bmp"kf5.kservice.sycoca: Service type not found: "image/x-portable-anymap"kf5.kservice.sycoca: Service type not found: "application/vnd.oasis.opendocument.text-master-template"kf5.kservice.sycoca: Service type not found: "application/vnd.ms-htmlhelp"kf5.kservice.sycoca: Service type not found: "audio/x-gsm"kf5.kservice.sycoca: Service type not found: "video/mp4"kf5.kservice.sycoca: Service type not found: "application/sdp"kf5.kservice.sycoca: Service type not found: "image/x-xbitmap"kf5.kservice.sycoca: Service type not found: "application/xml"kf5.kservice.sycoca: Service type not found: "image/x-bzeps"kf5.kservice.sycoca: Service type not found: "application/vnd.oasis.opendocument.text-template"kf5.kservice.sycoca: Service type not found: "video/x-msvideo"kf5.kservice.sycoca: Service type not found: "application/x-xpinstall"kf5.kservice.sycoca: Service type not found: "image/svg+xml-compressed"kf5.kservice.sycoca: Service type not found: "application/x-iwork-keynote-sffkey"kf5.kservice.sycoca: Service type not found: "application/vnd.debian.binary-package"kf5.kservice.sycoca: Service type not found: "application/x-matroska"kf5.kservice.sycoca: Service type not found: "audio/x-s3m"kf5.kservice.sycoca: Service type not found: "application/x-ksysguard"kf5.kservice.sycoca: Service type not found: "application/x-keepass2"kf5.kservice.sycoca: Service type not found: "audio/mp2"kf5.kservice.sycoca: Service type not found: "image/x-kodak-k25"kf5.kservice.sycoca: Service type not found: "x-content/blank-hddvd"kf5.kservice.sycoca: Service type not found: "text/x-google-video-pointer"kf5.kservice.sycoca: Service type not found: "application/vnd.sun.xml.writer.template"kf5.kservice.sycoca: Service type not found: "x-content/blank-bd"kf5.kservice.sycoca: Service type not found: "text/html"kf5.kservice.sycoca: Service type not found: "application/vnd.ms-works"kf5.kservice.sycoca: Service type not found: "application/vnd.openxmlformats-officedocument.presentationml.slide"kf5.kservice.sycoca: Service type not found: "audio/x-flac+ogg"kf5.kservice.sycoca: Service type not found: "application/x-gzpostscript"kf5.kservice.sycoca: Service type not found: "text/vcard"kf5.kservice.sycoca: Service type not found: "image/x-sony-sr2"kf5.kservice.sycoca: Service type not found: "inode/directory"kf5.kservice.sycoca: Service type not found: "application/x-xopp"kf5.kservice.sycoca: Service type not found: "application/x-kdenlive"kf5.kservice.sycoca: Service type not found: "application/vnd.corel-draw"kf5.kservice.sycoca: 
Service type not found: "application/vnd.wordperfect"kf5.kservice.sycoca: Service type not found: "image/x-minolta-mrw"kf5.kservice.sycoca: Service type not found: "application/vnd.sqlite3"kf5.kservice.sycoca: Service type not found: "image/x-portable-pixmap"kf5.kservice.sycoca: Service type not found: "text/vnd.qt.linguist"kf5.kservice.sycoca: Service type not found: "image/x-canon-crw"kf5.kservice.sycoca: Service type not found: "application/vnd.openxmlformats-officedocument.presentationml.slideshow"kf5.kservice.sycoca: Service type not found: "x-content/video-dvd"kf5.kservice.sycoca: Service type not found: "application/vnd.kde.fontspackage"kf5.kservice.sycoca: Service type not found: "application/vnd.openxmlformats-officedocument.wordprocessingml.template"kf5.kservice.sycoca: Service type not found: "application/oxps"kf5.kservice.sycoca: Service type not found: "application/x-khtml-adaptor"kf5.kservice.sycoca: Service type not found: "video/mp2t"kf5.kservice.sycoca: Service type not found: "application/vnd.adobe.flash.movie"kf5.kservice.sycoca: Service type not found: "audio/x-mod"kf5.kservice.sycoca: Service type not found: "image/vnd.rn-realpix"kf5.kservice.sycoca: Service type not found: "application/postscript"kf5.kservice.sycoca: Service type not found: "application/vnd.openxmlformats-officedocument.spreadsheetml.template"kf5.kservice.sycoca: Service type not found: "image/vnd.adobe.photoshop"kf5.kservice.sycoca: Service type not found: "application/x-lzma"kf5.kservice.sycoca: Service type not found: "audio/AMR-WB"kf5.kservice.sycoca: Service type not found: "audio/x-aiff"kf5.kservice.sycoca: Service type not found: "image/x-portable-graymap"kf5.kservice.sycoca: Service type not found: "text/markdown"kf5.kservice.sycoca: Service type not found: "application/x-bcpio"kf5.kservice.sycoca: Service type not found: "application/x-lzip-compressed-tar"kf5.kservice.sycoca: Service type not found: "video/x-matroska"kf5.kservice.sycoca: Service type not found: "application/vnd.openxmlformats-officedocument.presentationml.presentation"kf5.kservice.sycoca: Service type not found: "audio/x-stm"kf5.kservice.sycoca: Service type not found: "audio/prs.sid"kf5.kservice.sycoca: Service type not found: "audio/x-ape"kf5.kservice.sycoca: Service type not found: "application/x-designer"kf5.kservice.sycoca: Service type not found: "application/vnd.oasis.opendocument.graphics"kf5.kservice.sycoca: Service type not found: "image/vnd.microsoft.icon"kf5.kservice.sycoca: Service type not found: "text/x-ldif"kf5.kservice.sycoca: Service type not found: "application/x-kexiproject-shortcut"kf5.kservice.sycoca: Service type not found: "application/x-font-type1"kf5.kservice.sycoca: Service type not found: "image/x-exr"kf5.kservice.sycoca: Service type not found: "image/x-kodak-kdc"kf5.kservice.sycoca: Service type not found: "application/vnd.oasis.opendocument.formula"kf5.kservice.sycoca: Service type not found: "application/x-cb7"kf5.kservice.sycoca: Service type not found: "image/x-gzeps"kf5.kservice.sycoca: Service type not found: "application/x-xopt"kf5.kservice.sycoca: Service type not found: "application/vnd.oasis.opendocument.text-flat-xml"kf5.kservice.sycoca: Service type not found: "image/x-hdr"kf5.kservice.sycoca: Service type not found: "multipart/x-mixed-replace"kf5.kservice.sycoca: Service type not found: "application/x-iso9660-appimage"kf5.kservice.sycoca: Service type not found: "application/vnd.oasis.opendocument.spreadsheet"kf5.kservice.sycoca: Service type not found: 
"application/x-java-applet"kf5.kservice.sycoca: Service type not found: "application/x-sony-bbeb"kf5.kservice.sycoca: Service type not found: "application/x-kwallet"kf5.kservice.sycoca: Service type not found: "application/x-tzo"kf5.kservice.sycoca: Service type not found: "application/vnd.oasis.opendocument.database"kf5.kservice.sycoca: Service type not found: "video/vnd.mpegurl"kf5.kservice.sycoca: Service type not found: "application/x-dbf"kf5.kservice.sycoca: Service type not found: "application/x-hwp"kf5.kservice.sycoca: Service type not found: "application/x-navi-animation"kf5.kservice.sycoca: Service type not found: "application/x-font-afm"kf5.kservice.sycoca: Service type not found: "audio/x-opus+ogg"kf5.kservice.sycoca: Service type not found: "application/ram"kf5.kservice.sycoca: Service type not found: "multipart/mixed"kf5.kservice.sycoca: Service type not found: "image/fax-g3"...Jan 31 17:24:18 laptop systemsettings5[20318]: kf5.kservice.services: KMimeTypeTrader: mimeType "application/vnd.ms-publisher" not foundJan 31 17:24:18 laptop systemsettings5[20318]: kf5.kservice.services: KMimeTypeTrader: mimeType "application/vnd.ms-publisher" not foundJan 31 17:24:18 laptop systemsettings5[20318]: Entry "application/vnd.ms-publisher" is dirty. Saving.Jan 31 17:21:57 laptop systemsettings5[20318]: kf5.kservice.services: KMimeTypeTrader: mimeType "application/vnd.ms-publisher" not foundJan 31 17:21:57 laptop systemsettings5[20318]: kf5.kservice.services: KMimeTypeTrader: mimeType "application/vnd.ms-publisher" not foundJan 31 17:21:57 laptop systemsettings5[20318]: "application/vnd.ms-publisher" hasDefinitionFile: false Some notable messages from the above include: kf5.kservice.sycoca: Service type not found: "application/vnd.ms-publisher"kf5.kservice.sycoca: Service type not found: "application/illustrator" I have already shown that the mime type application/vnd.ms-publisher is present and defined. So I checked a few more at random. Here is application/illustrator (with comments removed to save space). less /usr/share/mime/application/illustrator.xml<?xml version="1.0" encoding="utf-8"?><mime-type xmlns="http://www.freedesktop.org/standards/shared-mime-info" type="application/illustrator"><!--Created automatically by update-mime-database. DO NOT EDIT!--><generic-icon name="image-x-generic"/><glob pattern="*.ai"/><alias type="application/vnd.adobe.illustrator"/></mime-type> All the mime-type definitions seems to be present according to a listing of ls /usr/share/mime/application/ (There is not sufficient space to post the entire directory listing here.) I do not understand why the log messages indicate "Service type not found" for mime-types that are present on my system. But more importantly, why can I not add an application to handle the mime type as shown above? Response to comments by Nathaniel M. 
Beaver $ ktraderclient5 --mimetype application/vnd.ms-publishermimetype is : application/vnd.ms-publishergot 1 offers.---- Offer 0 ----Invalid property ActionsStartupNotify : 'TRUE'StartupWMClass : 'libreoffice-draw'Invalid property UntranslatedGenericNameInvalid property X-GIO-NoFuseX-KDE-Protocols : 'file - http - ftp - webdav - webdavs'Type : 'Application'Name : 'LibreOffice Draw'Comment : 'Create and edit drawings, flow charts, and logos by using Draw.'GenericName : 'Drawing Program'Icon : 'libreoffice-draw'Exec : 'libreoffice --draw %U'Terminal : 'FALSE'Invalid property TerminalOptionsInvalid property PathServiceTypes : 'application/vnd.oasis.opendocument.graphics - application/vnd.oasis.opendocument.graphics-flat-xml - application/vnd.oasis.opendocument.graphics-template - application/vnd.sun.xml.draw - application/vnd.sun.xml.draw.template - application/vnd.visio - application/x-wpg - application/vnd.corel-draw - application/vnd.ms-publisher - image/x-freehand - application/clarisworks - application/x-pagemaker - application/pdf - application/x-stardraw - image/x-emf - image/x-wmf - Application'AllowAsDefault : 'TRUE'InitialPreference : '5'Invalid property LibraryDesktopEntryPath : '/usr/share/applications/libreoffice-draw.desktop'DesktopEntryName : 'libreoffice-draw'Keywords : 'Vector - Schema - Diagram - Layout - OpenDocument Graphics - Microsoft Publisher - Microsoft Visio - Corel Draw - cdr - odg - svg - pdf - vsd'FormFactors : ''Categories : 'Office - FlowChart - Graphics - 2DGraphics - VectorGraphics - X-Red-Hat-Base - X-MandrivaLinux-Office-Drawing' What is the recommended way to remove this?
Same here on a completely different system. Do this as a regular user:

mkdir -p $HOME/.local/share/mime/packages
update-mime-database $HOME/.local/share/mime

(the mkdir step may not be necessary if the directory already exists). If you need a full reset, do this:

cd $HOME/.local/share/
mv mime mime2
mkdir -p mime/packages
update-mime-database $HOME/.local/share/mime

This will reset all your corrupted mime settings, but it will work from then on. Currently KDE doesn't trigger an update of the database after making changes, which leads to this.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/565202", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/15010/" ] }
565,263
Testing out the SSH Match Exec feature. I have this minimal ~/.ssh/config : Match Exec echo ServerAliveInterval 60 and I am running ssh localhost I get Unable to execute 'echo': No such file or directory This is true regardless of whether I use a full path or not, or using quotes whether double or single. I tried putting a fake echo script in my .ssh folder as well. I have tried multiple commands ( test, nc, connect ). It seems the Exec feature cannot see my path at all. I am running WSL Debian with OpenSSH. My final goal is to test if $http_proxy is reachable in the match clause in order to automate proxy usage, but getting the above to work would be enough.
To invoke "match" directives, ssh actually invokes: $SHELL -c 'command' "$SHELL" is either the value of the SHELL environment variable or a default which is usually "/bin/sh". "command" is the command from the "match" directive. Here is the actual code that executes the command : argv[0] = shell; argv[1] = "-c"; argv[2] = xstrdup(cmd); argv[3] = NULL; execv(argv[0], argv); error("Unable to execute '%.100s': %s", cmd, strerror(errno)); Note that execv() doesn't search any kind of path for the shell being executed, so SHELL has to be a complete pathname like "/bin/bash" or "/usr/local/bin/zsh". If the shell had started up and then failed to run "echo", then you'd get an error from the shell. But the error that you're getting is from ssh. This implies that the problem is with invoking the shell, not with the "echo" command. The simplest explanation is that your SHELL environment variable is invalid. It refers to a file which is missing, or it's in a directory that you can't read.
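If that diagnosis fits, a quick way to confirm it from the WSL shell is to check what SHELL points at and to retry ssh with a known-good shell for a single invocation (these are ordinary shell commands, nothing ssh-specific is assumed here):

echo "$SHELL"                  # should name an existing shell, e.g. /bin/bash
ls -l "$SHELL"                 # verify that the file actually exists and is executable
SHELL=/bin/sh ssh localhost    # if this works, the original SHELL value was the culprit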
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/565263", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/232207/" ] }
565,559
Let's say I have a 4 GB file abc on my local computer. I have uploaded it to a distant server via SFTP, it took a few hours. Now I have slightly modified the file (probably 50 MB maximum, but not consecutive bytes in this file) locally, and saved it into abc2 . I also kept the original file abc on my local computer. How to compute a binary diff of abc and abc2 ? Applications: I could only send a patch file (probably max 100MB) to the distant server, instead of reuploading the whole abc2 file (it would take a few hours again!), and recreate abc2 on the distant server from abc and patch only. Locally, instead of wasting 8 GB to backup both abc and abc2 , I could save only abc + patch , so it would take < 4100 MB only. How to do this? PS: for text, I know diff , but here I'm looking for something that could work for any raw binary format, it could be zip files or executables or even other types of file. PS2: If possible, I don't want to use rsync ; I know it can replicate changes between 2 computers in an efficient way (not resending data that has not changed), but here I really want to have a patch file, that is reproducible later if I have both abc and patch .
For the second application/issue, I would use a deduplicating backup program like restic or borgbackup , rather than trying to manually keep track of "patches" or diffs. The restic backup program allows you to back up directories from multiple machines to the same backup repository, deduplicating the backup data both amongst fragments of files from an individual machine as well as between machines. (I have no user experience with borgbackup , so I can't say anything about that program.) Calculating and storing a diff of the abc and abc2 files can be done with rsync . This is an example with abc and abc2 being 153 MB. The file abc2 has been modified by overwriting the first 2.3 MB of the file with some other data:

$ ls -lh
total 626208
-rw-r--r--  1 kk  wheel   153M Feb  3 16:55 abc
-rw-r--r--  1 kk  wheel   153M Feb  3 17:02 abc2

We create our patch for transforming abc into abc2 and call it abc-diff :

$ rsync --only-write-batch=abc-diff abc2 abc

$ ls -lh
total 631026
-rw-r--r--  1 kk  wheel   153M Feb  3 16:55 abc
-rw-------  1 kk  wheel   2.3M Feb  3 17:03 abc-diff
-rwx------  1 kk  wheel    38B Feb  3 17:03 abc-diff.sh
-rw-r--r--  1 kk  wheel   153M Feb  3 17:02 abc2

The generated file abc-diff is the actual diff (your "patch file"), while abc-diff.sh is a short shell script that rsync creates for you:

$ cat abc-diff.sh
rsync --read-batch=abc-diff ${1:-abc}

This script modifies abc so that it becomes identical to abc2 , given the file abc-diff :

$ md5sum abc abc2
be00efe0a7a7d3b793e70e466cbc53c6  abc
3decbde2d3a87f3d954ccee9d60f249b  abc2
$ sh abc-diff.sh
$ md5sum abc abc2
3decbde2d3a87f3d954ccee9d60f249b  abc
3decbde2d3a87f3d954ccee9d60f249b  abc2

The file abc-diff could now be transferred to wherever else you have abc . With the command rsync --read-batch=abc-diff abc , you would apply the patch to the file abc , transforming its contents to be the same as the abc2 file on the system where you created the diff. Re-applying the patch a second time seems safe. There are no error messages, nor do the file's contents change (the MD5 checksum does not change). Note that unless you create an explicit "reverse patch", there is no way to easily undo the application of the patch. I also tested writing the 2.3 MB modification to some other place in the abc2 data, a bit further in (at about 50 MB), as well as at the start. The generated "patch" was 4.6 MB large, suggesting that only the modified bits were stored in the patch.
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/565559", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/59989/" ] }
565,597
Saving and restoring the cursor position should be possible with simple ANSI escape sequences ANSI escape sequences allow you to move the cursor around the screen at will. This is more useful for full screen user interfaces generated by shell scripts, but can also be used in prompts. The movement escape sequences are as follows: [...] Save cursor position: \033[s Restore cursor position: \033[u Source: Bash Prompt HOWTO: Cursor movement However, it seems that this ANSI sequences restore only the horizontal position of the cursor. For example: $ printf 'Doing some task...\e[s\n\nMore text\n\e[udone!\n\n\n'Doing some task...More text done!$ where the done! is horizontally at the correct position but not vertically (correct in the sense of restored). Am I missing something, i.e. can you reproduce this?! Is this the intended desired behaviour? If so, how would I get the done! printed after the task... ? If this should not happen, might this behaviour be triggered indirectly by something in my environment? I searched and read the many questions about, but I did not find anything about this behaviour I experienced. Actually, the same occur with tput via $ printf 'Doing some task...'; tput sc; printf '\n\nMore text\n'; tput rc; printf 'done!\n\n\n'
Am I missing something, i.e. can you reproduce this?! I can, if I'm at the bottom of the terminal and the next line makes the content move up. But repeat the test in a terminal that doesn't scroll in the meantime. Hit Ctrl + L (or invoke clear ) and start from the top. Then it behaves as you wish. Is this the intended desired behaviour? I think so. Cursor position is relative to the screen, not to its content. How would I get the done! printed after the task... ? Possible approach: If you know you're going to print no more than 6 lines and the terminal is big enough, print 6 empty lines first so it scrolls first, then move the cursor up and only then print the meaningful text: printf '\n\n\n\n\n\n'; printf '\033[6A'; printf 'Doing some task...\e[s\n\nMore text\n\e[udone!\n\n\n' I used three separate printf s to show the logic, but it could be one.
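The same idea can be wrapped in a tiny helper so the number of reserved lines is not hard-coded. A sketch (reserve_lines is a made-up name, and it assumes the terminal is tall enough for the requested number of lines):

    reserve_lines() {
        n=$1
        printf '\n%.0s' $(seq "$n")    # force any scrolling to happen now
        printf '\033[%dA' "$n"         # move the cursor back up n lines
    }
    reserve_lines 6
    printf 'Doing some task...\e[s\n\nMore text\n\e[udone!\n'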
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/565597", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/370049/" ] }
565,637
I've created a script to run automated backups on my CentOS 7 server.The backups get stored to the /home/backup directory. The script works, but now I would like to incorporate a way to count the files after the backup happens and if the number is more than 5, delete the oldest backup. Below is what I have for my backup script. #!/bin/bash#mysqldump variablesFILE=/home/backup/databasebk_!`date +"Y-%m-%d_%H:%M"`.sqlDATABASE=databaseUSER=rootPASS=my password#backup command processmysqldump --opt --user=${USER} --password=${PASS} ${DATABASE} > ${FILE}#zipping the backup filegzip $FILE#send message to the user with the resultsecho "${FILE}.gz was created:"ls -l ${FILE}.gz# This is where I would like to count the number of files # in the directory and if there are more than 5 I would like# to delete the oldest file. Any help is greatly appreciated Thanks-Mike
You could look at set -- /home/backup/databasebk_* and while $# is greater than five, delete a file. So the code would look similar to set -- /home/backup/databasebk_*while [ $# -gt 5 ]do echo "Removing old backup $1" rm "$1" shiftdone This works because the filenames you picked are automatically in "oldest first" order. For consistency I would set a variable (I normally call it BASE but you can call it whatever you like) So BASE=/home/backup/databasebk_FILE=${BASE}!`date +"%Y-%m-%d_%H:%M"`.sql....set -- ${BASE}*while [ $# -gt 5 ]do echo "Removing old backup $1" rm "$1" shiftdone
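One caveat worth adding: if no backups exist yet, the glob does not match anything, so $1 would be the literal pattern. A sketch of a guard that avoids ever passing that to rm:

    set -- "${BASE}"*
    if [ -e "$1" ]; then
        while [ $# -gt 5 ]; do
            echo "Removing old backup $1"
            rm -- "$1"
            shift
        done
    fi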
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/565637", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/393545/" ] }
565,665
Question : What is the most-ideal way to add new users in Debian? adduser appears to be missing on my system, any tips? Log : bash: adduser: command not found Edit : adduser does appear to be installed adduser is already the newest version (3.118). Is is possible to manually execute it as a binary? Where is the applications stored?
Use su -l or su - to start the root shell with an environment similar to a normal 'login' shell. This includes initializing the environment variable $PATH for user root instead of simply inheriting it from the normal (non-sudo) user who does not have /sbin on her $PATH . See man su or https://linuxconfig.org/command-not-found-missing-path-to-sbin-on-debian-gnu-linux . This is how to enable sudo after a fresh install of Debian 10: $ su -l# adduser <your_username_here> sudo# logout Then, log out of the Desktop Environment and log in again.
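To answer the follow-up directly: on Debian the adduser command normally lives in /usr/sbin, so it can also be run by full path without fixing $PATH first, for example:

    $ dpkg -L adduser | grep sbin        # shows where the package installed it
    $ su -l -c '/usr/sbin/adduser your_username_here sudo'

(your_username_here is a placeholder, not a real account name.)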
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/565665", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/393587/" ] }
565,778
When I perform a simple math operation in #!/bin/sh , does that create a subshell? E.g., addition=$(( 1 + 1 )) The syntax would suggest a subshell, but I couldn't find anything on this
$(cmd arg) runs cmd in a subshell environment and its output (minus the trailing newline characters), becomes the result of the expansion. (cmd arg) does run in a subshell with its output unaffected. So $((cmd arg)) would be the same as $(cmd arg) but with an extra layer of subshell, except that it's not. $((...)) is a separate form of expansion that comes from the Korn shell. In the Korn shell, ((arithmetic expression)) evaluates the arithmetic expression (which follows a syntax very similar to that of C) and the exit status reflects whether the expression resolves to 0 or non-zero. That allows things like: if ((var < 10)); then ...fi Which makes it look very similar to C . $ is used to introduce expansions . Just like $(cmd) is like (cmd) except that it expands to the output of cmd , $((arith)) is like ((arith)) except that it expands to the result of the evaluation of the arithmetic expression. POSIX, whose sh is mostly based on ksh88, specified $((...)) but not ((...)) . Actually, in an earlier draft, it was going for $[...] instead which is why you find that bash and zsh support $[...] as an alternative to $((...)) . IIRC, the main reason why POSIX initially thought of specifying it as $[...] is because $((...)) conflicts with $((cmd arg)) , a subshell inside a command substitution. You'll find that most shells correctly identify $((echo x; echo y) | (tr xy ab)) as not an arithmetic expansion, but not $((cmd arg)) . In any case $((cmd)) is meant to expand to the arithmetic value of the $cmd variable, not to the output of cmd . The relevant text in the POSIX specification has: The syntax of the shell command language has an ambiguity for expansions beginning with "$((", which can introduce an arithmetic expansion or a command substitution that starts with a subshell. Arithmetic expansion has precedence; that is, the shell shall first determine whether it can parse the expansion as an arithmetic expansion and shall only parse the expansion as a command substitution if it determines that it cannot parse the expansion as an arithmetic expansion. The shell need not evaluate nested expansions when performing this determination. If it encounters the end of input without already having determined that it cannot parse the expansion as an arithmetic expansion, the shell shall treat the expansion as an incomplete arithmetic expansion and report a syntax error. A conforming application shall ensure that it separates the "$(" and '(' into two tokens (that is, separate them with white space) in a command substitution that starts with a subshell. For example, a command substitution containing a single subshell could be written as: ksh 's ((...)) also conflicts with nested subshells. While POSIX doesn't specify ((...)) , it does allow ksh's behaviour. In practice, when nesting subshells and/or cmdsubsts, you should make sure to include white space between the parens: echo "$( (...) )"( (a; b) | (c;d) )
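A quick way to see the disambiguation in practice (the spaced form is presumably what the quoted POSIX example showed, i.e. $( (command) )):

    $ echo "$((1 + 2))"        # arithmetic expansion
    3
    $ echo "$( (echo hi) )"    # command substitution containing a subshell
    hi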
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/565778", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/160697/" ] }
565,785
While looking for the plain truth on echo I found this page: https://pubs.opengroup.org/onlinepubs/9699919799/utilities/echo.html It's normally a HTML frame on this site https://pubs.opengroup.org/onlinepubs/9699919799/ (where you can search for "echo"). This claims to be POSIX, but I see no -n and I see \c instead! What have I found? GracefulRestart points out that /bin/echo recognises \c but it doesn't do that by default: I must do echo -e for \c to be recognised.
You have found IEEE 1003.1-2017, a.k.a. the Single Unix Specification , published by The Open Group. For more, see " What exactly is POSIX? ", " Difference between POSIX, Single UNIX Specification, and Open Group Base Specifications? ", and all of their linked questions and answers. The -n is there, in boldface no less so it is hard to miss. And yes, \c is standard. The variations in behaviour of echo are notorious. You should not be surprised that /bin/echo is not the same as a shell built-in echo , and that one requires an -e where another does not. It's not even that simple. For a long explanation, see " Why is printf better than echo? ". For the little-known variability of printf , ironically involving the very same \c escape sequence, see " Bash printf formating not working ".
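If portability is the actual concern, the usual advice from those linked Q&As is to sidestep echo's quirks entirely with printf. For example, these behave the same in any POSIX shell:

    printf '%s\n' 'some text'          # like echo, but no escape surprises
    printf 'no trailing newline'       # the portable equivalent of echo -n
    printf 'tab:\there\n'              # backslash escapes only in the format string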
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/565785", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/16446/" ] }
565,811
I have test data in a file text.txt abtesttest21,23,3 I want to output the file starting from the line number where test is + 2. I need this to be a oneliner usable in gnuplot , i have comeup with the following: awk -v linestart=$(awk '$0~"test" {a=NR}END{print a+2}' $filename) 'BEGIN{FS=",";OFS="\t";lines}NR>=linestart{print $1, $2}' $filename but i need somehow to supply the file contents to two awk 's which i do not know how to do. So i came up with solution with the $filename but this has the problem, how to get the $filename in. I was thinking along the lines: echo "test.txt" | read filename | awk -v linestart=$(awk '$0~"test" {a=NR}END{print a+2}' $filename) 'BEGIN{FS=",";OFS="\t";lines}NR>=linestart{print $1, $2}' $filename but that does not work. How else can i make the above work? The obvious problem is that i need to know the number of the line where i want to start printing before i run awk . i was also thinking something along this: awk 'BEGIN{FS=",";OFS="\t";lines=100000}{if ($0~"test"){lines=NR+2}; if(NR>=lines){print $1, $2}}' But i did not even try it since, it is very ugly and not general, i have to make the variable lines always sufficiently big. So is there an elegant solution that would work with a normal text file pipe or in the other case with some way of pushing the file name inside?
Using ed : $ printf '%s\n' '/^test/+2,$p' | ed -s file1,23,3 In the ed editor, the command /^test/+2,$p would print ( p ) the lines from two lines beyond the line matching ^test , to the end ( $ ). Using awk : $ awk '/^test/ { flag = 1; count = 1 }; (flag == 1 && count <= 0); { count-- }' file1,23,3 Here, a line will be printed if flag is 1 and if count is less than or equal to zero. The flag is set to 1 when the pattern ^test is matched in the input data, and count is then also set to the number of lines to skip until the output should start (not counting the current line). The count is decreased for all lines. A slightly different approach with awk : $ awk '/^test/ { getline; while (getline > 0) print }' file1,23,3 Here, we match our pattern and then immediately read and discard the next line of input. Then we use a while loop to read the rest of the file, printing each line read. The exact same approach, but with sed : $ sed -n -e '/^test/ { n' -e ':again' -e 'n; p; b again' -e '}' file1,23,3 Match the pattern, then read and discard the next line ( n ), then get into a loop reading and printing each line ( n; p; ). The loop is made up of the label again and the branching/jumping to this label ( b again ).
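Another approach along the same lines, assuming only the first match matters and that grep supports -m (GNU and BSD grep both do): compute the starting line number, then hand it to tail.

    start=$(grep -n -m1 '^test' file | cut -d: -f1)
    tail -n "+$((start + 2))" file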
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/565811", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/172751/" ] }
565,905
I'm using macOS 10.15.2 with iTerm2, zsh 5.7.1 and oh-my-zsh (theme robbyrussell). I noticed that the prompt print is slightly slow respect to the bash one. For example, if I press enter , cursor initially goes at the beginning of the next line then, after a little while, the shell prompt comes in and the cursor is moved to its natural position. For example, if → ~ is the prompt when I'm in my home folder, and [] is my cursor, when I press enter I see: 0 - Idle status → ~ [] 1 - Immediately after pressing enter [] 2 - Back to idle status → ~ [] This slowness is particularly evident when I quickly press enter multiple times. In this case, I see some blank lines. This is what I see → ~→ ~→ ~→ ~→ ~→ ~→ ~→ ~→ ~ [] I come from bash shell and when I use bash, there is not such a slowness. I'm not sure this is an issue of oh-my-zsh or its natural behavior. I'd like to know more about this and, eventually, how to fix it. Thanks. PS : the problem comes from oh-my-zsh and it persists even if I disable all the plugins. PPS : I previously posted this question on SO. Thanks to user1934428 for his help and for suggesting me to move this question here.
I don't know what oh-my-zsh puts in the prompt by default. Maybe it tries to identify the version control status, that's a very popular prompt component which might be time-consuming. To see what's going on, turn on command traces with set -x . → ~ → ~ set -x trace of the commands that are executed to calculate the prompt → ~ trace of the commands that are executed to calculate the prompt → ~ set +x +zsh:3> set +x→ ~ → ~ If the trace is so long that it scrolls off the screen, redirect it to a file with exec 2>zsh.err This directs all error messages to the file, not just the trace. To get traces and errors back on the terminal, run exec 2>/dev/tty You can customize the trace format through PS4 . This is a format string which can contain prompt escapes . For example, to add precise timing information: PS4='%D{%s.%9.}+%N:%i> '
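If the trace points at oh-my-zsh's own functions, zsh's built-in profiler can help narrow things down. A sketch; note it profiles function calls during startup rather than each prompt redraw, so treat it as a complement to set -x:

    # first line of ~/.zshrc
    zmodload zsh/zprof
    # ... the rest of .zshrc / oh-my-zsh setup ...
    # then, in a new interactive shell:
    zprof | head -20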
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/565905", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/180227/" ] }
565,949
I have two packages which are in conflict after installing a new one with pacman on arch. How can I list all installed packages that are depending on the ones in conflict? Or more general: How can I list all installed packages that are depending on a certain other package
To list the dependencies, use pacman -Si (i.e., pacman --sync --info ) or pacman -Qi (i.e., pacman --query --info ). To list the reverse dependencies: pacman -Sii (i.e., pacman --sync --info --info ; yes, two infos). Arch Linux: Querying package dependencies
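If pacman-contrib is installed, pactree shows the same information as a tree, which can be easier to read when chasing a conflict; -r reverses the direction:

    $ pactree -r package_name                        # what depends on package_name
    $ pacman -Qi package_name | grep 'Required By'   # same data, flat list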
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/565949", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/258564/" ] }
565,997
Sudo in Debian 10 is driving me mad. I have a clean, vanilla install of Debian 10. During installation, I've set a root password as well as created a normal user, let's name him "tom". Logged in as Tom, I open a terminal and try to add tom to sudo: su [entering root password] whoami [root] /usr/sbin/usermod -a -G sudo tom No errors, all seems well. Testing it: su tom whoami [tom] sudo echo "hello"; [hello] Works as expected. Next, I close the terminal. Still logged in as Tom in this desktop session, I open a new terminal: sudo echo "hello"; [error: tom is not sudoers file] The sudo add is not persisted, as soon as I close a terminal, it's gone. I can confirm this with the "groups" command. After the above sequence of commands, it successfully lists "tom" as part of the sudo group. After closing the terminal and opening a new terminal, "groups" shows "tom" as not part of the sudo group. Why is my change not persisted? (on a probably unrelated note, I'm unable to visually log into the system using my root account, despite knowing sure the password is correct).
Your change is being persisted, and you can verify that by running grep sudo /etc/group which should show tom as a member of group sudo . What is happening is that your user’s groups aren’t being reloaded. When you run su , you effectively log in again, and the resulting shell is set up with tom as a member of group sudo . But your desktop session isn’t, and won’t be until you log out and log back in again, or perhaps not even then (if your systemd user session persists, for example).
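If logging out is inconvenient, a newly added group can usually be picked up in a single shell with newgrp. A sketch; it only affects that one shell, not the whole desktop session:

    $ newgrp sudo
    $ groups              # should now include sudo
    $ sudo -v             # prompts for tom's password instead of refusing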
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/565997", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/393929/" ] }
566,066
Just to be clear I want to comment crontab entries, not a basic file. Usually, I do it like crontab -e 30 * * * * /u01/app/abccompny/scripts/GenerateAWRReport.pl01,31 * * * * /u01/app/abccompny/scripts/table_growth_monitor.sh30 0,4,8,12 /u01/shivam/script/getMongoData.sh and I add "#" in front of each line and just save it. Similarly after work is done I remove the "#". #30 * * * * /u01/app/abccompny/scripts/GenerateAWRReport.pl#01,31 * * * * /u01/app/abccompny/scripts/table_growth_monitor.sh#30 0,4,8,12 /u01/shivam/script/getMongoData.sh Is there an efficient way to do this using the script?
Export your current crontab into a file, delete the crontab, then use the previously created file. $ crontab -l > cron_content$ crontab -r$ <this is where you do your stuff>$ crontab cron_content
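If the goal is just to comment everything out and back in, the dump can be filtered through sed on the way back in. A sketch; it comments every line, including ones that were already comments, which is why the untouched copy is what gets restored:

    crontab -l > cron_backup                 # untouched copy
    sed 's/^/#/' cron_backup | crontab -     # disable: prefix every line with #
    # ... later ...
    crontab cron_backup                      # restore the original entries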
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/566066", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/390225/" ] }
566,071
I'd like to insert a line at the 3rd line following the pattern. Example: insert the word z 3 lines after each match of a From a b1c1d1a b2c2d2... to a b1c1zd1a b2c2zd2...
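A sketch of one way to get the shown output with awk, assuming the matches are the lines starting with a and that z should be inserted after the first line following each match, as in the example (adjust c for a different offset); this is only a sketch, not a tested answer from the original thread:

    awk '{ print; if (c && !--c) print "z" }  /^a/ { c = 1 }' file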
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/566071", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/189711/" ] }
566,081
I want to copy a long file into clipboard with xsel,in my local pc ,just input: cat /usr/bin/mysql_secure_installation|xsel -b The file mysql_secure_installation located in my local os was copied into clipboard. Now login to my vps with ssh command,then input DISPLAY=:0 cat /usr/bin/mysql_secure_installation|xsel -b I come across the error info: xsel: Can't open display: (null): Inappropriate ioctl for device I know that the proper way to get remote file is scp command: scp -P port username@tohostname:/remotefile /newlocalfile I just wonder how to get file content copied into clipboard after loginning my vps,copy file from remote machine to the clipboard of my local computer. Same error: DISPLAY=:0 xsel -b < /usr/bin/mysql_secure_installationxsel: Can't open display: (null): Connection refused
Start your ssh connection with ssh -X yourserver then don't change the DISPLAY variable yourself. ssh should set it up automatically, so that xsel (and any other X11 clients) can use the display on your local computer -- including its clipboard. This is called X11 forwarding . Manually setting DISPLAY=:0 in your ssh session will tell xsel to use the display on that remote server -- if one is even running.
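An alternative that avoids X11 forwarding altogether is to run the whole thing from the local machine, keeping xsel local:

    ssh user@vps cat /usr/bin/mysql_secure_installation | xsel -b

Here ssh runs cat on the VPS and the pipe feeds the output to the local xsel, so it lands in the local clipboard.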
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/566081", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/102745/" ] }
566,190
I recently came across a situation where I wanted to change my systemd's log level to debug , but wanted to make sure I set it back to what it was previously. All my searches related to this only found user service log level settings rather than the actual system log level. I would like the "get" version of: $ systemd-analyze set-log-level
The "get" version is — or rather was — since September 2017 (v235), unsurprisingly: systemd-analyze get-log-level But these subcommands were moved from systemd-analyze to systemctl in November 2019 (v244-rc1), having been combined in January 2018 (v237) into a single subcommand: systemctl log-level Further reading https://github.com/systemd/systemd/commit/ef5a8cb1a7e4529b2b69c4d5a3dcd34e30534f54 https://github.com/systemd/systemd/commit/90657286fcc2e76a6c76b2c7df6f20f222051c1f https://github.com/systemd/systemd/commit/38fcb7f766c84736425e86854b8a4468c126dafa
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/566190", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/394124/" ] }
566,381
I have a PC with Proxmox. I have been using it half year ago. It was 5.4 version. I've started it yesterday, but couldn't connect to it using webinterface: 192.168.1.21:8006 . Chrome said: ERR_EMPTY_RESPONSE Looking for solution, I've found it could be solved by 1) Upgrade. I upgraded from 5.4 to 6.1 and it did not resolved the issue 2) Reset certificates: pvecm updatecerts -f . It did not resolved the issue.3) Clear browser's cookies. There were no cookies. I've also used Chrome's incognito mode and different browsers that never been connected to my Proxmox server. root@proxmox:~# netstat -na | grep 8006tcp 0 0 0.0.0.0:8006 0.0.0.0:* LISTENroot@proxmox:~# pveversionpve-manager/6.1-7/13e58d5e (running kernel: 5.3.13-3-pve)root@proxmox:~# systemctl status pveproxy● pveproxy.service - PVE API Proxy Server Loaded: loaded (/lib/systemd/system/pveproxy.service; enabled; vendor preset: enabled) Active: active (running) since Fri 2020-02-07 23:18:10 EET; 34min ago Process: 1009 ExecStartPre=/usr/bin/pvecm updatecerts --silent (code=exited, status=0/SUCCESS) Process: 1011 ExecStart=/usr/bin/pveproxy start (code=exited, status=0/SUCCESS) Main PID: 1013 (pveproxy) Tasks: 4 (limit: 4915) Memory: 127.7M CGroup: /system.slice/pveproxy.service ├─1013 pveproxy ├─1014 pveproxy worker ├─1015 pveproxy worker └─1016 pveproxy workerFeb 07 23:18:08 proxmox systemd[1]: Starting PVE API Proxy Server...Feb 07 23:18:10 proxmox pveproxy[1013]: starting serverFeb 07 23:18:10 proxmox pveproxy[1013]: starting 3 worker(s)Feb 07 23:18:10 proxmox pveproxy[1013]: worker 1014 startedFeb 07 23:18:10 proxmox pveproxy[1013]: worker 1015 startedFeb 07 23:18:10 proxmox pveproxy[1013]: worker 1016 startedFeb 07 23:18:10 proxmox systemd[1]: Started PVE API Proxy Server. I could connect to it using telnet 192.168.1.21 8006 . Log displayed no errors. Runing by pveproxy -debug=1 start displayed nothing special in case of browser's page refreshing: root@proxmox:~# pveproxy start -debug=19190: ACCEPT FH10 CONN19191: ACCEPT FH10 CONN1close connection AnyEvent::Handle=HASH(0x560ee2f16cf0)9190: CLOSE FH10 CONN0close connection AnyEvent::Handle=HASH(0x560ee2f16cf0)9191: CLOSE FH10 CONN09191: ACCEPT FH10 CONN1close connection AnyEvent::Handle=HASH(0x560ee2f13ac0)9191: CLOSE FH10 CONN09190: ACCEPT FH10 CONN1close connection AnyEvent::Handle=HASH(0x560ee38bff60)9190: CLOSE FH10 CONN09189: ACCEPT FH10 CONN1close connection AnyEvent::Handle=HASH(0x560ee2f16cf0)
While writing up the question, I was reviewing the advice that did not work; I wanted to include it in the question to show what I had tried and what did not help. While looking for that I found a suggestion to check the protocol, and that turned out to be the cause of the issue. I had not used Proxmox in a very long time, so I forgot it is only accessible through https://192.168.1.21:8006 , not just 192.168.1.21:8006 . After accessing it through https , Chrome remembered that and kept using https even if I typed http://...
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/566381", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/341457/" ] }
566,456
I used docker-compose from the Openpoiservice project . Both Docker containers were launched successfully. kshnkvn@kshnkvn-vb:~$ docker psCONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES10fafbab73dc openpoiservice_gunicorn_flask "/ops_venv/bin/gunic…" 23 minutes ago Up 22 minutes 0.0.0.0:5000->5000/tcp openpoiservice_gunicorn_flask_1a66fe5691455 kartoza/postgis:11.0-2.5 "/bin/sh -c /docker-…" 23 minutes ago Up 22 minutes 5432/tcp openpoiservice_psql_postgis_db_1 But when trying to check the service for functionality,it could not connect to the database. I tried to do it manually: kshnkvn@kshnkvn-vb:~$ docker exec -it 10fafbab73dc /bin/bashroot@10fafbab73dc:/deploy/app# psql -h localhost -U gis_admin-gispsql: could not connect to server: Connection refused Is the server running on host "localhost" (127.0.0.1) and accepting TCP/IP connections on port 5432?could not connect to server: Cannot assign requested address Is the server running on host "localhost" (::1) and accepting TCP/IP connections on port 5432?root@10fafbab73dc:/deploy/app# Strange, checked just in case that the type of container network is the bridge: kshnkvn@kshnkvn-vb:~$ docker network lsNETWORK ID NAME DRIVER SCOPE81001dac99c0 bridge bridge local8e65fb4ef6f8 host host local94ce4e1605ef none null locala3f48ac3facc openpoiservice_default bridge locale3d4286df013 openpoiservice_poi_network bridge local Checked Postgres launch logs: kshnkvn@kshnkvn-vb:~$ docker logs a66fe5691455Add rule to pg_hba: 0.0.0.0/0Add rule to pg_hba: replication replicator Setup master databasepsql: could not connect to server: No such file or directory Is the server running locally and accepting connections on Unix domain socket "/var/run/postgresql/.s.PGSQL.5432"?2020-02-08 13:50:20.675 UTC [25] LOG: listening on IPv4 address "127.0.0.1", port 54322020-02-08 13:50:20.683 UTC [25] LOG: listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"2020-02-08 13:50:20.756 UTC [37] LOG: database system was interrupted; last known up at 2020-02-08 13:35:17 UTC2020-02-08 13:50:21.830 UTC [48] postgres@postgres FATAL: the database system is starting uppsql: FATAL: the database system is starting up2020-02-08 13:50:22.726 UTC [37] LOG: database system was not properly shut down; automatic recovery in progress2020-02-08 13:50:22.730 UTC [37] LOG: redo starts at 0/21CCC502020-02-08 13:50:22.730 UTC [37] LOG: invalid record length at 0/21CCC88: wanted 24, got 02020-02-08 13:50:22.730 UTC [37] LOG: redo done at 0/21CCC502020-02-08 13:50:22.867 UTC [25] LOG: database system is ready to accept connections List of databases Name | Owner | Encoding | Collate | Ctype | Access privileges -----------+-----------+----------+---------+---------+----------------------- gis | gis_admin | UTF8 | C.UTF-8 | C.UTF-8 | postgres | postgres | UTF8 | C.UTF-8 | C.UTF-8 | template0 | postgres | UTF8 | C.UTF-8 | C.UTF-8 | =c/postgres + | | | | | postgres=CTc/postgres template1 | postgres | UTF8 | C.UTF-8 | C.UTF-8 | =c/postgres + | | | | | postgres=CTc/postgres(4 rows)postgres readySetup postgres User:PasswordCreating superuser gis_adminALTER ROLECreating replication user replicatorALTER ROLEgis db already exists List of databases Name | Owner | Encoding | Collate | Ctype | Access privileges -----------+-----------+----------+---------+---------+----------------------- gis | gis_admin | UTF8 | C.UTF-8 | C.UTF-8 | postgres | postgres | UTF8 | C.UTF-8 | C.UTF-8 | template0 | postgres | UTF8 | C.UTF-8 | C.UTF-8 | =c/postgres + | | | | | postgres=CTc/postgres template1 | postgres | 
UTF8 | C.UTF-8 | C.UTF-8 | =c/postgres + | | | | | postgres=CTc/postgres(4 rows)2020-02-08 13:50:24.785 UTC [25] LOG: received smart shutdown request2020-02-08 13:50:24.799 UTC [25] LOG: background worker "logical replication launcher" (PID 58) exited with exit code 12020-02-08 13:50:24.801 UTC [53] LOG: shutting down2020-02-08 13:50:24.838 UTC [25] LOG: database system is shut downPostgres initialisation process completed .... restarting in foreground2020-02-08 13:50:25.842 UTC [148] LOG: listening on IPv4 address "0.0.0.0", port 54322020-02-08 13:50:25.842 UTC [148] LOG: listening on IPv6 address "::", port 54322020-02-08 13:50:25.850 UTC [148] LOG: listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"2020-02-08 13:50:25.880 UTC [150] LOG: database system was shut down at 2020-02-08 13:50:24 UTC2020-02-08 13:50:25.887 UTC [148] LOG: database system is ready to accept connections It looks like Postgres started on IP address 0.0.0.0. I looked at what IPs are used by the Docker ip addr show command. Tried to reconnect using this IP: psql: could not connect to server: Connection refused Is the server running on host "172.17.0.1" and accepting TCP/IP connections on port 5432?root@10fafbab73dc:/deploy/app# psql -h 172.17.255.255 -U gis_admin-gispsql: could not connect to server: Connection timed out Is the server running on host "172.17.255.255" and accepting TCP/IP connections on port 5432? What can I try to do to connect the script to the database?
TLDR; psql -h psql_postgis_db -U gis_admin gis# orpsql -h psql_postgis_db gis gis_admin Problem with server address None of the IPs you are trying are actually correct. 127.0.0.1 is the localhost address. Since you launch the command from your flask container, there is no postgres service running there. 172.17.0.1 is the ip of a docker bridge. This is actually the ip of your docker engine host as seen by your containers on the same bridge. Unless you have a postgres running on your machine and listening to that ip, you'll get no answer (and this would not be the correct postgres server anyway). 172.17.255.255 is a network broadcast address for the previous bridge network. From your start logs we can see that your postgres should be listening correctly. 0.0.0.0 is not actually a real ip: it stands for "any ip configured on this host". You could look for the ip of your postgres container to contact it (see for example this answer on SO ), but you don't even have to. docker/docker-compose are making this easy for you by mapping container/service names on the same network to their respective IPs automagically. So your db server is reachable using the service name psql_postgis_db Problem with user and db name I don't really get what you wrote in your -U option to the psql command (a mix of user + db name...). Anyway, it should be the username you want to use to connect to the postgres server. From your compose file it is gis_admin . Since there is no db with the same name as the user, you need to specify the db name you want to connect to. You either use the -U option ( psql -U <user> <db> ) or use positional parameters ( psql <db> <user> )
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/566456", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/371053/" ] }
566,550
In terminal, if I define some variable char as follows: export char=\'\\\"\?\! In effect, char is the string '\"?! And then I use the tr command to replace '\"?! with numbers 01234 tr "\'\\\"\?\!" "01234" And I thought I would get 01234 Instead, I got 0\123 I would be really grateful if someone could explain to me what happened. It seems replacing each character individually with the sed command avoids this problem, but why?
Not just the shell, but tr itself also interprets backslash as a special escaping character, see its manual for details. So you need to make sure that tr receives literal \\ (two backslashes) when you want to replace backslashes. This might be done e.g. by char=...\\\\... in the shell, this part doesn't need further explanation since you understand correctly how the shell handles the backslash. This might be inconvenient for you here, but is convenient in many other situations, and allows sets of characters, or the NUL byte to be part of the search or replace set (which wouldn't be possible otherwise). E.g. to convert NUL-delimited strings to newline-delimited you can do something like tr '\0' '\n' < /proc/1234/environ , or to lowercase a string use tr '[:upper:]' '[:lower:]' . These wouldn't be possible if tr didn't have an escape character.
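In other words, the shell has to hand tr a doubled backslash. One way to build the set without miscounting escapes, reusing the same replacement as in the question (a sketch):

    char=\'\\\"\?\!                 # char holds the five characters  ' \ " ? !
    set1="'"'\\"?!'                 # six characters for tr:  ' \ \ " ? !
    printf '%s\n' "$char" | tr "$set1" '01234'
    # prints: 01234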
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/566550", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/394462/" ] }
566,576
The following variable include for example this values echo $SERVERSserver1,server2,server3,server4,server5 and when I want to pipe them on different lines then I do the following echo $SERVERS | tr ',' '\n'server1server2server3server4server5 now I want to add another pipe ( echo $SERVERS | tr ',' '\n' | ..... ) , in order to print the following expected results 1 ……………… server12 ………………… server23 ………………… server34 ………………… server45 ………………… server56 ……………… server67 ………………… server78 ………………… server89 ………………… server910 ………………… server1011 ………………… server1112 ………………… server12 Not sure how to do it but maybe with nc command os similar Any suggestion?
With awk: $ servers='server1,server2,server3,server4,server5'$ awk -v RS=, '{print NR "........" $0}' <<<"$servers"1........server12........server23........server34........server45........server5 or, to output the line numbers with left-padding awk -v RS=, '{printf "%3d........%s\n",NR,$0}' <<<"$servers" (choose the field width 3 as appropriate for the size of your server list).
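nl can do the numbering too, with -w controlling the width of the number column and -s supplying the separator string:

    $ tr ',' '\n' <<<"$servers" | nl -w3 -s'.........'
      1.........server1
      2.........server2
      ...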
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/566576", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/237298/" ] }
566,686
Background I log into a server to do scientific computations. It runs 'Scientific Linux version 7.4'. In order to get access to different software I have to run a command like 'module load x'. For instance to use python I need to write 'module load python'. I don't know much about this module system but from what I can tell it just modifies some environmental variables. Typing "module show python" reveals module-whatis This module sets up PYTHON 3.6 in your environment.conflict pythonappend-path MODULEPATH /global/software/sl-7.x86_64/modfiles/python/3.6setenv PYTHON_DIR /global/software/sl-7.x86_64/modules/langs/python/3.6prepend-path PATH /global/software/sl-7.x86_64/modules/langs/python/3.6/binprepend-path CPATH /global/software/sl-7.x86_64/modules/langs/python/3.6/includeprepend-path FPATH /global/software/sl-7.x86_64/modules/langs/python/3.6/includeprepend-path INCLUDE /global/software/sl-7.x86_64/modules/langs/python/3.6/includeprepend-path LIBRARY_PATH /global/software/sl-7.x86_64/modules/langs/python/3.6/libprepend-path PKG_CONFIG_PATH /global/software/sl-7.x86_64/modules/langs/python/3.6/lib/pkgconfigprepend-path MANPATH /global/software/sl-7.x86_64/modules/langs/python/3.6/share/man When I load python I also gain access to conda (whose executable is found in /global/software/sl-7.x86_64/modules/langs/python/3.6/bin). Problem Normally I cannot run conda without first loading the python module. But recently I noticed that this changed and now I can run conda without loading the python module. This confused me so I typed 'which conda' to see if I could find what executable is being run, but when I do it says that 'no conda is found' in any of the directories on my PATH variable. How is it possible that 'which' cannot find the conda executable despite the fact that I can still run conda?
You probably have an alias or a shell function called “ conda ”. Type type conda and see what it says.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/566686", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/353029/" ] }
566,762
Having a pathname it is possible to extract its filename , excluding its apriori known extension, with basename : $ pathname="/home/paulo/paulo.pdf"$ printf "%s\n" "$(basename $pathname .pdf)"paulo But if the extension is not known how can this be done?
In the zsh shell: $ pathname=/home/paulo/paulo.pdf$ printf '%s\n' $pathname:t:rpaulo The :t modifier ("tail") extracts the last pathname component in $pathname (it works like basename ). The :r modifier ("root", I suppose) extracts the bit of the filename up to the extension, if there is one. The extension is the part of the filename that occurs after the last dot. This means that you would get an empty result for filenames like .zshrc . The other related modifiers are :h ("head"), which works like dirname , and :e ("extension"), which extracts the extension only.
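For scripts that cannot rely on zsh, the same result can be had with standard parameter expansion in any POSIX shell (with the same caveat about names like .zshrc):

    pathname=/home/paulo/paulo.pdf
    base=${pathname##*/}            # strip the directory part  -> paulo.pdf
    printf '%s\n' "${base%.*}"      # strip the last .extension -> paulo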
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/566762", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/40484/" ] }
566,796
How can I add elements of a list as a prefix plus a "_" to filenames? filenames: aaa.gfbbr.gfcee.gf list.txt: pplo125ss35w2 wanted result: pplo_aaa.gf125ss_bbr.gf35w2_cee.gf All elements are on the same folder. All target files end in .gf. Lines of list.txt should correspond to filenames alphabetically sorted, as shown in the example. Got stuck on: for f in *.gf; do mv "$f" LINE_"$f"; done Don't know how to make LINE work. Thanks.
How about mapfile -t list < list.txti=0for f in *.gf; do echo mv "$f" "${list[i++]}_$f"done Remove the echo once you are happy that it is doing the right thing.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/566796", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/337841/" ] }
566,981
When I tried to implement the C string library myself, I found that glibc and the Linux kernel implement some functions in different ways. For instance, glibc memchr and glibc strchr use some tricks to speed up the function, but the kernel memchr and the kernel strchr don't. Why aren't the Linux kernel functions optimized like glibc's?
The kernel does have optimised versions of some of these functions, in the arch-specific directories; see for example the x86 implementation of memchr (see all the memchr definitions , and all the strchr definitions ). The versions you found are the fallback generic versions; you can spot these by looking for the protective check, #ifndef __HAVE_ARCH_MEMCHR for memchr and #ifndef __HAVE_ARCH_STRCHR for strchr . The C library’s optimised versions do tend to used more sophisticated code, so the above doesn’t explain why the kernel doesn’t go to such great lengths to go fast. If you can find scenarios where the kernel would benefit from a more optimised version of one of these functions, I imagine a patch would be welcome (with appropriate supporting evidence, and as long as the optimised function is still understandable — see this old discussion regarding memcpy ). But I suspect the kernel’s uses of these functions often won’t make it worth it; for example memcpy and related functions tend to be used on small buffers in the kernel. And never discount the speed gains from a short function which fits in cache or can be inlined... In addition, as mentioned by Iwillnotexist Idonotexist , MMX and SSE can’t easily be used in the kernel , and many optimised versions of memory-searching or copying functions rely on those. In many cases, the version used ends up being the compiler’s built-in version anyway, and these are heavily optimised, much more so than even the C library’s can be (for example, memcpy will often be converted to a register load and store, or even a constant store).
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/566981", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/259329/" ] }
566,983
A while ago I made a backup of an entire disk using dd if=/dev/nvme0n1 conv=sync,noerror bs=64K | gzip -c > backup.img.gz Today I restored this backup to the same disk using gunzip -c backup.img.gz | dd of=/dev/nvme0n1 dd exited with the following error message: dd: writing to '/dev/nvme0n1': No space left on device1000215217+0 records in1000215216+0 records out512110190592 bytes (512 GB, 477 GiB) copied, 5769.06 s, 88.8 MB/s Do I have to assume that the restore process failed? If so, what can I do to restore my disk? I also have a backup of fdisk -l /dev/nvme0n1 , and now after the restore operation, the output of fdisk -l is the same as before, but I don't know if that is any guarantee of success.
It's possible for dd conv=sync,noerror (or dd conv=noerror,sync ) to corrupt data in some cases. However, in your case it's probably simply surplus zeroes at the end of the file. If your device is not an exact multiple of 64K, your dd command would have filled the last 64K block with zeroes in the image file. Those additional zeroes can't be restored, which is harmless. To verify that theory, you could run some commands: # blockdev --getsize64 /dev/nvme0n1expected result: 512110190592# gunzip < backup.img.gz | wc --bytesexpected result: 512110231552 (next multiple of 64K) If that is correct then you're probably okay here.
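To verify the restore directly rather than reason about it, the device can be compared against the decompressed image, limited to the real device size. A sketch using GNU cmp's -n option (it will take a while on 512 GB):

    size=$(blockdev --getsize64 /dev/nvme0n1)
    gunzip -c backup.img.gz | cmp -n "$size" - /dev/nvme0n1 && echo "restore verified"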
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/566983", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/307795/" ] }
567,053
I'd like to copy a public ssh key from the ~/.ssh/id_rsa.pub file on my local machine to the ~/.ssh/authorized_keys file on a remote host that is two ssh hops away. In other words, localhost only has ssh access to host1 , but host1 has ssh access to host2 . I want to copy my public ssh key from localhost to host2 . To copy an ssh key to a remote host one hop away, the ssh documentation gives the command: ssh-copy-id -i ~/.ssh/mykey user@host Is there a way to copy the key to a machine that is two hops away in a single command?
You can pass any ssh option to ssh-copy-id with the -o option. By using the ProxyJump option you can use ssh-copy-id to copy your key to a host via jump host. Here's an example where I copy my ssh key to leia.spack.org via the jump host jump.spack.org: $ ssh-copy-id -o ProxyJump=jump.spack.org [email protected]'s password:Number of key(s) added: 1 And then test it with: $ ssh -J jump.spack.org leia.spack.orgWelcome to Ubuntu 20.04 LTS (GNU/Linux 5.4.0-42-generic x86_64)
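The same jump can be made permanent in ~/.ssh/config, after which a plain ssh-copy-id (and ssh) to the inner host just works; the hostnames below are the ones from the example above:

    # ~/.ssh/config
    Host leia.spack.org
        ProxyJump jump.spack.org

    $ ssh-copy-id leia.spack.org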
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/567053", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/311384/" ] }
567,057
I have a scenario where I need to install an arbitrary list of RPM packages (usually around 5-10 packages) on a server with no network connection. Some packages are from EPEL and I want to avoid having to sync all repos since I need to do this often. I have solved this for RHEL/CentOS 7 by doing the following: $ yum -y install epel-release createrepo$ repotrack $PACKAGE_NAME$ createrepo --database . Then I just move this folder onto the server using a USB drive and create a repository file in /etc/yum.repos.d , allowing me to install the package with yum --disablerepo="*" --enablerepo="my-custom-repo" install $PACKAGE_NAME . Now I am moving this to RHEL/CentOS 8 and while it works for half of my packages, I get the following error for the other half when I do dnf install on the isolated server: No available modular metadata for modular package 'podman-1.6.4-2.module_el8.1.0+272+3e64ee36.x86_64', it cannot be installed on the system I figure this is due to the new modular system and my repo does not have all the necessary info. I have tried to read the manual for both repotrack and createrepo but none of them seem to mention modules. Searching the Internet just gave me solutions for 7, which I already have, but I failed to find anything for 8 and packages that belongs to modules in particular. So how do I fetch RHEL/CentOS 8 packages belonging to modules, and all their dependencies, to disk, so I can then move them to another server and install them there? Thanks!
Take a look at the modulemd-tools project. You can find precompiled binaries in EPEL . Assuming you have several modular rpms in ./my-custom-repo/Packages : modular rpms with names like python36-3.6.8-2.module_el8.1.0+245+c39af44f.x86_64.rpm Run: cd my-custom-repo# create traditional rpm repocreaterepo_c .# generate modules meta inforepo2module -s stable -d . modules.yaml# adjust modules meta info to traditional rpm repomodifyrepo_c --mdtype=modules modules.yaml repodata/ After all this work, you can find a file named like xxxx-modules.yaml.gz in the repodata dir. The repo should work now.
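On the offline machine, dnf can be pointed at the copied directory without even writing a .repo file; a sketch using --repofrompath, which defines a temporary repo for that one transaction (add --nogpgcheck if the packages are unsigned):

    dnf --disablerepo='*' \
        --repofrompath=my-custom-repo,/path/to/my-custom-repo \
        --enablerepo=my-custom-repo \
        install podman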
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/567057", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/147816/" ] }
567,071
The problem I have is that is trying to match both sets of delimiter (above and below) I'm trying to match only the second part of the delimiter below (bolded). This is so I can add a new version made on the same day to multiple files. Using perl, so I can get a result like this when I make the replacement How ever according to https://regex101.com/ (and my experience when I ran the command) it selects both sets of delimiters, making a replacement above and below. This is the RegEx I'm using (?!V[0-9]{2}.[0-9]{2}.[0-9]{4}.1)(.*=.$) And the comand in UNIX: perl -pe 's#(?!V[0-9]{2}.[0-9]{2}.[0-9]{4}.1)(.*=.$)#-* V02.11.2020.1 11/Feb/2020 Author2 Minor Changed Include lms \n -* ================ ============= ==================== =========== ========================================================/#g' path/to/file Is there a way to select the one below? Or the problem originates from the Negative Lookahead?-********************************************************************** EDIT I used the command selected by bey0nd 3,$s/ -\* =[=[:space:]]*\// -* V02.11.2020.1\t 11\/Feb\/2020\t Author2\t\t Minor\tChange include 1ms\n\0/1 It helped a lot with readability But I'm still getting both delimeters (= signs) repalced. I thought that the lookaround function of regex would've helped I'm using perl 5 and sed 4.2 At least I got it to work in regex101.com, but in my version didn't work Hope someone finds it useful (-\* =[=[:space:]]*\/)(?!\n.-\*[[:space:]].V[0-9]{2}.[0-9]{2}.[0-9]{4}.1*)
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/567071", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/394927/" ] }
567,090
I have a text file which has 4 columns with 300 lines. I want to add 5th columns where 1st hundred lines will be multiplication of 1*0.02, 2*0.02... and from hundred one lines 2.2, 2.4 ... 0.02 0.04...2 (in 100 lines)2.22.4...12 (in 300 lines) my text file: # "Frame" "Timestep" "WignerSeitz.interstitial_count" "WignerSeitz.vacancy_count"0 0 0 0 1 100 0 0 2 200 0 0 3 300 0 0 ..98 9800 16 16 99 9900 16 16 100 10000 15 15..299 29900 48 48 300 30000 55 55 expected output: # "Frame" "Timestep" "WignerSeitz.interstitial_count" "WignerSeitz.vacancy_count"0 0 0 0 01 100 0 0 0.02 2 200 0 0 0.043 300 0 0 0.06..98 9800 16 16 1.9699 9900 16 16 1.98100 10000 15 15 2..299 29900 48 48 11.8 300 30000 55 55 12
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/567090", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/362256/" ] }
567,093
I have an alias that is set up like this. alias X='`xclip -o --selection primary`' Now this works great if I just want to echo my clipboard value. But I would really like to be able to use it as an argument to other commands, e.g. ssh X . I've tried it as a function as well, but that doesn't seem to work either. I suppose I could store it just as a string and do ssh $(X) , but I would prefer to avoid any syntax like that. From what I've noticed so far, it doesn't seem like arguments get expanded at all; it only seems to work if it's the first thing typed. I mean, I know I could alias X="ssh xclip..." , but I want this to work for every command, not just ssh. So I guess the question is: how can I expand a single letter when it is a command argument?
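For what it's worth, bash only expands aliases in command position, which is why this cannot work as written there; zsh's global aliases do expand anywhere on the line. A sketch of that idea (zsh only; whether the clipboard contents should stay one quoted word is a separate decision):

    # zsh: -g makes the alias expand in any position, not just as a command
    alias -g X='"$(xclip -o --selection primary)"'
    ssh X        # becomes: ssh "$(xclip -o --selection primary)"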
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/567093", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/394949/" ] }
567,202
I have a directory structure as follows: dir |___sub_dir1 |_____files_1 |___sub_dir2 |_____files_2 |___sub_dirN |_____files_N Each sub_directory may or may not have a file called xyz.json . I want to find the total count of xyz.json files in the directory dir . How can I do this?
You can use : find path_to_dir -name xyz.json | wc -l
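With GNU find, a variant that cannot be confused by newlines in directory names counts matches instead of output lines:

    find path_to_dir -type f -name xyz.json -printf '.' | wc -c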
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/567202", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/363033/" ] }