Dataset columns:
  source_id   int64    (values 1 to 74.7M)
  question    string   (length 0 to 40.2k)
  response    string   (length 0 to 111k)
  metadata    dict
696,381
In light of the current security issues with openssl 1.1.1x we had to upgrade our (Ubuntu) systems from source, as apt only showed openssl 1.1.1f as the newest available version. UPDATE (CVE-2022-0778): after running sudo apt update/upgrade, openssl was still version 1.1.1f, which is vulnerable - at least on Ubuntu 20.04.
-p is not a standard option for the type command¹. The type utility itself, though standard, is optional in POSIX (only required on systems implementing XSI, though that is required to obtain UNIX compliance certification). Instead of type, you can use command -v to check for availability of a command. No need to check its output: command will tell you via its exit status whether it succeeded in finding the command (regardless of whether it's an external command found in $PATH, a builtin or a function):

#!/usr/bin/sh -
main() {
  for mycommand do
    printf >&2 '%s\n' "Checking $mycommand"
    if ! command -v -- "$mycommand" > /dev/null 2>&1; then
      printf >&2 '%s\n' "I think I am missing $mycommand"
    fi
  done
}
main less asdf

command is a mandatory POSIX utility. The -v option to command used to be optional, but it no longer is in the latest version of the POSIX specification. Also remember that echo can't be used to display arbitrary data, -- must be used to separate options and non-options, and errors (and in general diagnostics including progress/advisory information) should preferably go to stderr.

¹ it is however an option supported by the type builtin of the bash, mksh, ksh93, busybox ash, zsh and yash shells². In mksh and yash, it's meant to search the command in the standard $PATH (as returned by getconf PATH for instance); in zsh, it's for searching the command only in $PATH, not in aliases/builtins/functions. Same in ksh93, except that it also only prints the path of the found command. In bash (where it used to be -path; still supported though no longer documented), type -p succeeds but prints nothing if the command is a builtin, and prints only the path, like in ksh93, when it's an external command. In busybox ash, it behaves similarly to command -v.

² all more or less conformant interpreter implementations of the standard sh language, at least when invoked as sh, though that doesn't prevent them from supporting extensions over that language in areas where the standard leaves the behaviour unspecified, like here.
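For illustration, here is roughly what running that script would print if it were saved under a hypothetical name check.sh, assuming less is installed and asdf is not:

$ sh check.sh
Checking less
Checking asdf
I think I am missing asdf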
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/696381", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/132753/" ] }
696,392
So, I've got a huge (over 100k records) log file, and need to extract all GPS locations based on their datestamp.

./production.log.109.gz:I, [2022-02-10T10:00:59.703529 #25190] INFO -- : #<Event::TeltonikaServer:3ffcbe931d90>:357544377733734 TS: 2022-02-10 10:00:35 +0000, GPS: 52.1773033,20.8162, SAT: 17, KM/H: 0, V: 26343
./production.log.109.gz:I, [2022-02-10T10:01:13.939349 #25190] INFO -- : #<Event::TeltonikaServer:3ffcbe931d90>:357544377733734 TS: 2022-02-10 10:00:40 +0000, GPS: 52.1773033,20.8162, SAT: 17, KM/H: 0, V: 26352
./production.log.109.gz:I, [2022-02-10T10:10:44.757308 #25190] INFO -- : #<Event::TeltonikaServer:3ffcbe931d90>:357544377733734 TS: 2022-02-10 10:10:40 +0000, GPS: 52.1773033,20.8162, SAT: 18, KM/H: 0, V: 25924

So, basically, for those 3 records I need to find that it's 10th February 2022, then cut and paste the two numbers after "GPS:" into a new file named 2022-02-10.txt, or preferably into a suitable .KML file.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/696392", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/519337/" ] }
696,397
I have this output from Linux.

Size: 8192 MB
Locator: CPU0
Bank Locator: DIMM01
Size: No Module Installed
Locator: CPU0
Bank Locator: DIMM02
Size: 8192 MB
Locator: CPU0
Bank Locator: DIMM03
Size: No Module Installed
Locator: CPU0
Bank Locator: DIMM04
Size: 8192 MB
Locator: CPU0
Bank Locator: DIMM05
Size: No Module Installed
Locator: CPU0
Bank Locator: DIMM06
Size: 8192 MB
Locator: CPU1
Bank Locator: DIMM07
Size: No Module Installed
Locator: CPU1
Bank Locator: DIMM08
Size: 8192 MB
Locator: CPU1
Bank Locator: DIMM09
Size: No Module Installed
Locator: CPU1
Bank Locator: DIMM10
Size: 8192 MB
Locator: CPU1
Bank Locator: DIMM11
Size: No Module Installed
Locator: CPU1
Bank Locator: DIMM12

This is to diagnose DIMMs. I would like to remove the entries for the DIMMs that say "No Module Installed", so when I run the command it will look like this:

Size: 8192 MB
Locator: CPU0
Bank Locator: DIMM01
Size: 8192 MB
Locator: CPU0
Bank Locator: DIMM03
Size: 8192 MB
Locator: CPU0
Bank Locator: DIMM05
Size: 8192 MB
Locator: CPU1
Bank Locator: DIMM07
Size: 8192 MB
Locator: CPU1
Bank Locator: DIMM09
Size: 8192 MB
Locator: CPU1
Bank Locator: DIMM11

The 'Locator:' and 'Bank Locator:' result is not always the same, so I would need to identify the 'Size: No Module Installed' line and then remove the following 2 rows.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/696397", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/518653/" ] }
696,495
I have a file that has entries in key: value format like the below:

cat data.txt
name: 'tom'
tom_age: '31'
status_tom_mar: 'yes'
school: 'anne'
fd_year_anne: '1987'
name: 'hmz'
hmz_age: '21'
status_hmz_mar: 'no'
school: 'svp'
fd_year_svp: '1982'
name: 'toli'
toli_age: '41'

and likewise ... I need to find and print only those key: value that have duplicate keys as a single entry. The below code gets me the duplicate keys:

cat data.txt | awk '{ print $1 }' | sort | uniq -d
name:
school:

However, I want the output where I wish to concatenate the values of duplicate keys in one line. Expected output:

name: ['tom', 'hmz', 'toli']
school: ['anne', 'svp']
tom_age: '31'
status_tom_mar: 'yes'
fd_year_anne: '1987'
hmz_age: '21'
status_hmz_mar: 'no'
fd_year_svp: '1982'
toli_age: '41'

Can you please suggest?
In awk:

$ awk -F': ' '{ count[$1]++; data[$1] = $1 in data ? data[$1]", "$2 : $2 } END { for (id in count) { printf "%s: ",id; print (count[id]>1 ? "[ "data[id]" ]" : data[id]) }}' data.txt
hmz_age: '21'
tom_age: '31'
fd_year_anne: '1987'
school: [ 'anne', 'svp' ]
name: [ 'tom', 'hmz', 'toli' ]
toli_age: '41'
fd_year_svp: '1982'
status_hmz_mar: 'no'
status_tom_mar: 'yes'

A Perl approach:

$ perl -F: -lane '
    push @{$k{$F[0]}},$F[1];
    END{
      for $key (keys(%k)){
        $data="";
        if(scalar(@{$k{$key}})>1){
          $data="[" . join(",",@{$k{$key}}) . "]";
        }
        else{
          $data=${$k{$key}}[0];
        }
        print "$key: $data"
      }
    }' data.txt
status_tom_mar: 'yes'
fd_year_anne: '1987'
tom_age: '31'
toli_age: '41'
fd_year_svp: '1982'
hmz_age: '21'
school: [ 'anne', 'svp']
name: [ 'tom', 'hmz', 'toli']
status_hmz_mar: 'no'

Or, a bit easier to understand maybe:

perl -F: -lane '
    @fields=@F;
    push @{$key_hash{$fields[0]}},$fields[1];
    END{
      for $key (keys(%key_hash)){
        $data="";
        @key_data=@{$key_hash{$key}};
        if(scalar(@key_data)>1){
          $data="[" . join(",", @key_data) . "]";
        }
        else{
          $data=$key_data[0]
        }
        print "$key: $data"
      }
    }' data.txt
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/696495", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/392596/" ] }
696,502
I'm trying to filter the JSON data below with jq:

{
  "data": [
    { "topic_name": "BookShow",    "topic_id": "ABCDFG",   "urgency": "high" },
    { "topic_name": "AmzonMarket", "topic_id": "ESDCGHY",  "urgency": "high" },
    { "topic_name": "AmzonMarket", "topic_id": "ESDCGHY",  "urgency": "high" },
    { "topic_name": "BookShow",    "topic_id": "ABCDFG",   "urgency": "high" },
    { "topic_name": "bookTick",    "topic_id": "KOLPUYDD", "urgency": "high" },
    { "topic_name": "bookTick",    "topic_id": "KOLPUYDD", "urgency": "high" }
  ],
  "more": false,
  "limitations": 100,
  "range": 0
}

I'm expecting output as below, where "occurrences" is a new field that counts the number of occurrences:

"id","name","occurrences"
"KOLPUYDD","bookTick",2
"ABCDFG","BookShow",2
"ESDCGHY","AmzonMarket",2

Please support.
Using group_by to form the required objects grouped by topic_name and create the CSV out of it:

jq --raw-output '[ "id", "name", "occurrences" ],
  ( .data
    | group_by(.topic_name)[]
    | { id: .[0].topic_id, name: .[0].topic_name, occurrences: length }
    | [.id, .name, .occurrences]
  ) | @csv'

jqplay demo
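For reference, running that filter over the question's JSON (saved as a hypothetical topics.json, with the filter in a hypothetical filter.jq) should give CSV along these lines; note that group_by sorts the groups by topic_name, so the row order differs from the sample in the question:

$ jq --raw-output -f filter.jq topics.json
"id","name","occurrences"
"ESDCGHY","AmzonMarket",2
"ABCDFG","BookShow",2
"KOLPUYDD","bookTick",2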
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/696502", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/515000/" ] }
696,683
I have standard ls installed, I also have lsd installed, which is a nerd-font drop-in replacement for ls. I mention this because the error may have something to do with the alias. For testing right now, I have alias ls=ls in place to remove any of my custom commands. If I type ls in the /etc directory, where there are obviously several subfolders (which you can see without the -d flag), I get the following:

>>>root@Unraid:/etc# ls -d /
/
>>>root@Unraid:/etc# ls -d
.
>>>root@Unraid:/etc# lsd -d
./
>>>root@Unraid:/etc# lsd -d /
/

(The missing codepoint is just a folder icon. I replaced all the boxes with browser compatible emojis just for clarity.) LSD has been installed for a few days and isn't the issue. They behave identically. I prefixed the prompt lines with >>> just to make them easier to pick out. Let's use my home directory as an example. The correct response is that there are 2 folders in ~, lsd and pkg.

>>>root@Unraid:~# lsd
lsd/ pkg/ appdataUNRAID.code-workspace@ mdcmd@
>>>root@Unraid:~# ls -F
appdataUNRAID.code-workspace@ lsd/ mdcmd@ pkg/
>>>root@Unraid:~# lsd -F
lsd/ pkg/ appdataUNRAID.code-workspace@ mdcmd@
>>>root@Unraid:~# ls
appdataUNRAID.code-workspace lsd mdcmd pkg
>>>root@Unraid:~# ls -lhp
total 0
lrwxrwxrwx 1 root root 30 Mar 14 16:41 appdataUNRAID.code-workspace -> ./appdataUNRAID.code-workspace
drwxrwxrwx 3 root root 140 Mar 18 11:35 lsd/
lrwxrwxrwx 1 root root 21 Mar 13 18:55 mdcmd -> /usr/local/sbin/mdcmd
drwxrwxrwx 2 root root 140 Mar 24 10:10 pkg/

Again these icons are missing here, but the single directory in here "lsd" has the correct folder icon. Both these commands take appropriate flags. Here we can see that they take flags just fine, and they find the two directories perfectly fine, but TWO implementations of ls only return the . directory on the -d flag, even when combined with other flags.

>>>root@Unraid:~# lsd -d
./
>>>root@Unraid:~# ls -dFl
drwx--x--- 11 root root 420 Mar 24 10:10 ./
## The full text flag also doesn't work.
>>>root@Unraid:~# ls --directory
.

It doesn't matter what other flags are in place: if the -d flag is in, then two separate implementations of ls only show the current directory . , I mean, it doesn't even show the default .. 2nd directory that's there. I tried adding recursion as well. What's happening?
When you provide the -d option, you're telling ls that you don't want it to list the contents of any directories, only the directory name itself. When you don't provide any additional arguments to ls , the default is to list the current directory. As a result, your variations on ls -d all show the right thing -- the name of the current directory: .
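If the actual goal is to list the subdirectories of the current directory, the usual approach is to let the shell expand a glob that matches only directories and pass those names as arguments, for example (a sketch; in the home directory from the question this would print its two folders):

root@Unraid:~# ls -d */
lsd/  pkg/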
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/696683", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/440260/" ] }
696,684
I am unable to find the katello-agent package for the new CentOS Stream 9. (I've even tried to install the repo for CentOS Stream 8, but it results in broken dependencies, as expected.) Is there a workaround to install it? (I know that Katello is deprecated in favour of Remote Execution, but for now I'll have to make do with it.)
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/696684", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/34039/" ] }
697,052
Looking for suggestions/edits I can make to this question to get it reopened. A lot of these answers seem objective to me and they are the type of answers I'm looking for. I've read four articles about POSIX and haven't been able to find an answer. Now I know what POSIX is , I already knew what standards were, but I'm still not sure why the POSIX standard remains relevant today. One of the three articles I read was an interview of Richard Stallman . In it, he says Seth : Are POSIX-compliant free software projects easier to port to other Unix-like systems? RMS : I suppose so, but I decided in the 1980s not to spend my time on porting software to systems other than GNU. ... Seth : Is POSIX important to software freedom? RMS : At the fundamental level, it makes no difference. However, standardization ... helped us advance more quickly towards ... free software. That was achieved inthe early 1990s ... [emphasis mine] His answers are colored by a GNU point of view, but it seems he's answering my question with, "POSIX is no longer relevant"? Here's something more concrete to explain why I'm interested in this in the first place: In this answer , I suggested using process redirection <(command) . A reply said, "unfortunately process redirection is a non-POSIX feature so it wouldn't be supported across all machines." I guess I was assuming that the script starts with #!/bin/bash . Since process redirection is a feature of bash, wouldn't that mean any system with bash would be portable? And according to Wikipedia , bash "has been used as the default login shell for most Linux distributions." Whereas, this article says , "For now, Linux is not POSIX-certified due to high costs, except for the two commercial Linux distributions Inspur K-UX [12] and Huawei EulerOS [6]. Instead, Linux is seen as being mostly POSIX-compliant." So, I think the goal of POSIX is portability, but, at least in my experience, my scripts would be more portable if they worked in bash and ignored POSIX than the other way around. To reiterate my question, why should any scriptwriter spend time concerning themselves with POSIX compliance?
Perhaps you don't need to care if you know you're only going to use some particular shell that's not limited to just the POSIX features. But, there's still a chance you'll end up having to use something else, and in that case, having an idea about the non-standard features might help. Maybe you worked on Linux all your life, but then a new $DAYJOB drops you on the shell of some completely different system. Not that it's just non-Linux systems though. Embedded/small systems might have just Busybox, and its default ash -based shell is closer to plain POSIX sh than Bash. (Busybox also has another shell, hush , which I'm not that familiar with. I doubt it's a Bash clone either. Anyway, on small systems, having a smaller shell is usually desired, and that might well mean less features.) Also, Debian (and hence, Ubuntu) famously cared too: since Bash is rather slow and shell scripts were used a lot for system startup before systemd came along they changed the default /bin/sh to Dash, another ash-based shell.(See e.g. an LWN article on that and DashAsBinSh on Ubuntu's wiki.) Then there's also some cases where doing the POSIX thing just doesn't cost anything, since the features are equivalent. I admit, the ones I can think of are rather simple, e.g. using [ a = b ] instead of [ a == b ] , or i=$((i + 1)) instead of ((i++)) . Anyway, cases like that exist, and esp. using == seems rather common, with no benefit. ( [[ .. ]] vs. [ .. ] is a bit different though, they have actual differences.) That's not to say e.g. local variables, arrays, process substitutions, and the string replace and slice expansions should never be used! Quite the contrary, I think they are very useful when needed. Just that it helps to be aware they don't work in every shell in the world. Or worse, they don't work identically. All that said, if we consider the answers here on unix.SE, it should be noted that the site title says "Unix & Linux", and it is called unix .stackexchange.com , not linux .stackexchange.com , so keeping the non-Linux systems in mind is somewhat relevant. Also, I guess a bunch of the long-time users are pedants like that. ;) If you're happy to leave the POSIX chains and rely on having Bash, you might also want to consider using zsh. It's even less POSIX-compatible, which actually makes it a lot saner in many ways.
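To make that concrete, here is a tiny runnable sketch of the kind of differences mentioned above, written using the portable POSIX spellings, with the bash/ksh/zsh-only alternatives noted in comments:

#!/bin/sh
a=foo b=foo i=0
[ "$a" = "$b" ] && echo equal     # portable; '[ "$a" == "$b" ]' only works in bash/ksh/zsh
i=$((i + 1))                      # portable; '((i++))' is not plain sh
echo "$i"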
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/697052", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/101311/" ] }
697,132
Let me explain with an example: I have a line where I declare an alias in my ~/.bashrc:

> grep lsdir ~/.bashrc
alias lsdir='ls -d */'

I have just added this line to my bashrc, and the alias is thus not yet added to my current session environment. I would also like not to re-source all my bashrc for some configuration reason, which is why I would like to simply run this grepped line on its own. For the purpose of curiosity and education, I tried to do it without having to write it down manually, to no avail. I tried this:

> $(grep lsdir ~/.bashrc)
bash: alias: -d: not found
bash: alias: */': not found

or this:

> "$(grep lsdir ~/.bashrc)"
bash: alias lsdir='ls -d */': No such file or directory

or even this:

> $("grep lsdir ~/.bashrc")
bash: grep lsdir ~/.bashrc: No such file or directory

But none worked. Is there any way to achieve this?
You could use Process Substitution to source just the matching lines: source <(grep lsdir ~/.bashrc)
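A quick way to check that it worked, using the alias line from the question; eval "$(grep lsdir ~/.bashrc)" is an alternative that avoids process substitution and also works here, since the matched line is a complete command on its own:

source <(grep lsdir ~/.bashrc)
type lsdir    # should report that lsdir is aliased to `ls -d */'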
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/697132", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/511458/" ] }
697,256
I was thinking back to my introduction to programming recently and remembered writing a C++ program that deliberately read and wrote to memory addresses at random. I did this to see what would happen. To my surprise, on my Windows 98 PC, my program would create some really weird side effects. Occasionally it would toggle OS settings, or create graphical glitches. More often than not it would do nothing or just crash the entire system. I later learned this was because Windows 98 didn't restrict what a user process had access to. I could read and write to RAM used by other processes and even the OS. It is my understanding that this changed with Windows NT (though I think it took a while to get right). Now Windows prevents you from poking around in RAM that doesn't belong to your process. I vaguely remember running my program on a Linux system later on and not getting nearly as many entertaining results. If I understand correctly this is, at least in part, due to the separation of User and Kernel space. So, my question is: Was there a time when Linux did not separate User and Kernel space?In other words, was there a time when my rogue program could have caused similar havoc to a Linux system?
Linux has always protected the kernel by preventing user space from directly accessing the memory it uses; it has also always protected processes from directly accessing each others’ memory. Programs can only access memory through a virtual address space which gives access to memory mapped for them by the kernel; access outside allocated memory results in a segmentation fault. (Programs can access the kernel through system calls and drivers, including the infamous /dev/mem and /dev/kmem ; they can also share memory with each other.) Is the MMU inside of Unix/Linux kernel? or just in a hardware device with its own memory? explains how the kernel/user separation is taken care of in Linux nowadays (early releases of Linux handled this differently; see Linux Memory Management Overview and 80386 Memory Management for details). Some Linux-related projects remove this separation; for example the Embeddable Linux Kernel Subset is a subset of Linux compatible with the 8086 CPU, and as a result it doesn’t provide hardware-enforced protection. µClinux provides support for embedded systems with no memory management unit, and its core “ingredients” are now part of the mainline kernel, but such configurations aren’t possible on “PC” architectures.
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/697256", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/239497/" ] }
697,349
The second line of the script only works if I trigger glob expansion by executing echo. I can't understand why. Here's the command and its execution to give some context. Function definition:

~/ cat ~/.zsh/includes/ascii2gif
ascii2gif () {
  setopt extendedglob
  input=$(echo ${1}(:a))
  _path=${input:h}
  input_f=${input:t}
  output_f=${${input_f}:r}.gif
  cd $_path
  nerdctl run --rm -v $_path:/data asciinema/asciicast2gif -s 2 -t solarized-dark $input_f $output_f
}

Activate function debugging for the ascii2gif function:

~/ typeset -f -t ascii2gif

Debugged function execution:

~/ ascii2gif ./demo.cast
+ascii2gif:1> input=+ascii2gif:1> echo /Users/b/demo.cast
+ascii2gif:1> input=/Users/b/demo.cast
+ascii2gif:2> _path=/Users/b
+ascii2gif:3> input_f=demo.cast
+ascii2gif:4> output_f=demo.gif
+ascii2gif:5> cd /Users/b
+_direnv_hook:1> trap -- '' SIGINT
+_direnv_hook:2> /Users/b/homebrew/bin//direnv export zsh
+_direnv_hook:2> eval ''
+_direnv_hook:3> trap - SIGINT
+ascii2gif:6> nerdctl run --rm -v /Users/b:/data asciinema/asciicast2gif -s 2 -t solarized-dark demo.cast demo.gif
==> Loading demo.cast...
==> Spawning PhantomJS renderer...
==> Generating frame screenshots...
==> Combining 40 screenshots into GIF file...
==> Done.

I've tried numerous variations to try and force expansion such as input=${~1}(:a) etc, but to no avail. Any suggestions? Obviously the script works but seems sub-optimal.
That's because the way you're trying to use the a modifier here is for globbing, and no globbing takes place in var=WORD (because globbing, in general, can result in multiple words, so it doesn't take place in contexts that expect a single word). So you're relying on globbing taking place inside the command substitution, and then assigning the result to the variable. Since the a modifier can be used on parameter expansion, but with a different way of applying it, you can try that:

input=${1:a}

For example:

% cd /tmp
% foo() { input=${1:a}; typeset -p input; }
% foo some-file
typeset -g input=/tmp/some-file
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/697349", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/520330/" ] }
697,552
I'm writing a Git Bash utility that copies a project folder from one location to another. There are multiple destinations to which the user may want to copy the project, though only one location per execution of the script is permitted. Here is the logic thus far -

#!/bin/bash

# declare and initialize variables
source="/z/files/development/xampp/code/htdocs/Project7"
targets[0]="/z/files/development/xampp/code/htdocs/test/$(date +'%Y_%m_%d')"
targets[1]="/c/users/knot22/desktop/temp_dev/$(date +'%Y_%m_%d')"

# display contents of variables to user
echo "source " $source
echo -e "\nchoice \t target location"
for i in "${!targets[@]}"; do
  echo -e "$i \t ${targets[$i]}"
done
echo

# prompt user for a target
read -p "Enter target's number for this copy operation: " target

So far, so good. Next I'd like to write an if statement that checks whether or not the value the user entered for target is a valid index in targets. In PHP it would be array_key_exists($target, $targets). What is the equivalent in Bash?
You can check if the array element is not null/empty with:

expr='^[0123456789]+$'
if [[ $target =~ $expr && -n "${targets[$target]}" ]]; then
  echo yes
else
  echo no
fi

You also have to check if the response is an integer, since people can reply to the read prompt with a string, which will evaluate to zero and therefore give you the first element in your array. You may also want to consider using select here:

#!/bin/bash

# declare and initialize variables
source="/z/files/development/xampp/code/htdocs/Project7"
targets[0]="/z/files/development/xampp/code/htdocs/test/$(date +'%Y_%m_%d')"
targets[1]="/c/users/knot22/desktop/temp_dev/$(date +'%Y_%m_%d')"

select i in "${targets[@]}" exit; do
  [[ $i == exit ]] && break
  echo "$i which is number $REPLY"
done
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/697552", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/384266/" ] }
697,569
I have a text file and I want to use Unix commands (I don't care which) to print lines that contain a Chinese character OR contain the string ###. This answer has a grep command which prints out the lines containing Chinese characters:

grep -P '[\p{Han}]' filename.txt

which I understand is a Perl regular expression. And this prints out the lines containing ###:

grep '###' filename.txt

But I can't figure out how to combine (OR) them. If I do

grep -e '###' -P '[\p{Han}]'

as I might expect this answer would generalize, it doesn't print out the lines containing Chinese characters. Question: How do I use Unix commands to print lines that contain Chinese characters OR lines that contain ###? Oh, in case it helps, if the file contains

中文 keep this line
### keep this line
don't keep this line

it should output

中文 keep this line
### keep this line
In general you'd combine multiple patterns using -e pat1 -e pat2, however at least for GNU grep version 3.4, the global -P option only permits a single pattern:

$ grep -P -e '[\p{Han}]' -e '###' filename.txt
grep: the -P option only supports a single pattern

So you need to place the alternation inside the regex:

grep -P -e '[\p{Han}]|###' filename.txt

or just

grep -P '\p{Han}|###' filename.txt

(the -e is optional in the case of a single pattern, and you don't need to use a bracket expression [ ] unless you have a set of characters or properties to match on). Alternatively you may prefer to use Perl's regexps directly, ex.

perl -CDS -ne 'print if /\p{Han}/ or /###/' filename.txt
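To reproduce with the sample lines from the question (a quick sketch; it assumes a UTF-8 locale and a GNU grep built with PCRE support):

printf '%s\n' '中文 keep this line' '### keep this line' "don't keep this line" > filename.txt
grep -P '\p{Han}|###' filename.txt
# prints only the first two lines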
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/697569", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/107463/" ] }
697,806
For my powerlevel10k custom prompt, I currently have this function to display the seconds since the epoch, comma separated. I display it under the current time so I always have a cue to remember roughly what the current epoch time is.

function prompt_epoch() {
  MYEPOCH=$(/bin/date +%s | sed ':a;s/\B[0-9]\{3\}\>/,&/;ta')
  p10k segment -f 66 -t ${MYEPOCH}
}

My prompt looks like this: [screenshot of the prompt] I've been told I can do this without the forked processes using these commands:

$ zmodload -F zsh/datetime p:EPOCHSECONDS
$ printf "%'d" $EPOCHSECONDS
1,648,943,504

But I'm not sure how to do that without the forking. I know to add the zmodload line in my ~/.zshrc before my powerlevel10k is sourced, but formatting ${EPOCHSECONDS} isn't something I know how to do without a fork. If I were doing it the way I know, this is what I'd do:

function prompt_epoch() {
  MYEPOCH=$(printf "%'d" ${EPOCHSECONDS})
  p10k segment -f 66 -t ${MYEPOCH}
}

But as far as I understand it, that's still forking a process every time the prompt is called, correct? Am I misunderstanding the advice given? Because I don't think I can see a way to get the latest epoch seconds without running some sort of process, which requires a fork, correct?
The printf utility in both bash and zsh has a -v option that allows you to "print into a variable": printf -v MYEPOCH "%'d" ${EPOCHSECONDS} The actual result of the above command may well be dependent on the current locale.
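Putting it together, the segment function from the question might then look something like this (a sketch, untested; it assumes the zmodload zsh/datetime line has already run earlier in ~/.zshrc):

function prompt_epoch() {
  local MYEPOCH
  printf -v MYEPOCH "%'d" ${EPOCHSECONDS}   # formats in-shell: no subshell, no /bin/date, no sed
  p10k segment -f 66 -t ${MYEPOCH}
}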
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/697806", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/101884/" ] }
697,825
The OpenSSH client has a command line option for port forwarding, used like this: ssh -L localport:server:serverport user@host which will connect to host as user , and at the same time redirecting localport on the client to serverport on server (which can be host or anything reachable from host over the network). Now suppose I have SSHed into host doing just ssh user@host and in the middle of the session I realize I forgot to forward the port. Alas, I am in the middle of something, so I don’t just want to log out and re-establish the SSH connection with the port forwarding. Is there a way to add port forwarding to a running SSH session?
From man 1 ssh : ESCAPE CHARACTERS When a pseudo-terminal has been requested, ssh supports a number of functions through the use of an escape character. A single tilde character can be sent as ~~ or by following the tilde by a character other than those described below. The escape character must always follow a newline to be interpreted as special. The escape character can be changed in configuration files using the EscapeChar configuration directive or on the command line by the -e option. The supported escapes (assuming the default ~ ) are: […] ~C Open command line. Currently this allows the addition of port forwardings using the -L , -R and -D options (see above). […] Basic help is available, using the -h option. So type Enter ~ C (i.e. capital c), then -L localport:server:serverport with desired localport , server and serverport , finally Enter . Notes: The initial Enter will be immediately sent to the remote side and may cause some action there, so pick a good moment (e.g. when you're in a shell with an empty command line). Or if you are sure the last thing you have typed is Enter anyway (e.g. you have just invoked a command that is now running), you can start directly with ~ because Enter has already been noticed by your local ssh . On internationalized keyboards the tilde could be a dead key for generating special 'tilded' characters (like pressing ~ n to generate ñ ). In that case, it could be necessary to press SPACE after ~ to generate a single tilde, i.e: ENTER ~ SPACE C . In the case of the Spanish/LA keyboard layouts, as there is no combined character using tilde and C, the space can be omitted and the ~ C generates the desired sequence. Regarding multiple redirections, the ssh escaped command line only accepts a single command. You should press again the keyboard sequence to enter another command or redirection.
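For example, to add the forwarding from the first command in the question to a session that is already running, the keystrokes would be along these lines (the ssh> prompt is printed by the client, and on success it should acknowledge with a short "Forwarding port." style message):

<Enter>~C
ssh> -L localport:server:serverport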
{ "score": 7, "source": [ "https://unix.stackexchange.com/questions/697825", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/91283/" ] }
697,826
Let us assume a directory that contains pictures from different cameras:

DCIM1234.JPG
DCIM1235.JPG
DCIM1236.JPG
DSCN4120.JPG
DSCN4121.JPG
DSCN4122.JPG
DSCN4123.JPG
IMG5840.JPG
IMG5841.JPG
IMG5842.JPG
IMG5843.JPG

Sorting all these files by modification date across cameras is easy using ls -t. The problem is that most file systems have an accuracy of 1 second or more, so some pictures might have identical timestamps, for instance when shooting bursts. In this case, ls -t might lose the natural order of the files, which is reflected in the names. How to sort the files by modification time, while sorting by name the files that have an identical modification time?
Generally, it's recommended to avoid parsing ls output. As doneal24 suggested above, stat is a better option. $ stat -c "%Y/%n" *.JPG | sort -t/ -k1,1n -k2 | sed 's@^.*/@@' From man stat : The valid format sequences for files (without --file-system): ... %n file name %Y time of last data modification, seconds since Epoch So stat -c "%Y/%n" *.JPG will get you the timestamp in seconds and the name of each file, separated by a / . For example: 1580845717/IMG5841.JPG The output of that command is piped to sort -t/ -k1,1n -k2 , which sorts first by the first column, numerically (the timestamp), and then by the second column. Columns are separated by / ( -t/ ). Finally, the output of the sort command is piped to sed , which removes all character up to and including the first / (the chosen delimiter). The result in the list of filenames in the order you wanted (with the "newest" files listed last).
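A quick way to sanity-check the tie-breaking behaviour in a scratch directory (a sketch; uses GNU touch/stat):

mkdir -p /tmp/photos && cd /tmp/photos
touch -d '2022-04-10 09:59:59' DCIM1234.JPG
touch -d '2022-04-10 10:00:00' DSCN4121.JPG DSCN4120.JPG    # two files with the same mtime
stat -c "%Y/%n" *.JPG | sort -t/ -k1,1n -k2 | sed 's@^.*/@@'
# expected order: DCIM1234.JPG first, then DSCN4120.JPG before DSCN4121.JPG (name breaks the tie)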
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/697826", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/276531/" ] }
697,944
Output of just the ping command:

[root@servera ~]# ping -c 4 8.8.8.8
PING 8.8.8.8 (8.8.8.8) 56(84) bytes of data.
64 bytes from 8.8.8.8: icmp_seq=1 ttl=128 time=8.04 ms
64 bytes from 8.8.8.8: icmp_seq=2 ttl=128 time=7.47 ms
64 bytes from 8.8.8.8: icmp_seq=3 ttl=128 time=7.72 ms
64 bytes from 8.8.8.8: icmp_seq=4 ttl=128 time=7.50 ms

--- 8.8.8.8 ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3007ms
rtt min/avg/max/mdev = 7.473/7.683/8.037/0.225 ms

I want to just capture the integer 4 from the "4 received".

ping -c 4 8.8.8.8 | awk -F ',' '/received/ { print $2 }'

The result is "4 received". I want to capture just the number 4 from the above command. How can I do that? The delimiter is space now.
All you need is:

awk '/received/{print $4}'

e.g. using cat file to get the same input for awk as in your question:

$ cat file
PING 8.8.8.8 (8.8.8.8) 56(84) bytes of data.
64 bytes from 8.8.8.8: icmp_seq=1 ttl=128 time=8.04 ms
64 bytes from 8.8.8.8: icmp_seq=2 ttl=128 time=7.47 ms
64 bytes from 8.8.8.8: icmp_seq=3 ttl=128 time=7.72 ms
64 bytes from 8.8.8.8: icmp_seq=4 ttl=128 time=7.50 ms

--- 8.8.8.8 ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3007ms
rtt min/avg/max/mdev = 7.473/7.683/8.037/0.225 ms

$ cat file | awk '/received/{print $4}'
4

Obviously just replace cat file with ping -c 4 8.8.8.8 for your real testing. In response to the OP's comment below asking what line it's matching on:

$ awk '/received/' file
4 packets transmitted, 4 received, 0% packet loss, time 3007ms

and why field number 4 is the one to print:

$ awk '/received/{for (i=1; i<=NF; i++) print i, "<" $i ">"}' file
1 <4>
2 <packets>
3 <transmitted,>
4 <4>
5 <received,>
6 <0%>
7 <packet>
8 <loss,>
9 <time>
10 <3007ms>
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/697944", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/364311/" ] }
698,017
For some highly secure bastion VMs I'll implement soon, I am considering to unmount /boot after booting - among other measures of course. Will be mounted only for updating kernel. Testing this, no problems seem to appear; can it have any side effects I'm missing? The systems will be probably based on Debian Linux (other scenario, on Redhat). Both are systemd. What is the proper way to unmount /boot on a systemd system after reboot? For testing I just sudo umount /boot . I'm debating myself if I'm going to use BIOS or UEFI. As they will be VMs, it's a matter of choice. UEFI appears to be a more sane choice as more modern. But I'm not sure regarding security benefits, if any. On the contrary, because it's more complicated, more chances of vulnerabilities perhaps. In case of UEFI, what about efi partition? It's mounted inside /boot by default, although I think /efi can be used (I haven't tried it), to separate them and handled more transparently, administrator side. Can /boot/efi or /efi be unmounted as well after boot without side effects?
In theory neither /boot/ nor /boot/efi are commonly used after boot. The two form a bridge between the BIOS (or similar) and the operating system. They are not generally used at runtime. They are mounted so that the you can re-configure your boot and so that your OS can update / upgrade its boot sequence. That is, on Debian, apt / dpkg will trigger changes to both. Besides dpkg (or rpm on redhat derivatives) it would be unlikely for anything to want access to the /boot file tree. From a security perspective I'd challenge the wisdom of unmounting either . They both should be read-only to all users except root. If a user gains root access then they could just mount them. On the other hand preventing your system from applying updates (including security patches) might open more holes than you close. Instead, have you considered isolating bastion access with chroot etc.? Chroot lets those logged in only access a child file tree and a pid namespace and user namespace can protect against something escaping ( chroot alone isn't enough). The easiest way to do this might be to replace your SSH server with docker (or podman ) running openssh inside a container. That would leave any SSH clients inside a docker container than would have no sight of the host system. The filesystem inside that container could be really minimal such as an alpine linux container with almost nothing but a minimal command line. Note for clarity: chroot is not enough to isolate a process. With root access, a process can escape chroot. However other isolations such as pid and user namespaces and dropping capabilities should do a lot to secure a process inside a chroot jail... Hence the suggestion of using docker.
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/698017", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/276189/" ] }
698,322
I have a file that looks like this and was computed with Python:

1.00000100e+07
1.00000000e+04
1.11000111e+08
1.11000000e+05

Now, I would like to obtain something like this:

010000010
000010000
111000111
000111000

The idea is that I would like to add zeros to obtain a fixed format of 9 bits. The Python code that I wrote gives me the binary numbers, but for further decoding I need these binary numbers to have 9 bits. I tried to obtain them in the desired format from Python, but I was not able to do that. I hope my explanation is clear. Update: Python code:

#!/usr/bin/env python3
import numpy as np
from numpy import genfromtxt
import os

day = os.environ.get('day')
Array = genfromtxt("SUC_diffzeros_"+str(day)+".csv")
final = np.zeros(len(Array))

def decimaltobinary(x):
    for i in range(len(x)):
        print(x[i])
        final[i] = "{0:09b}".format(x[i])
    print(final)
    var = np.column_stack([Array, final])
    np.savetxt("SUC_diffzeros_"+str(day)+".csv", var, fmt='%.8e')

if __name__ == '__main__':
    val = Array.astype(int)
    decimaltobinary(val)
$ awk '{printf "%09.15g\n", $0}' < file
010000010
000010000
111000111
000111000

Or:

$ awk '{printf "%09d\n", $0}' < file
010000010
000010000
111000111
000111000

Which also converts to integer. You'd see the difference on numbers such as 1.1 or 111000111.9 where the former would give:

0000001.1
111000111.9

And the latter:

000000001
111000111

See also awk '{printf "%09.0f\n", $0}' which rounds:

000000001
111000112
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/698322", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/516564/" ] }
698,627
I'm doing some processing trying to get how many different lines in a file containing 160,353,104 lines. Here is my pipeline and stderr output. $ tail -n+2 2022_place_canvas_history.csv | cut -d, -f2 | tqdm --total=160353104 |\ sort -T. -S1G | tqdm --total=160353104 | uniq -c | sort -hr > users100%|████████████████████████████| 160353104/160353104 [0:15:00<00:00, 178051.54it/s] 79%|██████████████████████ | 126822838/160353104 [1:16:28<20:13, 027636.40it/s]zsh: done tail -n+2 2022_place_canvas_history.csv | cut -d, -f2 | tqdm --total=160353104 | zsh: killed sort -T. -S1G | zsh: done tqdm --total=160353104 | uniq -c | sort -hr > users My command-line PS1 or PS2 printed the return codes of all process of the pipeline. ✔ 0|0|0|KILL|0|0|0 First char is a green checkmark that means that last process returned 0 (success). Other numbers are return code for each one of pipelined processes, in same order. So I've notice that my fourth command got KILL status, this is my sort command sort -T. -S1G setting local directory to temp storage and buffer up to 1GiB. The question is, why did it returned KILL, does it means something sent a KILL SIGN to it?Is there a way to know "who killed" it? Updates After reading Marcus Müller Answer , first I've tried to load the data into Sqlite. So, maybe this is a good moment to tell you that, no, don't use a CSV-based data flow. A simple sqlite3 place.sqlite and in that shell (assuming your CSV has a title row that SQLite canuse to determine the columns) (of course, replace $second_column_namewith the name of that column) .import 022_place_canvas_history.csv canvas_history --csvSELECT $second_column_name, count($second_column_name) FROM canvas_history GROUP BY $second_column_name; This was taking a lot of time, so I leave it processing and went to do other things. While it I thought more about this other paragraph from Marcus Müller Answer : You just want to know how often each value appeared on the second column. Sorting that before just happens because your tool ( uniq -c ) is bad, and needs the rows to be sorted before (there's literally no good reason for that. It's just not implemented that it could hold a map of values and their frequency and increase that as they appear). So I thought, I can implement that. When I got back into computer, my Sqlite import process had stopped cause of a SSH Broken Pip, think as it didn't transmit data for a long time it closed the connection.Ok, what a good opportunity to implement a counter using a dict/map/hashtable. So I've write the follow distinct file: #!/usr/bin/env python3import sysconter = dict()# Create a key for each distinct line and increment according it shows up. for l in sys.stdin: conter[l] = conter.setdefault(l, 0) + 1 # After Update2 note: don't do this, do just `couter[l] = conter.get(l, 0) + 1`# Print entries sorting by tuple second item ( value ), in reverse orderfor e in sorted(conter.items(), key=lambda i: i[1], reverse=True): k, v = e print(f'{v}\t{k}') So I've used it by the follow command pipeline. tail -n+1 2022_place_canvas_history.csv | cut -d, -f2 | tqdm --total=160353104 | ./distinct > users2 It was going really really fast, projection of tqdm to less than 30 minutes, but when got into 99% it was getting slower and slower. This process was using a lot of RAM, about 1.7GIB. Machine I'm working with this data, the machine I have storage enought, is a VPS with just 2GiB RAM and ~1TiB storage. 
Thought it may be getting so slow cause SO was having to handle these huge memory, maybe doing some swap or other things.I've waited anyways, when it finally got into 100% in tqdm, all data was sent into ./distinct process, after some seconds got the follow output: 160353105it [30:21, 88056.97it/s] zsh: done tail -n+1 2022_place_canvas_history.csv | cut -d, -f2 | tqdm --total=160353104 | zsh: killed ./distinct > users2 This time mostly sure cause by out-of-memory-killer as spotted in Marcus Müller Answer TLDR section. So I've just checked and I don't have swap enabled in this machine. Disabled it after complete its setup with dmcrypt and LVM as you may get more information in this answers of mine . So what I'm thinking is to enable my LVM swap partition and trying to run it again. Also at some moment I think that I've seen tqdm using 10GiB of RAM. But I'm pretty sure I've seen wrongly or btop output mixed up, as latter it showed only 10MiB, don't think tqdm would use much memory as it just counts and updates some statics when reading a new \n . In Stéphane Chazelas comment to this question they say: The system logs will possibly tell you. I would like to know more about it, should I find something in journalctl? If so, how to do it? Anyways, as Marcus Müller Answer says, loading the csv into Sqlite may be by far the most smart solution, as it will allow to operate on data in may ways and probably has some smart way to import this data without out-of-memory. But now I'm twice curious about how to find out why as process was killed, as I want to know about my sort -T. -S1G and now about my ./distinct , the last one almost sure it was about memory. So how to check for logs that says why those process were killed? Update2 So I've enabled my SWAP partition and took Marcus Müller suggestion from this question comment. Using pythons collections.Counter. So my new code ( distinct2 ) looks like this: #!/usr/bin/env python3from collections import Counterimport sysprint(Counter(sys.stdin).most_common()) So I've run Gnu Screen to even if I get a broken pipe again I could just resume the session, than run it in the follow pipeline: tail -n+1 2022_place_canvas_history.csv | cut -d, -f2 | tqdm --total=160353104 --unit-scale=1 | ./distinct2 | tqdm --unit-scale=1 > users5 That got me the follow output: 160Mit [1:07:24, 39.6kit/s]1.00it [7:08:56, 25.7ks/it] As you can see it took way more time to sort the data than to count it.One other thing you may notice is that tqdm second line output shows just 1.00it, it means it got just a single line. So I've checked the user5 file using head: head -c 150 users5 [('kgZoJz//JpfXgowLxOhcQlFYOCm8m6upa6Rpltcc63K6Cz0vEWJF/RYmlsaXsIQEbXrwz+Il3BkD8XZVx7YMLQ==\n', 795), ('JMlte6XKe+nnFvxcjT0hHDYYNgiDXZVOkhr6KT60EtJAGa As you can see, it printed the entire list of tuples in a single line. For solving this I've used the good old sed as follow sed 's/),/)\n/g' users5 > users6 . 
After it I've checked users6 content using head, as follow with its output: $ head users6[('kgZoJz/...c63K6Cz0vEWJF/RYmlsaXsIQEbXrwz+Il3BkD8XZVx7YMLQ==\n', 795) ('JMlte6X...0EtJAGaezxc4e/eah6JzTReWNdTH4fLueQ20A4drmfqbqsw==\n', 781) ('LNbGhj4...apR9YeabE3sAd3Rz1MbLFT5k14j0+grrVgqYO1/6BA/jBfQ==\n', 777) ('K54RRTU...NlENRfUyJTPJKBC47N/s2eh4iNdAKMKxa3gvL2XFqCc9AqQ==\n', 767) ('8USqGo1...1QSbQHE5GFdC2mIK/pMEC/qF1FQH912SDim3ptEFkYPrYMQ==\n', 767) ('DspItMb...abcd8Z1nYWWzGaFSj7UtRC0W75P7JfJ3W+4ne36EiBuo2YQ==\n', 766) ('6QK00ig...abcfLKMUNur4cedRmY9wX4vL6bBoV/JW/Gn6TRRZAJimeLw==\n', 765) ('VenbgVz...khkTwy/w5C6jodImdPn6bM8izTHI66HK17D4Bom33ZrwuGQ==\n', 758) ('jjtKU98...Ias+PeaHE9vWC4g7p2KJKLBdjKvo+699EgRouCbeFjWsjKA==\n', 730) ('VHg2OiSk...3c3cr2K8+0RW4ILyT1Bmot0bU3bOJyHRPW/w60Y5so4F1g==\n', 713) Good enough to work latter. Now I think I should add an update after trying to check who killed my sort using dmesg ou journalctl. I'm also wondering if there is a way to make this script faster. Maybe creating a threadpool, but have to check pythons dict behavior, also thought about other data-structures as the column I'm counting is fixed width string, maybe using a list to storage the frequency of each different user_hash. Also I read the python implementation of Counter, it's just a dict, pretty much same implementation I had before, but instead of using dict.setdefault just used dict[key] = dict.get(key, 0) + 1 , it was a miss-usage of setdefault no real need for this scenario. Update3 So I'm getting so deep in the rabbit hole, totally lost focus of my objective. I started search for faster sorting, maybe write some C or Rust, but realized that already have the data I came for processed. So I'm here to show dmesg output and one final tip about the python script. The tip is: may be better to just count using dict or Counter, than sort its output using gnu sort tool. Probably sort sorts faster than python sorted buitin function. About dmesg, it was pretty simple to find out of memory, just did a sudo dmesg | less press G to go all way down, than ? to search back, than searched for Out string. Found two of them, one for my python script and another to my sort, the one that started this question. Here is those outputs: [1306799.058724] Out of memory: Killed process 1611241 (sort) total-vm:1131024kB, anon-rss:1049016kB, file-rss:0kB, shmem-rss:0kB, UID:1000 pgtables:2120kB oom_score_adj:0[1306799.126218] oom_reaper: reaped process 1611241 (sort), now anon-rss:0kB, file-rss:0kB, shmem-rss:0kB[1365682.908896] Out of memory: Killed process 1611945 (python3) total-vm:1965788kB, anon-rss:1859264kB, file-rss:0kB, shmem-rss:0kB, UID:1000 pgtables:3748kB oom_score_adj:0[1365683.113366] oom_reaper: reaped process 1611945 (python3), now anon-rss:0kB, file-rss:0kB, shmem-rss:0kB That's it, thank you so much for helping so far, hope it help others too.
TL;DR: out-of-memory-killer or running out of disk space for temporary files kills sort . Recommendation: Use a different tool. I've had a glance over GNU coreutils' sort.c right now¹. Your -S 1G just means that the sort process tries to allocate memory in a chunk of 1GB, and will fall back to increasingly smaller sizes if that is not possible. After having exhausted that buffer, it will create a temporary file to store the already sorted lines², and sort the next chunk of input in-memory. After all input has been consumed, sort will then merge/sort two of the temporary file into a temporary file (mergesort-style), and successively merge all the temporaries until the merging will yield the total sorted output, which is then output to stdout . That's clever, because it means you can sort input larger than available memory. Or, it's clever on systems where these temporary files are not themselves held in RAM, which they typically are these days ( /tmp/ is typically a tmpfs , which is a RAM-only file system). So, writing these temporary files eats exactly the RAM you're trying to save, and you're running out of RAM: your file has 160 million lines, and a quick google suggests it's 11GB of uncompressed data. You can "help" sort around that by changing the temporary directory it uses. You're already doing that, -T. , placing the temporary files in your current directory. Might be you're running out of space there? Or is that current directory on tmpfs or similar? You've got a CSV file with an medium amount of data (160 million rows is not that much data for a modern PC). Instead of putting that into a system meant to deal with that much data, you're trying to operate on it with tools from the 1990s (yes, I just read sort git history), when 16 MB RAM seemed quite generous. CSV is just the wrong data format for processing any significant amount of data, and your example is the perfect illustration of that. Inefficient tooling working on inefficient data structure (a text file with lines) in inefficient ways to achieve a goal with an inefficient approach: You just want to know how often each value appeared on the second column. Sorting that before just happens because your tool ( uniq -c ) is bad, and needs the rows to be sorted before (there's literally no good reason for that. It's just not implemented that it could hold a map of values and their frequency and increase that as they appear). So, maybe this is a good moment to tell you that, no, don't use a CSV-based data flow. A simple sqlite3 place.sqlite and in that shell (assuming your CSV has a title row that SQLite can use to determine the columns) (of course, replace $second_column_name with the name of that column) .import 022_place_canvas_history.csv canvas_history --csvSELECT $second_column_name, count($second_column_name) FROM canvas_history GROUP BY $second_column_name; is likely to be as fast, and bonus, you get an actual database file place.sqlite . You can play around with that much more flexibly – for example, create a table where you extract coordinates, and convert the times to numerical timestamps, and then be much faster and more flexible by what you analyze. ¹ The globals, and the inconsistency on what is used when. They hurt. It was a different time for C authors. And it's definitely not bad C, just ... not what you're used to from more modern code bases. Thanks to Jim Meyering and Paul Eggert for writing and maintaining this code base! 
² you can try to do the following: sort a file that's not too massive, say ls.c, which here has 5577 lines, and record the number of files opened:

strace -o /tmp/no-size.strace -e openat sort ls.c
strace -o /tmp/s1kB-size.strace -e openat sort -S 1 ls.c
strace -o /tmp/s100kB-size.strace -e openat sort -S 100 ls.c
wc -l /tmp/*-size.strace
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/698627", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/182552/" ] }
698,740
I used to play with a small X program dealing with symmetry groups. In a window you selected a symmetry group from a display, in another window appeared a grid of symmetry cells, drawn in thin red lines, you then draw a segment using the mouse and your segments were replicated in all the cells. You had the possibility of saving your drawing in Postscript format. What is/was the name of this program? I tried to ask the big G, but I was unlucky or unable to ask the right question.
This appears to be Kali, which was last released in 1998 by the UMN Geometry Center but is still packaged around:

Kali is an interactive 2D Euclidean symmetry pattern editor. You can use Kali to draw Escher-like tilings, infinite knots, frieze patterns, and other cool stuff. It lets you draw patterns in any of the 17 planar (wallpaper) or 7 frieze symmetry groups. Drawings are done interactively with X, and PostScript output is supported.

How did I find it? With a package search engine: I tried apt-cache search symmetry on Debian and, among various libraries and chemical and medical results, there it was:

$ apt-cache search symmetry
[...]
kali - Draw tilings, frieze patterns, and so on
[...]
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/698740", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/164535/" ] }
698,919
I'm looking for a way to nullify the undesirable behaviour of some installers that append code to .bashrc to force-load their environment automatically. The problem cropped up a few times, mostly with Conda, and in some cases the user ended up with a broken account that prevented them from logging in anymore. I tried to add an unclosed here-document at the end of .bashrc, like this: # .bashrc#...: <<'__END__' Which works, but generates annoying parsing warnings. What would be a clean way to do that (without making the .bashrc read-only)?
If you end your .bashrc with return 0 Bash will ignore any lines added after that , since .bashrc is handled like a sourced script: return may also be used to terminate execution of a script being executed with the . ( source ) builtin, returning either n or the exit status of the last command executed within the script as the exit status of the script. ( exit 0 causes the shell to exit, which isn’t what you want.)
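For instance, the tail of a protected ~/.bashrc might look roughly like this (just a sketch):
# --- nothing below this line is ever executed ---
return 0
# lines appended by installers (conda init, etc.) land here harmlessly
You can then sanity-check that an interactive shell still starts cleanly with something like bash -i -c 'echo ok'.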
{ "score": 7, "source": [ "https://unix.stackexchange.com/questions/698919", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/504798/" ] }
699,021
According to the FHS section on /dev at: 6.1.3. /dev : Devices and special files it contains: The following devices must exist under /dev./dev/null All data written to this device is discarded. A read from this device will return an EOF condition./dev/zero This device is a source of zeroed out data. All data written to this device is discarded. A read from this device will return as many bytes containing the value zero as was requested.... Observe that both have: All data written to this device is discarded I read many tutorials where /dev/null is always used to discard data. But both have the same purpose when it comes to writing (discarding data). Question When is it mandatory to use /dev/zero over /dev/null for write/discard purposes? BTW for other differences - practically mostly about read - we have available: Difference between /dev/null and /dev/zero
If you're using Linux, it's never "mandatory" to redirect to /dev/null instead of /dev/zero . As you've noticed, you'll get the same result either way. That said, you should always redirect to /dev/null if you're discarding data. Because everyone understands that writing to /dev/null means throwing the data away; it's expressing your intention. On the other hand, writing to /dev/zero also throws your data away, but it's not immediately obvious that that's what you're trying to do. Besides that, I'd be concerned whether writes to /dev/zero are allowed on other Unices, like the BSDs etc. I don't think /dev/zero is even required by POSIX, while /dev/null is . So using /dev/null for its intended purpose is maximally portable; doing anything else is sacrificing portability for no gain.
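For example, both of the following throw a command's output away on Linux, but only the first one communicates that intent ( some_command is just a placeholder):
some_command > /dev/null 2>&1   # conventional and portable
some_command > /dev/zero 2>&1   # works on Linux, but reads like a mistake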
{ "score": 7, "source": [ "https://unix.stackexchange.com/questions/699021", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/383045/" ] }
699,116
Let's say I have a directory called "/home/ben/files" with 100 randomly named text files.Out of the 100 text files some just contain the word "DELETEME" inside the text files. e.g.file1.txt, file2.txt, file3.txt, file4.txt, file5.txt.....etc How should one delete such files containing the pattern?
With GNU grep at least, you could try: grep -lZr "DELETEME" /home/ben/files | xargs -0 rm Warning: that will recurse through subfolders of /home/ben/files ; if you don't want that, more precautions need taking. (Thanks for cas in the comments for suggesting using a null delimiter.)
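If you want to double-check the match list before deleting anything, or stay within the top-level directory only, something along these lines should work (GNU find and grep assumed):
grep -lZr "DELETEME" /home/ben/files | xargs -0 ls -l   # preview only, deletes nothing
find /home/ben/files -maxdepth 1 -type f -name '*.txt' -exec grep -q "DELETEME" {} \; -delete
The find variant uses grep -q as a test, so -delete only fires on files that actually contain the pattern, and -maxdepth 1 keeps it out of subfolders.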
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/699116", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/522040/" ] }
699,180
I have created a script to export CSV data to mysql table. #!/bin/bashcd /data/NEWfor f in User*do mysql --user="root" --password="user@123" -e "LOAD DATA LOCAL INFILE '/data/NEW/"$f"' ignore into table new.table2 fields terminated by ',' enclosed by '"' lines terminated by '\n' (table_date, table_name, table_count);"done But I am getting following errors. But I could not find where I have messed this. test.sh: line 6: unexpected EOF while looking for matching `''test.sh: line 9: syntax error: unexpected end of file Can someone show me where to improve?
You seem to want to include a double quote in your double-quoted command string: "... enclosed by '"' lines ..." You need to escape the embedded double-quote character so that it is included in the string and not terminating it: "... enclosed by '\"' lines ..." Note also that your use of $f is totally unquoted here: "LOAD DATA LOCAL INFILE '/data/NEW/"$f"' ignore ..." It would be better to quote it (i.e. let it be quoted by the double-quoted string that it is already part of): "LOAD DATA LOCAL INFILE '/data/NEW/$f' ignore ..." Or, if you need to include the double-quote characters in the command string: "LOAD DATA LOCAL INFILE '/data/NEW/\"$f\"' ignore ..." You may also use a here-document to make the quoting simpler: mysql ... <<END_SQLLOAD DATA LOCAL INFILE '/data/NEW/$f' IGNORE INTO TABLE new.table2 FIELDS TERMINATED BY ',' ENCLOSED BY '"' LINES TERMINATED BY '\n' (table_date, table_name, table_count);END_SQL Each quote is literal in the here-document above.
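Putting the here-document version back into your loop, the whole script might look roughly like this (untested sketch, using the same paths and credentials from your question):
#!/bin/bash
cd /data/NEW || exit 1
for f in User*; do
  mysql --user="root" --password="user@123" <<END_SQL
LOAD DATA LOCAL INFILE '/data/NEW/$f' IGNORE INTO TABLE new.table2
  FIELDS TERMINATED BY ',' ENCLOSED BY '"' LINES TERMINATED BY '\n'
  (table_date, table_name, table_count);
END_SQL
done
Because the delimiter END_SQL is unquoted, $f still expands, while the double quote and '\n' inside the here-document stay literal.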
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/699180", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/522346/" ] }
699,386
As discussed in Understanding UNIX permissions and file types , each file has permission settings ("file mode") for: the owner / user (" u "), the owner's group (" g "), and everyone else (" o "). As far as I understand, the owner of a file can always change the file's permissions using chmod . So can any application running under the owner. What is the reason for restricting the owner's own permissions if they can always change them? The only use I can see is the protection from accidental deletion or execution, which can be easily overcome if intended. A related question has been asked here: Is there a reason why 'owner' permissions exist? Aren't group permissions enough? It discusses why the owner's permissions cannot be replaced by a dummy group consisting of a single user (the owner). In contrast, here I am asking about the purpose of having permissions for the owner in principle , no matter if they are implemented through a separate " u " octal or a separate group + ACLs.
There are various reasons to reduce the owner's permissions (though rarely to less than that of the group). The most common is not having execute permission on files not intended to be executed. Quite often, shell scripts are fragments intended to be sourced from other scripts (e.g. your .profile ) and don't make sense as top-level processes. Command completion will only offer executable files, so correct permissions helps in interactive shells. Accidentally overwriting a file is a substantial risk - it can happen through mistyping a command, or even more easily in GUI programs. One of the first things I do when copying files from my camera is to make them (and their containing directory) non-writeable, so that any edits I make must be copies, rather than overwriting the original. Sometimes it's important that files are not even readable. If I upgrade my Emacs and have problems with local packages in my ~/lisp directory, I selectively disable them (with chmod -r ) until it can start up successfully; then I can make them readable one at a time as I fix compatibility problems. A correct set of permissions for user indicates intentionality . Although the user can change permissions, well-behaved programs won't do that (at least, not without asking first). Instead of thinking of the permissions as restricting the user , think of them as restricting what the user's processes can do at a given point in time.
{ "score": 7, "source": [ "https://unix.stackexchange.com/questions/699386", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/419627/" ] }
699,451
Running 20.04 I am trying to figure out what my LAN IP is on my laptop. If I run ifconfig I get (trimmed down): $ ifconfigdocker0: flags=4099<UP,BROADCAST,MULTICAST> mtu 1500 inet 172.17.0.1 netmask 255.255.0.0 broadcast 172.17.255.255 ...enp0s31f6: flags=4099<UP,BROADCAST,MULTICAST> mtu 1500 ...lo: flags=73<UP,LOOPBACK,RUNNING> mtu 65536 inet 127.0.0.1 netmask 255.0.0.0 ...virbr0: flags=4099<UP,BROADCAST,MULTICAST> mtu 1500 inet 192.168.122.1 netmask 255.255.255.0 broadcast 192.168.122.255 ...virbr1: flags=4099<UP,BROADCAST,MULTICAST> mtu 1500 inet 192.168.39.1 netmask 255.255.255.0 broadcast 192.168.39.255 ...wlp0s20f3: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500 inet 192.168.0.23 netmask 255.255.255.0 broadcast 192.168.0.255 ... Which one of the above is my IP - e.g. the one I would use when SSH'ing to this latop from another PC on my LAN? Also in the "old" days I always looked for eth0 (also on ubuntu) but seems that is no longer used: https://www.linuxquestions.org/questions/linux-networking-3/eth0-is-not-displayed-in-ifconfig-4175444486/
enp0s31f6 shows the flag UP but not RUNNING : this typically means it does not have a valid link at the moment. On the other hand, wlp0s20f3 has both UP and RUNNING flags present. The name prefix wl indicates this is a wireless interface, which makes sense as you said this is a laptop. An en prefix would indicate a wired interface. So, the IP address of the wlp0s20f3 interface (i.e. 192.168.0.23 ) would be the one to use for inbound SSH connections from other physical hosts. The interfaces docker0 , virbr0 and virbr1 are for facilitating networking between Docker containers and/or virtual machines running on this system: depending on other settings, they might allow containers/VMs to communicate only with the host OS, or they might allow NAT-based access to the world outside this physical host. To understand their exact purpose, it might be necessary to study the iptables NAT and forward filtering rules (i.e. sudo iptables -Lvn -t nat and sudo iptables -Lvn ). If your laptop had the appropriate data records embedded in its firmware, its integrated wired network interface should get identified as eno1 and the wireless one as wlo1 . But apparently your laptop's firmware does not include those records. If you wish, you could change the interface names by creating two simple /etc/systemd/network/*.link files. First, you would need to use e.g. sudo udevadm info -q all -p /sys/class/net/enp0s31f6 | grep -e ID_NET_NAME -e ID_PATH to identify the hardware path and the autodetected name candidates for your network interface. The output might look something like this: # udevadm info -q all -p /sys/class/net/enp0s31f6 | grep -e ID_NET_NAME -e ID_PATHE: ID_NET_NAME_MAC=enx0123456789abE: ID_NET_NAME_ONBOARD=eno1E: ID_NET_NAME_PATH=enp0s31f6E: ID_PATH=pci-0000:00:1f.6E: ID_PATH_TAG=pci-0000_00_1f_6E: ID_NET_NAME=enp0s31f6 If the ID_NET_NAME_ONBOARD line does not appear, that confirms your system firmware does not properly identify the network interface as an onboard one. You might wish to fix this by renaming the interfaces to use the names they would have ideally been assigned to anyway. To rename this interface, you would note the ID_PATH= line, and use it to write a configuration file as e.g. /etc/systemd/network/70-eno1.link with the following contents: [Match]Path=pci-0000:00:1f.6[Link]Name=eno1 #or whatever you want and likewise for the wireless interface. Instead of setting an explicit Name= , you can also use a NamePolicy= setting to select any of the pre-generated ID_NET_NAME_* candidates, or to set an order of preference for selecting a pre-generated name. See man 5 systemd.link for more details. After creating these files, you should update your initramfs ( sudo update-initramfs -u ) and reboot. After rebooting, you should find your interfaces with the names of your choice. Note that enp0s31f6 is a name that is based on the PCI device path: it indicates it refers to PCI device 00:1f.6 as 31 = 0x1f. Likewise, wlp0s20f3 would be PCI device 00:14.3 (20 = 0x14).
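For the wireless card, the matching sketch would be a second file, e.g. /etc/systemd/network/71-wlo1.link (the file name is just a suggestion, and the PCI path below is the one implied by wlp0s20f3 , i.e. 00:14.3; do confirm it by running the same udevadm command against the wireless interface first):
[Match]
Path=pci-0000:00:14.3
[Link]
Name=wlo1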
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/699451", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/11740/" ] }
699,471
I am trying to come up with a bash script to remove parts of a file name on CentOS. My file names are: 119903_Relinked_new_3075_FileNote_07_02_2009_JHughes_loaMeetingAndSupport_FN_205.doc119904_relinked_new_2206_Support_Intensity_Scale_SYCH_SIS_264549.pdf119905_relinked_new_3075_Consent_07_06_2009_DSweet_CRFA_CF_16532.docx29908_relinked_new_2206_Assessor_Summary_Report_SERT_I_OTH_264551.pdf009712_relinked_new_3075_Consent_07_06_2009_CWell_DPRT_check_CF_16535.pdf I would like to remove 119903_Relinked_new_ from the file names. The end result should be: 3075_FileNote_07_02_2009_JHughes_loaMeetingAndSupport_FN_205.doc2206_Support_Intensity_Scale_SYCH_SIS_264549.pdf3075_Consent_07_06_2009_DSweet_CRFA_CF_16532.docx2206_Assessor_Summary_Report_SERT_I_OTH_264551.pdf3075_Consent_07_06_2009_CWell_DPRT_check_CF_16535.pdf I have been trying multiple scripts but coming up short. The number before _Relinked_new_ is different in most cases and the file extensions vary across .pdf , .docx , .doc etc. Any help would be appreciated.
Using the prename(1) tool (it might be called rename or prename or perl-rename depending on your system): rename 's/[0-9]+_[rR]elinked_new_//' /path/to/dir/* This will use a regular expression to match the pattern and replace it with nothing on the specified files.
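Most builds of this Perl rename also understand -n (show what would be renamed without touching anything) and -v (report each rename), so you can preview the result first:
rename -n 's/[0-9]+_[rR]elinked_new_//' /path/to/dir/*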
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/699471", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/516595/" ] }
699,500
When I need to capture some packets using tcpdump , I use command like: tcpdump -i eth0 "dst host 192.168.1.0" I always think the dst host 192.168.1.0 part is something called BPF, Berkeley Packet Filter. To me, it's a simple language to filter network packets. But today my roommate tells me that BPF can be used to capture performance info. According to his description, it's like the tool perfmon on Windows. Is it true? Is it the same BPF as I mentioned in the beginning of the question?
What is BPF? BPF (or more commonly, the extended version, eBPF ) is a language that was originally used exclusively for filtering packets, but it is capable of quite a lot more. On Linux, it can be used for many other things, including system call filters for security, and performance monitoring, as you pointed out. While Windows did add eBPF support , that is not what Windows' perfmon utility uses. Windows only added support for compatibility with non-Windows utilities that rely on OS support for eBPF. The eBPF programs are not executed in userspace. Instead, the application creates and sends an eBPF program to the kernel, which executes it. It is actually machine code for a virtual processor that is implemented in the form of an interpreter in the kernel, although it can also use JIT compilation to enhance performance considerably. The program has access to some basic interfaces in the kernel, including those related to performance and networking. The eBPF program then communicates with the kernel to provide it the computational results (such as dropping a packet). Restrictions on eBPF programs In order to protect from denial-of-service attacks or accidental crashes, the kernel first verifies the code before it is compiled. Before being run, the code is subject to several important checks: The program consists of no more than 4096 instructions in total for unprivileged users. Backwards jumps cannot occur, with the exception of bounded loops and function calls. There are no instructions that are always unreachable. The upshot is that the verifier must be able to prove that the eBPF program halts. It hasn't found a solution to the halting problem , of course, which is why it only accepts programs that it knows will halt. To do this, it represents the program as a directed acyclic graph . In addition to this, it tries to prevent information leaks and out-of-bounds memory access by preventing the actual value of a pointer from being revealed while still allowing limited operations to be performed on it: Pointers cannot be compared, stored, or returned as a value that can be examined. Pointer arithmetic can only be done against a scalar (a value not derived from a pointer). No pointer arithmetic can result in pointing outside the designated memory map. The verifier is rather complex and does far more, although it has itself been the source of serious security bugs , at least when the bpf(2) syscall is not disabled for unprivileged users . Viewing the code The dst host 192.168.1.0 component of the command is not BPF. That is just syntax which is used by tcpdump . However, the command you give it is used to generate a BPF program which is then sent to the kernel. Note that it is not eBPF which is used in this case, but the older cBPF. There are several important differences between the two (although the kernel internally converts cBPF into eBPF). The -d flag can be used to see the cBPF code that is to be sent to the kernel: # tcpdump -i eth0 "dst host 192.168.1.0" -d(000) ldh [12](001) jeq #0x800 jt 2 jf 4(002) ld [30](003) jeq #0xc0a80100 jt 8 jf 9(004) jeq #0x806 jt 6 jf 5(005) jeq #0x8035 jt 6 jf 9(006) ld [38](007) jeq #0xc0a80100 jt 8 jf 9(008) ret #262144(009) ret #0 More complicated filters result in more complicated bytecode. Try some of the examples in the manpage and append the -d flag to see what bytecode would be loaded into the kernel. In order to understand how to read the disassembly, review the BPF filter documentation . 
If you're reading an eBPF program, you should take a look at the eBPF instruction set for the virtual CPU. Understanding the code For simplicity, I'll assume you specified a destination IP of 192.168.1.1 instead of 192.168.1.0 and wanted to match IPv4 only, which shrinks the code quite a bit as it no longer has to handle IPv6: # tcpdump -i eth0 "dst host 192.168.1.1 and ip" -d(000) ldh [12](001) jeq #0x800 jt 2 jf 5(002) ld [30](003) jeq #0xc0a80101 jt 4 jf 5(004) ret #262144(005) ret #0 Let's walk through what the above bytecode actually does . Each time a packet is received on the interface specified, the BPF bytecode is run. The packet contents (including the Ethernet header, if applicable) are put in a buffer that the BPF code has access to. If the packet matches the filter, the code will return the size of the capture buffer (262144 bytes by default), otherwise it returns 0. Let's assume you are running this filter and it receives a packet sending an ICMP message with an empty payload from 192.168.1.142 to 192.168.1.1. The source MAC is aa:aa:aa:aa:aa:aa and the destination MAC is bb:bb:bb:bb:bb:bb. The contents of the Ethernet frame, in hexadecimal, are: aa aa aa aa aa aa bb bb bb bb bb bb 08 00 45 0000 1c 77 71 40 00 40 01 3f 92 c0 a8 01 8e c0 a801 01 08 00 c1 c0 36 0e 00 01 The first instruction is ldh [12] . This loads a half-word (two bytes) located at an offset of 12 bytes into the packet into the A register. This is the value 0x0800 (remember that network data is always big-endian). The second instruction is jeq #0x800 , which will compare an immediate with the value in the A register. If they are equal, it will jump to instruction 2, otherwise 5. The value 0x800 at that offset in the Ethernet frame specifies the IPv4 protocol. Because the comparison evaluates true, the code now jumps to instruction 2. If the payload was not IPv4, it would have jumped to 5. Instruction 2 (the third) is ld [30] . This loads an entire 4-byte word at an offset of 30 into the A register. In our Ethernet frame, this is 0xc0a80101. The next instruction, jeq #0xc0a80101 , will compare an immediate against the contents of the A register and will jump to 4 if true, otherwise 5. This value is the destination address (0xc0a80101 is the big-endian representation of 192.168.1.1). The values do indeed match, so the program counter is now set to 4. Instruction 4 is ret #262144 . This terminates the BPF program and returns the integer 262144 to the calling program. This tells the calling program, tcpdump in this case, that the packet was caught by the filter, so it requests the contents of the packet from the kernel, decodes it more thoroughly, and writes the information to your terminal. If the destination address did not match what the filter was looking for or the protocol type was not IPv4, the code would have jumped to instruction 5 instead, where it would have been met with ret #0 . This would have terminated the filter without a match. This is all just a way to return 262144 if the half-word at offset 12 into the packet is 0x800 AND the word at offset 30 is 0xc0a80101, and return 0 otherwise. Because this is all done in the kernel (optionally after being converted into native machine code by the JIT engine), no expensive context switches or passing buffers between kernelspace and userspace are required, so the filter is fast . More advanced examples The BPF code is not limited to being used by tcpdump . A number of other utilities can use it. 
You can even create an iptables rule with a BPF filter by using the xt_bpf module! However, you have to be careful when generating the bytecode with tcpdump -ddd because it expects to consume a layer 2 header, whereas iptables does not. All you have to do to make them compatible is adjust offsets. Furthermore, a number of auxiliary functions are provided that provide information that can't be obtained by reading the raw packet contents such as the packet length, the payload start offset, the CPU the packet was received on, the NetFilter mark, etc. From the filter documentation: The Linux kernel also has a couple of BPF extensions that are used along with the class of load instructions by “overloading” the k argument with a negative offset + a particular extension offset. The result of such BPF extensions are loaded into A. The supported BPF extensions are: Extension Description len skb->len proto skb->protocol type skb->pkt_type poff Payload start offset ifidx skb->dev->ifindex nla Netlink attribute of type X with offset A nlan Nested Netlink attribute of type X with offset A mark skb->mark queue skb->queue_mapping hatype skb->dev->type rxhash skb->hash cpu raw_smp_processor_id() vlan_tci skb_vlan_tag_get(skb) vlan_avail skb_vlan_tag_present(skb) vlan_tpid skb->vlan_proto rand prandom_u32() For example, to match all packets that are received on CPU 3, you could do: ld #cpu jneq #3, drop ret #262144drop: ret #0 Note that this is using BPF assembly syntax compatible with bpf_asm , whereas the other assembly listings here are using tcpdump syntax. The main difference is that the former's syntax uses named labels whereas the latter's BPF syntax labels each instruction with a line number. This assembly translates to the following bytecode (commas delimit instructions): 4,32 0 0 4294963236,21 0 1 1,6 0 0 262144,6 0 0 0, This can then be used with iptables using the xt_bpf module: iptables -A INPUT -m bpf --bytecode "4,32 0 0 4294963236,21 0 1 1,6 0 0 262144,6 0 0 0," -j CPU3 This will jump to target chain CPU3 for any packets received on that CPU. If this seems powerful, remember that this is all cBPF. Although cBPF is translated into eBPF internally, all this is nothing compared to what raw eBPF can do! For more information I highly recommend you read this article to understand how tcpdump uses cBPF. After reading that, read this explanation of how tcpdump turns expressions into bytecode. If you want to learn everything else about it, you can always check out the source code !
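If you would rather generate such bytecode than write it by hand, one rough approach (assuming tcpdump and the xt_bpf match are available) is to let tcpdump compile a filter with -ddd and reformat its decimal output into the comma-separated form xt_bpf expects, for example:
tcpdump -ddd 'udp dst port 53' | tr '\n' ','
Keep the caveat above in mind: tcpdump compiles against a link-layer header while xt_bpf sees the packet from the IP header onward, so the offsets usually need adjusting; the iptables source tree carries a small nfbpf_compile helper intended for exactly that.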
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/699500", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/520451/" ] }
699,568
I am meeting some difficulties when using bash to insert multiple empty lines into a file (called file1) according to an index file(called file2).(these files can be treated as passed variables)the index file (file2) looks like: ---MHA-NXXM---FGA... the file1 looks like this: M x1 y1 z1 m1 n1H x2 y2 z2 m2 n2A x3 y3 z3 m3 n3N x4 y4 z4 m4 n4X x5 y5 z5 m5 n5X x6 y6 z6 m6 n6M x7 y7 z7 m7 n7F x8 y8 z8 m8 n8G x9 y9 z9 m9 n9A x0 y0 z0 m0 n0... the output should be looks like this: ---M x1 y1 z1 m1 n1H x2 y2 z2 m2 n2A x3 y3 z3 m3 n3-N x4 y4 z4 m4 n4X x5 y5 z5 m5 n5X x6 y6 z6 m6 n6M x7 y7 z7 m7 n7---F x8 y8 z8 m8 n8G x9 y9 z9 m9 n9A x0 y0 z0 m0 n0... if file2 deletes the '-', the content as well as the order will always be the same as the first column in file1. I tried dataframe in Python to deal with it, but it's too slow. So I was wondering how to use bash to figure this out. Thanks!
Assuming the letters in the index file are always the right ones in the right order (so we can ignore which letter we see), and that the empty lines actually contain the dashes and aren't totally empty, maybe this should work: $ awk -v datafile=data.txt '$1 == "-" { print "-"; next} { getline < datafile; print }' < index.txt ---M x1 y1 z1 m1 n1H x2 y2 z2 m2 n2A x3 y3 z3 m3 n3-N x4 y4 z4 m4 n4X x5 y5 z5 m5 n5X x6 y6 z6 m6 n6M x7 y7 z7 m7 n7---F x8 y8 z8 m8 n8G x9 y9 z9 m9 n9A x0 y0 z0 m0 n0... It reads the index file one line at a time; if the first field there is exactly - , prints that; and otherwise reads and prints a line from the other file. (Which means that if a totally empty line comes up in the index file, it will also go to the next line from the data file.)
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/699568", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/521972/" ] }
699,726
I would like to make a loop in which a certain column (in my case column 4) from a text file is added as last column to a new text file. I have in total around 500 text files (V1-V500) from which I want to take the fourth column and add it to the new text file (columns separated by tabs). All text files have the same number of lines. In addition, the heading of the column that was added should contain the file name of the text file where it was originally from. I've tried to work out a command line with awk and a for-loop already, but none of my commands work. I've tried command lines based on the command line of a previous post . I'm working in Linux with GNU tools available. To give an example:V1 text file header1 header2 header3 header41 5 9 13 2 6 10 143 7 11 154 8 12 16 V2 text file: header1 header2 header3 header417 25 21 29 18 26 22 3019 27 23 3120 28 24 32 NEW text file: V1 V213 2914 3015 3116 32 Thanks for your help!
With awk parsing all files. awk -F'\t' -v OFS='\t' '{ x = (FNR==1 ? FILENAME : $4) a[FNR] = (FNR==NR ? x : a[FNR] OFS x) } END { for (i=1;i<=FNR;i++) print a[i] }' V{1..500} x is what we keep from every line and a is the new line we build. Both are assigned using a conditional expression . FNR is the line number of the current input file, NR the total one. FNR==NR means "when parsing the first file". Also I have assumed tab-delimited inputs and output.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/699726", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/522902/" ] }
699,758
Say the directory contained 3 csv files: The first csv: Name, JohnAge, 18 The second csv: Name, JimAge, 21 The third csv: Name, AmyAge, 22 I would want the result to be: Name, John, Jim, AmyAge, 18, 21, 22 It's important to know the directory could have n many csvsI have both bash and posix shell available Edit: This feels like it should work but still has an issue with regards to order: awk -F, -v OFS="," '{a[FNR]=a[FNR]?a[FNR]FS$2:$1FS$2}END{for(x in a)print x,a[x]}' *.csv > results.csv Which makes no sense as FNR 1 should be first in the array but it is printed last?
With awk parsing all files. awk -F'\t' -v OFS='\t' '{ x = (FNR==1 ? FILENAME : $4) a[FNR] = (FNR==NR ? x : a[FNR] OFS x) } END { for (i=1;i<=FNR;i++) print a[i] }' V{1..500} x is what we keep from every line and a is the new line we build. Both are assigned using a conditional expression . FNR is the line number of the current input file, NR the total one. FNR==NR means "when parsing the first file". Also I have assumed tab-delimited inputs and output.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/699758", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/522972/" ] }
700,151
I lost some files by using the mv command. I don't know where they are. They are not in the directory to which I intended to copy them. Below is a transcript of what I did: samuelcayo@CAYS07019906:~/Downloads/221-tp2-public-main$ cdsamuelcayo@CAYS07019906:~$ lsDesktop Documents Downloads GameShell Music Pictures pratice Public Templates Videossamuelcayo@CAYS07019906:~$ mkdir tp2samuelcayo@CAYS07019906:~$ lsDesktop Documents Downloads GameShell Music Pictures pratice Public Templates tp2 Videossamuelcayo@CAYS07019906:~$ cd Downloads/221-tp2-public-main/samuelcayo@CAYS07019906:~/Downloads/221-tp2-public-main$ lsbackup copybash Dockerfile ntfy-1.16.0 packets.txt README.md restore secretcloud data Dockerfile_CAYS07019906 ntfy.zip rapport-tp2.md remplacer.sed sauvegarde.sh tailsamuelcayo@CAYS07019906:~/Downloads/221-tp2-public-main$ mv rapport-tp2.md tp2samuelcayo@CAYS07019906:~/Downloads/221-tp2-public-main$ mv Dockerfile_CAYS07019906 tp2samuelcayo@CAYS07019906:~/Downloads/221-tp2-public-main$ mv packets.txt tp2samuelcayo@CAYS07019906:~/Downloads/221-tp2-public-main$ mv sauvegarde.sh tp2samuelcayo@CAYS07019906:~/Downloads/221-tp2-public-main$ cdsamuelcayo@CAYS07019906:~$ cd tp2/samuelcayo@CAYS07019906:~/tp2$ lssamuelcayo@CAYS07019906:~/tp2$ ls -ltotal 0samuelcayo@CAYS07019906:~/tp2$ cd ..
You created a directory called tp2 in your home directory, i.e. you created the directory ~/tp2 . You then changed into ~/Downloads/221-tp2-public-main and started to move files with mv . Since you specified the target of each mv operation as tp2 , and since tp2 was not a directory in your current directory, each file you moved was instead renamed tp2 . You overwrote the file previously called tp2 each subsequent time you ran mv . In the end, the tp2 that you were left with is the file previously called sauvegarde.sh . You would have avoided the loss of data by using ~/tp2/ as the target of each mv operation. The ~ refers to your home directory, where you created your tp2 directory. The / at the end of the target path is not strictly necessary, but it makes mv fail gracefully if ~/tp2 is not a directory. As for what you can do now to restore your lost files; consider restoring them from a recent backup if you don't have other copies of them lying around elsewhere.
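A sketch of the safer habit, for the record:
mkdir -p ~/tp2
mv -i rapport-tp2.md Dockerfile_CAYS07019906 packets.txt sauvegarde.sh ~/tp2/
The trailing slash makes mv refuse to proceed if ~/tp2 is not a directory, and -i asks before overwriting anything.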
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/700151", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/523343/" ] }
700,190
I have an attach volume that gets a snapshot every hour. In order to test the snapshot performance, I need to run a process that will, between snapshot backups, generate a large amount of "churn" or file change. There are two questions related to this: simply and obviously, how to generate large blocks of text EFFICIENTLY and write them to disc. With my limited knowledge about the only thing I can think of is a for loop generating random characters, but that's probably extremely slow. Also, the new randomness if replacing a file has to be such that the snapshot essentially has no patterns to match. what is the most effective way to store this? e.g. 1 Gigabyte in 1000 files, or 100 GB in 10 files Since a picture is worth 1K words, I drew up this conceptually: Thanks in advance for insight on coupling tools-to-use with insight on the file system.
You created a directory called tp2 in your home directory, i.e. you created the directory ~/tp2 . You then changed into ~/Downloads/221-tp2-public-main and started to move files with mv . Since you specified the target of each mv operation as tp2 , and since tp2 was not a directory in your current directory, each file you moved was instead renamed tp2 . You overwrote the file previously called tp2 each subsequent time you ran mv . In the end, the tp2 that you were left with is the file previously called sauvegarde.sh . You would have avoided the loss of data by using ~/tp2/ as the target of each mv operation. The ~ refers to your home directory, where you created your tp2 directory. The / at the end of the target path is not strictly necessary, but it makes mv fail gracefully if ~/tp2 is not a directory. As for what you can do now to restore your lost files; consider restoring them from a recent backup if you don't have other copies of them lying around elsewhere.
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/700190", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/104388/" ] }
700,198
I couldn't find any guide or documentation referring on how to add files to a custom Alpine Linux ISO, the nearest i could find is this page on the Alpine Wiki about creating a custom ISO image with mkimage I would prefer to have my automated installation scripts and answer files directly on the ISO instead of having to download them through wget
You created a directory called tp2 in your home directory, i.e. you created the directory ~/tp2 . You then changed into ~/Downloads/221-tp2-public-main and started to move files with mv . Since you specified the target of each mv operation as tp2 , and since tp2 was not a directory in your current directory, each file you moved was instead renamed tp2 . You overwrote the file previously called tp2 each subsequent time you ran mv . In the end, the tp2 that you were left with is the file previously called sauvegarde.sh . You would have avoided the loss of data by using ~/tp2/ as the target of each mv operation. The ~ refers to your home directory, where you created your tp2 directory. The / at the end of the target path is not strictly necessary, but it makes mv fail gracefully if ~/tp2 is not a directory. As for what you can do now to restore your lost files; consider restoring them from a recent backup if you don't have other copies of them lying around elsewhere.
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/700198", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/274181/" ] }
700,200
I've got a small SRE lab set up using various Vagrant boxes (VirtualBox backend). I usually work on a Debian or Archlinux box and attach to a Windows box via remote debugging. On my Linux boxes, X11 forwarding is enabled and works usually. When I try to run Cutter (the official rizin GUI), either from the AppImage or unpacked, I receive the following error: The X11 connection broke: No error (code 0)X connection to localhost:10.0 broken (explicit kill or server shutdown). I've never seen something like this before and I can't reproduce it with any other application, AppImage or not. Cutter runs fine locally, other applications run fine via X11 forwarding in the boxes, only this one errors on both, the Debian and the Arch box. Any idea where to start debugging is appreciated :)
You created a directory called tp2 in your home directory, i.e. you created the directory ~/tp2 . You then changed into ~/Downloads/221-tp2-public-main and started to move files with mv . Since you specified the target of each mv operation as tp2 , and since tp2 was not a directory in your current directory, each file you moved was instead renamed tp2 . You overwrote the file previously called tp2 each subsequent time you ran mv . In the end, the tp2 that you were left with is the file previously called sauvegarde.sh . You would have avoided the loss of data by using ~/tp2/ as the target of each mv operation. The ~ refers to your home directory, where you created your tp2 directory. The / at the end of the target path is not strictly necessary, but it makes mv fail gracefully if ~/tp2 is not a directory. As for what you can do now to restore your lost files; consider restoring them from a recent backup if you don't have other copies of them lying around elsewhere.
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/700200", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/165427/" ] }
700,308
I have been using FreeBSD for quite some time. Now I want to dive into OpenBSD some more. Currently I'm trying to figure out what is the "recommended" way to keep my system up-to-date . On FreeBSD, we use the command pkg upgrade to update all the installed packages to the latest version. And we use the command freebsd-update to fetch/install the latest patches for the "base" system (kernel). So, I think with pkg upgrade && freebsd-update I'm pretty much safe. Now: What is the equivalent procedure on OpenBSD? I think that pkg_add -u on OpenBSD does pretty much the same as pkg upgrade on FreeBSD, i.e. it updates all the installed packages to the latest version. But what about an equivalent to freebsd-update ? So far I found sysupgrade on OpenBSD, but it is giving me "404 Not Found" errors every time. I think this is OpenBSD's way of telling me that, at this time, there is no newer release that I can upgrade to. Fair enough! But how to get security patches for the "base" system of the OpenBSD release that I'm currently running ? Does such thing even exist for OpenBSD, or do I have to wait for new release? Thank you!
You are correct that sysupgrade(8) replies with a 404 error when there are no new releases. That tool is the proper tool to use when upgrading your system to the next release or the latest snapshot release. Using pkg_add -u is also sufficient to update all installed packages (possibly followed up with pkg_delete -a to delete no longer needed packages). Security patches, etc., are installed using syspatch(8) . You may want to run syspatch -c as a daily cron job to be informed about new patches as they arrive. Snapshot systems do not use syspatch . See also the OpenBSD FAQ , especially the section on Security Updates , and the OpenBSD 7.1 errata page . The only ingredient missing from the above mix is the package sysclean . Once installed with pkg_add , you may use it to find files on the system that are no longer distributed as part of the base system and are also no longer in use by installed packages. Study the manual for how to make sysclean ignore your own local additions to the system, and make sure not to trust the tool blindly (e.g. don't write automated jobs that delete stuff based on its output).
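For example, a root crontab entry along these lines (adjust the schedule to taste) will mail you whenever new patches are available:
0 6 * * * /usr/sbin/syspatch -c
and a plain syspatch with no arguments then installs whatever it listed.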
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/700308", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/523509/" ] }
700,508
A well-formed printf usually has a format to use: $ var="Hello"$ printf '%s\n' "$var"Hello However, what could be the security implications of not providing a format? $ printf "$var"Hello As the variable expansion is quoted, there should not be any issues, are there?
In: printf "$var" There are two problems: variable data passed as the format. Could be a problem if $var is under the control of an attacker option delimiter ( -- ) missing, so $var could be taken as an option if it starts with - . It would be a lot worse with: printf $var Where split+glob in most Bourne-like shells is performed upon $var expansion on top of that causing the sort of security vulnerabilities mentioned at Security implications of forgetting to quote a variable in bash/POSIX shells . Here that would be up to arbitrary command execution: $ export var1='-va[1$(uname>&2)] x' var2='%d a[1$(uname>&2)]'$ bash -c 'printf $var1'Linux$ ksh -c 'printf $var2'Linux0 The arbitrary uname command (thankfully harmless here) was run by printf . For printf "$var" itself, there are fewer problems I can think of. The most obvious one is the DoS one for things like var=%1000000000s which would spam the output with a lot of space characters or worse with things like %.1000000000f which would also use up a lot of memory and CPU time: $ var=%.1000000000f 'time' -f 'max mem: %MK, elapsed: %E' bash -c 'printf "$var"' | wc -cmax mem: 4885344K, elapsed: 0:12.331000000002 Other DoS ones could be the $var values that trigger syntax errors because of incorrect format or incorrect options, causing printf to fail and the script it's invoked in along with it if the errexit option is enabled. printf "$var" with var='-va[1$(uname>&2)]' doesn't seem to be a problem for bash , ksh93 and zsh , the only three shells that I know that support that -v varname option, zsh treating it as the format, and the other two as a syntax error (because of the missing format)¹. There's some minor information disclosure with ksh93 and bash with export var='%(%Z %z)T\n' that reveals the timezone of the script. $ bash -c 'printf "$var"'BST +0100 In yash , printf "$var" would call printf with more than one argument if $var was an array with more than one element, but yash 's printf doesn't do arithmetic evaluation and anyway its arithmetic evaluations are not affected by the same kind of command injection vulnerabilities affecting ksh's, bash's or zsh's . ksh93's printf is the one with the most extensions (all the date formatting, regexp format conversion, padding based on grapheme width, URI/HTML encoding...), and it still remains quite experimental. printf "$data" there exposes thousands of lines of code to that data . I wouldn't be surprised if there was a path for arbitrary command execution in there, possibly via some arithmetic expression evaluation or by triggering some bug in its own code². Of course, that could also happen with any printf implementation. Problems with variable external data in the printf() C function, are when they contain % sequences that end up dereferencing random memory areas on the stack. printf(var) when var is %12$s tries to print the byte values stored at the 12th argument passed to printf . Since printf is not passed any other argument, that will be something else that happens to be on the stack, and that could be pointer to some area of memory holding sensitive information. With %n , printf() would end up writing some number there. 
$ tcc -run -w -xc - $'%6$s\n' <<<'f(char*f){char*s="secret";printf(f);}main(int c,char**v){f(v[1]);}'secret$ tcc -run -w -xc - $'%p%p%p%p%p\n%s\n' <<<'f(char*f){char*s="secret";printf(f);}main(int c,char**v){f(v[1]);}'0x7fff1182db380x7fff1182db500x7900000x80x562b5ec0ba6asecret printf utilities may end up calling printf() or may implement all of it themselves (they have to at least to some extent as %b is not in printf() , and for numeric formats, they need to convert the arguments to numbers). If they do call printf() , they will guard against calling it with not enough arguments to cover the format specification. That is a POSIX requirement that printf "%s" output nothing or that printf %d output 0 for instance, so printf implementations should pass enough empty string or 0 number arguments to printf() . You could imagine poorly written printf implementations failing to do so properly. I'm not aware of any, but I've seen awk implementations in the past where their own printf() was affected (also via OFMT or CONVFMT there which involve printf() processing³). ¹ print "$var" is an arbitrary command injection vulnerability in zsh however via that vector. It's important to use print -- $var there, and even generally print -r -- "$var" is what you want there. ² As an example, I get a SEGV with var='%(%.999999999999s)T' with the ksh93 that comes with Ubuntu 20.04 ³ Even today, with my current version of busybox , busybox awk -v OFMT='%#x %#x %#x %#x %g' 'BEGIN {print 1.1}' outputs 0x1 0x4 0x4 0x4624bb30 1.1 and busybox awk -v OFMT='%n %g' 'BEGIN {print 1.1}' segfaults.
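The practical takeaway, as a defensive habit: always supply the format string yourself and pass untrusted data only as arguments, e.g.
printf '%s\n' "$var"
printf 'value: %s\n' "$var"
so the worst the data can do is get printed verbatim.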
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/700508", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/-1/" ] }
700,606
What am I missing? I want to find out how many files match a given pattern in a directory and assign it to a variable. If I type it all straight on the command line it works fine ls /backups/system\ db\ files/temp/daily/backup_filename_202203*.tar.gz | wc -l (n.b. the * is the date the backup was created, e.g. 01 , 02 , 03 etc.) But as soon as I add it to my bash script it fails. So far I have: base_dir="/backups/system\ db\ files/temp" sub_dir="${base_dir}/daily"filename_base="backup_filename_" and I then try and run: counter=$(ls ${sub_dir}/${filename_base_}202203*.tar.gz | wc -l) or with quotes: counter=$(ls "${sub_dir}/${filename_base_}202203*.tar.gz" | wc -l) The first one fails as it tries to split it based on the whitespaces.The second one doesn't expand the * to look for the wildcards. I have tried just quoting some of it, e.g. counter=$(ls "${sub_dir}/${filename_base_}"202203*.tar.gz | wc -l) But again it ignores the wildcard. I've been searching but for the life of me can't find a way to get the total number of matching files.
You don't escape the spaces when you already quoted them, otherwise the backslashes will be taken literally: base_dir="/backups/system\ db\ files/temp" should be base_dir="/backups/system db files/temp" Also why do you have _ in ${filename_base_} ? Your last command should have probably worked if everything else was correct: counter=$(ls "${sub_dir}/${filename_base}"202203*.tar.gz | wc -l) Anyways, you should not parse ls . Rather use find : find "${sub_dir}" -name "${filename_base}202203*.tar.gz" -printf '.' | wc -c or an array: shopt -s nullglobfiles=("${sub_dir}/${filename_base}"202203*.tar.gz)echo "${#files[@]}" You need the nullglob option, otherwise the result will be 1 if no files match your pattern as the string is then taken literally.
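Note one behavioural difference: unlike the original ls glob, find will also count matching files inside subdirectories of ${sub_dir} . If that is not wanted, cap the depth:
counter=$(find "${sub_dir}" -maxdepth 1 -name "${filename_base}202203*.tar.gz" -printf '.' | wc -c)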
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/700606", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/102428/" ] }
700,615
I am rather new to bioinformatics (this is my first post!) and I would appreciate some help on task that has me stuck. I have a Tab-delimited data table with three columns: AATTCTTGCA 4 [A/T]AATTCCTTCG 7 [C/T]AATTCAACAA 2 [T/C] I would like to replace the character in the first column at the position indicated by the second column with the string in the third column so that the output is: AAT[A/T]CTTGCAAATTCC[C/T]TCGA[T/C]TTCAACAA I am working through various tutorials now and will update my post when I have some (failed) commands with sed / awk . Thanks in advance!
The following awk command should do the task: awk -F"\t" '{printf "%s%s%s%s",substr($1,1,$2-1),$3,substr($1,$2+1),ORS}' input.txt The option -F sets the field separator to TAB . The program will then print (using the printf() function) for every line the substring of field 1 from the beginning up to (but excluding) the character position indicated in field 2 the string contained in field 3 the remainder of field 1, starting one past the character position indicated in field 2 the "output record separator", which defaults to new-line thereby effectively replacing the indicated character with the content of field 3. Note that in hindsight this amount of explicit formatting control is actually not necessary, and the program can be abbreviated to awk -F"\t" '{print substr($1,1,$2-1) $3 substr($1,$2+1)}' input.txt Caveat : The program assumes that the character position in field 2 is always reasonable, i.e. greater than 0 and less or equal to the total length of field 1. If the file can be corrupt, more error-checking is needed.
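A slightly more defensive variant of the same idea (just a sketch) skips records where the position in field 2 falls outside field 1:
awk -F"\t" '$2 >= 1 && $2 <= length($1) {print substr($1,1,$2-1) $3 substr($1,$2+1)}' input.txt
Records that fail the check are silently dropped here; print them to stderr instead if you would rather be told about them.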
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/700615", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/523806/" ] }
700,664
I have a file with the following format, each column separated by tabs: C1 C2 C3a b,c de f,g,h ij k l... Now I need to have the number of lines according to the number of values separated by commas (if that's the case) in the 2nd column. The lines must have one of the values and not the others. The result would be this: C1 C2 C3a b da c de f ie g ie h ij k l...... As this is due to work asap, I've just made a don't do this at home script, reading line by line with a while , due to my lack of skills in awk , or not exploring other possible solutions with other tools. The script is as follows: I'm revising the script in the meantime # DON'T DO THIS AT HOME SCRIPT> duplicados.txtwhile IFS= read -r line; do # get the value of the column of interest cues="$(echo "$line" | awk -F'\t' '{ print $18 }')" # if the column has commas then it has multiple values if [[ "$cues" =~ , ]]; then # count the commas c=$(printf "%s" "$cues" | sed 's/[^,]*//g' | wc -c) # loop according to the number of commas for i in $(seq $(($c + 1))); do # get each value of the column of interest according to the position cue="$(echo "$cues" | awk -F',' -v c=$i '{ print $c; ++c }')" # save the line to a file substituting the whole column for the value echo "$line" | sed "s;$cues;$cue;" >> duplicados.txt done continue fi # save the single value lines echo "$line" >> duplicados.txtdone < inmuebles.txt With this I get the desired result (as far as I can tell). As you can imagine the script is slow and very ineficient. How could I do this with awk or other tools? A sample of the real data is like this, being the column of interest the number 18: 1409233 UNION VIAMONTE Estatal Provincial DGEP 3321 VIAMONTE -33.7447365;-63.0997115 Rural Aglomerado 140273900 140273900-ESCUELA NICOLAS AVELLANEDA1402961 UNION SAN MARCOS SUD Estatal Provincial DGEA, DGEI, DGEP 3029, 3311, Z11 SAN MARCOS SUD -32.629557;-62.483976 / -32.6302699949582;-62.4824499999125 / -32.632417;-62.484932 Urbano 140049404, 140164000, 140170100, 140173100 140049404-C.E.N.M.A. N° 201 ANEXO SEDE SAN MARCOS SUD, 140164000-C.E.N.P.A. N° 13 CASA DE LA CULTURA(DOC:BERSANO), 140170100-ESCUELA HIPOLITO BUCHARDO, 140173100-J.DE INF. HIPOLITO BUCHARDO1402960 UNION SAN ANTONIO DE LITIN Estatal Provincial DGEA, DGEI, DGETyFP 3029, TZONAXI, Z11 SAN ANTONIO DE LITIN 3601300101020009 360102097366 0250347 SI / SI -32.212126;-62.635999 / -32.2122558;-62.6360432 / -32.2131931096409;-62.6291815804363 Rural Aglomerado 140049401, 140313000, 140313300, 140483400, 140499800 140049401-C.E.N.M.A. N° 201 ANEXO SAN ANTONIO DE LITIN, 140313000-I.P.E.A. Nº 214. MANUEL BELGRANO, 140313300-J.DE INF. PABLO A. PIZZURNO, 140483400-C.E.N.P.A. DE SAN ANTONIO DE LITIN, 140499800-C.E.N.P.A. B DE SAN ANTONIO DE LITIN
You could do it in awk by splitting the compound column on , and looping over the result: awk -F'\t' 'BEGIN{OFS=FS} {n=split($2,a,/,/); for(i=1;i<=n;i++){$2 = a[i]; print}}' file Perhaps more cleanly, you could do it with Miller - in particular, using the nest verb : $ cat fileC1 C2 C3a b,c de f,g,h ij k l$ mlr --tsv nest --explode --values --across-records --nested-fs ',' -f C2 fileC1 C2 C3a b da c de f ie g ie h ij k l More compactly --explode --values --across-records --nested-fs ',' may be replaced by --evar ','
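Adapted to the real data, where the compound field is column 18 and the values appear to be separated by a comma plus an optional space, the awk version would look roughly like this (untested against the full file; only column 18 is exploded, the other columns are left as they are):
awk -F'\t' 'BEGIN{OFS=FS} {n=split($18,a,/, */); for(i=1;i<=n;i++){$18 = a[i]; print}}' inmuebles.txt > duplicados.txt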
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/700664", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/338177/" ] }
700,826
I need to lock some user accounts, without messing with their HOME, if at all possible. Normal way would be usermod -L user but it seems to leave open ssh login with public key authentication (routinely used on this server). I know I could just mv /home/user/.ssh /home/user/_ssh or something similar, but is that the right way of doing this? What am I missing?
The documentation, man usermod , gives you the recommended solution: -L , --lock Lock a user's password. This puts a ! in front of the encrypted password, effectively disabling the password. You can't use this option with -p or -U . Note: if you wish to lock the account (not only access with a password), you should also set the EXPIRE_DATE to 1 . And then -e , --expiredate EXPIRE_DATE The date on which the user account will be disabled. The date is specified in the format YYYY-MM-DD . An empty EXPIRE_DATE argument will disable the expiration of the account. So, turning this into a real example usermod -L -e 1 someuser # Lock usermod -U -e '' someuser # Unlock If you have a default expiry date set up in /etc/default/useradd you can include its value - if any - as part of the unlocking process: usermod -U -e "$( . /etc/default/useradd; echo "$EXPIRE")" someuser Jobs scheduled through cron are also disabled for a user with an expired account but not for just a locked account. Error messages from journalctl -u cron show the expired user's account name (in this instance test2 ): Apr 30 11:47:01 pi CRON[26872]: pam_unix(cron:account): account test2 has expired (account expired)Apr 30 11:47:01 pi cron[472]: Authentication failureApr 30 11:47:01 pi CRON[26872]: Authentication failure Jobs scheduled with at remain unattempted. (It's not clear to me if they will ever be run once they are past their due date; empirically it seems not.) Other processes already running under the locked and expired account remain unaffected and continue to run.
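To verify what state an account actually ends up in, something like the following is usually enough:
passwd -S someuser   # password status, shows whether the password is locked
chage -l someuser    # shows the account expiry date, among other aging settings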
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/700826", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/130498/" ] }
700,849
How can I run Sublime Text in Void Linux with musl libc (not glibc) without using chroot or flatpak?
The documentation, man usermod , gives you the recommended solution: -L , --lock Lock a user's password. This puts a ! in front of the encrypted password, effectively disabling the password. You can't use this option with -p or -U . Note: if you wish to lock the account (not only access with a password), you should also set the EXPIRE_DATE to 1 . And then -e , --expiredate EXPIRE_DATE The date on which the user account will be disabled. The date is specified in the format YYYY-MM-DD . An empty EXPIRE_DATE argument will disable the expiration of the account. So, turning this into a real example usermod -L -e 1 someuser # Lockusermod -L -e '' someuser # Unlock If you might have a default expiry date set up in /etc/default/useradd you can include its value - if any - as part of the unlocking process: usermod -U -e "$( . /etc/default/useradd; echo "$EXPIRE")" someuser Jobs scheduled through cron are also disabled for a user with an expired account but not for just a locked account. Error messages from journalctl -u cron show the expired user's account name (in this instance test2 ): Apr 30 11:47:01 pi CRON[26872]: pam_unix(cron:account): account test2 has expired (account expired)Apr 30 11:47:01 pi cron[472]: Authentication failureApr 30 11:47:01 pi CRON[26872]: Authentication failure Jobs scheduled with at remain unattempted. (It's not clear to me if they will ever be run once they are past their due date; empirically it seems not.) Other processes already running under the locked and expired account remain unaffected and continue to run.
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/700849", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/524070/" ] }
700,865
Example: % diff "/Volumes/New Volume/4kyoutube/" "/Volumes/New Volume/tmpmusic"| grep DistortionOnly in /Volumes/New Volume/tmpmusic: ZAC & Bäkka - Distortion (Original Mix) [Sprout].mp3Only in /Volumes/New Volume/4kyoutube/: ZAC & Bäkka - Distortion (Original Mix) [Sprout].mp3% diff "/Volumes/New Volume/tmpmusic/ZAC & Bäkka - Distortion (Original Mix) [Sprout].mp3" "/Volumes/New Volume/4kyoutube/ZAC & Bäkka - Distortion (Original Mix) [Sprout].mp3" % What can I do about it? The files are identical.
This is not a "diff false positive", but rather the two file names are seen as different . My wild hypothesis is that either the two folders are on different devices, with a different file encoding; or that the two names are encoded differently albeit they are visually identical. Specifically, one of the two "Bäkka"s is in "precomposed" form, i.e. U+00E4 (UTF-8 C3 A4) while the other is in "decomposed" form, U+0061 U+0308 (UTF-8 0x61 0xCC 0x88) with combination diaeresis. I have not a MacOS at hand, but I can reproduce this on an ext4 Linux: $ A=$( echo -e "Ba\xcc\x88kka" )$ B=$( echo -e "B\xc3\xa4kka" )$ echo $A $BBäkka Bäkka$ touch $A $B$ ls -la | grep kka-rw-rw-rw-+ 1 lserni users 0 Apr 29 18:14 Bäkka-rw-rw-rw-+ 1 lserni users 0 Apr 29 18:14 Bäkka Apparently, I now have two files with the same name in the same folder . I cannot obviously be sure, but you might be in the same straits. To check, simply run the output of "diff" through hexdump -C and see if you have something like, 00000020 20 20 20 30 20 41 70 72 20 32 39 20 31 38 3a 31 | 0 Apr 29 18:1|00000030 36 20 42 61 cc 88 6b 6b 61 0a 2d 72 77 2d 72 77 |6 Ba..kka.-rw-rw|00000060 70 72 20 32 39 20 31 38 3a 31 36 20 42 c3 a4 6b |pr 29 18:16 B..k|00000070 6b 61 0a |ka.| Note that in the hex dump they are immediately visible as "Ba..kka" (the "a" is a normal "a", followed by the UTF8 "add a diaeresis") and "B..kka" (there is only one symbol and it is "small latin a with diaeresis"). Fixing things Frankly, I'd run first a normalization on the whole folder structure. Even if you have identically named files, but with a different encoding (i.e. some precomposed, some decomposed), this is going to bite you sooner or later. From a file system point of view, which system you use is largely irrelevant. The important thing is how you feed the system now and how you use the system now. If the new incoming files have precomposed names, it makes sense to set all the FS to precomposed (or vice versa), so the standard will be maintained. On the other hand, you might also want to check out functions like searching for files, sorting, and so on, to verify that the files are where you expect them to be (needless to say, some systems consider "a", "ä" and "ä" the same, some others don't - they might set "a" and "ä" together, "ä" somewhere else; or vice versa). I'd try copying a small mp3 file with the names "älpha composed", "älpha decomposed" and "alpha neutral", then working with a folder with those three files as well as "alpha 0 test" and "alpha z test", and then whether decomposed or precomposed is best, if any. The docs seem to indicate you should go with decomposed . So first thing, you need a list of all the file names. This is easy find . -type f > list-as-it-is.txt But now you need to convert the precomposed elements in the list to their decomposed form. I have done a bit of research and, to add a further layer of complication, it seems that MacOS and Linux behave differently , and MacOS has several legacy accommodation problems: Important: The terms used in this Q&A, precomposed and decomposed,roughly correspond to Unicode Normal Forms C and D, respectively.However, most volume formats do not follow the exact specification forthese normal forms. For example, HFS Plus (Mac OS Extended) uses avariant of Normal Form D in which U+2000 through U+2FFF, U+F900through U+FAFF, and U+2F800 through U+2FAFF are not decomposed (thisavoids problems with round trip conversions from old Mac textencodings). 
It's likely that your volume format has similar oddities. In theory you should have only one form on disk ("Mac OS X's BSD layer uses canonically decomposed UTF-8 encoding for filenames"). In practice, it seems to depend (obviously, otherwise you wouldn't have problems; predictably, you are not alone ). So, I'm pretty chary about suggesting a conversion method without being able to test it beforehand on a real MacOS. If the files are few, then I'd suggest fixing them by hand - delete one file, copy the other on the other folder. In theory , you could do something like (in Bash) hexa=$( echo -n "$name" | xxd -ps | tr -d "\n" )if [ $[ 2*${#name} ] -lt ${#hexa} ]; then # Not ASCII. orif ( echo "$name" | file - | grep "UTF-8" > /dev/null ); then and if the test matches, you can do mv "$name" "$(dirname "$name")/tmpname" && mv "$(dirname "$name")/tmpname" "$name" and maybe the first "mv" will recognize the file whatever its encoding, while the second will recreate the name using the fixed default system encoding, which hopefully will suit you. This kind of operation would be very fast, even if it needlessly processes all UTF-8 names. Ignoring things You could ignore all files with this kind of trick. Then, trouble would arise only when two files are different, and have differently encoded same name . Is this an issue? If it is not, then you're all set. Just do a preliminary grep to remove the lines containing "^Only": diff ... | grep -v ^Only | grep Distortion Removing duplicates This, luckily, bypasses encoding entirely. There are tools that do this already ( jdupes is the one I use). Files with identical content that differ by MP3 tags will not work with this approach, and you'll probably find this answer useful. find folder1 -type f -exec md5sum \{\} \; | sort > folder1.txtfind folder2 -type f -exec md5sum \{\} \; | sort > folder2.txt Now if you want to get duplicates: join -o 2.2 folder1.txt folder2.txt will get you the files in folder2 that are duplicates (-o 2.1 will get you the files in folder1).
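A rough, untested sketch of such a normalization pass, in case it helps: it assumes perl with its core Unicode::Normalize module is available, that you want the decomposed (NFD) form, and that no names contain newlines. Try it on a copy of the folder first, since the volume itself may normalize names behind your back.

# Sketch only: rename anything whose name is not already in NFD form.
find . -depth -print | while IFS= read -r f; do
    b=$(basename "$f"); d=$(dirname "$f")
    nfd=$(printf '%s' "$b" | perl -CS -MUnicode::Normalize -pe '$_ = NFD($_)')
    [ "$b" = "$nfd" ] || mv -n -- "$f" "$d/$nfd"
done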
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/700865", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/9115/" ] }
700,869
I have a laptop running Windows with a Cygwin X server. On this machine I have a virtual Linux box running under VMWare. I set export DISPLAY=xserver:0 on the VM and do xhost +xclient on the cygwin shell. I can use either the hostname or the IPv4 address. I can now run my X programs (mostly emacs/xterm) by redirecting the display. So far so good. I also need to use the AWS VPN client to connect to AWS (horrible client but it works). This runs on the Windows laptop but also gets picked up by the virtual machine. I can now talk to AWS on either machine. So far so good. However, if I try to start any X programs on the linux machine, it refuses to authenticate it. I just get the error "Authorization required, but no authorization protocol specified". If I add the IP address or the server name, it doesn't matter - same error. Neither IP address has changed (I've verified this with Wireshark). If I do xhost + to disable the authentication, then I can connect but this is obviously hideously insecure and I don't want to do it. I've tried going down the xauth rabbit hole but that just replaces the above errors with Invalid MIT-MAGIC-COOKIE-1 errors. Any idea what's going on?
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/700869", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/397205/" ] }
700,938
I Upgraded to PopOS 22.04 and it is supposed to have Wayland Support, but it isn't working. I tried the following things: Enabled Wayland in /etc/gdm3/custom.conf by setting WaylandEnable=true Installed latest NVIDIA driver 510 (I'm using a GTX 1080) added the line GRUB_CMDLINE_LINUX="nvidia-drm.modeset=1" to /etc/default/grub Of course I rebooted the PC every time I changed something. There is supposed to be a cog on the bottom right corner when logging in, but no luck so far :( Has anyone a solution?
sudo nano /usr/lib/udev/rules.d/61-gdm.rulesLABEL="gdm_prefer_xorg"#RUN+="/usr/libexec/gdm-runtime-config set daemon PreferredDisplayServer xorg"GOTO="gdm_end"LABEL="gdm_disable_wayland"#RUN+="/usr/libexec/gdm-runtime-config set daemon WaylandEnable false"GOTO="gdm_end"LABEL="gdm_end"
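The idea (as far as I can tell) is that 61-gdm.rules is what forces GDM back to Xorg or disables Wayland outright on some NVIDIA setups, so commenting out the two RUN+= lines, as shown above, stops both branches from firing. After a reboot you can check which session type you actually got:

# Should print "wayland" once it works; "x11" means GDM still fell back.
echo "$XDG_SESSION_TYPE"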
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/700938", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/524159/" ] }
700,941
I have created a .sh executable file that generates several files in the working directory/folder that are used by the same script. If it is run twice in the same folder it would be a mess because the loops inside the script would use both new files and those previously created. Is there any option that would block the double execution of the same script in the same folder? Then echo "this operation has been already executed here, change folder"
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/700941", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/523421/" ] }
701,017
In bash, is it possible to get the index of the last element of an array (that might be sparse) without looping through the entire array like so: a=( e0 e1 ... )i=0while [ "$i" -lt $(( ${#a[@]} - 1 )) ]do let 'i=i+1'doneecho "$i" Since at least bash v 4.2, I can get the value of the last element in an array using e="${array[-1]}" but that will not get me the positive index since other elements may have the same value.
In the case of an array that is not sparse, the last index is simply the number of elements minus 1: i=$(( ${#a[@]} - 1 )) To cover a sparse array as well, you can build an array of its indexes and take the last one: a=( [0]=a [1]=b [9]=c )indexes=( "${!a[@]}" )i="${indexes[-1]}"echo "$i"9
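One small caveat worth adding: if the array happens to be empty, "${!a[@]}" expands to nothing and ${indexes[-1]} triggers a "bad array subscript" error on recent bash versions, so a guard may be worthwhile (sketch):

a=( [0]=a [1]=b [9]=c )
if (( ${#a[@]} )); then
    indexes=( "${!a[@]}" )
    i=${indexes[-1]}
else
    i=    # empty array: there is no last index
fi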
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/701017", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/162056/" ] }
701,219
I want to change the filename in the Mac Bash to initial capital after every dot/period, except for the filename extension. Input string example: one.two.three.four.txt Desired output: One.Two.Three.Four.txt I tried the following: echo 'one.two.three.four.txt' | awk 'BEGIN{FS=OFS="."} NF==1{$1=toupper($1)} {for (i=1;i<NF;i++) $i=toupper($i)} 1' but this leads to the output: ONE.TWO.THREE.FOUR.txt So it's making everything uppercase (except for the extension). I'm looking for initial Capital and extension in lower case. Looking for a solution with awk as sed \U won't work (for me so far it's not) in Mac Bash. Thanks in Advance
In a variation of your attempt, the following should do: echo "one.two.three.four.txt" | awk 'BEGIN{FS=OFS="."} {for (i=1;i<NF;i++) {$i=toupper(substr($i,1,1)) substr($i,2)}}1' This will again set the input and output field separator to the . . It will then iterate over all fields but the last, and re-assamble each field to be the concatenation of upper-case version of the first character (using the substr() function to isolate it) and the remainder of the original content (again, using the substr() function). In the end, it will print the current line including all modifications (this is the meaning of the seemingly "stray" 1 outside of the rule blocks). Addendum : as noted in a comment, the OP actually wanted to not only ensure initial capitals, but full-fledged capitalization of the . -separated parts, i.e. also forcing all subsequent letters to lower-case. This can be achieved by a minor modification of the above program: echo "one.two.tHREE.four.txt" | awk 'BEGIN{FS=OFS="."} {for (i=1;i<NF;i++) {$i=toupper(substr($i,1,1)) tolower(substr($i,2))}}1'
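Since the underlying goal is renaming files rather than just printing the transformed string, one way to apply it is a small loop like the following. This is only a sketch: the *.* glob is a placeholder for whatever names you actually have, and it assumes the names contain no newlines.

# Capitalize each dot-separated part (except the extension) of every matching name.
for f in *.*; do
    new=$(printf '%s\n' "$f" | awk 'BEGIN{FS=OFS="."}
        {for (i=1;i<NF;i++) $i=toupper(substr($i,1,1)) tolower(substr($i,2))} 1')
    [ "$f" = "$new" ] || mv -i -- "$f" "$new"
done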
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/701219", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/524450/" ] }
701,267
I have a large file which contains hundreds of English phrases in the following form: \phrase{. . . * * }{I shoul-d've stayed home.}{aɪ ʃʊd‿əv ˈsteɪd ˈhoʊm.} <- only replace on this line\phrase{ . . * }{Did you eat?}{dɪdʒjʊʷˈit? ↗} <- only replace on this line\phrase{ * . * . * . . . * . }{Yeah, I made some pas-ta if you're hun-gry.}{ˈjɛə, aɪ ˈmeɪd səm ˈpɑ stəʷɪf jər ˈhʌŋ gri.} <- only replace on this line It's a LaTeX .tex file. I would like to replace all r characters in each phonetic transcription (by phonetic transcription I mean every third line after the \phrase line) with the ɹ symbol (hex code U+0279 ). Doing it by hand in Emacs is cumbersome for me. I was wondering if there is a way to target those lines somehow and do the replacement automatically. All r characters have to be replaced with ɹ , there is no exception, but only in the phonetic transcription, leave the r as-is in the English/non-phonetic text. Is it possible to do that somehow by using a script or something? There are no line breaks in my document so the transcription is alway the third line after \phrase . Thank you!
An awk version (you'll need a relay file, or you can one-line it): awk '/\\phrase/ { p=NR ; } NR == p+3 { gsub("r","ɹ") ; } {print;} ' old-file.tex > new-file.tex where /\\phrase/ { p=NR ; } sets p to each line number where \phrase appears, NR == p+3 { gsub("r","ɹ") ; } performs the replacement on the 3rd line after it, and {print;} prints every line. This gives, on your sample (note the ɹeplace): \phrase{. . . * * }{I shoul-d've stayed home.}{aɪ ʃʊd‿əv ˈsteɪd ˈhoʊm.} <- only ɹeplace on this line\phrase{ . . * }{Did you eat?}{dɪdʒjʊʷˈit? ↗} <- only ɹeplace on this line\phrase{ * . * . * . . . * . }{Yeah, I made some pas-ta if you're hun-gry.}{ˈjɛə, aɪ ˈmeɪd səm ˈpɑ stəʷɪf jəɹ ˈhʌŋ gɹi.} <- only ɹeplace on this line
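If you want the change applied back to the original file rather than kept in a second copy, a cautious pattern (just a sketch, the file name is a placeholder) is to write to a temporary file and move it into place only if awk succeeds; GNU awk users could use gawk -i inplace instead:

awk '/\\phrase/ { p=NR } NR == p+3 { gsub("r","ɹ") } { print }' phrases.tex > phrases.tex.new &&
    mv phrases.tex.new phrases.tex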
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/701267", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/421232/" ] }
701,317
I just learned a trick to create a new file with the cat command. By my testing, if the last line is not followed by a newline, I have to type ctrl+d twice to finish the input, as demonstrated below. [root@192 ~]# cat > testab ctrl+d [root@192 ~]# cat > testab ctrl+d ctrl+d [root@192 ~]# Is this expected? Why this behavior?
Yes, it's expected. We say that Ctrl-D makes cat see "end of file" in the input, and it then stops reading and exits, but that's not really true. Since that's on the terminal, there's no actual "end", and in fact it's not really "end of file" that's ever detected, but any read() of zero bytes. Usually, the read() system call doesn't return zero bytes except when it's known there's no more available, like at the end of a file. When reading from a network socket where there's no data available, it's expected that new data will arrive at some point, so instead of that zero-byte read, the system call will either block and wait for some data to arrive, or return an error saying that it would block. If the connection was shut down, then it would return zero bytes, though.Then again, even on a file, reading at (or past) the end is not an interminably final end as another process could write something to the file to make it longer, after which a new attempt to read would return more data. (That's what a simple implementation of tail -f would do.) For a lot of use-cases treating "zero bytes read" as "end of file detected" happens to work well enough that they're considered effectively the same thing in practice. What the Ctrl-D does here, is to tell the terminal driver to pass along everything it was given this far, even if it's not a full line yet. At the start of a line, that's all of zero bytes, which is detected as an EOF. But after the letter b , the first Ctrl-D sends the b , and then the next one sends the zero bytes entered after the b , and that now gets detected as the EOF. You can also see what happens if you just run cat without a redirection. It'll look something like this, the parts in italics are what I typed: $ cat foo Ctrl-D foo When Ctrl-D is pressed, cat gets the input foo , prints it back and continues waiting for input. The line will look like foofoo , and there's no newline after that, so the cursor stays there at the end.
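If you want to watch those read() calls yourself, one way on Linux (assuming strace is installed) is to trace cat; ignore the reads made while libraries load, and look at the reads on file descriptor 0:

# Type "b", press Ctrl-D:  read(0, "b", ...) = 1 and cat keeps running.
# Press Ctrl-D again:      read(0, "", ...)  = 0 and cat exits.
strace -e trace=read cat > /dev/null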
{ "score": 7, "source": [ "https://unix.stackexchange.com/questions/701317", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/520451/" ] }
701,473
Is there a way to know which partition you actually booted from? fdisk -l reveals a "Boot" column that I definitely don't have on my NVME. Is this just legacy information? Device Boot Start End Sectors Size Id Type/dev/sda1 * 2048 1126399 1124352 549M b W95 FAT32/dev/sda2 1126400 975688107 974561708 464.7G 7 HPFS/NTFS/exFAT/dev/sda3 975689728 976769023 1079296 527M 27 Hidden NTFS WinRE...Device Start End Sectors Size Type/dev/nvme0n1p1 616448 2458216447 2457600000 1.1T Linux filesystem/dev/nvme0n1p2 2458216448 3907024031 1448807584 690.8G Linux filesystem/dev/nvme0n1p3 2048 616447 614400 300M EFI SystemPartition table entries are not in disk order. Considering lsblk shows that /boot/efi is mounted I'm 90% sure that it's using my nvme drive, I just wanted to confirm that's true even though there's no boot indicator from fdisk NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINTSsda 8:0 0 465.8G 0 disk├─sda1 8:1 0 549M 0 part├─sda2 8:2 0 464.7G 0 part└─sda3 8:3 0 527M 0 partsdb 8:16 0 1.8T 0 disk├─sdb1 8:17 0 99M 0 part├─sdb2 8:18 0 16M 0 part└─sdb3 8:19 0 1.8T 0 partnvme0n1 259:0 0 1.8T 0 disk├─nvme0n1p1 259:1 0 1.1T 0 part├─nvme0n1p2 259:2 0 690.8G 0 part /└─nvme0n1p3 259:3 0 300M 0 part /boot/efi I also noticed Disklabel type is dos for /dev/sda and gpt for /dev/nvme0n1 if that factors in.
Since your system apparently boots in UEFI style, the answer to the titular question is: Run efibootmgr -v as root, see the four-digit ID on the BootCurrent: line (usually the first line of output), then look at the corresponding BootNNNN line to find both the PARTUUID of the partition to boot from, and the filename containing the actual boot manager/loader used . Then run lsblk -o +PARTUUID to see the partition-unique UUIDs embedded in the GPT partition table. Find the UUID you saw on the BootNNNN line of the efibootmgr -v output, and you'll know the partition. (On MBR-partitioned disks, there is no real partition UUID, and so a shorter combination of a disk signature number and a partition number is displayed in place of a real partition UUID.) The Disklabel type is definitely a factor here: it indicates your sda uses classic MBR partitioning and boot sequence, while your nvme0n1 uses GPT partitioning and UEFI-style booting. While the GPT partition table can store a boot flag that is essentially the same as the Boot flag field in the fdisk -l output of a MBR-partitioned disk, booting MBR-style from a GPT-partitioned disk is expected to be a rare corner case, and so fdisk -l will not include it. The native UEFI-style way will not use such a flag at all, since it's now the system firmware's job to know both the name of the bootloader file and the PARTUUID of the partition to load it from . But if such a legacy flag is enabled on a GPT partition, using the i command (= print information about a partition) of a modern Linux fdisk will show it, by the presence of a LegacyBIOSBootable keyword on the Attrs: line of output. To actually toggle such a flag, you would have to use the experts-only extra commands of a GPT-aware Linux fdisk : first x , then A to toggle the flag. If you just want a list the partition table with the UEFI partition flags included, you can use fdisk -x /dev/nvme0n1 . Be advised that the output is quite a bit wider than the traditional fdisk -l output. If you are booting using the classic MBR/BIOS style, then the answer to the title question is "you don't, really." There is no ubiquitous standard way for BIOS-style firmware to tell the OS which device was actually used to boot the system. This was a long-standing problem on all OSs and OS installers on systems using legacy BIOS-style boot. If the /sys/firmware/edd directory exists, it may contain information that allows the identification of the boot disk, by identifying the order in which BIOS saw the disks in. By convention, the current boot disk is moved to the first hard disk position (also known as "disk 0x80") in the BIOS disk list, and most BIOS-based bootloaders rely on this fact. So if /sys/firmware/edd/int13_dev80 exists, and the bootloader has not switched the BIOS int13 IDs of the disks around (GRUB can do so, if you have a custom dual/multi-boot configuration that requires swapping disk IDs), then the information within may be useful to identify the actual boot disk used by the firmware. Unfortunately the BIOS extension required to have this information available was not as widespread as it could have been, and not always completely and correctly implemented even when it was present. I've seen a lot of systems with no EDD info available, some systems with incomplete EDD info, and even one system in which querying the EDD info caused the boot to hang. (Apparently the EDD info interface was designed by Dell, so if you mostly work with Dell systems, you may have better luck than me.)
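To avoid matching the two outputs by eye, something along these lines may work. It is only a sketch, run as root, and the field parsing assumes the usual efibootmgr output format with a GPT partition UUID embedded in the current boot entry:

cur=$(efibootmgr | awk '/^BootCurrent:/ {print $2}')
uuid=$(efibootmgr -v | grep "^Boot${cur}\*" |
    grep -oiE '[0-9a-f]{8}(-[0-9a-f]{4}){3}-[0-9a-f]{12}' | head -n 1)
lsblk -o NAME,PARTUUID | grep -i -- "$uuid"

The last line should show the partition (and hence the disk) the firmware actually booted from.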
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/701473", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/524688/" ] }
701,536
I'm having trouble debugging a docker issue. On my machine as host, every apt / apt-get / apt-cache command in the debian:jessie image hangs and I can't figure out why. On other machines with docker, when I run (for example) docker run --rm debian:jessie apt list it takes a few seconds but then the list pops up. On my machine, it just hangs forever (> 30 minutes) and uses a full CPU core. Any ideas on how to debug this problem? I'm on a fedora 35 (x86_64) with recent and decent hardware. I've already tried to run different commands - all take a CPU core and freeze. I tried at least apt update , apt upgrade , apt list , apt show apt , apt-cache showpkg apt and certainly a few more I can't remember tried to disable selinux via setenforce 0 on the host, to no effect tried to take the network away via the --network none arguments to docker, to no effect updated my fedora host system, to no effect checked that the docker-ce version is the latest stable - it is tried it with docker run --rm debian:latest apt list (i.e. the latest debian) - this works, but I need the old one (jessie, not latest) tried to stop the fedora firewall via systemctl stop firewalld.service and restart the docker daemon via systemctl restart docker (thanks @rubynorails), to no effect Any ideas on how to go from here?
After hours of further trial and error, another image that worked on a different host put me on the right track: there seems to be a problem with the default ulimits on Fedora [1][2]. The following works fine: docker run --rm --ulimit nofile=10000:10000 -ti debian:jessie apt list I just added the --ulimit parameter to every container run and docker build, and so far everything works like a charm. [1] https://github.com/coreos/fedora-coreos-docs/issues/103 [2] https://bugzilla.redhat.com/show_bug.cgi?id=1715254
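If you would rather not add the flag to every invocation, Docker can also be given default ulimits daemon-wide; as far as I know this is the default-ulimits key in /etc/docker/daemon.json (restart the docker service afterwards). The values below simply mirror the ones used above:

{
  "default-ulimits": {
    "nofile": { "Name": "nofile", "Hard": 10000, "Soft": 10000 }
  }
}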
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/701536", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/524770/" ] }
701,723
I'm using GNU Awk 5.0.1 and I need to use [} or [) as FS . I can't make it work. Below is what I've tried. root@u2004:~# echo test | awk -F '[}' '{printf}'awk: fatal: invalid regexp: Unmatched [, [^, [:, [., or [=: /[}/root@u2004:~# echo test | awk -F '[\}' '{printf}'awk: warning: escape sequence `\}' treated as plain `}'awk: fatal: invalid regexp: Unmatched [, [^, [:, [., or [=: /[}/root@u2004:~# echo test | awk -F '[\\}' '{printf}'awk: fatal: invalid regexp: Unmatched [, [^, [:, [., or [=: /[\}/root@u2004:~# echo test | awk -F '[}}' '{printf}'awk: fatal: invalid regexp: Unmatched [, [^, [:, [., or [=: /[}}/root@u2004:~# echo test | awk -F "[}" '{printf}'awk: fatal: invalid regexp: Unmatched [, [^, [:, [., or [=: /[}/root@u2004:~# echo test | awk -F "[\}" '{printf}'awk: warning: escape sequence `\}' treated as plain `}'awk: fatal: invalid regexp: Unmatched [, [^, [:, [., or [=: /[}/root@u2004:~# echo test | awk -F "[\\}" '{printf}'awk: warning: escape sequence `\}' treated as plain `}'awk: fatal: invalid regexp: Unmatched [, [^, [:, [., or [=: /[}/root@u2004:~# echo test | awk -F "[}}" '{printf}'awk: fatal: invalid regexp: Unmatched [, [^, [:, [., or [=: /[}}/root@u2004:~# How can I do this?
Since any multi-character string used as the input field separator ( FS ) will be interpreted as a regular expression, the string has to be a valid regular expression. awk -F '\\[}' '{ print }' Nothing special has to be done with the } , but the initial [ has to be escaped to be matched as a literal left square bracket. You need two backslashes because using a single backslash, as in \[} , would escape the square bracket and would set the delimiter expression to [} , which is an invalid regular expression. You may alternatively use [[] in place of \\[ , which matches a literal [ using a bracket expression but which does not save on typing and may be difficult to read. I took the liberty of fixing the code too. The printf statement takes a format string as an argument and then one or more expressions to output. Since you don't provide a format string, you would get an error. A shorter variant is to use 1 (or any non-empty, non-zero string). This would act as a test which is always true. A true test would trigger the default action, which is to print the current record (line). awk -F '\\[}' '1' ... although this would not do anything exciting other than output each line of input. A more useful test of the delimiter value would be awk -F '\\[}' '{ print $1 }' ... which prints the first field of each input record, e.g., {]ABC if the input is {]ABC[}{]123[} .
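Since the question mentions [) as well as [}, the same escaping idea extends to a single separator pattern that accepts either closing character; inside a bracket expression, } and ) need no escaping (a sketch, verify against your real input):

# Split fields on either "[}" or "[)"
awk -F '\\[[})]' '{ print $1 }'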
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/701723", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/520451/" ] }
701,738
While I was reading this answer , the author used this command to put the result of a heredoc to a variable: read -r -d '' VAR <<'EOF'abc'asdf"$(dont-execute-this)foo"bar"''EOF I'm a little confused about the -d option. From the help text for the read command: -d delimcontinue until the first character of DELIM is read, rather than newline So if I pass an empty string to -d , it means read until the first empty string . What does it mean? The author commented under the answer that -d '' means using the NUL string as the delimiter. Is this true (empty string means NUL string)? Why not use something like -d '\0' or -d '\x0' etc.?
Mostly, it means what it says, e.g.: $ read -d . var; echo; echo "read: '$var'"foo.read: 'foo' The reading ends immediately at the . , I didn't hit enter there. But read -d '' is a bit of a special case, the online reference manual says : -d delim The first character of delim is used to terminate the input line, rather than newline. If delim is the empty string, read will terminate a line when it reads a NUL character. \0 means the NUL byte in printf , so we have e.g.: $ printf 'foo\0bar\0' | while read -d '' var; do echo "read: '$var'"; doneread: 'foo'read: 'bar' In your example, read -d '' is used to prevent the newline from being the delimiter, allowing it to read the multiline string in one go, instead of a line at a time. I think some older versions of the documentation didn't explicitly mention -d '' . The behaviour may originally be an unintended coincidence from how Bash stores strings in the C way, with that trailing NUL byte. The string foo is stored as foo\0 , and the empty string is stored as just \0 . So, if the implementation isn't careful to guard against it and only picks the first byte in memory, it'll see \0 , NUL, as the first byte of an empty string. Re-reading the question more closely, you mentioned: The author commented under the answer that -d '' means using the NUL string as delimiter. That's not exactly right. The null string (in the POSIX parlance) means the empty string, a string that contains nothing, of length zero. That's not the same as the NUL byte , which is a single byte with binary value zero (*) . If you used the empty string as a delimiter, you'd find it practically everywhere, at every possible position. I don't think that's possible in the shell, but e.g. in Perl it's possible to split a string like that, e.g.: $ perl -le 'print join ":", split "", "foobar";'f:o:o:b:a:r read -d '' uses the NUL byte as the separator. (*not the same as the character 0 , of course.) Why not use something like -d '\0' or -d '\x0' etc.? Well, that's a good question. As Stéphane commented, originally, ksh93's read -d didn't support read -d '' like that, and changing it to support backslash escapes would have been incompatible with the original. But you can still use read -d $'\0' (and similarly $'\t' for the tab, etc.) if you like it better. Just that behind the scenes, that's the same as -d '' , since Bash doesn't support the NUL byte in strings. Zsh does, but it seems to accept both -d '' and -d $'\0' .
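One practical footnote to the heredoc usage quoted in the question: because the here-document never contains a NUL byte, read stops at end of input and returns a non-zero exit status even though the variable is filled, which matters under set -e or inside an if. A common sketch is to ignore that particular failure explicitly:

read -r -d '' VAR <<'EOF' || true
first line
second line
EOF
printf '%s\n' "$VAR"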
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/701738", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/520451/" ] }
701,739
I ran this command in Terminal: sudo apt update At the end I received this message: W: An error occurred during the signature verification. The repository is not updated and the previous index files will be used. GPG error: https://packages.microsoft.com/repos/edge stable InRelease: The following signatures couldn't be verified because the public key is not available: NO_PUBKEY EB3E94ADBE1229CF W: Failed to fetch https://packages.microsoft.com/repos/edge/dists/stable/InRelease The following signatures couldn't be verified because the public key is not available: NO_PUBKEY EB3E94ADBE1229CF W: Some index files failed to download. They have been ignored, or old ones used instead. Currently running Pop!_OS 22.04, which is based on Ubuntu 22.04
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/701739", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/521266/" ] }
701,768
Ok so the situation is that I have an unknown number of sub-directories that all follow the same naming profile: folder0, folder1, folder2, folder3 etc. Now each folder will have 3 text files, and these text files will have the same 3 file names in all folders: file1 file2 file3 I would like to find an easy way to concatenate all the text files in all the folders into one text file, starting in folder0 with file1, then file2, then file3, and the same order in all folders. Now for a small number of folders I could use cat: cat folder0/file1 folder0/file2 folder0/file3 folder1/file1 folder1/file2 folder1/file3 folder2/file1 folder2/file2 folder2/file3 folder3/file1 folder3/file2 folder3/file3 > textfile but the number of folders is unknown and could range into the 100s or 1000s. Anybody have any idea of a script that would accomplish this?
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/701768", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/525006/" ] }
701,786
I'm trying to find all files ending with: ancestors-the-humankind-odyssey-trailers- 720-1262 .mp4 high-tech-divers- 1080-1234 .mp4 amazon-divers- 720-1323 .mp4 and rename these files like this: 720-1262 .mp4 1080-1234 .mp4 720-1323 .mp4 I use a regex but get nothing. Why? What's wrong, please? find . -regex '^.*-(([0-9])*-([0-9])*)\.mp4' Only one directory and these files, no subdirectories; approximately 300 .mp4 files.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/701786", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/525015/" ] }
701,803
I've started the Ubuntu Server 22.04 installation on my Dell R210 II, but right after displaying a few pages of text the monitor goes black, then displays the message input signal out of range . I've worked around this by editing GRUB and adding nomodeset at the end of the linux line. Now the installer goes further, but shows this error and will not continue: ubuntu server probing for devices to install to failed Here I got stuck and could not find anything relevant to go further (or at least not at my level of knowledge :-)). I decided to install Ubuntu Server 20.04, then upgrade to 22.04 later, thinking I might bypass the probing error this way. The Ubuntu Server 20.04 installation went fine without issues, and I then upgraded to 22.04, but after the restart I again get the input signal out of range message on my monitor. I think something goes wrong because I can't ssh into the system, so clearly it does not fully boot. I would appreciate your help to make 22.04 work on my server, if possible. Is there a way to change how Ubuntu starts and add something like nomodeset as I did before for the installer? I have the upgraded 22.04 on one drive and another 20.04 on another, if that helps. I don't have any other hardware besides the Dell R210 II motherboard with the on-board GPU.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/701803", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/277890/" ] }
701,944
I can capture the disk devices of my Linux machine with the following command: lsblk -lnb | numfmt --to=iec --field=4 | grep disk | awk '{print $1}'sdasdbsdcsdd In my bash script I used the line below to test if the script's argument matches one of the disks on my machine: if [[ ` lsblk -lnb | numfmt --to=iec --field=4 | grep disk | awk '{print $1}' | grep -c $arg ` != 0 ]] thenecho "argument is a disk"...fi I run the script as bash /tmp/verify_disks <some arg> But in case $arg is by mistake something like arg="--help" , or in case $arg is null, I get the following error: Usage: grep [OPTION]... PATTERN [FILE]...Try 'grep --help' for more information. So what is the right approach to check whether $arg matches one of the disks on my Linux machine without this error?
You don't seem to be using most of the components of that command. All you need is: lsblk -lnb | awk '$NF=="disk"{print $1}' Then, to avoid the error message when no argument has been given, you need to quote it so grep is given something to search for even if that something is an empty string: if [[ "$(lsblk -lnb | awk '$NF=="disk"{print $1}' | grep -c "$arg")" != 0 ]]; then echo "argument is a disk"fi However, this will return "argument is a disk" when you give it nothing since anything, including an empty line, matches the empty string: $ echo foo | grep -c ""1$ echo "" | grep -c ""1 A simple way to avoid the issue—and also a very good habit to get into, you should always check the sanity of your arguments—is to check your argument first: #!/bin/bashif [[ -z "$1" ]]; then echo "The script needs at least one non-empty argument" exit 1fiarg="$1"if [[ "$(lsblk -lnb | awk '$NF=="disk"{print $1}' | grep -c "$arg")" != 0 ]]; then echo "argument is a disk"else echo "not a disk!"fi
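A further refinement you may want: grep -c "$arg" counts substring matches, so an argument like sd would match every sda, sdb, ... line. Matching the whole name exactly and ending option parsing with -- makes the test stricter (sketch; the argument check above is still good practice):

if lsblk -lnb | awk '$NF=="disk"{print $1}' | grep -qxF -- "$arg"; then
    echo "argument is a disk"
else
    echo "not a disk!"
fi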
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/701944", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/237298/" ] }
701,953
I am running debian 11.3 x64 inside a vmware workstation 16.2. Initially the vm has only one network interface [NAT Switch of vmware] assigned. And it is working pretty well. I tried to add another network adapter to the VM [Bridged Switch]. And the device got added but the network manager connection profile is not created and it showing in disconnected state. below are the nmcli outputs. xxxx@yyyyyy:~$ nmcli deviceDEVICE TYPE STATE CONNECTIONens33 ethernet connected Wired connection 1ens36 ethernet disconnected --lo loopback unmanaged -- xxxxx@yyyyy:~$ nmcliens33: connected to Wired connection 1 "Intel 82545EM" ethernet (e1000), 00:0C:29:1A:1F:8A, hw, mtu 1500 ip4 default inet4 192.168.153.133/24 route4 0.0.0.0/0 route4 192.168.153.0/24 inet6 fe80::20c:29ff:fe1a:1f8a/64 route6 fe80::/64 route6 ff00::/8ens36: disconnected "Intel 82545EM" 1 connection available ethernet (e1000), 00:0C:29:1A:1F:94, hw, mtu 1500lo: unmanaged "lo" loopback (unknown), 00:00:00:00:00:00, sw, mtu 65536DNS configuration: servers: 192.168.153.2 domains: localdomain interface: ens33 To connect to the new interface, I have to create connection profile by running sudo nmcli c add type ethernet ifname ens36 con-name Wired2 As per the debian manual, https://manpages.debian.org/testing/network-manager/NetworkManager.conf.5.en.html no-auto-default - Specify devices for which NetworkManager shouldn't create default wired connection (Auto eth0). By default, NetworkManager creates a temporary wired connection for any Ethernet device that is managed and doesn't have a connection configured . FYI The etc/network/interfaces does not any entry for the device. Network Manager Configuration xxx@yyyyy:~$ cat /etc/NetworkManager/NetworkManager.conf[main]plugins=ifupdown,keyfile[ifupdown]managed=false Questions Why the NetworkManager is not auto creating profiles for wiredconnections? Is there something else to be enabled? Am I doing something wrong?
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/701953", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/525226/" ] }
702,022
I have non-standard data, which I'd like to standardise file: d101 11001e101 9665f101 9663d102 11002e102 11003f102 11004g102 11005 desired output: d101 11001e101 12001f101 12002d102 11002e102 11003f102 11004g102 11005 so the logic should be, if length of column2 = 4 it should replace it with incremental numbering of a provided series: in this case 1200 is series, & 1, 2, 3 .. are increments.
$ awk -v n=12000 'length($2)==4 {$2=++n} {print}' filed101 11001e101 12001f101 12002d102 11002e102 11003f102 11004g102 11005 Note that we first increment n and then assign, to use the new value. If we wanted to start printing from 12000 we would use: $2=n++ , first assign and then increase.
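If the series should stay configurable, a variant (sketch, variable names are arbitrary) passes the prefix in and builds the value by string concatenation. Note that this literally concatenates, so after nine replacements the number gains a digit, whereas the arithmetic form above does not:

awk -v prefix=1200 'length($2)==4 { $2 = prefix (++c) } { print }' file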
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/702022", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/117409/" ] }
702,114
I have a list of files as follows: $ echo *add akfefg aka ab ba I want to echo file names containing both 'a' and 'b' in either order. I tried echo *[ab]*[ba]*aka ab ba and it also prints 'aka'. I've tried $ echo *a*b* && echo *b*a*abba it does its job but 'ba' is printed in next line. Is there a simpler way to do this? Thank you.
In this answer, I'm addressing two different scenarios: You want to match the filename that contains both a and b in the current directory and then you want to do something to these files . You want to process the output of your two echo commands to remove the newlines. If you want to loop over the filenames, then it would be good to have a single pattern that matches all names and then uses the expansion of that pattern in a loop. Using echo would not be advisable as this removes the distinction between individual filenames. In the zsh shell, the single pattern (*a*b*|*b*a*) would expand to all names where the letters a and b occur in any order. In the bash shell, you could use the single extended globbing pattern (or ksh -like globbing pattern) @(*a*b*|*b*a*) to do the same thing. The bash shell does not enable extended globbing patterns by default, so you would need to enable these with shopt -s extglob first. In the bash shell, therefore, you could use something like shopt -s extglobfor name in @(*a*b*|*b*a*); do # process "$name" heredone Another way to generate the wanted filenames for processing is by using find : find . ! -name . -prune -name '*a*' -name '*b*' This looks in the current directory (only) for any name that contains both a and b . You would then call some other utility using -exec at the end of this command to perform the operation, for example with find . ! -name . -prune -name '*a*' -name '*b*' -exec sh -c ' for name do # process "$name" here done' sh {} + Here, find generates the filenames used by the in-line sh -c script. See also Understanding the -exec option of `find` However, if this is a text-processing exercise and the purpose is to arrange two or more lines of text (the output from your two echo invocations) into a single line, then you'd do this by piping the text through paste -s - . This would additionally remove any newlines embedded in filenames. Using paste in this way rather than tr -d '\n' (which also removes newlines) ensures that the resulting text is terminated by a single newline at the end.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/702114", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/327890/" ] }
702,193
Clear thinking and clear communication are facilitated when different terms are used to represent different concepts. This is particularly useful when the 2 concepts are very similar but different. We tend to use the term "command" to represent 2 very similar but different concepts. concept 1: A single program entered on a command-line interface. Options might be passed to the program, but there is still only 1 program being used. example(s): $ ls$ ls -alF This concept is referred to as a "command". concept 2: Anything entered on a command-line interface before hitting the Enter key to instruct the shell to process it. example(s): $ ls -alF | head > output.txt; cat output.txt This concept is also referred to as a "command". This is true even though it contains 3 different "commands" according to the previous definition. For people who desire to use different terms to represent different concepts in order to think and communicate ideas more precisely, what are the best terms to use to represent these 2 different concepts?
Well, I guess you could read the Shell Command Language specification, esp. 2.9 Shell Commands . This section describes the basic structure of shell commands. The following command descriptions each describe a format of the command that is only used to aid the reader in recognizing the command type, and does not formally represent the syntax. [...] Each description discusses the semantics of the command; for a formal definition of the command language, consult Shell Grammar . A command is one of the following:[Simple command, Pipeline, List, Compound command, Function definition] This, below, is a simple command , and ls here is the "command name": ls -alF This, below, is a pipeline with two simple commands in it: ls -alF | head > output.txt This, below, is a sequential list (or just a list ) of a pipeline and a simple command: ls -alF | head > output.txt; cat output.txt (Then again, you can take the very first example here also as a (degenerate) pipeline.) And this, below, is a list with an AND-list containing a pipeline of a compound command and a simple command, with another simple command next in the list. for x in a b c; do echo "$x"; done | cat -n && echo ok; echo end. (And if I read the part on Lists correctly, if you separated the commands with a newline instead of a ; , it'd be a compound list, but that's nitpicking already.) I wouldn't really expect other documentation (or posts on this site) to carefully adhere to using those exact phrases (correctly), so trying to communicate using them precisely might turn out to be difficult. Embrace the chaos.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/702193", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/72482/" ] }
702,210
Before I resort to writing Python to do this - there must be a way in bash with find or similar tools. I want to search a tree, and find all files whose name matches a pattern, that are in a directory matching another pattern. For example: all files named foo_[0-9][0-9] in a dir named bar_[0-9] . So, I want all these files found (from the current dir): a/b/bar_4/foo_02b/bar_0/foo_03a/b/c/d/bar_8/foo_88 Thanks.
For this case, you can use the -path <pattern> option, similar to the -name option of find : find . -path "*bar_[0-9]/foo_[0-9][0-9]"
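If the point is to act on the matches rather than just list them, the usual extension is to restrict the search to regular files and add -exec; the ls -l here is only a stand-in for whatever you actually want to run:

find . -type f -path '*bar_[0-9]/foo_[0-9][0-9]' -exec ls -l {} +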
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/702210", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/90748/" ] }
702,221
Trying to learn about UIDs and GIDs. Various online reading led me to believe that my UID is saved in /etc/passwd , but this doesn't appear to be the case on a server where I work: $ whoamiuser1$ cat /etc/passwd | grep user1$ Is there a(nother) file besides /etc/passwd that could contain my UID? (I'm assuming UID is similar to GID in that there is a file somewhere that contains it. I've found the GID I'm interested in in the file /etc/group ) I know that I can get my UID with the command id -u , but for this question, I'm specifically interested in learning whether there's a file that contains it.
Yes /etc/passwd is one of many ways the user account database can be stored and queried. In many Unix-like systems, the Name Service Switch (initially from Solaris) is responsible for translating some system names to/from ids using a number of methods. Its configuration is usually stored in /etc/nsswitch.conf . In there, you'll find entries for a number of databases and how they are handled (group, passwd, services, hosts, networks...). For the hosts database which is used to translate host names to network protocol addresses, you'll find that DNS and sometimes mDNS are generally queried in addition to /etc/hosts . When a process requests information about a user name such as with the getpwnam() standard function, the methods to use are looked up in that file for the passwd entry. If such a method is the files method, /etc/<db> will be looked up. On GNU systems, that's typically done by some /lib/<system>/libnss_files.so.<version> dynamically loaded module. But you can have many more, such as NIS+, LDAP, SQL. Some of those methods are included with the GNU libc, some can be installed separately. On Debian or derivatives, see the output of apt-cache search 'NSS module' for instance. In enterprise environments, where the user database is centralised, the most popular central DB was NIS, then NIS+ while these days, it's rather LDAP or Microsoft's Active Directory (or its clones for Unix). If present, the get{pw/host/grp}...() functions of the GNU libc will also query a name service caching daemon via /run/nscd/socket instead of invoking the whole NSS stack and query the backend DBs directly. Then the querying will be done by nscd and cached to speed up later queries. Some NSS modules can can also do their caching themselves. On GNU/Linux systems, a popular method is using System Security Services ( sss ). That comes with a separate daemon ( sssd ) that handles the requests and despatches them to other databases (such as LDAP / AD) while also doing some caching. Then /etc/nsswitch.conf will have a sss method for most DBs, and the backends are configured in the sssd configuration. PAM (responsible for authentication) also typically queries sssd in that case. That should help clarify why querying /etc/passwd (or /etc/group or /etc/hosts ...) to get account (or group/host...) information from the command line is wrong in the general case. Most modern systems will have a getent command instead for that (also from Solaris), or more portably, you can use perl 's interface to all the standard get<db>*() functions. $ getent passwd binbin:x:2:2:bin:/bin:/usr/sbin/nologin$ perl -le 'print for getpwnam("bin")'binx22bin/bin/usr/sbin/nologin $ getent services domaindomain 53/tcp$ perl -le 'print for getservbyname("domain", "tcp")'domain53tcp$ perl -le 'print for getservbyname("domain", "udp")'domain53udp
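To see which methods your own system consults for the passwd and group databases, you can simply look at the relevant lines of that configuration file. A sketch, with hypothetical output (the list of methods varies a lot between distributions and setups):

$ grep -E '^(passwd|group):' /etc/nsswitch.conf
passwd:         files systemd sss
group:          files systemd sss

With a configuration like that, a user absent from /etc/passwd (the files method) can still be known to the system through the sss method, which is why getent passwd yourname (or the perl getpwnam() interface shown above) is the reliable way to query the account database.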
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/702221", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/214773/" ] }
702,331
In bash , I know that there are a few ways to check whether a given variable is defined and contains a value. However, I want to check whether a given variable exists in the environment, and not as simply a local variable. For example ... VAR1=value1 # local variableexport VAR2=value2 # environment variableif [[ some_sort_of_test_for_env_var VAR1 ]]then echo VAR1 is in the environmentelse echo VAR1 is not in the environmentfiif [[ some_sort_of_test_for_env_var VAR2 ]]then echo VAR2 is in the environmentelse echo VAR2 is not in the environmentfi How can some_sort_of_test_for_env_var be defined so that the bash code above will cause the following two lines to be printed? VAR1 is not in the environment VAR2 is in the environment I know that I can define a shell function to run env and do a grep for the variable name, but I'm wondering if there is a more direct "bash-like" way to determine whether a given variable is in the environment and not just a local variable. Thank you in advance.
Your title and body are very different. Your title is either impossible or meaningless. All environment variables in bash are also shell variables, so no environment variable ever can be 'only' in the environment. For your body if declare -p VARNAME | grep -q '^declare .x'; then # it's in the environment# or typeset if you prefer the older name If you particularly want the [[ syntax if [[ $(declare -p VARNAME) == declare\ ?x* ]] # ditto
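If you need the test in several places, it can be convenient to wrap it in a small function. A sketch along the same lines as above (is_exported is just a made-up name):

is_exported() {
    declare -p "$1" 2>/dev/null | grep -q '^declare -[^ ]*x'
}

VAR1=value1
export VAR2=value2
is_exported VAR1 && echo 'VAR1 is in the environment' || echo 'VAR1 is not in the environment'
is_exported VAR2 && echo 'VAR2 is in the environment' || echo 'VAR2 is not in the environment'

The 2>/dev/null just silences the error declare -p prints when the variable is not set at all, in which case the function returns false as well.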
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/702331", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/274756/" ] }
702,347
How to write a sed (or awk , or both) which will rewrite the following: echo 'v100 v201 v102 v300 v301 v500 v999 v301' | sed/awk ... to this output: v1 v2 v3 v4 v5 v6 v7 v5 i.e. each subsequent vx was rewritten to start with v1...vn and where the same v was used in the sequence (i.e. v301 ) the same v should be applied (as in v5 ). Sidenote: the example input sequence shows all possible eventualities (i.e. duplicates, out of order originals, jumps in original numbers). Are you the sed or awk expert who can answer this?
Using awk : awk '{ for (i=1; i<=NF; ++i) $i = (seen[$i] ? seen[$i] : seen[$i] = "v" ++n) }; 1' This goes through all the fields of each input line and reassigns it. The value that it is reassigned is v followed by the next value of the counter n , unless the field's value has been seen before, in which case its new value will be the same as that field's value was given previously. The 1 at the end triggers the outputting of the modified line. Testing: $ echo 'v100 v201 v102 v300 v301 v500 v999 v301' | awk '{ for (i=1; i<=NF; ++i) $i = (seen[$i] ? seen[$i] : seen[$i] = "v" ++n) }; 1'v1 v2 v3 v4 v5 v6 v7 v5 Alternative awk command that only modifies the field if it matches the regular expression ^v[0-9]+$ : awk '{ for (i=1; i<=NF; ++i) if ($i ~ "^v[0-9]+$") $i = (seen[$i] ? seen[$i] : seen[$i] = "v" ++n) }; 1' Or, formatted across multiple lines for readability: awk '{ for (i=1; i<=NF; ++i) if ($i ~ "^v[0-9]+$") $i = (seen[$i] ? seen[$i] : seen[$i] = "v" ++n)}; 1'
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/702347", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/241016/" ] }
702,400
It happens in both Ubuntu 22.04 and Manjaro (Gnome). If I install XRDP on it and connect to it via XRDP, for some apps the sudo password dialogue is working, and for some other apps, it does not work. Why is it so, and is there any way to fix it? An example of not working app is Nautilus (Files). On Manjaro, the nautilus-admin extension is installed by default, and on Ubuntu, you can install it from the official repository. Now, if you right click a directory in Files and choose "Open as Administrator", it fails in XRDP, because it does not show the sudo password dialogue as it does when doing it locally.
Using awk : awk '{ for (i=1; i<=NF; ++i) $i = (seen[$i] ? seen[$i] : seen[$i] = "v" ++n) }; 1' This goes through all the fields of each input line and reassigns it. The value that it is reassigned is v followed by the next value of the counter n , unless the field's value has been seen before, in which case its new value will be the same as that field's value was given previously. The 1 at the end triggers the outputting of the modified line. Testing: $ echo 'v100 v201 v102 v300 v301 v500 v999 v301' | awk '{ for (i=1; i<=NF; ++i) $i = (seen[$i] ? seen[$i] : seen[$i] = "v" ++n) }; 1'v1 v2 v3 v4 v5 v6 v7 v5 Alternative awk command that only modifies the field if it matches the regular expression ^v[0-9]+$ : awk '{ for (i=1; i<=NF; ++i) if ($i ~ "^v[0-9]+$") $i = (seen[$i] ? seen[$i] : seen[$i] = "v" ++n) }; 1' Or, formatted across multiple lines for readability: awk '{ for (i=1; i<=NF; ++i) if ($i ~ "^v[0-9]+$") $i = (seen[$i] ? seen[$i] : seen[$i] = "v" ++n)}; 1'
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/702400", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/379327/" ] }
702,421
I have a file (file1) that looks like this: ROW 1 AA 120 APFGHKDESFNNJFHGRIHJASFGNSKDHFIXXXXXXROW 2 AA 234 XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXROW 3 AA 122 XXXXXXXXXXXXXXXXXXXXXROW 4 AA 89 WUAHGLIHGUNGBGDSYUXXXXXXXXXXXXXXFGOAYGIGWEIWIGFUEGFHUIWGEFUROW 5 AA 186 XXWANFJHOUNGRIGNOROW 6 AA 156 WANLHRIOGRNINGIJOHONJPHHYGKHDY... there are multiple rows that contain different numbers of X.however, the result should not contains the rows which only consist of X, it should be: ROW 1 AA 120 APFGHKDESFNNJFHGRIHJASFGNSKDHFIXXXXXXROW 4 AA 89 WUAHGLIHGUNGBGDSYUXXXXXXXXXXXXXXFGOAYGIGWEIWIGFUEGFHUIWGEFUROW 5 AA 186 XXWANFJHOUNGRIGNOROW 6 AA 156 WANLHRIOGRNINGIJOHONJPHHYGKHDY... Thank you for the help!
With awk , print the lines where last field has at least one character which is not X : awk '$NF ~ /[^X]/' fileROW 1 AA 120 APFGHKDESFNNJFHGRIHJASFGNSKDHFIXXXXXXROW 4 AA 89 WUAHGLIHGUNGBGDSYUXXXXXXXXXXXXXXFGOAYGIGWEIWIGFUEGFHUIWGEFUROW 5 AA 186 XXWANFJHOUNGRIGNOROW 6 AA 156 WANLHRIOGRNINGIJOHONJPHHYGKHDY Or with grep : grep -v '[[:space:]]XX*$' file
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/702421", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/521972/" ] }
702,437
I have a file like below: somearbitrary number ofleadinglinesa prefix followed by wmd v0.0.0-20220406135915-ce5e3ee6c6bfsometrailinglines This is only an example of what the file could look like. The unvarying parts are that, for the line I am interested in: There is always wmd v0.0.0- , followed by 14 digits, followed by a hyphen, followed by 12 alphanumeric characters How can I write a sed command that will allow me to replace the 20220406135915-ce5e3ee6c6bf portion with the value in a shell variable new_text ? In other words, if new_text had the value 99999999999999-aaaaaaaaaaaa , I want to find the <whatever goes here> part of the sed command that would produce the following output: $ sed -e "s/wmd v0.0.0-<whatever goes here>/wmd v0.0.0-$new_text/" my-file.txtsomearbitrary number ofleadinglinesa prefix followed by wmd v0.0.0-99999999999999-aaaaaaaaaaaasometrailinglines
You can use the \{..\} quantifiers to specify how many times a character class should match. sed -e "s/wmd v0\.0\.0-[0-9]\{14\}-[0-9a-f]\{12\}/wmd v0.0.0-$new_text/"# ~ ~ ~~~~~~~~~~~~~~~~~~~~~~~~~~ Also note that a dot has a special meaning in regular expressions. Backslash it to match literally. Also note that if $new_text contained a slash or some other characters special to sed, the command can break.
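If $new_text might ever contain such characters, one common workaround is to escape the characters that are special on the right-hand side of s/// before using it. A sketch (it does not handle newlines in the value):

escaped=$(printf '%s\n' "$new_text" | sed 's:[&/\]:\\&:g')
sed -e "s/wmd v0\.0\.0-[0-9]\{14\}-[0-9a-f]\{12\}/wmd v0.0.0-$escaped/" my-file.txt

For a value like 99999999999999-aaaaaaaaaaaa the escaping changes nothing, but it keeps the command from breaking on a stray / , & or backslash.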
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/702437", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/392334/" ] }
702,474
I just recently downloaded a game called Starsector and instead of going into the folder every time and running ./starsector.sh , I want to create a desktop entry. Below is my current .desktop file called starsector.desktop : [Desktop Entry]Version=1.0Name=StarsectorGenericName=StarsectorExec=sh -c "cd /usr/games/starsector && sudo ./starsector.sh"Terminal=falseIcon=/usr/games/starsector/graphics/icons/cargo/ai_core_alpha.pngType=ApplicationCategories=Game I have moved this file into ~/.local/share/applications . When copying the Exec line and running it in my shell, it runs the game perfectly fine, but when clicking on the icon it does nothing. Things I have tried I have tried to set Terminal=true Run desktop-file-validate ; there are no errors present Adding exec permissions to the file Copying file to the desktop and clicking "Allow launching" Running gio set myapp.desktop metadata::trusted yes As of now, the file permissions are -rw-rw-r-- . I don't know if this is a file permission problem or simply a problem with the game executable itself somehow. EDIT: More things I have tried Setting Exect=sh -c "cd /usr/game/starsector && sh ./starsector.sh Changed owner and group of starsector.sh from root to my own userChanged owner and group of starsector.desktop from my own user to root:root
You can use the \{..\} quantifiers to specify how many times a character class should match. sed -e "s/wmd v0\.0\.0-[0-9]\{14\}-[0-9a-f]\{12\}/wmd v0.0.0-$new_text/"# ~ ~ ~~~~~~~~~~~~~~~~~~~~~~~~~~ Also note that a dot has a special meaning in regular expressions. Backslash it to match literally. Also note that if $new_text contained a slash or some other characters special to sed, the command can break.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/702474", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/525833/" ] }
702,477
How to write a sed (or awk , or both) which will take the following: echo '1 aa 2 2 3 bb 5 bb 2 5' | sed/awk ... And only replace the n-th occurrence of a string? For example the 3rd occurrence of 2 or the second occurrence of bb ? So the expected output would be (when replacing 2nd occurrence of bb with replaced for example): 1 aa 2 2 3 bb 5 replaced 2 5 The input string, the replacement string, and n can be any arbitrary input.
Using sed to change the second occurrence of bb $ sed 's/bb/new-bb/2' file1 aa 2 2 3 bb 5 new-bb 2 5 or to change the third occurrence of 2 $ sed 's/2/12/3' file1 aa 2 2 3 bb 5 bb 12 5
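Since the question says the string, the replacement and n can be arbitrary input, the same idea also works with shell variables. A sketch (note that the target is treated as a regular expression and matched as a substring, so anything special to sed in it would need escaping first):

target=bb repl=replaced n=2
echo '1 aa 2 2 3 bb 5 bb 2 5' | sed "s/$target/$repl/$n"
1 aa 2 2 3 bb 5 replaced 2 5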
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/702477", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/241016/" ] }
702,494
Good day. I have a little problem. I have this function in a bash script: function denyCheck(){ file=$(readlink -f $(which $(printf "%s\n" "$1"))) if [[ -x $file ]];then return false else return true fi}denyCheck "firefox" I pass this function a string, which is a command, and it resolves the original path of that command (for example: /usr/lib/firefox/firefox.sh) and returns true if the file is executable, else false. But the problem is that the parameter (in the example "firefox") is run as a command and opens the Firefox browser. How can I make it so that the parameter is not run? Thank you very much.
Using sed to change the second occurance of bb $ sed 's/bb/new-bb/2' file1 aa 2 2 3 bb 5 new-bb 2 5 or to change the third occurance of 2 $ sed 's/2/12/3' file1 aa 2 2 3 bb 5 bb 12 5
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/702494", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/525853/" ] }
702,563
Can a command line utility save sub-strings conditionally in different files? I have a file ( file.txt ) with several lines like the following. 1/1_ABCD4.txt:200207111/1_ABCD10.txt:200207312/2_ABCD2.txt:200711032/2_ABCD5.txt:200711073/3_ABCD1.txt:200902253/3_ABCD3.txt:20090230 My goal is to save 20020711 together with 20020731 in file 1 , 20071103 with 20071107 in file 2 , and 20090225 with 20090230 in file 3 ? I could extract the desired sub-strings after : with the following command, but would lose the reference digit by doing so: $ grep -oP 'txt\:\K[A-Z0-9-]+' 'path/to/file.txt'200207112002073120071103200711072009022520090230 Is it possible to build three separate files with the first digit before / as target reference while using command line? The destination might be the same directory like the original text file. File: 2002071120020731 File: 2007110320071107 File: 2009022520090230 Thank you.
With awk : awk -F'[:/]' '{print $NF > $1}' file We split the row using both / and : as separators. The last field ( $NF ) is what to print, and the first field ( $1 ) is the output filename. After running for your test input file: $ head 1 2 3==> 1 <==2002071120020731==> 2 <==2007110320071107==> 3 <==2009022520090230 Also, depending on your data, it is good to add a condition before this action, to avoid printing to a file with an arbitrary name in case the input contains lines with a different structure; unchecked input could be dangerous. A simple example: if we want to print only when the first field (the filename) has only digits: awk -F'[:/]' '$1 ~ /^[0-9]+$/ {print $NF > $1}' file
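One more defensive tweak, in case the real input maps to a large number of distinct output files: some awk implementations keep every output file open and can run into the open-file limit, so you can close each file right after writing to it. A sketch (with >> plus close() the files are appended to, so they should not already exist from a previous run):

awk -F'[:/]' '$1 ~ /^[0-9]+$/ {print $NF >> $1; close($1)}' file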
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/702563", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/508678/" ] }
702,567
Getting this error when installing Linux from an ISO using VirtualBox 6.1. I have tried Linux 8.6 and 8.2 and both are giving the same error. I also tried a workaround of disabling "Connect to redhat insight". What is the solution for this? I am really tired of trying to fix this issue.
With awk : awk -F'[:/]' '{print $NF > $1}' file We split the row using both / and : as separators. The last field ( $NF ) is what to print, and the first field ( $1 ) is the output filename. After running for your test input file: $ head 1 2 3==> 1 <==2002071120020731==> 2 <==2007110320071107==> 3 <==2009022520090230 Also, depending on your data, it is good to add a condition before this action, to avoid printing to a file with a random name, in case we have more lines with different structure, the input could be dangerous. A simple example, if we want to print only when the first field (the filename) has only digits: awk -F'[:/]' '$1 ~ /^[0-9]+$/ {print $NF > $1}' file
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/702567", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/525917/" ] }
702,776
My assignment is to write a bash script that reads a directory and returns the file type of each file within, including all subdirectories. Using the find command is not allowed. I've tried to implement this using essentially two for loops, but I'm getting a segmentation fault. I found that the script does work however when I pass a directory without subdirectories. Would someone be willing to look over a noob's code and tell me what's wrong? Thank you very much. #!/bin/bashfunc () {for name in $1do if [ -f "$name" ] then file "$name" elif [ -d "$name" ] then func "$1" fidone}directory="$1/*"for name in $directorydo if [ -f "$name" ] then file "$name" elif [ -d "$name" ] then func "$1" fidone
If the " $name " it's processing is a directory, you need to call func on its contents, and not the original argument, or you get an infinite loop, and hence the segfault. Your code can be greatly reduced by using a function on the original argument, and have the function apply to each item separately. Right now you're repeating most of what happens in the function in the main body anyway. #!/bin/bashfunc () { local arg="$1" if [[ -f "$arg" ]] ; then file -- "$arg" return fi if [[ -d "$arg" ]] ; then for file in "$arg"/* ; do func "$file" done fi}func "$1"
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/702776", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/526105/" ] }
702,788
I'm trying to do the below: echo "/Users/anon/Applications/Chrome Apps.localized/Spotify.app" | sed -E 's:([^\/]*$).*:\1:' which I assumed would capture Spotify.app and replace the entire string with it, but this doesn't work. Instead, I get the entire string back. So, I thought maybe my regex is wrong, so I did the below to test it out: echo "/Users/anon/Applications/Chrome Apps.localized/Spotify.app" | sed 's:[^\/]*$:PWA.app:' But I get the expected output: /Users/anon/Applications/Chrome Apps.localized/PWA.app . So, I'm not sure what am I doing wrong here. Why is the same regex not getting matched when grouped?
If the " $name " it's processing is a directory, you need to call func on its contents, and not the original argument, or you get an infinite loop, and hence the segfault. Your code can be greatly reduced by using a function on the original argument, and have the function apply to each item separately. Right now you're repeating most of what happens in the function in the main body anyway. #!/bin/bashfunc () { local arg="$1" if [[ -f "$arg" ]] ; then file -- "$arg" return fi if [[ -d "$arg" ]] ; then for file in "$arg"/* ; do func "$file" done fi}func "$1"
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/702788", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/526112/" ] }
702,796
I researched the kill , pkill and killall commands, and I understood most of their differences. However, I am confused about their signals: If I run kill -l , I see: 1) SIGHUP 2) SIGINT 3) SIGQUIT 4) SIGILL 5) SIGTRAP 6) SIGABRT 7) SIGBUS 8) SIGFPE 9) SIGKILL 10) SIGUSR111) SIGSEGV 12) SIGUSR2 13) SIGPIPE 14) SIGALRM 15) SIGTERM16) SIGSTKFLT 17) SIGCHLD 18) SIGCONT 19) SIGSTOP 20) SIGTSTP21) SIGTTIN 22) SIGTTOU 23) SIGURG 24) SIGXCPU 25) SIGXFSZ26) SIGVTALRM 27) SIGPROF 28) SIGWINCH 29) SIGIO 30) SIGPWR31) SIGSYS 34) SIGRTMIN 35) SIGRTMIN+1 36) SIGRTMIN+2 37) SIGRTMIN+338) SIGRTMIN+4 39) SIGRTMIN+5 40) SIGRTMIN+6 41) SIGRTMIN+7 42) SIGRTMIN+843) SIGRTMIN+9 44) SIGRTMIN+10 45) SIGRTMIN+11 46) SIGRTMIN+12 47) SIGRTMIN+1348) SIGRTMIN+14 49) SIGRTMIN+15 50) SIGRTMAX-14 51) SIGRTMAX-13 52) SIGRTMAX-1253) SIGRTMAX-11 54) SIGRTMAX-10 55) SIGRTMAX-9 56) SIGRTMAX-8 57) SIGRTMAX-758) SIGRTMAX-6 59) SIGRTMAX-5 60) SIGRTMAX-4 61) SIGRTMAX-3 62) SIGRTMAX-263) SIGRTMAX-1 64) SIGRTMAX But pkill -l gives: pkill: invalid option -- 'l'Usage: pkill [options] <pattern>Options: -<sig>, --signal <sig> signal to send (either number or name) -e, --echo display what is killed -c, --count count of matching processes -f, --full use full process name to match -g, --pgroup <PGID,...> match listed process group IDs -G, --group <GID,...> match real group IDs -i, --ignore-case match case insensitively -n, --newest select most recently started -o, --oldest select least recently started -P, --parent <PPID,...> match only child processes of the given parent -s, --session <SID,...> match session IDs -t, --terminal <tty,...> match by controlling terminal -u, --euid <ID,...> match by effective IDs -U, --uid <ID,...> match by real IDs -x, --exact match exactly with the command name -F, --pidfile <file> read PIDs from file -L, --logpidfile fail if PID file is not locked --ns <PID> match the processes that belong to the same namespace as <pid> --nslist <ns,...> list which namespaces will be considered for the --ns option. Available namespaces: ipc, mnt, net, pid, user, uts -h, --help display this help and exit -V, --version output version information and exitFor more details see pgrep(1). Even when there is no list of signals, this command supports/uses signals, just see in the previous output that appears -<sig>, --signal <sig> signal to send (either number or name) And finally, killall -l returns: HUP INT QUIT ILL TRAP ABRT BUS FPE KILL USR1 SEGV USR2 PIPE ALRM TERM STKFLTCHLD CONT STOP TSTP TTIN TTOU URG XCPU XFSZ VTALRM PROF WINCH POLL PWR SYS Question Why are the signal lists for kill , killall and pkill not the same? I assumed pkill and killall should had shown the same output as kill -l - and at first glance, it seems like pkill does not support signals. Environment: I have this situation for Ubuntu Server 18:04, 20:04 and Fedora Workstation 36
Why are the signal lists for kill, killall and pkill not the same? Most likely, because they were implemented differently, with different frames of mind, at different times, by different persons. You should note that all of the commands have some form of a --signal argument that can specify any signal the kernel is capable of sending, regardless of which signals the inline help or manual pages may have written into them by hand. As always, consult a command's documentation (generally available in the manual with man command ) for details on its usage, invocation, and options. You can also check §7 of the manual for details- see List of Kill Signals for instance.
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/702796", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/383045/" ] }
702,884
I have file like this d1000 1000d1001 100d1002 10d1003 1 I want to modify second column where length is not equal 4.But I want to print only lines that are modified, so original text in column 2 stays in coulmn 2, and change is printed in 3rd column with an increment to a number. Desired Output: d1001 100 1101d1002 10 1102d1003 1 1103 I'm trying along these lines, but not able to get the syntax or results awk -v n=1100 '((length($2)!=4 && length($2)>0) {new=($2=++n)}; {print $1, $2, new}' file
You can print the existing line ( $0 ) and a new field like this, , will use the output separator between the arguments. awk -v n=1100 'length($2)!=4 {print $0,++n}' file Output: d1001 100 1101d1002 10 1102d1003 1 1103 If you need any additional formatting of the output, you can use printf function. Here is an example for alignment: $ awk -v n=1100 'length($2)!=4 {printf "%s %4s %s\n", $1, $2, ++n}' filed1001 100 1101d1002 10 1102d1003 1 1103$ awk -v n=1100 'length($2)!=4 {printf "%s %-4s %s\n", $1, $2, ++n}' filed1001 100 1101d1002 10 1102d1003 1 1103
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/702884", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/117409/" ] }
702,891
Currently I'm updating my iptables rules using a bash script, where I call the command: iptables -F then I apply the rules. The problem being that I need to update the rules to gain access to port 80, then I drop everything, in a cron job every 10 minutes. So every 10 minutes I call iptables -F to delete old rules and open all ports (the thing I don't want). I want to not have to flush the rules every 10 minutes, just edit or update the existing rules.
You can print the existing line ( $0 ) and a new field like this, , will use the output separator between the arguments. awk -v n=1100 'length($2)!=4 {print $0,++n}' file Output: d1001 100 1101d1002 10 1102d1003 1 1103 If you need any additional formatting of the output, you can use printf function. Here is an example for alignment: $ awk -v n=1100 'length($2)!=4 {printf "%s %4s %s\n", $1, $2, ++n}' filed1001 100 1101d1002 10 1102d1003 1 1103$ awk -v n=1100 'length($2)!=4 {printf "%s %-4s %s\n", $1, $2, ++n}' filed1001 100 1101d1002 10 1102d1003 1 1103
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/702891", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/526225/" ] }
702,921
I have a Sipeed Lichee RV that runs a Debian image, but I cannot install software due to missing key. I have followed the Package Authentication Instructions but am getting this error: root@sipeed:/etc/apt# gpg --keyserver hkps://keys.openpgp.org --recv-keys 0xE852514F5DF312F6gpg: key E852514F5DF312F6: new key but contains no user ID - skippedgpg: Total number processed: 1gpg: w/o user IDs: 1 My sources.list file contains: deb http://ftp.ports.debian.org/debian-ports/ sid main The apt update command returns The following signatures couldn't be verified because the public key is not available: NO_PUBKEY E852514F5DF312F6
You can print the existing line ( $0 ) and a new field like this, , will use the output separator between the arguments. awk -v n=1100 'length($2)!=4 {print $0,++n}' file Output: d1001 100 1101d1002 10 1102d1003 1 1103 If you need any additional formatting of the output, you can use printf function. Here is an example for alignment: $ awk -v n=1100 'length($2)!=4 {printf "%s %4s %s\n", $1, $2, ++n}' filed1001 100 1101d1002 10 1102d1003 1 1103$ awk -v n=1100 'length($2)!=4 {printf "%s %-4s %s\n", $1, $2, ++n}' filed1001 100 1101d1002 10 1102d1003 1 1103
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/702921", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/188363/" ] }
702,938
I want to find all subdirectories inside a single directory. I need to filter by the name length. I have come up with the following: find ./directory -maxdepth 1 -type d How do I filter the results by the name length -- for example, only subdirectories with more than 4 characters?
You could use the ? wildcard five times: find ./directory -maxdepth 1 -type d -name '?????*' This requires matches to have at least five characters in their name.
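If the length limit is not fixed in advance (say it comes from a variable), building a pattern gets awkward, and one alternative is to let a small shell test count the characters of each directory's basename. A sketch:

minlen=4
find ./directory -maxdepth 1 -type d -exec sh -c '
    min=$1; shift
    for d in "$@"; do
        name=${d##*/}
        if [ "${#name}" -gt "$min" ]; then printf "%s\n" "$d"; fi
    done
' sh "$minlen" {} +

Note that with -maxdepth 1 both variants also consider ./directory itself; with GNU find you can add -mindepth 1 to limit the output to the subdirectories only.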
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/702938", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/525906/" ] }
702,993
In a bash script, I call another programme, but I want to configure that programme with a command line option. The following works: AREA_ARG=""if __SOME_SETTING__ ; then AREA_ARG=" --area us,ca "fiprocess_data -i /some/path $AREA_ARG i.e. bash either executes process_data -i /some/path , or process_data -i /some/path --area us,ca . However shellcheck complains! $ shellcheck test.sh In test.sh line 7:process_data -i /some/path $AREA_ARG ^-------^ SC2086: Double quote to prevent globbing and word splitting.Did you mean: process_data -i /some/path "$AREA_ARG"For more information: https://www.shellcheck.net/wiki/SC2086 -- Double quote to prevent globbing ... I understand the principle, but I want / need the variable to split on the space so that process_data gets 2 arguments. What's the Proper Way™ to do this in bash ?
Use arrays. Here is your code re-written using arrays. It is also changed to be a working example ( ls in place of your command) and to use lower case for variable names (convention reserves capitalised names for the system). #!/bin/basharea_args=()if true ; then area_args=(-l -a)fils "${area_args[@]}"
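Mapped back onto the command from the question, that might look like the following sketch ( += appends to the array, and an empty array simply expands to nothing):

area_args=()
if __SOME_SETTING__ ; then
    area_args+=(--area us,ca)
fi
process_data -i /some/path "${area_args[@]}"

With "${area_args[@]}" the shell passes --area and us,ca as two separate arguments when the array is populated, and no extra argument at all when it is empty, which is exactly what the unquoted variable was being used for, without the shellcheck warning.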
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/702993", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/4691/" ] }
703,098
I want to list the username, id and group on /etc/passwd using the following format: username uid gid I have used the following: cut -d: -f1,3,4 /etc/passwd But it returns username:uid:gid . How can I format the command to remove the : or list without it, like this: root 0 0daemon 1 1bin 2 2...
Depending on what output you want, the simplest way is to translate the delimiter to whatever you want with tr , sed or awk . For example cut -d: -f1,3,4 /etc/passwd | tr ':' '\t'cut -d: -f1,3,4 /etc/passwd | sed 's/:/ --- /g'awk -F: '{ print $1, $3, $4}' /etc/passwd If you want to format the output as a table, then use column cut -d: -f1,3,4 /etc/passwd | column -t -s ':'
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/703098", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/525906/" ] }
703,141
I just noticed, that POSIX date doesn't seem to have the %s or %N format items. So I can't use those. What's an alternative, yet POSIX-compliant way to get the epoch timestamp in my shell script?
For the epoch time as an integer number of seconds, that would be: awk 'BEGIN{srand(); print srand()}' or: awk 'BEGIN{print srand(srand())}' As in POSIX awk , srand() without argument uses the current time to seed the pseudo-random generator. It also returns the previous seed, so the second srand() above returns the epoch time that was used for the previous one¹. You can get the fractional part with something like: echo|LC_ALL=C TZ=UTC0 diff -u /dev/null - |sed -n '2s/.*\(\.[0-9]*\).*/\1/p' POSIX does specify the output format for diff -u , and that the current time be used when the file is - . But several thousands if not millions of nanoseconds will likely have elapsed since you called awk earlier before you get the output, it may not even be the same second. You may however be able to check if it's the case by comparing <epochtime> % 60 with the second part of the (UTC) timestamp in the diff header if you were so inclined. ¹ About that awk solution, note that POSIX used not to say it in so many words. I did raise an objection to POSIX some time ago about the unclear wording, also stating that it was unreasonable in this day and age to force implementations to use that poor a source of entropy. Instead the resolution was to explicitly require srand() use the epoch time.
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/703141", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/296048/" ] }
703,147
I'm relatively new to shell scripting, so I apologize if this seems like a simple question to ask. I have a linux VM ( debian, version 11 (bullseye) ) that I can ssh into ( ssh <ip> ), installed a few dependencies (homebrew, pyenv, etc) and was able to successfully use them. However, when I try running commands from outside of the VM ( ssh <user>@<ip> pyenv versions ) in either scripts or using my Mac terminal, I get bash: line 1: pyenv: command not found related errors. I think this may have something to do with whats explained here , but I'm not entirely sure how to circumvent that problem. Adding additional details asked by @terdon in comment below: $ which pyenv/home/linuxbrew/.linuxbrew/bin/pyenv$ grep /home/linuxbrew/.linuxbrew/bin/ ~/.bashrc ~/.bash_profile ~/.profile /etc/bash.bashrc /etc/profilegrep: /home/f0p021s/.bash_profile: No such file or directory/home/f0p021s/.profile:eval "$(/home/linuxbrew/.linuxbrew/bin/brew shellenv)" I've also realized that if I look at my path from within my VM it looks like this: $ echo $PATH/home/linuxbrew/.linuxbrew/bin:/home/linuxbrew/.linuxbrew/sbin:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games And when I try to run a similar command from my local machine it looks different: $ ssh <user>@<ip> 'echo $PATH'/usr/local/bin:/usr/bin:/bin:/usr/games
When you ssh into a machine, you start an interactive login shell. When you run ssh ip command , you are starting a non-interactive, non-login shell: $ ssh localhost 'echo $- $0; shopt login_shell'hBc bashlogin_shell off$ ssh localhost[terdon@tpad ~]$ echo $- $0; shopt login_shellhimBHs -bashlogin_shell on See this answer for details on what this is actually showing you. The files read on startup by each type of shell are different. From man bash (emphasis mine): When bash is invoked as an interactive login shell , or as a non-interactive shell with the --login option, it first reads and executes commands from the file /etc/profile , if that file exists. After readingthat file, it looks for ~/.bash_profile, ~/.bash_login, and ~/.profile,in that order , and reads and executes commands from the first one thatexists and is readable. The --noprofile option may be used when theshell is started to inhibit this behavior. When bash is started non-interactively, to run a shell script, for ex‐ample, it looks for the variable BASH_ENV in the environment, expandsits value if it appears there, and uses the expanded value as the nameof a file to read and execute. Bash behaves as if the following com‐mand were executed: if [ -n "$BASH_ENV" ]; then . "$BASH_ENV"; fi but the value of the PATH variable is not used to search for the file‐name. Now, you have shown us that the pyenv command is added to your $PATH in /home/f0p021s/.profile . As you can see above, that file ( ~/.profile ) is read by interactive login shells but not by non-interactive or non-login shells as those only read whatever is pointed to by $BASH_ENV and that is empty by default. So, your options are: Just use the full path to the command: ssh ip /home/linuxbrew/.linuxbrew/bin/pyenv Source ~/.profile : ssh ip '. ~/.profile; pyenv'
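A third variant that sometimes helps is to explicitly ask for a login shell on the remote side. A sketch (it assumes bash is the remote user's login shell and that ~/.profile is where the PATH additions live, as in your case):

ssh <user>@<ip> 'bash -lc "pyenv versions"'

The -l makes that remote bash behave as a login shell, so it reads ~/.profile (you have no ~/.bash_profile) and picks up the Homebrew entries in $PATH before running the command.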
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/703147", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/526498/" ] }
703,221
Edit to clarify my question: POSIX says: If a <newline> follows the (unquoted) <backslash>, the shell shall interpret this as line continuation. The <backslash> and <newline> shall be removed before splitting the input into tokens. However, dash or other implementations, tokenize input at first. As a result, \<newline> is not recognized but # this is a comment \ is discarded.Is this behavior POSIX compliant? Again, POSIX says that line continuation shall be removed before tokenizing . Isn't the following procedure really POSIX compliant? read the whole input: "echo hello ... \<newline> ... bye" search for unquoted \<newline> and remove them: "echo hello ... bye" tokenize: "echo"(discard ' ')"hello"(discard ' ')(discard "# ... bye") On Ubuntu with dash-0.5.10.2-6 sh (dash) we get the following $ cat /var/tmp/test.shecho hello # this is a comment \echo bye$ sh /var/tmp/test.shhellobye This is because everything after # is treated as a comment, and everything up to \ is discarded, so line continuation of \<newline> does not work. However, POSIX "Escape Character (Backslash)" section states The <backslash> and <newline> shall be removed before splitting the input into tokens. And since comment processing of # is done in tokenization , echo hello # this is a comment \echo bye should be equivalent to echo hello # this is a comment echo bye Does this mean that sh is not POSIX compliant?Or is there some rationale for comment taking precedence over line continuation in this situation?
The shell's input is scanned character by character to divide it into tokens, as described in the section on Token Recognition . [...] the shell shall break its input into tokens by applying the first applicable rule below to the next character in its input. Quoting is handled as part of the token recognition process, but given the example in the question, the shell will encounter the # before the quoted newline. When the shell arrives at an unquoted comment character during its scanning of the input line, the rest of the line, including the final backslash, is discarded as a comment: If the current character is a # , it and all subsequent characters up to, but excluding, the next <newline> shall be discarded as a comment. The <newline> that ends the line is not considered part of the comment. The part of the standard that you quote, the Quoting section, says that when encountering a newline preceded by a backslash... A <backslash> that is not quoted shall preserve the literal value of the following character, with the exception of a <newline> . If a <newline> follows the <backslash> , the shell shall interpret this as line continuation. The <backslash> and <newline> shall be removed before splitting the input into tokens. [...] Note that this does not come into effect until the scanner actually encounters an unquoted backslash, which is handled by the token recognition process: If the current character is <backslash> , single-quote, or double-quote and it is not quoted, it shall affect quoting for subsequent characters up to the end of the quoted text. The rules for quoting are as described in "Quoting". As already mentioned in this answer, the scanner will encounter the comment character first, before seeing the backslash, which will trigger the token recognition rule that handles the rest of the line, including any quoting characters, as a comment. Therefore, the quoting of the newline at the end of the line will never come into effect.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/703221", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/466308/" ] }
703,272
I have no idea what created this file - I guess a terrible shell script. It is called '.env'$'\r' I have tried various versions of rm , and the technique of opening the direcory with vim ./ , selecting the file, and Shift-D to delete. This didn't work, failing with a **warning** (netrw) delete(/root/squawker/.env) failed!NetrwMessage [RO]"NetrwMessage" --No lines in buffer-- How can I delete this pesky file? This is on Ubuntu 20.04
On recent-ish Linux systems (with GNU tools as in most desktop distributions), ls prints names with weird characters using the shell's quoting syntax. If that '.env'$'\r' is what ls gives, the name of the file is .env<CR> , where <CR> is the carriage-return character. You could get that if you had a shell script with Windows line-endings that ran e.g. whatever > .env . The good thing here is that the output of ls there is directly usable as input to the shell. Well, to Bash, ksh, and zsh at least, not a standard POSIX sh, like Debian/Ubuntu's /bin/sh , Dash. So try with just rm -f '.env'$'\r' Of course rm -f .env? should also work to remove anything named .env plus any one character. Now, of course it's also possible that the filename is literally that, what with the single quotes and backslashes. But that's more difficult to achieve by accident. Even so, rm -f *.env* should work to delete anything with .env in the name.
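If you want to confirm what the name really is before deleting anything, a small inspection sketch (assuming the file is in the current directory) is:

printf '%s\n' .env* | od -c

A trailing \r in the od output confirms the carriage-return theory; GNU ls -b would similarly render the name as .env\r.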
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/703272", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/526484/" ] }
703,422
I'm running macOS 12.3.1 I added a couple of lines to my .zshrc, viz. export GREP_OPTIONS='--color=always'export GREP_COLOR='1;35;40' After this, when I pipe grep output to tr, it returns the same number of lines but all of the lines are blank for example: grep ^.....$ /usr/share/dict/words | tr "[:lower:]" "[:upper:]" Returns 10230 blank lines. Is this expected?
To output matches in colour, grep writes colouring escape sequences before and after the match. Those are instructions to the terminals to change their background and/or foreground colour. It's important to realise that it is in the output along with the text. You don't see it because your terminal doesn't display them as graphical symbols but instead understand it as special instructions. The escape sequences start with an ESC character (0x1b byte (033 in octal) in ASCII aka \e or ^[ ) and are followed by a few characters which themselves don't have to be control characters. You can reveal those characters by piping the output to things like: $ echo + | grep --color=always . | sed -n l\033[01;31m\033[K+\033[m\033[K$ $ echo + | grep --color=always . | od -An -vtc -tx1 -to1 033 [ 0 1 ; 3 1 m 033 [ K + 033 [ m 033 1b 5b 30 31 3b 33 31 6d 1b 5b 4b 2b 1b 5b 6d 1b 033 133 060 061 073 063 061 155 033 133 113 053 033 133 155 033 [ K \n 5b 4b 0a 133 113 012 (here also including hexadecimal and octal values of the individual bytes) Or (though non-standard and ambiguous): $ echo + | grep --color=always . | cat -A^[[01;31m^[[K+^[[m^[[K$ You can see that in the grep output, there's a \e[01;31\e[K before the match and \e[m\e[K after the match. What escape sequences a given terminal recognises and how varies with the terminal. For xterm for instance, see the specification there . Those above these days are rather ubiquitous. For the one that starts with \e[ and ends in m , the terminal understands each of the ; -separated numbers as different rendering attributes to apply to text that is going to be written from now on. For instance 1 is for bold, 31 sets the foreground colour to red. \e[K is the escape sequence that tells the terminal to clear the screen from the cursor to the end of the line. So the terminal actually sees: <bold-fg_red><clear-to-eol>+<reset-all-attributes><clear-to-eol> But all tr sees is those ESC, [ , ... m along with the other ones that it's been asked to transliterate. In particular here, it will transliterate m to M , and the escape sequence that was changing colour attributes will turn into something else together. To find out about escape sequences and what they do, other than looking at the terminal documentation (such as https://www.invisible-island.net/xterm/ctlseqs/ctlseqs.html mentioned above for xterm), which is sometimes hard to find or inexistent, you can also look in the terminfo database which records the escape sequences recognised by a number of terminals for a few common actions. You can query that database by hand for your terminal (identified by the $TERM environment variable) with infocmp : $ infocmp -xL1 | grep M, delete_line=\E[M, key_enter=\EOM, key_mouse=\E[M, parm_delete_line=\E[%p1%dM, scroll_reverse=\EM, And for the details of what those actions ( capabilities ) are, you can look at the terminfo(5) man page ( man 5 terminfo ). \e[M ( delete_line ) deletes one line, \e[<decimal>M ( parm_delete_line ) deletes <decimal> lines. So your colouring sequences have turned into line deleting sequences once transliterated to upper case. You generally don't want to post-process coloured output as those are only intended for terminals. That's why most commands that support colouring disable it when their output doesn't go to a terminal. For GNU grep , as you already found out, you need --color=auto (or grep --color ) to get that behaviour. 
Now, if you do still want to see colours, you need to move the colouring to the last command in the pipeline, the one that has its output go to the terminal: <file tr '[:lower:]' '[:upper:]' | grep -xE --colour=auto '.{5}' Here using --colour=auto so that if ever the script that contains that command has its output redirected / post-processed, the colouring is disabled. Here, since the regexp matches the whole line ( -x option above, which avoids having to use ^ and $ like in your approach), you might as well switch the foreground colour to red before and clear attributes after: if [ -t 1 ]; then tput setaf 1 # set ANSI foreground colour tput boldfigrep...if [ -t 1 ]; then tput sgr0 # turn off all attributesfi Here using tput to query terminfo for the right sequence for your terminal, though since most terminals do it the same, do like grep and hardcode the sequences: [ -t 1 ] && printf '\33[1;31m'grep...[ -t 1 ] && printf '\e[m' Using [ -t 1 ] to check that stdout (file descriptor 1) is a terminal.
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/703422", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/516527/" ] }
703,499
All of my operating systems are mounted on one partition, it's /dev/vda1 . Is there problem if I backup the entire Linux directory with this command root@myOS:/# tar -cJpf /mnt/bax/myOS.tar.xz . Basically I backup the entire system and save it on my second partition. Where /mnt/bax has been mounted by /dev/vda2 (second partition). I'm afraid that a recursive backup will happen, I mean since /dev/vda2 is mounted, does that mean a recursive backup will happen?
You probably don't want to back up mounted and pseudo filesystems; at least not /proc , /sys , /tmp , etc. So use --one-file-system . You can add additional mounted filesystems if you want. root@myOS:/# tar --one-file-system --acls --xattrs --numeric-owner -JpScf /mnt/bax/myOS.tar.xz . /media/additional-mounted-drive Or you can use --exclude to ignore directories: root@myOS:/# tar --acls --xattrs --numeric-owner --exclude=mnt --exclude=proc --exclude=sys --exclude=tmp -JpScf /mnt/bax/myOS.tar.xz .
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/703499", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/363235/" ] }
703,568
I've run into a couple of similar situations where I can break a single-core bound task up into multiple parts and run each part as separate job in bash to parallelize it, but I struggle with collating the returned data back to a single data stream. My naive method so far has to create a temporary folder, track the PID's, have each thread write to a file with its pid, then once all jobs complete read all pids and merge them into a single file in order of PID's spawned. Is there a better way to handle these kind of multiple-in-one-out situations using bash/shell tools?
My naive method so far has to create a temporary folder, track the PID's, have each thread write to a file with its pid, then once all jobs complete read all pids and merge them into a single file in order of PID's spawned. This is almost exactly what GNU Parallel does. parallel do_stuff ::: job1 job2 job3 ... jobn > output There are some added benefits: The temporary files are automatically removed, so there is no cleanup - even if you kill GNU Parallel. You only need temporary space for the currently running jobs: The temporary space for completed jobs is freed when the job is done. If you want output in the same order as the input use --keep-order . If you want output mixed line-by-line from the different jobs, use --line-buffer . GNU Parallel has quite a few features for splitting up a task into smaller jobs. Maybe you can even use one of those to generate the smaller jobs?
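As a concrete sketch of that last point: if the big task is really "run this filter over one large file", GNU Parallel can do the splitting itself and still give you a single, ordered output stream (here do_stuff stands in for whatever reads stdin and writes stdout, and the block size is only an example):

parallel --keep-order --pipepart --block 50M -a bigfile do_stuff > output

--pipepart chops bigfile into chunks on line boundaries, runs one do_stuff per chunk in parallel, and --keep-order stitches the outputs back together in input order.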
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/703568", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/509524/" ] }
703,751
I have been working on this for a while, visiting dozens of sites and trying all kinds of combinations; however, I cannot get the script to run as intended. Even when they work in https://regex101.com/ , I still cannot get them to work in bash. I am trying to write a bash script which will validate that an input ("$password") is at least eight characters long and contains at least one number and at least one of these special characters: #?!@$ %^&*- GNU bash, version 5.1.16(1)-release-(x86_64-pc-linux-gnu) Any help would be greatly appreciated! read -p "Please enter a password to test: " passwordecho "You entered '$password'"# I have tried all of the following (plus a few others) and cannot get it to work#regex='^[a-zA-Z0-9#@$?]{8,}$'#regex='^[a-zA-Z0-9@#$%&*+-=]{8,}$'#regex='^(?=.*?[A-Z])(?=.*?[a-z])(?=.*?[0-9])(?=.*?[@#$%&*+-=]).{8,}$'#regex='^(?=.*?[a-zA-Z0-9])(?=.*?[#?!@$ %^&*-]).{8,}$'#regex='^(?=.*[A-Za-z])(?=.*[0-9])(?=.*[#?!@$ %^&*-]).{8,}$'if [[ $password =~ $regex ]]; then echo "This works"else echo "Nope"fi
The extended regular expression syntax supported by bash lack the ability to construct single expressions that perform boolean AND tests between several subexpressions. Therefore, it would be easier for you to perform one test per condition. You seem to have three conditions that your string needs to fulfil: At least eight characters. At least one digit (which is what I assume you mean by "number"). At least one character from the set #?!@$ %^&*- . This implies three tests: if [ "${#password}" -ge 8 ] && [[ $password == *[[:digit:]]* ]] && [[ $password == *[#?!@$\ %^\&*-]* ]]then echo 'good'else echo 'not good'fi Some special characters have to be escaped in the last test. We can make it look prettier by using variables: has_digit='*[[:digit:]]*'has_special='*[#?!@$ %^&*-]*' # or possibly '*[[:punct:]]*'if [ "${#password}" -ge 8 ] && [[ $password == $has_digit ]] && [[ $password == $has_special ]]then echo 'good'else echo 'not good'fi Note that I'm not using regular expressions here but ordinary shell patterns. The set matched by [[:punct:]] is the slightly larger set of "punctuation characters" (which notably does not contain the space character, but you could use [[:punct:] ] or [[:punct:][:blank:]] or [[:punct:][:space:]] ): !"#$%&'()*+,-./:;<=>?@[\]^_`{|}~ If you really need to use only regular expressions, then do something like has_8='.{8}'has_digit='[[:digit:]]'has_special='[#?!@$ %^&*-]' # or possibly '[[:punct:]]'if [[ $password =~ $has_8 ]] && [[ $password =~ $has_digit ]] && [[ $password =~ $has_special ]]then echo 'good'else echo 'not good'fi Note the changed patterns. A general warning about the regex101.com site is that it does not claim to support POSIX regular expressions specifically, which most standard Unix text processing tools use, only various extended variants of these.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/703751", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/441869/" ] }
703,759
I have the below output on Unix: $ diff -y --suppress-common-lines backup.txt newfile.txt > `jjj' int, I need only jjj : int as output. I tried the below, but it didn't work as expected: $ diff -y --suppress-common-lines backup.txt newfile.txt | grep -i '>' |tr -d '[>]' |sed 's/,//g'
The extended regular expression syntax supported by bash lack the ability to construct single expressions that perform boolean AND tests between several subexpressions. Therefore, it would be easier for you to perform one test per condition. You seem to have three conditions that your string needs to fulfil: At least eight characters. At least one digit (which is what I assume you mean by "number"). At least one character from the set #?!@$ %^&*- . This implies three tests: if [ "${#password}" -ge 8 ] && [[ $password == *[[:digit:]]* ]] && [[ $password == *[#?!@$\ %^\&*-]* ]]then echo 'good'else echo 'not good'fi Some special characters have to be escaped in the last test. We can make it look prettier by using variables: has_digit='*[[:digit:]]*'has_special='*[#?!@$ %^&*-]*' # or possibly '*[[:punct:]]*'if [ "${#password}" -ge 8 ] && [[ $password == $has_digit ]] && [[ $password == $has_special ]]then echo 'good'else echo 'not good'fi Note that I'm not using regular expressions here but ordinary shell patterns. The set matched by [[:punct:]] is the slightly larger set of "punctuation characters" (which notably does not contain the space character, but you could use [[:punct:] ] or [[:punct:][:blank:]] or [[:punct:][:space:]] ): !"#$%&'()*+,-./:;<=>?@[\]^_`{|}~ If you really need to use only regular expressions, then do something like has_8='.{8}'has_digit='[[:digit:]]'has_special='[#?!@$ %^&*-]' # or possibly '[[:punct:]]'if [[ $password =~ $has_8 ]] && [[ $password =~ $has_digit ]] && [[ $password =~ $has_special ]]then echo 'good'else echo 'not good'fi Note the changed patterns. A general warning about the regex101.com site is that it does not claim to support POSIX regular expressions specifically, which most standard Unix text processing tools use, only various extended variants of these.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/703759", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/402943/" ] }
703,764
How do we fetch the email from the entry whose level key has the value 2 from the below JSON document? The expected output is [email protected] Any suggestion, please? { "escalation_policy": { "on_call": [ { "level": 2, "start": "2022-05-25T00:30:00Z", "end": "2022-05-25T09:30:00Z", "user": { "id": "ABOKC", "name": "Jaavena Dobey", "email": "[email protected]", "time_zone": "Tokyo" } }, { "level": 7, "start": "2022-05-23T01:00:00Z", "end": "2022-05-30T01:00:00Z", "user": { "id": "KLSPSP", "name": "kosls frank", "email": "[email protected]", "time_zone": "Tokyo" } }, { "level": 3, "start": "2022-05-23T01:00:00Z", "end": "2022-05-30T01:00:00Z", "user": { "id": "SKSPSLSL", "name": "Smitha Choudhary", "email": "[email protected]", "time_zone": "Tokyo" } } ] }}
The extended regular expression syntax supported by bash lack the ability to construct single expressions that perform boolean AND tests between several subexpressions. Therefore, it would be easier for you to perform one test per condition. You seem to have three conditions that your string needs to fulfil: At least eight characters. At least one digit (which is what I assume you mean by "number"). At least one character from the set #?!@$ %^&*- . This implies three tests: if [ "${#password}" -ge 8 ] && [[ $password == *[[:digit:]]* ]] && [[ $password == *[#?!@$\ %^\&*-]* ]]then echo 'good'else echo 'not good'fi Some special characters have to be escaped in the last test. We can make it look prettier by using variables: has_digit='*[[:digit:]]*'has_special='*[#?!@$ %^&*-]*' # or possibly '*[[:punct:]]*'if [ "${#password}" -ge 8 ] && [[ $password == $has_digit ]] && [[ $password == $has_special ]]then echo 'good'else echo 'not good'fi Note that I'm not using regular expressions here but ordinary shell patterns. The set matched by [[:punct:]] is the slightly larger set of "punctuation characters" (which notably does not contain the space character, but you could use [[:punct:] ] or [[:punct:][:blank:]] or [[:punct:][:space:]] ): !"#$%&'()*+,-./:;<=>?@[\]^_`{|}~ If you really need to use only regular expressions, then do something like has_8='.{8}'has_digit='[[:digit:]]'has_special='[#?!@$ %^&*-]' # or possibly '[[:punct:]]'if [[ $password =~ $has_8 ]] && [[ $password =~ $has_digit ]] && [[ $password =~ $has_special ]]then echo 'good'else echo 'not good'fi Note the changed patterns. A general warning about the regex101.com site is that it does not claim to support POSIX regular expressions specifically, which most standard Unix text processing tools use, only various extended variants of these.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/703764", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/515000/" ] }
703,918
I would like to print the exit code value of every launched command in zsh. E.g. $ cat file_not_presentcat: file_not_present: No such file or directoryExit code 1 I only know that I can print the exit code of the last command launched in the terminal with $ echo $? How can I do this?
Personally, I add %(?..%B(%?%)%b) to my $PROMPT , so that the exit status of the previous command be shown in bold in brackets if it was unsuccessful, which is less intrusive than printing the exit status of every command. To print the exit status of every failing command, you can do: TRAPERR() print -u2 Exit status: $? $ false; false; (exit 123)Exit status: 1Exit status: 1Exit status: 123 (123) $ (the (123)$ being from that $PROMPT mentioned above). For every command: TRAPDEBUG() print -u2 Exit status: $? But that will likely get very annoying. You can also print the exit status of the last command that was run with: print_last_status() print -u2 Exit status: $?precmd_functions+=(print_last_status) $ false; (exit 123); trueExit status: 0 Like for the $PROMPT approach, you only see the status of the last command run in the command line you sent at the previous prompt. In any case, printing it on stderr (as I do above with -u2 ) will likely cause fewer problems than doing it on stdout. Doing print Exit status: $? > /dev/tty may be even better.
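As a concrete illustration of the first suggestion, a complete PROMPT assignment might look like the line below; everything after the conditional ( %n@%m %~ %# ) is just a placeholder prompt of my own, not something taken from the answer:

    # ~/.zshrc — show the previous exit status in bold parentheses, only when it is non-zero
    PROMPT='%(?..%B(%?%)%b)%n@%m %~ %# '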
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/703918", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/527303/" ] }
703,930
When user chj executes chmod +x ichsize.out , it fails with Operation not permitted . ichsize.out has world-rw permission enabled, but apparently that is not enough. -rw-rw-rw- 1 nobody nogroup 27272 May 26 18:51 ichsize.out The owner of ichsize.out is nobody , because that file was created by the Samba server, serving a [projects] directory location like this: [projects] comment = VS2019 Linux-dev project output path = /home/chj2/projects browseable = yes read only = no guest ok = yes create mask = 0666 #(everybody: read+write) directory mask = 0777 #(everybody: list+modify+traverse) hide dot files = no The Samba client accessed this share with guest identity, and requested creating the ichsize.out file. The system is a Raspberry Pi based on Debian version 11 (bullseye). Ubuntu 20.04 exhibits the same behaviour. So I'd like to know: how can I write my smb.conf so that any user on the RasPi can do chmod +x on that file?
If you don't need to worry about the user that owns the files in this share you can use the force user configuration setting to allow Samba users to run commands such as chmod . This will mean that all files will appear to be owned by the account connecting to the share (i.e. if Alice and Bob both connect to the share, Alice will see that she owns all the files, and Bob will also see that he owns all the files), but as a result anyone can run chmod . Example, assuming that shareuser is a valid user account on your Samba server, that sharegroup contains the set of users permitted to access this Share, and that /home/_share exists and is owned by shareuser with permissions of at least 0700 : [Share] comment = Everyone owns these files path = /home/_share browseable = yes read only = no guest ok = no force user = shareuser valid users = "@sharegroup" ; vfs objects = acl_xattr recycle catia Or one that I haven't tested, which allows for guest users: [Share] comment = Everyone owns these files path = /home/_share browseable = yes read only = no guest ok = yes force user = shareuser In a domain joined context, it's even possible to have Samba act on files with true Windows ACLs and ownerships. For example, in the Windows world it's possible for a group to own files and have permissions to change access rights, etc. Seeing as you have guest ok = yes in your context I suspect this isn't relevant, but I'm mentioning it for potential future readers. On the other hand, if you really do mean, " how can I write my smb.conf so that any user on the RasPi can do chmod +x on that file " [my italics for emphasis] then you should know that the smb.conf configuration file is irrelevant for users on the Pi itself. Local UNIX/Linux controls apply to users on the Pi and thus you cannot run chmod on files that you don't own.
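For completeness, a rough sketch (untested; shareuser , sharegroup and alice are the hypothetical names from the first example, and smbd is the usual service name on Debian/Ubuntu) of the local setup that configuration assumes:

    # Create the account and group the share is forced to
    sudo groupadd sharegroup
    sudo useradd --system --no-create-home --shell /usr/sbin/nologin shareuser

    # Create the shared directory with the required ownership/permissions
    sudo mkdir -p /home/_share
    sudo chown shareuser:sharegroup /home/_share
    sudo chmod 0770 /home/_share        # 0700 is the stated minimum

    # Admit a user to the share and give them a Samba password
    sudo usermod -aG sharegroup alice
    sudo smbpasswd -a alice

    # Pick up the new [Share] section
    sudo systemctl restart smbd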
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/703930", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/16028/" ] }
703,950
I'm curious to understand how ~ is processed in a dependency by Apt or how it's defined for deb files (I'm not sure exactly where the syntax is defined). I ran into it with respect to dependencies of the Ubuntu (Focal) meta package python3 which has the dependency constraint: python3.8 >= 3.8.2-1~ (see here ). I believe package versions are defined so that they lexically sort in order, but when I checked Ubuntu Focal, there is no version of python3.8 that sorts lexically >= 3.8.2-1~ but there is a version 3.8.10-0ubuntu1~20.04.4 , implying that either Ubuntu Focal's dependencies are broken ( they are not ) or there is some special meaning to ~ in a dependency. The only documentation I can find on the topic is Debian's Declaring relationships between packages . But this doesn't mention a ~ or pattern matching. So what is the meaning of the trailing ~ in a .deb dependency?
Tildes in versions are described in the section of Policy on versions . Basically, tildes sort before anything. Thus >= 3.8.2-1~ is satisfied by any version starting with 3.8.2-1 , including versions with suffixes starting with a tilde themselves, such as 3.8.2-1~bpo (as would be used for backports), as long as there aren’t two tildes in a row. In fact such dependencies, with a tilde at the very end of the version (including the Debian revision), are typically used to facilitate backports. Since this is specifically what your question is about, and isn’t addressed by Debian Policy, it’s worth going into more detail. A typical version dependency would look like python3.8 >= 3.8.2-1 , requiring version 3.8.2-1 or later of the python3.8 package. This would be satisfiable by any later upstream version of Python 3.8, and any later Debian revision of the package (3.8.2-2, or 3.8.2-1ubuntu1, etc.). But it wouldn’t be satisfied by backports, which have versions of the form 3.8.2-1~bpo10+1; since the tilde sorts before the empty string, 3.8.2-1~bpo10+1 is considered to be less than 3.8.2-1. Backporting packages using versioned dependencies of this form thus requires changing their dependencies, which goes counter to the general rule that backports should be as close as possible to the original package. So adding a tilde as the last character of a version in a versioned dependency helps relax the dependency slightly: it allows versions with the same prefix, and a tilde-separated suffix, to satisfy the versioned dependency. This is the opposite of the documented use of tildes for pre-releases , which result in versions which can’t satisfy strictly-versioned dependencies on the final release. (Note that a tilde as the last character in a version number which includes a Debian revision, as given in the question, can’t allow upstream pre-releases — those would look like 3.8.2~pre1-1, which is less than 3.8.2-1~.) Versions aren’t sorted lexically, they’re sorted by component, numerically if possible, lexically otherwise. Thus 3.8.10-0ubuntu1~20.04.4 does satisfy this relationship: 10 is greater than 2, so the dependency is satisfied and the comparison stops there.
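If you want to check these orderings yourself, dpkg can be asked directly; its --compare-versions action exits with status 0 when the stated relation holds. For example:

    dpkg --compare-versions '3.8.2-1~bpo10+1' ge '3.8.2-1~' && echo satisfied            # satisfied
    dpkg --compare-versions '3.8.2-1~bpo10+1' ge '3.8.2-1'  && echo satisfied            # no output
    dpkg --compare-versions '3.8.10-0ubuntu1~20.04.4' ge '3.8.2-1~' && echo satisfied    # satisfied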
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/703950", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/20140/" ] }
704,024
I have a large text file generated from strace which contains, in brief: % time seconds usecs/call calls errors syscall------ ----------- ----------- --------- --------- ---------------- 42.93 3.095527 247 12512 unshare 19.64 1.416000 2975 476 access 13.65 0.984000 3046 323 lstat 12.09 0.871552 389 2239 330 futex 11.47 0.827229 77 10680 epoll_wait 0.08 0.005779 66 88 fadvise64 0.06 0.004253 4 1043 193 read 0.06 0.004000 3 1529 3 lstat 0.00 0.000344 0 2254 1761 stat[...] 0.00 0.000000 0 1 fallocate 0.00 0.000000 0 24 access 0.00 0.000000 0 1 open Excluding the first header line, I would like to get from each line the last field, corresponding to the syscall column. Those would include: unshare access lstat futex epoll_wait . .. ... This is what I tried: tail -n -13 seccomp | awk '{print $5}' , which manages to skip the first line, but lines that contain a value in the errors column are not handled correctly because my approach is not refined enough. How do I implement this?
Or like so: awk 'NR>2 {print $NF}' seccompunshareaccess... which, for lines beyond the second, prints the last field of the line. NF holds the number of fields, $NF "expands" to the last field's contents¹. ¹ or the whole record if it doesn't contain any field (is made of blanks only with the default value of FS , the field separator).
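If you additionally want each syscall name listed only once (the question doesn't ask for this, it's just a common follow-up), you can pipe the result through sort :

    awk 'NR>2 {print $NF}' seccomp | sort -u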
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/704024", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/467676/" ] }
704,050
A file data.csv has the following data 1,avocado,mexican green fruit1,kiwi,green fruit1,banana,yellow fruit1,mango,yellow fruit To organize the data into fruit categories, I've done awk -F ',' '{print >> ($3 ".csv")}' data.csv which creates 3 files, mexican green fruit.csv , green fruit.csv , yellow fruit.csv I want the spaces in the names of these files to be replaced with underscores _ So, the file names should be mexican_green_fruit.csv , green_fruit.csv , yellow_fruit.csv I need help with this awk one-liner to do this. I'm looking for an awk-only answer.
An awk-only answer (as the OP requested) for GNU awk would be: awk -F',' '{print > gensub(/[[:space:]]+/,"_","g",$3) ".csv"}' data.csv An awk-only answer for any POSIX awk if your input is small enough such that you can't exceed the "too many open files" threshold would be: awk -F',' '{out=$3 ".csv"; gsub(/[[:space:]]+/,"_",out); print > out}' data.csv An awk-only answer for any POSIX awk if you might exceed the "too many open files" threshold would be: awk -F',' '{out=$3 ".csv"; gsub(/[[:space:]]+/,"_",out); if (!seen[$3]++) printf "" > out; print >> out; close(out)}' data.csv but that last would be slow as it's closing and reopening the output file for every write and it assumes you can store every $3 value in memory. You can make it a bit more efficient by only closing the output file if/when it changes: awk -F',' '$3 != prev {close(out); out=$3 ".csv"; gsub(/[[:space:]]+/,"_",out); if (!seen[$3]++) printf "" > out; prev=$3} {print >> out}' data.csv If you're OK with an answer that's not awk-only though, then using the DSU (Decorate/Sort/Undecorate) idiom using any POSIX awk, sort, and cut, the following will work efficiently and robustly for any size of input file that sort can handle (and it's designed to use demand paging, etc. to handle extremely large files), and for any number of output files: $ cat tst.sh#!/usr/bin/env bashawk ' BEGIN{ FS=OFS="," } { print $3,NR,$0 }' "${@:-}" |sort -t',' -k1,1 -k2,2n |cut -d',' -f3- |awk ' BEGIN{ FS=OFS="," } $3 != prev { close(out) out = $3 ".csv" gsub(/[[:space:]]+/,"_",out) prev = $3 } { print > out }' $ ./tst.sh data.csv $ head *.csv==> data.csv <==1,avocado,mexican green fruit1,kiwi,green fruit1,banana,yellow fruit1,mango,yellow fruit==> green_fruit.csv <==1,kiwi,green fruit==> mexican_green_fruit.csv <==1,avocado,mexican green fruit==> yellow_fruit.csv <==1,banana,yellow fruit1,mango,yellow fruit For more info on DSU see https://stackoverflow.com/questions/71691113/how-to-sort-data-based-on-the-value-of-a-column-for-part-multiple-lines-of-a-f/71694367#71694367 .
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/704050", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/224025/" ] }