source_id (int64, 1 to 74.7M) | question (string lengths 0 to 40.2k) | response (string lengths 0 to 111k) | metadata (dict)
---|---|---|---|
649,830 | I am new to Linux and trying to learn how to launch and close processes automatically. Eventually I would like to run this/a similar process with cron. Here, just testing "checking in" to google. gcheck.sh looks like this: #!/bin/bash/export DISPLAY=:0firefox --new-window https://google.com I have added execute permissions to gcheck.sh with sudo chmod a+x .I know that $$ will give the PID of the script, but how can I get and kill the PID of just opened firefox window (in case I have other firefox windows open)? Thank you in advance! | Using any awk in any shell on every Unix box: $ cat tst.awkBEGIN { numTags = split("Name City Age Couse",nums2tags) for (tagNr=1; tagNr<=numTags; tagNr++) { tag = nums2tags[tagNr] tags2nums[tag] = tagNr wids[tagNr] = ( length(tag) > length("null") ? length(tag) : length("null") ) } OFS=" | "}(NR==1) || (prevTag=="Couse") { numRecs++}{ gsub(/^"|"$/,"") tag = val = $0 sub(/".*/,"",tag) sub(/[^"]+":"/,"",val) tagNr = tags2nums[tag] vals[numRecs,tagNr] = val wid = length(val) wids[tagNr] = ( wid > wids[tagNr] ? wid : wids[tagNr] ) prevTag = tag}END { # Uncomment these 3 lines if youd like a header line printed: # for (tagNr=1; tagNr<=numTags; tagNr++) { # printf "%-*s%s", wids[tagNr], nums2tags[tagNr], (tagNr<numTags ? OFS : ORS) # } for (recNr=1; recNr<=numRecs; recNr++) { for (tagNr=1; tagNr<=numTags; tagNr++) { val = ( (recNr,tagNr) in vals ? vals[recNr,tagNr] : "null" ) printf "%-*s%s", wids[tagNr], val, (tagNr<numTags ? OFS : ORS) } }} $ awk -f tst.awk fileasxadadad ,aaf dsf | Mum | 23 | BBSnull | Ors | 11 | MBadad sf | Kol | 21 | BBpqr | null | 21 | NN or if you didn't want to use a hard-coded list of tags (field/column names): $ cat tst.awkBEGIN { OFS=" | " }(NR==1) || (prevTag=="Couse") { numRecs++}{ gsub(/^"|"$/,"") tag = val = $0 sub(/".*/,"",tag) sub(/[^"]+":"/,"",val) if ( !(tag in tags2nums) ) { tagNr = ++numTags tags2nums[tag] = tagNr nums2tags[tagNr] = tag wids[tagNr] = ( length(tag) > length("null") ? length(tag) : length("null") ) } tagNr = tags2nums[tag] vals[numRecs,tagNr] = val wid = length(val) wids[tagNr] = ( wid > wids[tagNr] ? wid : wids[tagNr] ) prevTag = tag}END { for (tagNr=1; tagNr<=numTags; tagNr++) { printf "%-*s%s", wids[tagNr], nums2tags[tagNr], (tagNr<numTags ? OFS : ORS) } for (recNr=1; recNr<=numRecs; recNr++) { for (tagNr=1; tagNr<=numTags; tagNr++) { val = ( (recNr,tagNr) in vals ? vals[recNr,tagNr] : "null" ) printf "%-*s%s", wids[tagNr], val, (tagNr<numTags ? OFS : ORS) } }} $ awk -f tst.awk fileName | City | Age | Couseasxadadad ,aaf dsf | Mum | 23 | BBSnull | Ors | 11 | MBadad sf | Kol | 21 | BBpqr | null | 21 | NN Note that the order of the columns in the output for that second script will be the order those tags appear in the input which is why they need a header row to identify the values unless all tags are guaranteed to occur in the input in the order you want them output. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/649830",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/464480/"
]
} |
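The awk answer above never shows the input file it was written against. Judging from its substitutions (strip the outer quotes, take the text before the first remaining quote as the tag, the text after ":" as the value, and start a new record after each Couse line), it expects one quoted "tag":"value" pair per line. A reconstruction of such an input, with values taken from the answer's sample output but a layout that is only an assumption:

    $ cat file
    "Name":"asxadadad ,aaf dsf"
    "City":"Mum"
    "Age":"23"
    "Couse":"BBS"
    "City":"Ors"
    "Age":"11"
    "Couse":"MB"
    "Name":"adad sf"
    "City":"Kol"
    "Age":"21"
    "Couse":"BB"
    "Name":"pqr"
    "Age":"21"
    "Couse":"NN"
    $ awk -f tst.awk file

Run against either version of tst.awk from the answer, this should reproduce the four-row table shown there, with missing tags printed as null.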
649,996 | I want to back up my SSD using the Linux dd command, but I'm not sure how reliable that method will be. I think I read somewhere that dd does not check for or report errors, so obviously if true then it will be a deal breaker. This will be the command: sudo dd status=progress bs=512K if=/dev/nvme0n1 of=/media/d/ssd.img So please explain how reliable the dd command can be for said use case. And, are there any more reliable and/or easier alternative? | TLDR: Use ddrescue It supports resume/continue capabilities, has automatic logs, and tons of other options. More at the ddrescue home page . Example syntax: ddrescue /dev/sde yourimagename.image sde.log IF you want to (given your comment mentioning restoring) restore the image from the command above onto another drive of the same exact size: ddrescue -f yourimagehere.image /dev/sde restore.logfile Furthermore, it is faster than dd is -- at least it does look like it is when comparing speed of ddrescue and dd + pv . | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/649996",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/472052/"
]
} |
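ddrescue's map/log file is what gives it the resume and retry behaviour the answer mentions. A common two-pass pattern, sketched here with the question's device and image path plus a map file added (this is an addition, not part of the answer, and option spellings vary slightly between ddrescue releases, so check the local man page):

    # pass 1: copy everything that reads cleanly, skip problem areas quickly
    sudo ddrescue -n /dev/nvme0n1 /media/d/ssd.img /media/d/ssd.map
    # pass 2: go back and retry only the sectors recorded as bad, up to 3 times
    sudo ddrescue -r3 /dev/nvme0n1 /media/d/ssd.img /media/d/ssd.map

Interrupting either pass and re-running the same command continues where it left off, because the map file records what has already been copied.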
650,036 | I came across a cool way to go up more than 1 directory level by adding a function to the .bashrc file. It is done by just opening the .bashrc file and inserting the function: #user defined functionsfunction cd_up() { cd $(printf "%0.s../" $(seq 1 $1 ));}alias 'cd..'='cd_up' I want to understand how to read it. What language is it written in? | The contents of .bashrc are executed by Bash itself, in the same way it executes scripts, or indeed what ever an interactive shell is fed. It's a variant of the POSIX shell language, with a number of extensions (some borrowed from other shells, mainly Ksh, some created by Bash itself). Here, function cd_up() { ... ; } defines a function. The standard form for this would be just cd_up() { ... ; } , without the function keyword. The $(...) is a command substitution, it runs the command inside, and places its output as command line arguments. The printf "%0.s../" is the trick here. printf takes a format string, similar to the C function printf() , and a number of arguments to print using that format string. Importantly, it also repeats the format string as many times as necessary to use all arguments (which the C function doesn't and can't do). The specifier %0.s tells to print the argument as a string with zero width, so in effect, the arguments are not printed. But the constant part ../ is printed once for each argument, meaning that printf "%0.s../" a b would print it twice, etc. $(seq 1 $1) is another command substitution, and since seq prints a list of numbers, it is used to provide the required amount of arguments to printf . $1 is the first argument to the current function cd_up . So, e.g. cd_up 2 , would first run seq 1 2 , which prints 1 2 . This is passed as an argument to printf , so you get printf "%0.s../" 1 2 which prints ../../ . This is passed to cd , and cd ../../ is run, going up two levels in the directory tree. With set -x , you can see the commands the shell actually runs. Bash even indicates the nesting level with the + signs: /tmp/foo$ set -x/tmp/foo$ cd_up 2+ cd_up 2+++ seq 1 2++ printf %0.s../ 1 2+ cd ../..//$ Note that it doesn't really matter what the output of seq is, just that there's the correct number of words in it. Since the command substitutions are not quoted, the output goes through word splitting , meaning that it's split on whitespace to multiple arguments. printf "%0.s../" "$(seq 1 $1)" would always pass just one argument after the format string. (See the examples on the linked page.) Another way to write a similar function, with less tricks would be a simple loop. Almost in POSIX sh ( local isn't POSIX): cd_up() { local i=$1 while [ "$i" -gt 0 ]; do cd .. i=$((i - 1)) done} or in Bash/Zsh, using the arithmetic for loop: cd_up() { local i for (( i=$1; i > 0; i-- )); do cd .. done} Though the version with printf is better in that it runs cd only once, so cd - returns back to the original directory, not an in-between one. (Fixing that is left as an exercise.) For resource on the shell language, see e.g. http://mywiki.wooledge.org/BashGuide , and https://unix.stackexchange.com/tags/bash/info | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/650036",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/472100/"
]
} |
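The exercise the answer leaves at the end (keeping cd - useful while using a loop) can be done by building the ../../... string inside the loop and calling cd only once at the end. A sketch in Bash, not part of the original answer:

    cd_up() {
        local i path=
        for (( i=${1:-1}; i > 0; i-- )); do
            path+=../            # accumulate one ../ per level
        done
        cd "$path"               # single cd, so cd - still returns to the start
    }

Usage is the same as before: cd_up 2 goes up two levels.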
650,045 | I by accident find a good way to filter useful lines of find : just pipe it to less and when it shows (END) at the bottom, press up arrow key and only correct lines are left. No Permission denied , no symbolic link errors, nothing else. find / -name foo | less But, why? Cannot find an answer about why this behaviour. less magically filters out non-result lines? | find with no “action” applies its default -print action, which outputs the full file name to standard output. Errors go to standard error. The pipe operator only redirects standard output; so only “correct” file names are sent to less , everything else goes to standard error, which is your terminal. less also writes to your terminal, so you’ll initially see both file names and errors on your screen; but when you scroll up in less (or invoke any other action which causes it to update the screen), the errors will be overwritten by less ’s updates since less is only aware of the input it’s seen from find ’s standard output. To page through the complete output in less , you need to redirect standard error too: find / -name foo 2>&1 | less To completely ignore errors, redirect it to the bit bucket instead: find / -name foo 2>/dev/null | less | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/650045",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/189721/"
]
} |
650,072 | I want to move all files in my current directory to a directory named NewDir that end with *.bam , except for a specific file named special.file.bam . I have found this command that removes all files, but not sure how to move them, not deleting them: find . ! -name 'special.file.bam' -type f -exec rm -f {} + | If your shell is the bash shell, you can simply do as following by enabling the Extended Glob: shopt -s extglobmv -- !(special.file).bam temp/ to suppress the error:" bash: /usr/bin/mv: Argument list too long " when there are too many files matching the given pattern, do as following: for file in !(special.file).bam; do mv -- "$file" temp/done or with the find command instead and portability: find . -path './*' -prune -type f -name '*.bam' ! -name 'special.file.bam' \ -exec sh -c 'for file; do mv "$file" temp/; done' sh_mv {} + remove -path './*' -prune part to find files in sub-directories too. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/650072",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/441178/"
]
} |
650,076 | I would like to automate the creation of some files so I created a script.I want also to specify the creation date of those files. In a terminal for example, to create a file.txt with creation date of 12 of May 2012 I could touch as seen bellow, touch -d 20120512 file.txt Listing that file confirms the date, -rw-rw-r-- 1 lenovo lenovo 0 May 12 2012 file.txt If I apply the above in a script the files that I am creating all have the current time as creation time and not what I've specified.What am I doing wrong here? Script #!/bin/bash###################################Generate dat and snapshot files.###################################srv_dir="/home/lenovo/Source/bash/srv"main_dir="${srv_dir}/main"database_dir="${main_dir}/Database"dat_file="${main_dir}/remote.dat"if [[ -e ${main_dir} ]]; then echo "${main_dir} allready exists." echo "Aborting..." exit 0fi# Create directories.mkdir -p ${database_dir}# Create files.if [[ $1 == "--dat-newer" ]]; then # User wants dat file to be the latest modified file. # Create dat file with date as 'now'. touch ${dat_file} # Create snapshots with older dates. touch -d 20210511 "${database_dir}/snapshot001" touch -d 20210510 "${database_dir}/snapshot002" touch -d 20210512 "${database_dir}/snapshot004" touch -d 20210514 "${database_dir}/snapshot003"else # Create an old dat file. touch -d 20210512 "${dat_file}" # Create snapshots with older dates. touch -d 20210511 "${database_dir}/snapshot001" touch -d 20210510 "${database_dir}/snapshot002" touch -d 20210512 "${database_dir}/snapshot004" # Create snapshot003 with date as 'now'. touch "${database_dir}/snapshot003"fi# populate dat and snapshot files with data.echo "Data of ${dat_file}" > "${database_dir}/snapshot001"echo "Data of snapshot001" > "${database_dir}/snapshot001"echo "Data of snapshot002" > "${database_dir}/snapshot002"echo "Data of snapshot003" > "${database_dir}/snapshot003"echo "Data of snapshot004" > "${database_dir}/snapshot004" | The last part of your script, writing to each file, will result in the files’ last modification times all being updated to the current time. Changing times using touch should be the last thing you do to your files. Note that touch can’t change the creation time (on file systems that track it); see How to change files creation time? (touch changes only modified time) for details. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/650076",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/322933/"
]
} |
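A sketch of the reordering the answer calls for, shown on a fragment of the question's script (same variable names; the full script needs every snapshot handled the same way): write the data first, back-date afterwards.

    # populate the snapshot files first ...
    echo "Data of snapshot001" > "${database_dir}/snapshot001"
    echo "Data of snapshot002" > "${database_dir}/snapshot002"
    # ... then set the timestamps, so the writes above no longer reset them
    touch -d 20210511 "${database_dir}/snapshot001"
    touch -d 20210510 "${database_dir}/snapshot002"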
650,381 | I would like to concatenate multiple files following a specific order from an other file. I have multiple files called freq_<something> that I want to concatenate.The "something" are listed in another file called "list". So here is my list: $ cat list003137F002980F002993F I want to do: cat freq_003137F freq_002980F freq_002993F > freq_all But my list contains hundreds of values so I can't really do that! What is a way to automate it? I thought I could append a file with a while read line but it fails... Thanks! M | Use xargs xargs -i cat freq_'{}' < list > freq_all | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/650381",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/472483/"
]
} |
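The while read attempt the asker mentions can also work; it just has to read the list on the loop's standard input and quote the generated name. A sketch assuming the files really are named freq_<suffix> in the current directory:

    while IFS= read -r suffix; do
        cat "freq_$suffix"
    done < list > freq_all

Unlike the xargs -i form, this runs one cat per file, which is a little slower for hundreds of files but easy to reason about.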
650,414 | I have a file which contains IP addresses and I want to replace "." with "-" through sed. I am using below command: sed 's/./-/g' iplist.txt after running this command the output which I am getting is ------------- , it replaces whole IP address with "-". | sed uses regular expressions to find text that needs to be changed, and in regular expressions a . means to match any character. Your command is telling sed to change any character to a - . To fix this, you need to escape the . by putting a \ in front of it, to tell sed to only match on actual periods: sed 's/\./-/g' iplist.txt | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/650414",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/472535/"
]
} |
650,415 | I want to extract relevant data of a traffic junction and it's connections from a log file. Example log: SCN DD1251 At Glasgow Road - Kilbowie Road Modified By ________ Type CR Region WS Subregion UPSTREAM DOWNSTREAM FILTER NODE LINK NODE LINK LINK DD1271 C DD1271 R DD1351 D DD1351 B E Stage Suffix for Offset Optimizer 1 Double Cycle Initially ? N Force Single / Double Cycling status ? N Double Cycle Group 00 Double Cycle Ignore ? N Allow Link Max Saturation N Link Max Sat Override N Stages 1 2 3 4 Fixed N N N Y LRT stage N N N N Skip allowed N N N N Ped stage N N N N Ped invite N N N N Ghost stage N N N N Offset authority pointer 0 Split authority pointer 0 Offset opt emiss weight 000 I/green feedback inhibit N Bus Authority 00 ACIS node 00000 Bus Mode - Central extensions N Local extensions N Recalls N Stage skipping N Stage truncation N Cancels N Bus Priority Selection - Multiple buses N Queue Calculation N Hold recall if faulty N Disable recall N Disable long jtim N Real Cancel N Bus recall recovery type 0 Bus extension recovery type 0 Offset Bus authority pointer 0 Split Bus authority pointer 0 Bus skip recovery 0 Skip importance factor 0 Bus priority status OFF LRT sat 1 000 LRT sat 2 000 LRT sat 3 000 PEDESTRIAN FACILITIES Ped Node N Num Ped Wait Imp Factor 000 Ped Priority 0 Max Ped Priority Freq 00 Ped Lower Sat Threshold 000 Ped Upper Sat Threshold 000 Max Ped Wait Time 000 PEDESTRIAN VARIABLE INVITATION TO CROSS Allow Ped Invite N Ped Priority Auto 000 Ped Invite Upper Sat 000 Prio Level 1 2 3 4 Max Ped Priority Smoothed Time 000 000 000 000 Max Ped Priority Increase Length 00 00 00 00 CYCLE TIME FACILITIES Allow Node Independence N Operator Node Independence 0 Ghost Demand Stage N Num Ghost Assessment Cycles 15 Upper Trigger Ghost 04 Lower Trigger Ghost 0 SCN DD1271 At Glasgow Road - Hume Street Modified 13-OCT-15 15:06 By BDAVIDSON Type CR Region WS Subregion UPSTREAM DOWNSTREAM FILTER NODE LINK NODE LINK LINK DD1301 T DD1301 A DD1251 R DD1251 C Stage Suffix for Offset Optimizer 1 Double Cycle Initially ? N Force Single / Double Cycling status ? N Double Cycle Group 00 Double Cycle Ignore ? 
N Allow Link Max Saturation N Link Max Sat Override N Stages 1 2 3 Fixed N Y Y LRT stage N N N Skip allowed N N N Ped stage N N N Ped invite N N N Ghost stage N N N Offset authority pointer 0 Split authority pointer 0 Offset opt emiss weight 000 I/green feedback inhibit N Bus Authority 00 ACIS node 00000 Bus Mode - Central extensions N Local extensions N Recalls N Stage skipping N Stage truncation N Cancels N Bus Priority Selection - Multiple buses N Queue Calculation N Hold recall if faulty N Disable recall N Disable long jtim N Real Cancel N Bus recall recovery type 0 Bus extension recovery type 0 Offset Bus authority pointer 0 Split Bus authority pointer 0 Bus skip recovery 0 Skip importance factor 0 Bus priority status OFF LRT sat 1 000 LRT sat 2 000 LRT sat 3 000 PEDESTRIAN FACILITIES Ped Node N Num Ped Wait Imp Factor 000 Ped Priority 0 Max Ped Priority Freq 00 Ped Lower Sat Threshold 000 Ped Upper Sat Threshold 000 Max Ped Wait Time 000 PEDESTRIAN VARIABLE INVITATION TO CROSS Allow Ped Invite N Ped Priority Auto 000 Ped Invite Upper Sat 000 Prio Level 1 2 3 4 Max Ped Priority Smoothed Time 000 000 000 000 Max Ped Priority Increase Length 00 00 00 00 CYCLE TIME FACILITIES Allow Node Independence N Operator Node Independence 0 Ghost Demand Stage N Num Ghost Assessment Cycles 15 Upper Trigger Ghost 04 Lower Trigger Ghost 0 I can already extract the first relevant line using the following Bash script: grep SCN* LOG.TXT > JUNCTIONS.txt Which creates a list of all the junctions like so: SCN DD1251 At Glasgow Road - Kilbowie Road SCN DD1271 At Glasgow Road - Hume Street SCN DD1301 At Glasgow Road - Argyll Road - Cart Street SCN DD1351 At Kilbowie Road - Chalmers Street... However, I want to extract the lines immediately after each link title, down to the final link of the node just before a large amount of whitespace and without capturing anything from Stage Suffix onwards until the next link. Is there a way to modify my BASH script to include an additional number of lines after each matching instance it finds? | sed uses regular expressions to find text that needs to be changed, and in regular expressions a . means to match any character. Your command is telling sed to change any character to a - . To fix this, you need to escape the . by putting a \ in front of it, to tell sed to only match on actual periods: sed 's/\./-/g' iplist.txt | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/650415",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/472420/"
]
} |
650,422 | I need to turn this hello123456789 into 567 using sed to remove the first nine characters and the last two. I have looked for a command just cant find one i have tried a few languages to execute a function to do it but had no luck. | sed 's/^.........\(.*\)..$/\1/' file or sed 's/^.\{9\}\(.*\)..$/\1/' file or, using the non-standard but commonly implemented -E option to enable the use of extended regular expressions, sed -E 's/^.{9}(.*)..$/\1/' file All of these first matches nine characters at the start of the line, then any number of characters in the middle of the line (these are captured), and finally two characters at the end. Anchoring with ^ and $ is actually not needed here as the middle section of the expression forces the first and last bit of the expression to match at the start and end of the line anyway. The whole line is replaced by the captured characters in the middle, however many they may be. Another approach with sed : sed -e 's/.\{9\}//' -e 's/..$//' file This first expression removes the first nine characters by means of substituting them with nothing, and the second expression removes the last two characters in a similar manner. The second expression needs the anchoring at the end of the line with $ , but the first expression does not need to be anchored as it matches from the start of the line by default. If the string is in a shell variable var , then using these two standard variable substitutions would first remove the first nine, then the last two characters of the string ( ? matches any character when used in a shell globbing pattern): var=${var#?????????}var=${var%??} This mimics the last variation with sed above in that it matches and removes certain number of characters at the start and end of the string without bothering about the middle section of the string at all. Testing this: $ var=hello123456789$ var=${var#?????????}$ var=${var%??}$ printf '%s\n' "$var"567 | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/650422",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/472549/"
]
} |
650,424 | How do I print the remainder of a string (not just the columns without the delimiter) after the nth delimiter? I have a text file with a bunch of registry keys, similar to: hku\test\user\software\microsoft\windows\currentversion\runonce\delete cached update binary I'm wanting to print everything after the 3rd \ character. So I am looking for the output to be software\microsoft\windows\currentversion\runonce\delete cached update binary I know how to print out specific columns with awk , but is there any simple way using bash to specify a delimiter to split the string at, instead of using the delimiter to print columns? | Pipe through cut -d \\ -f 4- . echo 'hku\test\user\software\microsoft\windows\currentversion\runonce\delete cached update binary' | cut -d \\ -f 4- Yields: software\microsoft\windows\currentversion\runonce\delete cached update binary Note the double \\ , since a single \ is an escape character. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/650424",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/357569/"
]
} |
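Since the question asks whether plain bash can do the split, here is a parameter-expansion sketch (an addition, not from the answer) that drops the first three backslash-delimited fields of the question's sample string:

    key='hku\test\user\software\microsoft\windows\currentversion\runonce\delete cached update binary'
    rest=$key
    for _ in 1 2 3; do
        rest=${rest#*\\}        # remove everything up to and including the next backslash
    done
    printf '%s\n' "$rest"
    # software\microsoft\windows\currentversion\runonce\delete cached update binary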
650,461 | Currently I use rsnapshot to backup data from one encrypted ext4 drive to another. My system opens a LUKS container on each drive and runs rsnapshot according to an hourly schedule. I'm intrigued by btrfs's built in snapshot feature, and I'm curious if it can be used in place of my current setup (assuming of course I reformat the drives). Are there any obvious issues I'm failing to realize? Can my current setup be improved by using btrfs, is it faster for example? | btrfs is a copy-on-write filesystem with many features (like error detection and correction, transparent compression, snapshots, sub-volumes, etc) that make it slower than a traditional filesystem. However, btrfs snapshots are very light-weight and take almost no time to make. And using btrfs send ... | btrfs receive (or btrfs send | ssh | btrfs receive ) is much faster than using rsync or rsnapshot or any other method that needs to compare the files on sender and receiver. Those file comparisons aren't necessary when sending a snapshot because the exact differences between one snapshot and another are already known (they're inherent to the snapshot) so the changed blocks can just be sent to the receiver as a continuous binary stream - no comparisons of file timestamps or contents is needed. In short, the overall filesystem performance will be slower, but backups will be much faster. I use zfs instead of btrfs , which has a very similar snapshot & send/receive mechanism. When I switched from rsync to zfs send for my backups, it reduced the run-time for an incremental backup down from several hours to several minutes. I backup all machines on my local network to a "backup" pool on my main file-server. It had gotten to the point that the rsync backups weren't completing before cron triggered the next day's backup. With multiple simultaneous backups running at all times, the performance of the server was abysmal, and it required constant manual intervention (mostly killing rsync processes) to bring it back to a usable state. The switch to zfs send was the difference between having a usable file-server and an unusable one. Now I rarely ever need to even think about it, it just works. At most every year or two I clear out ancient snapshots on the backup pool (I aggressively auto-expire snapshots on the hosts being backed up, but much less aggressively on the backup pool), which can take a long time if I've let the backup pool retain a million or two snapshots. As for whether it can replace your current setup or not, I recommend creating a btrfs testing VM with two btrfs pools and experiment with making snapshots and sending them from one pool to the other. Or multiple testing VMs so you can experiment with btrfs send ing a snapshot stream over ssh . I would not recommend switching until you are very familiar and comfortable with how btrfs snapshots and btrfs send/receive work. In fact, make some ZFS VMs too so you can get a feel for the differences. VMs are great for trying out new stuff before you decide if you want to use it. Reading docs is essential, but there's nothing like getting your hands dirty if you really want to understand how something works. BTW, transparent compression can offset much of the performance penalty between using a fs like btrfs or ZFS and a more traditional fs like ext4 or xfs, depending on your workload. If fs performance is the only or most important thing for you, then use xfs - it's the clear winner, by far. 
If you need/want snapshots, snapshot send/recv, compression, ECC, sub-volumes etc then use either ZFS or btrfs. IMO, the only real reason to use btrfs instead of ZFS is that btrfs is in the mainline linux kernel while ZFS probably never will be due to the license conflict between CDDL and GPL. For ZFS, you have to compile and install the kernel module...which is trivially easy with a zfs-dkms module. If you're using Ubuntu then you can use ZFS out of the box, they don't think the license issue is that big a deal - IMO they're wrong about that, but it's unlikely they'll be sued by Oracle. Also, one thing that may be of interest to you since you use LUKS is that ZFS can optionally encrypt any dataset ("sub-volume" in btrfs terminology. Kind of like a combined LV + filesystem in LVM terminology). I've never used either LUKS or ZFS's encryption, so I can't tell you how they compare. I don't really see much reason to use ext4 these days, except that it's pretty much the default for most distros. There's no advantage, no compelling reason to use it. Finally, don't be tempted by de-duplication with ZFS. It sounds like a great idea in theory but what it means in practice is that the de-dupe table needs to be held in RAM so you're reducing your need for more very cheap drives and replacing it with a need for more very expensive RAM. This is, with a few exceptional use-cases (like running hundreds or thousands of the same VM image), a poor bargain - that RAM would be better used for running programs or caching disks. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/650461",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/420883/"
]
} |
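For reference, the snapshot send/receive workflow the answer describes looks roughly like this on btrfs (a sketch only; /data and /backup stand for two separate btrfs filesystems, and snapshots must be created read-only with -r to be sendable):

    # initial full copy
    btrfs subvolume snapshot -r /data /data/.snap-2021-05-22
    btrfs send /data/.snap-2021-05-22 | btrfs receive /backup

    # next run: new snapshot, send only the difference against the previous one
    btrfs subvolume snapshot -r /data /data/.snap-2021-05-23
    btrfs send -p /data/.snap-2021-05-22 /data/.snap-2021-05-23 | btrfs receive /backup

The -p (parent) snapshot must still exist on both sides for the incremental send to work.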
650,610 | I have a file that contains the some python code. There is a line that contains the following(including the quotes) 'hello "value"' I want to search 'hello "value" in the file. Notice the unclosed quote. I'm using ripgrep with the folowing command: rg -F 'hello "value" The above command is not working for the input 'hello "value" in bash/ zsh. All i want is the literal match. I have used the flag F but because of unclosed quotes in the input string it is not working at all. I also tried enclosing the input inside single/ double quotes as so: rg -F "'hello "value"" or rg -F ''hello "value"' The above is not working as well. I thought the F flag would tell ripgrep to consider the input literally as it is? How do I fix this? | The text 'hello "value" as seen by the shell starts with a single quote and never ends it. You'll get a continuation prompt ( $PS2 , often > ) asking for the rest of the string. To search for something like this you would need to escape the quotes (both sorts) from the shell, which you can do like this "'hello \"value\"" Or if you don't want to escape all the literal double quotes, quote the initial single quote and leave the rest of the string single-quoted "'"'hello "value"'^1 ^3 ^4 ^2 1-2 double-quoted string containing the initial leading single-quote mark 3-4 single-quoted string containing literal double-quote marks If you have particularly complicated strings there would be nothing wrong with you reading the string from the script, where the shell can be instructed not to try to parse it, rather than on the command line, where it will attempt to parse it #!/bin/bash[[ $# -eq 0 ]] && IFS= read -rp "Enter search string: " itemrg -F "${1:-$item}" Make the script executable and put it somewhere in your $PATH . If you provide something on the command line it will use it. If you don't, it will prompt for the string and you can enter one with as much complexity as you like. As a footnote, possibly part of the confusion is the expectation from the Windows/DOS world that the program itself parses the command line. This is not the case in the UNIX/Linux world, where the shell parses the command line and passes the resulting arguments to the command. There is no way to tell to the shell not to parse the command line - that's what it does - so you need to work with that or else bypass the shell. Related reading is How to escape quotes in shell? and What is the difference between the “…”, '…', $'…', and $“…” quotes in the shell? , which explain in much more detail about quoting quotes, etc. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/650610",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/379023/"
]
} |
650,698 | The last few lines of my file /usr/share/glib-2.0/schemas/org.gnome.Vino.gschema.xml : <schemalist> <schema> <!-- some other tags --> <key name='notify-on-connect' type='b'> <summary>Notify on connect</summary> <description> If true, show a notification when a user connects to the system. </description> <default>true</default> </key> <key name='enabled' type='b'> <summary>Enable remote access to the desktop</summary> <description> If true, allows remote access to the desktop via the RFB protocol. Users on remote machines may then connect to the desktop using a VNC viewer. </description> <default>false</default> </key> </schema></schemalist> If I want to grep this paragraph: <key name='enabled' type='b'> <summary>Enable remote access to the desktop</summary> <description> If true, allows remote access to the desktop via the RFB protocol. Users on remote machines may then connect to the desktop using a VNC viewer. </description> <default>false</default></key> How should I use the grep command to achieve this? | Since your given example is a valid XML file, so I would use xq XML parser tool for that which is part of the yq installation package . xq -x --xml-root key ' .schemalist.schema.key[] | select(."@name" == "enabled")' infile.xml select the "key" tag if its "@name" attribute was equal to 'enabled'. from the xq -h : --xml-output, -x Transcode jq JSON output back into XML and emit it --xml-root XML_ROOT When transcoding back to XML, envelope the output in an element with this name | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/650698",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/472795/"
]
} |
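If installing an XML-aware tool is not an option, this particular file can also be handled with a line-oriented range match, since the opening and closing tags sit on their own lines. A sketch (not a general XML solution, and it keeps the surrounding indentation):

    sed -n "/<key name='enabled'/,/<\/key>/p" /usr/share/glib-2.0/schemas/org.gnome.Vino.gschema.xml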
650,739 | I have a file on Linux, containing the coordinates of thousands of molecules. Each molecule starts with a line containing always the same pattern: @<TRIPOS>MOLECULE And then continues with other lines.I would like to split the file into multiple files, each containing a certain number of molecules.What is the easiest way to do this? | One way is to use awk : awk -v moleculesNum=7 '/^@<TRIPOS>MOLECULE/{ if((++num)%moleculesNum==1){ close(outfile); outfile="file" (++Output) }}{ print >outfile }' infile this splits the original file into multiple files each having maximum 7 number of MOLECULEs (adjustable in moleculesNum=7 parameter) | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/650739",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/413624/"
]
} |
650,878 | I have a bash commands pipeline that produces a ton of logging text output. But mostly it repeats the previous line except for the timestamp and some minor flags, the main output data changes only once in a few hours. I need to store this output as a text file for future handling/research. What should I pipe it to in order to print only 1st line out of every X? | Print 1 st line and skip next N-1 lines out of every N lines. awk -v N=100 'NR%N==1' infile test with: $ seq 1000 |awk -v N=100 'NR%N==1'1101201301401.... to pass the number of lines you want to skip them we can read that from a parameter too, so: $ seq 1000 |awk -v Num=100 -v Skip=98 '(NR-1)%Num<Num-Skip'12101102201202301302401402501502601602701702801802901902 | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/650878",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/40498/"
]
} |
650,894 | I am using ntpsec of Debian unstable. In my log I saw the following: Mai 22 11:48:34 services ntpd[13428]: CLOCK: time stepped by 1.442261Mai 22 11:55:06 services ntpd[13428]: CLOCK: time stepped by 1.524066Mai 22 12:03:00 services ntpd[13428]: CLOCK: time stepped by 1.702944Mai 22 12:08:34 services ntpd[13428]: CLOCK: time stepped by 1.517894Mai 22 12:17:38 services ntpd[13428]: CLOCK: time stepped by 1.434055Mai 22 12:24:07 services ntpd[13428]: CLOCK: time stepped by 1.084220Mai 22 12:32:29 services ntpd[13428]: CLOCK: time stepped by 1.562280Mai 22 12:38:38 services ntpd[13428]: CLOCK: time stepped by 1.211420Mai 22 12:43:49 services ntpd[13428]: CLOCK: time stepped by 1.185642Mai 22 12:48:58 services ntpd[13428]: CLOCK: time stepped by 0.796154Mai 22 12:54:43 services ntpd[13428]: CLOCK: time stepped by 1.331323Mai 22 13:00:21 services ntpd[13428]: CLOCK: time stepped by 0.849190 And this is not just today, it goes on like that for days. So apparently, ntpd does not properly fix the system clock drift. In /var/lib/ntpsec/ntp.drift there is always: 500.000000 What I have tried now: disabled CONFIG_RTC_SYSTOHC, so the kernel doesn't automatically update the RTC. A few hours later, I ran hwclock -w --update-drift to get at least a better accuracy when reading the RTC. It set the drift factor to 0.78 seconds/day. after that, I ran adjtimexconfig to fix the system clock (something that ntpd should have done). It said: Comparing clocks (this will take 70 sec)...done.Adjusting system time by 275,531 sec/day to agree with CMOS clock...done. The result seems to be that ntpd has to step the time a lot less now: Mai 22 14:24:20 services ntpd[13428]: CLOCK: time stepped by 0.234963Mai 22 14:30:30 services ntpd[13428]: CLOCK: time stepped by 0.145163 Good. But why doesn't ntpd do that by itself? 0.2sec/6min still seems way too inexact, so I guess I'll have to repeat that process a few more times. Any suggestions? | For some reason, your OS clock is being very inaccurate. Normally ntpd would keep it in correct time by slewing it, i.e. telling a slow clock to "speed up" to make it catch up with real time, only adjusting the speed of the clock to match real time when it is actually in sync with the real time, and likewise slowing down the clock if it's being too fast. But for your OS clock, this adjustment seems to be insufficient: the error is so great that ntpd must resort to step adjustments, essentially resetting the system clock to correct time every few minutes. If you want accurate timekeeping for databases and the like, step adjustments should be avoided completely. You should not be happy with any non-zero amount of step adjustments. Fortunately the error seem to be always in the same direction, so it might be a systematic error that can be adjusted out. Note: if this is a virtual machine, the time drift might be caused by the virtualization host running in a high load, and "stealing time" from idle VMs to run the busy ones. If this is the case, check with the virtualization host administrator first for recommended ways to fix the timekeeping: there might be a "paravirtualized clock" option that will let the VM essentially use the host's clock for timekeeping, or other solutions recommended by the host OS/hypervisor vendor. Just make sure the virtualization host does not fiddle with the VM's clock if you are trying to use NTP synchronization: it's one or the other, not both! 
Note that hwclock -w --update-drift will estimate the drift of the battery-backed RTC clock by comparing it to the OS clock, which in your case is already known to be quite inaccurate. So you will be adjusting a possibly-good clock to match a known-bad one, which does not sound like a good idea. adjtimexconfig on the other hand assumes the battery-backed RTC is correct and adjusts the parameters of the OS clock to match it. If you have access to a known-good NTP timesource, you should instead use adjtimex --host <NTP server> to compare the OS clock directly to the NTP server (stopping ntpd while you do that), and then use adjtimex -p to view the resulting frequency and tick values. Alternatively, you could just use adjtimex -p to see what frequency offset value has been set by ntpd . ntpd will only adjust the frequency value; it won't touch the tick setting at all. If you find the frequency offset value has gone all the way to either end of the scale at +/-32768000, you should adjust the tick value manually, then repeat the process. (If frequency goes to or near the maximum positive value, the tool is trying to speed up the clock and fails to speed it up enough as it runs out of adjustment range. To fix that, increase the tick value. If frequency goes to or near the negative limit, decrease the tick value.) Once you find a tick value that lets the frequency offset value stay at relativelynear the middle of the scale (say, +/- 5000000 or so), then ntpd should have a much better chance at keeping the clock in sync by tweaking the frequency offset value as needed. You should edit the tick value manually into /etc/default/adjtimexconfig and ensure that the adjtimex.service gets executed successfully at boot: it runs before ntpd is started, and so sets the OS clock into "correct gear" before ntpd starts acting as a "cruise control" for it. Once you get the OS clock under control, so that ntpd will keep in a synchronized state ( ntpq -np will display an asterisk in the first column) and there are no log messages about step adjustments other than maybe once at boot time, then you can use hwclock -w --update-drift to estimate the drift rate of the RTC clock. The end result should be a system that keeps as good time as reasonably achievable whether it's powered on or not. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/650894",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/335309/"
]
} |
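The check-and-adjust cycle described above, written out as a sketch (the service name, NTP server, and example tick value are all placeholders; the conventional default tick is 10000, so 10001 is a one-unit speed-up of the clock):

    systemctl stop ntpsec            # stop ntpd so it does not fight the measurement
    adjtimex -p                      # note the current tick and frequency values
    adjtimex --host 0.pool.ntp.org   # compare the OS clock against a known-good NTP server
    # if frequency is pegged near +32768000, raise tick; if near -32768000, lower it
    adjtimex --tick 10001
    adjtimex -p                      # frequency should now settle nearer the middle of its range
    systemctl start ntpsec

Once ntpd stays synchronised without step messages in the log, hwclock -w --update-drift becomes meaningful, because the RTC is then being compared against a clock that is actually correct.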
650,942 | Consider this script: #!/bin/shfoo=1if [[ ! -z $foo ]]; then echo abcfi It's using the Bash syntax [[ ... ]] which doesn't work (as expected) when I run it with the default shell on Ubuntu (dash). However, its return code is still zero. $ ./tmp.sh./tmp.sh: 4: ./tmp.sh: [[: not found$ echo $?0 How can I detect this kind of error in a script if I can't rely on the exit code? | Let me first explain why this happens. POSIX Shell Command Languagespec says: The exit status of the if command shall be the exit status of the thenor else compound-list that was executed, or zero, if none wasexecuted. Since in your case then part is not executed and there is no else the exit status is 0. It would also be 0 if you ran this script usingBash as in man bash it says: if list; then list; [ elif list; then list; ] ... [ else list; ] fi The if list is executed. If its exit status is zero, the then list is executed. Otherwise, each elif list is executed in turn, and if its exit status is zero, the corresponding then list is executed and the command completes. Otherwise, the else list is executed, if present. The exit status is the exit sta‐ tus of the last command executed, or zero if no condition tested true. How can I detect this kind of error in a script if I can't rely on the exit code? There are 2 ways I could think of: if you can modify your script add else part to the if construct: #!/bin/sh foo=1 if [[ ! -z $foo ]]; then echo abc else echo not true exit 1 fi if you got if from someone and you're not willing to modify it useshellcheck static analyzer in sh mode to look for possible bugs inthe code and report them to the author: $ shellcheck -s sh dash-exit-status.sh In dash-exit-status.sh line 4: if [[ ! -z $foo ]]; then ^-------------^ SC2039: In POSIX sh, [[ ]] is undefined. ^-- SC2236: Use -n instead of ! -z. For more information: https://www.shellcheck.net/wiki/SC2039 -- In POSIX sh, [[ ]] is undefined. https://www.shellcheck.net/wiki/SC2236 -- Use -n instead of ! -z. Basically, this is a bug to me as one should not use non-POSIXfeatures in scripts that are supposed to be executed by /bin/sh which might but doesn't have to be a symlink to Bash. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/650942",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/392174/"
]
} |
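For completeness, the POSIX rewrite that shellcheck's two hints add up to (a sketch, not part of the original answer); it behaves the same under dash and bash:

    #!/bin/sh
    foo=1
    if [ -n "$foo" ]; then
        echo abc
    fi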
651,000 | My Qnap NAS is cursed with a find command that lacks the -exec parameter, so I have to pipe to something. The shell is: GNU bash, version 3.2.57(2)-release-(arm-unknown-linux-gnueabihf) I'm trying to set the setgid bit on all subdirectories (not files) of the current directory. This does not work: find . -type d | xargs chmod g+s $1 Using "$1" , "$(1)" , $("1") etc. will not work either. They all indicate that chmod is getting passed a directory name containing spaces as two or more parameters (it spits out its standard help message about what parameters are supported). I don't care to use xargs if I don't have to; I think it chokes on long names anyway, doesn't it? These and variants of them do not work: find . -type d | chmod g+sfind . -type d | chmod g+s "$1" I've thought of using awk or sed to inject quotation marks but I have to think there's an easier way to do this. What did people do before -exec ? (The sad thing is that I probably knew, back in 1995 or so, but have long since forgotten.) PS: Various of these directory names will contain Unicode characters, the ? symbol, etc. They're originally from macOS which is rather permissive. That said, I should probably replace all the ? instances with something like the Unicode character ⁇ so Windows doesn't choke on them. But that's also going to require a similar find operation with this crippleware version of find . | The output of find emits file names separated by newlines 1 . This is not the format that xargs wants and find has no way to produce the format that xargs wants: it parses its input as whitespace-separated items, with \'" used for quoting. Some versions of xargs can take newline-separated input, but if your find lacks standard options, chances are that your xargs does too. find . -type d | xargs chmod g+s works as long as your directory names don't contain whitespace or \'" . Note that there's no $1 : that's meaningful to a shell, but no shell is involved in parsing the output of find and feeding it to chmod , only xargs . If your find has -print0 and your xargs has -0 , you can use these options to pass null-delimited file names, which works with arbitrary file names. find . -type d -print0 | xargs -0 chmod g+s If your xargs supports the standard option -I , you can use it to instruct it to process each line as an item, instead of blank-separated quoted strings. This copes with spaces, but not with \"' or newlines. find . -type d | xargs -I {} chmod g+s {} You can use the shell to loop over lines instead of xargs . This works for any file name that doesn't contain newline characters. find . -type d | while IFS= read -r line; do chmod g+s "$line"; done Both of these solutions work only on file names that don't contain newline characters. The output of find with filenames containing newlines is ambiguous except in one case which is painful to parse: find won't spontaneously emit multiple slashes, so if you put // in the path to the directory to traverse, you can recognize this in the output. Here's some minimally tested code using that uses this fact to convert the output from find into the input format of xargs . chars=$(printf '\t "'\\\'){ find .//. -type d; echo .// } | LC_ALL=C sed -n 's/['"$chars"']/\\&/g/^\.\/\// {xs/\n/\\&/gpb}H' | LC_ALL=C xargs chmod g+s 1 More precisely: terminated by newlines (there's a newline after the last name). | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/651000",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/174665/"
]
} |
651,035 | I'm trying to figure out how to get total number of lines from all .txt files. I think the problem is on the line 6 -> let $((total = total + count )) . Anybody knows what's to correct form of this? #!/bin/bashtotal=0find /home -type f -name "*.txt" | while read -r FILE; do count=$(grep -c ^ < "$FILE") echo "$FILE has $count lines" let $((total = total + count )) done echo TOTAL LINES COUNTED: $total Thank you | Your line 6 is better written as total=$(( total + count )) ... but it would be better still to use a tool that is made for counting lines (assuming you want to count newlines, i.e. the number of properly terminated lines) find . -name '*.txt' -type f -exec cat {} + | wc -l This finds all regular files in or below the current directory that have filenames ending in .txt . All these files are concatenated into a single stream and piped to wc -l , which outputs the total number of lines, which is what the title and text of the question asks for. Complete script: #!/bin/shnlines=$( find . -name '*.txt' -type f -exec cat {} + | wc -l )printf 'Total number of lines: %d\n' "$nlines" To also get the individual files' line count, consider find . -name '*.txt' -type f -exec sh -c ' wc -l "$@" | if [ "$#" -gt 1 ]; then sed "\$d" else cat fi' sh {} + |awk '{ tot += $1 } END { printf "Total: %d\n", tot }; 1' This calls wc -l on batches of files, outputting the line cound for each individual file. When wc -l is called with more than one filename, it will output a line at the end with the total count. We delete this line with sed if the in-line sh -c script is called with more than one filename argument. The long list of line counts and file pathnames is then passed to awk , which simply adds the counts up (and passes the data through) and presents the user with the total count at the end. On GNU systems, the wc tool can read pathnames from a nul-delimited stream. You can use that with find and its -print0 action on these systems like so: find . -name '*.txt' -type f -print0 |wc --files0-from=- -l Here, the found pathnames are passed as a nul-delimited list over the pipe to wc using the non-standard -print0 . The wc utility is used with the non-standard --files0-from option to read the list being passed across the pipe. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/651035",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/473008/"
]
} |
651,065 | I have multiple R scripts to read (up to 3 i.e. tr1.R, tr2.R, tr3.R). The bash script for reading a single script is given below #!/bin/bash#PBS -l nodes=1:ppn=10,walltime=00:05:00#PBS -M #PBS -m emodule load R/4.0Rscript ~/tr1.R I tried the following as suggested by @cas #!/bin/bash#PBS -l nodes=1:ppn=10,walltime=00:05:00#PBS -M #PBS -m emodule load R/4.0**Rscript ~/tr"$i".R** Further, the job is submitted using for i in {1..3} ; do qsub -o "default.$i.out" -e "errorfile$i" -v i script.shdone This couldnot read Rscript ~/tr"$i".R . | Your line 6 is better written as total=$(( total + count )) ... but it would be better still to use a tool that is made for counting lines (assuming you want to count newlines, i.e. the number of properly terminated lines) find . -name '*.txt' -type f -exec cat {} + | wc -l This finds all regular files in or below the current directory that have filenames ending in .txt . All these files are concatenated into a single stream and piped to wc -l , which outputs the total number of lines, which is what the title and text of the question asks for. Complete script: #!/bin/shnlines=$( find . -name '*.txt' -type f -exec cat {} + | wc -l )printf 'Total number of lines: %d\n' "$nlines" To also get the individual files' line count, consider find . -name '*.txt' -type f -exec sh -c ' wc -l "$@" | if [ "$#" -gt 1 ]; then sed "\$d" else cat fi' sh {} + |awk '{ tot += $1 } END { printf "Total: %d\n", tot }; 1' This calls wc -l on batches of files, outputting the line cound for each individual file. When wc -l is called with more than one filename, it will output a line at the end with the total count. We delete this line with sed if the in-line sh -c script is called with more than one filename argument. The long list of line counts and file pathnames is then passed to awk , which simply adds the counts up (and passes the data through) and presents the user with the total count at the end. On GNU systems, the wc tool can read pathnames from a nul-delimited stream. You can use that with find and its -print0 action on these systems like so: find . -name '*.txt' -type f -print0 |wc --files0-from=- -l Here, the found pathnames are passed as a nul-delimited list over the pipe to wc using the non-standard -print0 . The wc utility is used with the non-standard --files0-from option to read the list being passed across the pipe. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/651065",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/419716/"
]
} |
651,134 | I would like to rsync over a few files, specified with an array, and delete any other file in a directory. The only approach I can think of is to remove other files using find , and rsync over the files, so we copy as few files as possible. In the following example, I want to delete any other file in /tmp/tmp/ , except for btrfs_x64.efi and iso9660_x64.efi . $ refind_efi_dir='/tmp/tmp/'$ drivers=('btrfs_x64.efi' 'iso9660_x64.efi')$ find ${refind_efi_dir}drivers_x64/ "${drivers[@]/#/! -name }" -type f -exec rm -f {} + I want the expansion to expand to the following command: $ find /tmp/tmp/drivers_x64/ ! -name btrfs_x64.efi ! -name iso9660_x64.efi -type f -exec rm -f {} + But instead it appears to be running the following command: $ find /tmp/tmp/drivers_x64/ "! -name btrfs_x64.efi" "! -name iso9660_x64.efi" -type f -exec rm -f {} + Is there a way to get the former? Ideally it also works if some array entries have spaces in them. | Yes, here you need to generate 3 arguments to find for each element of your array. Also find 's -name takes a pattern, so for it to match on exact file names, you'd need to escape the find wildcard operators ( * , ? , [ and \ ): set -o extendedglob # for (#m)exclusions=()for name ($drivers) exclusions+=(! -name ${name//(#m)[?*[\\]/\\$MATCH})find ${refind_efi_dir}drivers_x64/ $exclusions -type f -exec rm -f {} + "${array[@]/pattern/replacement}" expands to as many elements as there are in the array, after substitution performed on each of them. Here, given that -name takes a file name pattern, it should not contain / , so you could replace each element with !/-name/element and then split on / : set -o extendedglob # for (#m)find ${refind_efi_dir}drivers_x64/ \ ${(@s[/])${drivers//(#m)[?*[\\]/\\$MATCH}/#/!\/-name\/} \ -type f -exec rm -f {} + Or use $'\0' instead of / as it can't be passed in an argument to an external command anyaway: set -o extendedglob # for (#m)find ${refind_efi_dir}drivers_x64/ \ ${(@0)${drivers//(#m)[?*[\\]/\\$MATCH}/#/!$'\0'-name$'\0'} \ -type f -exec rm -f {} + But that doesn't help much with legibility... Here, you could also use zsh 's glob for everything: (cd -P -- $refind_efi_dir && rm -f -- **/^(${(~j[|])drivers})(D.)) Where the j[|] parameter expansion flag joins the elements of the $drivers array with | and ~ causes that | to be treated as a glob operator. That pattern is negated with ^ (for which you need the extendedglob option). D to include hidden files, . to restrict to regular files like your -type f . | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/651134",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/63246/"
]
} |
651,155 | How can I create a bash and a zsh prompt that shows only the current directory and its parent directory? For example, if I'm at the dir ~/pictures/photos/2021 , it should show: [photos/2021]$ echo hi That's all. Would like it for bash and for zsh . | In zsh : PS1='[%2d] $ ' See info zsh 'prompt expansion' for details. In bash (or zsh -o promptsubst , though you wouldn't want to do that there as if $PWD contains % characters, that would cause further prompt expansions): PS1='[${PWD#"${PWD%/*/*}/"}] $ ' | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/651155",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/462354/"
]
} |
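How the bash expansion in that PS1 works, traced on an example path (p stands in for PWD here so the demo does not touch the real variable):

    p=/home/user/pictures/photos/2021
    echo "${p%/*/*}"              # /home/user/pictures  -> shortest /*/* suffix removed
    echo "${p#"${p%/*/*}/"}"      # photos/2021          -> that prefix plus its slash stripped

So the prompt keeps exactly the last two path components, which is what zsh's %2d does natively.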
651,195 | I'm stuck creating an awk script that prepares a csv file before analysis. I need to create an output file with columns 1-2, 10, 13-15, 19-21. Also I need to replace the numbers on column 2 to the days of the week (so, 1 = Monday, 2 = Tuesday...) and convert the 21th column from nautical miles to km; and delete "" of columns 10, 13 and 14. Input: "DAY_OF_MONTH","DAY_OF_WEEK","OP_UNIQUE_CARRIER","OP_CARRIER_AIRLINE_ID","OP_CARRIER","TAIL_NUM","OP_CARRIER_FL_NUM","ORIGIN_AIRPORT_ID","ORIGIN_AIRPORT_SEQ_ID","ORIGIN","DEST_AIRPORT_ID","DEST_AIRPORT_SEQ_ID","DEST","DEP_TIME","DEP_DEL15","DEP_TIME_BLK","ARR_TIME","ARR_DEL15","CANCELLED","DIVERTED","DISTANCE",1,2,"EV",20366,"EV","N48901","4397",13930,1393007,"ORD",11977,1197705,"GRB","1003",0.00,"1000-1059","1117",0.00,0.00,0.00,174.00,1,2,"EV",20366,"EV","N16976","4401",15370,1537002,"TUL",13930,1393007,"ORD","1027",0.00,"1000-1059","1216",0.00,0.00,0.00,585.00,1,2,"EV",20366,"EV","N12167","4404",11618,1161802,"EWR",15412,1541205,"TYS","1848",0.00,"1800-1859","2120",0.00,0.00,0.00,631.00, Output: "DAY_OF_MONTH","DAY_OF_WEEK","ORIGIN","DEST","DEP_TIME","DEP_DEL15","CANCELLED","DIVERTED","DISTANCE"1,Tuesday,ORD,GRB,1003,0.00,0.00,0.00,322.2481,Tuesday,TUL,ORD,1027,0.00,0.00,0.00,1083.421,Tuesday,EWR,TYS,1848,0.00,0.00,0.00,1168.61 So far, I've got the command to take the columns needed: cut -d "," -f1-2,10,13-15,19-21 'Jan_2020_ontime.csv' > 'flights_jan_20.csv' And also the code to replace the numbers in column 2 with their respective days of the week: awk 'BEGIN {FS = OFS = ","} $2 == 1 {$2 = "Monday"} $2 == 2 {$2 = "Tuesday"} $2 == 3 {$2 = "Wednesday"} $2 == 4 {$2 = "Thursday"} $2 == 5 {$2 = "Friday"} $2 == 6 {$2 = "Saturday"} $2 == 7 {$2 = "Sunday"} {print}' file.csv I am also missing a way to wrap all the code into the script to execute it later. | #!/bin/awk -fBEGIN { dow[1] = "Monday" dow[2] = "Tuesday" dow[3] = "Wednesday" dow[4] = "Thursday" dow[5] = "Friday" dow[6] = "Saturday" dow[7] = "Sunday" FS=OFS=","}NR == 1 {print $1, $2, $10, $13, $14, $15, $19, $20, $21}NR != 1 { $2 = dow[$2] $21 *= 1.852 gsub(/"/, "", $10) gsub(/"/, "", $13) gsub(/"/, "", $14) print $1, $2, $10, $13, $14, $15, $19, $20, $21} Save this in a file, say: sample.awk . Make it executable: chmod +x sample.awk and run as ./sample.awk data . To save the output in another file add the output redirection operator as follow : ./sample.awk data > out.csv | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/651195",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/469363/"
]
} |
651,198 | The Podman man pages explains for volume mounts/binds: Labeling systems like SELinux require that proper labels are placed on volumecontent mounted into a container. Without a label, the security system mightprevent the processes running inside the container from using the content. Bydefault, Podman does not change the labels set by the OS. To change a label in the container context, you can add either of two suffixes :z or :Z to the volume mount. These suffixes tell Podman to relabel fileobjects on the shared volumes. The z option tells Podman that two containersshare the volume content. As a result, Podman labels the content with a sharedcontent label. Shared volume labels allow all containers to read/write content.The Z option tells Podman to label the content with a private unshared label. The troubleshooting page explains the same thing with nearly the same words, however. Now, being rather new to Podman and SELinux, wonder, what I actually should use when?I know, when I get permissions errors that they could be due to SELinux, so one of the two switches may fix that. But what are the differences between these two (lower-case z and upper-case Z) options? The difference it says is: :z creates a shared content label :Z creates a private unshared label This introduces many new words: shared and unshared (what does that mean?) ??? vs private (again, it’s not clear what this should say to me) also it says “content label” for one option, while the other one only says “label” – is there a difference between these two terms or is it the same? So what do these words mean in this context?And the final question: When I should I use what? | “Shared” means that multiple containers can share the volume; “unshared” says that they can’t. In a little more detail, :z labels the volume inside each container with the appropriate label ( container_file_t ), and any given volume can be mounted inside multiple containers in parallel, and all running containers with the volume mount will have access to it. Any change made by the host, or any running container, will be visible to all running containers. “Private” means that in addition, the label used inside the container will be private to that container. There’s no additional layering at the file system level, so this effectively means that the content is labelled privately even from the host’s perspective. Containers with the same mount can’t share their access to it — at least with Podman, the last container wins, and is the only container with access to the volume. The opposite of “private” here would be “shared” in my mind, which would explain why there’s no opposing term in the documentation (“shared shared label”). I’m not sure there’s any significance in “content label” v. “label”, unless it’s an allusion to the fact that any content created in such containers will be labelled accordingly, including in the host, so you’ll see files with the container_file_t label. See this post demonstrating the difference in more detail on Podman . Docker has the same distinction . | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/651198",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/146739/"
]
} |
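A minimal sketch of the two options discussed above, assuming a throwaway fedora image and made-up host paths; only the :z and :Z suffixes are the point here:

  # shared label (:z): several containers may mount and use the same host directory
  podman run --rm -v /srv/shared:/data:z fedora ls /data
  podman run --rm -v /srv/shared:/data:z fedora touch /data/from-second-container

  # private label (:Z): the content is relabelled for this one container only,
  # so the last container to mount it with :Z is the only one that keeps access
  podman run --rm -v /srv/private:/data:Z fedora ls /data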
651,315 | macOS sed command to replace last occurrence of a word in a file Replace with the content from another file Replace only the last occurrence, means only once. The word can be a substring abchello or helloabc There can be trailing whitespace or newline after the word sample_file_path = "/Users/saurav/sample.text"sample_file_path_1 = "/Users/saurav/sample1.text" sample.txt : hellohihellook sample1.txt : Iam doinggreat Expected output ( sample.txt ): hellohiIam doinggreatok Need to use file path variable | In three steps, using sed syntax compatible with /usr/bin/sed on macOS, and either bash or zsh (the two main shells on current macOS systems): sed -n '/hello/=' sample.txt |sed -e '$!d' -e $'s/.*/&r sample1.txt\\\n&d/' |sed -f /dev/stdin sample.txt This uses sed in three steps: Finds all lines in sample.txt that matches hello and output the line numbers corresponding to those lines. Deletes all but the last line number outputted by the first step (using $!d , "if this is not the last line, delete it"), and creates a two-line sed script that would modify the last matching line by first reading sample1.txt and then deleting the original line. Given that the last match of hello is on line 3 in the original file, this script would look like 3r sample1.txt3d Applies the constructed sed script on the file sample.txt . Would you want to make the edit "in-place", so that the original sample.txt is modified, then use sed -n '/hello/=' sample.txt |sed -e '$!d' -e $'s/.*/&r sample1.txt\\\n&d/' |sed -i '' -f /dev/stdin sample.txt The same set of commands, but using your variables $sample_file_path and $sample_file_path_1 for the two file paths: sed -n '/hello/=' "$sample_file_path" |sed -e '$!d' -e 's,.*,&r '"$sample_file_path_1"$'\\\n&d,' |sed -i '' -f /dev/stdin "$sample_file_path" Note that I have changed the delimiters in the second command from / to , as the file path contains slashes. You may use any character as a delimiter in the s/// command that is not otherwise part of the regular expression or replacement text. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/651315",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/473424/"
]
} |
651,342 | I'm writing a script to prepare a csv file that takes columns number 5, 6, 7, 8, 10 and 13; takes the rows that on column 44 are equal to 7 and also meet that the rows that on column 3 are equal to 1, at the same time. Input: "ID_Bcn_2019","ID_Bcn_2016","Codi_Principal_Activitat","Nom_Principal_Activitat","Codi_Sector_Activitat","Nom_Sector_Activitat","Codi_Grup_Activitat","Nom_Grup_Activitat","Codi_Activitat_2019","Nom_Activitat","Codi_Activitat_2016","Nom_Local","SN_Oci_Nocturn","SN_Coworking","SN_Servei_Degustacio","SN_Obert24h","SN_Mixtura","SN_Carrer","SN_Mercat","Nom_Mercat","SN_Galeria","Nom_Galeria","SN_CComercial","Nom_CComercial","SN_Eix","Nom_Eix","X_UTM_ETRS89","Y_UTM_ETRS89","Latitud","Longitud","Direccio_Unica","Codi_Via","Nom_Via","Planta","Porta","Num_Policia_Inicial","Lletra_Inicial","Num_Policia_Final","Lletra_Final","Solar","Codi_Parcela","Codi_Illa","Seccio_Censal","Codi_Barri","Nom_Barri","Codi_Districte","Nom_Districte","Referencia_cadastral","Data_Revisio"1059038,"68849","1","Actiu","2","Serveis","14","Restaurants, bars i hotels (Inclòs hostals, pensions i fondes)","1400002","Restaurants","1400002","QUATRE COSES","1","1","1","1","1","0","1","","1","","1","","0","Rambla Catalunya","430088.542","4582365.352","41.38978196","2.16378361","089004, 329-329, LOC 10","089004","CONSELL DE CENT","LOC","10","329","","329","","114142","019","60490","079","07","la Dreta de l'Eixample","02","Eixample","0125419DF3802E","20190509"1075454,"","1","Actiu","2","Serveis","16","Altres","1600400","Serveis a les empreses i oficines","16004","SORIGUE","1","1","1","1","1","0","1","","1","","1","","1","","427229.272","4577543.637","41.34610100","2.13016600","222206, 19-19, LOC 10","222206","MOTORS","LOC","10","19","","19","","","","","025","12","la Marina del Prat Vermell","03","Sants-Montjuïc","","20190925"1075453,"","1","Actiu","2","Serveis","16","Altres","1600102","Activitats emmagatzematge","1600102","CEJIDOS SIVILA S.A","1","1","1","1","1","0","1","","1","","1","","1","","427178.393","4577526.160","41.34593900","2.12956000","222206, 278-282, LOC 10","222206","MOTORS","LOC","10","278","","282","","","","","025","12","la Marina del Prat Vermell","03","Sants-Montjuïc","","20190925" Output: "Codi_Sector_Activitat","Nom_Sector_Activitat","Codi_Grup_Activitat","Nom_Grup_Activitat","Nom_Activitat","SN_Oci_Nocturn""2","Serveis","14","Restaurants, bars i hotels (Inclòs hostals, pensions i fondes)","Restaurants","1" For the moment, in my script I've got: #!/bin/awk -fBEGIN { FS = OFS = "," }NR == 1 { print $5, $6, $7, $8, $10, $13 }NR != 1 { if ($44 == 7) {print} if ($3 == 1) {print}} But I'm not sure about the last part. So my question would be, how do I extract only the rows that meet these conditions: ($44 == 7) and ($3 == 1) ? | A starting note: none of 44 field cells equals 7. You have 07 . This is not awk, it's Miller , I think it could be useful mlr --csv -N filter -S '$3=="1" && $44=="07" || $1=~"ID"' then cut -f 5,6,7,8,10,13 input.csv >outuput.csv Some comments: filter to filter using you conditions and to have in output the heading row; cut to extract the fields you want In output you will have Codi_Sector_Activitat Nom_Sector_Activitat Codi_Grup_Activitat Nom_Grup_Activitat Nom_Activitat SN_Oci_Nocturn 2 Serveis 14 Restaurants, bars i hotels (Inclòs hostals, pensions i fondes) Restaurants 1 | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/651342",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/469363/"
]
} |
651,397 | This is with bash on a Mac running Catalina: This works: rsync -Pa --rsh="ssh -p 19991" --exclude '*.jpg' --exclude '*.mp4' pi@localhost:/home/pi/webcam /Volumes/Media/Webcam\ Backups/raspcondo/webcam/ These do not: rsync -Pa --rsh="ssh -p 19991" --exclude={'*.jpg', '*.mp4'} pi@localhost:/home/pi/webcam /Volumes/Media/Webcam\ Backups/raspcondo/webcam/rsync -Pa --rsh="ssh -p 19991" --exclude {'*.jpg', '*.mp4'} pi@localhost:/home/pi/webcam /Volumes/Media/Webcam\ Backups/raspcondo/webcam/ This is the output: building file list ...rsync: link_stat "/Users/mnewman/*.mp4}" failed: No such file or directory (2)rsync: link_stat "/Users/mnewman/pi@localhost:/home/pi/webcam" failed: No such file or directory (2)0 files to considersent 29 bytes received 20 bytes 98.00 bytes/sectotal size is 0 speedup is 0.00rsync error: some files could not be transferred (code 23) at /AppleInternal/BuildRoot/Library/Caches/com.apple.xbs/Sources/rsync/rsync-54.120.1/rsync/main.c(996) [sender=2.6.9] What am I doing wrong with the list of file types to exclude? | First of all, your first example works - what's wrong with using that? If you really don't want to do that, try --exclude=*.{jpg,mp4} , which will (in some shells) expand to --exclude=*.jpg --exclude=*.mp4 , but note: this is a shell feature called Brace Expansion . It is not a feature of rsync or rsync's filter rules. This can easily lead to confusion and "surprising" behaviour if you mistakenly think that rsync will use the braces itself (it won't, and can't, and never even sees the braces). The expansion is done before rsync is executed. rsync only sees, e.g., --exclude=*.mp4 because there is no filename that matches that pattern in the current directory. in the unlikely event that there are any filenames that match --exclude=*.mp4 or --exclude=*.jpg , the brace expansion will expand to those exact filenames, without a wild-card. e.g. $ mkdir /tmp/test$ cd /tmp/test$ echo rsync --exclude=*.{jpg,mp4}rsync --exclude=*.jpg --exclude=*.mp4 so far, so good...but look what happens when there are filenames that actually match the brace expansions: $ touch -- --exclude=foo.jpg$ touch -- --exclude=bar.mp4$ touch -- --exclude=foobar.mp4$ echo rsync --exclude=*.{jpg,mp4}rsync --exclude=foo.jpg --exclude=bar.mp4 --exclude=foobar.mp4 A better way to avoid typing lots of --exclude options would be to use an array and printf: excludes=('*.mp4' '*.jpg')rsync ...args... $([ "${#excludes[@]}" -gt 0 ] && printf -- "--exclude='%s' " "${excludes[@]}") ...more args... This would result in a command line like: rsync ...args... --exclude='*.mp4' --exclude='*.jpg' ...more args... Even better would be to use an array and process substitution to provide a "file" for --exclude-from . e.g. rsync ... --exclude-from=<([ "${#excludes[@]}" -gt 0 ] && printf -- '- %s\n' "${excludes[@]}") ... | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/651397",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/388095/"
]
} |
651,444 | I'm trying to copy data off a rather damaged CD using the following command: dd if=/dev/sr1 of=IDT.img conv=sync,noerror status=progress However, the 'of' device got disconnected and the dd stopped (output below). ...dd: error reading '/dev/sr1': Input/output error1074889+17746 records in1092635+0 records out559429120 bytes (559 MB, 534 MiB) copied, 502933 s, 1.1 kB/sdd: writing to 'IDT.img': Input/output error1074889+17747 records in1092635+0 records out559429120 bytes (559 MB, 534 MiB) copied, 502933 s, 1.1 kB/s Can I resume with: dd if=/dev/sr1 of=IDT.img conv=sync,noerror status=progress seek=1092635 skip=1092635 Or should the seek/skip numbers be both 1092636 , or should skip/seek be different from each other, or something entirely different? PS I know I'm probably using the wrong command for this, e.g. ddrescue is probably better. But I'm probably stuck with dd now(?). I don't expect any more errors on the output file side of things. | You have encountered read errors, so the options conv=sync,noerror have almost certainly altered the stream of data, unfortunately making your output file worthless or at the very least an inaccurate copy. Each time there is a bad read (short read) on the input, the conv=sync option pads out the block with NUL bytes. The dd command will attempt to continue the input stream from where it left off, but the output now has an unknown number of NUL bytes inserted. You should stop using dd and use ddrescue , which was created for recovering data from bad media. Referenced answers for similar topics What does the two numbers mean respectively in dd's “a+b records” stats? Got “No space left on device” when cloning 1TB disk to 1.2TB disk using dd When is dd suitable for copying data? (or, when are read() and write() partial) What does dd conv=sync,noerror do? | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/651444",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/108988/"
]
} |
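For the damaged-CD case above, a minimal GNU ddrescue invocation could look like the following; the image and map file names are arbitrary, and -b 2048 matches the sector size of optical media:

  # first pass; the map file records which areas have been read successfully
  ddrescue -b 2048 /dev/sr1 IDT.img IDT.map
  # running the same command again resumes from the map file instead of starting over;
  # -r 3 asks for up to three extra retry passes over the bad areas
  ddrescue -b 2048 -r 3 /dev/sr1 IDT.img IDT.map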
651,451 | I have a file with gene_id and gene names in one line. I want to replace the word after gene_id with the word after gene or after product or after sprot (if some of it missed). Here is an example of a line: chrM Gnomon CDS 8345 8513 . + 1 gene_id "cds-XP_008824843.3"; transcript_id "cds-XP_008824843.3"; Parent "rna-XM_008826621.3"; Dbxref "GeneID:103728653_Genbank:XP_008824843.3"; Name "XP_008824843.3"; end_range "8513,."; gbkey "CDS"; gene "semaphorin-3F"; partial "true"; product "semaphorin-3F"; protein_id "XP_008824843.3"; sprot "sp|Q13275|SEM3F_HUMAN";chrM StringTie exon 2754 3700 . + . gene_id "cds-YP_007626758.1"; transcript_id "cds-YP_007626758.1"; Parent "gene-ND1"; Dbxref "Genbank:YP_007626758.1,Gene "ID:15088436"; Name "YP_007626758.1"; Note "TAAstopcodoniscompletedbytheadditionof3'AresiduestothemRNA"; gbkey "CDS"; gene "ND1"; product "NADHdehydrogenasesubunit1"; protein_id "YP_007626758.1"; transl_except "(pos:3700..3700%2Caa:TERM)"; transl_table "2"; I tried to make it with sed: sed -E 's/[^gene_id] .*?;/[^gene] .*?;|[^sprot] .*?;|[^product] .*?;/g' But the results were incorrect: chrM Gnomon CDS 8345 8513 . + 1 gene_id "cds-XP_008824843.3"[^gene] .*?;|[^sprot] .*?;|[^product] .*?;chrM StringTie exon 2754 3700 . + . gene_id "cds-YP_007626758.1"[^gene] .*?;|[^sprot] .*?;|[^product] .*?; But I want to save all line, but with another word after gene_id , like this: chrM Gnomon CDS 8345 8513 . + 1 gene_id "semaphorin-3F"; transcript_id "cds-XP_008824843.3"; Parent "rna-XM_008826621.3"; Dbxref "GeneID:103728653_Genbank:XP_008824843.3"; Name "XP_008824843.3"; end_range "8513,."; gbkey "CDS"; gene "semaphorin-3F"; partial "true"; product "semaphorin-3F"; protein_id "XP_008824843.3"; sprot "sp|Q13275|SEM3F_HUMAN";chrM StringTie exon 2754 3700 . + . gene_id "ND1"; transcript_id "cds-YP_007626758.1"; Parent "gene-ND1"; Dbxref "Genbank:YP_007626758.1,Gene "ID:15088436"; Name "YP_007626758.1"; Note "TAAstopcodoniscompletedbytheadditionof3'AresiduestothemRNA"; gbkey "CDS"; gene "ND1"; product "NADHdehydrogenasesubunit1"; protein_id "YP_007626758.1"; transl_except "(pos:3700..3700%2Caa:TERM)"; transl_table "2"; Or like this (if another missed): chrM Gnomon CDS 8345 8513 . + 1 gene_id "sp|Q13275|SEM3F_HUMAN"; transcript_id "cds-XP_008824843.3"; Parent "rna-XM_008826621.3"; Dbxref "GeneID:103728653_Genbank:XP_008824843.3"; Name "XP_008824843.3"; end_range "8513,."; gbkey "CDS"; gene "semaphorin-3F"; partial "true"; product "semaphorin-3F"; protein_id "XP_008824843.3"; sprot "sp|Q13275|SEM3F_HUMAN";chrM StringTie exon 2754 3700 . + . gene_id "ND1"; transcript_id "cds-YP_007626758.1"; Parent "gene-ND1"; Dbxref "Genbank:YP_007626758.1,Gene "ID:15088436"; Name "YP_007626758.1"; Note "TAAstopcodoniscompletedbytheadditionof3'AresiduestothemRNA"; gbkey "CDS"; gene "ND1"; product "NADHdehydrogenasesubunit1"; protein_id "YP_007626758.1"; transl_except "(pos:3700..3700%2Caa:TERM)"; transl_table "2"; Any help will be very much appreciated. | You have encountered read errors, so the options conv=sync,noerror have almost certainly altered the stream of data, unfortunately making your output file worthless or at the very least an inaccurate copy. Each time there is a bad read (short read) on the input, the conv=sync option pads out the block with NUL bytes. The dd command will attempt to continue the input stream from where it left off, but the output now has an unknown number of NUL bytes inserted. You should stop using dd and use ddrescue , which was created for recovering data from bad media. 
Referenced answers for similar topics What does the two numbers mean respectively in dd's “a+b records” stats? Got “No space left on device” when cloning 1TB disk to 1.2TB disk using dd When is dd suitable for copying data? (or, when are read() and write() partial) What does dd conv=sync,noerror do? | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/651451",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/473569/"
]
} |
651,477 | I have a directory of compressed directories, like this: MainDirectory/FolderA.tar.gzMainDirectory/FolderB.tar.gz Within each directory, some of the files have the same name. Ex. MainDirectory/FolderA.tar.gz/file1.fastaMainDirectory/FolderA.tar.gz/file2.fastaMainDirectory/FolderB.tar.gz/file1.fastaMainDirectory/FolderB.tar.gz/file1.fasta I need to decompress each directory, rename each file with the name of the directory, and then recompress the individual files. My desired output is: MainDirectory/FolderA_file1.fasta.bz2MainDirectory/FolderA_file2.fasta.bz2MainDirectory/FolderB_file1.fasta.bz2MainDirectory/FolderB_file1.fasta.bz2 I came up with this code, but it renames the files to have a literal $f in: cd MainDirectory/for f in *.tar.gz do tar -xvzf $f --transform 's,^,${f},' pbzip2 *.fastq done Output: MainDirectory/'${f}file1.fastq.bz2'MainDirectory/'${f}file2.fastq.bz2' Please may you help me convert the command so it prepends the files with the actual folder name instead? Thank you. | You have encountered read errors, so the options conv=sync,noerror have almost certainly altered the stream of data, unfortunately making your output file worthless or at the very least an inaccurate copy. Each time there is a bad read (short read) on the input, the conv=sync option pads out the block with NUL bytes. The dd command will attempt to continue the input stream from where it left off, but the output now has an unknown number of NUL bytes inserted. You should stop using dd and use ddrescue , which was created for recovering data from bad media. Referenced answers for similar topics What does the two numbers mean respectively in dd's “a+b records” stats? Got “No space left on device” when cloning 1TB disk to 1.2TB disk using dd When is dd suitable for copying data? (or, when are read() and write() partial) What does dd conv=sync,noerror do? | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/651477",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/473589/"
]
} |
651,623 | How could I insert /foo/ after and only after opening brackets? (bar) should become (/foo/bar) while (/baz/bar) should not become (/baz/foo/bar) | In this simple case, you could try sed 's,(bar,(/foo/bar,' | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/651623",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/358584/"
]
} |
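If the text inside the parentheses is not always the literal bar, one possible generalisation is to insert /foo/ after any opening bracket that is not already followed by a slash; this is only a sketch for that reading of the question:

  sed 's,(\([^/]\),(/foo/\1,g' file    # (bar) becomes (/foo/bar), (/baz/bar) is left alone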
651,662 | I'm trying to check if a directory bin is inside a directory which can sometimes change. In this particular case, the version number of ruby can change (e.g. $HOME/.gem/ruby/2.6.0/bin ). Here's what I did so far: #!/usr/bin/env zshruby_gem_home="$HOME/.gem/ruby/*/bin"if [[ -d $ruby_gem_home ]]; then echo "The ruby gems directory exists!"else echo "Ruby gems directory missing!"fi I don't want to use find as this is part of a login process. What's the most elegant way, using built-in zsh/bash commands, to achieve this? Thanks! EDIT: Forgot to mention this is for a zsh script. | Use an array, and check whether the first element (i.e. [0] in bash, [1] in zsh) of the array is a directory. e.g. in bash : # the trailing slash in "$HOME"/.gem/ruby/*/bin/ ensures that# the glob matches only directories.rubygemdirs=( "$HOME"/.gem/ruby/*/bin/ )if [ -d "${rubygemdirs[0]}" ] ; then echo "At least one ruby gem dir exists"else echo "No ruby gem dirs were found"fi To work in both zsh and bash , you need to first find out which shell is currently running. That will tell you whether the first index of an array is 0 or 1 .It will also determine whether you need to use (N) with the glob to tell zsh NOT to exit if there are no matches for the glob (zsh will exit by default on glob matching errors). (at this point, it should be starting to become obvious that writing a script to work reliably in both bash and zsh is probably going to be more trouble than it's worth) if [ -n "$BASH_VERSION" ] ; then first_idx=0 rubygemdirs=( "$HOME"/.gem/ruby/*/bin/ )elif [ -n "$ZSH_VERSION" ] ; then first_idx=1 emulate sh rubygemdirs=( "$HOME"/.gem/ruby/*/bin/ ) emulate zshfiif [ -d "${rubygemdirs[$first_idx]}" ] ; then... If required, you can also iterate over the array to check whether an element matches a particular version. e.g. for d in "${rubygemdirs[@]}" ; do [[ $d =~ /2\.6\.0/ ]] && echo found gem dir for ruby 2.6.0done Note: the / s in the regex are literal forward-slashes. They don't mark the beginning and end of the regex as in sed . | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/651662",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/290185/"
]
} |
651,664 | My question is based on this scenario: serverA , serverB , user1 , and user2 . Both users are present on both the servers. user1 on serverA has SSH keypairs generated and the public key copied to the authorized_keys file on serverB . user2 on serverA has no SSH key pairs generated and also on serverB . user1 logs into serverA . user1 tries to SSH to serverB as user2 ( ssh user2@serverb ) and it works fine, no password asked. My question is this. How does this work? user2 has no public keys on serverB . I always thought that SSH authenticates the user trying to login.Does this mean that SSH on serverB authenticates the currently logged in user1 on serverA ? | Use an array, and check whether the first element (i.e. [0] in bash, [1] in zsh) of the array is a directory. e.g. in bash : # the trailing slash in "$HOME"/.gem/ruby/*/bin/ ensures that# the glob matches only directories.rubygemdirs=( "$HOME"/.gem/ruby/*/bin/ )if [ -d "${rubygemdirs[0]}" ] ; then echo "At least one ruby gem dir exists"else echo "No ruby gem dirs were found"fi To work in both zsh and bash , you need to first find out which shell is currently running. That will tell you whether the first index of an array is 0 or 1 .It will also determine whether you need to use (N) with the glob to tell zsh NOT to exit if there are no matches for the glob (zsh will exit by default on glob matching errors). (at this point, it should be starting to become obvious that writing a script to work reliably in both bash and zsh is probably going to be more trouble than it's worth) if [ -n "$BASH_VERSION" ] ; then first_idx=0 rubygemdirs=( "$HOME"/.gem/ruby/*/bin/ )elif [ -n "$ZSH_VERSION" ] ; then first_idx=1 emulate sh rubygemdirs=( "$HOME"/.gem/ruby/*/bin/ ) emulate zshfiif [ -d "${rubygemdirs[$first_idx]}" ] ; then... If required, you can also iterate over the array to check whether an element matches a particular version. e.g. for d in "${rubygemdirs[@]}" ; do [[ $d =~ /2\.6\.0/ ]] && echo found gem dir for ruby 2.6.0done Note: the / s in the regex are literal forward-slashes. They don't mark the beginning and end of the regex as in sed . | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/651664",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/473790/"
]
} |
651,676 | I've tried to use the following command, but it is missing the available applications listed in the Applications menu of the Gnome GUI/Desktop environment. I am not sure how to access the information for how these apps are being launched. In KDE or RHEL 6 Gnome (Gnome 2.x), it was easy to just right click on the application launcher and see the command it was using to launch the application. However with Gnome 3 on RHEL 8 I have had no such luck. rpm -qa | Use an array, and check whether the first element (i.e. [0] in bash, [1] in zsh) of the array is a directory. e.g. in bash : # the trailing slash in "$HOME"/.gem/ruby/*/bin/ ensures that# the glob matches only directories.rubygemdirs=( "$HOME"/.gem/ruby/*/bin/ )if [ -d "${rubygemdirs[0]}" ] ; then echo "At least one ruby gem dir exists"else echo "No ruby gem dirs were found"fi To work in both zsh and bash , you need to first find out which shell is currently running. That will tell you whether the first index of an array is 0 or 1 .It will also determine whether you need to use (N) with the glob to tell zsh NOT to exit if there are no matches for the glob (zsh will exit by default on glob matching errors). (at this point, it should be starting to become obvious that writing a script to work reliably in both bash and zsh is probably going to be more trouble than it's worth) if [ -n "$BASH_VERSION" ] ; then first_idx=0 rubygemdirs=( "$HOME"/.gem/ruby/*/bin/ )elif [ -n "$ZSH_VERSION" ] ; then first_idx=1 emulate sh rubygemdirs=( "$HOME"/.gem/ruby/*/bin/ ) emulate zshfiif [ -d "${rubygemdirs[$first_idx]}" ] ; then... If required, you can also iterate over the array to check whether an element matches a particular version. e.g. for d in "${rubygemdirs[@]}" ; do [[ $d =~ /2\.6\.0/ ]] && echo found gem dir for ruby 2.6.0done Note: the / s in the regex are literal forward-slashes. They don't mark the beginning and end of the regex as in sed . | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/651676",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/348104/"
]
} |
651,766 | Dealing with the csv produced by the concatenation of several CSVs, I am looking for the possibility to remove repeats of the header lines (present in the each concatunated CSV being identical among them). here is my CSV contained repeats of the first line: ID(Prot), ID(lig), ID(cluster), dG(rescored), dG(before), POP(before)1000, lig40, 1, 0.805136, -5.5200, 791000, lig868, 1, 0.933209, -5.6100, 421000, lig278, 1, 0.933689, -5.7600, 401000, lig619, 3, 0.946354, -7.6100, 201000, lig211, 1, 0.960048, -5.2800, 391000, lig40, 2, 0.971051, -4.9900, 401000, lig868, 3, 0.986384, -5.5000, 291000, lig12, 3, 0.988506, -6.7100, 161000, lig800, 16, 0.995574, -4.5300, 401000, lig800, 1, 0.999935, -5.7900, 221000, lig619, 1, 1.00876, -7.9000, 31000, lig619, 2, 1.02254, -7.6400, 11000, lig12, 1, 1.02723, -6.8600, 51000, lig12, 2, 1.03273, -6.8100, 41000, lig211, 2, 1.03722, -5.2000, 191000, lig211, 3, 1.03738, -5.0400, 21ID(Prot), ID(lig), ID(cluster), dG(rescored), dG(before), POP(before)10V1, lig40, 1, 0.513472, -6.4600, 15010V1, lig211, 2, 0.695981, -6.8200, 9110V1, lig278, 1, 0.764432, -7.0900, 7010V1, lig868, 1, 0.787698, -7.3100, 6210V1, lig211, 1, 0.83416, -6.8800, 5410V1, lig868, 3, 0.888408, -6.4700, 4410V1, lig278, 2, 0.915932, -6.6600, 3510V1, lig12, 1, 0.922741, -9.3600, 1910V1, lig12, 8, 0.934144, -7.4600, 2410V1, lig40, 2, 0.949955, -5.9000, 3410V1, lig800, 5, 0.964194, -5.9200, 3010V1, lig868, 2, 0.966243, -6.9100, 2010V1, lig12, 2, 0.972575, -8.3000, 1010V1, lig619, 6, 0.979168, -8.1600, 910V1, lig619, 4, 0.986202, -8.7800, 510V1, lig800, 2, 0.989599, -6.2400, 2010V1, lig619, 1, 0.989725, -9.2900, 310V1, lig12, 7, 0.991535, -7.5800, 9ID(Prot), ID(lig), ID(cluster), dG(rescored), dG(before), POP(before)10V2, lig40, 1, 0.525767, -6.4600, 14610V2, lig211, 2, 0.744702, -6.8200, 7810V2, lig278, 1, 0.749015, -7.0900, 7410V2, lig868, 1, 0.772025, -7.3100, 6610V2, lig211, 1, 0.799829, -6.8700, 6310V2, lig12, 1, 0.899345, -9.1600, 2510V2, lig12, 4, 0.899606, -7.5500, 3210V2, lig868, 3, 0.903364, -6.4800, 4010V2, lig278, 3, 0.913145, -6.6300, 3610V2, lig800, 5, 0.94576, -5.9100, 35 To post-process this CSV I need to remove repetitions of the header line ID(Prot), ID(lig), ID(cluster), dG(rescored), dG(before), POP(before) keeping the header only in the begining of the fused csv (on the first line!).I have tried to use the following awk one-liner which is looking for the 1st line and then remove its repeates awk '{first=$1;gsub("ID(Prot)","");print first,$0}' mycsv.csv > csv_without_repeats.csv however it did not recognize the header line, meaning that the pattern was not defined correctly. How my AWK code could be corrected supposed that further it should be piped to sort in other to sort the lines after the filtering of the repeats ? awk '{first=$1;gsub(/ID(Prot)?(\([-azA-Z]+\))?/,"");print first,$0}' | LC_ALL=C sort -k4,4g input.csv > sorted_and_without_repeats.csv | Here's an awk script that will skip any lines that start with ID(Prot) , unless it is the first line: awk 'NR==1 || !/^ID\(Prot\)/' file > newFile Here's the same idea in perl : perl -ne 'print if $.==1 || !/^ID\(Prot\)/' file > newFile Or, to edit the original file in place: perl -i -ne 'print if $.==1 || !/^ID\(Prot\)/' file | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/651766",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/444749/"
]
} |
651,829 | How can I match the first column from file1 according to the numbers in the second column to the file 2? File file1 k002 25k004 54k003 23 File file2 25 h23 j54 hg Desired output k002 25 hk003 23 jk004 54 hg I have no idea how to do that, and I did not find similar questions. awk 'matching {print ... $1, $2}' file1 file2 > file_des | You could perhaps do something like this: awk 'NR == FNR { x[$2]=$1; next} { print x[$1], $0 }' file1 file2 Where: FNR : The input record number in the current input file. NR : The total number of input records seen so far. Note that this will read entire file1 into memory. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/651829",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/448172/"
]
} |
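With the two sample files from the question this prints the joined lines in file2's order; if the result should be ordered by the first column as in the desired output, piping through sort is enough:

  awk 'NR == FNR { x[$2]=$1; next } { print x[$1], $0 }' file1 file2 | sort > file_des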
651,831 | every our backup files of a database are created. The files are named like this: prod20210528_1200.sql.gz pattern: prod`date +\%Y%m%d_%H%M` The pattern could be adjusted if needed. I would like to have a script that: keeps all backups for the last x (e.g. 3) days for backups older than x (e.g. 3) days only the backup from time 00:00 shall be kept for backups older than y (e.g. 14) days only one file per week (monday) shall be kept for backups older than z days (e.g. 90) only one file per month (1st of each month) shall be kept the script should rather use the filename instead of the date (created) information of the file, if that it possible the script should run every day Unfortunately, I have very little knowledge of the shell-/bash-script language.I would do something like this: if (file < today - x AND date > today - (x + 1)){ if (%H_of_file != 00 AND %M_of_file != 00) { delete file }}if (file < today - y AND date > today - (y + 1)){ if (file != Monday) { delete file }}if (file < today - z AND date > today - (z + 1)){ if (%m_of_file != 01) { delete file }}Does this makes any sense for you?Thank you very much! All the best,Phantom | You could perhaps do something like this: awk 'NR == FNR { x[$2]=$1; next} { print x[$1], $0 }' file1 file2 Where: FNR : The input record number in the current input file. NR : The total number of input records seen so far. Note that this will read entire file1 into memory. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/651831",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/473941/"
]
} |
651,907 | I am running Gnome on Pop-OS! 20.04. I am using terminator and zsh shell. I want to launch fortune only the first time I use zsh shell in an X session. I have tried those kinds of things in .zshrc : ### count the lines ofps -ax | grep zshps -ax | grep /zsh###[ -z "$(pidof zsh)" ] && fortune PS: I had a hard time finding appropriate tags for this question | Given the precise requirement “once per X session”, the most natural place to store the information that this has already been done in the current X session is in the state of the X server. There isn't really an intended place for custom global data on an X server, but there's a place for custom global configuration which can be changed at any time, which is close enough for the purpose here: X resources . The command line tool to manipulate X resources is xrdb . Beware that xrdb is quirky and the X.org implementation has extremely long-standing bugs. You probably don't want to use xrdb -load (which removes all previously loaded configuration as documented) or xrdb -remove (which removes all previously loaded configuration, unlike what it's supposed to do). Untested code for your .zshrc (the inner test is negated so that fortune only runs when the marker resource is not yet set): if [[ -n ${DISPLAY:+set} ]] && whence xrdb >/dev/null; then if ! xrdb -query | grep -q '^pietrodito\.session\.ran-fortune:.*true'; then fortune xrdb -merge <<<'pietrodito.session.ran-fortune: true' fifi To unset the option for testing: xrdb -load <(xrdb -query | grep -v '^pietrodito\.session\.ran-fortune:') Note that this is not atomic. fortune may run multiple times if you start zsh instances at almost exactly the same time. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/651907",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/250067/"
]
} |
651,908 | so I've cloned this github repository https://github.com/xiangzhai/rt5370 It's a drive for my wifi adapter.My problem is that whenever I try to use the "make" command, it returns this error /lib/modules/5.3.7-gentoo--g243aa7022-dirty/build No such file or directory This is what I've tried so far emerge --sync emerge linux-header emerge build No luck, could anyone help me please I've been trying to fix the issue for hours now | Given the precise requirement “once per X session”, the most natural place to store the information that this has already been done in the current X session is in the state of the X server. There isn't really an intended place for custom global data on an X server, but there's a place for custom global configuration which can be changed at any time, which is close enough for the purpose here: X resources . The command line tool to manipulate X resources is xrdb . Beware that xrdb is quirky and the X.org implementation has extremely long-standing bugs. You probably don't want to use xrdb -load (which removes all previously loaded configuration as documented) or xrdb -remove (which removes all previously loaded configuration, unlike what it's supposed to do). Untested code for your .zshrc : if [[ -n ${DISPLAY:+set} ]] && whence xrdb >/dev/null; then if xrdb -query | grep -q '^pietrodito\.session\.ran-fortune:.*true'; then fortune xrdb -merge <<<'pietrodito.session.ran-fortune: true' fifi To unset the option for testing: xrdb -load <(xrdb -query | grep -v '^pietrodito\.session\.ran-fortune:) Note that this is not atomic. fortune may run multiple times if you start zsh instances at almost exactly the same time. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/651908",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/473504/"
]
} |
651,921 | I have a 32GB SD Card that contains an Armbian installation for some pi gadget. I want to clone the content into a 16GB card. Using GParted, I shrank the partitions to be less than 16GB and here is the state of the SD Card as shown in fdisk . There are 2 partitions, one is the Armbian and the other one is an small FAT32 partition to share files with windows. Disk /dev/sdk: 29,74 GiB, 31914983424 bytes, 62333952 sectorsDisk model: USB3.0 CRW-SD/MSUnits: sectors of 1 * 512 = 512 bytesSector size (logical/physical): 512 bytes / 512 bytesI/O size (minimum/optimal): 512 bytes / 512 bytesDisklabel type: dosDisk identifier: 0x22563e30Device Boot Start End Sectors Size Id Type/dev/sdk1 8192 25690111 25681920 12,3G 83 Linux/dev/sdk2 25690112 26509311 819200 400M b W95 FAT32 Can you please tell me what would I need to do now to exactly clone what is on the card, including the boot partition? It is strange that the Armbian leaved 8129 sectors free, and calls it unpartitioned space, what is in that area? If I do something like: dd if=/dev/sdk of=/home/user/backup.iso It will create an image with size 32GB.... but I want it to be limited to the last sector of /dev/sdk2 . | You could use the largest end sector for count: dd bs=512 count=26509312 if=/dev/sdk of=devsdk.img Or with a different blocksize: dd bs=1M count=$((26509312*512)) iflag=count_bytes if=/dev/sdk of=devsdk.img It is strange that the Armbian leaved 8129 sectors free, and calls it unpartitioned space, what is in that area? For embedded devices, unpartitioned space can hold bootloaders and kernel images, or anything else really. But it could be as simple as alignment considerations. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/651921",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/330762/"
]
} |
651,954 | how can I sort a string like, for instance "1.3.2 1.3.1 1.2.3 1.1.1.5" to "1.1.1.5 1.2.3 1.3.1 1.3.2" So I don't know how many numbers the version consists of and I don't know how many versions there are in the string. How to solve this? Thanks | This is one of the few instances where NOT quoting a variable is useful. $ string="1.3.2 1.3.1 1.2.3 1.1.1.5"$ printf "%s\n" $string | sort -V1.1.1.51.2.31.3.11.3.2 This uses GNU sort's -V aka --version-sort option to sort the numbers. You can store that back into a variable, even the same variable ( $string ): $ string=$(printf "%s\n" $string | sort -V)$ echo $string 1.1.1.5 1.2.3 1.3.1 1.3.2 or an array: $ array=( $(printf "%s\n" $string | sort -V) )$ typeset -p arraydeclare -a array=([0]="1.1.1.5" [1]="1.2.3" [2]="1.3.1" [3]="1.3.2") BTW, you should almost certainly be using an array rather than a simple string with white-space separating multiple different values. The only real reason not to is if you're using a shell (like ash ) that doesn't support arrays. e.g. $ array=( 1.3.2 1.3.1 1.2.3 1.1.1.5 )$ typeset -p arraydeclare -a array=([0]="1.3.2" [1]="1.3.1" [2]="1.2.3" [3]="1.1.1.5")$ array=( $(printf "%s\n" "${array[@]}" | sort -V) )$ typeset -p arraydeclare -a array=([0]="1.1.1.5" [1]="1.2.3" [2]="1.3.1" [3]="1.3.2") | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/651954",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/440130/"
]
} |
651,964 | Have data and *.tsv file where data is described.Would like to use the description and rename the data accordingly. Please have a look : awk command to filter the tsv give this: common_voice_en_22090684.mp3 four common_voice_en_22090691.mp3 no common_voice_en_22090696.mp3 one go through the directory where the *.mp3 are: for i in *.mp3 ; doecho $i mv is command to rename the file and it takes two arguments (the file to change, and with what) How to use awk (to read and use the description) and mv (to rename the existing files with the passed description? So, looking at the above example, the result would be : four.mp3 no.mp3 one.mp3 It is not important to use the suggested commands.Any ideas, suggestions how to do this are most welcome! | This is one of the few instances where NOT quoting a variable is useful. $ string="1.3.2 1.3.1 1.2.3 1.1.1.5"$ printf "%s\n" $string | sort -V1.1.1.51.2.31.3.11.3.2 This uses GNU sort's -V aka --version-sort option to sort the numbers. You can store that back into a variable, even the same variable ( $string ): $ string=$(printf "%s\n" $string | sort -V)$ echo $string 1.1.1.5 1.2.3 1.3.1 1.3.2 or an array: $ array=( $(printf "%s\n" $string | sort -V) )$ typeset -p arraydeclare -a array=([0]="1.1.1.5" [1]="1.2.3" [2]="1.3.1" [3]="1.3.2") BTW, you should almost certainly be using an array rather than a simple string with white-space separating multiple different values. The only real reason not to is if you're using a shell (like ash ) that doesn't support arrays. e.g. $ array=( 1.3.2 1.3.1 1.2.3 1.1.1.5 )$ typeset -p arraydeclare -a array=([0]="1.3.2" [1]="1.3.1" [2]="1.2.3" [3]="1.1.1.5")$ array=( $(printf "%s\n" "${array[@]}" | sort -V) )$ typeset -p arraydeclare -a array=([0]="1.1.1.5" [1]="1.2.3" [2]="1.3.1" [3]="1.3.2") | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/651964",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/474061/"
]
} |
651,978 | Here is my scenario.I have a txt file. emailADD.txt.it contains email ids every line [email protected]@[email protected] And i have files in a folder abc.pdfdef.pdfhij.pdf and so on i want a script to send email to the first id with the first attachment. then another email to second id with the second attachment and so on. both email ids and the attachments will be stored in alphabetical order. the number of email ids and attachments stored will be equal. Please suggest. I have this idea from jesse_b but it doesn't involve different attachments to each email id. #!/bin/bashfile=/location/of/emailAdd.txtwhile read -r email; do #printf '%s\n' 'Hello, world!' | sudo mail -s 'This is the email subject' "$email" done < "$file" | This is one of the few instances where NOT quoting a variable is useful. $ string="1.3.2 1.3.1 1.2.3 1.1.1.5"$ printf "%s\n" $string | sort -V1.1.1.51.2.31.3.11.3.2 This uses GNU sort's -V aka --version-sort option to sort the numbers. You can store that back into a variable, even the same variable ( $string ): $ string=$(printf "%s\n" $string | sort -V)$ echo $string 1.1.1.5 1.2.3 1.3.1 1.3.2 or an array: $ array=( $(printf "%s\n" $string | sort -V) )$ typeset -p arraydeclare -a array=([0]="1.1.1.5" [1]="1.2.3" [2]="1.3.1" [3]="1.3.2") BTW, you should almost certainly be using an array rather than a simple string with white-space separating multiple different values. The only real reason not to is if you're using a shell (like ash ) that doesn't support arrays. e.g. $ array=( 1.3.2 1.3.1 1.2.3 1.1.1.5 )$ typeset -p arraydeclare -a array=([0]="1.3.2" [1]="1.3.1" [2]="1.2.3" [3]="1.1.1.5")$ array=( $(printf "%s\n" "${array[@]}" | sort -V) )$ typeset -p arraydeclare -a array=([0]="1.1.1.5" [1]="1.2.3" [2]="1.3.1" [3]="1.3.2") | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/651978",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/117977/"
]
} |
652,076 | For example, I looking for files and directories in some directory: ubuntu@example:/etc/letsencrypt$ sudo find . -name example.com*./archive/example.com./renewal/example.com.conf./live/example.comubuntu@example:/etc/letsencrypt$ How can I mark that ./archive/example.com and ./live/example.com are directories in the output above? | Print the file type along with the name with -printf "%y %p\n" : $ sudo find . -name 'example.com*' -printf "%y %p\n"d ./archive/example.comf ./renewal/example.com.confd ./live/example.com The use of -printf assumes GNU find (the most common find implementation on Linux systems). | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/652076",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/169259/"
]
} |
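If GNU find (and thus -printf) is not available, a rough fallback is to let ls report the type; the leading d in the mode column marks the directories:

  sudo find . -name 'example.com*' -exec ls -ld {} +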
652,082 | Which IP does an interface use when the host acts as a client? Let's say I have configured eth0 with 2 IP addresses: 192.168.1.7 and 192.168.1.8 The route command shows something like this: $ routeKernel IP routing tableDestination Gateway Genmask Flags Metric Ref Use Iface192.168.240.0 0.0.0.0 255.255.240.0 U 256 0 0 eth0... This basically means that when I try to connect to any host from the 192.168.240.0 network it uses the eth0 interface. Ok, but... Which IP address from that interface? If the host acts as a server and a client connects to my computer using the IP address 192.168.1.7 I understand that eth0 will use 192.168.1.7 to communicate with the client, but what if I am the client? EDIT The IP addresses are made up, I can't add another IP address to an interface in my Ubuntu WSL because I get this error: $ ip address add 192.168.1.7/24 dev eth0RTNETLINK answers: Permission denied The output of ip r s is something like this: $ ip r snone 224.0.0.0/4 dev eth0 proto unspec metric 256none 255.255.255.255 dev eth0 proto unspec metric 256none 224.0.0.0/4 dev eth1 proto unspec metric 256none 255.255.255.255 dev eth1 proto unspec metric 256... EDIT 2 I upgraded to WSL2 and now the command to add ip addresses work (with sudo). $ ip -4 a s dev eth04: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000 inet 192.168.249.181/20 brd 192.168.255.255 scope global eth0 valid_lft forever preferred_lft forever inet 192.168.1.7/24 scope global eth0 valid_lft forever preferred_lft forever inet 192.168.1.8/24 scope global secondary eth0 valid_lft forever preferred_lft forever | For Linux, the answer to your question is given here : The initial source address for an outbound packet is chosen in according to the following series of rules. The application can request a particular IP, the kernel will use the src hint from the chosen route path, or, lacking this hint, the kernel will choose the first address configured on the interface which falls in the same network as the destination address or the nexthop router. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/652082",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/203214/"
]
} |
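Two ways to inspect or influence that choice from the shell, reusing made-up addresses in the same style as the question:

  ip route get 192.168.240.42        # the src field shows the address the kernel would pick for this destination
  ping -I 192.168.1.8 192.168.1.100  # an application explicitly requesting one particular source address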
652,223 | in Linux, is there a layer/script that handles program-requests to open files? Like when you open a file-descriptor in bash: exec 3 <>/documents/foo.txt or your text-editor opens /documents/foo.txt I can't believe an editor can "just open up a file" for read/write access on its own. I rather imagine this to be a request to a "layer" ( init.d script? ) that can to begin with only open a certain amount of files and that keeps tabs on open files with their access-kinds, by what processes they are opened etc. | This layer is inside the kernel in Linux and other systems that don't stray too far from the historical Unix design (and in most non-Unix operating systems as well). This part of the kernel is called the VFS (virtual file system) layer . The role of the VFS is to manage information about open files (the correspondence between file descriptors , open file descriptions and directory entries), to parse file paths (interpreting / , . and .. ), and to dispatch operations on directory entries to the correct filesystem driver. Most filesystem drivers are in the kernel as well, but the FUSE filesystem driver allows this functionality to be delegated outside the kernel. Filesystem operations can also involve user land code if a lower level storage does so, for example if a disk filesystem is on a loop device . | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/652223",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/440130/"
]
} |
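You can watch an ordinary program hand its open request over to the kernel, rather than opening the file "on its own", with strace:

  strace -e trace=openat cat /etc/hostname    # each openat() line is a request going into the kernel's VFS layer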
652,224 | i need to get the running time of a program as soon as it is closed and i came up with this start=`date +"%M"`while [ `pgrep vlc` ];do echo vlcopen > /dev/nulldonestop=`date +"%M"`[ $stop -lt $start ]&&time=$[ 60-$start+$stop ]||time=$[ $stop-$start ]echo $time > time.txt and it does the job but this is highly inefficient and takes a lot of cup usage how do i do this more efficiently | This layer is inside the kernel in Linux and other systems that don't stray too far from the historical Unix design (and in most non-Unix operating systems as well). This part of the kernel is called the VFS (virtual file system) layer . The role of the VFS is to manage information about open files (the correspondence between file descriptors , open file descriptions and directory entries), to parse file paths (interpreting / , . and .. ), and to dispatch operations on directory entries to the correct filesystem driver. Most filesystem drivers are in the kernel as well, but the FUSE filesystem driver allows this functionality to be delegated outside the kernel. Filesystem operations can also involve user land code if a lower level storage does so, for example if a disk filesystem is on a loop device . | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/652224",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/472653/"
]
} |
652,253 | Why is there no enp1s0 ethernet interface on my OS? ip -brief link |cut -d" " -f1loenp6s0 Why can't I get this result instead? ip -brief link |cut -d" " -f1loenp1s0 | The ethernet interface name enp6s0 means the PCI bus location (as indicated by e.g. the lspci command) of that NIC is 06:00.0 . If you don't have a network card at PCI bus location 01:00.0 , you won't get interface name enp1s0 . On many desktop motherboards, the PCI bus location 01:00.0 refers to the first long (16x) PCIe slot, which is the recommended installation location of the first add-on GPU card. Of course, if you set custom names for your network interfaces, you can name them anything you like, but if you deliberately break the relationship between the enp* names and the corresponding PCI bus locations without a very good reason, you would just cause confusion for yourself (and potentially other administrators of the system) in the future. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/652253",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/102745/"
]
} |
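To confirm which PCI address a given interface name maps to (using the enp6s0 from the question as the example):

  readlink /sys/class/net/enp6s0/device    # the link target ends in the PCI address, e.g. 0000:06:00.0
  ethtool -i enp6s0 | grep bus-info        # bus-info: 0000:06:00.0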
652,279 | I'm trying to write a script that notifies me when a static web page has changed. To do it, I'm using wget to download the web page, and diff to check whether it has changed or not. I'm running an Ubuntu 20.04 LTS virtual machine. Here is the example: $ wget --quiet https://twiki.di.uniroma1.it/twiki/view/Reti_Avanzate/InternetOfThings2021 -O file1$ wget --quiet https://twiki.di.uniroma1.it/twiki/view/Reti_Avanzate/InternetOfThings2021 -O file2$ diff -q file1 file2Files file1 and file2 differ As you can see, diff reports differences between the two files. Why? Even if I try to compare them with diff -y they look the same for me. UPDATE Looking for differences with git diff --color-words -- file1 file2 gave the following result: Apparently, there's a field in which the timestamp is added, and in one of the two files there's a <!--GENERATED_HEADERS--> which is absent in the other. Any idea on how to solve it? | You can solve this problem by using w3m with -dump option that ignores tags while rendering the page. $ w3m -dump https://twiki.di.uniroma1.it/twiki/view/Reti_Avanzate/InternetOfThings2021 > file1$ w3m -dump https://twiki.di.uniroma1.it/twiki/view/Reti_Avanzate/InternetOfThings2021 > file2$ if cmp -s file1 file2; then echo "Files are not different"; fi Files are not different $ | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/652279",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/410261/"
]
} |
652,299 | I'm creating an AMI of Ubuntu 20.04 (Focal Fossa), and I want the default Python version to be 3.6. I installed Python 3.6, also the right pip, and then set the alternative like so: update-alternatives --install \ /usr/bin/python3 \ python3 \ /usr/bin/python3.6 \ 10 But then I'm running into many issues related to CPython packages, such as python3-apt (apt_pkg, apt_inst), netifaces , and probably many more I didn't catch yet. They are all located on /usr/lib/python3/dist-packages and the package names are in this format: {name}.cpython-38-x86_64-linux-gnu.so Which makes sense, since the default Python version of Ubuntu 20.04 is Python 3.8. The immediate solution from googling is linking the name like so: ln -s {name}.cpython-38-x86_64-linux-gnu.so {name}.so I.e.: ln -s apt_pkg.cpython-38-x86_64-linux-gnu.so apt_pkg.soln -s netifaces.cpython-38-x86_64-linux-gnu.so netifaces.so I tried reinstalling the relevant packages ( apt install --reinstall python3-apt ) when the default Python version is 3.6, but it didn't work, and this solution of linking the *.so files is not scalable! Is there a way to make Python 3.6 work with the system's default CPython packages? | As you discovered, the system does rely on the system version of Python being as it expects. If you really want a system with Python 3.6, your best bet is to find a (ideally, still supported) release using Python 3.6: in your case, Ubuntu 18.04. If you want to provide Python 3.6 for programs running on your AMI, you could look into using virtual environments instead of replacing the system Python. pyenv is a good place to start. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/652299",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/150289/"
]
} |
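A rough sketch of the pyenv route, which leaves the system /usr/bin/python3 (and everything that depends on it) untouched; 3.6.15 (the final 3.6 release) and the project directory are assumptions for the example:

  curl -fsSL https://pyenv.run | bash          # installer; add the init lines it prints to your shell profile
  pyenv install 3.6.15                         # builds a private CPython 3.6
  cd /path/to/project && pyenv local 3.6.15    # only this directory resolves python3 to 3.6 via the shims
  python3 --version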
652,316 | The UNIX and Linux System Administration Handbook says: man maintains a cache of formatted pages in /var/cache/man or/usr/share/man if the appropriate directories are writable; however,this is a security risk. Most systems preformat the man pages once atinstallation time (see catman) or not at all. What is the "security risk(s)" here? There is the obvious security risk that someone can alter the man pages to trick a (novice) user into running something undesirable, as pointed out by Ulrich Schwartz in their answer , but I am looking for other ways this could be exploited. Thanks! | It's not safe to let users manipulate the content of man pages (or any data really) that will also be used by other users, because there is a danger of cache poisoning . As the old BOFH joke goes: To learn everything about your system, from the root up, use the "read manual" command with the "read faster" switch like this: rm -rf / (To be clear, do not run this command.) But if I control the man page cache, you might type man rm to see a cached fake man page that tells you rm is indeed "rm - read manual" and not "rm - remove files or directories". Or even output terminal escape sequences that inject code into your shell . | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/652316",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/459222/"
]
} |
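A quick way to gauge the exposure on a given machine is to check who can actually write the cache and manual directories; on many distributions the cache is owned by a dedicated man user for exactly this reason:

  ls -ld /var/cache/man /usr/share/man    # world- or group-writable here is what makes cache/page poisoning possible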
652,331 | I'm trying to find the first non-zero byte (starting from an optional offset) on a block device using dd and print its offset, but I am stuck. I didn't mention dd in the title as I figured there might be a more appropriate tool than dd to do this, but I figured dd should be a good start. If you know of a more appropriate tool and/or more efficient way to reach my goal, that's fine too. In the meantime I'll show you how far I've come with dd in bash, so far. #!/bin/bash# infile is just a temporary test file for now, which will be replaced with /dev/sdb, for instanceinfile=test.txtoffset=0while true; do byte=`dd status='none' bs=1 count=1 if="$infile" skip=$offset` ret=$? # the following doesn't appear to work # ret is always 0, even when the end of file/device is reached # how do I correctly determine if dd has reached the end of file/device? if [ $ret -gt 0 ]; then echo 'error, or end of file reached' break fi # I don't know how to correctly determine if the byte is non-zero # how do I determine if the read byte is non-zero? if [ $byte ???? ]; then echo "non-zero byte found at $offset" break fi ((++offset))done As you can see, I'm stuck with two issues that I don't know how to solve: a. How do I make the while loop break when dd has reached the end of the file/device? dd gives an exit code of 0 , where I expected a non-zero exit code instead. b. How do I evaluate whether the byte that dd read and returns on stdout is non-zero? I think I've read somewhere that special care should be taken in bash with \0 bytes as well, but I'm not even sure this pertains to this situation. Can you give me some hints on how to proceed, or perhaps suggest and alternative way to achieve my goal? | You can do this using cmp , comparing to /dev/zero : cmp /path/to/block-device /dev/zero cmp will give you the offset of the first non-zero byte. If you want to skip bytes, you can use GNU cmp ’s -i option, or if you’re not using GNU cmp , feed it the appropriate data using dd : cmp -i 100 /path/to/block-device /dev/zerodd if=/path/to/block-device bs=1 skip=100 | cmp - /dev/zero This will work with any file, not just block devices. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/652331",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/474454/"
]
} |
652,375 | I would like to convert the ZFS output of "10.9T" to actual bytes, using something in a line or two, rather than run generic math functions, and if conditions for T , G , M , etc.. Is there an efficient way to do this? For now, I have something like this: MINFREE="50G"POOLSIZE=`zpool list $POOLNAME -o size` #Size 10.9TPOOLSIZE=$(echo "$POOLSIZE" | grep -e [[:digit:))] #10.9TPOOLFREE=500M #as an examplelet p=POOLSIZE x=POOLFREE y=MINFREE z=POOLSIZE; CALC=$(expr "echo $((x / y))")if [ "${CALC}" < 1 ]; then # we are less than our min free space echo alertfi This produces an error: can't run the expression on 10.9T , or 50G because they arent numbers. Is there a known bash function for this? I also like the convenience of specifying it like i did there in the MINFREE var at the top. So an easy way to convert would be nice. This is what I was hoping to avoid (making case for each letter), the script looks clean though. Edit :Thanks for all the comments! Here is the code I have now. , relevant parts atleast; POOLNAME=sanINFORMAT=auto#tip; specify in Gi, Ti, etc.. (optional)MINFREE=500GiOUTFORMAT=iecNOW=`date`;LOGPATH=/var/log/zfs/zcheck.logBOLD=$(tput bold)BRED=${txtbld}$(tput setaf 1)BGREEN=${txtbld}$(tput setaf 2)BYELLOW=${txtbld}$(tput setaf 3)TXTRESET=$(tput sgr0);# ZFS Freespace check#poolsize, how large is itPOOLSIZE=$(zpool list $POOLNAME -o size -p)POOLSIZE=$(echo "$POOLSIZE" | grep -e [[:digit:]])POOLSIZE=$(numfmt --from=iec $POOLSIZE)#echo poolsize $POOLSIZE#poolfree, how much free space leftPOOLFREE=`zpool list $POOLNAME -o free`#POOLFREE=$(echo "$POOLFREE" | grep -e [[:digit:]]*.[[:digit:]].)POOLFREE=$(echo "$POOLFREE" | grep -e [[:digit:]])POOLFREE=$(numfmt --from=$INFORMAT $POOLFREE)#echo poolfree $POOLFREE#grep -e "vault..[[:digit:]]*.[[:digit:]].")#minfree, how low can we go, before alertingMINFREE=$(numfmt --from=iec-i $MINFREE)#echo minfree $MINFREE#FORMATTED DATA USED FOR DISPLAYING THINGS#echo formattiing sizes:F_POOLSIZE=$(numfmt --from=$INFORMAT --to=$OUTFORMAT $POOLSIZE)F_POOLFREE=$(numfmt --from=$INFORMAT --to=$OUTFORMAT $POOLFREE)F_MINFREE=$(numfmt --from=$INFORMAT --to=$OUTFORMAT $MINFREE)F_MINFREE=$(numfmt --from=$INFORMAT --to=$OUTFORMAT $MINFREE)#echoprintf "${BGREEN}$F_POOLSIZE - current pool size"printf "\n$F_MINFREE - mininium freespace allowed/as specified"# OPERATE/CALCULATE SPACE TEST#echo ... calculating specs, please wait..#let f=$POOLFREE m=$MINFREE x=m/f;declare -i x=$POOLFREE/$MINFREE;# will be 0 if has reached low threshold, if poolfree/minfree#echo $x#IF_CALC=$(numfmt --to=iec-i $CALC)if ! [ "${x}" == 1 ]; then #printf "\n${BRED}ALERT! POOL FREESPACE is low! ($F_POOLFREE)" printf "\n${BRED}$F_POOLFREE ${BYELLOW}- current freespace! ${BRED}(ALERT!}${BYELLOW} Is below your preset threshold!"; echoelse printf "\nPOOLFREE - ${BGREEN}$F_POOLFREE${TXTRESET}- current freespace"; #sleep 3fi | You can use numfmt (in Debian and derivatives it is part of coreutils so it should be there already): numfmt - Convert numbers from/to human-readable strings $ numfmt --from=iec-i 50.1Gi53794465383 it can also read the value from stdin $ echo "50.1Gi" | numfmt --from=iec-i53794465383 Be careful, it takes into account the locale for the decimal separator. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/652375",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/111873/"
]
} |
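Tying the answer above back to the original free-space check, a minimal untested sketch; the pool name and threshold are placeholders, and it assumes zpool list accepts -H (no header) and -p (exact byte counts) — verify those flags on your ZFS version:

POOLNAME=san                              # placeholder pool name
MINFREE=$(numfmt --from=iec 50G)          # threshold, converted to bytes
POOLFREE=$(zpool list -H -p -o free "$POOLNAME")
if [ "$POOLFREE" -lt "$MINFREE" ]; then
    echo "alert: only $(numfmt --to=iec "$POOLFREE") free on $POOLNAME"
fi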
652,645 | I am wondering why Ubuntu is called a "GNU/Linux" distro even though it offers proprietary graphics drivers (and some other things) which are not under the GNU GPL license. | GNU refers to the programs that are part of the GNU project, which most distributions, such as Ubuntu, include. For example, Ubuntu ships coreutils, which is a GNU package suite. Having proprietary parts does not exclude the distribution from including GNU pieces. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/652645",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/474808/"
]
} |
652,698 | I have the following in a txt file: <ol><li><b><a href="/page1/Mark_Yato" title="Mark Yato">Mark Yato</a> ft. MarkAm & <a href="/page1/Giv%C4%93on" title="Givēon">Givēon</a> - <a href="/page1/Mark_Yato:Thuieo" title="Mark Yato:Thuieo">Thuieo</a> (7)</b></li><li><b><a href="/page1/The_Central" title="The Central">The Central</a> - <a href="/page1/The_Central:AHTIOe oie" title="The Central:AHTIOe oie">AHTIOe oie</a> (7)</b></li><li><b><a href="/page1/Taa_Too_A" title="Taa Too A">Taa Too A</a> - <a href="/page1/Taa_Too_A:ryhwtyw w" title="Taa Too A:ryhwtyw w">ryhwtyw w</a> (8)</b></li> and am trying to make it output as the following: Mark Yato ft. MarkAm & Givēon - ThuieoThe Central - AHTIOe oieTaa Too A - ryhwtyw w To achieve this, I thought I would try removing '<', '>' and everything between them so it's left with just the list I'm trying to get. I tried the following sed command already: sed 's/<[^()]*>//g' but this is outputting just the following: (7)(7)(8) What am I doing wrong and how can I fix the sed command or translate it into awk if it is better use for that? | Parsing markup with regular expressions is notoriously problematic . While not an issue with your sample data, angle brackets may appear in tag attributes, comments and possibly other places, making regular expressions that match from < to > unreliable. You should resort to tools that implement a markup parser. For instance, using pandoc (version >= 2.8) with your sample data (without adding the missing </ol> tag): $ pandoc -f html -t plain file Mark Yato ft. MarkAm & Givēon - Thuieo (7)The Central - AHTIOe oie (7)Taa Too A - ryhwtyw w (8) You may then easily post-process this output as regular text to remove empty lines and other unwanted parts: $ pandoc -f html -t plain file | sed -e '/^$/d' -e 's/[[:blank:]]*([[:digit:]]*)$//'Mark Yato ft. MarkAm & Givēon - ThuieoThe Central - AHTIOe oieTaa Too A - ryhwtyw w Note that, before version 2.8, pandoc used to convert any emphasized text to all-caps when generating output in plain format. The <b> tag in your list items would trigger this behavior (more on this in the changelog or the relevant commit on GitHub). Depending on your actual input data, a workaround could be to use markdown as pandoc 's input format, either explicitly: pandoc -f markdown -t plain file or implicitly, considering it is what pandoc automatically defaults to ( pandoc -t plain file ). | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/652698",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/398851/"
]
} |
652,822 | I'm trying to do some text processing on a file using a bash script. The goal is to take all of the lines starting with "field:" indented under an 'attribute:' label and swap them with the associated line starting with "- attr:" that follows. So far I think I have regex patterns that should match the labels: / *field:(.*)/g / *- attr:(.*)/g But I haven't had any success with the logic to parse through the desired fields and get them to swap correctly. Example Input Text - metric: 'example.metric.1' attributes: field: 'example 1' - attr: 'example1' field: 'example 2' - attr: 'example2' field: 'example 3' - attr: 'example3' field: 'example 4' - attr: 'example4'- metric: 'example.metric.2' attributes: field: 'example 5' - attr: 'example5' field: 'example 6' - attr: 'example6' field: 'example 7' - attr: 'example7'- metric: 'example.metric.3'... Desired Output - metric: 'example.metric.1' attributes: - attr: 'example1' field: 'example 1' - attr: 'example2' field: 'example 2' - attr: 'example3' field: 'example 3' - attr: 'example4' field: 'example 4'- metric: 'example.metric.2' attributes: - attr: 'example5' field: 'example 5' - attr: 'example6' field: 'example 6' - attr: 'example7' field: 'example 7'- metric: 'example.metric.3'... How would I go about accomplishing this? | Using any awk in any shell on every Unix box: $ awk '$1=="field:"{s=ORS $0; next} {print $0 s; s=""}' file- metric: 'example.metric.1' attributes: - attr: 'example1' field: 'example 1' - attr: 'example2' field: 'example 2' - attr: 'example3' field: 'example 3' - attr: 'example4' field: 'example 4'- metric: 'example.metric.2' attributes: - attr: 'example5' field: 'example 5' - attr: 'example6' field: 'example 6' - attr: 'example7' field: 'example 7'- metric: 'example.metric.3' if you might not have a space after field: on some lines or just have a burning desire to use a regexp for some reason then change $1=="field:" to $1~/^field:/ or /^[[:space:]]*field:/ , whichever you prefer. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/652822",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/475989/"
]
} |
652,916 | when I'm working with csv, unwanted commas(',') is misleading my csv file, in result it gives the inconsistency. please find in details below. My sample csv file: 1|a,b|41|c,d|41|e,f|41|g,h|41|i,j|4 I want the end result As: 1|"a,b"|41|"c,d"|41|"e,f"|41|"g,h"|41|"i,j"|4 After adding the quotes I will replace "|" with "," so that my csv will work as I expected. I used below commnd but its not giving as exprected. sed -e 's/,/"&"/' file1.txt | Using csvformat from csvkit , and assuming that the end result should be a CSV file with comma as delimiter (as described in the text of the question): $ csvformat -d '|' file1,"a,b",41,"c,d",41,"e,f",41,"g,h",41,"i,j",4 This reformats the CSV file from having | -characters as delimiter to having the default comma as delimiter. In doing so, it properly quotes the fields that need quoting. This also properly handles fields with embedded newlines: $ cat file1|a,b|41|c,d|41|e,f|41|g,h|41|i,j|42|"line 1,line2"|5 $ csvformat -d '|' file1,"a,b",41,"c,d",41,"e,f",41,"g,h",41,"i,j",42,"line 1,line2",5 If you have a document in some structured document format, such as CSV, JSON, XML, YAML, TOML, etc., there is no reason not to use a parser for that document format to parse that document. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/652916",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/476078/"
]
} |
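If csvkit is not available, a rough awk alternative for the sample above — it assumes exactly three |-delimited fields per record, no embedded newlines, and that only the middle field ever needs quoting:

$ awk -F'|' -v OFS=',' '{ $2 = "\"" $2 "\""; print }' file1.txt
1,"a,b",4
1,"c,d",4
1,"e,f",4
1,"g,h",4
1,"i,j",4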
652,918 | When I launch a terminal in ubuntu I get following path on echo $PATH /home/myuser/anaconda3/condabin:/home/myuser/.local/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin I want to remove those paths with games keyword from my $PATH , but I couldn't find from where the path like /usr/games , /user/local/games , /usr/sbin are set. I tried to grep by grep xxx ~/.* -l This gives files which set /usr/bin , /usr/local/bin etc.. But not for the above mentioned games and sbin paths.How do I find from where it's set? | Using csvformat from csvkit , and assuming that the end result should be a CSV file with comma as delimiter (as described in the text of the question): $ csvformat -d '|' file1,"a,b",41,"c,d",41,"e,f",41,"g,h",41,"i,j",4 This reformats the CSV file from having | -characters as delimiter to having the default comma as delimiter. In doing so, it properly quotes the fields that need quoting. This also properly handles fields with embedded newlines: $ cat file1|a,b|41|c,d|41|e,f|41|g,h|41|i,j|42|"line 1,line2"|5 $ csvformat -d '|' file1,"a,b",41,"c,d",41,"e,f",41,"g,h",41,"i,j",42,"line 1,line2",5 If you have a document in some structured document format, such as CSV, JSON, XML, YAML, TOML, etc., there is no reason not to use a parser for that document format to parse that document. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/652918",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/144663/"
]
} |
652,968 | I have a file like the following <g> Good wheatear </g> other parts of line <g> The farm land is to be sold </g> other parts of line<g> knock knock </g> other parts of line I want my output to be like this: <g> Good wheatear </g> <g> The farm land is to be sold </g><g> knock knock </g> i.e. print the content between <g> and </g> tags including the tags I have tried this command: awk '/<s>/, /<\/s>/' trsTest.txt But it prints the whole line. How to print the content between the tags ? | With awk it could be: $ awk -v FS="</?g>" '{print $2}' trsTest.txt Good wheatear The farm land is to be sold knock knock Or if you want to keep the tags: $ awk -v FS="</g> " '{print $1 FS}' trsTest.txt<g> Good wheatear </g><g> The farm land is to be sold </g><g> knock knock </g> | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/652968",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/476118/"
]
} |
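For completeness, grep implementations that support -o (GNU grep, for instance) offer another way to keep the tags, provided each line contains exactly one <g>…</g> pair:

$ grep -o '<g>.*</g>' trsTest.txt
<g> Good wheatear </g>
<g> The farm land is to be sold </g>
<g> knock knock </g>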
653,128 | I'd like to start storing the SMART data over time and see any trends based on disk ID/serial number. Something that would let me, for example just get the smart information from disks once a day and put it in a database. Is there already a tool for this in Linux, or do I have to roll my own? | There are already tools which can do this, often as part of a more general monitoring tool. One I find useful is Munin , which has a SMART plugin to trace the available attributes: Munin is available in many distributions. smartmontools itself contains a tool which can log attributes periodically, smartd . You might find that that’s all you need. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/653128",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/19912/"
]
} |
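If you do end up rolling your own collection on top of smartmontools, a minimal cron-able sketch follows. It is untested and full of assumptions: the device list, log path and date format are placeholders, and the awk parse relies on ATA-style smartctl -A output where each attribute row starts with a numeric ID and ends with the raw value — adjust for your drives:

#!/bin/sh
# Append one CSV row per attribute per run: timestamp,device,id,name,raw_value
log=/var/log/smart-history.csv
ts=$(date +%Y-%m-%dT%H:%M:%S)
for dev in /dev/sda /dev/sdb; do          # adjust to your drives
    smartctl -A "$dev" |
        awk -v ts="$ts" -v dev="$dev" '$1 ~ /^[0-9]+$/ { print ts "," dev "," $1 "," $2 "," $NF }'
done >> "$log"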
653,202 | Contents of file filelist : /some/path/*.txt/other/path/*.dat/third/path/example.doc I want to list those files, so I do: cat filelist | xargs ls But instead of expanding those globs, I get: ls: cannot access '/some/path/*.txt': No such file or directory ls: cannot access '/other/path/*.dat': No such file or directory /third/path/example.doc | Shells expand globs. Here, that's one of the very rare cases where the implicit split+glob operator invoked upon unquoted command substitution in Bourne-like shells other than zsh can be useful: IFS='' # split on newline onlyset +o noglob # make sure globbing is not disabledls -ld -- $(cat filelist) # split+glob In zsh , you'd do: ls -ld -- ${(f)~"$(<filelist)"} Where f is the parameter expansion flag to split on linefeeds, and ~ requests globbing which is otherwise not done by default upon parameter expansion nor command substitution. Note that if the list of matching files is large, you can run into an Argument list too long error (a limitation of the execve() system call on most systems), which xargs would have otherwise worked around. In zsh , you can use zargs instead: autoload zargszargs --eof= -- ${(f)~"$(<filelist)"} '' ls -ld -- Where zargs will split the list and run ls several times to avoid the limit as necessary as xargs would. Or you could pass the list to a command that is builtin (so doesn't involve the execve() system call): To just print the list of files: print -rC1 -- ${(f)~"$(<filelist)"} Or to feed it to xargs NUL-delimited: print -rNC1 -- ${(f)~"$(<filelist)"} | xargs -r0 ls -ld -- Note that if any of the globs fails to match a file, in zsh , you'll get an error. If you'd rather those globs to expand to nothing, you'd add the N glob qualifier to the globs (which enables nullglob on a per-glob basis): print -rNC1 -- ${(f)^~"$(<filelist)"}(N) | xargs -r0 ls -ld -- Adding that (N) would also turn all the lines without glob operators into globs allowing to filter out files referenced by path and that don't exist; it would however mean you can't use glob qualifiers in the globs in filelist unless you express them as (#q...) and enable the extendedglob option. Also beware that as qualifiers can run arbitrary code, it's important the contents of the filelist file comes from a trusted source. In other Bourne-like shells, including bash , globs that don't match are left as-is, so would be passed literally to ls which would likely report an error that the corresponding file doesn't exist. In bash , you could use the nullglob option (which it copied from zsh) and handle the case where none of the globs match specially: shopt -s nullglobIFS=$'\n'set +o noglobset -- $(<filelist)(( $# == 0 )) || printf '%s\0' "$@" | xargs -r0 ls -ld -- bash , doesn't have any equivalent for zsh 's glob qualifiers. To make sure lines without glob operators (such as your /third/path/example.doc ) are treated as globs and removed if they don't correspond to an actual file, you could add @() to the lines (requires extglob ). That won't work however for line that end in / characters. You could however add @() to the last non- / character and rely on the fact that / always exists shopt -s nullglob extglobIFS=$'\n'set +o noglobset -- $(LC_ALL=C sed 's|.*[^/]|&@()|' filelist)(( $# == 0 )) || printf '%s\0' "$@" | xargs -r0 ls -ld -- In any case, note that the list of supported glob operators vary greatly with the shell. The only one you're using in your sample ( * ) should be supported by all though. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/653202",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/291147/"
]
} |
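A slower but portable sh sketch of the same idea, expanding each line's pattern in a loop. It assumes the patterns themselves contain no whitespace (the unquoted expansion is also what triggers globbing) and it silently skips patterns that match nothing:

while IFS= read -r pattern; do
    for f in $pattern; do                 # unquoted on purpose: let the shell glob
        [ -e "$f" ] && printf '%s\n' "$f"
    done
done < filelist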
653,265 | I want to add a dummy IP address but only after two consecutive duplicate lines are found. I am working on a Linux system and this is my input file: IP_Remote_Address Address : 192.168.1.1 IP_Remote_Address Address : 192.168.1.2 IP_Remote_Address Address : 192.168.1.3 IP_Remote_Address IP_Remote_Address Address : 192.168.1.4 IP_Remote_Address Address : 192.168.1.5 IP_Remote_Address Address : 192.168.1.6 IP_Remote_Address Address : 192.168.1.7 IP_Remote_Address IP_Remote_Address Address : 192.168.1.8 My desired output: IP_Remote_Address Address : 192.168.1.1 IP_Remote_Address Address : 192.168.1.2 IP_Remote_Address Address : 192.168.1.3 IP_Remote_Address Address : NOT_FOUND IP_Remote_Address Address : 192.168.1.4 IP_Remote_Address Address : 192.168.1.5 IP_Remote_Address Address : 192.168.1.6 IP_Remote_Address Address : 192.168.1.7 IP_Remote_Address Address : NOT_FOUND IP_Remote Address Address : 192.168.1.8 I have this line line but it replaces only the first duplicate found: awk '{print $0; if((getline nl) > 0){ print ($0!="IP_Remote_Address" && $0 == nl)? nl=$0"INSERT_NOT_FOUND_ABOVE" : nl }}' file.txt I can later then use sed to replace the string INSERT_NOT_FOUND_ABOVE" with this: sed '/INSERT_NOT_FOUND_ABOVE/i Address : NOT_FOUND' file.txt > new_file.txt My only issue is that it can't detect all consecutive duplicates; it finds only the first one. | awk : awk 'p==$0{print " Address : NOT_FOUND"}{p=$0}1' A rather naive solution. p==$0 IF p == current line THEN print not found p=$0 SET p = current line 1 : print Handles consecutive duplicate lines. And as noted by @san-fran in comments under question, "The last IP may be missing too, right?" – Ups. Should have thought of that. So: awk -v e='Address : NOT_FOUND' 'p==$0{print e}{p=$0}END{if($1 ~ "IP")print e}1' Set e = text to inject p==$0 IF p == current line THEN print variable e p=$0 SET p = current line END print e if current line contains IP 1 : print Here the error-string has been added as a variable as we use it twice. (And trimmed for readability in this post). | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/653265",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/324252/"
]
} |
653,298 | I'm using cygwin to connect to a tiny VM with limited RAM (512M). Also, I'm trying to import to a sqlite3 db from a 4GB csv file and I don't have any clue on import, except 2 lines (8.717.201 total) Seems that I have a control-m char (^M) on 2 lines, so it break csv format and fail to import. When I try to use sed 's|,^M|,|' file.csv control-m char is write textual ASCII (2 chars), so it doesnt search-replace. When I do it with a test file, opened in vi for search and replace, I can see that is write as code (blue colored ^M and act like a single char) How can I fix the csv file? (or how I can write again the control-m sequence on cygwin? Example problematic line: $ cat -e testkeyword3,keyword1,keyword4$keyword1,keyword2,keyword3^M$,keyword4$keyword5,keyword1,keyword2$ How should be: $ cat -e testkeyword3,keyword1,keyword4$keyword1,keyword2,keyword3,keyword4$keyword5,keyword1,keyword2$ PS: As you can see, english is not my native language, so.. sorry for any mistake ¯_(ツ)_/¯ | awk : awk 'p==$0{print " Address : NOT_FOUND"}{p=$0}1' A rather naive solution. p==$0 IF p == current line THEN print not found p=$0 SET p = current line 1 : print Handles consecutive duplicate lines. And as noted by @san-fran in comments under question, "The last IP may be missing too, right?" – Ups. Should have thought of that. So: awk -v e='Address : NOT_FOUND' 'p==$0{print e}{p=$0}END{if($1 ~ "IP")print e}1' Set e = text to inject p==$0 IF p == current line THEN print variable e p=$0 SET p = current line END print e if current line contains IP 1 : print Here the error-string has been added as a variable as we use it twice. (And trimmed for readability in this post). | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/653298",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/55270/"
]
} |
653,299 | I know there are some posts to join multiple files but it took so much time. I have multiple files in which the first columns are for the patients' IDs, then I want to join multiple files, based on the ID numbers in the first column. The codes as below still work, but it took so much time. Thus, does anybody know more efficient way of doing this process? for PHENO in A B C D E F G H I J K L Mdo join -a1 -a2 -e 1 -o auto chr2_${PHENO} chr3_${PHENO} >${PHENO}donefor PHENO in A B C D E F G H I J K L Mdo for file in chr5_${PHENO} chr11_${PHENO} chr14_${PHENO} chr20_${PHENO} \ chr21_${PHENO} chr22_${PHENO} chr6_${PHENO} chr9_${PHENO} chr13_${PHENO} \ chr18-1_${PHENO} chr18-2_${PHENO} chr1-1_${PHENO} chr1-2_${PHENO} \ chr1-3_${PHENO} chr8-1_${PHENO} chr8-2_${PHENO} chr17-1_${PHENO} \ chr17-2_${PHENO} chr19-1_${PHENO} chr19-2_${PHENO} chr19-3_${PHENO} \ chr19-4_${PHENO} chr4-1_${PHENO} chr4-2_${PHENO} chr4-3_${PHENO} \ chr4-4_${PHENO} chr7-1_${PHENO} chr7-2_${PHENO} chr7-3_${PHENO} \ chr10-1_${PHENO} chr10-2_${PHENO} chr10-3_${PHENO} chr10-4_${PHENO} \ chr12-1_${PHENO} chr12-2_${PHENO} chr12-3_${PHENO} chr12-4_${PHENO} \ chr15-1_${PHENO} chr15-2_${PHENO} chr15-3_${PHENO} chr16-1_${PHENO} \ chr16-2_${PHENO} chr16-3_${PHENO}; do join -a1 -a2 -e 1 -o auto ${PHENO} "$file" >${PHENO}.1 mv ${PHENO}.1 ${PHENO} donedone All the files as as below. 150001 patients, showing whether they have a disease or not as 0 or 1.For example, chr2_${PHENO} ID Disease1 02 13 0 4 15 1....150000 0 150001 1 For example, chr3_${PHENO} ID Disease1 12 13 1 4 05 0....150000 0 150001 0 Thank you in advance! | awk : awk 'p==$0{print " Address : NOT_FOUND"}{p=$0}1' A rather naive solution. p==$0 IF p == current line THEN print not found p=$0 SET p = current line 1 : print Handles consecutive duplicate lines. And as noted by @san-fran in comments under question, "The last IP may be missing too, right?" – Ups. Should have thought of that. So: awk -v e='Address : NOT_FOUND' 'p==$0{print e}{p=$0}END{if($1 ~ "IP")print e}1' Set e = text to inject p==$0 IF p == current line THEN print variable e p=$0 SET p = current line END print e if current line contains IP 1 : print Here the error-string has been added as a variable as we use it twice. (And trimmed for readability in this post). | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/653299",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/328248/"
]
} |
653,327 | This is wrong:

for f in a*.dat; do
    awk '.....' file1 "$f" > temp
    awk '.....' temp > "$f_out"
done

I would like to use a*.dat as input and then write the output with the suffix _out. Many thanks | The _ is seen as part of the variable name, so you are using the variable f_out instead of f with _out appended. Using {} will solve your problem:

for f in a*.dat; do
    awk '.....' "$f" > "${f}_out"
done | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/653327",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/448172/"
]
} |
653,413 | I've added the return value of the last command to PS1 (aka "the prompt") in my .bashrc . Now I'd like to have it shown only if the value is nonzero. Android's shell has it:

${|
  local e=$?
  (( e )) && REPLY+="$e|"
  return $e
}

Question: how to convert it to bash? | PS1='${?#0}$ ' It uses a special form of parameter expansion , ${?#0} , which means: "Remove the character zero if it is the first character of ${?} , the exit code of the previous command." You can also change the color of the prompt if the last exit code was not zero: PS1='\[\e[0;$(($?==0?0:91))m\]$ \[\e[0m\]' That uses an if-else ternary expression $(($?==0?0:91)) that makes the color code 0;91m (red, see color codes) if the last command exited with a non-zero status, or 0;0m (your default color) otherwise. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/653413",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/57678/"
]
} |
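Closer in spirit to the mksh snippet in the question, the same effect can be had in bash with PROMPT_COMMAND — a small sketch:

set_prompt() {
    local e=$?
    PS1='\$ '
    (( e )) && PS1="[$e]$PS1"    # prepend the exit code only when it is non-zero
}
PROMPT_COMMAND=set_prompt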
653,424 | I have a file with 10023 lines. I would like to copy every 1000 lines from the file and paste it over to new file which can be named as 1.txt and 2.txt and so on. I want to move the files 1.txt 2.txt and so on into newly created folders 1, 2 etc. Can someone please help me in this regard. Thank you | This is what split is for. To split the file into multiple files with 1000 lines (or less, for the last one), you can do: split -d -l 1000 file '' That will split the file into files of 1000 lines each ( -l 1000 ), with numerical suffixes with .txt as an additional suffix and using an empty prefix ( '' ). The result for a file with 10023 lines will be 11 files named 00 , 01 , ..., 10 : $ wc -l file10023 file$ split -d -l 1000 --additional-suffix='.txt' file ''$ ls00.txt 02.txt 04.txt 06.txt 08.txt 10.txt01.txt 03.txt 05.txt 07.txt 09.txt file Note that the -d and --additional-suffix are not portable and might not be available for your implementation of split . They are available for GNU split which is the default on Linux systems. You can now move your files as desired: for i in {00..10}; do mkdir -p $i mv "$i".txt "$i"/done And, if you don't want the leading 0s, you can rename them: for i in {00..10}; do mkdir -p $i mv "$i".txt "$i"/"${i##0}".txtdone Finally, if you want to start from 1 and not from 0, again assuming GNU split , you can do: split -d --numeric-suffixes=1 -l 1000 --additional-suffix='.txt' file '' Which will produce: 01.txt 03.txt 05.txt 07.txt 09.txt 11.txt02.txt 04.txt 06.txt 08.txt 10.txt | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/653424",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/191337/"
]
} |
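If your split lacks the GNU options used above, an awk fallback can produce the 1.txt, 2.txt, … naming directly. A sketch — it assumes no unrelated files matching [0-9]*.txt sit in the directory, and that your awk tolerates a dozen simultaneously open output files (any modern awk does for this count):

awk '{ n = int((NR - 1) / 1000) + 1; print > (n ".txt") }' file
for d in [0-9]*.txt; do
    dir=${d%.txt}
    mkdir -p "$dir" && mv "$d" "$dir"/
done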
653,643 | I have a csv file with loads of data. I wish to cut the 9th column for values >=1 and then use grep to display full rows that match. Sample format: ABC,XYZ,RTY,CREAM,FRANCE,170019,ST REMY CREME,3035540005229,0.75,1,15,26.99,10 ABC,RDS,XSD,SPICE,NETHERLANDS,390476,THE KINGS GINGER,5010493025621,1.5,1,41,49.95,NA ABC,RMS,DKS,TABLE WINE RED,CHILE,400176,SANTA ISABELA,63657001349,3,1,12.5,31.99,0 I have tried with grep . Myfile.csv |cut -d"," -f9 | sort |grep -E "^(1*[1-9][2-9]*(\.[2-9]+)?|1+\.[2-9]*[1-9][2-9]*)$" but it only shows the 9th column values not the full rows with all the columns. and also grep $(cut -d"," -f9 Myfile.csv | grep -E "^(1*[1-9][2-9]*(\.[2-9]+)?|1+\.[2-9]*[1-9][2-9]*)$") Myfile.csv Any help would be great. PS: can't use awk (:- | Although you state awk is not a possibility - for the sake of completeness: awk -F',' '$9>=1' input.csv This will instruct awk to consider , as field separator and print only lines where field 9 has a value equal or larger than 1. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/653643",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/476810/"
]
} |
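Since the question rules out awk, here is a pure-shell sketch of the same filter (untested; it assumes field 9 always starts with a digit, as in the sample rows, and that no field contains embedded commas):

while IFS= read -r line; do
    IFS=, read -r f1 f2 f3 f4 f5 f6 f7 f8 f9 rest <<< "$line"
    [ "${f9%%.*}" -ge 1 ] 2>/dev/null && printf '%s\n' "$line"
done < Myfile.csv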
653,651 | Whenever I try to run any apt command on my Mac I get Unable to locate an executable at "/Library/Java/JavaVirtualMachines/jdk-12.0.1.jdk/Contents/Home/bin/apt" (-1) error. My .bash_profile looks as below: Export JAVA_HOME=/Library/Java/Home# Setting PATH for Python 3.7# The original version is saved in .bash_profile.pysavePATH="/Library/Frameworks/Python.framework/Versions/3.7/bin:${PATH}"export PATH | Although you state awk is not a possibility - for the sake of completeness: awk -F',' '$9>=1' input.csv This will instruct awk to consider , as field separator and print only lines where field 9 has a value equal or larger than 1. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/653651",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/476817/"
]
} |
653,664 | I have files like this on a Linux system: 10S1_S5_L002_chrm.fasta SRR3184711_chrm.fasta SRR3987378_chrm.fasta SRR4029368_chrm.fasta SRR5204465_chrm.fasta SRR5997546_chrm.fasta13_S7_L003_chrm.fasta SRR3184712_chrm.fasta SRR3987379_chrm.fasta SRR4029369_chrm.fasta SRR5204520_chrm.fasta SRR5997547_chrm.fasta14_S8_L003_chrm.fasta SRR3184713_chrm.fasta SRR3987380_chrm.fasta SRR4029370_chrm.fasta SRR5208699_chrm.fasta SRR5997548_chrm.fasta17_S4_L002_chrm.fasta SRR3184714_chrm.fasta SRR3987415_chrm.fasta SRR4029371_chrm.fasta SRR5208700_chrm.fasta SRR5997549_chrm.fasta3_S1_L001_chrm.fasta SRR3184715_chrm.fasta SRR3987433_chrm.fasta SRR4029372_chrm.fasta SRR5208701_chrm.fasta SRR5997550_chrm.fasta4_S2_L001_chrm.fasta SRR3184716_chrm.fasta SRR3987482_chrm.fasta SRR4029373_chrm.fasta SRR5208770_chrm.fasta SRR5997551_chrm.fasta50m_S10_L004_chrm.fasta SRR3184717_chrm.fasta SRR3987489_chrm.fasta SRR4029374_chrm.fasta SRR5208886_chrm.fasta SRR5997552_chrm.fasta5_S3_L001_chrm.fasta SRR3184718_chrm.fasta SRR3987493_chrm.fasta SRR4029375_chrm.fasta SRR5211153_chrm.fasta SRR6050903_chrm.fasta65m_S11_L005_chrm.fasta SRR3184719_chrm.fasta SRR3987495_chrm.fasta SRR4029376_chrm.fasta SRR5211162_chrm.fasta SRR6050905_chrm.fasta6_S6_L002_chrm.fasta SRR3184720_chrm.fasta SRR3987647_chrm.fasta SRR4029377_chrm.fasta SRR5211163_chrm.fasta SRR6050920_chrm.fasta70m_S12_L006_chrm.fasta SRR3184721_chrm.fasta SRR3987651_chrm.fasta SRR4029378_chrm.fasta SRR5215118_chrm.fasta SRR6050921_chrm.fasta80m_S1_L002_chrm.fasta SRR3184722_chrm.fasta SRR3987657_chrm.fasta SRR4029379_chrm.fasta SRR5247122_chrm.fasta SRR6050958_chrm.fasta In all there are 423I was asked to cut them in 32 parts for an optimal parallelisation on 32 CPUSo now I have this: 10S1_S5_L002_chrm.part-10.fasta SRR3986254_chrm.part-26.fasta SRR4029372_chrm.part-22.fasta SRR5581526-1_chrm.part-20.fasta10S1_S5_L002_chrm.part-11.fasta SRR3986254_chrm.part-27.fasta SRR4029372_chrm.part-23.fasta SRR5581526-1_chrm.part-21.fasta10S1_S5_L002_chrm.part-12.fasta SRR3986254_chrm.part-28.fasta SRR4029372_chrm.part-24.fasta SRR5581526-1_chrm.part-22.fasta10S1_S5_L002_chrm.part-13.fasta SRR3986254_chrm.part-29.fasta SRR4029372_chrm.part-25.fasta SRR5581526-1_chrm.part-23.fasta10S1_S5_L002_chrm.part-14.fasta SRR3986254_chrm.part-2.fasta SRR4029372_chrm.part-26.fasta SRR5581526-1_chrm.part-24.fasta10S1_S5_L002_chrm.part-15.fasta SRR3986254_chrm.part-30.fasta SRR4029372_chrm.part-27.fasta SRR5581526-1_chrm.part-25.fasta10S1_S5_L002_chrm.part-16.fasta SRR3986254_chrm.part-31.fasta SRR4029372_chrm.part-28.fasta SRR5581526-1_chrm.part-26.fasta10S1_S5_L002_chrm.part-17.fasta SRR3986254_chrm.part-32.fasta SRR4029372_chrm.part-29.fasta SRR5581526-1_chrm.part-27.fasta10S1_S5_L002_chrm.part-18.fasta SRR3986254_chrm.part-3.fasta SRR4029372_chrm.part-2.fasta SRR5581526-1_chrm.part-28.fasta10S1_S5_L002_chrm.part-19.fasta SRR3986254_chrm.part-4.fasta SRR4029372_chrm.part-30.fasta SRR5581526-1_chrm.part-29.fasta10S1_S5_L002_chrm.part-1.fasta SRR3986254_chrm.part-5.fasta SRR4029372_chrm.part-3.fasta SRR5581526-1_chrm.part-2.fasta10S1_S5_L002_chrm.part-20.fasta SRR3986254_chrm.part-6.fasta SRR4029372_chrm.part-4.fasta SRR5581526-1_chrm.part-30.fasta10S1_S5_L002_chrm.part-21.fasta SRR3986254_chrm.part-7.fasta SRR4029372_chrm.part-5.fasta SRR5581526-1_chrm.part-31.fasta I want to apply a command from the CRISPRCasFinder toolThe command works well when I use it alone on 1 namefile.fasta The command also works well when I use parallel on namefile.part*.fasta . 
But when I try to make the command more general by using basename , nothing works. I want to use basename to keep the name of my input files in the output folder. I tried this on a smaller data set: time parallel 'dossierSortie=$(basename -s .fasta {}) ; singularity exec -B $PWD /usr/local/CRISPRCasFinder-release-4.2.20/CrisprCasFinder.simg perl /usr/local/CRISPRCasFinder/CRISPRCasFinder.pl -so /usr/local/CRISPRCasFinder/sel392v2.so -cf /usr/local/CRISPRCasFinder/CasFinder-2.0.3 -drpt /usr/local/CRISPRCasFinder/supplementary_files/repeatDirection.tsv -rpts /usr/local/CRISPRCasFinder/supplementary_files/Repeat_List.csv -cas -def G --meta -out /databis/defontis/Dossier_fasta_chrm_avec_CRISPRCasFinder/Test/Result{} -in /databis/defontis/Dossier_fasta_chrm_avec_CRISPRCasFinder/Test/{}' ::: *_chrm.part*.fasta And it did this ERR358546_chrm.part-1.fasta SRR4029114_k141_23527.fna.bck SRR5100341_k141_10416.fna.lcp SRR5100345_k141_3703.fna.al1ERR358546_chrm.part-2.fasta SRR4029114_k141_23527.fna.bwt SRR5100341_k141_10416.fna.llv SRR5100345_k141_3703.fna.bckERR358546_chrm.part-3.fasta SRR4029114_k141_23527.fna.des SRR5100341_k141_10416.fna.ois SRR5100345_k141_3703.fna.bwtERR358546_chrm.part-4.fasta SRR4029114_k141_23527.fna.lcp SRR5100341_k141_10416.fna.prj SRR5100345_k141_3703.fna.desERR358546_chrm.part-5.fasta SRR4029114_k141_23527.fna.llv SRR5100341_k141_10416.fna.sds SRR5100345_k141_3703.fna.lcpERR358546_chrm.part-6.fasta SRR4029114_k141_23527.fna.ois SRR5100341_k141_10416.fna.sti1 SRR5100345_k141_3703.fna.llvERR358546_k141_26987.fna SRR4029114_k141_23527.fna.prj SRR5100341_k141_10416.fna.suf SRR5100345_k141_3703.fna.oisERR358546_k141_33604.fna SRR4029114_k141_23527.fna.sds SRR5100341_k141_10416.fna.tis SRR5100345_k141_3703.fna.prjERR358546_k141_90631.fna SRR4029114_k141_23527.fna.sti1 SRR5100341_k141_10942.fna SRR5100345_k141_3703.fna.sdsResultERR358546_chrm.part-3 SRR4029114_k141_23527.fna.suf SRR5100341_k141_164.fna SRR5100345_k141_3703.fna.sti1ResultERR358546_chrm.part-4 SRR4029114_k141_23527.fna.tis SRR5100341_k141_3046.fna SRR5100345_k141_3703.fna.sufResultSRR4029114_chrm.part-1 SRR5100341_chrm.part-10.fasta SRR5100341_k141_3968.fna SRR5100345_k141_3703.fna.tisResultSRR4029114_chrm.part-4 SRR5100341_chrm.part-11.fasta SRR5100341_k141_631.fna SRR5100345_k141_4429.fnaResultSRR5100341_chrm.part-10 SRR5100341_chrm.part-12.fasta SRR5100341_k141_6376.fna SRR5100345_k141_4832.fnaResultSRR5100341_chrm.part-11 SRR5100341_chrm.part-13.fasta SRR5100341_k141_8699.fna SRR5100345_k141_6139.fnaResultSRR5100341_chrm.part-3 SRR5100341_chrm.part-1.fasta SRR5100341_k141_8892.fna SRR5100345_k141_731.fnaResultSRR5100341_chrm.part-9 SRR5100341_chrm.part-2.fasta SRR5100345_chrm.part-10.fasta SRR5100345_k141_731.fna.al1ResultSRR5100345_chrm.part-1 SRR5100341_chrm.part-3.fasta SRR5100345_chrm.part-1.fasta SRR5100345_k141_731.fna.bckResultSRR5100345_chrm.part-4 SRR5100341_chrm.part-4.fasta SRR5100345_chrm.part-2.fasta SRR5100345_k141_731.fna.bwtResultSRR5100345_chrm.part-9 SRR5100341_chrm.part-5.fasta SRR5100345_chrm.part-3.fasta SRR5100345_k141_731.fna.desSRR4029114_chrm.part-1.fasta SRR5100341_chrm.part-6.fasta SRR5100345_chrm.part-4.fasta SRR5100345_k141_731.fna.lcpSRR4029114_chrm.part-2.fasta SRR5100341_chrm.part-7.fasta SRR5100345_chrm.part-5.fasta SRR5100345_k141_731.fna.llvSRR4029114_chrm.part-3.fasta SRR5100341_chrm.part-8.fasta SRR5100345_chrm.part-6.fasta SRR5100345_k141_731.fna.oisSRR4029114_chrm.part-4.fasta SRR5100341_chrm.part-9.fasta SRR5100345_chrm.part-7.fasta 
SRR5100345_k141_731.fna.prjSRR4029114_chrm.part-5.fasta SRR5100341_k141_10416.fna SRR5100345_chrm.part-8.fasta SRR5100345_k141_731.fna.sdsSRR4029114_k141_14384.fna SRR5100341_k141_10416.fna.al1 SRR5100345_chrm.part-9.fasta SRR5100345_k141_731.fna.sti1SRR4029114_k141_16765.fna SRR5100341_k141_10416.fna.bck SRR5100345_k141_1211.fna SRR5100345_k141_731.fna.sufSRR4029114_k141_23527.fna SRR5100341_k141_10416.fna.bwt SRR5100345_k141_2884.fna SRR5100345_k141_731.fna.tisSRR4029114_k141_23527.fna.al1 SRR5100341_k141_10416.fna.des SRR5100345_k141_3703.fna The names of the folder are not okay because I want for example just ResultERR358546 and not ResultERR358546_chrm.part-2.fasta And I don't want a result for each part but only for each ID. | Your basename command only removes the fixed .fasta extension - as far as I know it cannot remove a variable pattern. However GNU parallel provides a Perl expression replacement string facility that is much more powerful than basename - ex. given $ ls *_chrm.part*.fastaERR358546_chrm.part-2.fasta ERR358546_chrm.part-5.fasta ERR358546_chrm.part-8.fastaERR358546_chrm.part-3.fasta ERR358546_chrm.part-6.fasta ERR358546_chrm.part-9.fastaERR358546_chrm.part-4.fasta ERR358546_chrm.part-7.fasta then $ parallel echo Result'{= s:_.*$:: =}' ::: *_chrm.part*.fastaResultERR358546ResultERR358546ResultERR358546ResultERR358546ResultERR358546ResultERR358546ResultERR358546ResultERR358546 where the substitution s:_.*$:: replaces everything after an underscore with nothing. Transplanting to your original command: time parallel ' singularity exec -B "$PWD" /usr/local/CRISPRCasFinder-release-4.2.20/CrisprCasFinder.simg \ perl /usr/local/CRISPRCasFinder/CRISPRCasFinder.pl \ -so /usr/local/CRISPRCasFinder/sel392v2.so \ -cf /usr/local/CRISPRCasFinder/CasFinder-2.0.3 \ -drpt /usr/local/CRISPRCasFinder/supplementary_files/repeatDirection.tsv \ -rpts /usr/local/CRISPRCasFinder/supplementary_files/Repeat_List.csv \ -cas -def G --meta \ -out /databis/defontis/Dossier_fasta_chrm_avec_CRISPRCasFinder/Test/Result'{= s:_.*$:: =}' \ -in /databis/defontis/Dossier_fasta_chrm_avec_CRISPRCasFinder/Test/{}' ::: *_chrm.part*.fasta If you want to capture and include the part index, you could modify the expression to Result'{= s:_chrm\.part-(\d+)\.fasta$:_$1: =}' or '{= s:_chrm\.part-(\d+)\.fasta$:Result_$1: =}' for example. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/653664",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/474781/"
]
} |
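To also keep the part number in the output folder name, the capture-group variant mentioned at the end would behave roughly like this (illustrative, untested):

$ parallel echo 'Result{= s:_chrm\.part-(\d+)\.fasta$:_$1: =}' ::: ERR358546_chrm.part-2.fasta
ResultERR358546_2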
653,675 | I want know if I can use variables inside the url used by wget?I'm trying to recreate in BASH:I currently have: WK_ARTIST="$(echo $ARTIST | sed 's/\ /\_/g' )"WK_ALBUM="$(echo $ALBUM | sed 's/\ /\_/g' )"echo "$WK_ARTIST"echo "$WK_ALBUM"wget https://en.wikipedia.org/wiki/File:"$WK_ARTIST"_-_"$WK_ALBUM".jpg I get Warning: wildcards not supported in HTTP.--2021-06-10 07:24:20-- https://en.wikipedia.org/wiki/File:Lamb_Of_God_-_%1B[1;36mLamb_Of_God.jpg Can this be done. | Your basename command only removes the fixed .fasta extension - as far as I know it cannot remove a variable pattern. However GNU parallel provides a Perl expression replacement string facility that is much more powerful than basename - ex. given $ ls *_chrm.part*.fastaERR358546_chrm.part-2.fasta ERR358546_chrm.part-5.fasta ERR358546_chrm.part-8.fastaERR358546_chrm.part-3.fasta ERR358546_chrm.part-6.fasta ERR358546_chrm.part-9.fastaERR358546_chrm.part-4.fasta ERR358546_chrm.part-7.fasta then $ parallel echo Result'{= s:_.*$:: =}' ::: *_chrm.part*.fastaResultERR358546ResultERR358546ResultERR358546ResultERR358546ResultERR358546ResultERR358546ResultERR358546ResultERR358546 where the substitution s:_.*$:: replaces everything after an underscore with nothing. Transplanting to your original command: time parallel ' singularity exec -B "$PWD" /usr/local/CRISPRCasFinder-release-4.2.20/CrisprCasFinder.simg \ perl /usr/local/CRISPRCasFinder/CRISPRCasFinder.pl \ -so /usr/local/CRISPRCasFinder/sel392v2.so \ -cf /usr/local/CRISPRCasFinder/CasFinder-2.0.3 \ -drpt /usr/local/CRISPRCasFinder/supplementary_files/repeatDirection.tsv \ -rpts /usr/local/CRISPRCasFinder/supplementary_files/Repeat_List.csv \ -cas -def G --meta \ -out /databis/defontis/Dossier_fasta_chrm_avec_CRISPRCasFinder/Test/Result'{= s:_.*$:: =}' \ -in /databis/defontis/Dossier_fasta_chrm_avec_CRISPRCasFinder/Test/{}' ::: *_chrm.part*.fasta If you want to capture and include the part index, you could modify the expression to Result'{= s:_chrm\.part-(\d+)\.fasta$:_$1: =}' or '{= s:_chrm\.part-(\d+)\.fasta$:Result_$1: =}' for example. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/653675",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/475899/"
]
} |
653,844 | Searching for a way to remove blank lines from the beginning and the end (using tac ) of a file, I've stumbled across this one: awk 'NF {p=1} p' How / why does this work? I understand NF is only true if there are any fields (if the line is not a blank line). | This will remove blank lines from the beginning, but not from the end of a file. [Notice: this answer was witten before the edit to the question that mentioned tac ] It works as follows: NF is the number of fields found on the current line. If it is zero, that means the line is either empty or blank , i.e. contains at most whitespace (assuming the field separator is left at its default value, where any number of consecutive whitspace is considered as separator). The current line is printed if any condition outside of (and not associated with) rule blocks ( { ... } ) evaluates to true . The flag p is initially uninitialized and will evaluate to false , so a priori nothing will be printed. Once a non-blank line is found ( NF is non-zero and evaluates to true ) the rule block {p=1} is entered and the flag p set to 1 . After that, the p outside the rule block evaluates to true , and any subsequent lines (including the current, first non-blank one) is printed. Notice that since the flag p is never reset, any blank lines coming after the first non-blank line will be printed without filtering. If you want to remove blank lines from the end, too, a two-pass approach will be necessary: awk 'FNR==NR{if (NF) {if (!first) first=FNR; last=FNR} next} FNR>=first && FNR<=last' input.txt input.txt This will process the file twice (hence it is specified twice as operand) In the first pass, where FNR , the per-file line counter is equal to NR , the global line counter, we identify the first and last non-blank line. In the second pass ( FNR is now smaller than NR ), we only print lines between (and including) the so identified first and last non-blank lines. Notice As stated in the answer by Stéphane Chazelas , the two-pass approach only works with regular files. If your input is of a different nature, see the method proposed there for a solution. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/653844",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/477032/"
]
} |
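Since the question mentions tac: blank lines can be stripped from both ends by running the one-liner twice around a double reversal — a quick sketch:

awk 'NF {p=1} p' file | tac | awk 'NF {p=1} p' | tac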
653,914 | Is there a way to set some kind of an alias so that when I do: cd some/directoryvim .zshrc It does vim ~/.zshrc ? | The answer is much simpler than you think, because you are asking the wrong question. If what you really mean is "how can I create an alias that allows me to edit my ~/.zshrc regardless of my current working directory?" then the answer is simply: alias zshconfig="vim ~/.zshrc" This example is from my own ~/.zshrc and I use it often. An even better option, suggested by @mateen-ulhaq in the comments below, would be: alias zshrc="vim ~/.zshrc" In other words, do not attempt to alias the filename, or the path to the filename. Both of these things are possible, but could create unpredictable side-effects that cause problems for you later, as @michael-homer has pointed out. But since you know that your preferred $EDITOR is vim , simply alias the vim invocation of that particular file and you should be in business. (It is much safer to alias a specific combination of commands and arguments to an unusual string which is less likely to be used out of context.) More generally, however, my advice is that you should not be worried about optimizing the two-character sequence ~/ , but instead you should get used to typing it. Go on ... stretch out your left pinky finger like you're Eddie Van Halen practicing scales. (This will help you to master vim while you are at it, since you probably should get used to hitting the Esc key with that pinky finger!) As your typing skills improve, you'll gain muscle memory for the ~/ sequence, which will help you to reliably type that character sequence when referring to files in your home directory. You'll type it quickly on purpose, and you won't type it by accident. The downside to relying on aliases is that no matter how much they may speed you up on your home computer, they will slow you down when you need to ssh into some remote Docker container in a client's private subnet and find all your aliases are gone. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/653914",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/477099/"
]
} |
653,931 | I feel like this is a really easy question, and when I Google, I find lots of answers for part of the problem, but when I try to put them together, it doesn't work and I can't figure out why. Here's the scenario: I have a file with a lot of text in it. One of those lines matches this pattern: foo = 1700; I want to extract 1700 I want to save it into a bash script variable so I can refer to it later in the script. I cannot get past step 3. Here's what I've tried: sed -nE 's/^foo = //p' file | sed -nE 's/;//p' This prints out: 1700 Great, but what if I need to trim white space or something? If I can't use * / + , I wouldn't know how to do that. I learned that you can't use * / + in a substitution on another answer, so I can't figure out how to do this. I looked into the man page of grep, and I didn't see any option for groups when I search for that word. I think I know how to solve this in awk, but I've always found its regex functions to be a little clunky and for the commandline scripts to require too many escapes, so ideally that's not the only way to solve this. | To start with, here's how to capture the numeric value: $ echo 'foo = 1700;' | sed -n -e 's/^foo = \([0-9]\+\).*/\1/p'1700 That's using sed 's default Basic Regular Expressions (BRE). You can also use Extended Regular Expressions (ERE) with sed's -E option: echo 'foo = 1700;' | sed -n -E -e 's/^foo = ([0-9]+).*/\1/p'1700 The sub-expression [0-9]+ inside the parentheses ( ... ) captures one-or-more digits. This is called a "capture group" and is used in the replacement with the \1 (which is the first capture group - if there are multiple capture groups, they can be used as \1, \2, \3, etc). In this case, the sed script tries to replace the entire line with just the \1 capture group and if that succeeded, print the modified line. Next, you want to get sed 's output into a variable. You do that with with command substitution . e.g. $ myvar=$(echo 'foo = 1700;' | sed -n -E -e 's/^foo = ([0-9]+).*/\1/p')$ echo $myvar1700 To use this in your script, just use your file as an argument to sed instead of piping echo ... into it. myvar=$(sed -n -E -e 's/^foo = ([0-9]+).*/\1/p' file) To trim white space, or to cope with lines that might have optional leading whitespace, or optional whitespace around the = , etc: myvar=$(sed -n -E -e 's/^[[:space:]]*foo[[:space:]]*=[[:space:]]*([0-9]+).*/\1/p' file) Note that some versions of sed (GNU sed, at least. maybe others) understand perl's \s , so you can shorten that to: myvar=$(sed -n -E -e 's/^\s*foo\s*=\s*([0-9]+).*/\1/p' file) | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/653931",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/101311/"
]
} |
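An alternative that avoids external tools entirely, since the question asks for other options: bash's =~ operator with BASH_REMATCH. A small sketch that stops at the first matching line:

myvar=
while IFS= read -r line; do
    if [[ $line =~ ^[[:space:]]*foo[[:space:]]*=[[:space:]]*([0-9]+) ]]; then
        myvar=${BASH_REMATCH[1]}
        break
    fi
done < file
echo "$myvar"    # 1700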
654,014 | I can write > echo '{"a": "arbiter", "b": "brisk"}{"a": "astound", "b": "bistro"}' | jq '.a, .b'"arbiter""brisk""astound""bistro" but if I do > echo '{"a": "arbiter", "b": "brisk", "c": ["cloak", "conceal"]} {"a": "astound", "b": "bistro", "c": ["confer", "consider"]}' | jq '.a, .b, .c' I get "arbiter""brisk"[ "cloak", "conceal"]"astound""bistro"[ "confer", "consider"] How do I flatten the c arrays to get instead "arbiter""brisk""cloak","conceal""astound""bistro""confer","consider" Update Since null safety is quite fashionable in several modern languages (and justifiably so), it is perhaps fitting to suppose that the question as asked above was incomplete. It's necessary to know how to handle the absence of a value. If one of the values is null , > echo '{"b": "brisk"}{"a": "astound", "b": "bistro"}' | jq '.a, .b' we get a null in the output null"brisk""astound""bistro" That may well be what we want. We could add a second step in the pipeline (watching out not to exclude "null" ), but it's cleaner if jq itself excludes null s. Just writing select(.a != null) does the trick, but introduces a {} level. What is the right way to discard null s from within jq ? | The jq expression [.a, .b, .c] extracts all the elements that we want from the input objects, and places them in an array. Some of these elements may be arrays, so we need to flatten all elements: [.a, .b, .c] | flatten For the input object { "a": "arbiter", "b": "brisk", "c": [ "cloak", "conceal" ]} this generates the array [ "arbiter", "brisk", "cloak", "conceal"] and you'll get a similar but separate array from your second object. To merge these two arrays into one, we may simply pass the data through .[] , but a shortcut way of writing flatten | .[] is flatten[] . Using this, we arrive at [.a, .b, .c] | flatten[] Summary: echo '...as in the question...' |jq '[.a, .b, .c] | flatten[]' If you additionally want to weed out any null values, filter through select(. != null) , or extract the values using [.a//empty,.b//empty,.c//emtpy] , or filter through map(.//empty) before flatten[] . As a comment: Your input JSON is created using a simple echo in the question. However, to properly create JSON on the command line, consider using a tool like jo , which will additionally encode your data appropriately for inclusion in a JSON document: Your second example JSON could be created using the two jo invocations jo a=arbiter b=brisk 'c[]'=cloak 'c[]'=concealjo a=astound b=bistro 'c[]'=confer 'c[]'=consider | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/654014",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/477207/"
]
} |
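Putting the update's null handling together with the flatten approach, either of these drops missing values without the extra {} level (they behave the same for the examples shown):

$ echo '{"b": "brisk", "c": ["confer"]}' | jq '[.a, .b, .c] | flatten | map(select(. != null))[]'
"brisk"
"confer"

$ echo '{"b": "brisk", "c": ["confer"]}' | jq '[.a // empty, .b // empty, .c // empty] | flatten[]'
"brisk"
"confer"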
654,067 | If you want to read the single line output of a system command into Bash shell variables, you have at least two options, as in the examples below: IFS=: read user x1 uid gid x2 home shell <<<$(grep :root: /etc/passwd | head -n1) and IFS=: read user x1 uid gid x2 home shell < <(grep :root: /etc/passwd | head -n1) Is there any difference between these two? What is more efficient or recommended? Please note that, reading the /etc/passwd file is just for making an example. The focus of my question is on here strings vs. process substitution . | First note that using read without -r is to process input where \ is used to escape the field or line delimiters which is not the case of /etc/passwd . It's very rare that you would want to use read without -r . Now as to those two forms, a note that neither are standard sh syntax. <<< is from zsh in 1991. <(...) is from ksh circa 1985 though ksh initially didn't support redirecting from/to it. $(...) is also from ksh, but has been standardised by POSIX (as it replaces the ill-designed `...` from the Bourne shell), so is portable across sh implementations these days. $(code) interprets the code in a subshell with the output redirected to a pipe while the parent at the same time, reads that output from the other end of the pipe and stores it in memory. Then once that command finishes, that output, stripped of the trailing newline characters (and with the NUL characters removed in bash ) makes up the expansion of $(...) . If that $(...) is not quoted and is in list context, it is subject to split+glob (split only in zsh). After <<< , it's not a list context, but still older versions of bash would still do the split part (not glob) and then join the parts with spaces . So if using bash , you'd likely want to also quote $(...) when used as target of <<< . cmd <<< word in zsh and older versions of bash causes the shell to store word followed by a newline character into a temporary file, which is then made the stdin of the process that will execute cmd , and that tempfile deleted before cmd is executed. That's the same as happens with << EOF from the Bourne shell from the 70s. Effectively, it is exactly the same as: cmd << EOFwordEOF In 5.1, bash switched from using a temporary file to using a pipe as long as the word can fit whole in the pipe buffer (and falls back to using a tempfile if not to avoid deadlocks) and makes cmd 's stdin the reading end of the pipe which the shell has seeded beforehand with the word . So cmd1 <<< "$(cmd2)" involves one or two pipes, store the whole output of cmd2 in memory, storing it again in either another pipe or a tempfile and mangles the NULs and newlines. cmd1 < <(cmd2) is functionality equivalent to cmd2 | cmd1 . cmd2 's output is connected to the writing end of a pipe. Then <(...) expands to a path that identifies the other end, < that-path gets you a file descriptor to that other end. So cmd2 talks directly to cmd1 without the shell doing anything with the data. You see this kind of construct in the bash shell specifically because in bash , contrary to AT&T ksh or zsh, in: cmd2 | cmd1 cmd1 is run in a subshell¹, so if cmd1 is read for instance, read will only populate variables of that subshell. So here, you would want: IFS=: read -r user x1 uid gid x2 home shell rest_if_any_ignored < <( grep :root: /etc/passwd) The head is superfluous as with -r , read will only read one line anyway². 
I've added a rest_if_any_ignored for future proofing in case in the future a new field is added to /etc/passwd , causing $shell to contain /bin/sh:that-field otherwise. Portably (in sh ), you can't do: grep :root: /etc/passwd | IFS=: read -r user x1 uid gid x2 home shell rest_if_any_ignored as POSIX leaves it unspecified whether read runs in a subshell (like in bash / dash ...) or not (like zsh / ksh ). You can however do: IFS=: read -r user x1 uid gid x2 home shell rest_if_any_ignored << EOF$(grep :root: /etc/passwd | head -n1)EOF (here restoring the head to avoid the whole of grep 's output to be stored in memory and in the tempfile/pipe). Which is standard even if not as efficient (though as indicated by @muru, the difference for such a small input is likely negligible compared to the cost of running an external utility in a forked process). Performance, if that mattered here, could be improved by using builtin features of the shell to do grep 's job. However, especially in bash , you'd only do that for very small input as a shell is not designed for this kind of task and is going to be a lot worse at it than grep . while IFS=: read <&3 -r user x1 uid gid name home shell rest_if_any_ignoreddo if [ "$name" = root ]; then do-something-with "$user" "$home"... break fidone 3< /etc/passwd ¹ except when the lastpipe option in bash is set and the shell is non-interactive like in scripts ² see also the -m1 or --max-count=1 option of the GNU implementation of grep which would tell grep itself to stop searching after the first match. Or the portable equivalent: sed '/:root:/!d;q' | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/654067",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/330980/"
]
} |
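Regarding footnote ¹: bash's lastpipe option lets the plain pipeline populate variables in the parent shell as well — a short sketch (requires bash ≥ 4.2; in interactive shells job control must be disabled for it to take effect):

#!/bin/bash
shopt -s lastpipe
grep :root: /etc/passwd |
    IFS=: read -r user x1 uid gid x2 home shell rest_if_any_ignored
printf '%s has home directory %s\n' "$user" "$home"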
654,074 | in bash why does this work fine: $ cat test1.sh#!/bin/bashecho "some text" \"some more text"$ ./test1.shsome text some more text but this fails $ cat test2.sh#!/bin/bashtext="some text" \"some more text"echo $text$ ./test2.sh./test2.sh: line 3: some more text: command not found I was expecting both test1.sh and test2.sh to do the same thing. | Quoting POSIX Shell Command Language , A <backslash> that is not quoted shall preserve the literal value of the following character, with the exception of a <newline>. If a <newline> follows the <backslash>, the shell shall interpret this as line continuation. The <backslash> and <newline> shall be removed before splitting the input into tokens. Since the escaped <newline> is removed entirely from the input and is not replaced by any white space, it cannot serve as a token separator. This means that, in your first example, what the shell actually executes is echo "some text" "some more text" which is the simple command echo followed by two arguments, concatenated using a space character when printed to standard output. In your second example, what the shell actually executes is text="some text" "some more text"echo $text where the first line is interpreted as the simple command some more text (a single token, including the space characters) preceded by the variable assignment text="some text" ; then, echo $text is executed. To produce the same result as the first one, your second snippet may be changed into text="some text "\"some more text"echo "$text" Note, also, the double quotes in echo "$text" , needed to prevent the shell from applying word splitting and filename generation to the expansion of $text (it makes no difference with your sample strings, but it would if they contained whitespace character sequences other than a single space and/or globbing characters). | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/654074",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/5451/"
]
} |
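If the goal was simply to build up a long string across several source lines, appending with += sidesteps the continuation pitfall altogether (bash-specific sketch):

text="some text"
text+=" some more text"
echo "$text"     # some text some more text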
654,103 | File.tsv is a tab delimited file with 7 columns: cat File.tsv1 A J 12 B K N 13 C L O P Q 1 The following reads File.tsv which is tab delimited file with 7 columns, and stores the entries in an Array A. while IFS=$'\t' read -r -a D; do A=("${A[@]}" "${D[i]}" "${D[$((i + 1))]}" "${D[$((i + 2))]}" "${D[$((i + 3))]}" "${D[$((i + 4))]}" "${D[$((i + 5))]}" "${D[$((i + 6))]}")done < File.tsvnA=${#A[@]} for ((i = 0; i < nA; i = i + 7)); do SlNo="${A[i]}" Artist="${A[$((i + 1))]}" VideoTitle="${A[$((i + 2))]}" VideoId="${A[$((i + 3))]}" TimeStart="${A[$((i + 4))]}" TimeEnd="${A[$((i + 5))]}" VideoSpeed="${A[$((i + 6))]}"done Issue Certain entries are empty in tsv files, but the empty values are skipped while reading the file. Note Is the tsv file, empty values are preceded and succeeded by a tab character. Desired Solution Empty values be read and stored in the array. | As I said in my comments, this is not a job for a shell script. bash (and similar shells) are for co-ordinating the execution of other programs, not for processing data. Use any other language instead - awk, perl, and python are good choices. It will be easier to write, easier to read and maintain, and much faster. Here's an example of how to read your text file into an Array of Hashes (AoH) in perl , and then use the data in various print statements. An AoH is a data structure that is exactly what its name says it is - an array where each element is an associative array (aka hash). BTW, this could also be done with an Array of Arrays (AoA) data structure (also known as a List of Lists or LoL), but it's convenient to be able to access fields by their field name instead of having to remember their field number. You can read more about perl data structures in the Perl Data Structures Cookbook which is included with perl. Run man perldsc or perldoc perldsc . You probably also want to read perllol and perlreftut too. and perldata if you're not familiar with perl variables (" Perl has three built-in data types: scalars, arrays of scalars, and associative arrays of scalars, known as hashes ". A "scalar" is any single value, like a number or a string or a reference to another variable) Perl comes with a lot of documentation and tutorials - run man perl for an overview. The included perl docs come to about 14MB, so it's often in a separate package in case you don't want to install it. On debian: apt install perl-doc . Also, each library module has its own documentation. #!/usr/bin/perl -luse strict;# Array to hold the hashes for each recordmy @data;# Array of field header names. This is used to insert the# data into the %record hash with the right key AND to# ensure that we can access/print each record in the right# order (perl hashes are inherently unordered so it's useful# and convenient to use an indexed array to order it)my @headers=qw(SlNo Artist VideoTitle VideoId TimeStart TimeEnd VideoSpeed);# main loop, read in each line, split it by single tabs, build into# a hash, and then push the hash onto the @data array.while (<>) { chomp; my %record = (); my @line = split /\t/; # iterate over the indices of the @line array so we can use # the same index number to look up the field header name foreach my $i (0..$#line) { # insert each field into the hash with the header as key. # if a field contains only whitespace, then make it empty ($record{$headers[$i]} = $line[$i]) =~ s/^\s+$//; } push @data, \%record ;}# show how to access the AoH elements in a loop:print "\nprint \@data in a loop:";foreach my $i (0 .. 
$#data) { foreach my $h (@headers) { printf "\$data[%i]->{%s} = %s\n", $i, $h, $data[$i]->{$h}; } print;}# show how to access individual elementsprint "\nprint some individual elements:";print $data[0]->{'SlNo'};print $data[0]->{'Artist'};# show how the data is structured (requires Data::Dump# module, comment out if not installed)print "\nDump the data:";use Data::Dump qw(dd);dd \@data; FYI, as @Sobrique points out in a comment, the my @line =... and the entire foreach loop inside the main while (<>) loop can be replaced with just a single line of code (perl has some very nice syntactic sugar): @record{@headers} = map { s/^\s+$//, $_ } split /\t/; Note: Data::Dump is a perl module for pretty-printing entire data-structures. Useful for debugging, and making sure that the data structure actually is what you think it is. And, not at all co-incidentally, the output is in a form that can be copy-pasted into a perl script and assigned directly to a variable. It's available for debian and related distros in the libdata-dump-perl package. Other distros probably have it packaged too. Otherwise get it from CPAN. Or just comment out or delete the last three lines of the script - it's not necessary to use it here, it's just another way of printing the data that's already printed in the output loop. Save it as, say, read-tsv.pl , make it executable with chmod +x read-tsv.pl and run it: $ ./read-tsv.pl file.tsv print @data in a loop:$data[0]->{SlNo} = 1$data[0]->{Artist} = A$data[0]->{VideoTitle} = J $data[0]->{VideoId} = $data[0]->{TimeStart} = $data[0]->{TimeEnd} = $data[0]->{VideoSpeed} = 1$data[1]->{SlNo} = 2$data[1]->{Artist} = B$data[1]->{VideoTitle} = K$data[1]->{VideoId} = N$data[1]->{TimeStart} = $data[1]->{TimeEnd} = $data[1]->{VideoSpeed} = 1$data[2]->{SlNo} = 3$data[2]->{Artist} = C$data[2]->{VideoTitle} = L$data[2]->{VideoId} = O$data[2]->{TimeStart} = P$data[2]->{TimeEnd} = Q$data[2]->{VideoSpeed} = 1print some individual elements:1ADump the data:[ { Artist => "A", SlNo => 1, TimeEnd => "", TimeStart => "", VideoId => "", VideoSpeed => 1, VideoTitle => "J", }, { Artist => "B", SlNo => 2, TimeEnd => "", TimeStart => "", VideoId => "N", VideoSpeed => 1, VideoTitle => "K", }, { Artist => "C", SlNo => 3, TimeEnd => "Q", TimeStart => "P", VideoId => "O", VideoSpeed => 1, VideoTitle => "L", }, ] Notice how the nested for loops print the data structure in the exact order we want ( because we iterated over the @headers array), while just dumping it with the dd function from Data::Dump outputs the records sorted by key name (which is how Data::Dump deals with the fact that hashes in perl aren't ordered). Other comments Once you have your data in a data structure like this, it's easy to insert it into an SQL database like mysql / mariadb or postgresql or sqlite3 . Perl has database modules (see DBI ) for all of those and more. (In debian, etc, these are packaged as libdbd-mysql-perl , libdbd-mariadb-perl , libdbd-pg-perl , libdbd-sqlite3-perl , and libdbi-perl . Other distros will have different package names) BTW, the main parsing loop could also be implemented using another perl module called Text::CSV , which can parse CSV and similar file formats like Tab separated. Or with DBD::CSV which builds on Text::CSV to allow you to open a CSV or TSV file and run SQL queries against it as if it were an SQL database . 
In fact, it's a fairly trivial 10-15 line script to use these modules to import a CSV or TSV file into an SQL database, and most of that is boilerplate setup stuff... the actual algorithm is a simple while loop to run a SELECT query on the source data and an INSERT statement into the destination. Both of these modules are packaged for debian, etc, as libtext-csv-perl and libdbd-csv-perl . Probably packaged for other distros too and, as always, available on CPAN. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/654103",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/220462/"
]
} |
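A side note on the answer above: if the end goal is simply to get the TSV into an SQL database, the sqlite3 command-line shell can also do the import directly, without going through the Perl modules it mentions. This is only a sketch under assumptions: the database file videos.db, the table name videos, and the expectation that every line really has seven tab-separated fields are all illustrative.

# create the table once, then bulk-load the tab-separated file
sqlite3 videos.db <<'EOF'
CREATE TABLE IF NOT EXISTS videos(SlNo, Artist, VideoTitle, VideoId, TimeStart, TimeEnd, VideoSpeed);
.mode tabs
.import File.tsv videos
EOF

After that the data can be queried with, for example, sqlite3 videos.db 'SELECT Artist, VideoId FROM videos;'.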
654,187 | Consider the following: # time sleep 1real 0m1.001suser 0m0.001ssys 0m0.000s# echo foo | time sleep 1bash: time: command not found Um... wut? OK, so clearly Bash is searching for commands in a somehow different way when run as a pipeline. Can anyone explain to me what the difference is? Does piping disable shell built-ins or something? (I didn't think it did... but... I can't see how else this is breaking.) | The bash shell implements time as a keyword. The keyword is part of syntax of the pipeline. The syntax of a pipeline in bash is (from the section entitled " Pipelines " in the bash manual): [time [-p]] [!] command1 [ | or |& command2 ] … Since time is part of the syntax of pipelines , not a shell built-in utility, it does not behave as a utility. For example, redirecting its output using ordinary shell redirections is not possible without extra trickery (see e.g. How can I redirect `time` output and command output to the same pipe? ). When the word time occurs in any other place than at the start of a pipeline in the bash shell, the external command with the same name will be called. This is what happens in the case when you put time after the pipe symbol, for example. If the shell can't find an external time command, it generates a "command not found" error. To make the shell use the keyword to time only the sleep 1 command in your pipeline, you may use echo foo | (time sleep 1) Within the subshell on the right hand side of the pipeline, the time keyword is at the start of a pipeline (a pipeline of a single simple command, but still). Also related: How can we make `time` apply to a pipeline or its component? Bash time keyword result only with second piped command, explain why Make bash use external `time` command rather than shell built-in Differences between keyword, reserved word, and builtin? | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/654187",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/26776/"
]
} |
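As a hedged aside to the answer above: when an external time binary is installed (it evidently was not on the asker's system, hence the "command not found"), it can be used at any point in a pipeline precisely because it is an ordinary command rather than shell syntax:

# time only the sleep, using the external utility rather than the keyword
echo foo | /usr/bin/time -p sleep 1

The /usr/bin path and the presence of a standalone time are assumptions; on minimal systems it may need to be installed first (e.g. GNU time).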
654,221 | I was making a Bash script with the setuid permission on, but it didn't work. So I found my solution here: Why does setuid not work? and Allow setuid on shell scripts Now my script works fine and all (I rewrote it in cpp). To satisfy my curiosity as to why pure Bash shell didn't work, I read this link: http://www.faqs.org/faqs/unix-faq/faq/part4/section-7.html (referenced by this answer: https://unix.stackexchange.com/a/2910 ). At that site, I came across the following: $ echo \#\!\/bin\/sh > /etc/setuid_script $ chmod 4755 /etc/setuid_script $ cd /tmp $ ln /etc/setuid_script -i $ PATH=. $ -i I don't understand the fourth line, which reads ln /etc/setuid_script -i . What does that command do? I've read in the ln manual that -i is just the "interactive" flag (asking whether you want to overwrite an existing file or not). So why does ln /etc/setuid_script -i followed by PATH=. and -i make my shell execute /bin/sh -i ? | The code ln /etc/setuid_script -i is intended to create a hardlink to a file called -i in the current directory. You might need to say ln -- /etc/setuid_script -i to make this work if you are using GNU tools. The shell can get commands to run in 3 different ways. From a string. Use sh -c "mkdir /tmp/me" with the -c flag. From a file. Use sh filename From the terminal, use sh -i or sh . Historically when you have a shell script called foo starting with #!/bin/sh the kernel invokes it with a filename, i.e. /bin/sh foo , to tell it to use the 2nd way of reading commands. If you give it a filename of -i then the kernel invokes /bin/sh -i and you get the third way. There are also race conditions. This was exploited thus. The exec system call is invoked to start the script. The kernel sees the file is SUID, and sets the permissions of the process accordingly. The kernel reads the first few bytes of the file to see what kind of executable it is, finds the #!/bin/sh and so sees it is a script for /bin/sh. The attacker replaces the script. The kernel replaces the current process with /bin/sh. The /bin/sh opens the filename and executes the commands. This is a classic TOCTTOU (time of check to time of use) attack. The check in step 2 is against a different file to the one used (in the open call) in step 6. Both these bugs are usually fixed these days. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/654221",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/477417/"
]
} |
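A small demonstration of why a file literally named -i behaves so oddly may help; this is just an illustrative sketch to run in a scratch directory, not part of the original FAQ text:

# create an executable script whose name is just "-i"
printf '#!/bin/sh\necho hello from the script\n' > ./-i
chmod +x ./-i

sh ./-i     # explicit path: sh reads commands from the file
sh -i       # bare "-i": sh takes it as the interactive flag instead

The historical setuid hole worked because the kernel ended up in the second situation: the script's name ("-i") was handed to /bin/sh as if it were an option, so what the attacker got was an interactive, privileged shell rather than the script.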
654,239 | I have a fastq file with barcode sequences appended at the header line started with @ after the last :. This pattern repeats every four lines. Below is an example: @FCID:1:1101:15473:1334 1:N:0:TATTTGCGACAAAGTGGACTAGGGGATGCCAGCCGCCGCGGTAATACGTAGGTGGCAAGCGTTATCCGGATTTATTGGGCGTAAAGGGAACGCAGGCGGTCTTTTAAGTCTGATGTGAAAGCCTTCGGCTTAACCGGAGTAGTGCTTTGGAAACTGTGCAGCTCGAGTGCAGGAGAGGTAAGCGGAATTCCTAGTGTAGCGGTGAAATGCGTAGATATTAGGAGGAACACCAGTGGCGAAGGCGGCTTACTGGACTGTAACT+AAAABFFFFFFCGGGGGGGGGGGGGGGGGGGGGHHHHHHGHHGGGHGHGGGGHHHGGGGGHHHHHHHHGGGGHHHGHHGGGGGGGGGGGGHHHHHHHGHGHHHHHHHHFHHHHHHGGGGHHHHGGGGGHHHHHHHHHHGHHHHHHFHHFHGGGGDFHHHHH.EGGGBFFGGGGGGEFFFGGGGFFGGGF-DFEFFFFFFA.-./FFFFBFFFBFFFFFFA?;/B?F@DCFEAAF-@FFBBBBFFEFFFB;@FCID:1:1101:15528:1336 1:N:0:GCGGGAAAAAAAGAATTGGACGAGTGCCAGCAGCCGCGGTAATACGTAGGTGGCAAGCGTTATCCGGAATTATTGGGCGTAAAGAGGGAGCAGGCGGCAGCAAAGGTCTGTGGTGAAAGACTGAAGCTTAACTTCAGTAAGCCATAGAAACCGGGCAGCTAGAGTGCAGGAGAGGATCGTGGAATTCCATGTGTAGCGGTGAAATGCGTAGATATATGGAGGAACACCAGTGGCGAAGGCGACGATCTGGCCTGCAACTGAC+DDDDDFFFFCDCGGGGGGGGGGHGGGGGGGHHHHHHHGHHGHHHGHGGGGHHHGGGGGHHHHHHHHGGGGHHGHHGGGGHHHGGGGGGGHHHHGGHHHHHHHGHHHHHHHHHHHHGHHHGHGHHHHHHHHHHHHHHHHHHGGGGGGGHHHHHGHGHHHGGHGDHHGDFFGGGGGGGGGGFGGGFGGG9?EGFGGFFAD;EFFFFFFFFFFFFFFFDEEFFFFFFF-DE->CFFEEAFFFFFFFBFFFFF0 My goal is to append the barcodes into the sequence reads every 2nd line and everything else is unchanged. Below is my expected output (the barcodes are the last 12 letters of each sequence line). @FCID:1:1101:15473:1334 1:N:0:TATTTGCGACAAAGTGGACTAGGGGATGCCAGCCGCCGCGGTAATACGTAGGTGGCAAGCGTTATCCGGATTTATTGGGCGTAAAGGGAACGCAGGCGGTCTTTTAAGTCTGATGTGAAAGCCTTCGGCTTAACCGGAGTAGTGCTTTGGAAACTGTGCAGCTCGAGTGCAGGAGAGGTAAGCGGAATTCCTAGTGTAGCGGTGAAATGCGTAGATATTAGGAGGAACACCAGTGGCGAAGGCGGCTTACTGGACTGTAACTTATTTGCGACAA+AAAABFFFFFFCGGGGGGGGGGGGGGGGGGGGGHHHHHHGHHGGGHGHGGGGHHHGGGGGHHHHHHHHGGGGHHHGHHGGGGGGGGGGGGHHHHHHHGHGHHHHHHHHFHHHHHHGGGGHHHHGGGGGHHHHHHHHHHGHHHHHHFHHFHGGGGDFHHHHH.EGGGBFFGGGGGGEFFFGGGGFFGGGF-DFEFFFFFFA.-./FFFFBFFFBFFFFFFA?;/B?F@DCFEAAF-@FFBBBBFFEFFFB;@FCID:1:1101:15528:1336 1:N:0:GCGGGAAAAAAAGAATTGGACGAGTGCCAGCAGCCGCGGTAATACGTAGGTGGCAAGCGTTATCCGGAATTATTGGGCGTAAAGAGGGAGCAGGCGGCAGCAAAGGTCTGTGGTGAAAGACTGAAGCTTAACTTCAGTAAGCCATAGAAACCGGGCAGCTAGAGTGCAGGAGAGGATCGTGGAATTCCATGTGTAGCGGTGAAATGCGTAGATATATGGAGGAACACCAGTGGCGAAGGCGACGATCTGGCCTGCAACTGACGCGGGAAAAAAA+DDDDDFFFFCDCGGGGGGGGGGHGGGGGGGHHHHHHHGHHGHHHGHGGGGHHHGGGGGHHHHHHHHGGGGHHGHHGGGGHHHGGGGGGGHHHHGGHHHHHHHGHHHHHHHHHHHHGHHHGHGHHHHHHHHHHHHHHHHHHGGGGGGGHHHHHGHGHHHGGHGDHHGDFFGGGGGGGGGGFGGGFGGG9?EGFGGFFAD;EFFFFFFFFFFFFFFFDEEFFFFFFF-DE->CFFEEAFFFFFFFBFFFFF0 I tried to use awk, but this does not work. awk '(FNR) % 4 == 1 { -F; seq=$8; next } (FNR) % 4 == 2 { line[FNR]=$0; print $0 seq}' R1test.fq > R1test_new.fq Could anyone help? | I will make the following assumptions: All of your records have exactly 4 lines. This is not required by the fastq format but is often the case with short-read data. Your barcode is always the last string of letters after the final : on every 4th line starting with the first. If those assumptions hold true, you can do: awk -F':' 'NR % 4 == 1 {seq=$NF} NR % 4 == 2 { $0=$0 seq}1' R1test.fq > R1test_new.fq This is sort of the same idea as your code, I just removed some unnecessary steps and fixed some issues. The 1 at the end is awk shorthand for "print this line". Your code didn't work because you cannot set use -F to set the field separator inside your awk code, the -F is an option to the awk binary, and not a feature of the awk language. 
To change the field separator within awk scripts you would use the FS variable (e.g. BEGIN{FS=":"} ). Next, even if you had managed to change the field separator, that would be irrelevant since the line is split before any code is executed. Setting the separator in a BEGIN{} block makes it apply to every record as it is read; if you set it anywhere else, it only takes effect for later records, and you would also need to tell awk to reparse the current line (e.g. with $0=$0). And anyway, you wanted : as the field separator, not ; . Caveat: This will likely break any downstream processing you want to do since the length of the sequence will not match the length of the phred quality scores. Are you really sure this is a good idea? | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/654239",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/477439/"
]
} |
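A minimal sketch of the FS point made above (the file name and the : separator are only for illustration):

# FS set in a BEGIN block applies to every record as it is read
awk 'BEGIN{FS=":"} {print $1}' file

# FS changed mid-stream only affects later records, unless the
# current record is re-split by assigning $0 to itself
awk 'NR==1{FS=":"; $0=$0} {print $1}' file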
654,249 | I'm setting up prometheus in a web server, and I noticed that each exporter is its own program that must be added to a directory in $PATH. My question is, is there any advantage to making a specialized directory for these (for example, "/usr/exporters/bin", to make up some example) and put all exporter programs in there, and add that file to the $PATH? Or is it best to just push the programs to the default directory for housing binaries? | The only benefit is having fewer directories in $PATH , therefore, fewer directories to search when looking for an executable, but: This event (searching all the directories in $PATH ) is rare. $PATH entries (the executables) are kept in a hash table within bash , which is updated at startup or via rehash . No need to search $PATH every time. This event isn't expensive. All the information needed (file exists and permissions allow eXecution) can be gathered from the file's directory entry - no need to access each file. Just read the directory. The reason for NOT moving the executables to a common other directory include: You'll have a non-standard environment. When you ask for help, extra effort will be needed to explain this. Problems caused specifically by the non-standard environment will be very difficult to solve. You'll have a non-standard environment. When updated versions are released, you environment won't match what the update expects. You'll have a non-standard environment. You'll have to remember and do the non-standard environment updates this week, next week, the week after that, ... forever. It's Monkey Motion for no benefit. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/654249",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/476423/"
]
} |
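If a dedicated directory is wanted anyway, the usual approach is to append it to PATH rather than relocate anything; a hedged sketch, where the directory name /opt/exporters/bin is just an assumption:

# e.g. in /etc/profile.d/exporters.sh, or in the service unit's Environment= line
export PATH="$PATH:/opt/exporters/bin"

After new executables appear, bash's command-location cache can be refreshed with hash -r (rehash is the csh/zsh spelling).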
654,275 | Im trying to communicate with a couple of UART devices via USB. A HT-06 bluetooth module and a GY-NEO6MV2 GPS module. I am using a Prolific PL2303 USB cable. As a backup I also have a Silicon Labs CP2102. When I connect the PL2303 a lsusb command returns Bus 001 Device 015: ID 067b:2303 Prolific Technology, Inc. PL2303 Serial Port and a dmesg command returns [147697.657037] usb 1-11: pl2303 converter now attached to ttyUSB0 a ls -l of /dev shows crw-rw---- 1 root dialout 188, 0 Jun 15 08:58 ttyUSB0 and I've added myself to the dialout group as well as setting chmod to 666 . I then use Putty with a serial connection with Port /dev/ttyUSB0 , Baud 9600 and Parity 8,1,None. I connect the PL2303 cable to the HT-06 as GND-GND, VCC-VCC, TX-RX and RX-TX. All pretty basic stuff. The Putty screen starts with a cursor in the top left corner. I send an AT command. Im expecting OK but nothing happens. I have a second HT-06, but still nothing. I thought it might be a broken RX or TX Cable (I get a flashing LED on the HT-06 so VCC and GND are OK) so I swapped out the PL2303 for the CP2102. Both lsusb and dmesg tell me the converter is connected (again at /dev/ttyUSB0 ). Using the same Putty settings I still get nothing. Along similar lines Ive connected the NEO6M with both the PL2303 and the CP2102, and use xgps (a subset of gpsd ). This returns an error gpsd is not connected to /dev/ttyUSB0 and obviously nothing happens. Im using Linux Mint 20 with kernel 5.4.0-74-generic which has the drivers for both CP210X and PL230X. Ive also tried different USB ports (USB2 and USB3)Despite 2 different USB-TTL converters, 3 UART devices and several different serial terminal apps (Ive also tried minicomm and rfcomm ), nothing works. | The only benefit is having fewer directories in $PATH , therefore, fewer directories to search when looking for an executable, but: This event (searching all the directories in $PATH ) is rare. $PATH entries (the executables) are kept in a hash table within bash , which is updated at startup or via rehash . No need to search $PATH every time. This event isn't expensive. All the information needed (file exists and permissions allow eXecution) can be gathered from the file's directory entry - no need to access each file. Just read the directory. The reason for NOT moving the executables to a common other directory include: You'll have a non-standard environment. When you ask for help, extra effort will be needed to explain this. Problems caused specifically by the non-standard environment will be very difficult to solve. You'll have a non-standard environment. When updated versions are released, you environment won't match what the update expects. You'll have a non-standard environment. You'll have to remember and do the non-standard environment updates this week, next week, the week after that, ... forever. It's Monkey Motion for no benefit. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/654275",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/477366/"
]
} |
654,322 | I have a file on a Linux system that looks like the following: May 6 19:12:03 sys-login: user1 172.16.2.102 Login /data/netlogon 13473May 6 19:15:26 sys-login: user2 172.16.2.107 Login /data/netlogon 14195May 6 19:28:37 sys-logout: user1 172.16.2.102 Logout /data/netlogon 13473May 6 19:33:28 sys-logout: user2 172.16.2.107 Logout /data/netlogon 14195May 8 07:58:50 sys-login: user3 172.16.6.128 Login /data/netlogon 13272May 8 07:58:50 sys-logout: user3 172.16.6.128 Logout /data/netlogon 13272 And I am trying to calculate the time each user has spent between logging in and logging out in minutes. There will only be one login/logout per user, and I want to generate a report for all users at once. What I have tried: I have tried to first extract the users first: users=$(awk -v RS=" " '/login/{getline;print $0}' data) which returns the users (logged-in), and then I attempt to extract the time at which they logged in, but I am currently stuck. Any help would be appreciated! Edit: I am able to get users and dates doing the following: users=$(grep -o 'user[0-9]' data)dates=$(grep -o '[0-2][0-9]:[0-5][0-9]:[0-5][0-9]' data) If I find a complete solution, I will share here. | Although this site "is not a script-writing service" ;), this is a nice little exercise so I will propose the following awk program. You can save it to a file calc_logtime.awk . #!/usr/bin/awk -f/sys-log[^:]+:.*Log/ { user=$5 cmd=sprintf("date -d \"%s %d %s\" \"+%%s\"",$1,$2,$3) cmd|getline tst close(cmd) if ($7=="Login") { login[user]=tst } else if ($7=="Logout") { logtime[user]+=(tst-login[user]) login[user]=0 }}END { for (u in logtime) { minutes=logtime[u]/60 printf("%s\t%.1f min\n",u,minutes) }} This relies on using the GNU date command (part of the standard tools suite on GNU/Linux systems) and on the time format in your log file being as specified. Also note that this doesn't contain many safety checks, but you should get the idea on how to modify it to your needs. It will look for lines that contain both the string sys-log near the beginning and Log near the end to increase selectivity just in case there may be other content. As stated, this is a very rudimentary test, but again, you can get the idea of how to make it more specific. The user will be extracted as the 5th space-separated field of the line. The action will be extracted as the 7th space-separated field of the line. The timestamp of the action will be converted to "seconds since the epoch" by generating a date call via sprintf and delegating the task to the shell. If the action is Login , the timestamp is stored in an array login , with the username as "array index". If the action is Logout , the duration will be calculated and added to an array logtime containing the total log-time for all users so far. At end-of-file, a report will be generated, by iterating over all "array indices" of logtime and converting the logtimes from seconds to minutes by simple division. You can call it via awk -f calc_logtime.awk logfile.dat | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/654322",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/284325/"
]
} |
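If GNU awk is available, the repeated call out to date in the answer above can be avoided with the built-in mktime(); this is only a sketch of the timestamp-conversion part, and the hard-coded year 2021 is an assumption because the log lines carry no year:

gawk 'BEGIN { n = split("Jan Feb Mar Apr May Jun Jul Aug Sep Oct Nov Dec", m)
              for (i = 1; i <= n; i++) mon[m[i]] = i }      # month name -> number
      /sys-log/ { t = $3; gsub(/:/, " ", t)                 # "19:12:03" -> "19 12 03"
                  tst = mktime(sprintf("2021 %02d %02d %s", mon[$1], $2, t))
                  print $5, $7, tst }' logfile.dat

The accumulation of per-user durations would stay exactly as in the script above; only the way tst is obtained changes.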
654,337 | I have a lengthy text file, partial file content is shown below, [{"site":"1a2v_1","pfam":"Cu_amine_oxid","uniprot":"P12807"},{"site":"1a2v_2","pfam":"Cu_amine_oxid","uniprot":"P12807"},{"site":"1a2v_3","pfam":"Cu_amine_oxid","uniprot":"T12807"},{"site":"1a2v_4","pfam":"Cu_amine_oxid","uniprot":"P12808"},{"site":"1a2v_5","pfam":"Cu_amine_oxid","uniprot":"Z12809"},{"site":"1a2v_6","pfam":"Cu_amine_oxid","uniprot":"P12821"},{"site":"1a3z_1","pfam":"Copper-bind,SoxE","uniprot":"P0C918"}, I need to parse uniprot ids from the above text file and the expected outcome is given below, P12807P12807T12807P12808Z12809P12821P0C918 In order to do the same, I have tried the following commands but nothing works for me, sed -e 's/"uniprot":"\(.*\)"},{"site":"/\1/' file.txtcat file.txt | sed 's/.*"uniprot":" //' | sed 's/"site":".*$//' Kindly help me to parse the ids as mentioned above. Thanks in advance. | If you're on a Linux system, you can very easily do: $ grep -oP '"uniprot":"\K[^"]+' fileP12807P12807T12807P12808Z12809P12821P0C918 The -o tells grep to only print the matching portion of each line and the -P enables Perl Compatible Regular Expressions. The regex is looking for "uniprot":" but then discards it (the \K means "discard anything matched so far", so that it isn't included in the output). Then, you just look for the longest stretch of non- " ( [^"]+ ). Of course, this looks like JSON data so for anything more complicated, you should use a proper parser for it like jq . If you fix your file by adding a closing ] and make it like this: [{"site":"1a2v_1","pfam":"Cu_amine_oxid","uniprot":"P12807"},{"site":"1a2v_2","pfam":"Cu_amine_oxid","uniprot":"P12807"},{"site":"1a2v_3","pfam":"Cu_amine_oxid","uniprot":"T12807"},{"site":"1a2v_4","pfam":"Cu_amine_oxid","uniprot":"P12808"},{"site":"1a2v_5","pfam":"Cu_amine_oxid","uniprot":"Z12809"},{"site":"1a2v_6","pfam":"Cu_amine_oxid","uniprot":"P12821"},{"site":"1a3z_1","pfam":"Copper-bind,SoxE","uniprot":"P0C918"}] You can do: $ jq -r '.[].uniprot' fileP12807P12807T12807P12808Z12809P12821P0C918 | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/654337",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/357294/"
]
} |
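Where grep lacks -P (it is a GNU extension that is not always compiled in), plain sed can do the same extraction; a hedged sketch assuming, as above, one JSON object per line:

sed -n 's/.*"uniprot":"\([^"]*\)".*/\1/p' file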
654,406 | I have a rather strange problem on two of my notebooks one running Manjaro Linux (Arch for children) and the other Ubuntu 20.10. When I use Jshell the read-eval-print loop tool for Java 11 , I can't paste into Jshell , not with the mouse and not with ctrl + p I made a little video demonstrating the problem. It only happens in Jshell, normal bash isn't affected. ( echo command at the beginning of the first video works fine) https://www.mediafire.com/file/xjy9i8np16zfuit/Peek+2021-06-15+18-03.mp4/file (under 1 MB big) I made another recording that shows that in ether xfce4 terminal or st terminal after pasting a string of characters into jshell it freezes, until typing 17 characters into the seemingly frozen jshell, when the pasted text appears plus the typed in characters after the freeze. (if i use letters instead of numbers like in the video the output looks like this: jshell> System.out.println("This is a Test...")abcdefghijklmnopqrsin both st and xfce4 terminal https://www.mediafire.com/file/m2asx0y5tatnj89/Peek+2021-06-15+18-36.mp4/file (1.3 MB) Used Java Version on both machines is: openjdk 11.0.11 2021-04-20OpenJDK Runtime Environment (build 11.0.11+9)OpenJDK 64-Bit Server VM (build 11.0.11+9, mixed mode) If this should be a question for a Java board, could you point one out to me? | This might be due to the issue: https://bugs.openjdk.java.net/browse/JDK-8242919 Trying to paste to jshell causes a deadlock. This was fixed in Java 15 a while ago, but only recently backported to 11u (should be fixed in 11.0.12) | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/654406",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/477586/"
]
} |
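Until a JDK containing the fix (11.0.12 or later, per the report above) is in place, one way to sidestep the paste deadlock is to load the snippet from a file instead of pasting it; the file name here is made up:

printf '%s\n' 'System.out.println("This is a Test...")' > snippet.jsh
jshell snippet.jsh        # load files given on the command line are run at startup
# or, from inside an already running session:  /open snippet.jsh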
654,451 | I've set some Spotify UI settings in /Users/username/Library/Application Support/Spotify/prefs that I'd like to keep. I'm having an issue where the application overwrites this file every time it launches. I've tried to prevent this from happening with chmod a-w prefs and running ll returns that its permissions are -r--r--r-- with my username as the owner and staff as the group. When I start Spotify, it resets the file to default and changes the permission back to -rw-r--r-- . I'm never asked for my sudo password during this. How is this happening? | The files and its directory belong to your user. So the application running as your user has access to do what it likes with them. In this context, the most likely thing is that Spotify is deleting completely re-writing the file. This requires write permission on the directory, not the file. You could try to remove all write permissions (and even chown root ... it) from the parent directory: chmod 555 '/Users/username/Library/Application Support/Spotify' This might cause other problems with the application, but unfortunately there is very little you can do to prevent the app re-writing the file. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/654451",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/344393/"
]
} |
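The /Users path suggests macOS, so one more thing that might be worth trying (an untested assumption, not something from the answer above) is the BSD user-immutable flag, which makes the file itself refuse writes, renames and deletion until the flag is cleared:

chflags uchg '/Users/username/Library/Application Support/Spotify/prefs'
# undo later with
chflags nouchg '/Users/username/Library/Application Support/Spotify/prefs'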
654,469 | I have many files in a directory, and I want to remove all but one of the files that have the same prefix. For example, I have the files with the pattern filename.__<random_string>.pdf , (filename can be any string of some length) foo.__.pdffoo.__resume.pdf foo.__name.pdfbar.__.pdfbar.__resume.pdfbar.__name.pdf Now from them I only want one of the three files which have the same prefix, i.e, I only want either of the first three files and either one of the last three. For example, the directory should contain, foo.__.pdfbar.__.pdf Answer with any of the scripting language or shell is accepted. | #!/bin/bashdeclare -A seenfor name in *.__*.pdf; do prefix=${name%%.__*.pdf} if [[ -z ${seen[$prefix]} ]]; then printf 'keeping "%s"\n' "$name" seen[$prefix]=1 else printf 'deleting "%s"\n' "$name" # rm -f -- "$name" fidone The script above extracts the prefix from each filename matching the filename globbing pattern *.__*.pdf in the current directory. If the prefix has not been seen before, the file is kept. Otherwise the file is deleted (the rm command is currently commented out for safety). To track what prefixes have been seen, they are stored as keys in an associative array called seen . Associative arrays were introduced in bash release 4. Since any file matching *.__*.pdf with the same prefix is "equivalent", just renaming all those files to the same name would reduce them down to a single file. This does not require an associative array and can easily be done by /bin/sh : #!/bin/shfor name in *.__*.pdf; do prefix=${name%%.__*.pdf} printf 'moving "%s" to "%s.__.pdf"\n' "$name" "$prefix" # mv -f -- "$name" "$prefix.__.pdf"done Here, all files with the prefix foo are moved to the name foo.__.pdf (the mv command is commented out for safety). | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/654469",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/476324/"
]
} |
654,483 | I'm trying to add to my current lines of Code if possible to Continue to Open my webpage in Kiosk on Chromium during startup, which I've managed with the code below, however I'm trying to make a Username and Password be entered automatically, and press the login button into the Webpage that follows.This is what I have so Far, Stored in /home/pi/kiosk.sh #!bin/bashxset s noblankxset s offxset -dpmsunclutter -idle 0 -root &chromium-browser --noerrdiaglogs --disable-infobars --kiosk https://192.168.0.1/webconsole I then have another set of Code stored in SystemD that I've enabled so it Executes on Startup. located as: /lib/systemd/system/kiosk.service: [Unit]Description=Chromium KioskWants=graphical.targetAfter=graphical.target[Service]Environment=DISPLAY=:0.0Environment=XAUTHORITY=/home/pi/.XauthorityType=simpleExecStart=/bin/bash /home/pi/kiosk.shRestart=on-abortUser=piGroup=pi[Install]WantedBy=graphical.target This all works great, However my only issue is trying to add something to make my login details for this page open automatically. Any Advice? I tried looking into cURL but have no idea with it. And sometimes I'd get a SSL error, which I assume is because the Internal webpage won't have a security Certificate.Thanks for anyone's time who reads this. | #!/bin/bashdeclare -A seenfor name in *.__*.pdf; do prefix=${name%%.__*.pdf} if [[ -z ${seen[$prefix]} ]]; then printf 'keeping "%s"\n' "$name" seen[$prefix]=1 else printf 'deleting "%s"\n' "$name" # rm -f -- "$name" fidone The script above extracts the prefix from each filename matching the filename globbing pattern *.__*.pdf in the current directory. If the prefix has not been seen before, the file is kept. Otherwise the file is deleted (the rm command is currently commented out for safety). To track what prefixes have been seen, they are stored as keys in an associative array called seen . Associative arrays were introduced in bash release 4. Since any file matching *.__*.pdf with the same prefix is "equivalent", just renaming all those files to the same name would reduce them down to a single file. This does not require an associative array and can easily be done by /bin/sh : #!/bin/shfor name in *.__*.pdf; do prefix=${name%%.__*.pdf} printf 'moving "%s" to "%s.__.pdf"\n' "$name" "$prefix" # mv -f -- "$name" "$prefix.__.pdf"done Here, all files with the prefix foo are moved to the name foo.__.pdf (the mv command is commented out for safety). | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/654483",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/477676/"
]
} |
654,484 | I've been tracking down an issue I've been facing in a buildkite script, and here's what I've got: Firstly, I enter the shell of a docker image: docker run --rm -it --entrypoint bash node:12.21.0 This docker image doesn't have any text editors, so I create my shell scripts by concating to a file: touch a.shchmod +x a.shprintf '#!/bin/sh\necho ${1:0:1}' >> a.shtouch b.shchmod +x b.shprintf '#!/bin/bash\necho ${1:0:1}' >> b.sh I now run my scripts: ./a.sh hello>./a.sh: 2: ./a.sh: Bad substitution ./b.sh hello >h Can someone tell me in simple terms what the issue is here? This AskUbuntu question says that bash and sh are different shells, and that in many systems sh will symlink to bash. What's going on specifically on this docker image? How would I know? | /bin/sh is only expected to be a POSIX shell, and the POSIX shell doesn’t know about substrings in parameter expansions . POSIX “defines a standard operating system interface and environment, including a command interpreter (or “shell”)”, and is the standard largely followed in traditional Unix-style environments. See What exactly is POSIX? for a more extensive description. In POSIX-inspired environments, /bin/sh is supposed to provide a POSIX-style shell, and in a script using /bin/sh as its shebang, you can only rely on POSIX features (even though most actual implementations of /bin/sh provide more). It’s perfectly OK to rely on a more advanced shell, but the shebang needs to be adjusted accordingly. Since your script relies on a bash feature, the correct shebang is #!/bin/bash (or perhaps #!/usr/bin/env bash ), regardless of the environment it ends up running in. It may happen to work in some cases with #!/bin/sh , but that’s just a happy accident. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/654484",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/209769/"
]
} |
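If the script genuinely has to keep #!/bin/sh, the same "first character of $1" effect is available with POSIX-only constructs; a small hedged sketch:

#!/bin/sh
# ${1#?} is $1 minus its first character; stripping that as a suffix leaves just the first character
first=${1%"${1#?}"}
echo "$first"
# printf can do the same by truncating the string to one character
printf '%.1s\n' "$1"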
654,566 | I want to know if a particular environment variable is set or not, from the command line. I need to distinguish between it being set to a blank string (or just whitespace) and not set at all, so I'd like to get a definitive True/False or Yes/No, not just printing nothing if it's not set. I know that in a script, I can use -z , but I'm not sure if/how I can do it directly from the command line. I tried this: $ echo -z "${MY_URDSFDFS}" But it just prints: -z | Use the ${ VAR + TEXT } form of parameter expansion . ${VAR+1} is empty if VAR is unset and 1 is VAR is set, even if it's empty. (Whereas ${VAR:+1} is empty if VAR is unset or empty, and 1 if VAR is set to a non-empty value.) if [ -n "${MY_URDSFDFS+1}" ]; then echo "\$MY_URDSFDFS is set"else echo "\$MY_URDSFDFS is not set"fi This works in any POSIX or Bourne shell, i.e. in all modern and even most ancient dialects of sh. This does not distinguish between environment variables and shell variables. If you must reject non-exported shell variables, you can do the test in a separate process: sh -c 'test -n "${MY_URDSFDFS+1}"' . This is rarely a concern though. Please note that this is usually a bad idea, because sometimes it's inconvenient to unset a variable and sometimes it's inconvenient to set it to an empty string. Usually an empty environment variable should be treated identically to an unset one. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/654566",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/337148/"
]
} |
654,689 | While setting up docker on Ubuntu 20.04 I did sudo usermod -G docker $USER . As noted in related questions here, I missed the -a flag and replaced all secondary groups. However, I didn't realize this until after I rebooted my machine. This is a single-user work station. I could fix this with root , but I don't have the password. How do I restore the proper groups without root access? The only one that causes a problem now is sudo , but I'm sure others will crop up. Can I do anything without reinstalling Ubuntu from scratch? | You still have one group left: docker . That means you still have control over the docker daemon. This daemon can run a container with the host's root filesystem mounted and then the container can edit files ( vi is available in busybox ) or simpler: can chroot to the host's filesystem. Download a minimal busybox image: myuser@myhost:~$ docker pull busyboxUsing default tag: latestlatest: Pulling from library/busyboxb71f96345d44: Pull complete Digest: sha256:930490f97e5b921535c153e0e7110d251134cc4b72bbb8133c6a5065cc68580dStatus: Downloaded newer image for busybox:latestdocker.io/library/busybox:latest Run a container with this image interactively and in privileged mode (in case AppArmor would block the chroot command later without it): $ docker run -it --mount type=bind,source=/,target=/host --privileged busybox Continue with interactive commands from the container. You can simply chroot to the mount point to "enter" the root filesystem and get all Ubuntu commands: / # chroot /host Use adduser which is a simpler wrapper around useradd : root@74fc1b7903e5:/# adduser myuser sudoAdding user `myuser' to group `sudo' ...Adding user myuser to group sudoDone.root@74fc1b7903e5:/# exitexit/ # exit Either logout and relog, or change group manually: myuser@myhost$ sg sudo And root access is restored: myuser@myhost$ sudo -i[sudo] password for myuser:root@myhost# Conclusion: be very prudent when allowing remote access to Docker (through port 2375/TCP). It means root access by default. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/654689",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/21888/"
]
} |
654,700 | I am trying to rename the column headers of a large file, and I want to know the most efficient way to do so. Files are in the range of 10M to 50M lines, with ~100 characters per line in 10 columns. A similar question was asked to remove the first line, and the best answer involved "tail". Efficient in-place header removing for large files using sed? My guess is: bash-4.2$ seq -w 100000000 1 125000000 > bigfile.txtbash-4.2$ tail -n +2 bigfile.txt > bigfile.tail && sed '1 s/^/This is my first line\n/' bigfile.tail > bigfile.new && mv -f bigfile.new bigfile.txt; Is there a faster way? | You still have one group left: docker . That means you still have control over the docker daemon. This daemon can run a container with the host's root filesystem mounted and then the container can edit files ( vi is available in busybox ) or simpler: can chroot to the host's filesystem. Download a minimal busybox image: myuser@myhost:~$ docker pull busyboxUsing default tag: latestlatest: Pulling from library/busyboxb71f96345d44: Pull complete Digest: sha256:930490f97e5b921535c153e0e7110d251134cc4b72bbb8133c6a5065cc68580dStatus: Downloaded newer image for busybox:latestdocker.io/library/busybox:latest Run a container with this image interactively and in privileged mode (in case AppArmor would block the chroot command later without it): $ docker run -it --mount type=bind,source=/,target=/host --privileged busybox Continue with interactive commands from the container. You can simply chroot to the mount point to "enter" the root filesystem and get all Ubuntu commands: / # chroot /host Use adduser which is a simpler wrapper around useradd : root@74fc1b7903e5:/# adduser myuser sudoAdding user `myuser' to group `sudo' ...Adding user myuser to group sudoDone.root@74fc1b7903e5:/# exitexit/ # exit Either logout and relog, or change group manually: myuser@myhost$ sg sudo And root access is restored: myuser@myhost$ sudo -i[sudo] password for myuser:root@myhost# Conclusion: be very prudent when allowing remote access to Docker (through port 2375/TCP). It means root access by default. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/654700",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/249967/"
]
} |
654,725 | Let's say I have a JSON like the following: { "key1": { "keyA": "1", "keyB": "null", "keyC": "null" }, "key2": { "keyA": "null", "keyB": "3", "keyC": "null" }} I'd like to find a way of excluding all keys with the value null on my JSON. So the result would be the following: { "key1": { "keyA": "1" }, "key2": { "keyB": "3" }} I know I can exclude specific keys with their names using jq , e.g. cat myjson.json | jq 'del(.[]|.keyA)' , this will erase all my keyA keys inside the json, but I want to exclude the keys according to their values... How can I exclude all keys with value "null" using jq ? | del(..|select(. == "null")) This uses the recursive-descent operator .. and select function to find all the locations anywhere in the object with values that are equal to "null" and gives them to del . select evaluates a boolean expression to decide whether to include a particular value or not, while .. gives every single value in the tree to it, and del accepts any expression that produces locations, which this does. ( demo ) You can use the path function to check what it's finding: path(..|select(. == "null")) and debug what it thinks you're trying to delete first. The output is an array of keys to follow recursively to reach the value, and so the final item in the array is the key that would actually be deleted. You can also use update-assignment with |= empty in jq 1.6 and up (it silently fails in earlier versions). You may or may not find this clearer: (..|select(. == "null")) |= empty ( demo ) deletes those same keys. If your values are true null s, rather than the string "null" , you can use the nulls builtin function in place of the whole select : del(..|nulls) . If your values are true nulls and you're using jq 1.5 (the current version in many distributions), you'll need to use recurse(.[]?; true) in place of .. . This is because null values are specifically excluded from .. (because it's defined as recurse , which is defined as recurse(.[]?) , which is defined as recurse(.[]?; . != null) ). This behaviour changed to the (more useful) one above in 1.6, though the documentation hasn't changed, which appears to be a documentation bug . | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/654725",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/444310/"
]
} |
654,731 | I have an Epson ET-2756 printer. I'm able to print easily with it, but it took me a long time to understand why under Debian 10 my computer wasn't able to detect its scanner part. Eventually, I've found why : the scanimage command (and then epsonscan2 specifically installed for the printer) are only able to detect its scanner provided they are ran with a sudo . I wonder why... And especially, I would like to remove this prerequisite. How may I remove the need of a sudo to perform a scan? Experience suggested by cas, below : # I look already registered as a scanner group membercat /etc/group | grep scannerscanner:x:117:saned,lebihan# But this command fails:scanimage --format=png >/tmp/test.pngscanimage: no SANE devices found# While this one succeeds:sudo scanimage --format=png >/tmp/test.png | del(..|select(. == "null")) This uses the recursive-descent operator .. and select function to find all the locations anywhere in the object with values that are equal to "null" and gives them to del . select evaluates a boolean expression to decide whether to include a particular value or not, while .. gives every single value in the tree to it, and del accepts any expression that produces locations, which this does. ( demo ) You can use the path function to check what it's finding: path(..|select(. == "null")) and debug what it thinks you're trying to delete first. The output is an array of keys to follow recursively to reach the value, and so the final item in the array is the key that would actually be deleted. You can also use update-assignment with |= empty in jq 1.6 and up (it silently fails in earlier versions). You may or may not find this clearer: (..|select(. == "null")) |= empty ( demo ) deletes those same keys. If your values are true null s, rather than the string "null" , you can use the nulls builtin function in place of the whole select : del(..|nulls) . If your values are true nulls and you're using jq 1.5 (the current version in many distributions), you'll need to use recurse(.[]?; true) in place of .. . This is because null values are specifically excluded from .. (because it's defined as recurse , which is defined as recurse(.[]?) , which is defined as recurse(.[]?; . != null) ). This behaviour changed to the (more useful) one above in 1.6, though the documentation hasn't changed, which appears to be a documentation bug . | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/654731",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/350549/"
]
} |
654,735 | I have a log file that contains the following content. 2021-06-15T22:50:11+00:00 DEBUG {"slug": "something", "key2": "value2"} I would like to tail -f this file and pipe the results to jq command, but I need to strip out 2021-06-15T22:50:11+00:00 DEBUG part before piping to jq since jq expects a JSON string. Is there a way to tail the log file and strip the datetime part at the same time? Ultimately, I would like to use the following command. tail -f :file | jq | Assuming you have access to GNU sed which is able to do unbuffered output: tail -f file | sed -u 's/^[^{]*//' | jq . This would run tail -f on your file and continuously send new data to sed . The sed command would strip everything up to the space before the first { on the line, and then send the result on to jq . The -u option to GNU sed makes it not buffer the output. Without this option, sed would buffer the result and would only send data to jq once the buffer (4 Kb?) was full. Doing buffering like this is standard procedure when the output of a tool is not the terminal itself, and it's done for efficiency reasons. In this case, we may want to turn the buffering off, so we use -u . To select only lines that contain the DEBUG string before the JSON data: tail -f file | sed -u -e '/^[^{]*DEBUG /!d' -e 's///' | jq . or tail -f file | sed -u -n 's/^[^{]*DEBUG //p' | jq . The sed command here would delete all lines that do not start with some text not containing { characters, ending in DEBUG . If such a line is found, the matched text is removed, leaving the JSON data. Note that we here extract the JSON based on the DEBUG string rather than the { that initiates a JSON object. Related to buffering in pipelines: Turn off buffering in pipe | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/654735",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/439474/"
]
} |
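If the sed at hand has no -u (it is a GNU extension), GNU coreutils' stdbuf can usually force the same line-buffered behaviour from the outside; a hedged sketch:

tail -f file | stdbuf -oL sed 's/^[^{]*//' | jq .

This works because sed's buffering here comes from stdio defaults, which stdbuf overrides for the child it launches.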
654,781 | I'm running Windows 10, and have BASH terminal in VisualStudioCode. My problem is that commands such as LS do not work. After some googling, i found that using this command fixes it: export PATH=/usr/bin:/bin:/usr/sbin:/sbin:/usr/local/bin:/usr/X11/bin This however breaks some other stuff, and so, I copied output of echo $PATH , amalgamated the result with above-mentioned export command, and saved it into the file. Pasting resulting command into terminal fixes everything. And I have to do that every single time I open new terminal, which is awkward. Is there any way to add parts from the first export command to PATH? I know about "Edit the enviroment variable" option in windows, but either that does not work, or i'm doing it wrong, so telling me how to apply, eg. /usr/bin in there so that it works the same way as if I entered export PATH=/usr/bin into command line would help. Eventually, is there perhaps a way to autorun specific command each time new terminal is opened? That would help too. | Assuming you have access to GNU sed which is able to do unbuffered output: tail -f file | sed -u 's/^[^{]*//' | jq . This would run tail -f on your file and continuously send new data to sed . The sed command would strip everything up to the space before the first { on the line, and then send the result on to jq . The -u option to GNU sed makes it not buffer the output. Without this option, sed would buffer the result and would only send data to jq once the buffer (4 Kb?) was full. Doing buffering like this is standard procedure when the output of a tool is not the terminal itself, and it's done for efficiency reasons. In this case, we may want to turn the buffering off, so we use -u . To select only lines that contain the DEBUG string before the JSON data: tail -f file | sed -u -e '/^[^{]*DEBUG /!d' -e 's///' | jq . or tail -f file | sed -u -n 's/^[^{]*DEBUG //p' | jq . The sed command here would delete all lines that do not start with some text not containing { characters, ending in DEBUG . If such a line is found, the matched text is removed, leaving the JSON data. Note that we here extract the JSON based on the DEBUG string rather than the { that initiates a JSON object. Related to buffering in pipelines: Turn off buffering in pipe | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/654781",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/477994/"
]
} |
654,784 | I need to generate several thousand qr codes of simple IDs (1234, 1235, 1236, ...) and want to be able to also make them human readable. qrencode is a really cool tool to generate qr-codes, BUT no way to add a subtitle there. Any ideas? | Assuming you have access to GNU sed which is able to do unbuffered output: tail -f file | sed -u 's/^[^{]*//' | jq . This would run tail -f on your file and continuously send new data to sed . The sed command would strip everything up to the space before the first { on the line, and then send the result on to jq . The -u option to GNU sed makes it not buffer the output. Without this option, sed would buffer the result and would only send data to jq once the buffer (4 Kb?) was full. Doing buffering like this is standard procedure when the output of a tool is not the terminal itself, and it's done for efficiency reasons. In this case, we may want to turn the buffering off, so we use -u . To select only lines that contain the DEBUG string before the JSON data: tail -f file | sed -u -e '/^[^{]*DEBUG /!d' -e 's///' | jq . or tail -f file | sed -u -n 's/^[^{]*DEBUG //p' | jq . The sed command here would delete all lines that do not start with some text not containing { characters, ending in DEBUG . If such a line is found, the matched text is removed, leaving the JSON data. Note that we here extract the JSON based on the DEBUG string rather than the { that initiates a JSON object. Related to buffering in pipelines: Turn off buffering in pipe | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/654784",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/296500/"
]
} |
654,845 | I want to find all files with acl set.I know this dirty solution: for find all files with acl set in dir. /etc sudo ls -lhR /etc|grep + Someone know a more elegant solution? | Easy and elegant is quite a high bar to reach and your dirty solution fails if you have filenames with + in it (say c++). The alternative is using getfacl recursively, skipping files that don't have ACL getfacl -Rs /your/dir | grep "# file:" That will list them and the grep keeps just the filenames. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/654845",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/80389/"
]
} |
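To get bare path names (one per line) instead of the "# file:" lines, the prefix can be stripped in the same pass; a hedged sketch that also adds getfacl's -p so absolute paths keep their leading slash:

getfacl -Rsp /etc 2>/dev/null | sed -n 's/^# file: //p'

The 2>/dev/null only hides "Permission denied" noise from directories the user cannot read; drop it if those messages are wanted.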