Dataset schema:

source_id: int64 (values 1 to 74.7M)
question:  string (lengths 0 to 40.2k)
response:  string (lengths 0 to 111k)
metadata:  dict
Is there any command line tool available on Linux to check a PDF file version?
Yes. The file command covers this:

$ file x1.pdf
x1.pdf: PDF document, version 1.7
$

To get greater detail, you could try pdfinfo, part of poppler-utils.

$ pdfinfo x1.pdf
Title:          Full page photo
Author:         steve
Producer:       Microsoft: Print To PDF
CreationDate:   Fri Apr  5 10:14:34 2019
ModDate:        Fri Apr  5 10:14:34 2019
Tagged:         no
UserProperties: no
Suspects:       no
Form:           none
JavaScript:     no
Pages:          9
Encrypted:      no
Page size:      841.5 x 594.75 pts
Page rot:       0
File size:      5424973 bytes
Optimized:      no
PDF version:    1.7
$
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/592706", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/-1/" ] }
592,720
I would like to check in Bash (Linux) whether the file that a file descriptor points to has been deleted. I have read both "Testing if a filedescriptor is valid" and "Testing if a filedescriptor is valid (for input)", but those answers did not help in this slightly different question. I use the following test case:

# create file
echo hello > /tmp/test.txt
# open read-only fd
exec 3< /tmp/test.txt
# delete file
rm /tmp/test.txt
# special zero-timeout to check if data available for reading
if read -u 3 -t 0
then
    echo "data available for reading"
else
    echo "no data available"
fi
# close fd (clean up)
exec 3<&-

This script surprisingly indicates "data available for reading". However, the file does not exist anymore, so there has to be some caching/buffering going on. Perhaps there is another way, or a way to avoid the buffer/cache? An alternative that does work is:

ls -l /proc/$$/fd/3

which will indicate -> '/tmp/test.txt (deleted)'. But I would prefer to stick to a pure Bash solution (without spawning too many new processes, or parsing stdout). Note that in any other circumstance, one could of course just use [ -e /tmp/test.txt ] to check. However, I need to know if the original file was deleted, because meanwhile a new file with the exact same filename may have been created. For those who wonder why anyone would need this particular result (XY problem): it can be used to safely check from a subshell (with &) if the parent script is still running, by opening an extra fd to /proc/$$/cmdline with protection against a collision with a recycled PID.
Your original file exists completely unchanged. Once a file has been opened by name, the file descriptor your process holds counts as a link to the file. The system does not release the file or its space until all links have been deleted: those can be any number of processes that have a file description open for it, plus any number of hard links. You could stat the file at the time it was opened, and stat the current file by name. If they are different inodes or a different modification date, you have a deleted file and there is a new file. Or you might find you have a deleted file but no new one exists.
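A minimal Bash sketch of that comparison, assuming GNU stat (note it spawns a process per check, which the question hoped to avoid):

exec 3< /tmp/test.txt
orig=$(stat -Lc '%d:%i' /proc/$$/fd/3)   # device:inode at open time

# ... later: does the name still point at the same file?
if [ "$(stat -Lc '%d:%i' /tmp/test.txt 2>/dev/null)" = "$orig" ]; then
    echo "name still points at the original file"
else
    echo "original file was deleted (the name may now be a new file)"
fi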
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/592720", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/140353/" ] }
592,911
As far as I know, ping needs to create a raw socket (which needs either root access or the cap_net_raw capability). From my understanding, the trend these last years has been to remove setuid binaries and replace them with capabilities. However, when I look at the ping binary on my Fedora 32, it doesn't look to have any:

$ ls -la $(which ping)
-rwxr-xr-x. 1 root root 82960 May 18 10:26 /usr/bin/ping
$ sudo getcap -v $(which ping)
/usr/bin/ping
$

Does ping need to open a raw socket on Fedora? Or is there another way to give it the permission to open a raw socket?
I think https://fedoraproject.org/wiki/Changes/EnableSysctlPingGroupRange answers your question:

    Enable the Linux kernel's net.ipv4.ping_group_range parameter to cover all groups. This will let all users on the operating system create ICMP Echo sockets without using setuid binaries, or having the CAP_NET_ADMIN and CAP_NET_RAW file capabilities.

Cross-reference detail:

    Targeted release: Fedora 31
    Last updated: 2019-08-13
    Tracker bug: #1740809
    Release notes tracker: #376

The sysctl documentation writes,

    ping_group_range - 2 INTEGERS
    Restrict ICMP_PROTO datagram sockets to users in the group range. The default is "1 0", meaning that nobody (not even root) may create ping sockets. Setting it to "100 100" would grant permissions to the single group. "0 4294967295" would enable it for the world, "100 4294967295" would enable it for the users, but not daemons.

An older code example demonstrates the use of this feature, and in particular shows that a socket is created with the IPPROTO_ICMP flag to identify that it will be used for ICMP:

int sock = socket(AF_INET, SOCK_DGRAM, IPPROTO_ICMP);
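To check or set the parameter on a given system (standard sysctl usage; the exact default output varies by release):

$ sysctl net.ipv4.ping_group_range      # e.g. "0 2147483647" on Fedora 31 and later
$ sudo sysctl -w net.ipv4.ping_group_range='0 4294967295'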
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/592911", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/416285/" ] }
592,916
I want to enumerate numbers (in the range 1 to 10^136) but so far, most tools and tricks I tried would struggle after 10^9 numbers or so... Here are some examples (with smaller ranges). For seq 99999:

real    0m0.056s
user    0m0.004s
sys     0m0.051s

For seq 9999999:

real    1m5.550s
user    0m0.156s
sys     0m6.615s

So on and so forth... The thing is, it starts to struggle only after the 9th digit or so (in this case, 999999999) and onward. I thought of splitting them into smaller ranges and running them in parallel:

cat <(seq 000 999) <(seq 999 1999) <(seq 1999 2999) <(seq 2999 3999) <(seq 3999 4999) <(seq 4999 5999) <(seq 5999 6999) <(seq 6999 7999) <(seq 7999 8999) <(seq 8999 9999) <(seq 9999 10999) <(seq 10999 11999) <(seq 11999 12999) <(seq 12999 13999) <(seq 13999 14999)

real    0m0.258s
user    0m0.008s
sys     0m0.908s

which is considerably slower (especially with bigger ranges as it goes on) than

seq 000 14999

real    0m0.042s
user    0m0.000s
sys     0m0.041s

I tried a Perl script I found on SO:

#!/usr/bin/perl
use strict;
if ( ($#ARGV+1)!=2 ) { print "usage $0 \n"; }
my @r = &bitfizz( $ARGV[0], $ARGV[1] );
for(@r){ print "$_\n"; }
sub bitfizz() {
    $_[0]=join( ",", split(//, $_[0] ) );
    for( my $i=1; $i<=$_[1]; $i+=1 ) { $_=$_."{$_[0]}"; }
    @r=glob( $_ );
}

with perl script.pl "0123456789" 10*. But while it seemed faster than seq (when doing anything less than 10^10), it still struggles and seems like it would take forever to complete... I don't need to write the enumerated numbers to a file, but I do need them on stdout, so that I can process them. EDIT: @Isaac mentioned in his answer (and in the comment) something that could work, and while it does go through 10^10 much faster than anything else mentioned, it still struggles for any range bigger than 10^10 (and by extension, 10^136). Worth mentioning since it was mentioned as a possible duplicate of this post (which it technically isn't). How do I enumerate from 0 to 10^136 faster than GNU seq does?
It's impossible. There's no way to enumerate 10^136 numbers at all, no matter how you cut it. There are about 10^80 atoms in the observable universe, in total. Imagine the limit of total parallelization where you manage to harness the power of every single atom out there. If every atom takes 100 picoseconds to spit out a single number (which means a clock frequency of 10 GHz, so way faster than current computers), you get 10^90 numbers per second. Enumerating your sequence at this incredible pace will still take 10^46 seconds, or about 10^38 years. I don't think you're prepared to wait for that long.
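A back-of-the-envelope check of those figures (awk's default numeric output rounds to six significant digits):

$ awk 'BEGIN { print 10^136 / (10^80 * 10^10) }'   # numbers / (atoms * rate) = seconds
1e+46
$ awk 'BEGIN { print 10^46 / (60*60*24*365) }'     # seconds -> years
3.17098e+38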
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/592916", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/409852/" ] }
592,918
I enjoy using squashfs for compression because of the simplicity of mounting the images as loop devices to access the files inside. I have a lot of rar, tgz and zip files that I would like to convert to squashfs. In this answer, I saw that it is possible to use a pseudo file when compressing a disk image to squashfs, to avoid having to use a temporary file the size of the whole disk:

mkdir empty-dir
mksquashfs empty-dir squash.img -p 'sda_backup.img f 444 root root dd if=/dev/sda bs=4M'

I would like to use pseudo files to convert from rar, tgz or zip to squashfs in the same way (on the fly), so I don't have to first extract the whole archive to disk and then compress to squashfs in a separate operation. Some of these archives contain thousands of individual files, some of which will have spaces or other special characters in their filenames. I looked at the README, and I think I would need to use the -pf <pseudo-file> option, but I'm not sure how to create the pseudo file on the fly (and also not have problems with filenames with spaces). I think I would need to use process substitution to create the list of files from the source archive. Ideally I would like to have a command that is able to convert any rar, tgz or zip without having to individually create the pseudo file for each archive, but if anyone can tell me how I can do it with one of those archive formats, then hopefully I can work it out for the others. Thanks everyone.
You could mount them with fuse-zip or archivemount and then create the squashfs file from the mount point. For example, this would work for a zip file:

$ mkdir /tmp/zmnt
$ fuse-zip -r /path/to/file1.zip /tmp/zmnt
$ mksquashfs /tmp/zmnt /path/to/file1.squashfs
$ fusermount -u /tmp/zmnt
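For tar-based archives, the same pattern should work with archivemount (a sketch; archivemount covers the formats libarchive supports, and -o readonly is optional):

$ mkdir /tmp/amnt
$ archivemount -o readonly /path/to/file1.tgz /tmp/amnt
$ mksquashfs /tmp/amnt /path/to/file1.squashfs
$ fusermount -u /tmp/amnt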
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/592918", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/52247/" ] }
592,922
I have directories of media structured as follows:

~ $ tree baz
baz
├── Ajin [Season 1]
│   ├── Ajin Demi-Human - 01 - A Topic That Has Nothing to Do with Us.mkv
│   ├── Ajin Demi-Human - 02 - Why Is This Happening to Me I Didn`t Do Anything Wrong!.mkv
...snip...
├── Btooom!
│   ├── Btooom! - 01 - Start.mkv
│   ├── Btooom! - 02 - The Bloodstained High School Girl.mkv
...snip...
└── Claymore [Dual Audio]
    ├── checksums.md5
    ├── Claymore - 01 - Great Sword.mkv
    ├── Claymore - 02 - The Black Card.mkv
    ├── Claymore - 03 - The Darkness in Paradise.mkv
...snip...

3 directories, 53 files

I need to traverse the baz directory and, for each subdirectory, create a sub-subdirectory named "Season 01" and place all files (and any directories) into the newly created "Season 01" sub-subdirectory. E.g. the above would become:

baz
├── Ajin [Season 1]
│   ├── Season 01
│   │   ├── Ajin Demi-Human - 01 - A Topic That Has Nothing to Do with Us.mkv
│   │   ├── Ajin Demi-Human - 02 - Why Is This Happening to Me I Didn`t Do Anything Wrong!.mkv
...snip...
etc.

I was trying to use find to accomplish this, but I could not figure out how to structure the -exec command to create the "Season 01" directory and move the contents of the directory to the new directory. Thanks
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/592922", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/417970/" ] }
592,982
If I use bash's brace expansion I get a list:

echo item={one,two,three}
item=one item=two item=three

Assuming I am in a directory with files/folders that would match a wildcard, is there any way to have an expansion that matches these files/folders?

ls
blue green red
echo item=*    # Obviously not
item=*
echo item={*}  # Maybe? ...but no
item={*}

In my example I would like the expansion to be

item=blue item=green item=red

The best I've got so far is code like this:

items=()
for dirent in *; do items+=("item=$dirent"); done
echo "${items[@]}"
item=blue item=green item=red
You can store the current directory’s files in an array as follows:

items=(*)

Then parameter expansion can be used to add the prefix:

printf "%s\n" "${items[@]/#/item=}"

You can use this to update the array:

items=("${items[@]/#/item=}")
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/592982", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/100397/" ] }
593,097
Consider the following script [1]:

#!/bin/bash
bash <<END
printf "%s\n" "$@"
END

Surprisingly (to me), the $@ inside the heredoc gets expanded first, and then quoted, so this is the output:

$ /tmp/test.sh "a a" b c
a a b c

If I remove the quotes, I get the simplistic $@ behavior:

$ /tmp/test.sh "a a" b c
a
a
b
c

Neither option gives the desired output. I tried using an array like args=("$@"), but then the expansion behaves the same way. I tried placing the "result" in a variable like args="$@", but that just gives me back "a a b c". Any ideas how to get the output to be like the following?

a a
b
c

[1] Obviously, this is very simplified. In reality I'm using the heredoc to execute a script with docker run, but it is immaterial to this question.
In a here document, the shell performs "dollar substitutions" ($foo, $(some command), `some command`, $((x+1)), and backslash to protect \`$). Here documents are expanded like double-quoted strings. To avoid this, use a quote on the end marker for the here document, which then behaves like a single-quoted string (no expansion at all):

bash <<'END'
printf "%s\n" "$@"
END

This won't do what you want since "$@" is expanded by the inner bash and it doesn't receive any arguments. The easy direct solution, which may or may not work in your real use case, is to forward the arguments to the inner bash:

bash /dev/stdin "$@" <<'END'
printf "%s\n" "$@"
END

If you can't pass arguments to the inner bash in your real use case, you may be able to pass them through environment variables. But you can only pass one string per environment variable, so this isn't convenient if you have a variable number of strings to pass.

export widget_count="$1" data_source="$2" frobnicator="$3"
…
bash <<'END'
printf "%s\n" "$widget_count" "$data_source" "$frobnicator"
END

As a last resort, you may need to quote the values to pass them through. To quote a string for shell syntax:

Replace ' by '\''.
Add ' at the beginning and ' at the end.
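For completeness, a minimal sketch of that last-resort quoting applied to the heredoc case (assuming no argument has trailing newlines, which command substitution would strip):

quoted=
for arg in "$@"; do
    # wrap each argument in single quotes, turning embedded ' into '\''
    quoted="$quoted '$(printf '%s' "$arg" | sed "s/'/'\\\\''/g")'"
done
bash <<END
set -- $quoted
printf "%s\n" "\$@"
END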
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/593097", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/532/" ] }
593,181
Whenever I sold a drive, I've zeroed it once with shred from a live environment:

sudo shred -vzn 0 /dev/sdX

Before that, I double-checked it wasn't mounted. This is the fastest way to securely erase a drive I know of. Now I've heard it's bad for SSDs. Is there a way to securely erase an SSD that's as fast or faster? From a theoretical standpoint I understand that you need to overwrite the whole volume in order to make recovery impossible, so I don't see how there's a way that would put less strain on an SSD. I was told a single pass won't decrease an SSD's life span at all. Would cat /dev/zero > /dev/sdX be as fast? I'm not dealing with sensitive data here and don't need to protect the drive from a knowledgeable person going to great lengths to recover data. Fast is what I need, while not decreasing the SSD's life span. Edit: would this work for an SSD just like for an HDD?

dd if=/dev/urandom of=/dev/sdc bs=1M count=2
"This is the fastest way to securely erase a drive I know of."

For SSDs, no, it's not. blkdiscard /dev/device is dozens of times faster and should be equally safe for your use case.

"Would cat /dev/zero > /dev/sdX be as fast?"

From the look of it, these two commands should be equally fast.

"Fast is what I need, while not decreasing the SSD's life span."

You do decrease your SSD's lifespan even by writing zeros to it. Zeros are still data.
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/593181", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/418209/" ] }
593,212
Can't figure out how to escape everything while using awk. I need to enclose each input string in single quotes, e.g.

input
string1
string2
string3

output
'string1'
'string2'
'string3'

Been fighting with escaping ' " $0 and everything else, and I just cannot make it work. Either $0 is passed to bash directly, or something else happens.
Here are a couple of ways:

use octal escape sequences. On ASCII-based systems where ' is encoded as byte 39 (octal 047), that would be:

$ awk '{print "\047" $0 "\047"}' input
'string1'
'string2'
'string3'

pass the quote as a variable:

$ awk -v q="'" '{print q $0 q}' input
'string1'
'string2'
'string3'
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/593212", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/260833/" ] }
593,366
In Linux Mint 20, if I want to enable snap support to install snap packages, the usual command sudo apt install snapd to install snapd does not work. If I run the command, it throws errors saying the snapd package "is missing or has been obsoleted", "Package snapd is not available" and "Package snapd has no installation candidate."
The above error happens because the APT package manager blocks the installation of snap packages. With Linux Mint 20, the Mint dev team has disabled snap/snapd support by default. Though Linux Mint has never supported snap, it did allow installing the Ubuntu snap store or the snapd open-source client by default in previous releases. Hence, anyone who now wants to install snap apps needs to first enable snap support. To enable snap support on Mint 20, we can do either of two things:

Delete the nosnap.pref file in the directory /etc/apt/preferences.d by running the command:

sudo rm /etc/apt/preferences.d/nosnap.pref

Or comment out the three lines of code in that same file:

Package: snapd
Pin: release a=*
Pin-Priority: -10

Now, install snapd:

sudo apt install snapd

Then, any snap apps:

sudo snap install <app-name>

There is also another method to install snap packages without interfering with the nosnap.pref file: installing the app using a version number:

sudo apt install <app-name> snapd=VERSION
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/593366", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/400073/" ] }
593,577
I have a file of several sections. Each section starts with a specific title, but all of them end with the same string. I want to sort the file's sections according to the titles without sorting the content of each section (i.e. take the whole section as one block). There is also a blank line between every two sections. To clarify the idea, if the input is

string5
zyx
string

string2
xzyf
string

the desired output would be

string2
xzyf
string

string5
zyx
string
Using GNU sed and sort:

sed 's/^$/\x0/g' file | sort -z | tr '\0' '\n'

Put a null character in each empty line
sort using the null character as delimiter (-z)
finally replace the null delimiter with a newline using tr.

To remove empty lines at the first and last line of the output, you may add

| sed '1{/^$/d};${/^$/d}'

Output:

string2
xzyf
string

string5
zyx
string

(maybe someone can help make \x0 work for non-GNU sed, related Question)
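A rough alternative that sidesteps the \x0-in-sed portability issue, assuming GNU awk and GNU sort: paragraph mode (RS="") reads each section as one record, and printf "%c", 0 emits the NUL separator.

awk 'BEGIN{RS=""} {printf "%s\n\n%c", $0, 0}' file | sort -z | tr -d '\0'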
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/593577", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/112129/" ] }
593,622
I am trying to write the message "Ran Cronjob XY" to a logfile when my cronjob has run. Attempt:

/opt/cpanel/ea-php73/root/usr/bin/php /home/company/example.de/bin/magento list 2>&1 | grep -v "Ran Cronjob XY" >> /home/company/example.de/var/log/test.cron.log

But this fails and logs the output of the command /opt/cpanel/ea-php73/root/usr/bin/php /home/company/example.de/bin/magento list instead of just "Ran Cronjob XY". Bonus: show how to print "ran successfully" if the command was executed successfully and "failed" if not.
if some_command >/dev/null 2>&1; then
    echo ran successfully
else
    echo failed
fi >>logfile

The above code will run the command some_command, discard its output, and then append the text "ran successfully" to the file logfile if the command finished successfully. If the command fails, it appends the text "failed" to logfile. In your case, for simplicity (since you have such long pathnames in your commands), I would put this in its own wrapper script and execute that script with a cron job. The script would look like

#!/bin/sh

PATH=/opt/cpanel/ea-php73/root/usr/bin:$PATH
logfile=/home/company/example.de/var/log/test.cron.log

if php /home/company/example.de/bin/magento list >/dev/null 2>&1
then
    echo ran successfully
else
    echo failed
fi >>"$logfile"

I modify PATH in the script to allow running php without an absolute path. That script would then be scheduled:

* * * * * /path/to/thescript.sh

... where the * * * * * should be replaced by the actual schedule. If you'd want to turn this into a "one-liner" for use directly in a crontab entry:

* * * * * if /opt/cpanel/ea-php73/root/usr/bin/php /home/company/example.de/bin/magento list >/dev/null 2>&1; then echo ran successfully; else echo failed; fi >>/home/company/example.de/var/log/test.cron.log

... where the * * * * * should be replaced by the actual schedule.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/593622", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/124191/" ] }
593,699
I am beginning to learn awk and came across something odd when I ran the following commands:

$ echo ":a:b:c:" | awk '$1=$1' FS=":" OFS="$"
$ echo "a:b:c:" | awk '$1=$1' FS=":" OFS="$"
a$b$c$

The first command returns nothing, but I expected it to return $a$b$c$, similar to the second command. And in general, it never prints anything when the field separator is at the beginning of the line. Why so?
In your awk script, printing is triggered as the default action, which in turn depends on the "side-effect" evaluation of the assignment $1=$1 as a pattern. In the first case, there is an empty field before the first separator, so $1 is the empty string, which evaluates as FALSE. In the second case, $1 is the non-empty string a, which evaluates as TRUE, triggering the default print action.
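A small demonstration: move the assignment into an action block and print via the always-true pattern 1, so the empty $1 no longer suppresses the output.

$ echo ":a:b:c:" | awk '{$1=$1} 1' FS=":" OFS="$"
$a$b$c$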
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/593699", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/373708/" ] }
593,758
Having input.csv as follows:

field_name,field_friendly_name
LastNm,Last_Name
cntn_last_mod_wrkr_full_nm,Last_Name
contact_last_nm,Last_Name
contact_first_last_nm,Last_Name
last_english_nm,Last_Name
last_pronunciation_nm,Last_Name
last_nm,Last_Name
lead_space_last_nm,Last_Name
last_mod_usr_nm,Last_Name
lcl_last_nm,Last_Name
adobe_last_topic_nm,Last_Name
last_changed_user_nm,Last_Name
last_purchased_product_service_nm,Last_Name
last_imported_source_nm,Last_Name
submt_last_nm,Last_Name
cntct_last_nm,Last_Name
cust_submt_last_nm,Last_Name
cust_cntct_last_nm,Last_Name
last_mod_by_nm,Last_Name
last_mod_als_nm,Last_Name
last_mod_nm,Last_Name
ship_last_nm,Last_Name
billing_last_nm,Last_Name
last_upd_by_nm,Last_Name
wrkr_last_nm,Last_Name
trns_line_itm_last_chg_psn_nm,Last_Name
trns_line_itm_last_cre_psn_nm,Last_Name
trns_hdr_last_chg_psn_nm,Last_Name
altr_last_nm,Last_Name
trns_last_chg_nm,Last_Name
lastrepaction_nm,Last_Name
last_build_nm,Last_Name
LegalLastNm,Last_Name
ManagerLastNm,Last_Name
4-LastNm,Last_Name
NextLevelManagerLastNm,Last_Name
ManagerLegalLastNm,Last_Name

From this file I would like to filter on column 1, where the condition is: the column 1 value should be made up only of a given set of words, in this case (last, name, nm, lst, -, _, [0-9]), and excluded if it contains any other words. Also, column 2 should be updated to "Found". The search should be case insensitive.

LastNm,Found
last_nm,Found
4-LastNm,Found

I'm using this approach, which doesn't work:

awk -v q="'" --field-separator ',' '((tolower($1) ~ /last/) && (tolower($1) ~ /name/)) || ((tolower($1) ~ /last/) && (tolower($1) ~ /nm/)) && ($2="found") {print $1 "," $2 }' raw.csv
With GNU awk, gensub could be used to remove all those words, and print if the result is empty:

awk -F , -v OFS=, 'gensub(/last|lst|name|nm|[0-9_-]*/,"","g",tolower($1))=="" { $2="found"; print $1, $2 }' file

Unlike sub/gsub, gensub leaves the original record intact and instead returns the resulting string. The same approach could be used with standard awk by copying the field into a variable. To include more characters than [0-9_-], you could use [^[:alpha:]] (i.e. anything that isn't a letter):

last|lst|name|nm|[^[:alpha:]]
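A sketch of the standard-awk variant alluded to above, copying the field into a variable (the character class is used without *, so gsub never has to match an empty string):

awk -F , -v OFS=, '{
    t = tolower($1)
    gsub(/last|lst|name|nm|[0-9_-]/, "", t)   # strip all allowed words/chars
    if (t == "") { $2 = "found"; print $1, $2 }
}' file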
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/593758", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/418692/" ] }
593,761
I did a stupid thing without thinking first: I resized my dual-boot Mac partition while in Windows. Consequently, the resized Mac partition, while still there, cannot be read. I can scan for files; however, of course, all I get is, for example, file00001.swift. The current partition type is showing as DE94BBA4-06D1-4D40-A16A-BFD50179D6AC (Windows Recovery Environment). However, I believe it should be 7C3457EF-0000-11AA-AA11-00306543ECAC. I have tried to alter this using Paragon Drive Manager, but while it allows me to change some info, it does not give access to that item. So my question is: is there an app that lets me alter the partition type, or could someone tell me at what sector etc. that data is located, so I can perhaps do a byte change? I am able to boot into Windows to view/whatever the bad Mac partition. I am able to boot into an external Mac OS X drive to view/whatever the bad Mac partition. Thanks
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/593761", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/418695/" ] }
593,772
Is there a plugin of some sort that gets rid of trailing spaces and tabs on every line when saving a file edited with gedit?
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/593772", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/418703/" ] }
593,831
I know this has been asked before, but I'm having trouble understanding the answers I've found. Basically, I want to use

sed -i 's/<string>/<some/directory>/g' file.txt

and I can't find a straight answer on how to make /some/directory not break up the sed syntax.
sed allows several syntax delimiters, / being only the most commonly used. You can just as well say

sed -i 's,<string>,<some/directory>,g' file.txt

where the , now has the function usually performed by the /, thereby freeing the latter from its special meaning. Note, however (as pointed out by @Jeff Schaller), that now the , must not appear in the file or directory name - and it is a valid character for filenames! This answer gives a good overview of how to proceed when applying sed to a string with special characters.
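A concrete run with made-up paths, using # as the delimiter since both strings contain slashes:

$ echo 'PREFIX=/usr/local' > conf.txt
$ sed -i 's#/usr/local#/opt/mytool#g' conf.txt
$ cat conf.txt
PREFIX=/opt/mytool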
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/593831", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/416200/" ] }
593,832
I intend to find *.lsf files in the current directory and its subdirectories and execute bsub < file.lsf for all the found files. I tried:

find ./ -type f -name "*.lsf" -exec bsub < \;

The part find ./ -type f -name "*.lsf" works fine. However, the execute part has problems. Can anyone help figure it out?
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/593832", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/410964/" ] }
593,860
Say I have a string like this

[[["q", "0"], "R"], "L"], ["q", [["1", "["], "]"]], [["q", ["2", "L"]], "R"], ["q", ["3", ["R", "L"]]]

and I want to remove all nested brackets from it:

["q", "0", "R", "L"], ["q", "1", "[", "]"], ["q", "2", "L", "R"], ["q", "3", "R", "L"]

I understand how an algorithm could be written that does this by pushing and popping a stack or just incrementing and decrementing a counter, but I'm curious if there is a way to do this just with basic tools like sed or awk.
bracket.awk:

BEGIN{quote=1}
{
    for(i=1;i<=length;i++){
        ch=substr($0,i,1)
        pr=1
        if(ch=="\""){quote=!quote}
        else if(ch=="[" && quote){brk++;pr=brk<2}
        else if(ch=="]" && quote){brk--;pr=brk<1}
        if(pr){printf "%s",ch}
    }
    print ""
}

$ awk -f bracket.awk file
["q", "0", "R", "L"], ["q", "1", "[", "]"], ["q", "2", "L", "R"], ["q", "3", "R", "L"]

The idea behind it: Initialize quote=1. Read the file char-wise. Whenever a quote is found, invert the quote variable (if 1, it becomes 0, and vice-versa). Brackets are then only counted if quote is set to 1, and excess brackets are not printed, according to the brk counter. The print "" statement is just to add a newline, as the printf above does not do it.
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/593860", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/375322/" ] }
593,993
How can I convert multiple lines to a single line with spaces and quotes, using awk or tr or any other tool which makes it simple (but no for loops)?

$ cat databases.txt
Wp_new
Frontend DB
DB_EXT

Expected:

$ cat databases.txt
"Wp_new" "Frontend DB" "DB_EXT"

Edit 1: Thanks for all the useful answers. But the one I marked as correct is the one which can be typed on a terminal in a short time and with fewer hassles (simplicity), so that I (sysadmins) can do the operations very fast without adding to the systems' downtime.
It can be done with awk:

awk '{printf("\"%s\" ",$0)} END { printf "\n" }' databases.txt

Output:

"Wp_new" "Frontend DB" "DB_EXT"
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/593993", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/205721/" ] }
593,996
I have a JSON file that looks like:

{
  ...
  "python.pythonPath": "",
  "python.defaultInterpreterPath": "",
  ...
}

I want to update it to

{
  ...
  "python.pythonPath": "/Users/user/.local/share/virtualenvs/venv-Qxxxxxx9/bin/python",
  "python.defaultInterpreterPath": "/Users/user/.local/share/virtualenvs/venv-Qxxxxxx9/bin/python",
  ...
}

I am confused about using escape characters, /, and single and double quotation marks. How do I do this using sed? FYI, I am using macOS. I tried:

sed -i "" 's|/"/"|/"/Users/user//.local/share/virtualenvs/venv-Qxxxxxx9/bin/python/"' settings.json
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/593996", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/418882/" ] }
594,257
Upgrading my Debian 'bullseye' distribution does not work due to unmet dependencies.

Operating System: Debian GNU/Linux bullseye/sid
Kernel: Linux 5.6.0-2-686-pae
Architecture: x86

When I try to upgrade my system by using apt update followed by apt upgrade, I get:

Some packages could not be installed. This may mean that you have
requested an impossible situation or if you are using the unstable
distribution that some required packages have not yet been created
or been moved out of Incoming.
The following information may help to resolve the situation:

The following packages have unmet dependencies:
 gnustep-base-runtime : Depends: gnustep-base-common (= 1.27.0-3) but 1.26.0-7 is to be installed
 libgnustep-base1.27 : Depends: gnustep-base-common (= 1.27.0-3) but 1.26.0-7 is to be installed
E: Broken packages

Does anyone know how to solve this?
Since there’s just been a gnustep-base transition in testing from 1.26 to 1.27, involving an upgrade from libgnustep-base1.26 to libgnustep-base1.27 , you need to allow package removals during upgrade: use apt full-upgrade instead of apt upgrade .
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/594257", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/240990/" ] }
594,350
I have a remote server and connect via a browser to Jupyter notebooks hosted there. The jupyter service is run via systemd. The problem is that jupyter expects two ctrl-c commands within 5 seconds of each other to shut down cleanly. systemd sends only one signal to halt the process, then waits for a timeout, and when it sees that jupyter hasn't stopped, finally sends a kill signal. This leads to a long delay and an unclean exit when I want to stop or restart the service. I know that systemd has an ExecStop parameter, but I can't find any examples of how it is actually used, and how I can send the equivalent of two ctrl-c keystrokes via this mechanism. My current service file is:

[Unit]
Description=Jupyter notebook

[Service]
Type=simple
PIDFile=/var/run/jupyter-notebook.pid
ExecStart=/home/linuxbrew/.linuxbrew/bin/jupyter notebook --no-browser
User=pgcudahy
Group=pgcudahy
WorkingDirectory=/home/pgcudahy
Environment=PATH=/home/linuxbrew/.linuxbrew/opt/python/libexec/bin:/home/linuxbrew/.linuxbrew/opt/cython/bin:/home/linuxbrew/.linuxbrew/bin:/home/linuxbrew/.linuxbrew/sbin:/home/pgcudahy/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin

[Install]
WantedBy=multi-user.target
So with some more research: what I want to send with ctrl-c is a SIGINT, which can be done with /bin/kill -s SIGINT. Adding this to my service file shuts down jupyter cleanly:

ExecStop=/bin/kill -s SIGINT -$MAINPID & /bin/kill -s SIGINT -$MAINPID

The whole file is

[Unit]
Description=Jupyter notebook

[Service]
Type=simple
PIDFile=/var/run/jupyter-notebook.pid
ExecStart=/home/linuxbrew/.linuxbrew/bin/jupyter notebook --no-browser
ExecStop=/bin/kill -s SIGINT -$MAINPID & /bin/kill -s SIGINT -$MAINPID
User=pgcudahy
Group=pgcudahy
WorkingDirectory=/home/pgcudahy
Environment=PATH=/home/linuxbrew/.linuxbrew/opt/python/libexec/bin:/home/linuxbrew/.linuxbrew/opt/cython/bin:/home/linuxbrew/.linuxbrew/bin:/home/linuxbrew/.linuxbrew/sbin:/home/pgcudahy/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin

[Install]
WantedBy=multi-user.target
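An alternative worth considering (an untested sketch: Jupyter may still wait for its shutdown confirmation after a single SIGINT, which is why the unit above sends it twice) is to let systemd itself use SIGINT as the stop signal:

[Service]
KillSignal=SIGINT
TimeoutStopSec=10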
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/594350", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/57065/" ] }
594,376
I have a command that I use in a script, and it is working well. I need to append some text to the result of that command. Command:

ssh target_server "/home/directory/somescript.sh" | tail -1

I want to add some text to the result of the command above. Sample result:

This is the original result

Sample desired result:

This is the original result - target_server
Pipe it to sed:

ssh target_server "/home/directory/somescript.sh" | tail -1 | sed 's/$/ - target server/'

The syntax is s/regexp/replacement/flags.

s invokes the substitute command.
/ is the delimiter. You may choose another character for the delimiter.
$ is in the regular expression slot. $ matches the end of the line.
- target server is the text which replaces what was matched in the regular expression slot.

If the replacement text contained / (for example, - target 01/10), you would either escape it or choose another delimiter:

sed 's/$/ - target 01\/10/'
sed 's|$| - target 01/10|'
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/594376", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/419166/" ] }
594,397
When I try to do sudo pacman -Syu, it gives me the error: config file /etc/pacman.d/mirrorlist could not be read: No such file or directory. What should I do?
Restore a valid mirrorlist file from the original source:

$ sudo bash
# mkdir -p /etc/pacman.d
# curl -s "https://www.archlinux.org/mirrorlist/?country=US&country=GB&protocol=https&use_mirror_status=on" | sed -e 's/^#Server/Server/' -e '/^#/d' > /etc/pacman.d/mirrorlist
# pacman -S archlinux-keyring
# pacman -Syu
# exit
$

The list you are getting is for some specific countries; here, US and GB are used. Feel free to enter your own country or countries close to you. The command line above is adapted from the original documentation at the Archlinux Wiki page on Mirrors. I have entered interactive mode in sudo to have correct rights for the redirection, and I have removed the sorting by mirror speeds because you may or may not have the script for sorting. EDIT: If you get errors about non-existing mirror servers, you can edit the file /etc/pacman.d/mirrorlist and comment out those that do not work, e.g.

$ sudo nano /etc/pacman.d/mirrorlist
===>
# comment out whole lines by hash like this:
# Server = https://mirror.0x.sg/archlinux/$repo/os/$arch
Server = https://mirror.netweaver.uk/archlinux/$repo/os/$arch
# Server = https://mirror.bytemark.co.uk/archlinux/$repo/os/$arch
(...)

You can also create Server entries for that file by hand at the Archlinux Pacman Mirrorlist Generator. Enable the "Use mirror status:" checkmark [X].
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/594397", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/419179/" ] }
594,408
I have an EC2 machine running Red Hat with Tomcat installed. I want only userX to start this app, not ec2-user. However, EC2 automatically boots with ec2-user after restart. How can I make userX execute the startup.sh command automatically? Currently I do it manually by logging in with userX, then bash /opt/tomcat/bin/startup.sh. Thanks
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/594408", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/418913/" ] }
594,414
I just tried to install the package monitoring-plugins-standard on my Buster and noticed the following output:

The following additional packages will be installed:
  dirmngr gnupg gnupg-l10n gnupg-utils gpg gpg-agent gpg-wks-client gpg-wks-server gpgconf gpgsm libarchive13 libassuan0 libavahi-client3 libavahi-common-data libavahi-common3 libcups2 libdbi1 libgpgme11 libksba8 libldb1 libnet-snmp-perl libnpth0 libpq5 libradcli4 libsensors-config libsensors5 libsmbclient libsnmp-base libsnmp30 libtalloc2 libtdb1 libtevent0 libtirpc-common libtirpc3 libwbclient0 pinentry-curses python-crypto python-gpg python-ldb python-samba python-talloc python-tdb rpcbind samba-common samba-common-bin samba-dsdb-modules samba-libs smbclient snmp

Can someone help me understand why apt wants to install, for example, gnupg? Looking at the recommended packages:

Recommends: bind9-host | host, dnsutils, libnet-snmp-perl, rpcbind, smbclient, snmp, sudo, libdbi1 (>= 0.8.4), libgnutls30 (>= 3.6.5), libldap-2.4-2 (>= 2.4.7), libmariadb3 (>= 3.0.0), libpq5, libradcli4, zlib1g (>= 1:1.1.4)

I can't understand which exact package forces the gnupg installation. Can someone explain the additional-packages installation logic to me, with the gnupg package as an example? Thanks.
aptitude can tell you:

$ aptitude why monitoring-plugins-standard gnupg
p   monitoring-plugins-standard Recommends smbclient
i A smbclient                   Depends    samba-common (= 2:4.9.5+dfsg-5+deb10u1)
i A samba-common                Recommends samba-common-bin
i A samba-common-bin            Recommends samba-dsdb-modules
i A samba-dsdb-modules          Depends    libgpgme11 (>= 1.2.0)
i A libgpgme11                  Recommends gpgsm
i A gpgsm                       Recommends gnupg (= 2.2.12-1+deb10u1)

This works even if none of the packages are installed.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/594414", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/276722/" ] }
594,470
I have the source code of a hello world kernel module that works in Ubuntu 20 on a laptop. Now I am trying to compile the same code in Ubuntu 20, but inside WSL2. For that I am using this:

make -C /sys/modules/$(shell uname -r)/build M=$(PWD) modules

The problem is that /lib/modules is empty. It seems that WSL2 does not ship anything in /lib/modules/4.19.104-microsoft-standard/build. I tried getting the headers using:

sudo apt search linux-headers-`uname -r`
Sorting... Done
Full Text Search... Done

But nothing gets populated in the modules folder. Is there anything I need to do so that folder contains all required modules?

[EDIT] Getting closer thanks to @HannahJ. I am doing:

> sudo make -C /home/<user>/WSL2-Linux-Kernel M=$(pwd) modules
make: Entering directory '/home/<user>/WSL2-Linux-Kernel'
  CC [M]  /home/<user>/containers-assembly-permissionsdemo/demo-2/lkm_example.o
  Building modules, stage 2.
  MODPOST 1 modules
  CC      /home/<user>/containers-assembly-permissionsdemo/demo-2/lkm_example.mod.o
  LD [M]  /home/<user>/containers-assembly-permissionsdemo/demo-2/lkm_example.ko
make: Leaving directory '/home/<user>/WSL2-Linux-Kernel'

At the end, I get the lkm_example.ko file created. After that:

> sudo insmod lkm_example.ko
insmod: ERROR: could not insert module lkm_example.ko: Invalid module format
> dmesg
[200617.480635] lkm_example: no symbol version for module_layout
[200617.480656] lkm_example: loading out-of-tree module taints kernel.
[200617.481542] module: x86/modules: Skipping invalid relocation target, existing value is nonzero for type 1, loc 0000000074f1d70f, val ffffffffc0000158
> sudo modinfo lkm_example.ko
filename:       /home/<user>/containers-assembly-permissionsdemo/demo-2/lkm_example.ko
version:        0.01
description:    A simple example Linux module.
author:         Carlos Garcia
license:        GPL
srcversion:     F8B272146BAA2381B6332DE
depends:
retpoline:      Y
name:           lkm_example
vermagic:       4.19.84-microsoft-standard+ SMP mod_unload modversions

This is my Makefile:

obj-m += lkm_example.o

all:
	make -C /home/<usr>/WSL2-Linux-Kernel M=$(PWD) modules

clean:
	make -C /home/<usr>/WSL2-Linux-Kernel M=$(PWD) clean

test:
	# We put a - in front of the rmmod command to tell make to ignore
	# an error in case the module isn't loaded.
	-sudo rmmod lkm_example
	# Clear the kernel log without echo
	sudo dmesg -C
	# Insert the module
	sudo insmod lkm_example.ko
	# Display the kernel log
	dmesg

unload:
	sudo rm /dev/lkm_example
	sudo rmmod lkm_example

[Edit2] This is my kernel module:

#include <linux/init.h>
#include <linux/module.h>
#include <linux/kernel.h>
#include <linux/fs.h>
#include <asm/uaccess.h>
#include <linux/init_task.h>

MODULE_LICENSE("GPL");
MODULE_AUTHOR("Carlos Garcia");
MODULE_DESCRIPTION("A simple example Linux module.");
MODULE_VERSION("0.01");

/* Prototypes for device functions */
static int device_open(struct inode *, struct file *);
static int device_release(struct inode *, struct file *);
static ssize_t device_read(struct file *, char *, size_t, loff_t *);
static ssize_t device_write(struct file *, const char *, size_t, loff_t *);

static int major_num;
static int device_open_count = 0;
static char msg_buffer[MSG_BUFFER_LEN];
static char *msg_ptr;

/* This structure points to all of the device functions */
static struct file_operations file_ops = {
    .read = device_read,
    .write = device_write,
    .open = device_open,
    .release = device_release
};

/* When a process reads from our device, this gets called. */
static ssize_t device_read(struct file *flip, char *buffer, size_t len, loff_t *offset) {
    ...
}

/* Called when a process tries to write to our device */
static ssize_t device_write(struct file *flip, const char *buffer, size_t len, loff_t *offset) {
    ...
}

/* Called when a process opens our device */
static int device_open(struct inode *inode, struct file *file) {
    ...
    try_module_get(THIS_MODULE);
}

/* Called when a process closes our device */
static int device_release(struct inode *inode, struct file *file) {
    ...
    module_put(THIS_MODULE);
}

static int __init lkm_example_init(void) {
    ...
    major_num = register_chrdev(0, "lkm_example", &file_ops);
    if (major_num < 0) {
        printk(KERN_ALERT "Could not register device: %d\n", major_num);
        return major_num;
    } else {
        printk(KERN_INFO "lkm_example module loaded with device major number %d\n", major_num);
        return 0;
    }
}

static void __exit lkm_example_exit(void) {
    /* Remember - we have to clean up after ourselves. Unregister the character device. */
    unregister_chrdev(major_num, DEVICE_NAME);
    printk(KERN_INFO "Goodbye, World!\n");
}

/* Register module functions */
module_init(lkm_example_init);
module_exit(lkm_example_exit);
I had to do this for an assignment, so I figure I'll share my solution here. The base WSL2 kernel does not allow modules to be loaded. You have to compile and use your own kernel build.

How to compile and use a kernel in WSL2

In Ubuntu/WSL:

sudo apt install build-essential flex bison libssl-dev libelf-dev git dwarves
git clone https://github.com/microsoft/WSL2-Linux-Kernel.git
cd WSL2-Linux-Kernel
cp Microsoft/config-wsl .config
make -j $(expr $(nproc) - 1)

From Windows, copy \\wsl$\<DISTRO>\home\<USER>\WSL2-Linux-Kernel\arch\x86\boot\bzimage to your Windows profile (%userprofile%, like C:\Users\<Windows_user>).

Create the file %userprofile%\.wslconfig that contains:

[wsl2]
kernel=C:\\Users\\WIN10_USER\\bzimage

Note: The double backslashes (\\) are required. Also, to avoid a potential old bug, make sure not to leave any trailing whitespace on either line.

In PowerShell, run wsl --shutdown

Reopen your flavor of WSL2.

How to compile the module

Note: You'll want to do these from /home/$USER/ or adjust the Makefile to match your location.

Create a Makefile that contains:

obj-m:=lkm_example.o

all:
	make -C $(shell pwd)/WSL2-Linux-Kernel M=$(shell pwd) modules

clean:
	make -C $(shell pwd)/WSL2-Linux-Kernel M=$(shell pwd) clean

Run make

Source for the .wslconfig file steps here.
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/594470", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/401549/" ] }
594,471
I have screen tearing issues. When I set Tearing prevention ("vsync") in the Compositor to something else and then back to Automatic, the screen tearing is gone. I would like to know what configuration files Tearing prevention ("vsync") changes, to troubleshoot this problem and find a permanent fix. I test for screen tearing with this video. I also have screen tearing with the latest live ISO, with both free and non-free drivers.

Operating System: Manjaro Linux
KDE Plasma Version: 5.18.5
KDE Frameworks Version: 5.70.0
Qt Version: 5.15.0
Kernel Version: 5.6.16-1-MANJARO
OS Type: 64-bit
Processors: 8 × Intel® Core™ i7-6700HQ CPU @ 2.60GHz
Memory: 15,5 GiB of RAM
GPU: Nvidia GeForce 940M
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/594471", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/240617/" ] }
594,572
I need to update a version number in a file. In the file there is one line that contains the semantic version number, like this:

foo.bar.blah.blub#10.6.1

Now I want to look for the prefix "blah.blub" to identify the right line and replace the version number. I got the following, which obviously doesn't work:

sed -i -e 's/^blah.blub#\d\.\d\.\d$/blah.blub'${VERSION}/$ README.MD
Sed uses Basic Regular Expressions by default, and most implementations (including the GNU one which you seem to be using) don't understand \d . Also, your line doesn't start with blah.blub , so the ^ means it will never match. And various other issues. This should do what you need though: sed -i -E "/blah\.blub#/s/#.*/#$VERSION/" README.MD That will replace the text after a # with the contents of $VERSION , but only if this line contains blah.blub . Now, whether that is specific enough to deal with your data, I have no idea, since you have only shown us that one line.
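A quick demonstration on a sample file (hypothetical version value; -i omitted so the result goes to stdout):

$ cat README.MD
foo.bar.blah.blub#10.6.1
$ VERSION=10.7.0
$ sed -E "/blah\.blub#/s/#.*/#$VERSION/" README.MD
foo.bar.blah.blub#10.7.0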
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/594572", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/419325/" ] }
594,607
I am trying to capture all .whl files as follows:

ls -l /python/*.{whl,}
ls: cannot access /python/*.: No such file or directory
-rw-r--r-- 1 root root   23000 Jun 14 11:02 /python/argparse-1.4.0-py2.py3-none-any.whl
-rw-r--r-- 1 root root  154423 Jun 14 11:02 /python/certifi-2019.9.11-py2.py3-none-any.whl
-rw-r--r-- 1 root root  387834 Jun 14 11:02 /python/cffi-1.14.0-cp27-cp27mu-manylinux1_x86_64.whl
-rw-r--r-- 1 root root  133356 Jun 14 11:02 /python/chardet-3.0.4-py2.py3-none-any.whl
-rw-r--r-- 1 root root 2728298 Jun 14 11:02 /python/cryptography-2.9.2-cp27-cp27mu-manylinux1_x86_64.whl
-rw-r--r-- 1 root root   11223 Jun 14 11:02 /python/enum34-1.1.10-py2-none-any.whl
-rw-r--r-- 1 root root   58594 Jun 14 11:02 /python/idna-2.8-py2.py3-none-any.whl
-rw-r--r-- 1 root root   18159 Jun 14 11:02 /python/ipaddress-1.0.23-py2.py3-none-any.whl
-rw-r--r-- 1 root root  125774 Jun 14 11:02 /python/Jinja2-2.11.2-py2.py3-none-any.whl

but as you can see, we also get

ls: cannot access /python/*.: No such file or directory

How can I fix that, so as to find only the files ending with .whl?
If you’re only interested in files with a whl extension, ls -l /python/*.whl is what you’re after. Your current command, ls -l /python/*.{whl,} is equivalent to ls -l /python/*.whl /python/*. (thanks to brace expansion ) and fails because the latter pattern doesn’t match anything in /python .
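If you do want to keep a multi-pattern brace expansion, bash's nullglob option makes the unmatched pattern expand to nothing instead of staying literal and producing the error (a sketch; note nullglob affects every glob in the session, and ls with no arguments at all would list the current directory):

$ shopt -s nullglob
$ ls -l /python/*.{whl,}   # /python/*. now silently drops out
$ shopt -u nullglob        # restore the default behavior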
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/594607", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/237298/" ] }
594,631
$ echo > 1 2 3; ls -latr 1; cat 1
-rw-rw-r-- 1 kahn kahn 4 Jun 23 14:05 1
2 3

Does this have to do with the evaluation of > redirection? Such that:

echo > 1 2 3

could also be rewritten as, and is technically:

echo 2 3 > 1

? Are there any resources for better understanding the order of operations with IO redirection and how it is evaluated? I admit the example is not something that would probably be useful or even occur often, but I'd like to better understand what exactly is happening here.
A proper resource you can refer to is the POSIX shell grammar, which defines a simple command as:

simple_command   : cmd_prefix cmd_word cmd_suffix
                 | cmd_prefix cmd_word
                 | cmd_prefix
                 | cmd_name cmd_suffix
                 | cmd_name
                 ;

The most relevant part, here, is the definition of cmd_suffix:

cmd_suffix       : io_redirect
                 | cmd_suffix io_redirect
                 | WORD
                 | cmd_suffix WORD
                 ;

which is recursive, allowing for redirections and command arguments to appear in any order. Also, POSIX defines the syntax for redirection as

[n]redir-op word

While no space is allowed between the optional number n and the redirection operator (>, in your case), an arbitrary amount of space is allowed between the redirection operator and the following word. After expansion, word (1, in your case) is used as the name of the file the stream is directed to (or from). Thus, it is equally legal to write

$ echo > 1 foo bar
#  ^    ^^^ ^^^ ^^^
#  |     |   |   |
#  |     |   |  WORD
#  |     |  WORD
#  | io_redirect
# cmd_name          (io_redirect and the WORDs together form cmd_suffix)

or

$ echo foo >1 bar

or even

$ echo > 1 foo > 1 bar > 1

(where, of course, repeating > 1 serves no purpose). For the sake of completeness: the cmd_prefix in the definition of simple_command is itself recursively defined:

cmd_prefix       : io_redirect
                 | cmd_prefix io_redirect
                 | ASSIGNMENT_WORD
                 | cmd_prefix ASSIGNMENT_WORD
                 ;

meaning that redirections and variable assignments can appear in any order before a command. You may then have, for instance,

$ LC_ALL=C <infile sort >outfile 2>errfile
# ^^^^^^^^^^^^^^^^ ^^^^ ^^^^^^^^^^^^^^^^^^
#        |           |           |
#    cmd_prefix   cmd_word   cmd_suffix

or

$ 2>errfile >outfile <infile LC_ALL=C sort

or

$ LC_ALL=C sort <infile 2>errfile >outfile

which are all equally legal, but it has to be kept in mind that redirections and variable assignments, as well as expansions, are performed from left to right, and their order may be relevant (e.g. if infile does not exist, outfile is not truncated in cat <infile >outfile while it is in cat >outfile <infile).
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/594631", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/137794/" ] }
594,634
I tried to take the output stream of wget and pipe it to sftp put (wget | sftp), but that did not work. How can we do it? The condition is that I don't want to save the stream locally on my server as a file. Is that really possible? We can do this with wget | ssh. I have the below command to do sftp and put in a single command:

sftp user@serverName:/tmp <<< $'put ./fileName'
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/594634", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/93097/" ] }
594,676
How to output a line in reverse order using awk ? I use this construction: { for (i = NF; i > 1; i--){ print $i}} but it print just one word in one string. Before: apple pen dog cat After: cat dog pen apple
awk ' { for (i=NF; i>=1; i--) { printf "%s%s", $i, i == 1 ? ORS : OFS } }' file
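For example, with the line from the question (any POSIX awk should produce the same):
$ echo 'apple pen dog cat' | awk '{ for (i=NF; i>=1; i--) printf "%s%s", $i, (i==1 ? ORS : OFS) }'
cat dog pen apple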
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/594676", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/399615/" ] }
594,680
I have 3 strings A='apples'B='bananas'C='carrots' I want to see if all of these exist in the fruit.txt file.If I'm missing A then add A , B then add B , and so on. This is what I have now if grep -qF "$A | $B | $C" fruit.txt; then echo 'exist'else echo 'does not exist' echo $A $B $C >> fruit.txtfi
grep -qF "$A | $B | $C" searches for the single literal string apples | bananas | carrots, so it can never match one fruit per line. Test each value separately — a simple loop (sketch):
for fruit in "$A" "$B" "$C"; do
    if grep -qxF -- "$fruit" fruit.txt; then
        echo "$fruit exists"
    else
        echo "$fruit does not exist"
        echo "$fruit" >> fruit.txt
    fi
done
-F takes the value as a fixed string, -x matches whole lines only (so apples does not falsely match a pineapples line), and -- protects against values that start with a dash.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/594680", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/419415/" ] }
594,780
I would like to introduce a line break into a file having different columns on the basis of the value in the first column. For example: Input file: 1aa6 HETATM 4MO A 8031aa6 HETATM SF4 A 8001ao0 HETATM 5GP A 4671ao0 HETATM SF4 B 4661ao0 HETATM SF4 C 4661b0y HETATM SF4 A 871blu HETATM SF4 A 1011blu HETATM SF4 A 102 Required output: 1aa6 HETATM 4MO A 8031aa6 HETATM SF4 A 8001ao0 HETATM 5GP A 4671ao0 HETATM SF4 B 4661ao0 HETATM SF4 C 4661b0y HETATM SF4 A 871blu HETATM SF4 A 1011blu HETATM SF4 A 102 I tried the csh script, but it didn't worked. #! /bin/cshset bin = /home/x/binforeach i (`cat pdb_ligands | awk '{print $1}'`) echo $i sed "s/$i/&\n\n/" pdb_ligands > output.txtend
$ awk 'NR > 1 && $1 != prev { print "" } { prev = $1 }; 1' pdb_ligands
1aa6 HETATM 4MO A 803
1aa6 HETATM SF4 A 800

1ao0 HETATM 5GP A 467
1ao0 HETATM SF4 B 466
1ao0 HETATM SF4 C 466

1b0y HETATM SF4 A 87

1blu HETATM SF4 A 101
1blu HETATM SF4 A 102
This keeps track of what was in the 1st column on the previous line in prev. If the current 1st column is different from prev, and we are not on the first line of the file, a newline is printed. All lines are then printed unconditionally. An alternative to print "" in the code above is to do $0 = ORS $0, which adds a newline character (or whatever ORS, the output record separator, is set to) to the start of the current record. This will have the effect of producing an extra newline when the line is printed moments later.
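As a one-liner, that $0 = ORS $0 variant looks like this — same output, with the separator glued onto the front of the record instead of printed separately:
$ awk 'NR > 1 && $1 != prev { $0 = ORS $0 } { prev = $1 }; 1' pdb_ligands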
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/594780", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/345004/" ] }
594,783
Since I installed Linux on my old laptop, I cannot access the BIOS menu. I have tried fwsetup but I just get a message saying: Could not set EFI variable 'OsIndications'. I have also tried booting into Windows, holding shift and pressing restart and then once on the recovery screen, selecting Troubleshoot -> Advanced Options -> UEFI Firmware Settings and clicking restart. That just says: There was a problem. Restart your PC to try again. It looks like something didn't load correctly. Restarting might fix the problem. Ifthis happens more than once, you might be able to find help bysearching online for the specific error code. Error code:800704d3. I have tried spamming lots of different keys on startup, including Escape , F1 , F2 , F3 , F4 , F12 , delete and shift . None of them work. I used to be able to get into the bios fine before I installed Linux. I have installed Linux on 2 other laptops without any issues but its just this one that doesn't want to go into the BIOS anymore. I have since (accidentally) uninstalled Linux when I tried to resize my Windows partition with a tool which wiped out my Linux partitions. On boot, it takes me to a grub prompt (not grub rescue) that says: Minimal BASH-like editing is supported. at which point I type exit which gives me a prompt to choose a boot device, where I can boot to Windows Boot Manager just fine but USB devices don't work. To boot to USB, I have to plug the USB stick in, boot the computer, type exit and press enter, then select the USB device option which takes me back into the GRUB prompt again, where I type (for example) chainloader (hd0,msdos2)/efi/boot/grubx64.efi and then boot and it boots to USB just fine. I had a quad boot (Ubuntu MATE (forgot which version, it was a few years ago), Linux Mint, Windows 8 (as that’s what came with the laptop) and Windows 10) until a partition manager deleted the Linux partitions. It was a UEFI install. It is an Acer Aspire V11 Touch and a quick google search said the bios key was F2, which didn’t work. I also tried multiple other keys which also didn’t work. I noticed that if I hold escape and function on boot, it keeps turning on and off. Don’t know if that’s a GRUB thing or what. Sometimes if I spam potential BIOS keys while turning it on, the backlight turns on, off, and then back on. I have tried sudo systemctl reboot --firmware on an Ubuntu 18.04 Live USB and I just get a new line on the terminal and nothing happens, it just sits there until I cancel it with CTRL + C. I just want to be able to get into my BIOS setup which I haven't been able to do since I installed Linux and GRUB.
Since F2 worked before the Linux install and the firmware now refuses both the setup key and the Windows "UEFI Firmware Settings" path, the likely culprit is a confused UEFI variable store (NVRAM) rather than a wrong key — the failing fwsetup ("Could not set EFI variable 'OsIndications'") points the same way. Some generic recovery steps worth trying, roughly in order (generic advice, not verified on this exact Aspire model):
From your live USB, inspect the firmware boot entries with efibootmgr -v and delete stale ones left over from the removed Linux install (efibootmgr -b XXXX -B), keeping Windows Boot Manager. Once the entries are sane, fwsetup from GRUB may start working again.
Reset the NVRAM/CMOS: power off, unplug the charger, disconnect the battery (and remove the CMOS coin cell, or use the reset pinhole if the model has one) for a few minutes, then try F2 at power-on.
Check Acer's support site for a BIOS update for the Aspire V11 Touch; reflashing the firmware rewrites its variable store and often clears this kind of corruption.
Separately, you will want to remove or reinstall the orphaned GRUB in the EFI system partition — the "Minimal BASH-like editing" prompt means the firmware still boots grubx64.efi whose configuration and modules were wiped with the Linux partitions.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/594783", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/419508/" ] }
594,841
I'm looking for the equivalent to this JS assignment: FOO = FOO || "I must have been falsey!";
Either of these expansions might be what you're looking for, depending on when exactly you want to do the assignment. Omitting the colon results in a test only for a parameter that is unset. [...]
${parameter:-word}
If parameter is unset or null, the expansion of word is substituted. Otherwise, the value of parameter is substituted.
${parameter:=word}
If parameter is unset or null, the expansion of word is assigned to parameter. The value of parameter is then substituted. Positional parameters and special parameters may not be assigned to in this way.
If you just want to set a default on first use, then:
some-command "${FOO:=default value}"
other-command "$FOO"   # both use "default value" if FOO was null/unset
If you want to be explicit about it:
FOO="${FOO:-default value}"
some-command "${FOO}"
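A quick interactive demonstration of the difference (the : no-op builtin is the usual way to trigger the assigning form without running anything):
$ unset FOO
$ echo "${FOO:-fallback}"    # substitutes, but FOO stays unset
fallback
$ : "${FOO:=fallback}"       # substitutes and assigns
$ echo "$FOO"
fallback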
{ "score": 7, "source": [ "https://unix.stackexchange.com/questions/594841", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/347216/" ] }
594,903
In a common Linux distribution, do utilities like rm , mv , ls , grep , wc , etc. run in parallel on their arguments? In other words, if I grep a huge file on a 32-threaded CPU, will it go faster than on dual-core CPU?
You can get a first impression by checking whether the utility is linked with the pthread library. Any dynamically linked program that uses OS threads should use the pthread library.
ldd /bin/grep | grep -F libpthread.so
So for example on Ubuntu:
for x in $(dpkg -L coreutils grep findutils util-linux | grep /bin/); do if ldd $x | grep -q -F libpthread.so; then echo $x; fi; done
However, this produces a lot of false positives due to programs that are linked with a library that is itself linked with pthread. For example, /bin/mkdir on my system is linked with PCRE (I don't know why…) which is itself linked with pthread. But mkdir is not parallelized in any way.
In practice, checking whether the executable contains libpthread gives more reliable results. It could miss executables whose parallel behavior is entirely contained in a library, but basic utilities typically aren't designed that way.
dpkg -L coreutils grep findutils util-linux | grep /bin/ | xargs grep pthread
Binary file /usr/bin/timeout matches
Binary file /usr/bin/sort matches
So the only tool that actually has a chance of being parallelized is sort. (timeout only links to libpthread because it links to librt.) GNU sort does work in parallel: the number of threads can be configured with the --parallel option, and by default it uses one thread per processor up to 8. (Using more processors gives less and less benefit as the number of processors increases, tapering off at a rate that depends on how parallelizable the task is.) grep isn't parallelized at all. The PCRE library actually links to the pthread library only because it provides thread-safe functions that use locks, and the lock manipulation functions are in the pthread library.
The typical simple approach to benefit from parallelization when processing a large amount of data is to split this data into pieces, and process the pieces in parallel. In the case of grep, keep file sizes manageable (for example, if they're log files, rotate them often enough) and call separate instances of grep on each file (for example with GNU Parallel). Note that grepping is usually IO-bound (it's only CPU-bound if you have a very complicated regex, or if you hit some Unicode corner cases of GNU grep where it has bad performance), so you're unlikely to get much benefit from having many threads.
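To illustrate that last suggestion without GNU Parallel, here is a sketch using plain xargs (the -P value is an assumption — match it to your core count; output lines from different grep instances may interleave):
find /var/log -type f -name '*.log' -print0 |
  xargs -0 -n 16 -P 8 grep -H -- 'pattern'
Each batch of 16 files is handled by its own grep process, up to 8 running at a time; -H keeps the file name on every match even when a batch happens to contain a single file.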
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/594903", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/154237/" ] }
594,914
My problem is the following. (kubuntu 14.04 64bits, kernel 4.40) I have a remote computer (on another place, I can't go on site) that have two network cards. On the second card ( eth1 ), I have a dhcp client which serve the IP 192.168.0.189/24 . Through this IP, I can connect with Teamviewer or anydesk. On the first card ( eth0 ), the IP is set to 192.168.2.10/24 . All works well. But I have a device IP that IP is 192.168.0.100/24 and must be connected on eth0 (note that 192.168.0.100/24 is free on eth1 ). So I add the IP 192.168.0.110/24 to eth0 to access this new device. The problem is, in that case, we cannot initiate new connection on Teamviewer or anydesk. So, I'm looking to explain my system that it must use eth0 to access 192.168.0.100 eth1 to all other 192.168.0.x I think that route could be what I want, but I don't want to test it right now, because on error, things will be terrible to debug. My question is: Will the command route add 192.168.0.100/24 eth0 be enough? Should I generate some script for the other 192.168.0.x addresses? #ip a before ip addr add 192.168.0.110/24 dev eth0ip a1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000 link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 inet 127.0.0.1/8 scope host lo valid_lft forever preferred_lft forever2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000 link/ether xx:xx:xx:xx:xx:xx brd ff:ff:ff:ff:ff:ff inet 192.168.2.10/24 brd 192.168.2.255 scope global noprefixroute eth0 valid_lft forever preferred_lft forever2: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000 link/ether yy:yy:yy:yy:yy:yy brd ff:ff:ff:ff:ff:ff inet 192.168.0.189/24 brd 192.168.0.255 scope global noprefixroute eth1 valid_lft 401100sec preferred_lft forever#ip a after ip addr add 192.168.0.110/24 dev eth0ip a1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000 link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 inet 127.0.0.1/8 scope host lo valid_lft forever preferred_lft forever2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000 link/ether xx:xx:xx:xx:xx:xx brd ff:ff:ff:ff:ff:ff inet 192.168.2.10/24 brd 192.168.2.255 scope global noprefixroute eth0 valid_lft forever preferred_lft forever inet 192.168.0.110/24 scope global secondary enp0s8 valid_lft forever preferred_lft forever 2: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000 link/ether yy:yy:yy:yy:yy:yy brd ff:ff:ff:ff:ff:ff inet 192.168.0.189/24 brd 192.168.0.255 scope global noprefixroute eth1 valid_lft 401100sec preferred_lft forever
You don't need a script for each of the other 192.168.0.x addresses: routing picks the longest matching prefix, so a single host route for the device is enough, and everything else in 192.168.0.0/24 keeps using the connected route on eth1. A sketch with iproute2 (the src hint makes replies come from the address the device can actually reach):
ip route add 192.168.0.100/32 dev eth0 src 192.168.0.110
Note that your proposed route add 192.168.0.100/24 eth0 would do the opposite of what you want: a /24 covers the whole 192.168.0.x network and would pull all of it onto eth0, breaking your TeamViewer/AnyDesk path again. Since the machine is remote, test with the non-persistent command above first (it disappears on reboot), and consider scheduling a reboot as a safety net, e.g. sudo shutdown -r +15, cancelled with sudo shutdown -c once you have confirmed you can still get in.
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/594914", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/15601/" ] }
594,928
OK, this is weird. I have been battling this all day & have been unsuccessful as of yet. I am working on a project that is Python based. The project is started via systemd scripts. Weird thing is vlc/cvlc works to an extent, but there is no dbus control. If I run the python app from the command line, everything works perfectly. Running the app from systemd is the wonkiness. For instance, when it is run with the following code & service script, I can't control vlc with dbus. If I run the python outside of systemd script, I can access the dbus. There is another weird issue that is a side effect of whatever is causing this problem. It will run 1080 vid just fine but not 4k. Try it out with the following & let me know if you can figure it out. I greatly appreciate any & all help. Thanks! PYTHON CODE (testvlc): #!/usr/bin/env pythonfrom subprocess import Popen, PIPEimport timevid = 'somevideo.mp4'cmd = 'DISPLAY=:0 cvlc -f --no-osd %s -L' % vidPopen(cmd, shell=True, stdout=PIPE, stderr=PIPE)while True: print("Hello!") time.sleep(5) SYSTEMD SCRIPT (testvlc.service): [Unit]Description=Test VLC From Python Script[Service]User=userExecStart=/usr/bin/screen -D -S testvlc -m /home/user/testvlc[Install]WantedBy=multi-user.target
The likely missing piece is the session D-Bus. A unit started by the system systemd instance — even with User=user — has no DBUS_SESSION_BUS_ADDRESS in its environment, so VLC's dbus control module has no session bus to register on; that matches "works from the command line, broken from systemd". Two sketches (UID, paths and unit names assumed — adjust to your setup):
Run it as a systemd user service instead: put the unit in ~/.config/systemd/user/, then systemctl --user enable --now testvlc and loginctl enable-linger user so it starts at boot. User services get a real session bus.
Or point the system unit at the user's bus explicitly, e.g. Environment="DISPLAY=:0" "DBUS_SESSION_BUS_ADDRESS=unix:path=/run/user/1000/bus" (1000 being the user's UID; the user session and its bus must exist).
The 1080p-vs-4K difference may well be a side effect of the same stripped-down environment (hardware decoding via VA-API/VDPAU failing to initialize without the session variables), so it is worth re-testing once the bus problem is fixed.
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/594928", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/137026/" ] }
595,000
Just out of curiosity, This command works: $ <file cat But neither of these do: $ <file forbash: for: command not found...$ <file whilebash: while: command not found... Why is that?
The POSIX grammar is written that way. A command is defined as:
command : simple_command | compound_command | compound_command redirect_list | function_definition
Where a simple command is
simple_command : cmd_prefix cmd_word cmd_suffix [...]
and both cmd_prefix and cmd_suffix eventually allow redirections. For compound commands, there's only the redirect_list at the end that allows them, nothing to allow it in the front. It doesn't exactly have to be like this, e.g. Zsh accepts this just fine:
% > output for x in a b c ; do echo $x; done
% cat output
a
b
c
But that's Zsh being incompatible, since in standard shells the leading redirection forces the for command to be parsed as a regular simple command, meaning that the following is valid and the shell will try to find and run a command literally called for (with five arguments, and stdout redirected), giving the error you saw:
$ > output for x in a b c
bash: for: command not found
(In Bash in particular, the whole input line is parsed first, so if you give Bash the whole for-do-done from the first example on a single line, it just drops a syntax error for the do, and doesn't execute the for before that.)
If I had to guess, I'd expect the root reason for the definition is that it's always worked like that, and so the standard has codified it like that. Though note that there has been discussion about changing the standard in this regard. (Probably no-one in their right mind actually relies on > output for ... meaning a command called for instead of the shell loop.)
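To confirm the redirect_list part, the same redirection is accepted by every POSIX shell when it follows the compound command — which is also why the idiomatic way to feed a loop is while ...; done <file rather than <file while:
$ for x in a b c; do echo "$x"; done > output
$ cat output
a
b
c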
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/595000", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/367552/" ] }
595,013
I'd like to print "chr" at the begin of each line that doesn't start with "#" in my file. input : ##toto#titi1617 output : ##toto#titichr16chr17 I've tried with awk ( awk '$1 ~ /^#/ ... ) or grep ( grep "^[^#]" ... ) but got no success. How can I accomplish this?
You need /^[^#]/ — that is, lines whose first character is not # — and then to rebuild those lines by prefixing them with "chr":
awk '/^[^#]/{ $0 = "chr"$0 }1' file
(Your grep "^[^#]" only selects such lines; it cannot modify them, which is why it got you nowhere.)
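The same thing with sed, restricted to exactly the same set of lines (at least one character, not starting with #):
$ sed '/^[^#]/s/^/chr/' input
##toto
#titi
chr16
chr17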
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/595013", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/416180/" ] }
595,090
Suppose that I have the two files with the following content: $ cat File1.txtAppleorangewatermelonavocadolime $ cat File2.txtorangeApplelime watermelonavocado Basically, there is no difference, as both have same values.I am using the diff command: diff File1.txt File2.txt and it shows files are different as values are misplaced, In my case I require it should have not showed difference. What are the other ways to achieve this, any suggestions are welcome.
Compare the sorted files. In bash (or ksh or zsh), with a process substitution:
diff <(sort File1.txt) <(sort File2.txt)
In plain sh:
sort File1.txt >File1.txt.sorted
sort File2.txt >File2.txt.sorted
diff File1.txt.sorted File2.txt.sorted
To quickly see the differences between sorted files, comm can be useful: it shows directly the lines that are in one file but not the other.
comm -12 <(sort File1.txt) <(sort File2.txt) >common-lines.txt
comm -23 <(sort File1.txt) <(sort File2.txt) >only-in-file-1.txt
comm -13 <(sort File1.txt) <(sort File2.txt) >only-in-file-2.txt
If a line is repeated in the same file, the commands above insist on the two files having the same number of repetitions. If you want to treat
foo
bar
foo
as identical to
bar
foo
then remove duplicates when sorting: use sort -u instead of sort.
If you save the output of sort on one file and use it later when the other file is available, note that the two files must be sorted in the same locale. If you do this, you should probably sort in byte order:
LC_ALL=C sort File1.txt >File1.txt.sorted
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/595090", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/416452/" ] }
595,094
I don't want to convert tabular data to nice columns like a standard awk recipe would produce. I want some text that's very long to be formatted into columns like a newspaper column. For example turn Lorem ipsum dolor sit amet, consectetur adipiscing elit. Mauris tempus orci ut odio tincidunt, vel hendrerit ante viverra. Aenean mollis ex erat, ac commodo lectus scelerisque eget. Aenean sit amet purus felis. Aenean sit amet erat eget velit lobortis fermentum eget eget odio. Donec tincidunt rutrum varius. Nunc viverra ac erat id bibendum. Aenean sit amet venenatis arcu. Morbi enim enim, pulvinar sed velit in, sollicitudin tristique urna. In auctor ex vel diam sagittis, at placerat lacus sollicitudin. Sed a arcu dignissim, sodales odio ac, congue ante. Mauris posuere lorem varius tempor tincidunt. Etiam non metus ac nibh vulputate semper. Proin dapibus ullamcorper tortor, sed ultricies est euismod vel. Aliquam erat volutpat.Phasellus at sem ornare, suscipit leo in, bibendum nulla. Sed fermentum enim id est feugiat, in commodo lectus fermentum. Sed quis volutpat felis. Donec turpis felis, dignissim vel mollis nec, pellentesque non odio. Aenean vitae sagittis libero, vel egestas diam. Nullam ornare purus quis eros euismod, viverra pretium turpis rhoncus. Etiam sagittis lorem non nisi molestie, ut dictum risus rhoncus. into Lorem ipsum varius. Nunc non metus ac vel mollis nec,dolor sit amet, viverra ac erat id nibh vulputate pellentesqueconsectetur bibendum. Aenean semper. Proin non odio. Aeneanadipiscing sit amet venenatis dapibus ullamcorper vitae sagittiselit. Mauris arcu. Morbi enim tortor, sed libero, vel egestastempus orci ut enim, pulvinar ultricies diam. Nullam ornareodio tincidunt, sed velit in, est euismod purus quis erosvel hendrerit ante sollicitudin vel. Aliquam erat euismod, viverraviverra. Aenean tristique urna. In volutpat. pretium turpismollis ex erat, auctor ex vel rhoncus. Etiamac commodo lectus diam sagittis, Phasellus at sagittis lorem nonscelerisque at placerat lacus sem ornare, nisi molestie,eget. Aenean sollicitudin. Sed suscipit leo in, ut dictum risussit amet purus a arcu dignissim, bibendum nulla. Sed rhoncus.felis. Aenean sit sodales odio ac, fermentum enimamet erat eget congue ante. Mauris id est feugiat,velit lobortis posuere lorem in commodo lectusfermentum eget varius tempor fermentum. Sedeget odio. Donec tincidunt. Etiam quis volutpattincidunt rutrum felis. Donec turpis felis, dignissim It would need to be "paginated", too by a double \n after the width is full.
You can use fold to break the text up and then feed it to pr. Both are most likely available in your system. If this is the file lorem.txt:
Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do
eiusmod tempor incididunt ut labore et dolore magna aliqua. Integer
malesuada nunc vel risus commodo viverra maecenas accumsan lacus. Nec
feugiat nisl pretium fusce id velit ut tortor pretium. Lacus sed
turpis tincidunt id. Nibh sit amet commodo nulla facilisi. In metus
vulputate eu scelerisque felis. Id nibh tortor id aliquet.
then:
$ fold -w 20 -s lorem.txt | pr -3

2020-06-25 16:41                                                  Page 1

Lorem ipsum dolor    Integer malesuada    turpis tincidunt
sit amet,            nunc vel risus       id. Nibh sit amet
consectetur          commodo viverra      commodo nulla
adipiscing elit,     maecenas accumsan    facilisi. In metus
sed do eiusmod       lacus. Nec feugiat   vulputate eu
tempor incididunt    nisl pretium fusce   scelerisque felis.
ut labore et dolore  id velit ut tortor   Id nibh tortor id
magna aliqua.        pretium. Lacus sed   aliquet.
Check the pr and fold man pages for other options.
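If the date/page header gets in the way of your double-newline pagination, a few pr options help (GNU pr; a sketch): -t suppresses the header and trailer entirely, -w sets the page width used for multi-column output, and -l controls how many lines make up a page:
$ fold -w 25 -s lorem.txt | pr -3 -t -w 80 -l 40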
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/595094", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/22881/" ] }
595,249
What does the eu mean after #!/bin/bash -eu at the top of a bash script? Normally I begin my bash scripts with this hashbang/shebang : #!/bin/bash but I just came across one with #!/bin/bash -eu and I have no idea why there is a -eu there. Reading the man bash pages doesn't seem to help me, but maybe I'm overlooking something. Not a duplicate: This is not a duplicate of Correct behavior of EXIT and ERR traps when using `set -eu` . Quoting @ilkkachu in the comments below this question , directly addressing this: ...how -e and -u work with regard to traps or anything else is completely unrelated to how they and other single-character options can be given on the command line. I agree with that. These are separate questions and answers with differing motivations behind them. That question is so different I would never even think to click on it by looking at its title OR its description when trying to understand the answer to my own question here, and the answers are vastly different too.
They're the same options as -e and -u to the set builtin . They can be given on the shell command line too, and they get given as command line arguments from the hashbang line too. (But note e.g. issues with Multiple arguments in shebang ) The online manual says, under "Invoking Bash" , that All of the single-character options used with the set builtin can be used as options when the shell is invoked. The single character options are also explicitly listed in the invocation synopsis on the online manual ( bash [long-opt] [-abefhkmnptuvxdBCDHP] [-o option] ... ), though not in the manpage. set -u tells the shell to treat expanding an unset parameter an error, which helps to catch e.g. typos in variable names. set -e tells the shell to exit if a command exits with an error (except if the exit value is tested in some other way). That can be used in some cases to abort the script on error, without explicitly testing the status of each and every command.
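A tiny script makes both flags visible (a sketch — run it and watch it stop at the first problem). Note that putting set -eu in the body is equivalent to the shebang options, with one advantage: it still applies when someone runs the script as bash script.sh, which bypasses the shebang:
#!/bin/bash
set -eu
echo "start"
echo "$nosuchvar"   # -u aborts here: "nosuchvar: unbound variable"
false               # -e would abort here if the previous line were removed
echo "never reached"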
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/595249", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/114401/" ] }
595,410
My requirements are: send an email from the terminal (for the sake of batch processing) attach a pdf file to that email (the pdfs would be identical for all recipients, except for a watermark) specify a reply-to address ([email protected]) I've tried the "simplest answer to sending one-line messages via gmail is to use ssmtp" , and several variants, and keep getting: laptop sSMTP[19226]: Authorization failed (535 5.7.8 https://support.google.com/mail/?p=BadCredentials d13sm3920147qkj.27 -gsmtp) Google settings: IMAP enabled Allow less secure apps is ON For each ssmtp.conf setup that I tried, I have done DisplayUnlockCaptcha just before: $echo "Testing...1...2...3" | ssmtp [email protected] Looking at the stated thread alone, there is no consensus as to how /etc/ssmtp/ssmtp.conf should be set up: mailhub=smtp.gmail.com:587 vs 465 UseTLS=YES vs UseSTARTTLS=Yes (or both?) hostname=localhost vs whatever was put there as the default (in my case, laptop ) Could someone make a suggestion to sort this out, and possibly paste in full a working conf file? There is a claim in a thread from 2017 , that: You can not use external applications with your normal password, youmust go to https://security.google.com/settings/security/apppasswords Is that the case? (I'm not able to do it) What alternatives are there? PS: OS: Linux Mint 19 Tara ssmtp was tested from a clean install preceded by upgrade/update a couple of years ago, I was a able to send emails using the same OS (earlier version) To make sure the value of AuthPass is correct, I used it to manually log into my google account.
With the following Google settings:
IMAP enabled
Allow less secure apps is ON
the solution is to set an app-password. Note: app passwords at Google require that two-factor authentication be enabled first for the account you are trying to configure ssmtp for.
/etc/ssmtp/ssmtp.conf:
#
# Config file for sSMTP sendmail
#
# The person who gets all mail for userids < 1000
# Make this empty to disable rewriting.
root=postmaster
# The place where the mail goes. The actual machine name is required; no
# MX records are consulted. Commonly mailhosts are named mail.domain.com
# Modified 06/27/2020:
# [email protected]
[email protected]
#AuthPass=[usual gmail pwd]   # ain't gonna work
AuthPass=[pwd generated by https://myaccount.google.com/apppasswords]
#UseTLS=YES
#mailhub=smtp.gmail.com:465
UseSTARTTLS=Yes
mailhub=smtp.gmail.com:587
# Where will the mail seem to come from?
#rewriteDomain=
# The full hostname
hostname=laptop
# Are users allowed to set their own From: address?
# YES - Allow the user to specify their own From: address
# NO - Use the system generated From: address
FromLineOverride=YES
On the terminal:
$ echo "Testing...1...2...3" | ssmtp [email protected]
Received in my gmailbox (screenshot in the original post).
For the second requirement stated in the question (attachment), mutt works like a charm.
UPDATE on 12/01/2022
Using msmtp, because "less secure apps" has since been disabled:
Turn on two-step verification.
Generate an app password using Name='msmtp'.
Create /home/user/.msmtprc and fill it in as follows, using the password from the previous step:
# content of /home/user/.msmtprc
# do: `$ chmod 600 /home/user/.msmtprc`
# gmail account
account [email protected]
host smtp.gmail.com
port 587
tls on
tls_starttls on
auth on
user google-name
from [email protected]
# https://myaccount.google.com/apppasswords
password xxxxxxxxxxxxxxxx
# default account
account default : [email protected]
$ chmod 600 /home/user/.msmtprc
$ echo "Testing...1...2...3" | msmtp recipient.address@domain
This sends to bcc; look here for alternatives.
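Since the question also asked for an attachment and a Reply-To address, here is the kind of mutt command that covers both once ssmtp/msmtp delivery works (a sketch — file name and addresses are placeholders; my_hdr is mutt's standard way to inject extra headers):
$ echo "Please find the watermarked PDF attached." | mutt -s "Your document" -e 'my_hdr Reply-To: [email protected]' -a watermarked.pdf -- recipient.address@domain
The -- separator is required after -a in current mutt versions so the recipient is not mistaken for another attachment.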
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/595410", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/142980/" ] }
595,547
Is it possible to create a RAID array on files for testing purposes? Suppose I want to create a level-1 RAID and I don't have for example 10 block devices to do that but instead I want to simulate that using files instead of block devices. What I've done so far is this: fallocate -l 1M disk1fallocate -l 1M disk2mkfs.ext4 disk1mkfs.ext4 disk2sudo mdadm --create --assume-clean --level=1 --raid-devices=2 /dev/md0 ./disk1 ./disk2 But after that I get the error : mdadm: ./disk1 is not a block device. Any idea?
What you're looking for is called the loop device. It makes files appears as devices like /dev/loop0 etc. They can then be mounted as filesystems, and should work with md. From the man page loop(4) : The loop device is a block device that maps its data blocks not to aphysical device such as a hard disk or optical disk drive, but to theblocks of a regular file in a filesystem or to another block device. See e.g. https://man7.org/linux/man-pages/man4/loop.4.html https://man7.org/linux/man-pages/man8/losetup.8.html For testing things that need block devices, LVM might also be useful. It lets you make multiple logical volumes from a single physical partition (or the other way around) and destroying/recreating/resizing the volumes is also much simpler than with disk partitions.
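Putting that together for the question's example — a sketch; the loop device names vary, so use whatever losetup -f --show prints, and note that the backing files generally need to be larger than 1M to leave room for the md metadata. The filesystem then goes on the array, not on the individual files:
$ fallocate -l 100M disk1
$ fallocate -l 100M disk2
$ sudo losetup -f --show disk1
/dev/loop0
$ sudo losetup -f --show disk2
/dev/loop1
$ sudo mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/loop0 /dev/loop1
$ sudo mkfs.ext4 /dev/md0
Teardown when finished:
$ sudo mdadm --stop /dev/md0
$ sudo losetup -d /dev/loop0 /dev/loop1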
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/595547", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/388589/" ] }
595,557
I am trying to build a .bed file with genome coordinates and I am pretty close. Just missing the final step.I have a file that looks like this: LQNS02278165.1 13104710 13109495 BEL-1_PH-I 4785LQNS02278165.1 9139127 9142308 BEL-1_PH-I 3181LQNS02278165.1 9222957 9221339 BEL-1_PH-I -1618 I need to replace the 5 column with the sign, with is gonna be the orientation in the genome.ideally the output looks like this: LQNS02278165.1 13104710 13109495 BEL-1_PH-I +LQNS02278165.1 9139127 9142308 BEL-1_PH-I +LQNS02278165.1 9222957 9221339 BEL-1_PH-I - Any suggestion with awk will be very appreciated!Thanks
This should be enough: awk '{$NF=($NF<0 ? "-" : "+")}1' file $NF=($NF<0 ? "-" : "+") If last field is negative, replace it with a minus sign, else replace it with a plus sign. 1 prints the line.
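Applied to the sample in the question it produces exactly the requested output (reassigning a field makes awk rebuild the line with single-space OFS, which is already the format used here):
$ awk '{ $NF = ($NF < 0 ? "-" : "+") } 1' file
LQNS02278165.1 13104710 13109495 BEL-1_PH-I +
LQNS02278165.1 9139127 9142308 BEL-1_PH-I +
LQNS02278165.1 9222957 9221339 BEL-1_PH-I -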
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/595557", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/266557/" ] }
595,577
This is my working code, but I believe it's not optimized - there must be a way to complete the job much faster than this: find . -type f -iname '*.py' -printf '%h\0' | sort -z -u | xargs -r -0 -I{} sh -c ' find "{}" -maxdepth 1 -type f -iname "*.py" -print0 | xargs -r -0 du -sch | tail -1 | cut -f1 | tr "\n" " " echo -e "{}"' | sort -k1 -hr | head -50 The goal is to search for all directories recursively that contain *.py then print the total size of all *.py files by the name of each directory, sort them in reverse order by size and show only first 50. Any ideas how to improve this code (performance-wise) but keeping the same output? EDIT: I tested your proposals on the following sample: 47GB total: 5805 files Unfortunately, I couldn't compare it toe-to-toe, since not all proposals follow the same guidelines: the total size should be disk usage and delimiter should be only a single space. Formatting should be as follows: numfmt --to=iec-i --suffix=B The following 4 are sorted outputs, but David displays accumulative size of files, not real disk usage. However, his improvement is significant: more than 9.5x faster. Stéphane's and Isaac's code are very tight winners, since their code is approximately 32x faster than the reference code. $ time madjoe.shreal 0m2,752suser 0m3,022ssys 0m0,785s$ time david.sh real 0m0,289suser 0m0,206ssys 0m0,131s$ time isaac.sh real 0m0,087suser 0m0,032ssys 0m0,032s$ time stephane.sh real 0m0,086suser 0m0,013ssys 0m0,047s The following code unfortunately doesn't sort nor display largest 50 results (besides, during previous comparison to Isaac's code, the following code is approx 6x slower than Isaac's improvement): $ time hauke.sh real 0m0,567suser 0m0,609ssys 0m0,122s
To count the disk usage as opposed to the sum of the apparent size, you'd need to use %b ¹ instead of %s and make sure each file is counted only once, so something like:
LC_ALL=C find . -iname '*.py' -type f -printf '%D:%i\0%b\0%h\0' |
  gawk -v 'RS=\0' -v OFS='\t' -v max=50 '
    {
      inum = $0
      getline du
      getline dir
    }
    ! seen[inum]++ {
      gsub(/\\/, "&&", dir)
      gsub(/\n/, "\\n", dir)
      sum[dir] += du
    }
    END {
      n = 0
      PROCINFO["sorted_in"] = "@val_num_desc"
      for (dir in sum) {
        print sum[dir] * 512, dir
        if (++n >= max) break
      }
    }' | numfmt --to=iec-i --suffix=B --delimiter=$'\t'
Newlines in the dir names are rendered as \n, and backslashes (at least those decoded as such in the current locale ²) as \\. If a file is found in more than one directory, it is counted against the first one it is found in (order is not deterministic).
It assumes there's no POSIXLY_CORRECT variable in the environment (if there is, setting PROCINFO["sorted_in"] has no effect in gawk so the list would not be sorted). If you can't guarantee it ³, you can always start gawk as env -u POSIXLY_CORRECT gawk ... (assuming GNU env or compatible; or (unset -v POSIXLY_CORRECT; gawk ...)).
A few other problems with your approach:
without LC_ALL=C, GNU find wouldn't report the files whose name doesn't form valid characters in the locale, so you could miss some files.
Embedding {} in the code of sh constituted an arbitrary code injection vulnerability. Think for instance of a file called $(reboot).py. You should never do that; the paths to the files should be passed as extra arguments and referenced within the code using positional parameters.
echo can't be used to display arbitrary data (especially with -e which doesn't make sense here). Use printf instead.
With xargs -r0 du -sch, du may be invoked several times if the list of files is big, and in that case, the last line will only include the total for the last run.
¹ %b reports disk usage in number of 512-byte units. 512 bytes is the minimum granularity for disk allocation as that's the size of a traditional sector. There's also %k which is int(%b / 2), but that would give incorrect results on filesystems that have 512 byte blocks (file system blocks are generally a power of 2 and at least 512 bytes large).
² Using LC_ALL=C for gawk as well would make it a bit more efficient, but would possibly mangle the output in locales using BIG5 or GB18030 charsets (and the file names are also encoded in that charset) as the encoding of backslash is also found in the encoding of some other characters there.
³ Beware that if your sh is bash, POSIXLY_CORRECT is set to y in sh scripts, and it is exported to the environment if sh is started with -a or -o allexport, so that variable can also creep in unintentionally.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/595577", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/418125/" ] }
595,629
MySQL, fail2ban and many other server applications store their data in /var/lib. Are they using the correct directory? If so why is it named lib, which implies libraries?
/var/lib is indeed the correct directory; as described in the filesystem hierarchy standard , This hierarchy holds state information pertaining to an application or the system. State information is data that programs modify while they run, and that pertains to one specific host. Users must never need to modify files in /var/lib to configure a package's operation, and the specific file hierarchy used to store the data must not be exposed to regular users. The /var/lib directory was added to FSSTND (the FHS’ ancestor) in version 1.1, which gives a timeline for its discussion (between February and October 1994), but I haven’t been able to find archives of the FSSTND mailing list to check. I suspect the lib name was used to parallel /usr/lib , not because the latter stores libraries, but because it stores files which are “not exposed to regular users”: /usr/lib stores files which are essential to the system’s operation, but aren’t exposed to regular users either. It’s likely that /var/lib usage pre-dates its integration in FSSTND. One might expect /srv to be a better alternative nowadays, but that’s explicitly excluded: If the directory and file structure of the data is not exposed to consumers, it should go in /var/lib .
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/595629", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/17288/" ] }
595,653
As part of a larger awk script I needed to convert an arbitrary date string into seconds since the Epoch. This isn't available as an awk function so I thought I could resort to calling date on each line of input. (In hindsight I could have used perl , but let's park that thought.) After seeing some unexpected results I reduced the problem to this ( bash and GNU awk ) for f in {1..5}; do echo $f; sleep 2; done | awk '{ "date" | getline x; printf ">>%s<<\n", x }' All the same result, even though I confirmed that the awk loop really is running only once every two seconds >>29 Jun 2020 10:38:24<<>>29 Jun 2020 10:38:24<<>>29 Jun 2020 10:38:24<<>>29 Jun 2020 10:38:24<<>>29 Jun 2020 10:38:24<< Perhaps getline caches. So I tried this for f in {1..5}; do echo $f; sleep 2; done | awk '{ "date; : " NR | getline x; printf ">>NR=%d - %s<<\n", NR, x }'>>NR=1 - 29 Jun 2020 10:44:05<<>>NR=2 - 29 Jun 2020 10:44:07<<>>NR=3 - 29 Jun 2020 10:44:09<<>>NR=4 - 29 Jun 2020 10:44:11<<>>NR=5 - 29 Jun 2020 10:44:13<< All seems good. Caching (if that's what it is) is disabled and I get expected values from date . I then continued down this path one more time, supplying repeated values in the command piped to getline for f in 1 2 1 1 2 3; do echo $f; sleep 2; done | awk '{ "date; : " $1 | getline x; printf ">>NR=%d - f=%d - %s<<\n", NR, $1, x }'>>NR=1 - f=1 - 29 Jun 2020 10:43:01<<>>NR=2 - f=2 - 29 Jun 2020 10:43:03<<>>NR=3 - f=1 - 29 Jun 2020 10:43:03<<>>NR=4 - f=1 - 29 Jun 2020 10:43:03<<>>NR=5 - f=2 - 29 Jun 2020 10:43:03<<>>NR=6 - f=3 - 29 Jun 2020 10:43:11<< I expected row 3 either to result in a new evaluation of the command (delivering a new date value) or else repeating the value from the first line. Neither happens. This has stumped me. I don't understand is why I'm getting the same values for rows 2-5. Changing f from 1 to 2 clearly disabled any caching that was going on. But changing f from 2 back to 1 didn't give me the a cached copy of the first f=1 , but repeated the value for f=2 . Changing the command string to a new value with f=3 triggered a new call to date . Why?
GNU awk's manual mentions that: If the same file name or the same shell command is used with getline more than once during the execution of an awk program (see section Explicit Input with getline ), the file is opened (or the command is executed) the first time only. At that time, the first record of input is read from that file or command. The next time the same file or command is used with getline , another record is read from it, and so on. So it only runs the command once, and on further reads gets EOF, leaving the old value of x unchanged. Compare with what happens if we trash x after each read: $ for f in {1..3}; do echo $f; sleep 2; done | awk '{ "date" | getline x; printf ">>%s<<\n", x; x ="done" }'>>Mon Jun 29 13:37:53 EEST 2020<<>>done<<>>done<< If we replace the date command here with something that keeps a record of when it runs, we can also see the record show it only get executed once. getline also returns zero at EOF and -1 on error, so we could check that: $ for f in {1..3}; do echo $f; sleep 2; done | awk '{ if (("date" | getline x) > 0) printf ">>%s<<\n", x; else printf "error or eof\n"; }'>>Mon Jun 29 13:46:58 EEST 2020<<error or eoferror or eof You need to close() the pipe explicitly to have awk reopen it the next time. $ for f in {1..3}; do echo $f; sleep 2; done | awk '{ "date" | getline x; printf ">>%s<<\n", x; x = "done"; close("date") }'>>Mon Jun 29 13:39:19 EEST 2020<<>>Mon Jun 29 13:39:21 EEST 2020<<>>Mon Jun 29 13:39:23 EEST 2020<< With "date; : " NR | getline x; , all the command lines are distinct, so you get a separate pipe for each. With "date; : " $1 | getline x; , when $1 repeats you get the same issue as in the first case, the second read to the same pipe hits EOF.
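As an aside, for the use case that motivated this (turning a date string into epoch seconds inside awk), gawk's built-in mktime() can avoid the external date pipe altogether — a sketch, assuming timestamps like 2020-06-29 10:44:05 (mktime wants the "YYYY MM DD HH MM SS" format and interprets it in the local timezone):
$ echo '2020-06-29 10:44:05' | gawk '{ ts = $1 " " $2; gsub(/[-:]/, " ", ts); print mktime(ts) }'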
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/595653", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/100397/" ] }
595,677
I've been trying with sed , and googling for the whole morning, and I can't seem to get this to work. Is it possible with sed to lowercase text between two specific characters: i.e. SOMENAME=WOODSTOCK,SOMEOTHERNAME=JIMMY, can I lowercase WOODSTOCK and JIMMY (to woodstock and jimmy) on the basis they are between = and , ?
It is possible with GNU sed. Choose one of these two forms based on the greediness of the replacement.
sed 's|=.*,|\L&|' file
sed 's|=[^,]*,|\L&|' file
As the manual states, "\L turns the replacement to lowercase until a \U or \E is found". & is the text matched by the regex. I have modified the sample file to show that you should wisely choose between the greedy =.*, and the non-greedy =[^,]*, regexes:
$ cat file
SOMENAME=WOODSTOCK,SOMEOTHERNAME2=JIMMY,WOODSTOCK,FINISH
$ sed 's|=.*,|\L&|' file
SOMENAME=woodstock,someothername2=jimmy,woodstock,FINISH
$ sed 's|=[^,]*,|\L&|' file
SOMENAME=woodstock,SOMEOTHERNAME2=jimmy,WOODSTOCK,FINISH
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/595677", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/352497/" ] }
595,716
I am trying to get the difference of 2 dates in epoch form and convert the number back to days: EXPIRYEPOCH=$(date --date="$EXPIRYDATE" +%s)TODAYEPOCH=$(date --date="$TODAYSDATE" +%s)DAYSLEFT=$(expr ($EXPIRYEPOCH - $TODAYEPOCH) / 86400 ) The DAYSLEFT evaluation fails in the above - whereas a single evaluation of subtraction succeeds in the below: DAYSLEFT=$(expr $EXPIRYEPOCH - $TODAYEPOCH) What is the proper formatting to be used to set the DAYSLEFT variable with subtraction followed by division?
Like this (don't use the outdated expr anymore):
daysleft=$(( arithmetic expression ))
If you need floating-point numbers in bash, use bc instead:
daysleft=$(bc -l <<< "scale=2; 100/3")
As stated by Stéphane Chazelas in comments, ksh93, zsh and yash do support floating points within $((...)) and ((...)).
expr is a program used in ancient shell code to do math. In POSIX shells like bash, use $(( expression )). In bash, ksh88+, mksh/pdksh, or zsh, you can also use (( expression )) or let expression.
((...)) is an arithmetic command, which returns an exit status of 0 if the expression is nonzero, or 1 if the expression is zero. Also used as a synonym for "let", if side effects (assignments) are needed. See http://mywiki.wooledge.org/ArithmeticExpression
$((...)) is an arithmetic substitution. After doing the arithmetic, the whole thing is replaced by the value of the expression. See http://mywiki.wooledge.org/ArithmeticExpression
Command Substitution: "$(cmd "foo bar")" causes the command 'cmd' to be executed with the argument 'foo bar' and "$(..)" will be replaced by the output. See http://mywiki.wooledge.org/BashFAQ/002 and http://mywiki.wooledge.org/CommandSubstitution
Avoid using UPPER CASE variable names; by convention they are reserved for the system's use.
Finally — note that the shell variables have to be expanded inside the bc expression, and that "days left" is expiry minus today, not the other way around:
expiryepoch=$(date --date="$expirydate" +%s)
todayepoch=$(date --date="$todaysdate" +%s)
daysleft=$(bc <<< "scale=2; ($expiryepoch - $todayepoch) / 86400")
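Putting it together for the original problem — a runnable sketch with GNU date (as used in the question); integer arithmetic is enough when you only want whole days:
expirydate='2020-12-31'   # example value
expiryepoch=$(date --date="$expirydate" +%s)
todayepoch=$(date +%s)
daysleft=$(( (expiryepoch - todayepoch) / 86400 ))
echo "$daysleft day(s) left"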
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/595716", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/137794/" ] }
595,756
Is there a CLI command or something to deal with fontforge e.g. to quickly get a list of all supported symbols in a given font? Something like: $ the_command_I_am_looking_for givenFont.ttf
ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyzΑΒΓΔΕΖΗΘΙΚΛΜΝΞΟΠΡΣΤΥΦΧΨΩαβγδεζηθικλμνξοπρστυφχψωАБВГДЄЖЅЗИІКЛМНОПРСТѸФХѠЦЧШЩЪЫЬѢЮꙖѤѦѪѨѬѮѰѲѴабвгдєжѕзиіклмнопрстѹфхѡцчшщъыьѣюꙗѥѧѫѩѭѯѱѳѵתשרקץףעסןםלךײיטחזױװוהדגבאⴱⴲⴳⴴⴵⴶⴷⴸⴹⴺⴻⴼⴽⴾⴿⵀⵁⵂⵃⵄⵅⵆⵇⵈⵉⵊⵋⵌⵍⵎⵏⵐⵑⵒⵓⵖⵗⵘⵙⵚⵛⵜⵝⵞⵟⵠⵡⵢⵣⵤⵥⵦⵧⵯ⵰0123456789+-×÷¢£¥₤₥₦₨₩₪₫€₭₮₲₺ƒ₼₽₴₹฿₵₠Ұ/~`|_-,;:!'"()[]{}@$\&#%+¶‡†‽ $
Is there a way for this?
For a TrueType font, you can use the ttfdump utility which is available from TeXlive . ttfdump -t cmap -i /usr/share/fonts/truetype/freefont/FreeSerif.ttf |perl -CS -ne 'print chr(hex($1)) if /Char (0x[[:xdigit:]]+)/ and hex($1) != 0xffff; END {print "\n"}' Experimentally, this only seems to list code points below U+FFFF. I don't know if this is a bug in ttfdump or if this is because extra work is needed to reach other planes. For any font that is supported by Freetype , the Freetype library makes this information available, but there doesn't seem to be a readily available command line utility to query it. Here's a one-liner using the Freetype-py Python bindings, which you can install with pip3 install --user freetype-py . python3 -c 'import freetype, sys; stdout = open(1, mode="w", encoding="utf8"); face = freetype.Face(sys.argv[1]); stdout.write("".join(sorted([chr(c) for c, g in face.get_chars() if c]) + ["\n"]))' /usr/share/fonts/truetype/freefont/FreeSerif.ttf
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/595756", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/56505/" ] }
595,784
I want to make it easier to run this kind of command: find . -type f -exec sed -i 's|wpp-splash|wpp_splash|g' {} \; so I created a function in my .bashrc to shorten it: function sedall() { find . -type f -exec sed -i 's|$1|g' {} \; } this way I can do sedall wpp-splash|wpp_splash But there is a syntax error. I am not sure what it is, but that bash function is resulting in "unexpected end of file". I wonder if it is something with the } characters? I tried escaping them like \{\} but that did not solve the problem. Any help, please?
Many problems there. Variables are not expanded inside single-quotes. { command ; } requires the terminating semicolon (or a newline).
sedall wpp-splash|wpp_splash
That is understood as a pipeline because you did not protect the pipe character with quotes. I would suggest this:
sedall(){
    [ "$#" = 2 ] || { echo Two arguments needed; return 9; }
    find . -type f -exec sed -i "s|$1|$2|g" {} \;
}
It needs two arguments instead of one, and it checks if those two arguments were given before execution.
$ cat a b
XABCX
YABCY
$ sedall ABC DEF
$ cat a b
XDEFX
YDEFY
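One caveat to add: the two arguments are spliced into the s|…|…|g expression, so an argument containing | (or characters special to sed patterns and replacements, such as & or \) will break or silently change the command. A partial mitigation is to pick an unlikely delimiter — a sketch; truly arbitrary strings would need proper escaping, not just a different delimiter:
sedall(){
    [ "$#" = 2 ] || { echo Two arguments needed; return 9; }
    find . -type f -exec sed -i "s@$1@$2@g" {} \;
}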
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/595784", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/115900/" ] }
596,030
I need to install Linux OS on 30 PCs.Is there any way to install from one ISO image with multicast or something like WDS in Microsoft? I have a Ethernet connection with speed of 100Mb so installing 30 PCs with uni-cast can be very slow.
What you're looking for is probably PXE: https://wiki.archlinux.org/index.php/Preboot_Execution_Environment http://jensd.be/533/linux/create-a-pxe-bootserver-to-server-multiple-linux-distributions https://www.howtoforge.com/ubuntu_pxe_install_server In case your LAN is too slow, you could use Kickstart for Fedora/CentOS/RHEL: https://docs.fedoraproject.org/en-US/fedora/rawhide/install-guide/advanced/Kickstart_Installations/ Fully Automatic Installation: https://fai-project.org/
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/596030", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/385544/" ] }
596,070
Linux kali-linux 5.6.0-kali2-amd64 #1 SMP Debian 5.6.14-2kali1 (2020-06-10) x86_64 GNU/Linux I want to change my wallpaper from the terminal. I tried methods suggested here and: Gsettings doesn't work : gsettings set org.cinnamon.desktop.background picture-uri "file:///filename" doesn't work. I can't install xsetbg by apt install xsetbg feh method doesn't give output or change wallpaper. Even Gsettings for gnome doesn't work: gsettings set org.gnome.desktop.background picture-uri file:///path/to/your/image.png
Xfce uses the Xfconf configuration system. To access the xfconf there is a CLI tool xfconf-query. https://docs.xfce.org/xfce/xfconf/xfconf-query To find out what property is changed when the backgound changes, run the following command in a terminal window: xfconf-query -c xfce4-desktop -m ...and then change the background using the Settings Manager > Desktop. The command monitors channel xfce4-desktop for changes. It will tell which property on channel xfce4-desktop is changed. Then the command to change that property would be like this xfconf-query -c xfce4-desktop -p insert_property_here -s path/image Change propery and path to image accordingly.
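For reference, on a default single-monitor setup the property usually comes out as below, so the full command would look something like this (an assumption — confirm the exact property path with the monitoring trick first, since it encodes your screen, monitor and workspace):
$ xfconf-query -c xfce4-desktop -p /backdrop/screen0/monitor0/workspace0/last-image -s /path/to/image.png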
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/596070", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/419318/" ] }
596,110
Usually grep outputs any lines that match a pattern. I want to be able to find lines that match the pattern multiple times. For example, if my search pattern was "foo", then: foo bar # Would not be matchedfoo foo bar # Would be matchedbar foofoo # Would be matchedfoobarfoo # Would be matched Is there a way I can tell grep to find only lines that contain multiple matches of my search pattern?
grep -E "(foo.*){2}" file
This prints the lines of file (or of standard input) that contain at least 2 matches of foo; in place of 2 you can give whatever minimum number of matches you need.
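If you'd rather count matches explicitly (for example to require an exact count, or to sidestep thinking about how .* interacts with your pattern), awk's gsub returns the number of replacements made, which doubles as a per-line match counter:
$ awk 'gsub(/foo/, "&") >= 2' file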
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/596110", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/61235/" ] }
596,129
I want to rename files like: SL Benfica vs. SC Beira-Mar 136.mp4SL Benfica vs. SC Beira-Mar 137.mp4SL Benfica vs. SC Beira-Mar 138.mp4SL Benfica vs. SC Beira-Mar Jogo 074.mp4SL Benfica vs. SC Beira-Mar Jogo 082.mp4SL Benfica vs. SC Beira-Mar Jogo 112.mp4 But this for f in *.mp4; do echo mv "$f" "${f//[^0-9]/}.mp4"; done Adds a "4" at the end: 1364.mp41374.mp41384.mp40744.mp40824.mp41124.mp4 I think that it gets confused with the "4" in "mp4". How can I solve this?
Remove the .mp4 suffix before you do the replacement. for f in *.mp4; do fname=${f%.mp4} mv -- "$f" "${fname//[^0-9]}.mp4"done I added -- in case your filenames start with a - and removed the echo . Be careful.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/596129", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/416682/" ] }
596,194
I have a single line of numbers separated by a space. read varsum=0for x in $var; do ... #add numbers together $((sum += x))done I need the printed value to be an absolute number I got one more issue I am getting. The format I am trying to get an absolute number. Nothing I am doing is working. I read that abs(){ number} should give me this but it doesn't seem to be working. Also if that does work where do you work that into the loop?
If you've a negative integer you can treat it as a string and remove the leading dash x=-5echo ${x#-} # "5"x=5echo ${x#-} # "5"
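If the value needs to stay numeric — e.g. inside the summing loop — bash arithmetic handles it too, and is harmless for values that are already positive:
x=-5
echo "$(( x < 0 ? -x : x ))"   # 5
abs() { local n=$1; echo "$(( n < 0 ? -n : n ))"; }   # reusable helper (bash)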
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/596194", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/420509/" ] }
596,203
I use Putty to remotely login into SERVER 1 and issue a WGET to another SERVER 2 to make a backup of a large directory. My question is, do I have to stay connected while this process occcurs? Or, is it enough to simply exectute the WGET command and all the files will copy over, even if I disconnect? If I do have to stay connected, how can I configure things so I don't have to stay connected?
As you are running it now, you do have to stay connected: when the PuTTY/SSH session ends, the shell sends SIGHUP to its children and the wget is killed. To make it survive a disconnect, either start it immune to hangups:
nohup wget ... > wget.log 2>&1 &
(you can then log out; progress and errors end up in wget.log), or run it inside a terminal multiplexer such as screen or tmux:
screen        # start a session, run wget inside it
Ctrl-a d      # detach; the transfer keeps running
screen -r     # reattach later to check on it
Either way the copy continues on SERVER 1 after you close PuTTY.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/596203", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/420670/" ] }
596,276
I would like to check whether a Linux machine supports io_uring. How can this be done? Is there some kernel file that describes support for this, or do all Linux 5.1+ kernels have support?
io_uring doesn’t expose any user-visible features, e.g. as a sysctl ; it only exposes new system calls. It is available since kernel 5.1, but support for it can be compiled out, and it might be backported to older kernels in some systems. The safest way to check for support is therefore to check whether the io_uring system calls are available . If you have /proc/kallsyms , you can look there:

grep io_uring_setup /proc/kallsyms

Another way to check for the system call is to attempt a safe but malformed call, and check whether the resulting error is ENOSYS , for example:

#include <errno.h>
#include <linux/io_uring.h>
#include <stddef.h>
#include <sys/syscall.h>
#include <unistd.h>

int main(int argc, char **argv) {
    if (syscall(__NR_io_uring_register, 0, IORING_UNREGISTER_BUFFERS, NULL, 0) &&
        errno == ENOSYS) {
        // No io_uring
    } else {
        // io_uring
    }
}

On a kernel supporting io_uring , the available operations vary as new features are introduced with new kernel versions; to determine the supported operations, use io_uring_get_probe .
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/596276", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/61092/" ] }
596,307
New to script writing, new to bash, new to firmware modifications, but... enthusiastic as all get out. Here's the link to my camera's firmware. I'm trying to edit and replace the bitrate values specific to recording on this camera and I need some help. I've used a hex editor to find the bitrate values but I haven't found them yet. Still looking... In the meantime I'd like to see if I could change the value of the firmware version using the script, just to test it out and see if it actually works. I believe the firmware version information is stored in the paramdef file. If you open it in hex and search for the firmware version you'll find it at D0B0 (at the bottom of the hex). I want to change the 3 to a 4 value in a bash file and see if it works in the camera when I boot it with a test.sh bash script.

I need to know how to reference the hex line in the bash script
I need to know how to replace the value in the hex line with my bash script

I'm thinking if I can get some positive traction on this script I'll eventually be able to edit the bitrate values of my camera. I'm also not able to flash the same firmware to my camera over and over. It'll only flash a new version. I'm wondering if it's something to do with the system script at the beginning of the firmware files. From config.file:

setenv bootargs 'mem=96M quiet console=ttyAMA0,115200 clk_ignore_unused rw root=/dev/mtdblock5 rootfstype=jffs2 mtdparts=hi_sfc:384K(u-boot-GR01V2_2_2GDDR3.bin),64K(rawparam),64K(rawparambak),2944K(media_app_zip.bin),2560K(uImage),1920K(rootfs.jffs2),8064K(appfs.jffs2)'
setenv bootcmd 'sf probe 0;sf read 0x84000000 0x60000 0x10000;sf read 0x84100000 0x70000 0x10000;cread 0x84000000 0x84100000 0x10000 0x80100000;go_cpu1 0x80200000 0x89000000 0x80000 0x2e0000;sf read 0x84000000 0x360000 0x280000;bootm 0x84000000'
setenv swverv2 'S2_GR01V2_2_2GDDR3_0303111844'

Edit: I still haven't found what I'm looking for (u2...)... I'm beginning to think it might be found in the U-boot commands? The config file has the setenv command, which I believe is read by the Linux command structure??? I wanted to see if it would just work on boot, and I created a bash.sh file that reads:

echo 'This is a test' > foo.txt
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/596307", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/420770/" ] }
596,376
I have a CSV file that contains usernames and passwords. The file looks something like this:

user1,password1
user2,password2
user3,password3

I need to loop through each line to grab the username and password, use those variables, and then grab the next set of usernames and passwords, replace the content of the variables with the new ones, etc. I've been searching for the best way to do this, but I'm not very familiar with scripting and I'm getting lost. I've used awk to grab both individually, but I'm struggling to figure out how to use awk within a while loop. And I'm reading that might not be a great approach.
I don't see any need for awk here:

#!/bin/bash
while IFS=, read -r user pass; do
    # something with "$user" "$pass"
done < path/to/file.csv
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/596376", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/420837/" ] }
596,403
I wanted to search for the string '.vars()' in all my Python files, and somehow I redefined 'grep' as follows:

% grep .vars() *.py
% which grep
grep () {
	*.py
}

I have tried using unset grep and export grep=/bin/grep to correct this, without success. Can somebody explain what I've accidentally done? NOTE: in Bash, it fails with: "syntax error near unexpected token `('".
This defines two functions, one named grep and the other named vars , whose body is *.py :

grep .vars() *.py

To remove those functions --- and to unshadow grep --- you want:

$ unset -f grep .vars

From man zshbuiltins :

unset [ -fmv ] name ...
       ... unset -f is equivalent to unfunction. ...

unfunction
       Same as unhash -f. ...

unhash ...
       The -f option causes unhash to remove shell functions.
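If it helps to see the effect, something like this shows the shadowing and the fix (the exact wording of the type output, and the path, vary by shell and system):

$ type grep
grep is a shell function
$ unset -f grep .vars
$ type grep
grep is /usr/bin/grep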
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/596403", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/420855/" ] }
596,406
I'd like to use awk to parse a file, and add a line if and only if this line does not already exist. My file:

cccc
dddd
aaaa
mmm

Example 1: I'm looking for "aaaa". "aaaa" exists, so nothing happens; the output file will be the same.

Example 2: I'm looking for "bbbb". "bbbb" doesn't exist. My output file should be:

cccc
dddd
aaaa
mmm
bbbb

How can I obtain this result?
$ awk -v x='aaaa' '$0 == x {found=1} END {if(!found) print x} 1' file
cccc
dddd
aaaa
mmm
$ awk -v x='bbbb' '$0 == x {found=1} END {if(!found) print x} 1' file
cccc
dddd
aaaa
mmm
bbbb
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/596406", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/420641/" ] }
596,450
I want to grep all lines with only one "#" in a line. Example:

xxx#aaa#iiiii
xxxxxxxxx#aaa
#xxx#bbb#111#yy
xxxxxxxxxxxxxxxx#xxx
#x
#x#v#e#

Should give this output

xxxxxxxxx#aaa
xxxxxxxxxxxxxxxx#xxx
#x
try

grep '^[^#]*#[^#]*$' file

where

^       ; begin of line
[^#]*   ; any number of chars ≠ #
#       ; #
[^#]*   ; any number of chars ≠ #
$       ; end of line

As suggested, you can grep on the whole line with

grep -x '[^#]*#[^#]*'

i.e. the same pattern without the begin-of-line/end-of-line anchors. -x makes grep match the whole line, see man grep:

-x, --line-regexp
       Select only those matches that exactly match the whole line. For a regular expression pattern, this is like parenthesizing the pattern and then surrounding it with ^ and $.
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/596450", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/420898/" ] }
596,621
I have a script that I am running inside a ubuntu container:

#!/bin/sh
name=$(cat < /etc/os-release | grep "^NAME" | cut -d "=" -f 2)
if [ $name = "Ubuntu" ]
then
    echo "This should return"
else
    echo "This is wrong"
fi

I started the container by running:

docker run -it -v $(pwd):/scripts ubuntu:latest /bin/sh /scripts/test.sh

The output I am receiving is "This is wrong", which is not right, because I know that the output of $name is "Ubuntu" because my laptop is Ubuntu, but I can't find a reason why this is going down the else route in the script. It does the same thing on my laptop outside the container.
The file /etc/os-release contains a list of shell-compatible variable assignments. You don't need to parse it from a shell script, you can just source (or . ) it and use the relevant variables.

$ cat ex.sh
#!/bin/sh
. /etc/os-release
echo "$NAME"
$ ./ex.sh
Ubuntu
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/596621", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/421032/" ] }
596,653
Is there any limit on the maximum nesting of directories in the ext4 filesystem? For example, the ISO-9660 filesystem AFAIK cannot have more than 7 levels of sub-directories.
There isn’t any limit inherent in the file system design itself, and experimentation (thanks ilkkachu ) shows that directories can be nested to a depth exceeding limits one might naïvely expect ( PATH_MAX , 4096 on Linux, although that limits the length of paths passed to system calls and can be worked around with relative paths). Part of the implementation apparently assumes that the overall path length, inside a given file system, never goes above PATH_MAX ; see the directory hashing functions which allocate PATH_MAX bytes. The only directory-related limit which seems to be checked in the file system implementation is the length of an individual path component , which is limited to 255 bytes; but that doesn’t have any bearing on the nested depth.
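A quick way to see this in practice (a sketch; the 3000 levels and the directory name "d" are arbitrary, and pwd itself may start failing once the path outgrows PATH_MAX, which is itself part of the point):

i=0
while [ "$i" -lt 3000 ]; do
    mkdir d && cd d || break
    i=$((i + 1))
done
pwd | wc -c    # well beyond 4096 bytes on success

Each level adds two bytes ("d/") to the path, so 3000 levels puts the overall path around 6000 bytes, yet every mkdir succeeds because each individual component stays under the 255-byte limit.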
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/596653", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/388589/" ] }
596,679
What does find . -exec /bin/bash -p \; -quit mean? I already know that when it finds something (a file?) with a . it executes /bin/bash . Has -p something to do with /bin/bash or the find command? What do \; and -quit mean?
find . -exec /bin/bash -p \; -quit (which here assumes GNU find or compatible for its -quit ) would start find , which would descend the directory tree starting with . (the current working directory) and for each file, starting with . itself, execute /bin/bash -p (where \; is just there to tell find where the command to execute ends), and if that command succeeds, exit (because of the -quit ). That find command in itself doesn't do much that is useful. It is a convoluted way to start bash . Here -p can give you a hint as to the (nefarious) intent behind that command. -p would prevent bash from dropping its privileges when it's called in a privilege escalation context (like when called from a process that has executed a setuid executable). It seems like we're in a restricted context where the user is only allowed to execute a restricted set of commands. That could be done via a restricted shell for instance. But find happens to be among the list of commands that are allowed, and whoever set the restricted environment up overlooked the fact that find can execute arbitrary commands without being affected by the restrictions set against the shell ( find is not a shell builtin). So find . -exec /bin/bash -p \; -quit looks like the command someone would run to circumvent those restrictions.
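The -quit behaviour on its own is easy to try with a harmless action (the path is just an example; output will differ per system):

$ find /etc -name passwd -print -quit
/etc/passwd

Without -quit , find would keep descending and print every match instead of stopping at the first one.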
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/596679", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/421071/" ] }
596,692
I have an array containing strings to exclude with grep from the output of another program. I need to add an -e before each element. For instance:

exclude=("$0" /usr/sbin/crond)
needs-restarting | grep -Fwiv "${exclude[@]}"

Now I know in this case I could prepend --regexp= (or just -e ) to each element like so:

exclude=( "${exclude[@]/#/--regexp=}" )

But in the general case, how would I go about it? I came up with this but maybe there's a simpler way.

i=0
for elem in "${exclude[@]}"; do
    exclude[i]='-e'
    exclude[i+1]="$elem"
    ((i+=2))
done
declare -p exclude
In bash 4.4+, you could do:

readarray -td '' array < <( ((${#array[@]})) && printf -- '-e\0%s\0' "${array[@]}")

Here using \0 as the delimiter as bash variables can't contain NUL bytes anyway. If you know the array is not going to be empty, you can skip the ((${#array[@]})) && . Example: before:

bash-5.0$ array=($'a\nb' '' 'c d' e)
bash-5.0$ typeset -p array
declare -a array=([0]=$'a\nb' [1]="" [2]="c d" [3]="e")

after:

bash-5.0$ typeset -p array
declare -a array=([0]="-e" [1]=$'a\nb' [2]="-e" [3]="" [4]="-e" [5]="c d" [6]="-e" [7]="e")

In zsh , you could use its array zipping operator:

opt=-e
(($#array == 0)) || array=("${(@)opt:^^array}")

Or this convoluted one:

set -o extendedglob # for (#m)
array=("${(Q@)"${(@z)array//(#m)*/-e ${(qq)MATCH}}"}")

Where we replace each element with -e <the-element-quoted> (with the qq flag), and then use z to parse that quoting back into a list of elements (where -e and <the-element-quoted> are then separated out), and Q to remove the quotes (and @ within quotes used to preserve empty elements if any).
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/596692", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/421086/" ] }
596,853
I have a file with dates:

Mar 16
Mar 12
Mar 13
Mar 19
Mar 14
Mar 17

and I need to calculate the number of days that have passed until now. I came up with this function:

datediff() { d1=$(date -d "$1" +%s); d2=$(date -d "$2" +%s); echo $(( (d1 - d2) / 86400 )) days; }
$ datediff 'now' '13 Mar'
114 days

but I need some loop that will calculate that for every line.
You can use a while loop, where the condition is based on the ability to read from standard input:

$ cat input.txt
Mar 16
Mar 12
Mar 13
Mar 19
Mar 14
Mar 17
$ cat ex.sh
#!/bin/bash

datediff() {
    local d1="$(date -d "$1" +%s)"
    local d2="$(date -d "$2" +%s)"
    echo "$(( (d1 - d2) / 86400 )) days"
}

while read line; do
    datediff 'now' "${line}"
done < "${1}"
$ ./ex.sh input.txt
111 days
115 days
114 days
108 days
113 days
110 days

The script here takes a single argument: the input file. While it can read a line from the file, it calls your datediff function passing now and the content of the line that it read from the file.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/596853", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/292797/" ] }
596,870
I organize all my text notes in AsciiDoc files. AsciiDoc is a file format similar to Markdown but instead of # I use = to denote a first-level heading, like so:

first-file.adoc
= How To Install MySQL 8.0 on Fedora

second-file.adoc
= Summaries and Notes From Books I've Read

Knowing that each file includes one and "only one" first-level heading = This is a First-Level Heading and it is always on line number 1, my goal is to search through all files using grep and only look in those first-level headings. This grep command matches the second file:

grep -in '^= .*ooks' *.adoc

It makes sense and it is what I need. However my terminal highlights in red everything up until the word "Books" and leaves the rest "I've Read" white. Is that normal?

= Summaries and Notes From Books I've Read
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/596870", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/421232/" ] }
596,881
I am using Ubuntu 16.04, but I believe my question applies to many distros, such as Debian, CentOS, and Red Hat. The manpage for netstat -l says:

Show only listening sockets. (These are omitted by default.)

and for netstat -a :

Show both listening and non-listening sockets. With the --interfaces option, show interfaces that are not up

Does the output of netstat -a include the output of netstat -l ? It seems so in the manpage, but many websites talk about netstat -plantu so I am wondering if netstat -l covers something that netstat -a does not.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/596881", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/396801/" ] }
596,887
While using Xorg X11, on KDE/Gnome/XFCE how can we scale the display/resolution for the whole desktop and/or per application? (when this is not available on the settings GUI) The purpose is to keep the screen resolution unchanged (at max) while scaling the size (bigger/smaller) of the desktop/applications.
Linux display

This is detailed in depth in the "How does Linux's display work?" QA. On most desktop systems (like KDE or Gnome) there are settings available in their respective settings panels; this guide is for additional/manual settings that can be applied to scale an application or the whole desktop. This reference article has much valuable information on the matter.

Scaling applications

Scaling applications can be done mainly via DPI , a specific environment variable (explained below), an application's own settings or some specific desktop setting (out of scope of this QA). Qt applications can be scaled with the following environment variables; note that many applications hard-code sizing and fonts, and thus the result on such apps may not be as expected.

export QT_AUTO_SCREEN_SET_FACTOR=0
export QT_SCALE_FACTOR=2
export QT_FONT_DPI=96

Gnome/GTK applications can be scaled with the following environment variables

export GDK_SCALE=2
export GDK_DPI_SCALE=0.5

Gnome/GTK can as well be scaled globally with this Gnome setting

gsettings set org.gnome.desktop.interface text-scaling-factor 2.0

Chromium can be scaled with the following command

chromium --high-dpi-support=1 --force-device-scale-factor=1.5

Xpra (python) can be used along with Run scaled to achieve per-application scaling. Environment variable modifications can be placed in ~/.profile to be applied globally and automatically after login.

Scaling the desktop with Xorg X11

Xorg 's extension RandR has a scaling feature and can be configured with xrandr . This can be used to scale the desktop to display a bigger environment, which can be useful for HiDPI (High Dots Per Inch) displays. RandR can also be used the other way around , for example making a screen with a 1366x768 max resolution support a greater resolution like 1920x1080. This is achieved by simulating the new greater resolution while rendering it at the supported max resolution, similar to when we watch a Full-HD video on a screen that is not Full-HD.

Scaling the desktop without changing the resolution

Getting the screen name:

xrandr | grep connected | grep -v disconnected | awk '{print $1}'

Reduce the screen size by 20% (zoom in):

xrandr --output screen-name --scale 0.8x0.8

Increase the screen size by 20% (zoom out):

xrandr --output screen-name --scale 1.2x1.2

Reset xrandr changes:

xrandr --output screen-name --scale 1x1

Scaling the desktop and simulating/rendering a new resolution

When using xrandr to "zoom in" with the previous method, the desktop remains full screen, but when we "zoom out" with for instance xrandr --output screen-name --scale 1.2x1.2 (to get an unsupported resolution) the desktop is not displayed in full screen, because this requires updating the resolution (to a probably higher resolution unsupported by the screen). We can use a combination of xrandr's --mode , --panning and --scale parameters to achieve a full-screen "zoom-out" scaling (simulate a new resolution). Example: get the current setup

xdpyinfo | grep -B 2 resolution
# or
xdpyinfo

Configuration example

Scaling at: 120%
Used/max screen resolution: 1366 x 768
Resolution at 120% (res x 1.2): 1640 x 922 (round)
Scaling factor (new res / res): 1.20058565 x 1.20208604

The idea here is to increase the screen resolution virtually (because we are limited to 1366x768 physically); the command would be (replace screen-name ):

xrandr --output screen-name --mode 1366x768 --panning 1640x922 --scale 1.20058565x1.20208604

Reset the changes with

xrandr --output screen-name --mode 1366x768 --panning 1366x768 --scale 1x1
# restarting the desktop may be required, example with KDE
# kquitapp5 plasmashell
# plasmashell &

Making xrandr changes persistent

There is a multitude of methods to make xrandr changes persistent; this and this QA have many examples.

Experiment notes

As a side note, and as a result of experiments while using SDDM + KDE, after many tests to achieve a persistent config I ended up loading a script with ~/.config/autostart ( systemsettings5 > Startup... > Autostart), naming my script 00-scriptname to make it run first.

# 00-scriptname
# Applying the main xrandr suited changes (scaling at x1.15)
xrandr --output eDP1 --mode 1366x768 --panning 1574x886 --scale 1.15226939x1.15364583

# This is where it gets odd/complicated: sometimes the screen resolution is not applied correctly or not applied at all...
# Note that "xrandr --fb" can be used alone to change the screen resolution in a normal situation...
# Here we take advantage of xrandr's "--fb" feature to make the config apply reliably every time.
# The odd thing is that while re-applying the new resolution 1574x886 with "--fb" nothing happens, but
# if we use an unsupported resolution like 1574x884 (vs 1574x886) then xrandr forces the resolution
# to "reset itself" to the configured resolution (1574x886)...
# In short, just re-apply the setting with "--fb" and an unsupported resolution to force a reset.
# ("--fb" can be used alone here without re-applying everything)
#xrandr --fb 1574x884
xrandr --fb 1574x884 --output eDP1 --mode 1366x768 --panning 1574x886 --scale 1.15226939x1.15364583

References

Some KDE gui tools: systemsettings5 > display, kcmshell5 xserver and kinfocenter . Links and sources: 1 , 2 , 3 , 4 , 5 , 6 , 7 , 8 , 9 , 10 , 11 and 12 .
596,889
I've a problem with my CentOS. When I've tried to install alien (the el7 package, because el8 isn't available), I got this error (my CentOS is set to the Polish language):

[mlodybukk@localhost Pobrane]$ sudo yum install alien
Updating Subscription Management repositories.
Unable to read consumer identity
This system is not registered to Red Hat Subscription Management. You can use subscription-manager to register.
Ostatnio sprawdzono ważność metadanych: 0:01:14 temu w dniu nie, 5 lip 2020, 22:45:38.
Błąd:
 Problem: conflicting requests
  - nothing provides perl(:MODULE_COMPAT_5.16.3) needed by alien-8.90-3.el7.nux.noarch

(The Polish lines translate to "Last metadata expiration check: 0:01:14 ago on Sun, 5 Jul 2020, 22:45:38." and "Error:".) How can I repair this?
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/596889", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/421247/" ] }
596,894
The Linux's display system uses multiple technology , protocols, extensions, applications, servers (daemon), drivers and concepts to achieve the windowing system for instance: Xorg, Wayland, X11, OpenGL, RandR, XrandR, Screen Resolution, DPI, Display server, etc. That multitude can be overwhelming or confusing when we don't have the full picture. There are multiple documentations for each side of the Linux's display system, but globally how does it work exactly?
Linux display

The Linux display system uses multiple technologies, protocols, extensions, applications, servers (daemons), drivers and concepts to achieve the windowing system, for instance: Xorg, Wayland, X11, OpenGL, RandR, XrandR, screen resolution, DPI, display server, etc. This can be overwhelming to understand fully, but each part is meant for a specific purpose, and they are not all used together at the same time.

X protocol

The X Window System, X11 (X version 11), is a windowing system for bitmap displays, common on Unix-like operating systems. X provides the basic framework for a GUI environment: drawing and moving windows on the display device and interacting with a mouse and keyboard. X does not mandate the user interface; this is handled by individual programs. As such, the visual styling of X-based environments varies greatly; different programs may present radically different interfaces. X originated at Project Athena at the Massachusetts Institute of Technology (MIT) in 1984. The X protocol has been at version 11 (hence "X11") since September 1987. The X.Org Foundation leads the X project, with the current reference implementation, X.Org Server, available as free and open source software under the MIT License and similar permissive licenses.

X implementation

Most Linux distributions use X.Org Server, which is the free and open-source implementation of the display server for the X Window System (X11) stewarded by the X.Org Foundation. Xorg/X alone doesn't support several provided features like scaling or rendering; for that, Xorg uses extensions such as XFixes , RandR (managed by xrandr; it can for instance set up panning, resolution or scaling), GLX (OpenGL extension), Render , or Composite, which causes an entire sub-tree of the window hierarchy to be rendered to an off-screen buffer. Applications can then take the contents of that buffer and do whatever they like; the off-screen buffer can be automatically merged into the parent window or merged by external programs, called compositing managers, and some window managers do compositing on their own, e.g. Compiz, Enlightenment, KWin, Marco, Metacity, Muffin, Mutter and Xfwm. For other "non-compositing" window managers, a standalone composite manager can be used, for example: Picom , Xcompmgr or Unagi .

Xorg's supported extensions can be listed with:

xdpyinfo -display :0 -queryExtensions | awk '/^number of extensions:/,/^default screen number/'

On the other hand, Wayland is intended as a simpler replacement for Xorg/X11, easier to develop and maintain, but as of 2020 desktop support for Wayland is not yet fully ready other than Gnome (e.g. KDE KWin and Wayland support); on the distribution side, Fedora does use Wayland by default. Note that Wayland and Xorg can work simultaneously; this can be the case depending on the used configuration. XWayland is a series of patches over the X.Org server codebase that implement an X server running upon the Wayland protocol. The patches are developed and maintained by the Wayland developers for compatibility with X11 applications during the transition to Wayland, and were mainlined in version 1.16 of the X.Org Server in 2014. When a user runs an X application from within Weston, it calls upon XWayland to service the request.

The whole scope

A display server or window server is a program (like Xorg or Wayland) whose primary task is to coordinate the input and output of its clients to and from the rest of the operating system, the hardware, and each other.

The display server communicates with its clients over the display server protocol, a communications protocol, which can be network-transparent or simply network-capable. For instance, X11 and Wayland are display server communications protocols. As shown in the diagram, a window manager is another important element of the desktop environment: system software that controls the placement and appearance of windows within a windowing system in a graphical user interface. Most window managers are designed to help provide a desktop environment. They work in conjunction with the underlying graphical system, which provides required functionality support for graphics hardware, pointing devices, and a keyboard, and are often written and created using a widget toolkit. KDE uses KWin as its window manager (it has limited support for Wayland as of 2020); similarly, Gnome 2 uses Metacity and Gnome 3 uses Mutter.

Another important aspect of a window manager is the compositor, or compositing window manager: a window manager that provides applications with an off-screen buffer for each window. The window manager composites the window buffers into an image representing the screen and writes the result into the display memory. Compositing window managers may perform additional processing on buffered windows, applying 2D and 3D animated effects such as blending, fading, scaling, rotation, duplication, bending and contortion, shuffling, blurring, redirecting applications, and translating windows into one of a number of displays and virtual desktops. Computer graphics technology allows visual effects to be rendered in real time, such as drop shadows, live previews, and complex animation. Since the screen is double-buffered, it does not flicker during updates. The most commonly used compositing window managers (on Linux, BSD, Hurd and OpenSolaris) include Compiz, KWin, Xfwm, Enlightenment and Mutter. Each one has its own implementation; for instance KDE's KWin compositor has many features/settings like animation speed, tearing prevention (vsync), window thumbnails and scaling method, and can use OpenGL v2/OpenGL v3 or XRender as a rendering backend along with Xorg (XRender/Render, not to be confused with XRandR/RandR).

OpenGL (Open Graphics Library) is a cross-language, cross-platform application programming interface (API) for rendering 2D and 3D vector graphics. The API is typically used to interact with a graphics processing unit (GPU) to achieve hardware-accelerated rendering. OpenGL is a rendering library that can be used with Xorg, Wayland or any application that implements it. The OpenGL installation can be checked with glxinfo | grep OpenGL .

The display resolution, or display mode, of a computer monitor or display device is the number of distinct pixels in each dimension that can be displayed. It is usually quoted as width × height, with the units in pixels: for example, 1024 × 768 means the width is 1024 pixels and the height is 768 pixels. xrandr can be used to add or render/simulate a new display resolution.

DPI stands for dots per inch and is a measure of spatial printing/display density, in particular the number of individual dots that can be placed in a line within the span of 1 inch (2.54 cm). Computer screens do not have dots but do have pixels; the closely related concept is pixels per inch, or PPI, and thus DPI is implemented via the PPI concept. The default 96 DPI measure means 96x96 vertically and horizontally.

Additionally, the "Is X DPI (dot per inch) setting just meant for text scaling?" QA is very informative.

Notes

Some KDE gui tools: systemsettings5 > display, kcmshell5 xserver and kinfocenter .

References

Links and sources: 1 , 2 , 3 , 4 , 5 , 6 , 7 , 8 , 9 , 10 , 11 and 12 .
{ "score": 7, "source": [ "https://unix.stackexchange.com/questions/596894", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/120919/" ] }
596,959
There are a few commands in a script I need to run; it's a custom linter for source code. Each run generates a report and fails with exit code 1 in case of rules violations. I need to run all commands to generate reports before failure, and fail if any command fails with a non-zero exit code. This script doesn't work because it exits on the first error:

lint ./module1/src/main/java && lint ./module2/src/main/java && lint module3/src/main/java

Now I'm using this script:

lint ./module1/src/main/java
code1="$?"
lint ./module2/src/main/java
code2="$?"
lint ./module3/src/main/java
code3="$?"

if [[ "$code1" != "0" || "$code2" != "0" || "$code3" != "0" ]]; then
  exit 1
fi

But it looks overcomplicated and it's not extendable (I need to add additional variables and checks for each next command). Is it possible to make this script more elegant?
If you want all your tests to complete, and then return a final overall status code, that's how you have to code it. Here's one way

#!/bin/bash
#
ss=0
lint ./module1/src/main/java || ((ss++))
lint ./module2/src/main/java || ((ss++))
lint ./module3/src/main/java || ((ss++))
exit $ss

As written it fails with an exit code corresponding to the number of failed tests. You can still test for true/false (zero/non-zero), but if you require the code to exit with precisely 1 in the event that any one or more tests have failed, change ((ss++)) to ss=1 throughout.
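If the list of modules keeps growing, the same pattern can be driven by a loop (a sketch; the module paths are taken from the question):

#!/bin/bash
ss=0
for m in ./module1 ./module2 ./module3; do
    lint "$m/src/main/java" || ((ss++))
done
exit $ss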
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/596959", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/252528/" ] }
597,083
I want to write a C program, and I need to parse stdin . If I type cat file.txt | grep -v match , how does stdout from cat resolve with -v match ? Are they concatenated? Are they two different strings? I ran cat /dev/pts/0 and file /dev/pts/0 , so I didn't find anything (seemingly) useful there.
The standard definition for the main function of a C program is

int main(int argc, char *argv[])

Here, argc and argv are the command line arguments, -v and match for grep in this case. Note that they're not a single string, but the shell has already split the arguments into distinct strings (NUL/ \0 terminated, as usual in C). argc contains the number of arguments, and argv the arguments themselves. Standard input on the other hand is just a FILE * ; you can use it directly with any of the stdio functions, fgets(buf, sizeof(buf), stdin) etc. I'm not sure where you got cat /dev/pts/0 . It would read from that particular pseudo-terminal, possibly conflicting with reads by your shell on that same terminal. (Try to open two terminals, xterm, SSH sessions, screen, whatever. Then run tty on the first one; it shows the name of the terminal there, e.g. /dev/pts/123 . Run cat /dev/pts/123 (with the given name) in the second terminal, then try to type something in the first.)
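The split is easy to see even without writing C; a small script standing in for grep prints one argument per line and then reads the pipe (the script name is made up for the example):

$ cat args.sh
#!/bin/sh
i=0
for arg; do
    i=$((i + 1))
    printf 'argv[%d] = %s\n' "$i" "$arg"
done
cat    # stdin is still the pipe
$ echo hello | sh args.sh -v match
argv[1] = -v
argv[2] = match
hello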
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/597083", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/407674/" ] }
597,118
I want to customize my fish shell using the Web UI mode, but when running fish_config colors , the following error is shown.

surface@Surface ~> fish_config starting-colors
Web config started at file:///tmp/web_configoafehdco.html
Hit ENTER to stop.
Start : This command cannot be run due to the error: The system cannot find the file specified.
At line:1 char:1
+ Start "file:///tmp/web_configoafehdco.html"
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    + CategoryInfo          : InvalidOperation: (:) [Start-Process], InvalidOperationException
    + FullyQualifiedErrorId : InvalidOperationException,Microsoft.PowerShell.Commands.StartProcessCommand

I am running Ubuntu in the Windows Subsystem for Linux
After going through some articles and not finding the correct answer, I found that when running 'help' it opens the browser and points to file://wsl%24/Ubuntu-20.04/usr/share/doc/fish/index.html#variables-for-changing-highlighting-colors, while when we're trying to run fish_config , it points to "file://wsl%24/Ubuntu/tmp/web_configpo_b9wan.html". That means we need to change wsl%24/Ubuntu to wsl%24/Ubuntu-20.04. To do so, first of all, open the webconfig directory:

cd /usr/share/fish/tools/web_config

Now, give write permission to the webconfig.py file:

sudo chmod 777 webconfig.py

Modify the following line in the webconfig.py file from "file:///" + f.name to "file://wsl%24/Ubuntu-20.04" + f.name . Change the file's permission back to its original state by running

chmod 644 webconfig.py

You're good to go.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/597118", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/421474/" ] }
597,259
Is there a standard dummy executable file that does nothing in Linux? I have a shell command that always opens $EDITOR before the build process to input arguments manually. In my case, my arguments are always already set (this is an automated script) so I never need it, but it still pops up and awaits user input. To solve this, I created an empty executable file that does nothing, so I can set EDITOR=dummy and the build script calls it, it exits and the build process can start. My question is, is there an existing official file in Linux that when executed does nothing, a sort of placeholder that I could use for this purpose?
There's the standard utilities true and false . The first does nothing but return an exit status of 0 for successful execution , the second does nothing but return a non-zero value indicating a non-successful result (*) . You probably want the first one. Though some systems that really want you to enter some text (commit messages, etc.) will check if the "edited" file was actually modified, and just running true wouldn't fly in that case. Instead, touch might work; it updates the timestamps of any files it gets as arguments . However, if the editor gets any other arguments than the filename touch would create those as files. Many editors support an argument like + NNN to tell the initial line to put the cursor in, and so the editor may be called as $EDITOR +123 filename.txt . (E.g. less does this, git doesn't seem to.) Note that you'll want to use true , not e.g. /bin/true . First, if there's a shell involved, specifying the command without a path will allow the shell to use a builtin implementation, and if a shell is not used, the binary file will be found in PATH anyway. Second, not all systems have /bin/true ; e.g. on macOS, it's /usr/bin/true . (Thanks @jpaugh.) (* or as the GNU man page puts it , false "[does] nothing, unsuccessfully". Thanks @8bittree.)
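For the case where the caller insists on extra arguments or a modified file, a tiny wrapper can serve as the "editor" (a sketch; the script name is made up, and it simply assumes the file to "edit" is the last argument, touching only that):

#!/bin/sh
# noop-editor: ignore options such as +123, update the mtime of the last argument
for last; do :; done
[ -n "$last" ] && touch -- "$last"

Then point EDITOR at it, e.g. EDITOR=/usr/local/bin/noop-editor.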
{ "score": 8, "source": [ "https://unix.stackexchange.com/questions/597259", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/421601/" ] }
597,300
I have a file where every line contains an expression of the following type: "Age=22 years and Height=6 feet". I want to extract the Age and Height numbers only. I have tried

grep -oP '(?<=Age=)[^years]+' $f | awk '{ printf "%d \n",$1; }'

and get the age correctly. How can I get both Age and Height? When I try a nested pattern match I get Height only. This is the pattern I've tried:

grep -oP '(?<=Age=)[^years]+.+(?<=Height=)[^feet]+' $f | awk '{ printf "%d \n",$1; }'
This is not doing what you think it does; it works only by accident:

[^years]+

It means: match any character except y , e , a , r and s at least once. Also, instead of a look-behind assertion, I would use keep-out . It has the benefit that it can be of variable length, so you can easily match both Age and Height:

(Age|Height)=\K

Then, instead of making a negative match, use a positive one, matching only numbers:

grep -Po '(Age|Height)=\K\d+' --

$ echo "Age=22 and Height=6" | grep -Po '(Age|Height)=\K\d+'
22
6
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/597300", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/421632/" ] }
597,314
I want to change the IO policy to noop for all the VMs on all kernel versions (even any future updated version).

cat /sys/block/sda/queue/scheduler

Output as follows:

[mq-deadline] kyber bfq none

grubby --update-kernel=ALL --args="elevator=noop"
reboot

Even after applying the above commands, the policy was not updated:

"[mq-deadline] kyber bfq none"

Additionally I tried the following: modified /etc/default/grub with elevator=noop and ran the command grub2-mkconfig ; still the IO policy is not updated on CentOS 8. In CentOS 8, how do I update the IO policy to noop?
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/597314", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/96015/" ] }
597,317
How do I write a shell script to rename a file in Linux? Ex: 234-2020-08-06-12-13-14-abc_up.csv is renamed to 234-abc_up-2020-08-06-12-13-14.csv
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/597317", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/415050/" ] }
597,368
I have a variable with nested json,

a={ "version": "3.0", "user": "unknown_unknown", "dateGenerated": "2020-07-08T11:53:23Z", "status": "OK", "data": [ { "parameter": "t_2m:C", "coordinates": [ { "lat": 39.23054, "lon": 9.11917, "dates": [ { "date": "2020-07-08T15:53:23Z", "value": 25.1 } ] } ] } ]}

I'm looking for a way to grep the "value" in the nested json (like the one highlighted) in variable a . I'm using grep and jq, but I can't show the value; it shows "dates" ( echo $result | grep -Po '"dates":.*?[^\\],.*?[^\\]"' ) but not just the value. Any help?
You want the "value" from (the first object in the "dates" array) from (the first object in the "coordinates" array) from (the first object in the "data" array) $ a='{"version":"3.0","user":"unknown_unknown","dateGenerated":"2020-07-08T11:53:23Z","status":"OK","data":[{"parameter":"t_2m:C","coordinates":[{"lat":39.23054,"lon":9.11917,"dates":[{"date":"2020-07-08T15:53:23Z","value":25.1}]}]}]}'$ echo "$a" | jq -r '.data[0].coordinates[0].dates[0].value'25.1
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/597368", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/421684/" ] }
597,413
My system is Debian Buster. I would like to track the list of installed packages with git. When I list installed packages with dpkg -l , where does the list come from? I found some package info in /var/lib/dpkg/status , but this file has more information than I am interested in. Is there some other place where the package list is stored? What is the best file to track, so that I can have an overview of installed packages, their versions, and uninstalled packages?

UPDATE: I have tried tracking /var/lib/dpkg/status with git, but the output is very unclear and confusing. There is simply too much information in status. I just need to track the list of installed packages and their versions, something like the output of dpkg -l. Is the list of packages, as shown by dpkg -l, stored in some file, or is it generated each time on the fly? Could I create a git repository in /var/lib/dpkg/ and create some filter in git, so that basically only the output of dpkg -l is being tracked? Or perhaps, each time I run git status, the list is created dynamically? Or any other solution; I am not sure what possibilities git offers.
The list of installed packages is in /var/lib/dpkg/status ; that is the canonical reference. Installed packages are signalled in that file by their “install ok installed” status. dpkg -l processes this file every time it’s run, and uses the information stored therein to produce its output. If you want a simpler set of data to track, simplifying comparisons, you’ll have to generate it whenever necessary. If you only want to track a list of installed packages, you can run dpkg --get-selections periodically and store its output in a file tracked with git ; since you also want versions, dpkg -l might be more suitable. As pointed out by Martin Konrad , if you want to be able to use the information generated here to restore the state of the system at a later date, you should also track the manually-installed markers, and I’d add the holds too:

apt-mark showmanual
apt-mark showhold

You could add all the above to a dpkg hook, to track all changes to your system; for example, using /etc/packages/ to hold the files (rather than /var/lib/dpkg , which is “owned” by dpkg and should be left as-is), create a file named /etc/dpkg/dpkg.cfg.d/package-history , containing

post-invoke="if [ -x /usr/local/bin/package-history ]; then /usr/local/bin/package-history; fi"

and a file named /usr/local/bin/package-history containing

#!/bin/sh
cd /etc/packages
dpkg --get-selections > selections
dpkg -l > list
apt-mark showhold > holds
apt-mark showmanual > manual

The latter needs to be executable:

sudo chmod 755 /usr/local/bin/package-history

The outputs of all the commands above are sorted, so there’s no need to post-process them. With those files, you’ll be able to restore the installed package states exactly, track version changes, and also see packages which have been removed but not purged. You can either add git commit (checking for changes first) to the package-history script, or use etckeeper to track changes to the files in /etc/packages , or even make /etc/packages a git repository itself. Using a dpkg hook ensures that the files will be updated with any package change, whether driven by apt or dpkg or any other tool piggy-backing on top of dpkg . If you commit in the package-history script itself, then the commit granularity will correspond to dpkg executions; if you rely on etckeeper , it will correspond to actions involving etckeeper . To handle the commit in the script, add

if [ "$(git status --porcelain | wc -l)" -gt 0 ]; then
    git add *
    git commit -m 'Update package information'
fi

to the end of the script above; you should then run it once manually, as root, to initialise the git history (after mkdir /etc/packages; git init /etc/packages ).
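On the restore side, the tracked files can be replayed on a fresh system along these lines (a sketch; it assumes GNU xargs for -a, and that the same repositories are available):

sudo dpkg --set-selections < selections
sudo apt-get dselect-upgrade
xargs -a manual sudo apt-mark manual
xargs -a holds sudo apt-mark hold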
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/597413", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/155832/" ] }
597,560
I have multiple files (100-1000) in the foo dir. I want to append each filename to its own content. I think this for loop should handle appending a random string to each file:

for f in *; do printf "%10s \n" $(shuf -i 50-100 -n 50 -r) >> $f; done

How can I prepend the filename to all these shuffled numbers, directly concatenated?

for f in *; do printf "%10s \n" $f $(shuf -i 50-100 -n 50 -r) >> $f; done

result in file 5:

5
67
89
69
...

expected result:

567
589
569
...
Ok, if I get it right, you want to prefix a fixed string to all numbers printed by shuf . If so, just add that string to the start of the printf format string:

$ printf "x%10s\n" $(shuf -i 50-100 -n 3 -r)
x        71
x        70
x        92

Change the %10s to %s to get them back to back without the whitespace. Similarly, you can use the loop variable instead:

$ for f in 1 2 3; do printf "$f%s\n" $(shuf -i 50-100 -n 2 -r); done
151
197
268
256
364
354

Add the >> "$f" to redirect to files. Note that since the fixed part is part of the format string here, any % signs and backslashes would be interpreted by printf .
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/597560", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/421849/" ] }
597,692
Running ./VBoxLinuxAdditions.run in CentOS 8 gives the following error:

Kernel headers not found for target kernel 4.18.0-193.6.3.el8_2.x86_64. Please install them and execute /sbin/rcvboxadd setup
ValueError: File context for /opt/VBoxGuestAdditions-6.0.22/other/mount.vboxsf already defined
modprobe vboxguest failed

How do I install the required kernel headers?
# dnf update -y
# dnf install kernel-devel make gcc -y

Followed by a reboot in case a new kernel gets installed, this should fix it for you.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/597692", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/278839/" ] }
597,698
Running the uname -r command on my embedded device reveals my kernel to be 4.14.73-ltsi (it is a custom Linux kernel built for an embedded device). Now I intend to install the real-time (RT) patch for this kernel version. However, on the official webpage I don't see a patch specific to kernel version 4.14.73. The nearest ones I see are patch-4.14.87-rt49.patch.xz and patches-4.14.63-rt43.tar.xz . Which of these two patches would be most compatible with my Linux kernel version (4.14.73-ltsi)?
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/597698", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/421959/" ] }
597,736
Is there any way to disable KWin compositing effects from the command line? AFAIK it's possible to disable it via System Settings -> Hardware -> Display and Monitor -> Compositor, but that requires a KWin restart. If I'm able to disable the compositor from the command line, then I can easily assign a shortcut to it. Any idea? BTW I use KDE Plasma 5.19.
To disable compositing:

qdbus org.kde.KWin /Compositor suspend

To enable compositing:

qdbus org.kde.KWin /Compositor resume
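To hang both off a single shortcut, a small toggle works (a sketch; it simply remembers the last action in a state file rather than querying KWin):

#!/bin/sh
state=${XDG_RUNTIME_DIR:-/tmp}/kwin-compositing-suspended
if [ -e "$state" ]; then
    qdbus org.kde.KWin /Compositor resume && rm -f -- "$state"
else
    qdbus org.kde.KWin /Compositor suspend && touch -- "$state"
fi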
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/597736", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/388589/" ] }
597,764
I've got a relatively small list of filenames generated from a pipeline based on find . The file names contain spaces and possibly punctuation but definitely no other non-printing characters or newlines. For example,

Netherlands/Purge (GDPR) 2020-01-09.txt
Netherlands/Purge (GDPR) 2020-01-27.txt
Switzerland/New mailing 2020-01-27.txt

I want to edit these files as a set ( vi file1 file2 file3 rather than vi file1; vi file2; vi file3 ), partly so that I can easily jump forwards and backwards between them. I've started with Using a generated list of filenames as argument list — with spaces , which has a standard find -print0 | xargs -0 mycommand solution. Unfortunately this does not work when mycommand is an editor because although xargs can assemble the set of files to edit, stdin is already taken up from the pipeline and I can't see a way to run an editor in-place. I can't use find -exec vi {} + because I'm using a pipeline to validate the set of filenames, and not just find itself. My other option is to copy and paste, assembling the list of file names, surrounding them with quotes, and then prefixing the result with vi . For these three files it's trivial, but in the general case it's not an easily-reusable solution:

vi 'Netherlands/Purge (GDPR) 2020-01-09.txt' 'Netherlands/Purge (GDPR) 2020-01-27.txt' 'Switzerland/New mailing 2020-01-27.txt'

Given a GNU/Linux platform with bash as my preferred shell (in case it matters), how can I edit a similarly generated list of files?
From the comments I gather your command is something like this:

find -type f -mtime +14 -mtime -22 -iname '*.xml' |
    while IFS= read -r x; do
        xmlstarlet sel -T -t -v '//magicElement' -n "$x" |
            grep -q magicValue && echo "$x"
    done

Instead of piping to a while loop you could use -exec sh -c '...' to filter the files:

find -type f -mtime +14 -mtime -22 -iname '*.xml' \
    -exec sh -c 'xmlstarlet sel -T -t -v "//magicElement" "$1" | grep -q magicValue' find-sh {} \; \
    -exec vi -- {} +

Try it: consider three files:

.
├── a:<magicElement>magicValue</magicElement>
├── b:<magicElement>magicValue</magicElement>
└── c:<magicElement>someOtherValue</magicElement>

$ find . -type f \
    -exec sh -c 'xmlstarlet sel -T -t -v "//magicElement" "$1" | grep -q magicValue' find-sh {} \; \
    -exec echo vi -- {} +

Output:

vi -- ./a ./b
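Since bash is your preferred shell, here is an alternative sketch that collects the surviving names into an array first and then hands them all to vi in one go (needs bash 4.4+ for mapfile -d ''; the xmlstarlet/grep filter is just the example from above - substitute your real validation pipeline):

# build a NUL-separated list of matching files, then read it into an array
mapfile -d '' files < <(
    find . -type f -iname '*.xml' -print0 |
        while IFS= read -r -d '' f; do
            xmlstarlet sel -T -t -v '//magicElement' "$f" |
                grep -q magicValue && printf '%s\0' "$f"
        done
)
# vi runs in the foreground shell here, so its stdin is still the terminal
[ "${#files[@]}" -gt 0 ] && vi -- "${files[@]}"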
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/597764", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/100397/" ] }
597,770
I got a command foo that outputs a list of files separated by a new line. How can I filter those files by their content using regex, and output the filtered files list?
If your system has GNU xargs, you could do something like:

foo | xargs -d '\n' grep -l regex
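If your xargs lacks -d but has the more widely implemented -0, translating the newlines first works too. A sketch, assuming (as the newline-separated format of foo's output already implies) that no filename contains a newline:

foo | tr '\n' '\000' | xargs -0 grep -l regex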
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/597770", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/220235/" ] }
597,823
I essentially want to put the name of the query in front of each line containing the SourceField. I have tried this concatenation scheme:

awk '/<\^Query/ && p{print p;p=""}{p=p $0}END{if(p) print p}'

But this only works until I have multiple source fields. When that happens it concatenates all the lines with SourceField:

Query: D Monthly Loan SourceField: LOAD-NO SourceTable: MASTER SourceField: LO

My data is:

Query: D Monthly Loan
    SourceField: LOAD-NO SourceTable: MASTER
    SourceField: LO SourceTable: MASTER
Query: D Monthly Loan
    SourceField: HI SourceTable: MASTER2
QUERY: M FORBEARANCE_1
    SourceField: LOAN-NO SourceTable: MASTER

I want the output to look like this:

Query: D Monthly Loan SourceField: LOAD-NO SourceTable: MASTER
Query: D Monthly Loan SourceField: LO SourceTable: MASTER
Query: D Monthly Loan SourceField: HI SourceTable: MASTER2
Query: M FORBEARANCE_1 SourceField: LOAN-NO SourceTable: MASTER
$ awk '/^ +/{print q, $0; next} {q=$0}' file
Query: D Monthly Loan     SourceField: LOAD-NO SourceTable: MASTER
Query: D Monthly Loan     SourceField: LO SourceTable: MASTER
Query: D Monthly Loan     SourceField: HI SourceTable: MASTER2
QUERY: M FORBEARANCE_1     SourceField: LOAN-NO SourceTable: MASTER

The idea is to remember each unindented Query line in q and, for every indented continuation line, print the remembered query followed by the line itself. Or if you prefer any of these formats (there are many other possibilities too!):

$ awk 'sub(/^ +/,""){print q, $0; next} {q=$0}' file
Query: D Monthly Loan SourceField: LOAD-NO SourceTable: MASTER
Query: D Monthly Loan SourceField: LO SourceTable: MASTER
Query: D Monthly Loan SourceField: HI SourceTable: MASTER2
QUERY: M FORBEARANCE_1 SourceField: LOAN-NO SourceTable: MASTER

$ awk '/^ +/{$1=$1; print q, $0; next} {q=$0}' file
Query: D Monthly Loan SourceField: LOAD-NO SourceTable: MASTER
Query: D Monthly Loan SourceField: LO SourceTable: MASTER
Query: D Monthly Loan SourceField: HI SourceTable: MASTER2
QUERY: M FORBEARANCE_1 SourceField: LOAN-NO SourceTable: MASTER

$ awk -v OFS='\t' '/^ +/{$1=$1; print q, $0; next} {q=$0}' file
Query: D Monthly Loan	SourceField:	LOAD-NO	SourceTable:	MASTER
Query: D Monthly Loan	SourceField:	LO	SourceTable:	MASTER
Query: D Monthly Loan	SourceField:	HI	SourceTable:	MASTER2
QUERY: M FORBEARANCE_1	SourceField:	LOAN-NO	SourceTable:	MASTER
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/597823", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/100557/" ] }
597,857
I was doing research on how to generate all non-repeating permutations of a set of numbers without using recursion in Bash and I found this answer that worked, but I want to understand why. Say you have three numbers: 1, 2, 3. The following command will generate all possible non-repeating permutations:

printf "%s\n" {1,2,3}{1,2,3}{1,2,3} | sort -u | sed '/\(.\).*\1/d'
123
132
213
231
312
321

I understand what the printf with %s does when the argument is the brace expansions of the set {1, 2, 3} three times (which would print every single possible outcome). I know that sort -u will output only unique lines. I know that sed /<pattern>/d is used to delete any lines that match <pattern>. Reading the pattern within the sed, I am somewhat confused. I know how to read regex but I don't see how this pattern works within the sed command.

\( = literal '('
.  = any character, once
\) = literal ')'
.* = any character, zero or more times
\1 = reference to first captured group

How does the sed command then remove non-unique values with this regex pattern? I don't understand how there's a reference to a captured group, when there's not really one? The parentheses are being used in the pattern to be matched literally? Everything about this execution makes sense to me until the sed command.
The piece you're missing is that sed uses POSIX basic regular expressions (BRE) by default, and in BRE the escaping convention is the reverse of the extended syntax you're reading it as: \( and \) are the grouping/capturing operators, while bare ( and ) match literal parentheses. So nothing in that pattern matches a literal parenthesis; it reads:

\(.\)  capture any single character as group 1
.*     anything (possibly nothing) in between
\1     the same character that group 1 captured

A line therefore matches exactly when some character occurs at least twice anywhere in it, and /.../d deletes every such line, leaving only the strings with no repeated digit. (GNU sed -E accepts the unescaped form (.).*\1 for the same pattern.)
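A quick demonstration of the deletion step in isolation:

$ printf '%s\n' 123 122 321 | sed '/\(.\).*\1/d'
123
321

122 is deleted because the 2 captured by \(.\) reappears later in the line, where \1 matches it.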
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/597857", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/394038/" ] }
597,962
As far as I know, the C.UTF-8 locale started as Debian's attempt to modernize the standard C locale and over time some other distros, such as Fedora, added support for it. But what about other Linux distros, Unices (e.g. BSD and macOS) and environments (e.g. Cygwin and MinGW)? Is it safe to rely on its presence on any modern Unix-like system?
It's "hit or miss" (it largely depends on the propensity of developers to copy features from other systems versus filling missing bits from a standard): As suggested by Tom Hale , Arch Linux does not use this. Nor does NetBSD 9 Nor does macOS (Catalina, Mojave, or Big Sur) But FreeBSD 12 does, OpenBSD 6.7 does OpenSUSE has "C.utf8" (close enough) Mageia 7 does Solaris 11.4 does ( since 11.4.42 ) A minimally sized C.UTF-8 locale was released with GNU C library 2.35 . Any distribution with that version of the library (or later) is likely to have C.UTF-8 . At some point that will be relevant for Arch Linux (as of September 8, 2021, it uses glibc 2.33 ). The other platforms are unaffected.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/597962", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/421165/" ] }
598,028
I am running Red Hat Enterprise Linux Server release 7.8 (Maipo), and when I try to run

yum install epel-release

I get:

No package epel-release available.
Error: Nothing to do

I need to install this for DKMS.
Run:

yum install https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm

You have to do this because the epel-release package is directly available in the CentOS base repository, but not in the RHEL repository. According to the documentation, it is also recommended to enable some optional repositories:

subscription-manager repos --enable "rhel-*-optional-rpms" --enable "rhel-*-extras-rpms" --enable "rhel-ha-for-rhel-*-server-rpms"
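Afterwards you can confirm that the repository is active and pull in DKMS, which on RHEL 7 ships in EPEL rather than the base channels:

rpm -q epel-release
yum repolist enabled | grep -i epel
yum install -y dkms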
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/598028", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/145040/" ] }
598,036
My POSIX is_integer() function has looked like this for a long time:

#!/bin/sh
is_integer ()
{
    [ "$1" -eq "$1" ] 2> /dev/null
}

However, today I found it broken. If there are spaces around the number, it surprisingly also evaluates to true, and I have no idea how to fix that.

Example of correct (expected) behavior: is_integer 123 evaluates to true.

Example of incorrect (unexpected) behavior: is_integer ' 123' also evaluates to true; however, it obviously contains a leading space, so the function is expected to evaluate to false in such cases.

POSIX-compliant suggestions only, please. Thank you.
#!/bin/sh
is_integer ()
{
    case "${1#[+-]}" in
        (*[!0123456789]*) return 1 ;;
        ('')              return 1 ;;
        (*)               return 0 ;;
    esac
}

Uses only POSIX builtins. It is not clear from the spec whether +1 is supposed to be an integer; if not, then remove the + from the case line.

It works as follows: ${1#[+-]} removes the optional leading sign. If you are left with something containing a non-digit then it is not an integer, likewise if you are left with nothing. If it is not not-an-integer then it is an integer.

Edit: changed ^ to ! to negate the character class - thanks @LinuxSecurityFreak
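A quick test harness for the function, a sketch covering the cases from the question plus a few extras:

for v in 123 ' 123' '123 ' +7 -0 '' 12a 1.5; do
    if is_integer "$v"; then
        printf '"%s" is an integer\n' "$v"
    else
        printf '"%s" is not an integer\n' "$v"
    fi
done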
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/598036", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/126755/" ] }
598,095
We are trying to install the following rpm (which is actually already installed):

rpm -qa | grep sshpass
sshpass-1.06-2.el7.x86_64

yum install sshpass-1.06-1.el7.x86_64.rpm
Loaded plugins: langpacks
Examining sshpass-1.06-1.el7.x86_64.rpm: sshpass-1.06-1.el7.x86_64
sshpass-1.06-1.el7.x86_64.rpm: does not update installed package.
Error: Nothing to do

echo $?
1

but it returned exit code 1. Why does yum not ignore the rpm that is already installed, instead of returning an error?

Another example:

rpm -qa | grep figlet
figlet-2.2.5-9.el7.x86_64

yum install figlet-2.2.5-9.el7.x86_64.rpm
Loaded plugins: langpacks, product-id, search-disabled-repos, subscription-manager configuration
Examining figlet-2.2.5-9.el7.x86_64.rpm: figlet-2.2.5-9.el7.x86_64
figlet-2.2.5-9.el7.x86_64.rpm: does not update installed package.
Error: Nothing to do

echo $?
1

Note - we also tried yum clean all and removed /var/cache/yum/*, but it did not help.
I am under the impression that you are complaining about yum's default behavior. yum serves as a package manager that installs, removes, or upgrades packages. If there's "Error: Nothing to do", it is de facto failing a task to install/remove/upgrade a package - hence return code == 1.

If you would like to check whether the package is installed, and install it only if it is not, try the following:

rpm -qa | grep wget || yum -y install wget

This should give you $? == 0 in the standard scenario.
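A slightly tighter sketch of the same idea: rpm -q queries one package name directly, which avoids false positives when grep matches a similarly named package:

pkg=sshpass
rpm -q "$pkg" >/dev/null 2>&1 || yum -y install "$pkg"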
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/598095", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/237298/" ] }