680,393
How many pages does a PDF have? Could you provide a way to get this information in a bash script?
Here's my pdfpages script. The machines I use tend to have at least one of QPDF (qpdf) or Poppler (pdfinfo), so it's good enough for me, but there are other tools that can do the job.

#! /bin/sh
if type qpdf >/dev/null 2>/dev/null; then
  pdfpages1 () {
    qpdf --show-npages "$1"
  }
elif type pdfinfo >/dev/null 2>/dev/null; then
  pdfpages1 () {
    pdfinfo -- "$1" | sed -n '/^Pages:/ s/.*[^0-9]//p'
  }
else
  echo 1>&2 "None of the supported tools is available: pdfinfo, qpdf"
fi
for x; do
  printf '%8d %s\n' $(pdfpages1 "$x") "$x"
done
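If the script above is saved as pdfpages and made executable, a usage sketch (the file names and page counts here are hypothetical):

./pdfpages report.pdf slides.pdf
      12 report.pdf
       3 slides.pdf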
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/680393", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/9689/" ] }
680,614
I have two directories dir1 and dir2. I want to copy all the files and folders in dir1 to dir2, except the files that have a .txt extension. How can I do this?
Using rsync with the --exclude option:

rsync -av --exclude '*.txt' dir1/ dir2/
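Two details worth knowing about this invocation: the trailing slash on dir1/ makes rsync copy the contents of dir1 into dir2 (without it, you would get dir2/dir1), and -a recurses while preserving permissions, timestamps, symlinks, and ownership where possible.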
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/680614", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/145309/" ] }
680,621
I have PC1 and PC2, both under Ubuntu, with active SSH servers, and behind the same router. I will use different bash prompts for the sake of clarity. I can ssh from PC1 to PC2 using the local IP.

[user@PC1]$ ssh [email protected]
/user@PC2/$ logout
Connection to PC2 closed.
[user@PC1]$ ssh [email protected]
/user@PC2/$ logout
Connection to PC2 closed.
[user@PC1]$ dig +short myip.opendns.com @resolver1.opendns.com
<External IP>
[user@PC1]$ ssh [email protected]
/user@PC2/$ dig +short myip.opendns.com @resolver1.opendns.com
<External IP>

How can I ssh using the external IP (which is the same for both)?

[user@PC1]$ ssh user@<External IP> ... etc?

Would the procedure be different from what I would do if PC1 and PC2 were behind different routers? Note that I meant to learn how to do this now that I have both PCs at hand, and prepare PC2 for future occasions when I am away.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/680621", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/137608/" ] }
680,635
I stumbled upon the bsdutils package in Debian. The description says: "This package contains the bare minimum of BSD utilities needed for a Debian system: logger, renice, script, scriptlive, scriptreplay and wall. The remaining standard BSD utilities are provided by bsdextrautils." Similarly, the description of bsdmainutils also mentions BSD: "This package contains lots of small programs many people expect to find when they use a BSD-style Unix system." I was surprised to see that these packages relate to BSD, in the context of a Linux system. Do these packages use some code from BSD? What is a "BSD-style Unix system"?
In the beginning, there was Unix, which was a product developed by Bell Labs (a subsidiary of AT&T). A lot of groups customized their copy, added their own programs, and shared their improvements with others (for pay or for free). One such group was the University of California, Berkeley (UCB). They shared the Berkeley Software Distribution (BSD) under a very liberal license (known today as the original BSD license). Originally, this was a set of additions to the basic Unix. Eventually, they rewrote the complete operating system, so that it could be used without getting a license from AT&T.

Apart from BSD, the main suppliers of Unix operating systems were computer vendors who sold the operating system with the computer. Some kept basing their operating system on the AT&T version. These systems are known as the System V family, because they were based on that version of AT&T Unix. Other vendors used the BSD version. Some made their own, with the goal of being broadly compatible with the two main players (System V and BSD) but each with their own specifics. A “System V operating system” is a system that is more compatible with AT&T Unix. A “BSD operating system” is a system that is more compatible with BSD.

GNU was another project to make an operating system that could play the same role as BSD: freely available, and with the same kinds of features as Unix. GNU was much more ambitious than BSD, but as a result they didn't manage to do everything they wanted, and in particular they were missing a critical bit: a kernel. In the 1990s, Linux became the de facto standard kernel for GNU, and an operating system based mostly on GNU core programs on a Linux kernel is known as “Linux”, or sometimes “GNU/Linux”. GNU/Linux has its own history that's independent from System V and BSD, so it doesn't have all the features that all actual System V systems share, or all the features that all actual BSD systems share.

Debian's bsdutils and bsdmainutils are collections of small programs that are typically present on BSD systems, but not part of the core that is present on all Unix systems. The bsdutils collection is from util-linux. They're programs with similar interfaces to the BSD utilities with the same name, but most if not all were written completely independently, and they're distributed under a GNU license. bsdmainutils is a collection of programs copied from a BSD collection, still distributed under a BSD license. They're now maintained by Debian, but they pick up some improvements made by BSD distributions.
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/680635", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/50687/" ] }
680,821
Can I extract from PATH only the directories where I (the current user) have permission to write? I imagine I'd need something like echo $PATH | grep ... but I can't figure out what.
Split $PATH up on colons, by changing IFS to split fields on colons during word expansion, and check whether you can write to each component with the -w test:

(IFS=:; for p in $PATH; do [ -w "$p" ] && printf '%s\n' "$p"; done)

This will ignore empty entries (which represent the current directory) and will give incorrect results for entries containing globbing characters (as pointed out by Uncle Billy). To handle both, use

sh -fc 'IFS=:; for p in $PATH""; do [ -w "${p:-.}" ] && printf "%s\n" "$p"; done'
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/680821", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/147671/" ] }
680,828
Is there a bash option or another way to make bash only expand curly-braced variables like ${var} and ignore regular ones like $var?

Update - here is why I want that: I have a relatively large program written in pure bash, and it has to stay in pure bash (it is an installer designed to run on most Linux distributions; we do not want to add additional dependencies). It has a very simple template engine written in bash. Here is the template parsing code (all of it):

function apply_template() {
    local src=$1
    local dst=$2
    local code="cat<<EOF
$(cat $src)
EOF"
    eval "$code" > $dst
}

It is very simple yet very effective. We have 30 templates that are evaluated using this code. Here is an example:

[
  {
    "host" : "${primary_node_ip}",
    "access" : {
      "type" : "ssh",
      "user" : "${user}",
      "key" : "id_rsa"
    }
  }
]

Bash replaces variables with values, and everything is fine. The problem occurs when we have a template that has "$" signs that should not be treated as variables, for example an nginx config. In fact, it only happens in one template, which is an nginx conf. The perfect solution would be to enable this (non-existent, as I now know) mode in my apply_template function.
No, there is not. Variable expansion is described in the manual at 3.5.3 Shell Parameter Expansion:

The ‘$’ character introduces parameter expansion, command substitution, or arithmetic expansion. The parameter name or symbol to be expanded may be enclosed in braces, which are optional [...]

If you really, really need this (and this smells like an XY problem, so please explain why), you may be able to pre-process the bash code to escape every $ that precedes an identifier with no punctuation in between.
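A minimal sketch of that pre-processing idea, assuming GNU sed and templates that never need a literal ${; the helper name is hypothetical, and the approach is naive (it mishandles runs like $$, for instance):

escape_bare_dollars() {
    # Escape any $ not followed by {, and any $ at end of line,
    # so that only ${var} references survive the later eval.
    sed -e 's/\$\([^{]\)/\\$\1/g' -e 's/\$$/\\$/' "$1"
}
# usage sketch: escape_bare_dollars nginx.conf.template > nginx.conf.escaped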
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/680828", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/150413/" ] }
680,981
I have a main directory with 100 .mp4 files. I also have a set of sub-directories named dir_1, dir_2, dir_3, etc., up to 100. What I want to do is to loop through the main directory and distribute the .mp4 files to all the subfolders, each getting only one. There should be two loops or one loop with two variables, whichever is possible. This is approximately what I'm trying to achieve in a single line of code:

for file in *.mp4 & x in {1..100}; do mv $file dir_$x; done
set -- *.mp4
for dir in dir_*/; do
    mv -- "$1" "$dir"
    shift
done

This first assigns the names of all the MP4 files to the list of positional parameters using set. It then iterates over the directories matching the pattern dir_*/. For each directory, it moves the first MP4 file from the list of positional parameters into that directory, and then shifts that MP4 file off the list. There is no check to verify that there are as many directories as MP4 files in the above code. Would you want that, you could do

set -- *.mp4
for dir in dir_*/; do
    if [ "$#" -eq 0 ]; then
        echo 'Ran out of MP4 files' >&2
        exit 1
    fi
    mv -- "$1" "$dir"
    shift
done
if [ "$#" -ne 0 ]; then
    echo 'Too many MP4 files' >&2
    exit 1
fi

This code would work in any sh-like POSIX shell.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/680981", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/300737/" ] }
681,033
I needed to create an empty file, so I used this:

echo "" > file

Then when I perform a check in my program (whether the file is empty) like this,

if(file.content == '') do something

the if block never executes. When I open the file using nano file, there's nothing in the file. I even tried doing print(file.content); the output is still empty. What's causing this error? And what can I do to fix this?
echo "" outputs a newline, so your file isn’t empty, it contains a newline. ls -l will show you that its size is one byte. To create an empty file, use a command with no output: : > file
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/681033", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/471848/" ] }
681,059
I am working with a CSV data set which looks like the below:

year,manufacturer,brand,series,variation,card_number,card_title,sport,team
2015,Leaf,Trinity,Printing Plates,Magenta,TS-JH2,John Amoth,Soccer,
2015,Leaf,Trinity,Printing Plates,Magenta,TS-JH2,John Amoth,Soccer,
2015,Leaf,Trinity,Printing Plates,Magenta,TS-JH2,John Amoth,,
2015,Leaf,Metal Draft,Touchdown Kings,Die-Cut Autographs Blue Prismatic,TDK-DF1,Darren Smith,Football,
2015,Leaf,Metal Draft,Touchdown Kings,Die-Cut Autographs Blue Prismatic,TDK- DF1,Darren Smith,Football,
2015,Leaf,Trinity,Patch Autograph,Bronze,PA-DJ2,Duke Johnson,Football,
2015,Leaf,Army All-American Bowl,5-Star Future Autographs,,FSF-RG1,Rasheem Green,Soccer,

It contains a number of duplicates that I need to remove (keeping one instance of the record). Based on Remove duplicate entries from a CSV file I have used

sort -u file.csv --o deduped-file.csv

which works well for examples like

2015,Leaf,Trinity,Printing Plates,Magenta,TS-JH2,John Amoth,Soccer,
2015,Leaf,Trinity,Printing Plates,Magenta,TS-JH2,John Amoth,Soccer,

but does not capture examples like

2015,Leaf,Trinity,Printing Plates,Magenta,TS-JH2,John Amoth,Soccer,
2015,Leaf,Trinity,Printing Plates,Magenta,TS-JH2,John Amoth,,

where the data is incomplete but represents the same thing. Is it possible to remove duplicates based on specified fields, e.g. year, manufacturer, brand, series, variation?
I would create a "key" of the first 5 fields, and then only print a line if that key is being seen for the first time: awk -F, ' {key = $1 FS $2 FS $3 FS $4 FS $5} !seen[key]++ ' file year,manufacturer,brand,series,variation,card_number,card_title,sport,team2015,Leaf,Trinity,Printing Plates,Magenta,TS-JH2,John Amoth,Soccer,2015,Leaf,Metal Draft,Touchdown Kings,Die-Cut Autographs Blue Prismatic,TDK-DF1,Darren Smith,Football,2015,Leaf,Trinity,Patch Autograph,Bronze,PA-DJ2,Duke Johnson,Football,2015,Leaf,Army All-American Bowl,5-Star Future Autographs,,FSF-RG1,Rasheem Green,Soccer,
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/681059", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/143150/" ] }
681,422
I've come upon this sed command and I cannot figure out what it is doing. I understand that it is changing a file in place using -i, that it is passing a script with -e, and that the script is $a\, but what is this script doing?

sed -i -e '$a\' filename
As others have said, e.g. in How to add a newline to the end of a file?, with GNU sed (and some other implementations), $a\ adds a newline to the end of a file if it doesn’t have one. Why it does this isn’t so clear, and the documentation doesn’t explain it. However, examining the source code does... Let’s start with the documentation. $a\ is a variant of a, exploiting a special case of a GNU extension:

As a GNU extension, the a command and text can be separated into two -e parameters, enabling easier scripting:

$ seq 3 | sed -e '2a\' -e hello
1
2
hello
3
$ sed -e '2a\' -e "$VAR"

The way a is implemented in sed is with a queue of text to append, tracked in an append_queue. When the time comes to process this queue, in a function called dump_append_queue, the first step is

output_missing_newline (&output_file);

which adds a missing newline if necessary — to ensure that the appended text will be added to separate lines, not to the end of the current line. Then the contents of the queue are appended. With sed -i '$a\', the missing newline is added if necessary, and then the queue is processed — but the queue is empty, so no additional change is made.
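A quick way to see the effect for yourself (assumes GNU sed; od -c makes the added trailing newline visible):

$ printf 'no trailing newline' | sed '$a\' | od -c
0000000   n   o       t   r   a   i   l   i   n   g       n   e   w   l
0000020   i   n   e  \n
0000024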
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/681422", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/422946/" ] }
681,456
A badly-written script created a directory named '--' (including the single quotes) in my home directory. When I cd to that directory, I am brought back to my home directory. I'd like to remove that item, but cannot figure out how to do it. Escaping some or all of the characters in the directory name returns No such file or directory.

Linux version:

Linux version 5.11.0-1022-aws (buildd@lgw01-amd64-036) (gcc (Ubuntu 9.3.0-17ubuntu1~20.04) 9.3.0, GNU ld (GNU Binutils for Ubuntu) 2.34) #23~20.04.1-Ubuntu SMP Mon Nov 15 14:03:19 UTC 2021
In your case, since you actually have the quotes as part of the name, you can just do:

rm -r \'--\'

Or

rmdir \'--\'

A more common situation is that the quotes are not part of the name, so you also need to deal with the fact that the name looks like an option (starts with a -). In such cases, the classic approaches are:

Use -- to signify the end of options, so that anything after the -- will not be parsed as an option even if it starts with -:

rmdir -- "'--'"

Use a full path or just ./ (but you also need to quote the name to protect the ' from the shell):

rmdir ./"'--'"

Use GNU find:

find . -name "'--'" -delete

Use something else. Like Perl:

perl -e "rmdir(\"'--'\")"

Note that all of the above assume the directory is empty. If it isn't, just use one of these instead:

rm -r ./"'--'"

or

rm -r -- "'--'"
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/681456", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/505238/" ] }
681,521
I want to sell my SSD to my friend. It's 256GB, and I have personal stuff on it, so I want to make sure that it will become extremely hard, or at least hard, to recover the data: erasing the whole SSD, as if it were new. The other questions similar to this one over-mention that erasing one partition is useless, and that the commands cat, dd, and shred are useless. I read that erasing the SSD is only useful when erasing the whole disk and not only one partition. So I'm asking to make sure this information is correct and to learn how to do it. I thought it would be better to create a new thread here for me and for those people who aren't concerned about losing data, because they only want to get rid of their SSDs in return for money instead of throwing them in the garbage.
Useless depends on context. shred can actually be rather useless - when trying to shred a single file, while other copies of the file still exist [every time you click Save, it's another copy] - but there's also the hand sanitizer definition of useless: it kills 99.9%, so in practice it's not useless at all, but people worry about the remaining 0.1% anyway.

For many SSDs, a simple blkdiscard will cause that data to be gone and never to be seen again.

blkdiscard -v /dev/deleteyourssd
# verification:
echo 3 > /proc/sys/vm/drop_caches
cmp /dev/zero /dev/deleteyourssd
# expected result: EOF on /dev/deleteyourssd

If that's not good enough for you, you can use dd or shred to do a random data wipe:

dd bs=1M status=progress if=/dev/urandom of=/dev/deleteyourssd
# or
shred -v -n 1 /dev/deleteyourssd

For verification, the random data wipe needs to be done by encrypting zeroes:

cryptsetup open --type plain --cipher aes-xts-plain64 /dev/deleteyourssd cryptyourssd
# Three unicorns went into a bar and got stuck in the door frame
badblocks -b 4096 -t 0 -s -w -v /dev/mapper/cryptyourssd

You can also run another verification pass after power cycling (for the encryption method it works only if you re-use the same passphrase):

cmp /dev/zero /dev/mapper/cryptyourssd
# expected result: EOF on /dev/mapper/cryptyourssd

Getting data back — after discarding/overwriting and verifying that everything's gone — would require a bit of a miracle and involves corner cases that most users don't really need to concern themselves with. But if that's still not good enough for you, you can use Secure Erase if the SSD manufacturer provides it for your SSD model; this is described in detail in the ArchLinux wiki: https://wiki.archlinux.org/title/Solid_state_drive/Memory_cell_clearing

Just don't mess it up by setting a complex ATA password and then locking yourself out, effectively bricking the device. Keep it simple.

Usually with SSDs, it's already quite impossible to restore any data simply after reinstalling Linux, since most flavors of mkfs also discard the entire space first thing:

mke2fs 1.46.4 (18-Aug-2021)
Discarding device blocks: done
Creating filesystem [...]

In this fashion it's indeed quite useless to erase SSDs yourself, since everything already does that for you without asking, anyway. And even if that didn't happen, almost every distro sets up fstrim to wipe all free space regularly. Data recovery is a lot less successful on SSDs than on HDDs, where you actually have to go out of your way to overwrite free space. SSDs are so good at discarding all your data in an eyeblink, you should really worry more about losing the data you still need (make backups and backups of your backups) than worry about being unable to erase it.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/681521", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/495380/" ] }
681,523
I have a list file and a source file present in one directory (/int/source/HR100). The contents of the source directory look like this:

Customer_Account_20211202.csv
Customer_Account.lst

The list file (Customer_Account.lst) contains the name of the source file, i.e., Customer_Account_20211202.csv. Now I want to zip the source file and move it to a destination directory (/int/source/HR100/Archive). I am able to achieve the movement using a one-liner Unix command as shown below, but I can't figure out how to zip and move the file. My preference is gzip (.gz) format. Code I am using:

xargs -a Customer_Account.lst mv -t /int/source/HR100/Archive

The above one-liner moves the file without compressing. I want a one-liner that will read the file name from the list, compress the file, and then move it.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/681523", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/505309/" ] }
681,547
I have the following files:

/folder/abc1.txt.gz
/folder/abc2.txt.gz
/folder/abc3.txt.gz

I would like to make a txt file with the following:

abc1 /folder/abc1.txt.gz
abc2 /folder/abc2.txt.gz
abc3 /folder/abc3.txt.gz

I have used the following command:

find /folder -name 'abc*.txt.gz' -type f -printf '%f %p\n' > out.txt

This will output:

abc1.txt.gz /folder/abc1.txt.gz
abc2.txt.gz /folder/abc2.txt.gz
abc3.txt.gz /folder/abc3.txt.gz

How can I have only the first part of the filename (without .txt.gz) followed by the path?
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/681547", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/505343/" ] }
682,679
After an upgrade on my server system, I saw that the system added a user with the name "Debian" to a lot of groups. I checked that it has no password set in /etc/shadow, so I think it's benign. But just for completeness' sake: Where do I find information about which system users should exist on a certain distro (let's begin with Debian, Fedora, Ubuntu), and what is more likely to be some unwanted guest?
"Where do I find information about which system users should exist on a certain distro (let's begin with Debian, Fedora, Ubuntu), and what is more likely to be some unwanted guest?"

There's no easy answer to that, especially a cross-distribution answer. Compare with a minimal installation of the same version of the distribution with the same packages. Review the differences. Note that if you upgraded from an earlier version, there may be extra system users and groups that are no longer used, but still present because the upgrader can't be sure that they're no longer used.

"I saw that the system added a user with the name 'Debian' to a lot of groups."

A legitimate user who's in a lot of groups would typically be a human account with privileges. This could be the initial user created during the installation or a user added later. Debian does not create a user called Debian, and I imagine other distributions wouldn't either. Debian does create users and groups called Debian-something to run system services, but these would not be in “a lot of groups” (I'm not sure if there are any that are in anything but their default group).

"I checked that it has no password set in /etc/shadow, so I think it's benign."

Having no password in /etc/shadow doesn't make an account unusable. Most commonly, the account could have an SSH public key. Check .ssh/authorized_keys and .ssh/authorized_keys2 in the user's home directory, as well as any other AuthorizedKeys… directive in /etc/sshd_config (or /etc/ssh/sshd_config, or wherever your distribution puts it).

"adm: group"

Depending on the distribution and on local sysadmin preferences, this could be a group that's given root access via sudo. Check /etc/sudoers and /etc/sudoers.d/*.

If you're looking for a badly hidden backdoor (having something suspicious in /etc/group definitely counts as badly hidden), you need to check other things, like an alternative service listening for network logins, a setuid program somewhere, etc. Even if you don't find anything, keep in mind that the badly hidden part could be planted there by a competent attacker to give you a false sense of security when you find and fix it. If you're unsure whether your system has been breached, you need to nuke it from orbit. But before you do that, check with your fellow admins to see if this is just a badly named manually created account.
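As a hedged starting point for that comparison step, this lists the system accounts (conventionally UID < 1000 on Debian, Fedora and Ubuntu) together with their group memberships:

awk -F: '$3 < 1000 {print $1}' /etc/passwd |
while read -r u; do id "$u"; done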
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/682679", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/204598/" ] }
682,683
I can't choose the compression program while using tar v1.26. While this works:

tar -c -I 'xz' -f foo.tar.xz *

This won't work:

tar -c -I 'xz -T0' -f foo.tar.xz *
tar (child): xz -T0: Cannot exec: No such file or directory
tar (child): Error is not recoverable: exiting now
tar: Child returned status 2
tar: Error is not recoverable: exiting now

Do you have any ideas?
Your version of tar doesn’t support specifying options with -I; the -I argument must be the compressor’s executable name only. This was changed in version 1.27. In your case, you can run xz separately, as explained by Romeo Ninov, or you can specify the options using XZ_OPT:

XZ_OPT=-T0 tar -c -I xz -f foo.tar.xz *
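A sketch of the "run xz separately" route for tar versions older than 1.27 (assumes GNU tar; the archive is written to stdout and compressed with all available cores):

tar -cf - * | xz -T0 > foo.tar.xz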
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/682683", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/52864/" ] }
682,735
One of our devices froze today with the following kernel messages:

[79648.067306] BUG: unable to handle page fault for address: 0000000004000034
[79648.067315] #PF: supervisor read access in kernel mode
[79648.067318] #PF: error_code(0x0000) - not-present page

From the call trace (see below) it appears that this error was caused by the graphics driver (i915). Presumably a kernel update would fix the problem; however, I'm interested in the background of this problem, so I have 3 questions:

1. What do these 3 lines mean exactly, or where can I find a description of these errors?
2. If I enable the hardware watchdog, would it reboot the system when this error occurs?
3. Can this error occur due to faulty hardware (memory)?

System: 5.4.0-91-generic, Ubuntu 20.04.1 LTS

Full dump of the kernel ring buffer (dmesg):

[79648.067306] BUG: unable to handle page fault for address: 0000000004000034
[79648.067315] #PF: supervisor read access in kernel mode
[79648.067318] #PF: error_code(0x0000) - not-present page
[79648.067322] PGD 0 P4D 0
[79648.067328] Oops: 0000 [#1] SMP PTI
[79648.067335] CPU: 3 PID: 668 Comm: Xorg Not tainted 5.4.0-91-generic #102-Ubuntu
[79648.067338] Hardware name: Shuttle Inc. DH310S/DH310S, BIOS 1.06 03/23/2020
[79648.067349] RIP: 0010:find_get_entry+0x7a/0x170
[79648.067355] Code: b8 48 c7 45 d0 03 00 00 00 e8 d2 ff 85 00 49 89 c4 48 3d 02 04 00 00 74 e4 48 3d 06 04 00 00 74 dc 48 85 c0 74 3d a8 01 75 39 <8b> 40 34 85 c0 74 cc 8d 50 01 f0 41 0f b1 54 24 34 75 f0 48 8b 45
[79648.067359] RSP: 0018:ffffb80a8093f728 EFLAGS: 00010246
[79648.067364] RAX: 0000000004000000 RBX: 00000000000004a6 RCX: 0000000000000000
[79648.067367] RDX: 0000000000000026 RSI: ffff9a369e5ff6c0 RDI: ffffb80a8093f728
[79648.067370] RBP: ffffb80a8093f770 R08: 00000000001120d2 R09: 0000000000000000
[79648.067373] R10: ffff9a3714c8eaa0 R11: 0000000000003c64 R12: 0000000004000000
[79648.067376] R13: 00000000000004a6 R14: 0000000000000001 R15: ffff9a371bf261c0
[79648.067381] FS: 00007f5b0d819a40(0000) GS:ffff9a372ed80000(0000) knlGS:0000000000000000
[79648.067384] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[79648.067387] CR2: 0000000004000034 CR3: 000000025bf12003 CR4: 00000000003606e0
[79648.067390] Call Trace:
[79648.067401] find_lock_entry+0x1f/0xe0
[79648.067408] shmem_getpage_gfp+0xef/0x940
[79648.067417] ? __kmalloc+0x194/0x290
[79648.067424] shmem_read_mapping_page_gfp+0x44/0x80
[79648.067520] shmem_get_pages+0x250/0x650 [i915]
[79648.067530] ? __update_load_avg_se+0x23b/0x320
[79648.067538] ? update_load_avg+0x7c/0x670
[79648.067619] ____i915_gem_object_get_pages+0x22/0x40 [i915]
[79648.067692] __i915_gem_object_get_pages+0x5b/0x70 [i915]
[79648.067774] __i915_vma_do_pin+0x3ee/0x470 [i915]
[79648.067845] eb_lookup_vmas+0x68a/0xb70 [i915]
[79648.067930] ? eb_pin_engine+0x255/0x410 [i915]
[79648.067990] i915_gem_do_execbuffer+0x38f/0xc20 [i915]
[79648.067997] ? security_file_alloc+0x29/0x90
[79648.068004] ? _cond_resched+0x19/0x30
[79648.068010] ? apparmor_file_alloc_security+0x3e/0x160
[79648.068016] ? __radix_tree_replace+0x6d/0x120
[79648.068020] ? radix_tree_iter_tag_clear+0x12/0x20
[79648.068027] ? kmem_cache_alloc_trace+0x177/0x240
[79648.068035] ? __pm_runtime_resume+0x60/0x80
[79648.068040] ? recalibrate_cpu_khz+0x10/0x10
[79648.068044] ? ktime_get_mono_fast_ns+0x4e/0xa0
[79648.068048] ? __kmalloc_node+0x213/0x330
[79648.068107] i915_gem_execbuffer2_ioctl+0x1eb/0x3d0 [i915]
[79648.068112] ? radix_tree_lookup+0xd/0x10
[79648.068167] ? i915_gem_execbuffer_ioctl+0x2d0/0x2d0 [i915]
[79648.068196] drm_ioctl_kernel+0xae/0xf0 [drm]
[79648.068218] drm_ioctl+0x24a/0x3f0 [drm]
[79648.068278] ? i915_gem_execbuffer_ioctl+0x2d0/0x2d0 [i915]
[79648.068288] do_vfs_ioctl+0x407/0x670
[79648.068293] ? fput+0x13/0x20
[79648.068299] ? __sys_recvmsg+0x88/0xa0
[79648.068305] ksys_ioctl+0x67/0x90
[79648.068311] __x64_sys_ioctl+0x1a/0x20
[79648.068317] do_syscall_64+0x57/0x190
[79648.068323] entry_SYSCALL_64_after_hwframe+0x44/0xa9
[79648.068327] RIP: 0033:0x7f5b0db7937b
[79648.068332] Code: 0f 1e fa 48 8b 05 15 3b 0d 00 64 c7 00 26 00 00 00 48 c7 c0 ff ff ff ff c3 66 0f 1f 44 00 00 f3 0f 1e fa b8 10 00 00 00 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 8b 0d e5 3a 0d 00 f7 d8 64 89 01 48
[79648.068335] RSP: 002b:00007fff24ca5d88 EFLAGS: 00000246 ORIG_RAX: 0000000000000010
[79648.068339] RAX: ffffffffffffffda RBX: 000055eaa18c2290 RCX: 00007f5b0db7937b
[79648.068342] RDX: 00007fff24ca5db0 RSI: 0000000040406469 RDI: 000000000000000c
[79648.068345] RBP: 00007f5b0ba31000 R08: 0000000000000002 R09: 0000000000000001
[79648.068347] R10: 00007f5b0d4156a0 R11: 0000000000000246 R12: 00007fff24ca5db0
[79648.068350] R13: 000000000000000c R14: 000000000000001a R15: 0000000000000068
[79648.068354] Modules linked in: wdat_wdt nls_iso8859_1 dm_multipath scsi_dh_rdac scsi_dh_emc scsi_dh_alua snd_hda_codec_hdmi snd_hda_codec_realtek snd_hda_codec_generic ledtrig_audio snd_hda_intel snd_intel_dspcfg snd_hda_codec snd_hda_core snd_hwdep snd_pcm snd_seq_midi intel_rapl_msr snd_seq_midi_event intel_rapl_common snd_rawmidi x86_pkg_temp_thermal intel_powerclamp coretemp kvm_intel snd_seq kvm rtsx_pci_ms rapl snd_seq_device intel_cstate memstick snd_timer mei_me mei snd soundcore mac_hid acpi_pad sch_fq_codel ip_tables x_tables autofs4 btrfs zstd_compress raid10 raid456 async_raid6_recov async_memcpy async_pq async_xor async_tx xor raid6_pq libcrc32c raid1 raid0 multipath linear i915 crct10dif_pclmul crc32_pclmul ghash_clmulni_intel aesni_intel crypto_simd cryptd i2c_algo_bit rtsx_pci_sdmmc glue_helper drm_kms_helper syscopyarea sysfillrect sysimgblt i2c_i801 fb_sys_fops r8169 rtsx_pci drm realtek ahci libahci video
[79648.068413] CR2: 0000000004000034
[79648.068418] ---[ end trace 447ad409d057183e ]---
[79648.068425] RIP: 0010:find_get_entry+0x7a/0x170
[79648.068429] Code: b8 48 c7 45 d0 03 00 00 00 e8 d2 ff 85 00 49 89 c4 48 3d 02 04 00 00 74 e4 48 3d 06 04 00 00 74 dc 48 85 c0 74 3d a8 01 75 39 <8b> 40 34 85 c0 74 cc 8d 50 01 f0 41 0f b1 54 24 34 75 f0 48 8b 45
[79648.068432] RSP: 0018:ffffb80a8093f728 EFLAGS: 00010246
[79648.068435] RAX: 0000000004000000 RBX: 00000000000004a6 RCX: 0000000000000000
[79648.068438] RDX: 0000000000000026 RSI: ffff9a369e5ff6c0 RDI: ffffb80a8093f728
[79648.068441] RBP: ffffb80a8093f770 R08: 00000000001120d2 R09: 0000000000000000
[79648.068443] R10: ffff9a3714c8eaa0 R11: 0000000000003c64 R12: 0000000004000000
[79648.068446] R13: 00000000000004a6 R14: 0000000000000001 R15: ffff9a371bf261c0
[79648.068449] FS: 00007f5b0d819a40(0000) GS:ffff9a372ed80000(0000) knlGS:0000000000000000
[79648.068452] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[79648.068455] CR2: 0000000004000034 CR3: 000000025bf12003 CR4: 00000000003606e0
[79648.067306] BUG: unable to handle page fault for address: 0000000004000034
[79648.067315] #PF: supervisor read access in kernel mode
[79648.067318] #PF: error_code(0x0000) - not-present page

These errors indicate that kernel code tried to access an invalid pointer. The kernel code tried to access the virtual memory address 0x0000000004000034, but found that it doesn't correspond to any real memory page (the page could not be faulted in). The second and third lines give context: 1) the code was running in kernel mode (supervisor mode); 2) the access was a read; and 3) the problem was that the page was missing, rather than incompatible page protections (such as writing to a read-only page). This is likely a bug in kernel/driver code.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/682735", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/505384/" ] }
682,793
I am using XManager Xshell as an SSH client, connected to a remote server, and executed these commands:

nohup sleep 60 &
ps -ef | grep sleep
exit

Then I logged in again:

ps -ef | grep sleep

That process is gone! What may cause this? The SSH daemon is OpenSSH 8; the server is Red Hat 7.
With systemd-logind, there is the (default) setting

KillUserProcesses=yes

in your logind.conf. It will terminate all processes started by the user as part of their login session after the user logs out. You can set it to no, or add your user to the following setting:

KillExcludeUsers=yourusername
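For instance, a sketch assuming the stock configuration file location; the change takes effect after restarting the service (or rebooting):

# /etc/systemd/logind.conf
[Login]
KillUserProcesses=no
# or, to exempt a single account instead:
#KillExcludeUsers=yourusername

# then:
systemctl restart systemd-logind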
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/682793", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/209127/" ] }
682,803
I'm going to make a video out of image files, but the filenames contain numbers in scientific notation, so ordering by name will not be correct. The filenames are in this format:

ABC_1.000000E-01.png ~ ABC_1.100000E+01.png,
DEF_1.000000E-01.png ~ DEF_1.100000E+01.png,
GHI_1.000000E-01.png ~ GHI_1.100000E+01.png,
...

If I change the number notation used for the numbers, the order is not correct again, so I want to change it as below:

ABC_001.png ~ ABC_110.png,
DEF_001.png ~ DEF_110.png,
GHI_001.png ~ GHI_110.png,
...

How may I do this on my Linux system?
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/682803", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/506580/" ] }
683,063
rmdir deletes only an empty directory. To delete recursively, rm -rf is used. Why doesn't rmdir have a recursive option? Logically, when I am deleting a directory, I want to use rmdir. Given that rm is used for deleting a directory in all but the simplest case, why does rmdir even exist? The functionality is subsumed in rm. Is this just a historical accident?
Unlinking directories was originally a privileged operation:

It is also illegal to unlink a directory (except for the super-user).

So rmdir was implemented as a small binary which only removed directories, which at the time involved removing .. and . inside the directory, and then the directory itself. rmdir was designed to be setuid root; it performs separate permission tests using access to determine whether the real user is allowed to remove a directory. Like any setuid root binary, it's better to keep it simple and tightly focused. rm -r actually used this separate binary to delete directories as necessary. It seems the lasting difference between rm -r and rmdir is the result of this initial difference. Presumably since rm acquired the ability to delete recursively early on, and rmdir was supposed to have a very small remit, it was never deemed useful to give rmdir the ability to delete recursively itself.
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/683063", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/146345/" ] }
683,192
I have the following file:

------
Introduction
----------
Optio eum enim ut. Et quia molestias eos. Doloribus laborum quia quae. Magnam cupiditate quis consectetur.
-----
Chapter1: Foo
-----
Odit beatae eius voluptas temporibus sint quia. Eos et tempora similique laboriosam optio consequatur quibusdam. Fugit suscipit cupiditate ea perspiciatis rem labore cum eos.
-----
Chapter bar
-----
Et consequatur quia quia et architecto et sunt. Perferendis qui deserunt qui est illo est sapiente ipsam. Fugiat vel amet magni in quam. Eligendi totam cum sapiente harum blanditiis minima

With the following constraints: The header symbol - appears 5 times or more. There could be an arbitrary (but finite) number of blank lines between the - lines and the header. The expected output is:

Introduction
Chapter1: Foo
Chapter bar

I know this could be accomplished using awk, but please don't suggest that. I would like to see a pure GNU sed solution. This is what I have tried so far:

sed -n ':a; /-\+/{n; /^$/!{p; b a}}' input.txt

But that command doesn't seem to work.
This prints the lines that contain at least one alphabetical or numerical character, as long as they are inside a header block:

sed -n '/^-----/,/^-----/{/[[:alnum:]]/p;}' file

GNU Sed manual: Range addresses
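Run against the sample file from the question, this yields exactly the expected output (a hedged check, assuming GNU sed):

$ sed -n '/^-----/,/^-----/{/[[:alnum:]]/p;}' input.txt
Introduction
Chapter1: Foo
Chapter bar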
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/683192", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/504663/" ] }
683,209
I am reading about range addresses in GNU sed, but I don't understand how exactly that works. I have tried to run sed --debug, but the output is too verbose to understand. Let's assume that I have the following file input.txt:

===sep1
Aghroum
===sep2
Thirjeen
===sep3
Ya wedi mata ikinikh
===sep4
Ifoullissen
===sep5

When I try to use range addresses as follows:

sed -n '/=/,/=/{/=\|^$/! p}' input.txt

The output is:

# it prints non-empty lines from ===sep1 to ===sep2, and from ===sep3 to ===sep4, etc.
Aghroum
Ya wedi mata ikinikh

As far as I know, GNU sed processes the input file line by line; why doesn't it also match the range between ===sep2 and ===sep3? (Please note that I am not asking how to get those lines; I know how to do that by using something like sed -n '/===/!p'. I am asking why it doesn't start the second range at ===sep2 and end it at ===sep3.) Thank you
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/683209", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/504663/" ] }
683,212
I am having an issue updating my Linux distribution because the package manager, for some reason, is no longer installing the necessary files in the /usr/lib/python-exec folder, and I am not making progress solving it via the default installation. So I want to know how I can populate that folder. The files I need are /usr/lib/python-exec/python3.9/glib-genmarshal, /usr/lib/python-exec/python3.9/glib-mkenums, /usr/lib/python-exec/python3.9/gdbus-codegen, /usr/lib/python-exec/python3.9/dtrace and /usr/lib/python-exec/python3.9/gtkdoc-scan. I could not find much information regarding python-exec, just that it stores Python wrappers for binary programs. My last attempt was installing exec-wrappers to build the wrappers in the mentioned folders; I made some progress, but now I am getting errors about not finding Python modules.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/683212", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/298450/" ] }
683,223
I have this long line:

<hdr><name><first>John</first><mid></mid><last>Smith</last></name><dob>04181995</dob><phone>5550001111<phone></hdr>

How to extract just the following?

<first>John</first><mid></mid><last>Smith</last><dob>04181995</dob><phone>5550001111<phone>

I tried sed but get extra tags:

echo "<hdr><name><first>John</first><mid></mid><last>Smith</last></name><dob>04181995</dob><phone>5550001111<phone></hdr>" | sed -e 's/></>\n</g'
<hdr>
<name>
<first>John</first>
<mid></mid>
<last>Smith</last>
</name>
<dob>04181995</dob>
<phone>5550001111<phone>
</hdr>

Perhaps grep can do it. I am lost. Please help.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/683223", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/507009/" ] }
683,598
Context: https://stackoverflow.com/a/47348104/15603477

printf -v pasteargs %*s 16
paste -d\  ${pasteargs// /- } < <(seq 1 42)
1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16
17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32
33 34 35 36 37 38 39 40 41 42

From the manuals:

paste:
-d, --delimiters=LIST
    reuse characters from LIST instead of TABs

bash:
${parameter/pattern/string}
    The pattern is expanded to produce a pattern just as in filename expansion. Parameter is expanded and the longest match of pattern against its value is replaced with string. The match is performed according to the rules described below (see Pattern Matching). If pattern begins with ‘/’, all matches of pattern are replaced with string.

After checking the manuals, I still don't understand: what does ${pasteargs// /- } do? I do know that %s refers to a printf argument, but I don't know what %*s 16 does. And even though I quoted the manual, I am still not sure about paste -d\ (with its trailing space).
printf %*s 16 means: print 16 spaces. See this answer for further explanation. So now pasteargs is a variable with a value of 16 spaces.

${pasteargs// /- } means: Replace all occurrences of (space) in the variable with - (in other words: add a hyphen before each space in the variable). As you quoted from the manual:

If pattern begins with ‘/’, all matches of pattern are replaced with string.

And the pattern here begins with /, which means: all matches of the space. So now the value of pasteargs is 16 hyphens separated by spaces.

Regarding the paste command, you first need to understand that it's followed by 16 hyphens, meaning 16 input streams. Basically it will merge every 16 consecutive lines into one line. By default, when those lines are merged, they are delimited by tabs. So paste -d\ (notice the trailing space after the backslash) means to separate the lines by spaces (\ ) instead of tabs.

To summarize, this command (as advertised) just merges each 16 consecutive lines from the input into one line separated by spaces.
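A quick way to inspect the two intermediate values (a sketch; the angle brackets are only there to make the whitespace visible):

printf -v pasteargs '%*s' 16   # pasteargs is now 16 spaces
echo "<${pasteargs// /- }>"    # prints <- - - - - - - - - - - - - - - - >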
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/683598", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/505362/" ] }
683,599
I have an odd issue where the XTEST keyboard on my PC will randomly send the 255 keycode, which turns on the screen if it's off. I've disabled most suspect programs like KDE Connect, but the issue is still there. Is there any way to see which exact process is responsible for the keystroke?
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/683599", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/507325/" ] }
683,661
I am using Ubuntu 20.04, and I usually need to find the final executable path for a given command. Let's take awk as an example:

file $(which awk)
# Output: /usr/bin/awk: symbolic link to /etc/alternatives/awk
file /etc/alternatives/awk
# Output: /etc/alternatives/awk: symbolic link to /usr/bin/gawk

My question is: Is there a command (or a flag for the file command) that will directly return the final path of a given command? In the example above, I would like it to return the path of gawk. Thank you
You can use readlink:

$ readlink -f -- "$(which awk)"
/usr/bin/gawk

From man readlink:

-f, --canonicalize
    canonicalize by following every symlink in
    every component of the given name recursively;
    all but the last component must exist
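An equivalent sketch using realpath (also part of GNU coreutils) and the shell built-in command -v in place of which:

realpath "$(command -v awk)"
# /usr/bin/gawk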
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/683661", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/504663/" ] }
683,811
I have this Linux shell command:

echo $(python3 -c 'print("Test"+"\0"+"M"*18)') | nc -u [IP] [PORT]

My intention is to pipe the output of the print statement to the netcat command. The netcat command creates a socket to some service that essentially returns an stdout of the string passed in. The problem here is that when I try to run this command, I get this message:

-bash: warning: command substitution: ignored null byte in input

and my null byte \0 gets ignored. But I don't want the null byte to be ignored. How do I tell the system to NOT ignore my null byte and take the input exactly as I've specified? I have done some Google searches, but honestly speaking they haven't helped much. Also, any link to some great article is much appreciated.

EDIT: Using printf worked. Piping python3 -c 'print("Test"+"\0"+"M"*18)' directly also worked. I valued @cas's explanation. I guess I might be sticking to printf given it's faster (though speed isn't particularly a concern in my case). Thanks to all those who contributed :-).
Strings in bash can not contain a NUL byte, and that includes any output from a command substitution. Bash variables can't contain a NUL either. This can not be ignored or over-ridden (although it can be worked around in some commands, such as printf, by using \0 as a representation of NUL, same as \n is a representation of a newline character). You could just pipe the output of python directly into nc, without echo and command substitution (as in @CharlieWilson's answer), but even that isn't necessary. bash's printf built-in can do what you want. e.g.

{ printf 'Test\0'; printf -- '%.0sM' {1..18}; } | nc -u [IP] [PORT]

This uses a group command ({ list; }) to first print "Test" and a NUL byte (\0) with printf, then uses printf again to print a zero-width string (%.0s) followed by an M 18 times. The output of the entire group command is piped into nc. This works because the printf format "is re-used as necessary to consume all of the arguments" (see help printf), and the brace expansion {1..18} expands to the integers from 1 to 18, supplying eighteen arguments to a printf format string that only has one zero-width string field and a literal "M" character. Hence, 18 M characters are output.

You can see exactly what the group command is outputting by piping it into a hex dumper like xxd or hd instead of nc:

$ { printf 'Test\0'; printf -- '%.0sM' {1..18}; } | hd
00000000  54 65 73 74 00 4d 4d 4d  4d 4d 4d 4d 4d 4d 4d 4d  |Test.MMMMMMMMMMM|
00000010  4d 4d 4d 4d 4d 4d 4d                              |MMMMMMM|
00000017

With this, you can see that the fifth character is a NUL (00). The output from python is slightly different because python's print automatically appends a newline character (0a):

$ python3 -c 'print("Test"+"\0"+"M"*18)' | hd
00000000  54 65 73 74 00 4d 4d 4d  4d 4d 4d 4d 4d 4d 4d 4d  |Test.MMMMMMMMMMM|
00000010  4d 4d 4d 4d 4d 4d 4d 0a                           |MMMMMMM.|
00000018

If the program you're sending this to with nc requires that newline, you can print that with printf '\n', or with echo:

$ { printf 'Test\0'; printf -- '%.0sM' {1..18}; echo; } | hd
00000000  54 65 73 74 00 4d 4d 4d  4d 4d 4d 4d 4d 4d 4d 4d  |Test.MMMMMMMMMMM|
00000010  4d 4d 4d 4d 4d 4d 4d 0a                           |MMMMMMM.|
00000018

FYI: for more info on group commands, see man bash and search for Compound Commands:

{ list; }
    list is simply executed in the current shell environment. list must be terminated with a newline or semicolon. This is known as a group command. The return status is the exit status of list. Note that unlike the metacharacters ( and ), { and } are reserved words and must occur where a reserved word is permitted to be recognized. Since they do not cause a word break, they must be separated from list by whitespace or another shell metacharacter.

BTW, using printf for this will be faster than using python because it avoids the overhead of executing python and compiling the tiny python script... which would be irrelevant on anything even resembling a modern system for a one-off command, but significant if run in a loop. e.g. on my ~10 year old 6-core AMD Phenom II 1090T:

$ time { printf 'Test\0'; printf -- '%.0sM' {1..18}; }
TestMMMMMMMMMMMMMMMMMM
real    0m0.000s
user    0m0.000s
sys     0m0.000s

$ time python3 -c 'print("Test"+"\0"+"M"*18)'
TestMMMMMMMMMMMMMMMMMM
real    0m0.036s
user    0m0.028s
sys     0m0.008s

printf doesn't actually take 0 seconds, it takes more than that, but the amount of time is too small to be represented by my $TIMEFORMAT string.
Perl is a bit faster to start up and compile and run its tiny script than python, but you still wouldn't want to run it repeatedly in a shell loop:

$ time perl -e 'print "Test\0" . "M" x 18 . "\n"'
TestMMMMMMMMMMMMMMMMMM
real    0m0.008s
user    0m0.000s
sys     0m0.009s

or even faster using perl's printf:

$ time perl -e 'printf "Test\0%s\n", "M" x 18'
TestMMMMMMMMMMMMMMMMMM
real    0m0.003s
user    0m0.003s
sys     0m0.000s
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/683811", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/507530/" ] }
684,070
I want to install aria2c manually by copying it to /usr/local/bin, but I have already installed aria2c with apt in /usr/bin. Which one of those is going to be executed if I type aria2c?
The executable that will be executed depends on the ordering of the directories in the PATH variable. If /usr/bin is listed before /usr/local/bin, then /usr/bin/aria2c would be executed rather than /usr/local/bin/aria2c.

If your shell does hashing of executables, and if it has already accessed aria2c from /usr/bin before you installed the same utility in /usr/local/bin, then it may choose /usr/bin/aria2c regardless of the ordering of the directories in PATH. Note that this probably only happens in the specific case where you have used the utility, then install it in another location, and then try to use it again in the same shell session. The command hash -r would clear the remembered locations of utilities in a shell session. See also How do I clear Bash's cache of paths to executables?

If you have an alias or shell function called aria2c, then that would be used before the shell uses PATH to locate the executable.

On my personal (non-Linux) system:

$ printf '%s\n' "$PATH" | tr ':' '\n'
/usr/bin
/bin
/usr/sbin
/sbin
/usr/X11R6/bin
/usr/local/bin
/usr/local/sbin
/usr/games

As you can see, /usr/local/bin is way after /usr/bin on my system. I've set it up like that to avoid accidentally overriding base system utilities in /usr/bin. You likely want the opposite order if you want to give local executables priority over the ones in /usr/bin.
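To see which copy your current shell would actually run, and to flush the cache after installing the new one, a sketch in bash:

type -a aria2c   # lists every aria2c found, in PATH order, plus aliases/functions
hash -r          # forget remembered executable locations in this shell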
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/684070", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/507746/" ] }
684,104
How does bit rot affect a LUKS container and the filesystem inside? Suppose you have a filesystem that is well suited to deal with bit rot. Now put it inside a LUKS container. In case bit rot corrupted the container, I assume the decrypted filesystem will suffer huge amounts of corrupted raw bytes / blocks. How does LUKS protect against this?
Bitrot in the LUKS header (key and otherwise critical material): it's *poof* gone. (There is a bit of redundancy and checksum for the LUKS2 header but it doesn't cover much, so chances are... it's still gone). Bitrot in encrypted data: it depends on the encryption mode, but in general, a single bit flip will result in 16 wrong bytes. Set up encryption:

# truncate -s 32M bitrottest.img
# cryptsetup luksFormat bitrottest.img
# cryptsetup luksOpen bitrottest.img bitrottest

Make it all zero:

# shred -n 0 -z /dev/mapper/bitrottest
# hexdump -C /dev/mapper/bitrottest
00000000  00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00  |................|
*
01000000

Flip a bit:

# losetup
NAME       SIZELIMIT OFFSET AUTOCLEAR RO BACK-FILE      DIO LOG-SEC
/dev/loop0         0      0         1  0 bitrottest.img   0    4096
# dd bs=1 count=1 skip=30M if=/dev/loop0 | hexdump -C
00000000  a2  |.|
00000001
# printf "\xa3" | dd bs=1 count=1 seek=30M of=/dev/loop0
# dd bs=1 count=1 skip=30M if=/dev/loop0 | hexdump -C
00000000  a3  |.|
00000001

Result:

# hexdump -C /dev/mapper/bitrottest
00000000  00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00  |................|
*
00e00000  eb d1 bd b0 2a f5 77 73  35 df 82 40 1e a7 27 11  |....*.ws5..@..'.|
00e00010  00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00  |................|
*
01000000

One flipped bit, 16 whacky bytes. Protection? None whatsoever. For that, you'd have to add integrity (just to report errors, redundancy is still a separate issue from that). You are not supposed to deliberately write corrupt data to your storage. Storage is supposed to report read errors instead of returning bogus data. In that case your data is still gone, but at least, it's not silent bitrot.
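If you do want detection at this layer, LUKS2 can stack dm-integrity underneath the encryption. A minimal sketch (this only gives detection — reads of damaged sectors then fail with an I/O error instead of returning 16 garbage bytes; repair still needs RAID or a checksumming filesystem on top):

# format with a per-sector authentication tag; initial wipe takes a while
# cryptsetup luksFormat --type luks2 --integrity hmac-sha256 bitrottest.img
# cryptsetup luksOpen bitrottest.img bitrottest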
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/684104", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/361422/" ] }
684,171
I'm migrating some software from Unix to Linux. I have the following script; it is a trigger of a file transfer. What do the exec commands do? Will they work also on Linux? #!/bin/bashflog=/mypath/log/mylog_$8.logpid=$$flog_otherlog=/mypath/log/started_script_${8}_${pid}.logexec 6>&1exec 7>&2exec >> $flogexec 2>&1exec 1>&6 exec 2>&7/usr/local/bin/sudo su - auser -c "/mypath/bin/started_script.sh $1 $pid $flog_otherlog $8" The started script is the following: #!/bin/bashflusso=$1pidpadre=$2flogcurr=$3corrid=$4pid=$$exec >> $flogcurrexec 2>&1if [ $1 = pippo ] || [ $1 = pluto ] || [ $1 = paperino ] then fullfile=${myetlittin}/$flusso filename="${flusso%.*}" datafile=$(ls -le $fullfile | awk '{print $6, " ", $7, " ", $9, " ", $8 }') dimfile=$(ls -le $fullfile | awk '{print $5 " " }') aaaammgg=$(ls -E $fullfile | awk '{print $6}'| sed 's#-##g') aaaamm=$(echo $aaaammgg | cut -c1-6) dest_dir=${myetlwarehouse}/mypath/${aaaamm} dest_name=${dest_dir}/${filename}_${aaaammgg}.CSV mkdir -p $dest_dir cp $fullfile $dest_name rc_copia=$?fi I will change ls -le into ls -l --time-style="+%b %d %T %Y" and ls -E into ls -l --time-style=full-isoand in Linux.
exec [n]<&word will duplicate an input file descriptor in bash. exec [n]>&word will duplicate an output file descriptor in bash. See 3.6.8 in: https://www.gnu.org/software/bash/manual/html_node/Redirections.html The order of arguments can be confusing, though. In your script: exec 6>&1 creates a copy of file descriptor 1 , i.e. STDOUT, and stores it as file descriptor 6 . exec 1>&6 copies 6 back unto 1 . It could also have been moved by appending a dash, i.e. 1<&6- closing descriptor 6 and leaving only 1 . In between, you'll usually find operations that write to STDOUT and STDIN, e.g. in a subshell. Also see: Practical use for moving file descriptors
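To make the save/restore dance concrete, here is a small self-contained sketch (the log file name is made up) you can paste into a bash script:

exec 6>&1            # save the current stdout as fd 6
exec >> /tmp/my.log  # point stdout at a log file
echo "this line goes to the log"
exec 1>&6 6>&-       # copy fd 6 back onto stdout, then close fd 6
echo "this line goes to the original stdout again"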
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/684171", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/184179/" ] }
684,172
I wrote this bash script, but it doesn't work.

dex=`date +%Y%m%d`
if [ -f "bv$dex.txt" ]; then
eval bv$dex=`cat ./bv$dex.txt`
let bv$dex+=1 ;
echo $bv$dex > ./bv$dex.txt
else
echo 1 > ./bv$dex.txt
fi

For some reason it just writes variable dex in file, instead of number + 1
The script doesn't fail at the arithmetic — it fails at echo $bv$dex , which the shell expands as the (empty) variable $bv followed by $dex , so the date ends up in the file. Referencing a dynamically named variable requires indirection ( ${!name} ) or eval at every use, which is fragile; it's much simpler to keep the counter in an ordinary variable:

#!/bin/bash
dex=$(date +%Y%m%d)
file=bv$dex.txt
if [ -f "$file" ]; then
    count=$(cat "$file")
    echo "$((count + 1))" > "$file"
else
    echo 1 > "$file"
fi

If you really want the dynamic name, you would have to write var=bv$dex and then echo "${!var}" , but nothing here requires it.
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/684172", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/488179/" ] }
684,362
I'm trying to find out what happened to my laptop LUKS drive. I'm sure it ran out of battery since I forgot to plug it in. This morning I booted the system and the LUKS password does not work. I tried several reboots and every time it ends up offering emergency console after 3 tries because it can't decrypt the drive. My question is, if the laptop lost power and did not have time to suspend/sleep, can that corrupt the password? I had thought a power-off corruption would cause it to not ask for any password at all... corrupted, not just mess up password. I tried decrypting the crypt volume while booted from live CD but it still fails. In this case would it still be worthwhile to try restore boot files?
LUKS works by dedicating a small amount of space, typically on the encrypted partition, to the "LUKS header". That header contains a checksum, so it should detect corruption. Further, there are two copies, each with their own checksum, so it should automatically use the other copy if one is corrupted. Along with the header, there is the keyslot data, which actually stores the encryption keys. That is not duplicated, but I believe it will at least detect corruption (and you could use a backup key/passphrase if you have one). Documentation of the format can be found at https://gitlab.com/cryptsetup/LUKS2-docs/blob/master/luks2_doc_wip.pdf So I think it's very unlikely the on-disk data got corrupted, and even more unlikely that corruption would show up as a wrong-passphrase prompt instead of a corruption error. More likely, you're just entering the wrong passphrase — possibly you're using a different keyboard layout, had caps lock on, or you've forgotten it. If you have a backup recovery key, you can use that to recover your data. (Side note, if it did get corrupted and you don't have a backup of the header + keyslots or the master key, the data is entirely unrecoverable). Frostschutz points out that if you're still on the old LUKS1 format then there isn't a checksum, so corruption could occur (though there are still magic numbers, etc., so if the entire sector were overwritten that'd be noticed). Also, if you've upgraded from a very old cryptsetup/gcrypt (2014-era), then there was a bugfix which broke cryptsetup; see https://gitlab.com/cryptsetup/cryptsetup/-/wikis/FrequentlyAskedQuestions#8-issues-with-specific-versions-of-cryptsetup for details.
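Since header damage really is unrecoverable, it's worth making that backup on any machine where the data still matters (the device name below is a placeholder for your encrypted partition; the backup includes the keyslot area):

# cryptsetup luksHeaderBackup /dev/sdX2 --header-backup-file luks-header.img
# and, only if disaster strikes later:
# cryptsetup luksHeaderRestore /dev/sdX2 --header-backup-file luks-header.img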
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/684362", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/426322/" ] }
684,417
I have a tool that is able to create a completion file for bash, zsh and fish. I normally use zsh, but I cannot get this completion file to work in zsh. So as a test I installed fish and created the completion file for fish; I cannot get that one to work either. All other completions are working fine in zsh and in fish, so I suspect that the tool creates a broken completion file. Is there a way to check the syntax of a completion file for errors?
My answer addresses zsh (with the “new” completion system, i.e. after calling compinit ) and bash. For fish, see NotTheDr01ds's answer . If there is a syntax error in the completion file, you'll see an error message when the completion code is loaded. In bash, this happens when /etc/bash_completion is loaded. In zsh, this happens the first time the completion function is invoked. But syntax errors are rare. Most errors are not syntax errors, and there's no way to check whether a completion function works other than invoking it. If an actual error happens while generating the completions, you'll see an error message on the terminal. But if the code doesn't trigger an error, but doesn't generate the desired completions, then there's no error to display. The first thing to check is whether the completion function you want is actually invoked. For zsh, check echo $_comps[foo] where foo is the command whose options/arguments are to be completed. For bash, check complete -p foo If your custom completion function isn't loaded, see Dynamic zsh autocomplete for custom commands or Custom bash tab completion . If you want to debug the completion code, in zsh, press ^X? ( Ctrl+X ? ) instead of Tab to run _complete_debug . This places a trace of the completion code into a file which you can view by pressing Up Enter .
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/684417", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/132683/" ] }
684,523
In this answer to How does "cat << EOF" work in bash? on Stack Overflow, I get the first two points. But I don't get the third point, "Pass multi-line string to a pipe in Bash":

$ cat <<EOF | grep 'b' | tee b.txt
foo
bar
baz
EOF

Since it has three words and two pipe characters, I am not sure how to interpret it.
From your comment: I am not sure what does the first pipe ("|") character for? The first | character connects the output of cat to the input of grep . << redirects the input of cat ; it's a totally independent redirection, similar to < in cat <some_file | grep … . You may prefer

<<EOF cat | grep 'b' | tee b.txt

(compare this answer ) because if you read this from left to right then it will strictly correspond to how the data flows: here document → cat → grep → tee . Note all this can be done without cat :

<<EOF grep 'b' | tee b.txt
foo
bar
baz
EOF

(or grep 'b' <<EOF | … ).
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/684523", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/505362/" ] }
684,765
If I run a command in the terminal like this $ tyop --count 3 --exact haveibeenpwned and the command returns with an error code, for example command not found: tyop , how can I rerun the last command keeping the command line arguments --count 3 --exact haveibeenpwned with another command name (for example typo instead of tyop )? $ typo --count 3 --exact haveibeenpwned I'm looking for a shortcut, or a shell function, like !! or !^ , if possible.
typo !* From man bash : Word Designators Word designators are used to select desired words from the event. A : separates the event specification from the word designator. It may be omitted if the word designator begins with a ^, $, *, -, or %. Words are numbered from the beginning of the line, with the first word being denoted by 0 (zero). Words are inserted into the current line separated by single spaces. * All of the words but the zeroth. This is a synonym for `1-$'. It is not an error to use * if there is just one word in the event; the empty string is returned in that case.
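A few related designators from the same manual section, using the question's command as the history event (what each expands to is shown as a comment):

typo !^        # first argument only:      typo --count
typo !$        # last argument only:       typo haveibeenpwned
typo !:2-3     # arguments 2 through 3:    typo 3 --exact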
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/684765", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/505038/" ] }
684,792
I installed the editor Atom as snap. Normally I can start Atom from the command line by typing atom . But after reinstalling my Ubuntu system I get an error:

nohup: failed to run command '/tmp/troubadix/atom-build/Atom/atom': No such file or directory

Every other program installed via Snap or Deb starts from the terminal just fine.
That path ( /tmp/troubadix/atom-build/Atom/atom ) is not where the snap puts Atom, so something in your shell is still resolving atom to a leftover development build — most likely an alias, shell function, or wrapper script carried over from your old home directory when you reinstalled. Check what actually runs with type -a atom . If it shows an alias or function, look for it in ~/.bashrc , ~/.bash_aliases or ~/.profile and remove it; if it shows a script in ~/bin or ~/.local/bin , delete that. If it's only a stale cached location, hash -r makes bash forget it. The snap itself is launched via /snap/bin/atom , so once the stale definition is gone (and /snap/bin is in your PATH ), plain atom should work again.
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/684792", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/508527/" ] }
684,811
I have a compressed raw image of a very large hard drive created using cat /dev/sdx | xz > image.xz . However, the free space in the drive was zeroed before this operation, and the image consists mostly of zero bytes. What's the easiest way to extract this image as a sparse file, such that the blocks of zeroes do not take up any space?
Citing the xz manpage (which you really should consult with such questions), in which I very quickly searched for sparse : --no-sparse Disable creation of sparse files. By default, if decompressing into a regular file, xz tries to make the file sparse if the decompressed data contains long sequences of binary zeros . It also works when writing to standard output as long as standard output is connected to a regular file and certain additional conditions are met to make it safe. Creating sparse files may save disk space and speed up the decompression by reducing the amount of disk I/O. (emphasis mine) So, you don't have to do anything; just decompress with the default xz tool.
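You can confirm afterwards that the result really is sparse by comparing the apparent size with the blocks actually allocated (using the file name from the question):

$ xz --decompress --keep image.xz
$ ls -lh image    # apparent size of the raw image
$ du -h image     # blocks actually allocated; far smaller if the zeros were made sparse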
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/684811", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/36553/" ] }
684,833
I am writing a bash script which contains a simple if section with two conditions: if [[ -n $VAR_A ]] && [[ -n $VAR_B ]]; then echo >&2 "error: cannot use MODE B in MODE A" && exit 1 fi A senior engineer reviewed my code and commented: please avoid using && when you could simply execute the two commands in subsequent lines instead. He didn't further explain. But out of curiosity, I wonder if this is true, and what is the reason for avoiding using && .
The review comment probably refers to the second usage of the && operator. You don't want to not exit if the echo fails, I guess, so writing the commands on separate lines makes more sense:

if [[ -n $VAR_A ]] && [[ -n $VAR_B ]]; then
    echo >&2 "error: cannot use MODE B in MODE A"
    exit 1
fi

BTW, in bash you can include && inside the [[ ... ]] conditions:

if [[ -n $VAR_A && -n $VAR_B ]]; then
{ "score": 8, "source": [ "https://unix.stackexchange.com/questions/684833", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/491732/" ] }
684,839
So I'm building a custom Linux-based OS, and I chose to run it as a RAM disk (initramfs). Unfortunately, I keep getting a Kernel Panic during boot.

RAMDISK: gzip image found at block 0
using deprecated initrd support, will be removed in 2021.
exFAT-fs (ram0): invalid boot record signature
exFAT-fs (ram0): failed to read boot sector
exFAT-fs (ram0): failed to recognize exfat type
exFAT-fs (ram0): invalid boot record signature
exFAT-fs (ram0): failed to read boot sector
exFAT-fs (ram0): failed to recognize exfat type
List of all partitions:
0100 4096 ram0 (driver?)
0101 4096 ram1 (driver?)
0102 4096 ram2 (driver?)
0103 4096 ram3 (driver?)
0104 4096 ram4 (driver?)
0105 4096 ram5 (driver?)
0106 4096 ram6 (driver?)
0107 4096 ram7 (driver?)
0108 4096 ram8 (driver?)
0109 4096 ram9 (driver?)
010a 4096 ram10 (driver?)
010b 4096 ram11 (driver?)
010c 4096 ram12 (driver?)
010d 4096 ram13 (driver?)
010e 4096 ram14 (driver?)
010f 4096 ram15 (driver?)
No filesystem could mount root, tried: vfat msdos exfat ntfs ntfs3
Kernel panic - not syncing: VFS: Unable to mount root fs on unknown-block(1,0)

Any chance this is something missing in my kernel build? Here's how I've designed the OS:

Component      My Choice
Init Daemon    initrd
Commands       busybox 1.35.0
Kernel         Linux 5.15.12
filesystem     msdos, fat, exfat, ext2, ext3, or ext4
Bootloader     syslinux or extlinux

NOTES: I tried each file system one at a time, and all provide the same response, which leads me to believe that it is not an issue with the filesystem itself. I also tried both syslinux and extlinux for testing purposes. Here's how I've structured my disk:

/media/vfloppy
└── [    512 Jan  3 08:06] boot
    ├── [  36896 Jan  3 08:06] initramfs.cpio.gz
    ├── [    512 Jan  3 08:06] syslinux
    │   ├── [    283 Jan  3 08:06] boot.msg
    │   ├── [ 120912 Jan  3 08:06] ldlinux.c32
    │   ├── [  60928 Jan  3 08:06] ldlinux.sys
    │   └── [    173 Jan  3 08:06] syslinux.cfg
    └── [ 939968 Jan  3 08:06] vmlinux

Here is my syslinux.cfg :

DISPLAY boot.msg
DEFAULT linux
label linux
  KERNEL /boot/vmlinux
  INITRD /boot/initramfs.cpio.gz
  APPEND root=/dev/ram0 init=/init loglevel=3
PROMPT 1
TIMEOUT 10
F1 boot.msg

I've also enabled the following filesystem options in my kernel's .config file:

CONFIG_BLK_DEV_INITRD=y
CONFIG_INITRAMFS_SOURCE=""
CONFIG_FS_IOMAP=y
CONFIG_EXT2_FS=y
CONFIG_EXT2_FS_XATTR=y
CONFIG_FS_MBCACHE=y
CONFIG_EXPORTFS_BLOCK_OPS=y
CONFIG_FAT_FS=y
CONFIG_MSDOS_FS=y
CONFIG_PROC_FS=y
CONFIG_BLK_DEV_RAM=y
CONFIG_BLK_DEV_RAM_COUNT=16
CONFIG_BLK_DEV_RAM_SIZE=4096
CONFIG_HAVE_KERNEL_GZIP=y
CONFIG_RD_GZIP=y
CONFIG_DECOMPRESS_GZIP=y
The boot log points at the likely cause: "using deprecated initrd support" means the kernel did not recognize the image as an initramfs (a newc-format cpio archive), so it fell back to the legacy initrd path — it decompressed the image into /dev/ram0 and then tried to mount ram0 with every filesystem you built in, which fails because a cpio archive is not a filesystem. Two things to check (this is a diagnosis from the log, not a guaranteed fix): first, create the archive in newc format from inside the rootfs tree, e.g. find . | cpio -o -H newc | gzip > ../initramfs.cpio.gz , with an executable /init at its root; second, once the kernel unpacks the initramfs itself, remove root=/dev/ram0 from the APPEND line — with a real initramfs there is no root block device to mount, the kernel simply runs /init from the unpacked archive.
{ "score": 8, "source": [ "https://unix.stackexchange.com/questions/684839", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/508479/" ] }
684,925
When the hex number is relatively small, I can use echo 0xFF | mawk '{ printf "%d\n", $1 }' to convert hex to dec. When the hex number is huge, mawk does not work any more, e.g. echo 0x8110D248 | mawk '{ printf "%d\n", $1 }' outputs 2147483647 (which is wrong; 2147483647 is equivalent to 0x7FFFFFFF ). How can I convert larger numbers? I have a lot of numbers (one number per line, more than 10M) to be processed, e.g. 0xFF , 0x1A , 0x25 , each on its own line. How can I make it work in such a case? By xargs ? Is there any better method? xargs is really slow.
A better command to use for arbitrarily large numbers is bc . Here's a function to perform the conversion:

hextodec() {
    local hex="${1#0x}"
    printf "ibase=16; %s\n" "${hex^^}" | bc
}

hextodec 0x8110D248
2165363272

I'm using a couple of strange-looking features here that manipulate the value of the variables as I use them: "${1#0x}" - This references "$1" , the first parameter to the function, as you would expect. The # is a modifier (see man bash , for example, or read POSIX ) that removes the following expression from the front of the value. For example, 0xab12 would be returned as ab12 "${hex^^}" - This references "$hex" but returns its value with alphabetic characters mapped to uppercase. (This is a bash extension, so read man bash but not POSIX.) For example, 12ab34 would be returned as 12AB34 In both cases the { … } curly brackets bind the modifiers to the variable; "$hex^^" would have simply returned the value of the $hex variable followed by two up-arrow/caret characters
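For the 10M-line file in the question, the real cost is starting one process per number; you can instead convert the whole file with a single bc by emitting the base switch once (file names are placeholders):

# strip the 0x prefix, uppercase the hex digits, feed everything to one bc
{ echo 'ibase=16'; sed 's/^0x//' numbers.txt | tr '[:lower:]' '[:upper:]'; } | bc > decimal.txt

bc then prints one decimal per input line, with no per-number process startup.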
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/684925", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/416809/" ] }
684,984
I have a file user-history.txt and the file contents are in the following pattern:

user-1 6
user-1 7
user-2 6
user-2 7
user-2 8
user-3 6
user-3 7
user-3 9
user-4 6

I would like to combine the records so that each user is only mentioned once, and the second column is combined respectively. Desired Output:

user-1 6,7
user-2 6,7,8
user-3 6,7,9
user-4 6

What I have tried: I have not been able to get my head around this problem as I am not yet experienced enough. I have looked for other solutions and, though there are similar questions, I have not found any which solve my specific problem. I would be open to other solutions if (G)AWK is not the simplest tool to use for this task. Detailed explanation would be appreciated so I can improve my knowledge.
$ cat tst.awk
$1 != prev {
    if ( prev != "" ) {
        print prev, vals
    }
    prev = $1
    vals = $2
    next
}
{ vals = vals "," $2 }
END { print prev, vals }

$ awk -f tst.awk file
user-1 6,7
user-2 6,7,8
user-3 6,7,9
user-4 6

I THINK what that's doing is obvious enough to not need any explanation but if there's any part of it you don't understand just ask in a comment below.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/684984", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/439791/" ] }
685,025
I have a file which contains a list of words under each other where these words belong to one sentence, and then the words that belong to the next sentences are also under each other. The chunk of words related to one sentence is followed by a blank line, as shown in Representation #2 below. Expected Output (Representation #1):

These are the words for sentence 1
These are the words for sentence 2

Expected Input (Representation #2):

These
are
the
words
for
sentence 1

these
are
the
words
for
sentence 2

I tried following this question but it doesn't work where I have different words for different sentences, so how can I change representation number 2 to representation number 1 in linux?
With awk: awk 'BEGIN { RS = "" } {gsub(/ *\n */, " "); print}' FILE
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/685025", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/486264/" ] }
685,037
I am trying to create a simple bash script that can run the "specific" port scan on multiple IPs and Ports using nmap -p. The issue I am having is that when it reads the port# followed by the IP from the .txt file, the text file has the necessary space between port and IP, but it causes the script to fail. The code I have is below. I was trying to make this simple; the only other thing I can think of is creating an array, but even then I am thinking that the format for the nmap -p port scan is going to have the same issue. Any suggestions?

for i in $(cat 'filepathway')
do
nmap -p $i
done

its executing this: nmap -p 'port#' instead of this: nmap -p 'port#' 'IP#' The .txt looks like this (these values are random):

23001 172.55.545.254
23002 172.55.545.254
...
The problem is that for i in $(cat 'filepathway') splits the file on every whitespace, so the port and the IP arrive as two separate loop iterations and nmap -p $i only ever gets one of them. Read the two columns of each line together instead:

while read -r port ip; do
    nmap -p "$port" "$ip"
done < 'filepathway'

read splits each line on whitespace and assigns the first field to port and the remainder to ip , so the space between port and IP in the .txt file is exactly the separator you want rather than something that breaks the loop.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/685037", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/495733/" ] }
685,119
How can I configure the find command to permanently exclude a certain directory? https://stackoverflow.com/questions/4210042/how-to-exclude-a-directory-in-find-command I tried to add the following alias to bashrc:

alias find='find -not \( -path ./.git -prune \)'

but it doesn't seem to work. In ripgrep you can configure this: https://github.com/BurntSushi/ripgrep/blob/master/GUIDE.md#configuration-file So how can I configure find , once and for all, to exclude a certain directory such as .git ?
I'd write it as a myfind wrapper script like:

#! /bin/sh -
predicate_found=false skip_one=false
for arg do
  if "$skip_one"; then
    skip_one=false
  elif ! "$predicate_found"; then
    case $arg in
      (-depth | -ignore_readdir_race | -noignore_readdir_race | \
       -mount | -xdev | -noleaf | -show-control-chars) ;;
      (-files0-from | -maxdepth | -mindepth) skip_one=true;;
      (['()!'] | -[[:lower:]]?*)
        predicate_found=true
        set -- "$@" ! '(' -name .git -prune ')' '('
    esac
  fi
  set -- "$@" "$arg"
  shift
done
if "$predicate_found"; then
  set -- "$@" ')'
else
  set -- "$@" ! '(' -name .git -prune ')'
fi
exec find "$@"

Which inserts ! ( -name .git -prune ) before the first non-option predicate¹ (or at the end if no predicate is found), and wraps the rest between ( and ) to avoid problems with expressions using -o . For instance, myfind -L . /tmp /etc -maxdepth 1 -type d -o -print would become find -L . /tmp /etc -maxdepth 1 ! '(' -name .git -prune ')' '(' -type d -o -print ')' . That prune s all .git directories. To prune only the ones at depth 1 of each of the directories passed as arguments, with FreeBSD's find , you could add a -depth 1 before -name .git . .git dirs could end up being traversed if you add some -mindepth 2 (or any number greater than 2). Note that -prune cannot be used in combination with -depth (or -delete which implies -depth ). ¹ here taking care of the GNU find option predicates to avoid its warnings if you insert things before them. It uses heuristics for that. It could be fooled if you used for instance BSD's myfind -f -my-dir-starting-with-dash-...
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/685119", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/505362/" ] }
685,125
My bash script:

echo -n "Round Name:"
read round
mkdir $round
echo -n "File Names:"
read $1 $2 $3 $4 $5 $6
cp ~/Documents/Library/Template.py $1.py $2.py $3.py $4.py $5.py $6.py .

I have automation for directories and want the same automation for filenames. After taking unknown inputs, how can I make my shell script do this?

cp ~/Documents/Library/Template.py A.py B.py C.py D1.py D2.py $round/.
read $1 $2 $3 $4 $5 $6 doesn't do what you hope: it tells read to store the words it reads in variables whose names are the script's positional parameters (which are mostly empty), so nothing useful gets set. For an unknown number of names, read them into an array and loop over it — assuming the goal is one copy of the template per name inside the round directory:

#!/bin/bash
echo -n "Round Name:"
read -r round
mkdir -p "$round"
echo -n "File Names:"
read -r -a names
for n in "${names[@]}"; do
    cp ~/Documents/Library/Template.py "$round/$n.py"
done

read -a splits the input line on whitespace into the array names , so you can type as many names as you like ( A B C D1 D2 ) and get A.py , B.py , etc. in $round .
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/685125", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/508426/" ] }
685,134
We want to scan for words on our GNU/Linux machine. A simple way is for example with grep: grep -r "some_word" /var However, in the case where we want to scan the whole filesystem, we can't just do grep -r "some_word" / , because that would also scan the files in /proc (these should be excluded). Therefore I want to know if there are some useful tools for this purpose. I know that GNU grep has the --exclude-dir option, but I would still like to know a better way of searching for words across the whole GNU/Linux filesystem.
GNU grep's --exclude-dir is the straightforward way, and you can name the virtual filesystems in one go:

grep -r --exclude-dir={proc,sys,dev,run} "some_word" /

Alternatively, drive the search with find , whose -xdev stays on one filesystem and therefore never descends into /proc , /sys or any other mount point, without you having to enumerate them:

find / -xdev -type f -exec grep -l "some_word" {} +

Run one such find per real filesystem you want covered (e.g. / and /home if they are separate mounts). There is no dedicated "scan everything except the pseudo-filesystems" tool in the base system; these two approaches are the usual ones.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/685134", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/237298/" ] }
685,196
column is available in packages util-linux and bsdmainutils . Both these packages are installed in Linux Mint 20.2

$ type column
column is /usr/bin/column
column is /bin/column

Both these column are pointing to the bsd column tool. How can I access the tool from util-linux ?
In Linux Mint 20.2, util-linux doesn’t provide column ; the version shipped in Mint is 2.34-0.1ubuntu9.1, but the package only started providing column in version 2.35.2-3 of the package . You can verify which packages provide a given binary using apt-file :

$ apt-file search bin/column
autogen: /usr/bin/columns
bsdmainutils: /usr/bin/column
xymon: /usr/lib/xymon/cgi-bin/columndoc.sh

column changed packages during a transition from bsdmainutils to util-linux ; this transition hasn’t reached Mint yet. The old bsdmainutils tools are now part of a new bsdextrautils package, which is built from util-linux . This will only be available in Linux Mint once a release is made based on Ubuntu 21.04 or later. If you really want the util-linux version of column , you’ll have to build it yourself.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/685196", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/220462/" ] }
685,233
I was wondering if it would be possible to write a disk image file directly to a partition without saving it as a file first. Something like dd if="http://diskimages.com/i_am_a_disk_image.img" of=/dev/sdb1 bs=2M I would also accept an answer in C or Python because I know how to compile them.
This is actually trivial. You can write to the device just like it's a file, and there are commands for directly downloading content and either writing it to a file or writing it to "stdout". As the user root you can simply: curl https://www.example.com/some/file.img > /dev/sdb Where /dev/sdb is your hard drive. This is not generally recommended but will work just fine and is useful in very small devices without much disk space. Incidently it would be more normal to write a disk image to a disk /dev/sdb not a partition /dev/sdb1 .
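If you'd like progress reporting and larger buffered writes, you can keep dd in the pipeline (the URL and device here are placeholders, and status=progress / conv=fsync are GNU dd options):

curl -fL https://www.example.com/some/file.img | sudo dd of=/dev/sdb bs=4M status=progress conv=fsync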
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/685233", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/499916/" ] }
685,296
I am running a test server (local application); I want to fetch its process id and kill it. How to do it? I am running the test-server using the command nohup ./test-server & and verifying the PID for the process using:

ps -ef | grep 'test-server' | grep -v 'grep' | awk '{ printf $2 }'

output:

svr-ser+ 42707 42618 0 10:43 pts/2 00:00:00 /bin/sh ./test-server
svr-ser+ 42709 42707 0 10:43 pts/2 00:00:00 /bin/sh ./test -Dserver_port=1099 -s -j test-server.log
svr-ser+ 42734 42709 9 10:43 pts/2 00:00:01 /usr/bin/java -server -XX:+HeapDumpOnOutOfMemoryError -Xms1g -Xmx1g -XX:MaxMetaspaceSize=256m -XX:+UseG1GC -XX:MaxGCPauseMillis=100 -XX:G1ReservePercent=20 -Djava.security.egd=file:/dev/urandom -Duser.language=en -Duser.region=EN -Dserver_port=1099 -s -j test-server.log &

Using the following command to kill the process:

ps -ef | grep 'test-server' | grep -v 'grep' | xargs kill -9

Output:

kill: cannot find process "svr-ser+"
Killed

How to retrieve only 42707 and kill it? I want to kill very specifically the ./test-server process, nothing else.
To get the PID of something whose name you can describe by a regex, as you do with your grep , can simply be done using pgrep test-server , as in kill -9 $(pgrep test-server) . But that's a detour that you don't have to take; pkill does it directly, pkill -9 test-server .
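One wrinkle for this specific case: the java process's command line also contains test-server (via -j test-server.log ), so a broad match could kill more than intended. Matching against the full command line with -f and anchoring the pattern narrows it to the one process (preview first, then kill):

pgrep -af '\./test-server$'     # show PID and command line of what would match
pkill -9 -f '\./test-server$'   # kill only the process whose cmdline ends in ./test-server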
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/685296", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/339057/" ] }
685,305
In short:

mkfifo fifo; (echo a > fifo) &; (echo b > fifo) &; cat fifo

What I expected:

a
b

since the first echo … > fifo should be the first to have opened the file, so I expect that process to be the first to write to it (with its open unblocking first). What I get:

b
a

To my surprise, this behaviour also happened when opening two separate terminals to do the writing in definitely independent processes. Am I misunderstanding something about the first-in, first-out semantics of a named pipe? Stephen suggested adding a delay:

#!/usr/bin/zsh
delay=$1
N=$(( $2 - 1 ))
out=$(for n in {00..$N}; do
    mkfifo /tmp/fifo$n
    (echo $n > /tmp/fifo$n) &
    sleep $delay
    (echo $(( $n + 1000 )) > /tmp/fifo$n )&
    # intentionally using `cat` here to not step into any smartness
    cat /tmp/fifo$n | sort -C || echo +1
    rm /tmp/fifo$n
done)
echo "$(( $res )) inverted out of $(( $N + 1 ))"

Now, this works 100% correct ( delay = 0.1, N = 100 ). Still, running mkfifo fifo; (echo a > fifo) &; sleep 0.1 ; (echo b > fifo) &; cat fifo manually almost always yields the inverted order. In fact, even copying and pasting the for loop itself fails about half of the time. I'm very confused about what's happening here.
This has nothing to do with FIFO semantics of pipes, and doesn’t prove anything about them either way. It has to do with the fact that FIFOs block on opening until they are opened for both writing and reading; so nothing happens until cat opens fifo for reading. since the first echo should be first. Starting processes in the background means that you don’t know when they will actually be scheduled, so there’s no guarantee that the first background process will do its work before the second one. The same applies to unblocking blocked processes . You can improve the odds, while still using background processes, by artificially delaying the second one: rm fifo; mkfifo fifo; echo a > fifo & (sleep 0.1; echo b > fifo) & cat fifo The longer the delay, the better the odds: echo a > fifo blocks waiting to finish opening fifo , cat starts and opens fifo which unblocks echo a , and then echo b runs. However the major factor here is when cat opens the FIFO: until then, the shells block trying to set up the redirections. The output order seen ultimately depends on the order in which the writing processes are unblocked. You’ll get different results if you run cat first: rm fifo; mkfifo fifo; cat fifo & echo a > fifo & echo b > fifo That way, opening fifo for writing will tend not to block (still, without guarantees), so you’ll see a first with a higher frequency than in the first setup. You’ll also see cat finishing before echo b runs, i.e. only a being output.
{ "score": 7, "source": [ "https://unix.stackexchange.com/questions/685305", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/106650/" ] }
685,452
I have a file named path.txt that contains the directory paths to some files as rows:

../../data/first.gz
../../data/second.gz

I want to read path.txt , read each line, store the content of those files (.gz files) into a new file. I found a similar question here awk command for reading files that are the contents of another file and this code (file names changed to match my data): awk '{ while ((getline a < $0) > 0) print a }' path.txt >> newfile I am new to awk and bash. I do not know how to modify the above code to use zcat or similar to open zip files and print content to newfile. Could someone please help me to modify the code or propose a new one? Thanks in advance.
Use xargs with zcat (here assuming the GNU implementation for its -r and -d options): <path.txt xargs -rd'\n' zcat -- >>output To zcat output of each .gz file into individual output files, you don't really need to use a shell-loops at all here, just call an inline-script as following: <infile xargs -rd'\n' -I{} sh -c 'zcat -- "$1" >output."${1##*/}"' xargs-sh {}
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/685452", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/422550/" ] }
685,721
I am trying to understand this code: awk 'NR%2{printf "%s ",$0;next;}1' yourFile Now I try to customize it. Given this error.txt content: KEY 4048:1736 string3KEY 0:1772 string1KEY 4192:1349 string1KEY 7329:2407 string2KEY 0:1774 string1 Then: awk 'NR%2{printf NR "%s ", $0; next}1' error.txt ... will return: 1KEY 4048:1736 string 33KEY 0:1772 string 15KEY 4192:1349 string 17KEY 7329:2407 string 29KEY 0:1774 string 1 I guess NR%2 refer to even line numbers, but I am not sure what the 1 refers to. Without 1 , awk 'NR%2{printf NR "%s ", $0; next}' error.txt will return one line. 1KEY 4048:1736 string 3KEY 0:1772 string 5KEY 4192:1349 string 7KEY 7329:2407 string 9KEY 0:1774 string Overall, I am still not getting it. I've looked at these pages so far: https://www.tecmint.com/use-next-command-with-awk-in-linux/ https://stackoverflow.com/a/32482224/15603477 https://stackoverflow.com/a/9605559/15603477
that % is the Modulus/Remainder arithmetic operator , which gives the remainder of dividing one number by another. NR in awk represents the current record (line) number. Written as a bare condition, NR%2 evaluates to 1 (true) for odd-numbered records — for those, the printf action runs, printing the line without a line break, and next skips the rest of the program. For even-numbered records NR%2 is 0, which is false, so that action is skipped and only the 1 at the end applies — an idiom known as an always-true condition, which triggers the default action of printing the record with a newline, completing the line started by the previous printf . See What is the meaning of '1' at the end of an awk script
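A tiny demonstration makes the pairing visible:

$ seq 6 | awk 'NR%2{printf "%s ",$0;next;}1'
1 2
3 4
5 6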
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/685721", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/505362/" ] }
685,754
I just wanted to understand the meaning of the following statement and whether my reading of it is correct.

test -x /usr/bin/find || exit 0

Command 1
Command 2
Command 3

The output of test -x /usr/bin/find is always 0. That means the exit 0 command will be executed, meaning Command 1, 2, 3 will never be executed. Am I right here?
test -x /usr/bin/find (or [ -x /usr/bin/find ] ) does not output anything. The test will be true if /usr/bin/find is an existing executable file, and false if the pathname does not exist, or if it's not executable. If test exits successfully (with a zero exits status, signifying "no error"), the shell will execute the rest of the commands. If it exits with a failure (a non-zero exit status, signifying "some error"), exit 0 will terminate the current shell, preventing the rest of the commands from running. It would arguably be better to use exit 1 or just exit in place of exit 0 when find can't be found in /usr/bin though. Using exit 0 masks the exit status of test (which would be non-zero), and prevents the caller of this script from being notified of the failure of finding find at the given location. Related to the fact that an exit status of zero evaluates to "true" when tested as a boolean in the shell: Why the Unix command exit with non-zero value in Shell and evaluates to True when used in bash if condition? Related to using || and && in general: What are the shell's control and redirection operators?
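You can watch the mechanism by checking the exit status for an existing and a missing file (the second path is just a made-up example):

$ test -x /usr/bin/find; echo $?
0
$ test -x /usr/bin/no-such-tool; echo $?
1

0 means success/true in the shell, so with || the right-hand exit 0 only runs in the second case.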
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/685754", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/508675/" ] }
685,766
I've checked these two questions ( question one , question two ), but they were not helpful for me to understand. I have a file file.txt with 40 lines of Hello World! string. ls -l shows that its size is 520 bytes. Now I archive this file with tar -cvf file.tar file.txt and when I do ls -l again I see that file.tar is 10240 bytes. Why? I've read some manuals and have understood that archiving and compressing are different things. But can someone please explain how it is working?
tar archives have a minimum size of 10240 bytes by default; see the GNU tar manual for details (but this is not GNU-specific). With GNU tar , you can reduce this by specifying either a different block size, or different block factor, or both: tar -cv -b 1 -f file.tar file.txt The result will still be bigger than file.txt , because file.tar stores metadata about file.txt in addition to file.txt itself. In most cases you’ll see one block for the file’s metadata (name, size, timestamps, ownership, permissions), then the file content, then two blocks for the end-of-archive entry, so the smallest archive containing a non-zero-length file is four blocks in size (2,048 bytes with a 512-byte block).
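For the 520-byte file.txt from the question, the arithmetic works out as follows (a sketch assuming the common 512-byte block layout): 1 header block (512 bytes) + 2 content blocks (520 bytes rounded up to 1,024) + 2 end-of-archive blocks (1,024) = 5 blocks = 2,560 bytes, which is what the -b 1 invocation should produce:

$ tar -c -b 1 -f small.tar file.txt
$ ls -l small.tar     # expect a size of 2560 with GNU tar defaults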
{ "score": 7, "source": [ "https://unix.stackexchange.com/questions/685766", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/504939/" ] }
685,794
I though that the syntax for printf statements is printf format, item1, item2, ... as described e.g. here However, in this question printf is used like this: printf NR "%s ", $0 and it works! Why? Is it expected?
There are two features at work here: printf , and AWK string concatenation . NR "%s " produces the concatenation of the value of NR and the string %s ; that is then given to printf as its first argument. A clearer way of writing this would be printf "%d%s ", NR, $0
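This concatenation also explains a classic pitfall: whatever ends up in the first argument is interpreted as a printf format, so data containing % can break it:

$ echo '50% done' | awk '{printf $0 "\n"}'     # data leaks into the format string
$ echo '50% done' | awk '{printf "%s\n", $0}'  # safe: data stays out of the format

With the first form, gawk typically aborts (for this input, with a "not enough arguments to satisfy format string" style error), which is one more reason to always write an explicit format as shown above.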
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/685794", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/459270/" ] }
685,814
I have a bunch of apps installed locally in my home directory. In order for them to be globally available I add them to PATH in .bashrc :

PATH="$PATH:/home/user/apps/app1/bin"
PATH="$PATH:/home/user/apps/app2/bin"
PATH="$PATH:/home/user/apps/appn/bin"

How can I set it up so that I don't have to add each new one? I'm trying this but it's not working:

PATH="$PATH:/home/user/apps/*/bin"

NOTE : I'm aware I can add them with a loop, but I'm also concerned my PATH variable will become too large; I'm wondering if it is possible to wildcard it somehow.
Wildcards will not be expanded in $PATH , no. Per the bash manual , PATH is: A colon-separated list of directories in which the shell looks for commands (my emphasis). Coming from another direction, the Command Search and Execution section of the manual says, in part: If the name is neither a shell function nor a builtin, and contains no slashes, Bash searches each element of $PATH for a directory containing an executable file by that name. ... (my emphasis) -- which makes no mention of any special processing done on the path elements, only that they are expected to be directories (as-is). I'm not sure off-hand what the limit is for the size of a bash variable; I suspect it's available memory. PATH doesn't need to be exported, but many people do; if it is exported, it will need to fit along with other environment variables and arguments into getconf ARG_MAX (ref: https://unix.stackexchange.com/a/124422/117549 ). A large PATH directory should not induce too much of a performance overhead, since bash uses a hash table to remember locations of previously-found commands (per-session). If you do hit a limit (visual or technical) with adding each individual application directory to your PATH, I would recommend adding one "symlink" directory to your PATH where you then link in the desired executables from the various applications.
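A sketch of that last suggestion, following the directory layout from the question (adjust paths to taste):

mkdir -p ~/apps/bin
# link every executable from every app's bin/ into one shared directory
for f in ~/apps/*/bin/*; do
    [ -x "$f" ] && ln -sf "$f" ~/apps/bin/
done

Then a single PATH="$PATH:$HOME/apps/bin" line in .bashrc covers everything; re-run the loop whenever you install a new app.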
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/685814", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/318297/" ] }
685,830
I'd like to upgrade my kernel to try to fix a persistent issue I have with intermittent freezing. I've tried manually installing the kernel, but it throws errors during configuration and then upon sudo apt upgrade it shows: linux-headers-5.16.0-051600-generic : Depends: libssl3 (>= 3.0.0~~alpha1) but it is not installable Is this something that can be worked around? As it stands my Linux installation is unusable and I've been holding out for this kernel as my last thing to try before being forced back to Windows.
WARNING: the below method may break your system. You have been warned. Ubuntu mainline kernels 5.15.7+ and 5.16 bump the requirement from libssl1.1 (>= 1.1.0) to libssl3 (>= 3.0.0~~alpha1) . You can find the change from the header packages:

dpkg -I linux-headers-5.15.6-051506-generic_5.15.6-051506.202112010437_amd64.deb | grep Depends
# Depends: linux-headers-5.15.6-051506, libc6 (>= 2.34), libelf1 (>= 0.142), libssl1.1 (>= 1.1.0), zlib1g (>= 1:1.2.3.3)
dpkg -I linux-headers-5.15.7-051507-generic_5.15.7-051507.202112080459_amd64.deb | grep Depends
# Depends: linux-headers-5.15.7-051507, libc6 (>= 2.34), libelf1 (>= 0.142), libssl3 (>= 3.0.0~~alpha1), zlib1g (>= 1:1.2.3.3)

However, the package libssl3 is only available to Ubuntu 22.04: libssl3 Same as its parent package libssl-dev , 3.0+ is only available to Ubuntu 22.04 too: libssl-dev Therefore, if you're running Ubuntu 21.10 (or below), apt could not find the required libssl3 >= 3.0. You could try manually downloading and installing the package from Ubuntu 22.04: https://packages.ubuntu.com/jammy/amd64/libssl3/download

# wget http://mirrors.kernel.org/ubuntu/pool/main/o/openssl/libssl3_3.0.1-0ubuntu1_amd64.deb
# sudo dpkg -i libssl3_3.0.1-0ubuntu1_amd64.deb

This is NOT recommended , as libssl3 is not included in Ubuntu 21.10 or below and Ubuntu 22.04 has not been formally announced until April. However, libssl3 has almost the same dependencies as libssl1.1. There should be no issue in using it on Ubuntu 21.10. update If you really need these new kernels for Ubuntu 20.04 , download the following debs from Ubuntu 22.04:

libc6_2.34-0ubuntu3_amd64.deb
libc6-dev_2.34-0ubuntu3_amd64.deb
libc-bin_2.34-0ubuntu3_amd64.deb
libc-dev-bin_2.34-0ubuntu3_amd64.deb
libnsl2_1.3.0-2build1_amd64.deb
libnsl-dev_1.3.0-2build1_amd64.deb
libssl3_3.0.1-0ubuntu1_amd64.deb
locales_2.34-0ubuntu3_all.deb
rpcsvc-proto_1.4.2-0ubuntu5_amd64.deb

If you trust me, I made a copy to Google Drive: Google drive Once you have downloaded all of the above into one folder, run:

# assume root and in this folder
dpkg --force-depends --install *.deb
apt --fix-broken install

Your Ubuntu 20.04 is now good for kernel 5.16. It was tested on my server for a week and nothing went wrong. However, it is known that this still does NOT work on some systems and breaks them! Use at your own risk! Please wait for Ubuntu 22.04 in the coming April.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/685830", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/262714/" ] }
686,068
Is there a way to extract the contents of an ISO image file to a folder in one step? I have been doing this and want to do less typing, and not to have to do the mount -o loop as well as the need to be root to do the mount command to access the ISO image contents:

cp rhel-server-7.6-x86_64-dvd.iso /home/ron/
mkdir /home/ron/temp
mount -o loop /root/rhel-server-7.6-x86_64-dvd.iso /home/ron/temp
mkdir /home/ron/rhel7.6dvd
mv /home/ron/temp/* /home/ron/rhel7.6dvd
rmdir /home/ron/temp
There's multiple programs that can just treat an ISO-9660 file as an archive. 7z is a popular one: 7z x yourfile.iso works. If this is a modern Linux with a user session manager running, udisksctl loop-setup -f yourfile.iso is a way to get your file into a loop device, and consequently automounted. Also: mv /home/ron/temp/* /home/ron/rhel7.6dvd that line must be throwing a lot of errors: a mounted ISO image is read-only, so you can't move things away from it. cp instead.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/686068", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/154426/" ] }
686,124
In a file that has any garbled text before and after a section that is marked by patterns START and END (specific strings that occur only once each and in the correct order and on the same line). I would like to do some string manipulation ONLY on the part between START and END Example input: aomodi3hriq32| ¶³r 0q93aoiSTART_this_is_to_be_modified_ENDaqsdofuha23uru| ²23i ii3uhfiaoawpo3<9"§ A hSTART_this_also_needs_modification_ENDqa 032/a237(°1Q"§ >A_this_START changeme ENDnot_this_modias In terms of sed -operations, the substring (and the substring only) between START and END should be modified as if I used sed 's/_this_// ; s/modi/MODI/ ; y/as/45/' . Example output: aomodi3hriq32| ¶³r 0q93aoiSTARTi5_to_be_MODIfied_ENDaqsdofuha23uru| ²23i ii3uhfiaoawpo3<9"§ A hSTART4l5o_need5_MODIfic4tion_ENDqa 032/a237(°1Q"§ >A_this_START ch4ngeme ENDnot_this_modias awk with FS="START|END" fails as the OFS cannot be set to multiple values at different positions. I tried using sed with a nested command substitution and different separators ( ~ ) but failed and also fear that there might be characters before START /after END that will mess with the command (e.g. a / ). The idea was to only select the "inner" substring and do the operations then use it as part of the replacement: sed "s/^\(.*\)START.*END\(.*\)$/\1$(sed 's~^.*START~~ s~END.*~~ s~_this_~~ s~modi~MODI~ y~as~45~' infile)\2/" infile I am not familiar with e.g. perl .... but whatever it takes. Is there any way to make a set of sed -operations apply to a REGEX-matched substring of a line only?
perl -CSD -ne '
    if (my ($before, $between, $after) = /^(.*START)(.*)(END.*)/) {
        s/_this_//, s/modi/MODI/, tr/as/45/ for $between;
        print "$before$between$after\n";
    } else {
        print;
    }' -- file

-CSD decodes the input from UTF-8 and encodes output to UTF-8. Instead of populating the three variables $before , $between , and $after , we could have used /p with ${^PREMATCH} and ${^POSTMATCH} , but I don't find the solution nicer:

if (my ($s) = /START(.*)END/p) {
    s/_this_//, s/modi/MODI/, tr/as/45/ for $s;
    print "${^PREMATCH}START${s}END${^POSTMATCH}";
} else {
    print;
}

If START...END parts can be repeated on a single line, you need to loop over each line:

for my $part (split /(START.*?END)/) {
    if ($part =~ /^START.*END$/) {
        s/_this_//, s/modi/MODI/, tr/as/45/ for $part;
    }
    print "$part";
}
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/686124", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/123460/" ] }
686,296
This is a great answer: https://stackoverflow.com/a/6739327/15603477 But it still confuses me a little bit. Without a variable,

awk '/^nameserver/ { printf("nameserver 127.0.0.1\n")} {print}' file2

will get:

# Generated by NetworkManager
domain dhcp.example.com
search dhcp.example.com
nameserver 127.0.0.1
nameserver 10.0.0.1
nameserver 127.0.0.1
nameserver 10.0.0.2
nameserver 127.0.0.1
nameserver 10.0.0.3

After trying several combinations, I found out that I had to use

awk '/^nameserver/ && !a { printf("nameserver 127.0.0.1\n"); a=1 } {print}' file2

But I am still confused about how !a and a=1 work to stop the printf("nameserver 127.0.0.1\n") duplication.
this kind of variable is known as a "flag variable": it lets the program know that a certain condition has been met, and then decide based on its value how to proceed; here it's a simple control flag. As long as the flag is unset and has a 0 or empty value (in awk a variable's default value is 0 when doing integer comparison, or the empty string when doing string comparison), the string will be printed and the flag a=1 will be set, so the duplicate is no longer added because the value is now non-zero. In many programming languages, ! (not) negates the evaluation result of a statement; used like ! a , it negates the evaluation result of the a variable: if a=0 , then ! a returns 1, and it returns 0 if a=1 .
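A stripped-down demonstration of the same pattern — act on the first match only, pass everything through:

$ printf 'x\nmatch\nmatch\n' | awk '/match/ && !seen {print "-> first match"; seen=1} 1'
x
-> first match
match
match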
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/686296", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/505362/" ] }
686,330
There are multiple related Questions; it seems they don't use awk to solve the problem: Extracting positive/negative floating-point numbers from a string How to extract the numbers from a filename

echo "blah foo123bar234blah" | egrep -o '([0-9]+)'

returns:

123
234

But echo "blah foo123bar234blah" | mawk '{ match($0,/([0-9]+)/,m); print m[0], m[1],m[2]}' returns 123 123 and echo "blah foo123bar234blah" | awk '{ match($0,/([0-9]+).+([0-9]+)/,m); print m[0], m[1],m[2]}' returns 123bar234 123 4 In the manual , in the section match(string, regexp [, array]) , the example is: echo foooobazbarrrrr | gawk '{ match($0, /(fo+).+(bar*)/, arr); print arr[1], arr[2]}' which returns foooo barrrrr . So how can I extract multiple numbers from a string using awk (equivalent of grep -o )?
With GNU awk for multi-char RS and RT:

$ echo "blah foo123bar234blah" | awk -v RS='[0-9]+' '$0=RT'
123
234

With any awk (and retaining the original regexp instead of negating it as that's only easy with a simple bracket expression and not a robust general approach):

$ echo "blah foo123bar234blah" | awk -v FS='\n' '{gsub(/[0-9]+/,FS"&"FS); for (i=2;i<=NF;i+=2) print $i}'
123
234

or:

$ echo "blah foo123bar234blah" | awk '{ while (match($0,/[0-9]+/) ) {print substr($0,RSTART,RLENGTH); $0=substr($0,RSTART+RLENGTH)} }'
123
234
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/686330", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/505362/" ] }
686,458
I'm trying to search for PDF files that have more than 100 pages and then moving them into a specific directory in the UNIX/LINUX terminal. Something a bit like this: find . -name '*.pdf' -pagenumber>100 -exec mv -t ~/directory Obviously -pagenumber>100 is not the right command. Is there a specific command for this?
The difficult bit here is to count the number of pages in a PDF document. The find utility can't do this by itself, so we need an external tool to do this. On most Unix systems, you will be able to install exiftool . This tool is part of the libimage-exiftool-perl package on Ubuntu, and of p5-Image-ExifTool on OpenBSD. It is able to do many things related to meta data in media files, for example to output the number of pages in a PDF document:

$ exiftool -s3 -PageCount document.pdf
10

We can use this with find to move the documents with more than 100 pages to a separate directory:

mkdir -p ~/tmp/100-plus-pages || exit

find . -name '*.pdf' -type f -exec sh -c '
    for pathname do
        if [ "$(exiftool -s3 -PageCount "$pathname")" -gt 100 ]; then
            mv "$pathname" ~/tmp/100-plus-pages
        fi
    done' sh {} +

This calls a short in-line script for batches of found PDF files. The in-line script iterates over the current batch of found files and runs the exiftool command on each. If the number outputted by the command is strictly greater than 100, the file is moved to the 100-plus-pages directory in ~/tmp . We want to avoid searching the destination directory for PDF files, which is why I chose to create that directory under ~/tmp (anywhere separate from where find will search would do, but you probably want it to be on the same filesystem). You could also do as follows to avoid entering the directory if you want to keep it in the current directory:

mkdir -p 100-plus-pages || exit

find . -path ./100-plus-pages -prune -o -name '*.pdf' -type f -exec sh -c '
    for pathname do
        if [ "$(exiftool -s3 -PageCount "$pathname")" -gt 100 ]; then
            mv "$pathname" 100-plus-pages
        fi
    done' sh {} +

You may want to test run this with mv replaced by echo first.
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/686458", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/486557/" ] }
686,459
My application requires maximum single-thread performance and suffers from switching to the Intel E cores. I am looking for a way to disable E cores on an Intel i9-12900K on my Ubuntu 20.04 machine without access to the BIOS (it is a rented dedicated server), or for any possible way to distinguish such cores and assign CPU affinities using taskset to exclude them from execution. I tried to find the answer myself on Google; I only found that there are indeed scheduler issues for now, but there are no clear fixes or workarounds available for my problem.
taskset is a standard feature to assign cores to applications, which works perfectly in your situation. E.g. in the case of the Intel Core i9 12900K, pin your task to the first sixteen logical CPUs and you're good to go:

taskset 0xFFFF application
taskset -c 0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15 application

The second form is longer but easier to read. AFAIK the standard Linux kernel doesn't currently have any infrastructure to hint the kernel that certain applications need to use certain types of cores. Yes, the Linux kernel supports BIG.little ARM architectures, but I've not heard of an API to utilize this feature. As of January 2022 the Linux kernel does not support Intel Thread Director in any shape or form. There have been no patches, nothing.

Lastly, it's worth noting that Linux and Windows differ in how they report HT/SMT siblings. Windows lists them in pairs, i.e. Core 1: Thread 1 Thread 2, Core 2: Thread 1 Thread 2, etc. Linux first lists all physical cores, then their HT/SMT siblings. So, if you want to test physical cores without using HT/SMT on a CPU with eight physical cores (sixteen logical CPUs), you'll do this:

taskset -c 0,1,2,3,4,5,6,7 application
taskset 0xFF application

More on it here: How do I know which processors are physical cores?

Option N2: you can put E cores offline, and they will become invisible to your system:

echo 0 | sudo tee /sys/devices/system/cpu/cpu{NN}/online

For the Intel Core i9 12900K that'll be:

for i in {16..23}; do echo 0 | sudo tee /sys/devices/system/cpu/cpu${i}/online; done
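To undo the offlining later, or to check what the kernel currently sees, something like this should work (same sysfs paths as above):

# bring the E cores back online
for i in {16..23}; do echo 1 | sudo tee /sys/devices/system/cpu/cpu${i}/online; done

# confirm which logical CPUs are online right now
cat /sys/devices/system/cpu/online
lscpu | grep -i 'On-line'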
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/686459", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/510251/" ] }
686,502
I am trying to run top with multiple PIDs using the -p option and xargs . However, top fails to run with the error top: failed tty get :

$ pgrep gvfs | paste -s -d ',' | xargs -t top -p
top -p 1598,1605,1623,1629,1635,1639,1645,1932,2744
top: failed tty get

I used the -t option for xargs to see the full command which is about to be executed. It seems fine and I can run it successfully by hand:

top -p 1598,1605,1623,1629,1635,1639,1645,1932,2744

However, it does not run with xargs . Why is that?
Turns out that there is a special option --open-tty in xargs for interactive applications like top . From man xargs : -o, --open-tty Reopen stdin as /dev/tty in the child process before executing the command. This is useful if you want xargs to run an interactive application. The command to run top should be: pgrep gvfs | paste -s -d ',' | xargs --open-tty top -p
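As an aside, pgrep can emit a comma-delimited list itself via its -d option, which sidesteps both paste and xargs entirely; this is probably the shortest fix here:

top -p "$(pgrep -d, gvfs)"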
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/686502", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/87918/" ] }
686,513
I want to understand how APT packages are managed in general, considering the following situation I got into today: I was trying to add MongoDB to my Debian machine. apt search mongodb showed good-looking results, and before attempting to install I read the MongoDB documentation, which stated:

Follow these steps to run MongoDB Community Edition on your system. These instructions assume that you are using the official mongodb-org package -- not the unofficial mongodb package provided by Debian -- and are using the default settings.

From this, I understood and was surprised that what I get from Debian's apt install is unofficial by the developers of the app. This sounds worse than "not recommended". I do understand the Debian APT package repository tends to show old versions and is never meant to catch up with the latest leading-edge updates. There are so many ways to deal with this, but now I'm concerned by the word unofficial . Does this mean packages related to MongoDB (or any other app) in the APT repository aren't officially approved by the app developers? Or were they officially shipped by the developers but "avoid because it's not the latest version"? Or did someone (some entity?) copy from the official installation package and paste it to APT? I'm not trying to understand just this specific case with MongoDB. Instead I want to understand the overall "politics" on applications and APT. How does it work, how was it supposed to work? If this is a noob question then I'm sorry, but I couldn't find a good explanation online. Any links or references would be appreciated.
Packages in all distributions (not only Debian) are usually not packaged by the developers of the application, but by members of the community of the distribution, usually called packagers or package maintainers . Sometimes the application developer can also be the packager in some distributions, but it isn't a rule, and developers definitely cannot maintain their application in all distributions (for example, I maintain my software in Fedora, but it is packaged by someone else in Debian).

When it comes to "approval" and being "official" or "unofficial": we are talking about free software here, the licenses allow distributing the software, so you don't need anyone's approval to package software for a distribution. The developers may disagree with the way their software is being packaged and shipped, but that's all they can do. I'm not sure what makes a package (un)official. I guess all packages are in theory unofficial because they are made by a third party. It probably depends on your definition of being (un)official.

One thing that can cause tension between packagers and developers is the release cycle. Distributions (especially "stable" distributions like Debian Stable or RHEL/CentOS) have their own release cycle and their own promises about software and API stability, which is usually different from the upstream release cycle. This is the reason why you see older versions in your distribution, usually with some bug-fix backports. And sometimes upstream developers don't like this, because they get bug reports for things that are already fixed but not backported, etc. And sometimes packagers make their own decisions about compile-time options and other things that change (default) functionality of the software, which can also be annoying. So developers tell you something like "Use our 'official' packages instead of your distribution packages", and it's up to the user to decide what is best for them.
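As a practical aside, you can check who maintains a package and which repository it would come from before installing; a quick sketch (the package name mongodb-server here is only an example and may differ on your release):

$ apt-cache policy mongodb-server
$ apt show mongodb-server 2>/dev/null | grep -i '^Maintainer'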
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/686513", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/496102/" ] }
686,516
What I am trying to do (in bash) is: for i in <host1> <host2> ... <hostN>; do ssh leroy@$i "sudo -i; grep Jan\ 15 /var/log/auth.log" > $i;done to get just today's entries from these hosts auth.logs and aggregate them on my local filesystem. sudo is required because auth.log only allows root access. Using the root user isn't an option because that account is disabled. Using key-based authentication isn't an option because the systems implement 2FA (key and password). When I do the above (after initial authentication) I get sudo: a terminal is required to read the password; either use the -S option to read from standard input or configure an askpass helper I have tried various parameters to the -S option and included the -M option, nothing works. Searching the web doesn't surface anything with this exact situation.
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/686516", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/436589/" ] }
686,517
I have read nearly every answer about this topic on this website or Stackoverflow but didn't manage to solve the issue below. When I copy the text from a PDF file and paste it into a text file file.txt ) , the text looks normal but when I use cat command: cat -v file.txt The output is: vbox = NoneM-BM- M-BM- M-BM- M-BM- def __init__(self, title="Error!", parent=None,M-BM- M-BM- M-BM- M-BM- M-BM- M-BM- M-BM- M-BM- M-BM- M-BM- M-BM- M-BM- M-BM- M-BM- M-BM- M-BM- M-BM- M-BM- flags=Gtk.DialogFlags.MODAL, buttons=("NO",M-BM- M-BM- M-BM- M-BM- M-BM- M-BM- M-BM- M-BM- M-BM- M-BM- M-BM- M-BM- M-BM- M-BM- M-BM- M-BM- M-BM- M-BM- Gtk.ResponseType.NO, "_YES",M-BM- M-BM- M-BM- M-BM- M-BM- M-BM- M-BM- M-BM- M-BM- M-BM- M-BM- M-BM- M-BM- M-BM- M-BM- M-BM- M-BM- M-BM- M-BM- M-BM- M-BM- M-BM- M-BM- M-BM- M-BM- M-BM- M-BM- Gtk.ResponseType.YES)):M-BM- M-BM- M-BM- M-BM- M-BM- M-BM- M-BM- M-BM- super().__init__(title=title, parent=parent, flags=flags,M-BM- M-BM- M-BM- M-BM- M-BM- M-BM- M-BM- M-BM- M-BM- M-BM- M-BM- M-BM- M-BM- M-BM- M-BM- M-BM- M-BM- M-BM- M-BM- M-BM- M-BM- M-BM- M-BM- M-BM- M-BM- buttons=buttons)M-BM- M-BM- M-BM- M-BM- M-BM- M-BM- M-BM- M-BM- self.vbox = self.get_content_area()M-BM- M-BM- M-BM- M-BM- M-BM- M-BM- M-BM- M-BM- self.hbox = Gtk.Box(orientation=Gtk.Orientation.HORIZONTAL,spacing=5)M-BM- M-BM- M-BM- M-BM- M-BM- M-BM- M-BM- M-BM- icon_theme = Gtk.IconTheme.get_default()M-BM- M-BM- M-BM- M-BM- M-BM- M-BM- M-BM- M-BM- icon = icon_theme.load_icon("dialog-question", 48,M-BM- M-BM- M-BM- M-BM- M-BM- M-BM- M-BM- M-BM- M-BM- M-BM- M-BM- M-BM- M-BM- M-BM- M-BM- M-BM- M-BM- M-BM- M-BM- M-BM- M-BM- M-BM- M-BM- M-BM- M-BM- M-BM- M-BM- M-BM- M-BM- M-BM- M-BM- M-BM- M-BM- M-BM- M-BM- M-BM- Gtk.IconLookupFlags.FORCE_SVG)M-BM- M-BM- M-BM- M-BM- M-BM- M-BM- M-BM- M-BM- image = Gtk.Image.new_from_pixbuf(icon)M-BM- M-BM- M-BM- M-BM- M-BM- M-BM- M-BM- M-BM- self.hbox.pack_start(image, False, False, 5)M-BM- M-BM- M-BM- M-BM- M-BM- M-BM- M-BM- M-BM- self.vbox.add(self.hbox)M-BM- M-BM- M-BM- M-BM- def set_message(self, message, add_msg=None):M-BM- M-BM- M-BM- M-BM- M-BM- M-BM- M-BM- M-BM- self.hbox.pack_start(Gtk.Label(message), False, False, 5)M-BM- M-BM- M-BM- M-BM- M-BM- M-BM- M-BM- M-BM- if add_msg != None: Or when I use bat command : bat -A file.txt The output is: 
vbox•=•None␊\u{a0}\u{a0}\u{a0}\u{a0}def•__init__(self,•title="Error!",•parent=None,␊\u{a0}\u{a0}\u{a0}\u{a0}\u{a0}\u{a0}\u{a0}\u{a0}\u{a0}\u{a0}\u{a0}\u{a0}\u{a0}\u{a0}\u{a0}\u{a0}\u{a0}\u{a0}flags=Gtk.DialogFlags.MODAL,•buttons=("NO",␊\u{a0}\u{a0}\u{a0}\u{a0}\u{a0}\u{a0}\u{a0}\u{a0}\u{a0}\u{a0}\u{a0}\u{a0}\u{a0}\u{a0}\u{a0}\u{a0}\u{a0}\u{a0}Gtk.ResponseType.NO,•"_YES",␊\u{a0}\u{a0}\u{a0}\u{a0}\u{a0}\u{a0}\u{a0}\u{a0}\u{a0}\u{a0}\u{a0}\u{a0}\u{a0}\u{a0}\u{a0}\u{a0}\u{a0}\u{a0}\u{a0}\u{a0}\u{a0}\u{a0}\u{a0}\u{a0}\u{a0}\u{a0}\u{a0}Gtk.ResponseType.YES)):␊\u{a0}\u{a0}\u{a0}\u{a0}\u{a0}\u{a0}\u{a0}\u{a0}super().__init__(title=title,•parent=parent,•flags=flags,␊\u{a0}\u{a0}\u{a0}\u{a0}\u{a0}\u{a0}\u{a0}\u{a0}\u{a0}\u{a0}\u{a0}\u{a0}\u{a0}\u{a0}\u{a0}\u{a0}\u{a0}\u{a0}\u{a0}\u{a0}\u{a0}\u{a0}\u{a0}\u{a0}\u{a0}buttons=buttons)␊\u{a0}\u{a0}\u{a0}\u{a0}\u{a0}\u{a0}\u{a0}\u{a0}self.vbox•=•self.get_content_area()␊\u{a0}\u{a0}\u{a0}\u{a0}\u{a0}\u{a0}\u{a0}\u{a0}self.hbox•=•Gtk.Box(orientation=Gtk.Orientation.HORIZONTAL,␊spacing=5)␊\u{a0}\u{a0}\u{a0}\u{a0}\u{a0}\u{a0}\u{a0}\u{a0}icon_theme•=•Gtk.IconTheme.get_default()␊\u{a0}\u{a0}\u{a0}\u{a0}\u{a0}\u{a0}\u{a0}\u{a0}icon•=•icon_theme.load_icon("dialog-question",•48,␊\u{a0}\u{a0}\u{a0}\u{a0}\u{a0}\u{a0}\u{a0}\u{a0}\u{a0}\u{a0}\u{a0}\u{a0}\u{a0}\u{a0}\u{a0}\u{a0}\u{a0}\u{a0}\u{a0}\u{a0}\u{a0}\u{a0}\u{a0}\u{a0}\u{a0}\u{a0}\u{a0}\u{a0}\u{a0}\u{a0}\u{a0}\u{a0}\u{a0}\u{a0}\u{a0}\u{a0}Gtk.IconLookupFlags.FORCE_SVG)␊\u{a0}\u{a0}\u{a0}\u{a0}\u{a0}\u{a0}\u{a0}\u{a0}image•=•Gtk.Image.new_from_pixbuf(icon)␊\u{a0}\u{a0}\u{a0}\u{a0}\u{a0}\u{a0}\u{a0}\u{a0}self.hbox.pack_start(image,•False,•False,•5)␊\u{a0}\u{a0}\u{a0}\u{a0}\u{a0}\u{a0}\u{a0}\u{a0}self.vbox.add(self.hbox)␊\u{a0}\u{a0}\u{a0}\u{a0}def•set_message(self,•message,•add_msg=None):␊\u{a0}\u{a0}\u{a0}\u{a0}\u{a0}\u{a0}\u{a0}\u{a0}self.hbox.pack_start(Gtk.Label(message),•False,•False,•5)␊\u{a0}\u{a0}\u{a0}\u{a0}\u{a0}\u{a0}\u{a0}\u{a0}if•add_msg•!=•None:␊ On Visual studio code, when I hover on those characters, I get: The character U+00a0 is not a basic ASCII character. How can I use sed command to replace those characters with normal "space" characters?
Looks like the UTF-8 encoding of the non-breaking space (U+00A0) , the bytes are c2 a0 in hex. Something like sed -e 's/\xc2\xa0/ /g' in GNU sed should work to replace them with regular spaces.
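For an in-place fix plus a quick check that nothing is left over, a small sketch (GNU sed's -i with a .bak backup; the $'...' quoting is a bash feature):

sed -i.bak 's/\xc2\xa0/ /g' file.txt
grep -c $'\xc2\xa0' file.txt    # should now print 0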
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/686517", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/504663/" ] }
686,551
I wanted to get the month name on macOS 11.6, and I tried: checking the man page of date ; checking the man page of strptime . But I couldn't figure out what format specifier to use to display the month name. After searching the internet, it seems that %b displays it. I would like to know: where can I find all the information about specifiers within UNIX? What's the official source, if the man page doesn't have this info?
The manual for the C language function strftime() ( man strftime ) should contain all the date format specifiers across most Unix and Unix-like systems. The strptime() function has to do with parsing strings into time values (which is not what you want to do), whereas strftime() has to do with outputting time values as formatted strings (which you want to do). See also the POSIX specification for the strftime() interface .
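For a quick sanity check of a specifier without leaving the shell, date accepts the same strftime() format strings:

$ date '+%b'    # abbreviated month name, e.g. Jan
$ date '+%B'    # full month name, e.g. January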
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/686551", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/17265/" ] }
686,559
I am trying to start bash with openvt from the init system. For this I wrote the following script:

#!/bin/bash
openvt -c 8 -- /bin/bash

It starts and runs, but the Ctrl-C and Ctrl-4 shortcuts don't work. Ctrl-D, Ctrl-S and Ctrl-Q work fine. I also noticed that if I run this script manually from the terminal, it works without problems, but if I run it from another script in the background (&), the described problem occurs. In general, my task is to run an arbitrary program on a free tty. In this example, I've kept the code to a minimum to make the problem more specific.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/686559", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/510330/" ] }
686,662
Trying to delete files using a list of the headers and a wildcard. The headers for the files are the same but the endings are different. Not sure how to get the wildcard in there. I was trying to download some SRA files from NCBI and ran into download issues. Files that didn't download correctly created file1.fastq and file1.sra.cache (and so on for file2, file3, etc...). Files that downloaded correctly give me file.fastq and file.sra.cache, so my downloads look like this:

file1.fastq
file1.sra
file2.fastq
file2.sra.cache
file3.fastq
file3.sra.cache

Where file1 is a successful download, but not file2 or file3. I want to delete all files associated with file2 and file3. I figured out which files didn't download correctly with ls *.sra.cache . I now have a list of the faulty file headers (e.g., file2, file3, ...; so just the beginning part). How could I feed in the list of filenames and add a wildcard to remove them? I'd like to make a list of filenames with a wildcard, like

file2*
file3*

and do something like cat list.txt | xargs rm , but I'm not sure how to get the wildcard in there to work. Unix thinks * is part of the filename if I put it into the list itself.
No need to use a separate file list or alike, you can use a simple for -loop like:

for f in *.sra.cache; do
    rm -- "${f%sra.cache}"*
done

${f%sra.cache} removes the sra.cache extension from the file name.
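It is worth previewing what would be deleted first. The same loop with echo in front of rm prints the commands instead of running them:

for f in *.sra.cache; do
    echo rm -- "${f%sra.cache}"*
done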
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/686662", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/510447/" ] }
686,861
I'm trying to obtain the processID of pcmanfm like this:

pgrep -f "pcmanfm"

When pcmanfm is not running, the command above returns nothing (as I expect). However, when I run the command from python, it returns a process ID even when pcmanfm is not running:

processID = os.system('pgrep -f "pcmanfm"')

Furthermore, if you run the command above multiple times at a python3 prompt, it returns a different processID each time. All the while, pcmanfm has been closed prior to these commands.

>>> processID = os.system('pgrep -f "pcmanfm"')
17412
>>> processID = os.system('pgrep -f "pcmanfm"')
17414
>>> processID = os.system('pgrep -f "pcmanfm"')
17416

This is really messing up my ability to launch pcmanfm if it isn't currently running. My script thinks it is running when it isn't. Why is this happening? I'm actually encountering this issue in an Autokey script that I've attempted to write based on this video I watched. Here's my current script:

processID = system.exec_command('pgrep -f "pcmanfm" | head -1',True)
dialog.info_dialog("info",processID)
if (processID):
    cmd = "wmctrl -lp | grep " + processID + " | awk '{print $1}'"
    windowID = system.exec_command(cmd,True)
    # dialog.info_dialog("info",windowID)
    cmd = "wmctrl -iR " + windowID
    #dialog.info_dialog("info",cmd)
    system.exec_command(cmd,False)
else:
    #os.system("pcmanfm /home/user/Downloads")
    cmd = "/usr/bin/pcmanfm /home/user/Downloads"
    system.exec_command(cmd,False)

The problem is, I keep getting processIDs even when pcmanfm isn't running. The script properly focuses pcmanfm if it is running, but it won't launch it if it isn't.

Update: I finally got this script to work by taking out -f and replacing it with -nx (from @they 's advice). Also, I added some exception handling to ignore autokey exceptions caused by empty output that's expected. Additionally, I converted it to a (more flexible) function so that it will service a wider variety of commands/applications:

import re

def focusOrLaunch(launchCommand):
    appName = re.findall('[^\s/]+(?=\s|$)',launchCommand)[0]
    processID = None
    try:
        processID = system.exec_command('pgrep -nx "' + appName + '"',True)
    except Exception as e:
        #dialog.info_dialog("ERROR",str(e))
        pass
    #dialog.info_dialog("info",processID)
    if (processID):
        cmd = "wmctrl -lp | grep " + processID + " | awk '{print $1}'"
        windowID = system.exec_command(cmd,True)
        # dialog.info_dialog("info",windowID)
        cmd = "wmctrl -iR " + windowID
        #dialog.info_dialog("info",cmd)
        system.exec_command(cmd,False)
    else:
        system.exec_command(launchCommand,False)

cmd = "/usr/bin/pcmanfm ~/Downloads"
focusOrLaunch(cmd)
Proposed solution: Remove the -f option from your pgrep command.

Explanation: You probably get the process ID of the shell that is executed to run your command. A new shell process with a new PID will be created for every system.exec_command . Run e.g.

sh -c 'pgrep -af nonexistent'

and check the output. You will probably get something like

11300 sh -c pgrep -af nonexistent

With an existing command I also get a line for the shell:

sh -c 'pgrep -af sshd'
695 /usr/sbin/sshd -D
11207 sshd: pi [priv]
11224 sshd: pi@pts/0
11331 sshd: [accepted]
11343 sh -c pgrep -af sshd

Depending on the PID values, your head command might extract the PID of a process you are looking for or the PID of the shell process. With option -f you explicitly tell pgrep to search the whole command line instead of the process name only. This way it will find the string in the shell's command line argument. Without -f you won't get the shell process.

$ sh -c 'pgrep -a sshd'
695 /usr/sbin/sshd -D
11207 sshd: pi [priv]
11224 sshd: pi@pts/0
11364 sshd: [accepted]
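Since pgrep 's exit status already tells you whether anything matched, a plain shell test avoids parsing PIDs at all. A sketch using -x for an exact process-name match:

if pgrep -x pcmanfm >/dev/null; then
    echo "pcmanfm is already running"
else
    pcmanfm ~/Downloads &
fi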
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/686861", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/40149/" ] }
686,980
I have a script like this:

find path -type f -exec md5sum {} +

It gives this output:

/tmp❯ find $pwd -type f -exec md5sum {} +
a7c8252355166214d1f6cd47db917226  ./guess.bash
e1c06d85ae7b8b032bef47e42e4c08f9  ./qprint.bash
8d672b7885d649cb76c17142ee219181  ./uniq.bash
2d547f5b610ad3307fd6f466a74a03d4  ./qpe
523166a51f0afbc89c5615ae78b3d9b0  ./Makefile
57a01f2032cef6492fc77d140b320a32  ./my.c
c5c7b1345f1bcb57f6cf646b3ad0869e  ./my.h
6014bc12ebc66fcac6460d634ec2a508  ./my.exe
0ff50f0e65b0d0a5e1a9b68075b297b8  ./levik/2.txt
5f0650b247a646355dfec2d2610a960c  ./levik/1.txt
5f0650b247a646355dfec2d2610a960c  ./levik/3.txt

We need output like this:

5f0650b247a646355dfec2d2610a960c  ./levik/1.txt
5f0650b247a646355dfec2d2610a960c  ./levik/3.txt
If you’ve got GNU uniq , you can ask it to show all lines duplicating the first 32 characters¹: find path -type f -exec md5sum {} + | sort | uniq -D -w32 The list needs to be sorted since uniq only spots consecutive duplicates. This also assumes that none of the file paths contain a newline character; to handle that, assuming GNU implementations of all the tools, use: find . -type f -exec md5sum -z {} + | sort -z | uniq -z -D -w32 | tr '\0' '\n' (GNU md5sum has its own way of handling special characters in file names , but this produces output which isn’t usable with uniq in the way shown above.) ¹ Technically, in current versions of GNU uniq , it's the first 32 bytes that are considered, for instance UTF-8 encoded á and é characters would be considered identical by uniq -w1 as their encoding both start with the 0xc3 byte. In the case of 0-9a-f characters found in hex-encoded MD5 sums though, that makes no difference as those characters are always encoded on one byte.
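Without GNU uniq , a portable two-pass awk over a temporary file does the same grouping (this assumes, as above, that no pathname contains a newline):

find path -type f -exec md5sum {} + | sort > /tmp/sums
awk 'NR == FNR { count[$1]++; next } count[$1] > 1' /tmp/sums /tmp/sums

The first pass counts occurrences of each checksum; the second prints only the lines whose checksum appeared more than once.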
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/686980", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/498234/" ] }
686,992
I'm wondering what is the meaning of the process identifier in Linux, is it the order of the process? Is it a code that identifies the nature of the process or simply a number randomly generated to uniquely identify a process? Are different processes with a similar PID related in some way?
PID is short for 'process identifier'. That's exactly what it is, a way to 'uniquely' identify a process on the system. Note that I have 'uniquely' in quotes here. This is because a PID is only unique for the lifetime of the process it is assigned to.

As far as how a PID is chosen, it varies by system. The original approach is to simply assign the next number that has not been used, up to some maximum value, and once you get to that max you start reusing previously used but currently unused numbers, starting back from the lowest such number again. Linux takes that original approach, because it's simple and fast. The downside is that some poorly written software may rely on PIDs in ways that it should not (such as using them to seed an internal random number generator or create a temporary file name), which allows for some potentially nasty local exploits if you're using such software (but such software is thankfully increasingly rare).

Some systems, such as OpenBSD, instead pick PIDs at random from the currently unused values between 1 and the maximum. This eliminates the local security issues, but in exchange it slows down creation of new processes, opens you up to random users on the internet doing potentially nasty things (such as the exploit outlined in this security Stack Exchange question), and possibly breaks software that expects PIDs to not be reused quickly.

Others, like FreeBSD, allow you to choose either approach, or alternatively use a middle ground. This allows you to pick which particular set of security issues you want to deal with (hint, it's probably the local issues, not the remote issues), or even choose a middle ground (which is usually the correct choice).
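On Linux you can inspect (and raise) the wrap-around ceiling mentioned above through sysctl; a quick sketch:

$ cat /proc/sys/kernel/pid_max
$ sudo sysctl -w kernel.pid_max=4194304    # 2^22, the kernel's upper limit on 64-bit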
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/686992", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/510756/" ] }
687,108
Assume that i have a text file containing the following 5 lines: Tue 18 2022 car model: Toyota , car motor: 2001 , car color: blue , year of production: 2018Thu 19 2022 car model: Mercedes , car color: black , year of production: 2012 , car motor: 4000Thu 20 2022 used: yes , car motor: 1999 , car model: Mercedes , car color: black , year of production: 2012Thu 20 2022 car model: Kia , car motor: 1500 , car color: red , used: no , year of production: 2010Thu 20 2022 price: 150, car model: GMC , car color: purple , car motor: 3500 , year of production: 2010 i'm looking for grep/awk (or other utility that's available on freebsd 11) that will find/print every line where the following condition evaluated TRUE: Phrase "car motor:" followed by a space and then a numerical value greater than 2000 Such grep/awk is expected to find/print the following lines from the text file: Tue 18 2022 car model: Toyota , car motor: 2001 , car color: blue , year of production: 2018Thu 19 2022 car model: Mercedes , car color: black , year of production: 2012 , car motor: 4000Thu 20 2022 price: 150, car model: GMC , car color: purple , car motor: 3500 , year of production: 2010
I would think that perl would be available on freebsd, and your requirements translate quite directly: perl -ne 'print if /car motor: (\d+)/ and $1 > 2000' file
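If you would rather stay with the base tools, the stock /usr/bin/awk on FreeBSD can do it as well: no gawk-style third argument to match() , just RSTART / RLENGTH and the fixed 11-character length of the "car motor: " prefix:

awk 'match($0, /car motor: [0-9]+/) {
         n = substr($0, RSTART + 11, RLENGTH - 11) + 0
         if (n > 2000) print
     }' file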
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/687108", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/510854/" ] }
687,116
I've just updated my laptop running KDE Neon with the help of pkcon refresh && pkcon update . After restarting my laptop screen shows a weird static noise on the screen which I cannot remove. You can see this in the following video I made: https://www.youtube.com/watch?v=k-oZB6sCptU Weirdly enough, my external screens are working just fine. neofetch shows the following: display manager: ssdm I really have no clue what might be causing this and how to solve it. Does anyone know what is going wrong here? Please let me know if I need to provide more info!
I would think that perl would be available on freebsd, and your requirements translate quite directly: perl -ne 'print if /car motor: (\d+)/ and $1 > 2000' file
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/687116", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/510865/" ] }
687,126
I have a directory /u01/oracle/folders with these subfolders:

[root@ricusesasctl01vm tax_receipts]# ls -ltr
total 64
drwxr-xr-x 5 OICDev1 oic 4096 Mar  1  2021 Argentina
drwxr-xr-x 5 OICDev1 oic 4096 Mar  1  2021 Brazil
drwxr-xr-x 3 OICDev1 oic 4096 Mar  1  2021 completed
drwxr-xr-x 3 OICDev1 oic 4096 Mar  1  2021 duplicate
drwxr-xr-x 5 OICDev1 oic 4096 Mar  1  2021 EAO
drwxr-xr-x 3 OICDev1 oic 4096 Mar  1  2021 errored
drwxr-xr-x 5 OICDev1 oic 4096 Mar  1  2021 Japan
drwxr-xr-x 5 OICDev1 oic 4096 Mar  1  2021 Korea
drwxr-xr-x 5 OICDev1 oic 4096 Mar  1  2021 SAO
drwxr-xr-x 5 OICDev1 oic 4096 Mar  1  2021 SPPO
drwxr-xr-x 3 OICDev1 oic 4096 Mar  1  2021 temp
drwxr-xr-x 4 OICDev1 oic 4096 Mar  1  2021 template
drwxr-xr-x 3 OICDev1 oic 4096 Mar  1  2021 template2
drwxr-xr-x 5 OICDev1 oic 4096 Mar  1  2021 WHQ
drwxr-xr-x 5 OICDev1 oic 4096 May 10  2021 Canada
drwxr-xr-x 3 OICDev1 oic 4096 Jun  8  2021 canada

In a shell script:

SourceDirectory="/u01/oracle/folders"
TargetDirectory="/u01/oracle/folders"

For the value of $SourceDirectory , I want to list all the subfolders except SAO, using this command in a loop:

#100 Loop through each directory (e.g. brazil, canada, uk)
#
for EachDir in "$SourceDirectory"*; do
    strFiles=""
    echo "Current Directory is $EachDir"

I tried:

SourceDirectory=$(find /u01/oracle/folders -maxdepth 1 -type d \( ! -name SAO \))

It skips the directory SAO, but the output is one long string. How do I split this string into directories? Example:

[root@ricusesasctl01vm tax_receipts]# SourceDirectory=$(find /u01/oracle/folders -maxdepth 1 -type d \( ! -name SAO \))
[root@ricusesasctl01vm tax_receipts]# echo $SourceDirectory
/u01/oracle/folders /u01/oracle/folders/duplicate /u01/oracle/folders/Brazil /u01/oracle/folders/completed /u01/oracle/folders/template2 /u01/oracle/folders/Canada /u01/oracle/folders/SPPO /u01/oracle/folders/template /u01/oracle/folders/WHQ /u01/oracle/folders/EAO /u01/oracle/folders/errored /u01/oracle/folders/Korea /u01/oracle/folders/Japan /u01/oracle/folders/Argentina /u01/oracle/folders/temp /u01/oracle/folders/canada
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/687126", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/510879/" ] }
687,159
I have a large directory of music files whose titles follow the below format: Title_stringOfNumbers - Artist.mp3 My goal is to remove the underscores followed by numbers and switch the artist's name with the title. For example, the original filename is: whats up_7979841261 - randomArtist.mp3 My desired filename: randomArtist - whats up.mp3 The title can contain special characters ( ' , ! , . , _ , ( , ) , / , \ and even Japanese characters) and numbers, but an underscore character and a number are never next to each other in the first part of the filename (so there are no multiple word title_2_6878492178471289 - artist.mp3 -like files). I tried using the rename command in the terminal to remove the underscores so far, but I managed to hit a roadblock, because this line didn't do anything and I'm not familiar with using it. rename 'y/_//' * This is all using POP!_OS 21.10, so anything that works with Ubuntu should work with my system too. I used the Perl script rename (installed with sudo apt install rename ). However, I found out that all three variants ( rename , prename , file-rename ) are installed on my system.
You can use (Perl) rename :

rename -n 's/^(.*)(_\d+) - (.*)\.mp3$/$3 - $1.mp3/' *.mp3

Remove the -n to actually run the operation if the result looks good. If your files have correct id3 tags, it might be better to rename using these tags, e.g. with mp3rename or exiftool :

sudo apt install mp3rename
mp3rename -s '&a - &t'
mp3rename *.mp3
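If Perl rename is not available, a plain shell loop with parameter expansion does the same split. This sketch leans on the stated guarantee that the only underscore-digit pair is the numeric ID, and keeps echo in as a dry run:

for f in *' - '*.mp3; do
    title=${f%% - *}          # e.g. "whats up_7979841261"
    title=${title%_*}         # drop the trailing _numbers
    artist=${f##* - }         # e.g. "randomArtist.mp3"
    artist=${artist%.mp3}
    echo mv -- "$f" "$artist - $title.mp3"
done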
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/687159", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/500278/" ] }
687,217
I was copying Asterisk call recordings from our main server to a samba share, and the creation dates and times were changed to the current date and time. The file format is: in-xxxxxxxxxx-xxxxxxxxxx-20211020-162749-1634761669.7921917.wav, where the 1634761669 part is in epoch time. I have hundreds of these files and I need to change the creation date of each file based on that epoch time stamp in the file's name. Can anyone help me?
With GNU touch , you can use

touch -d @1634761669.7921917 file

to set the last modification time of a file to the specified epoch time (even with subsecond precision as here). So you could do in zsh :

#! /bin/zsh -
ret=0
for file in *-<->.<->.wav; do
  t=${file:r}
  t=${t##*-}
  touch -d @$t -- $file || ret=$?
done
exit $ret

If it's really the creation time , often called birth time , as reported by ls -l --time=birth with recent versions of GNU ls for instance, that you want to change, AFAIK that is not possible on Linux other than changing the clock back to that time and creating the file again.

The part below is wrong, it does not change the time in the namespace only as I originally believed. See @Busindre's answer for details.

If on Linux (a recent version¹), you could however only change the clock in a new time namespace so as not to affect the system's clock globally. For instance, with:

sudo unshare --time sh -c 'date -s @1634761669.7921917 && exec cp -a file file.new'

You would create a file.new copy of file with a birth time close to @1634761669.7921917

$ sudo unshare --time sh -c 'date -s @1634761669.7921917 && exec cp -a file file.new'
$ ls -l --time=birth --time-style=+%s.%N file file.new
-rw-r--r-- 1 stephane stephane 0 1642699170.474916807 file
-rw-r--r-- 1 stephane stephane 0 1634761669.792191700 file.new

The zsh script above could then be written:

#! /bin/zsh -
ret=0
for file in *-<->.<->.wav; do
  t=${file:r}
  t=${t##*-}
  unshare --time sh -c '
    date -s "@$1" && exec cp -aTn -- "$2" "$2.new"' sh "$t" "$file" &&
    mv -f -- "$file.new" "$file" || ret=$?
done
exit $ret

(and would need to be run as root ).

Some second thought while revisiting this: I just realised that causes a potential problem: that unshare --time hack allows the birth time to be set to some arbitrary time in the past, but that also causes the change status time (the one reported by ls -lc for instance) to be set in the past, to the specified time plus the time it took to make the copy. That ctime is not meant to be settable arbitrarily either. By setting it in the past like that, it may break the assumptions that some software may make about those files. For instance a backup software may decide to disregard it because it has a ctime that predates the last backup time. So it may be better to make sure the ctime is not set in that namespace with a faked clock time, for instance, by only creating the file in the past, but copying its contents in the present:

unshare --time sh -Cc '
  umask 77 &&
    date -s "@$1" && : > "$2.new"' sh "$t" "$file" &&
  cp -aT -- "$file" "$file.new" &&
  mv -f -- "$file.new" "$file"

¹ you need a Linux kernel 5.6 or above and CONFIG_TIME_NS to be enabled in the kernel and util-linux 2.36 or above.
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/687217", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/510995/" ] }
687,240
I'm running an Ubuntu based distro Linux version 4.1.18-ipipe (ubuntu1604@ubuntu1604) (gcc version 4.9.3 (Ubuntu/Linaro 4.9.3-13ubuntu2) When this system boots up, rsyslogd is not running. So any C programs that call syslog(...) do not report any information. The simple fix to this is to SSH into the system and issue an rsyslogd on the terminal. Is there a standard way to have this utility start up automatically?
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/687240", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/287718/" ] }
687,271
I have the following JPEG files:

$ ls -l
-rw-r--r-- 1 user group 384065 janv. 21 12:10 CamScanner 01-10-2022 14.54.jpg
-rw-r--r-- 1 user group 200892 janv. 10 14:55 CamScanner 01-10-2022 14.55.jpg
-rw-r--r-- 1 user group 283821 janv. 21 12:10 CamScanner 01-10-2022 14.56.jpg

I use img2pdf to transform each image into a PDF file. To do that:

$ find . -type f -name "*.jpg" -exec img2pdf "{}" --output $(basename {} .jpg).pdf \;

Result:

$ ls -l *.pdf
-rw-r--r-- 1 user group 385060 janv. 21 13:06 CamScanner 01-10-2022 14.54.jpg.pdf
-rw-r--r-- 1 user group 201887 janv. 21 13:06 CamScanner 01-10-2022 14.55.jpg.pdf
-rw-r--r-- 1 user group 284816 janv. 21 13:06 CamScanner 01-10-2022 14.56.jpg.pdf

How can I remove the .jpg part of the PDF filenames? I.e., I want CamScanner 01-10-2022 14.54.pdf and not CamScanner 01-10-2022 14.54.jpg.pdf . Used alone, basename filename .extension prints the filename without the extension, e.g.:

$ basename CamScanner\ 01-10-2022\ 14.54.jpg .jpg
CamScanner 01-10-2022 14.54

But it seems that syntax doesn't work in my find command. Any idea why?

Note: if you replace img2pdf by echo it's the same, basename doesn't get rid of the .jpg part:

$ find . -type f -name "*.jpg" -exec echo $(basename {} .jpg).pdf \;
./CamScanner 01-10-2022 14.56.jpg.pdf
./CamScanner 01-10-2022 14.55.jpg.pdf
./CamScanner 01-10-2022 14.54.jpg.pdf
The issue with your find command is that the command substitution around basename is executed by the shell before it even starts running find (as a step in evaluating what the arguments to find should be). Whenever you need to run anything other than a simple utility with optional arguments for a pathname found by find , for example if you need to do any piping, redirections or expansions (as in your question), you will need to employ a shell to do those things:

find . -type f -name '*.jpg' \
    -exec sh -c 'img2pdf --output "$(basename "$1" .jpg).pdf" "$1"' sh {} \;

Or, more efficiently (each call to sh -c would handle a batch of found pathnames),

find . -type f -name '*.jpg' -exec sh -c '
    for pathname do
        img2pdf --output "$(basename "$pathname" .jpg).pdf" "$pathname"
    done' sh {} +

Or, with zsh ,

for pathname in ./**/*.jpg(.DN); do
    img2pdf --output $pathname:t:r.pdf $pathname
done

This uses the globbing qualifier .DN to only match regular files ( . ), to allow matching of hidden names ( D ), and to remove the pattern if no matches are found ( N ). It then uses the :t modifier to extract the "tail" (filename component) of $pathname , :r to extract the "root" (no filename suffix) of the resulting base name, and then adds .pdf to the end.

Note that all of the above variations would write the output to the current directory , regardless of where the JPEG file was found. If all your JPEG files are in the current directory, there is absolutely no need to use find , and you could use a simple loop over the expansion of the *.jpg globbing pattern:

for pathname in ./*.jpg; do
    img2pdf --output "${pathname%.jpg}.pdf" "$pathname"
done

The parameter substitution ${pathname%.jpg} removes .jpg from the end of the value of $pathname . You may possibly want to use this substitution in place of basename if you want to write the output to the original directories where the JPEG files were found, in the case that you use find over multiple directories, e.g., something like

find . -type f -name '*.jpg' -exec sh -c '
    for pathname do
        img2pdf --output "${pathname%.jpg}.pdf" "$pathname"
    done' sh {} +

See also: Understanding the -exec option of `find`
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/687271", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/152418/" ] }
687,325
In short, I instinctively wrote a command like this to find the two files prefix.ext and prefix_suffix.ext down a hierarchy:

find /some/path -type f -name 'prefix?(_suffix).zip'

but it doesn't work. Since man find , under -name pattern , refers to pattern as a "shell pattern", I was wondering if one has control over which pattern syntax is used and, specifically, if the extglob option can be used.
find only uses “basic” shell patterns, as described in POSIX . It doesn’t support extglob -style globs (even though the GNU implementation says it uses fnmatch , and the GNU C library’s implementation of fnmatch supports extended patterns ). If you’re using GNU find , you can filter using regular expressions instead; see the relevant section of the documentation for details: -regex '.*/prefix\(_suffix\)?\.zip' with the default regular expression type, or -regextype posix-extended -regex '.*/prefix(_suffix)?\.zip' with EREs.
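Since only two names are possible here, you can also skip regular expressions and spell both patterns out with -o , which works in any POSIX find :

find /some/path -type f \( -name 'prefix.zip' -o -name 'prefix_suffix.zip' \)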
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/687325", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/164309/" ] }
687,421
I've been trying for a few days already, but still cannot figure out how to get the proper size of my HDD drive with a python script. My HDD is 1TB. As I know, in GB it is 1000GB, and in GiB it is roughly 931GiB. When I type lsblk in the terminal it shows this:

NAME   MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sda      8:0    0 931,5G  0 disk
├─sda1   8:1    0   512M  0 part /boot/efi
└─sda2   8:2    0   931G  0 part /

OK. Then I try lshw --class disk ; it shows 931GiB as well:

*-disk
     description: ATA Disk
     product: ST1000LM035-1RK1
     physical id: 0.0.0
     bus info: scsi@0:0.0.0
     logical name: /dev/sda
     version: SDM2
     serial: WDEWEKZF
     size: 931GiB (1TB)
     capabilities: gpt-1.00 partitioned partitioned:gpt
     configuration: ansiversion=5 guid=f166251c-436c-421f-aba8-9910d76f9fab logicalsectorsize=512 sectorsize=4096

Then I try to get the size through a python script:

total, used, free, percent = disk_usage('/')
print(f"Total: {total}")
print(f"Used: {used}")
print(f"Free: {free}")
total2, used2, free2, percent2 = disk_usage('/boot/efi')
print(f"Total: {total2}")
print(f"Used: {used2}")
print(f"Free: {free2}")

output:

Total: 982900588544
Used: 118413897728
Free: 814486605824
Total: 535805952
Used: 5484544
Free: 530321408

982900588544 / 1024 / 1024 / 1024 = 915 GiB. 535805952 = 500 MiB. The df command shows this:

Filesystem     1K-blocks      Used Available Use% Mounted on
udev             8092080         0   8092080   0% /dev
tmpfs            1627768      1712   1626056   1% /run
/dev/sda2      959863856 115646148 795389500  13% /
tmpfs            8138832     12368   8126464   1% /dev/shm
tmpfs               5120         4      5116   1% /run/lock
tmpfs            8138832         0   8138832   0% /sys/fs/cgroup
/dev/sda1         523248      5356    517892   2% /boot/efi
tmpfs            1627764        24   1627740   1% /run/user/1000

The sum of all 1K-blocks gives 1TB. So, where is the other 931 - 915 = 16 GiB of HDD space? And how do I get the size in a correct way? Linux Mint 20.1 x64. Thanks.
If that's ext4, it's the size that is lost to filesystem metadata, mainly inode tables. As an example, a /home partition here. Partition is 751619276800 bytes ( sudo /sbin/blockdev --getsize64 /dev/mapper/Watt-home ) "df" size is 739691814912 ( df --block-size=1 /home ) Inode count is 45875200 ( df -i /home ) Inodes on ext4 are 256 bytes. So if you do the math, (751619276800-739691814912-(45875200*256))/1024^2 ≈ 175MiB. That's the rest of the filesystem metadata (superblocks, etc.). To make sure this is right, compare to a filesystem that was initialized with a lower inode ratio — one way is with the -T largefile or -T largefile4 option (see /etc/mke2fs.conf for the possibilities). I have one here: Partition size: 429496729600 df size: 429229522944 inodes: 409600 Note how much closer the df size is to the partition size (over 99.9%). That's because there are far fewer inodes. And if you do that math again, (429496729600-429229522944-(409600*256))/1024^2 ≈ 155MiB. Keep in mind that on ext4, the number of inodes is a hard limit on the number of files you can have. It (or rather the ratio of 1 inode per N blocks) is also set once at mkfs and can not be changed. But if you have a filesystem that you know will only be used to store large files, you can save some space by having fewer of inodes, as I did on my second filesystem. You can see the overhead subtracted out in the kernel source code: https://elixir.bootlin.com/linux/latest/source/fs/ext4/super.c#L6095 — and also the existence of a minixdf mount option that will stop it from doing so, and maybe do more weirdness too. I didn't check, and the only documentation I found about it was them trying to remove it but keeping it when people complained. BTW: In addition to this overhead for inode tables, etc., there is often 5% of space reserved, typically for root. That doesn't subtract from the total size, but will subtract from the available space. You can change this amount tune2fs -m ; other options let you specify by block count instead ( -r ) and change which user ( -u ) or group ( -g ) can use the reserved space. One benefit is even if users fill a partition, the sysadmin has some space to use for recovery. Note: ext2/ext3 used 128-byte inodes, half the size. Small filesystems still do. You can actually set it a mkfs time with the -I option; see the mkfs.ext4 manpage for caveats (I would not recommend changing to 128).
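To redo this arithmetic for your own filesystem, a small sketch (GNU df and blockdev assumed; the 256-byte inode size is the ext4 default, check yours with tune2fs -l ):

dev=/dev/sda2 mnt=/
part=$(sudo blockdev --getsize64 "$dev")
fs=$(df --block-size=1 --output=size "$mnt" | tail -n 1)
inodes=$(df --output=itotal "$mnt" | tail -n 1)
echo "approx. metadata overhead: $(( (part - fs - inodes * 256) / 1024 / 1024 )) MiB"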
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/687421", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/442051/" ] }
687,436
I can ping google.com for several seconds and when I press Ctrl + C , a brief summary is displayed at the bottom:

$ ping google.com
PING google.com (74.125.131.113) 56(84) bytes of data.
64 bytes from lu-in-f113.1e100.net (74.125.131.113): icmp_seq=2 ttl=56 time=46.7 ms
64 bytes from lu-in-f113.1e100.net (74.125.131.113): icmp_seq=3 ttl=56 time=45.0 ms
64 bytes from lu-in-f113.1e100.net (74.125.131.113): icmp_seq=4 ttl=56 time=54.5 ms
^C
--- google.com ping statistics ---
4 packets transmitted, 3 received, 25% packet loss, time 3009ms
rtt min/avg/max/mdev = 44.965/48.719/54.524/4.163 ms

However, when I do the same redirecting output to a log file with tee , the summary is not displayed:

$ ping google.com | tee log
PING google.com (74.125.131.113) 56(84) bytes of data.
64 bytes from lu-in-f113.1e100.net (74.125.131.113): icmp_seq=1 ttl=56 time=34.1 ms
64 bytes from lu-in-f113.1e100.net (74.125.131.113): icmp_seq=2 ttl=56 time=57.0 ms
64 bytes from lu-in-f113.1e100.net (74.125.131.113): icmp_seq=3 ttl=57 time=50.9 ms
^C

Can I get the summary as well when redirecting output with tee ?
ping shows the summary when it is killed with SIGINT , e.g. as a result of Ctrl C , or when it has transmitted the requested number of packets (the -c option). Ctrl C causes SIGINT to be sent to all processes in the foreground process group, i.e. in this scenario all the processes in the pipeline ( ping and tee ). tee doesn’t catch SIGINT (on Linux, look at SigCgt in /proc/$(pgrep tee)/status ), so when it receives the signal, it dies, closing its end of the pipe. What happens next is a race: if ping was still outputting, it will die with SIGPIPE before it gets the SIGINT ; if it gets the SIGINT before outputting anything, it will try to output its summary and die with SIGPIPE . In any case, there’s no longer anywhere for the output to go. To get the summary, arrange to kill only ping with SIGINT : killall -INT ping or run it with a pre-determined number of packets: ping -c 20 google.com | tee log or (keeping the best for last), have tee ignore SIGINT , as you discovered.
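To spell out that last option: tee 's -i ( --ignore-interrupts ) makes tee itself ignore SIGINT , so only ping handles Ctrl+C and its summary reaches both the terminal and the log:

ping google.com | tee -i log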
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/687436", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/87918/" ] }
687,592
This problem is related to Samba and inodes are not necessary. I have a problem handling a certain file that has some special characters in it. If I search it by its inode it will list the file:

$ find . -inum 90505400 -exec ls {} \;
./12 String Quartet No. 16 in F Major Op. 135: Der schwer gefa?te Entschlu?: Grave, ma non troppo tratto (Mu? es sein ?) - Allegro (Es mu? sein !).flac

However, if I then proceed to use cp or rm on the file it will throw a file not found error (in German 'Datei oder Verzeichnis nicht gefunden'):

$ find . -inum 90505400 -exec cp {} ne.flac \;
cp: './12 String Quartet No. 16 in F Major Op. 135: Der schwer gefa?te Entschlu?: Grave, ma non troppo tratto (Mu? es sein ?) - Allegro (Es mu? sein !).flac' kann nicht zum Lesen geöffnet werden: Datei oder Verzeichnis nicht gefunden

I wonder if I can copy the file with another command that uses the inode directly. I have also had this problem for some time now. I can remove all files with rm * , but I would like to fix the broken filename. It is an ext4 filesystem which I mount on a Raspi from an external USB HDD with this line (obfuscated paths and IPs):

UUID=e3f9d42a-9703-4e47-9185-33be24b81c46 /mnt/test ext4 rw,auto,defaults,nofail,x-systemd.device-timeout=15 0 2

I then share it with samba:

[mybook]
path=/mnt/test
public = yes
browseable = yes
writeable = yes
comment = test
printable = no
guest ok = no

And I mount this on a Lubuntu 16 with this:

//192.168.1.190/test /home/ben/test cifs auto,nofail,username=XXX,password=XXX,uid=1000,gid=1000

I connect to the Lubuntu 16 through VNC from a Macbook, or I SSH directly into it. I am just telling this for full information. I also mount the share on that Macbook (and others) in Finder. Finder does not display the filename correctly. After a useful comment from a user, I realized I should try to manipulate the file on the host with the original filesystem instead of trying to do it over samba. SSHing into the host reveals this filename (look at the sign with 0xF022 after '135'):

'12 String Quartet No. 16 in F Major Op. 135 Der schwer gefa?te Entschlu? Grave, ma non troppo tratto (Mu? es sein ) - Allegro (Es mu? sein !).flac'

I then was able to copy the file with cp on the host itself. (In case anybody wonders how I came to the filename: I split a summed-up flac file with its cue sheet into the separate files and they got named automatically.)
All of open() (for copying), rename() and unlink() (removal) work by filenames. There's really nothing that would work on an inode directly, apart from low-level tools like debugfs .

If you can remove the file with rm * , you should be able to rename it with mv ./12* someothername.flac , or copy it with cp ./12* newfile.flac (assuming ./12* matches just that file). find in itself shouldn't be that different.

But you mentioned Mac, and I think Mac requires filenames to be valid UTF-8, and that might cause issues if the filenames are broken. Linux doesn't mind names that are invalid UTF-8, but of course there, too, some tools might react oddly. (I haven't tested.) Having Samba in there might not help either. Assuming that has something to do with the issue, you could try to SSH in to the host with the filesystem, skipping the intermediary parts, and rename the files there.
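Since find matched the file by inode, it can also hand the exact name, odd bytes included, straight to mv without you ever typing it. A sketch to run on the host that owns the ext4 filesystem (note that inode numbers are only unique per filesystem):

find . -inum 90505400 -exec mv -- {} ./fixed-name.flac \;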
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/687592", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/105065/" ] }
687,824
Today, cron-apt informed me that there are pending security updates on my Debian stable system:

CRON-APT RUN [/etc/cron-apt/config]: Tue Jan 25 04:00:01 CET 2022
CRON-APT SLEEP: 3076, Tue Jan 25 04:51:17 CET 2022
CRON-APT ACTION: 3-download
CRON-APT LINE: /usr/bin/apt-get -o quiet=1 dist-upgrade -d -y -o APT::Get::Show-Upgraded=true
Reading package lists...
Building dependency tree...
Reading state information...
Calculating upgrade...
The following package was automatically installed and is no longer required:
  linux-image-5.10.0-9-amd64
Use 'apt autoremove' to remove it.
The following packages will be upgraded:
  bsdextrautils bsdutils eject libblkid1 libmount1 libsmartcols1 libuuid1 mount util-linux util-linux-locales
10 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
Need to get 3561 kB of archives.
After this operation, 16.4 kB of additional disk space will be used.
Get:1 http://security.debian.org bullseye-security/main amd64 bsdutils amd64 1:2.36.1-8+deb11u1 [148 kB]
Get:2 http://security.debian.org bullseye-security/main amd64 util-linux amd64 2.36.1-8+deb11u1 [1141 kB]
Get:3 http://security.debian.org bullseye-security/main amd64 mount amd64 2.36.1-8+deb11u1 [186 kB]
Get:4 http://security.debian.org bullseye-security/main amd64 bsdextrautils amd64 2.36.1-8+deb11u1 [145 kB]
Get:5 http://security.debian.org bullseye-security/main amd64 libblkid1 amd64 2.36.1-8+deb11u1 [193 kB]
Get:6 http://security.debian.org bullseye-security/main amd64 libmount1 amd64 2.36.1-8+deb11u1 [212 kB]
Get:7 http://security.debian.org bullseye-security/main amd64 libsmartcols1 amd64 2.36.1-8+deb11u1 [158 kB]
Get:8 http://security.debian.org bullseye-security/main amd64 libuuid1 amd64 2.36.1-8+deb11u1 [83.9 kB]
Get:9 http://security.debian.org bullseye-security/main amd64 eject amd64 2.36.1-8+deb11u1 [102 kB]
Get:10 http://security.debian.org bullseye-security/main amd64 util-linux-locales all 2.36.1-8+deb11u1 [1192 kB]
Fetched 3561 kB in 0s (47.6 MB/s)
Download complete and in download only mode

However, looking at https://www.debian.org/security/ , I do not find a matching announcement:

Recent Advisories
[21 Jan 2022] DSA-5052-1 usbview security update
[20 Jan 2022] DSA-5051-1 aide security update
[20 Jan 2022] DSA-5050-1 linux security update
[15 Jan 2022] DSA-5048-1 libreswan security update
...

So, either (1) the announcement is delayed or (2) something fishy is going on. (I am aware that the probability for (1) is much higher than for (2), but still...) How shall I proceed to verify that this is indeed a genuine and benign security update? I tried looking at the package information page of one of the updated packages ( https://packages.debian.org/bullseye/bsdutils ), but the "Debian Changelog" link on the right-hand side shows that the last modification was half a year ago.

Notes: While I am interested in an answer to this particular case, I am more interested in a general answer on how to proceed in such a case (see the bolded question above). If you think that this question is more suitable for security.se, feel free to migrate.
Assuming you still trust the infrastructure, you can find out what changed by requesting the changelogs on your system; for example

$ apt changelog util-linux/bullseye-security
util-linux (2.36.1-8+deb11u1) bullseye-security; urgency=high

  * Non-maintainer upload by the Security Team.
  * include/strutils: Add ul_strtou64() function
  * libmount: fix UID check for FUSE umount [CVE-2021-3995]
  * libmount: fix (deleted) suffix issue [CVE-2021-3996]

 -- Salvatore Bonaccorso <[email protected]>  Thu, 20 Jan 2022 21:10:35 +0100
...

(This queries the changelog from the repositories, it doesn't require you to apply the upgrades.) In your case, all the updated packages come from the util-linux source package, so they will all show the same changelog. While the fix only involves libmount , uploading a fixed source package means rebuilding all the binary packages it produces, and shipping them all as security updates. This information is also available on the package tracker , which offers links to the changelog and the security tracker (among many others). The security tracker was down when the question was written, which might explain why some of the other pages aren't updated as you'd expect; the DSA was sent out on January 24 . If you want to check what changed, you can download the original and updated source code:

$ apt source util-linux/{stable,bullseye-security}

and compare the downloaded tarballs — in most cases, only the .debian tarball, util-linux_2.36.1-8.debian.tar.xz and util-linux_2.36.1-8+deb11u1.debian.tar.xz in this case:

$ mkdir ulo uls; tar xf util-linux_2.36.1-8.debian.tar.xz -C ulo; tar xf util-linux_2.36.1-8+deb11u1.debian.tar.xz -C uls
$ diff -urN ulo uls | less
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/687824", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/8477/" ] }
687,845
To my surprise the CentOS 7 installer allowed me to create a RAID0 device consisting of roughly a 17 GB disk and a 26 GB disk. I would've expected that even if it allows that, the logical size would be 2 * min(17 GB, 26 GB) ~= 34 GB. Yet I can really see a usable size of 44 GB on the filesystem level:

$ cat /sys/block/md127/md/dev*/size
16955392
26195968
$ df -h | grep md
/dev/md127      44G  1.9G   40G   5% /

How will the md subsystem behave performance-wise compared to a situation where the disks are equal, given that it's impossible to do a straightforward balanced stripe across the two disks?
raid.wiki.kernel.org says:

RAID0/Stripe Mode: The devices should (but don't HAVE to) be the same size. [...] If one device is much larger than the other devices, that extra space is still utilized in the RAID device, but you will be accessing this larger disk alone, during writes in the high end of your RAID device. This of course hurts performance.

That's a bit awkward phrasing, but the Wikipedia page for mdadm puts it like this:

RAID 0 – Block-level striping. MD can handle devices of different lengths, the extra space on the larger device is then not striped.

So, what you get probably looks like this, for a simplified case of two disks of 4 and 2 "blocks" in size:

disk0  disk1
 00     01
 02     03
 04
 05

Reading "blocks" 04-05 would have to be done just from disk0, so no striping advantage there. md devices should be partitionable, so you could probably test with partitions at the start and at the end of the device to see if the speed difference becomes evident.
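If you want to verify that on a live array, a rough read-only sketch with dd (device name and offsets are placeholders; the second read targets the high, unstriped end of the 44G array):

$ dd if=/dev/md127 of=/dev/null bs=1M count=1024 skip=0 iflag=direct       # striped region near the start
$ dd if=/dev/md127 of=/dev/null bs=1M count=1024 skip=40000 iflag=direct   # single-disk region near the end

If the second read is noticeably slower, that's the larger disk being accessed alone.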
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/687845", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/125367/" ] }
688,021
These do not do the same:

$ seq 1000000 | (ssh localhost sleep 1; wc -l)
675173
$ seq 1000000 | (ssh localhost sleep 1 </dev/null; wc -l)
1000000

What is the rationale for ssh reading stdin?
ssh always reads stdin unless you tell it not to with the -n option (or the -f option). The reason is so that you can do things like

tar cf - somedir | ssh otherhost "tar xf -"

And it always does this because ssh has no way of knowing if your remote command accepts input or not. Likely what is happening in your first command is that seq fills up the network and pipe buffers (seq -> ssh -> sleep), and since sleep isn't reading anything, it gets blocked waiting for more reads, and then sleep exits, causing those full buffers to be dumped, and then seq is unblocked, feeding the remainder to wc. Note that you would get similar results with

seq 1000000 | ( cat | cat | sleep 1; wc -l)

In your second command, it is still reading stdin, but you've externally assigned /dev/null to stdin.
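You can see the effect of the -n flag, which redirects stdin from /dev/null much like the explicit redirection in the question:

$ seq 1000000 | (ssh -n localhost sleep 1; wc -l)
1000000

With -n , ssh never consumes the pipe, so all million lines reach wc.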
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/688021", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/2972/" ] }
688,195
I have a filesystem with many small files that I erase regularly (the files are a cache that can easily be regenerated). It's much faster to simply create a new filesystem rather than run rm -rf or rsync to delete all the files (i.e. Efficiently delete large directory containing thousands of files ). The only issue with creating a new filesystem to wipe the filesystem is that its UUID changes, leading to changes in e.g. /etc/fstab . Is there a way to simply "unlink" a directory from e.g. an ext4 filesystem, or completely clear its list of inodes?
Since you're using ext4 you could format the filesystem and then set the UUID to a known value afterwards. man tune2fs writes,

-U UUID
    Set the universally unique identifier (UUID) of the filesystem to UUID. The format of the UUID is a series of hex digits separated by hyphens, like this: c1b9d5a2-f162-11cf-9ece-0020afc76f16 .

And similarly, man mkfs.ext4 writes,

-U UUID
    Set the universally unique identifier (UUID) of the filesystem to UUID. […as above…]

Personally, I prefer to reference filesystems by label. For example in the /etc/fstab for one of my systems I have entries like this

# <file system>  <mount point>  <type>  <options>          <dump>  <pass>
LABEL=root       /              ext4    errors=remount-ro  0       1
LABEL=backup     /backup        ext4    defaults           0       2

Such labels can be added with the -L flag for tune2fs and mkfs.ext4 . They avoid issues with inode checksums causing rediscovery or corruption on a reformatted filesystem and they are considerably easier to identify visually. (But highly unlikely to be unique across multiple systems, so beware if swapping disks around.)
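For the cache-wipe workflow in the question, the reformat step might then look like this rough sketch (device, mount point and label are placeholders; reuse whatever UUID or label your fstab already references):

umount /cache
mkfs.ext4 -F -L cache -U c1b9d5a2-f162-11cf-9ece-0020afc76f16 /dev/sdX1   # -F skips the "existing filesystem" prompt
mount /cache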
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/688195", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/233125/" ] }
688,253
I have an installation of Windows 10 and Pop on separate partitions of the same drive and I want to dual boot them with systemd-boot, which is the default for Pop OS. I followed this guide (the TL;DR version is good enough) because I didn't have Windows in the boot menu selection. The guide just tells you to copy the EFI files from the Windows EFI partition into the Pop OS EFI partition so systemd-boot can recognize Windows. This works fine and both Windows and Pop appear in the boot menu. When I boot Pop there is no issue. However, when I boot Windows everything works fine for the first time, but then after a reboot cycle all Pop OS partitions disappear from the boot menu and instead the computer boots into the GRUB terminal (?? GRUB wasn't even being used before). The Pop partition is no longer recognized as bootable and I can't boot into Pop. This problem is reproducible. It happens every time I do the above steps. Any help is appreciated.
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/688253", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/512000/" ] }
688,255
I'm asking before trying because I already have a few things set up in Wine Stable, so I don't want to mess things up by installing something else over it. Basically, I want to install Staging because I have an app which is said to require the former to function properly under Linux (it's a music player.) Will installing Staging affect the way Wine Stable behaves? If so, how? Can I configure Wine Stable and Wine Staging separately? I'm running Debian Bullseye Stable. Thank you.
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/688255", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/501732/" ] }
688,268
Where can I find this library, specifically for installing and using MS SQL Server Express on Debian? I get this error:

sudo apt-get install -y --no-install-recommends libssl1.0.0
Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
Package libssl1.0.0 is not available, but is referred to by another package.
This may mean that the package is missing, has been obsoleted, or
is only available from another source

E: Package 'libssl1.0.0' has no installation candidate
Note that libssl1.0.0 is obsolete and no longer updated; any binary linking to it probably suffers from various security issues (perhaps not exploitable, but you'd need to determine that in your scenarios). You should really look for a newer version of whatever it is you're trying to use. However, you can find libssl1.0.0 on Debian snapshots ; download the appropriate package and install it. For example on amd64 :

wget http://snapshot.debian.org/archive/debian/20170705T160707Z/pool/main/o/openssl/libssl1.0.0_1.0.2l-1%7Ebpo8%2B1_amd64.deb
sudo dpkg -i libssl1.0.0*.deb

You may need to install multiarch-support first:

wget http://snapshot.debian.org/archive/debian/20190501T215844Z/pool/main/g/glibc/multiarch-support_2.28-10_amd64.deb
sudo dpkg -i multiarch-support*.deb

(Having this library installed only affects binaries which link to it; it won't create security issues for other binaries linking to other versions of the library.)
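Once installed, a quick sanity check is to confirm that the dynamic linker now resolves the library for whatever binary needed it (the path is a placeholder):

ldd /path/to/your-binary | grep libssl

If the installation worked, the libssl.so.1.0.0 line should point at the installed file instead of saying "not found".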
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/688268", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/512013/" ] }
688,527
Say for example I have a path like

path_1=/this/is/a/path/with/slash/

How do I get the following:

/this/is/a/path/with/slash

so the path without the last "/"?
All POSIX shells have (cf. man bash ) "Parameter Expansion: Remove matching suffix pattern". So, use

$ echo "${path_1%/}"
/this/is/a/path/with/slash

If the variable's value does not end in a slash, then the value would be output without modification.
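If the path may end in several slashes, a bash-specific sketch using extended globbing strips them all (needs shopt -s extglob for the +(/) pattern):

$ shopt -s extglob
$ path_1=/this/is/a/path/with/slash///
$ echo "${path_1%%+(/)}"
/this/is/a/path/with/slash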
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/688527", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/402699/" ] }
688,533
I have two Linux systems: one is the client and the other is the server. I put the two systems on one network and was able to connect to the server via its local IP:

ssh [email protected]

Now I am trying to connect to the server through the public IP, which I obtained from https://api.ipify.org , and tried to connect via ssh:

ssh ahmadreza@public_ip

But the connection was not established and I got the following error:

ssh: connect to host <public_ip> port 22: Connection timed out

I checked my port forwarding and also made sure my port was set to 22, but the problem still persists and I cannot connect through the public address.
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/688533", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/508718/" ] }
688,537
I have written a script in expect as follows. I want to give the date as a variable to the cd command, but when I give the date as a variable to the command it adds '' single quotes around it, and therefore it produces an error as shown below. How do I get rid of those quotes?

#!/usr/bin/expect
#!/bin/bash
set DATE [exec date +%c]
set DATE2 [exec date +'%Y%m%d']
log_user 0
log_file -a /lch/portal/scripts/sftpcheck21/log/sftpcheck21.log
send_log "test ran on $DATE \n"
spawn sftp -o "UserKnownHostsFile=/dev/null" -o "StrictHostKeyChecking=no" -o "Port=8022" [email protected]@sftapx21
expect "[email protected]@sftapx21's password:"
send "London@123\n"
expect "sftp>"
send "cd /PIMCOXXX_FDM/SwapClear/$DATE2\n"
expect "sftp>"
send "lcd /lch/portal/scripts/sftpcheck21\n"
expect "sftp>"
send "get 'P-PSWC-PIMCOXXX_FDM-$DATE2-233518_$DATE2_REP000F1d - Trade Level Pricing_ 1.TXT'\n"
expect "sftp>"
send "exit\n"
interact

log_file output as follows:

sftp> cd /PIMCOXXX_FDM/SwapClear/'20220130'
Couldn't canonicalize: No such file or directory
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/688537", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/512301/" ] }
688,561
I want to list the files that contain a pattern, without outputting the line(s) containing the pattern. I assume that it's possible to do this with grep . How should I use grep in that sense? For example, the following command gives all the lines containing "stringpattern" (case insensitive) in all the .txt files. I want to have only the name of the file (± the line number).

grep -ni stringpattern *.txt

Ideally, if the string/pattern is present more than once in one file, I would like to have multiple lines of output for that file.
If you need only files that match:

grep -lie pattern -- *.txt

I don't think that you can use only grep to print only files and line numbers, because with option -n , it outputs on every line 'file:line:match'. If the file names don't contain : nor newline characters, you can, though, pipe this to cut to get only what you want.

grep -nie pattern -- /dev/null *.txt | cut -d: -f 1,2

The /dev/null is needed for the case where *.txt expanded to only one filename, where grep would otherwise not print the file name. With the GNU implementation of grep or compatible, you can use the -H / --with-filename option instead to ensure the file name is always printed.
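To illustrate the second command's output (file names and line numbers are made up):

$ grep -nie stringpattern -- /dev/null *.txt | cut -d: -f 1,2
a.txt:3
a.txt:17
b.txt:2

which gives the multiple-lines-per-file behaviour asked for.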
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/688561", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/370877/" ] }
688,608
Is it possible to install a custom ca certificate on Debian without installing the ca-certificate package? I tend to run my servers beyond the lifespan of each release, and I always seem to have problems after a few years. Simple problems, like cURL not being able to verify the legitimacy of the server, PHP's openssl.cafile and curl.cainfo , etc. Nothing devastating, but annoying. I'm installing Buster now and want to avoid any problems from the get-go this time. Ideally I'd like to download cacert.pem from curl.se (Mozilla source), put it in a directory, then tell the OS and any software that asks for it to use it. That way, when it expires I can just re-download the latest from curl.se or the Mozilla source.
update-ca-certificates is actually a shell script. You could just read it and adapt parts of it to your needs. In a nutshell: when update-ca-certificates adds a certificate, it creates a symbolic link in /etc/ssl/certs/ pointing to the PEM-formatted certificate file. update-ca-certificates expects the CA certificate to be in a PEM-formatted file with a *.crt suffix, and the link name will have that suffix changed to *.pem instead: so /etc/ssl/certs/<somename>.pem will be linked to /elsewhere/<somename>.crt . OpenSSL requires that the directory containing trusted CA certificates has them accessible by their hashes, so within the /etc/ssl/certs/ directory, another symbolic link will be created: <certificate hash>.0 -> <somename>.pem . The <certificate hash> can be calculated manually with:

openssl x509 -in <certificate PEM file> -noout -hash

If another certificate has the same hash, then the .0 portion will be incremented to .1 , then to .2 etc. until a unique name can be found. This hashing is not a security mechanism: it just allows OpenSSL to find the required CA certificate quickly by its hash when validating certificates. Alternatively, cd /etc/ssl/certs; openssl rehash . can be used to create hash symlinks for all certificates within that directory. The contents of the new certificate PEM file will also be appended to /etc/ssl/certs/ca-certificates.crt , for those programs that only accept their list of trusted CA certificates as a single file. If the PEM-formatted certificate is missing its trailing newline character, the script will add one automatically when appending the certificate to ca-certificates.crt . The update-ca-certificates script will also run any scripts placed into /etc/ca-certificates/update.d/ . In case you have any dpkg-packaged version of Java installed, there will most likely be a script named /etc/ca-certificates/update.d/jks-keystore dropped by the Java package, which will similarly update the Java keystore file at /etc/ssl/certs/java/cacerts , so that it will also contain the exact same certificates as the OpenSSL CA certificate directory /etc/ssl/certs or the file /etc/ssl/certs/ca-certificates.crt .
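Putting those pieces together, a minimal manual sketch (not a drop-in replacement for the script) could look like this; the paths are placeholders, and if your downloaded cacert.pem bundles several CAs you may want to split it into one file per certificate first:

# keep the master copy somewhere under your control
mkdir -p /usr/local/share/my-ca
cp cacert.pem /usr/local/share/my-ca/

# expose it to OpenSSL and regenerate the hash links
ln -sf /usr/local/share/my-ca/cacert.pem /etc/ssl/certs/cacert.pem
cd /etc/ssl/certs && openssl rehash .

# append to the single-file bundle for programs that want one file
cat /usr/local/share/my-ca/cacert.pem >> /etc/ssl/certs/ca-certificates.crt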
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/688608", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/44921/" ] }
688,790
I'm calculating an aspect-ratio height from a given width. In this example I'm using a 4:3 ratio and a width of 800; the result (height) should be 600, but bash is returning 800, and I'm not sure why. I've tried other languages; most seem to have issues too. PHP seems to be one of the few that work.

PHP (returns 600):

php -r 'echo 800/(4/3);'

Python (returns 800):

python -c "print(800/(4/3))"

bc -l kinda works (returns 600.00000000000000000150). -l is "Define the standard math library"; I'm not too sure what that means, but it seems to get me closer to my goal. But where are the extra 0s and the 150 coming from?

echo '800 / (4 / 3)' | bc -l

I'm guessing it's something to do with floating-point handling, or truncating the result of 4/3. Now I could just use php and call it a day, but that seems kinda overkill for a relatively simple calculation. Any idea what's going on here?
Bash arithmetic is integer only. So 4/3 returns 1. And 800/1 is 800. If you can control the inputs then you can re-factor and do the multiplication before the division:

$ echo $(( 800*3/4 ))
600

Your other examples are also "integer". If, for example, you force python floating point by replacing 4 with 4.0 then you get a different answer (Python 3 doesn't need this):

$ python -c "print(800/(4.0/3))"
600.0

bc -l loads the standard math library (with functions like s() for sine, l() for natural logarithm, etc), but more importantly here, sets scale to 20. scale defines how many decimals after the radix to generate in divisions, so 4/3 there will be 1.33333333333333333333 (in effect 133333333333333333333/1e+20), and that explains why you get 600.00000000000000000150 .

echo 'scale=1000; 800/(4/3)' | bc

will get you more precision (without having to load the math library), but you'll never get just 600 there as 4/3 cannot be represented in decimal.
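If you want the computation in a shell script without reaching for PHP, awk is a lightweight alternative; a sketch (the %.0f format rounds the floating-point result back to an integer):

$ awk 'BEGIN { printf "%.0f\n", 800 / (4 / 3) }'
600

awk performs the division in floating point, so the 4/3 sub-expression doesn't get truncated to 1.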
{ "score": 7, "source": [ "https://unix.stackexchange.com/questions/688790", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/43139/" ] }
688,813
I have tried using sed to match \ , but I am unable to read \ and replace it with \\\\ . I want to replace a single \ with four backslashes: \\\\ .
{ "score": 7, "source": [ "https://unix.stackexchange.com/questions/688813", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/512537/" ] }
688,818
I would like to extract N lines after matching the first occurrence of a string, printing the N lines that follow, using awk. The string is repeated a number of times in the file I am processing. I have tried using this command:

'c&&c--;/XCHT/{c=10}'

This prints the 10 lines after every occurrence of the match. I have seen various versions of this command after extensive searches, but all versions produce largely the same result as below. I would like some tips on how to modify this command to achieve the desired result, which is as follows:

| XCHT | |
|-----------------|----|
| 柴胡chaihu | 24 |
| 黄芩huangqin | 9 |
| 法半夏banxia | 12 |
| 生姜shengjiang | 9 |
| 刺五加ciwujia | 9 |
| 大枣dazao | 6 |
| 炙甘草zhigancao | 9 |
| | |
| | |

A section of the file looks like this:

## XCHT

| XCHT | |
|-----------------|----|
| 柴胡chaihu | 24 |
| 黄芩huangqin | 9 |
| 法半夏banxia | 12 |
| 生姜shengjiang | 9 |
| 刺五加ciwujia | 9 |
| 大枣dazao | 6 |
| 炙甘草zhigancao | 9 |
| | |
| | |

## XCHT+CM

| | |
|-----------------|----|
| 柴胡chaihu | 24 |
| 黄芩huangqin | 9 |
| 法半夏banxia | 12 |
| 干姜ganjiang | 9 |
| 五味子wuweizi | 9 |
| 炙甘草zhigancao | 9 |
| | |
| | |

## XCHT+TM

| XCHT+TM | |
|----------------- |----|
| 柴胡chaihu | 24 |
| 黄芩huangqin | 9 |
| 法天花粉tianhuafen | 12 |
| 生姜shengjiang | 9 |
| 刺五加ciwujia | 12 |
| 大枣dazao | 6 |
| 炙甘草zhigancao | 9 |
| | |
| | |

| XCHT-HQin+FL | |
|-----------------|----|
| 柴胡chaihu | 24 |
| 黄茯苓fuling | 12 |
| 法半夏banxia | 12 |
| 生姜shengjiang | 9 |
| 刺五加ciwujia | 9 |
| 大枣dazao | 6 |
| 炙甘草zhigancao | 9 |
| | |
| | |

## XCHT-DZ+ML

| XCHT-DZ+ML | |
|-----------------|-----|
| 柴胡chaihu | 12 |
| 黄芩huangqin | 4.5 |
| 法半夏banxia | 6 |
| 生姜shengjiang | 4.5 |
| 刺五加ciwujia | 4.5 |
| 牡蛎 muli | 6 |
| 炙甘草zhigancao | 4.5 |
| | |
| | |

The result of the command 'c&&c--;/XCHT/{c=10}' on the file:

| XCHT | |
|-----------------|----|
| 柴胡chaihu | 24 |
| 黄芩huangqin | 9 |
| 法半夏banxia | 12 |
| 生姜shengjiang | 9 |
| 刺五加ciwujia | 9 |
| 大枣dazao | 6 |
| 炙甘草zhigancao | 9 |
| | |
| | |
| XCHT | |
|-----------------|----|
| 柴胡chaihu | 24 |
| 黄芩huangqin | 9 |
| 法半夏banxia | 12 |
| 干姜ganjiang | 9 |
| 五味子wuweizi | 9 |
| 炙甘草zhigancao | 9 |
| | |
| | |
| XCHT+TM | |
|----------------- |----|
| 柴胡chaihu | 24 |
| 黄芩huangqin | 9 |
| 法天花粉tianhuafen | 12 |
| 生姜shengjiang | 9 |
| 刺五加ciwujia | 12 |
| 大枣dazao | 6 |
| 炙甘草zhigancao | 9 |
| | |
| | |
| XCHT-DZ+ML | |
|-----------------|-----|
| 柴胡chaihu | 12 |
| 黄芩huangqin | 4.5 |
| 法半夏banxia | 6 |
| 生姜shengjiang | 4.5 |
| 刺五加ciwujia | 4.5 |
| 牡蛎 muli | 6 |
| 炙甘草zhigancao | 4.5 |
| | |
| | |
| Xiao Chaihi Tang -HQ+BS | |
|-------------------------|----|
| 柴胡chaihu | 24 |
| 白芍baishao | 9 |
| 法半夏banxia | 12 |
| 生姜shengjiang | 9 |
| 刺五加ciwujia | 9 |
| 大枣dazao | 6 |
| 炙甘草zhigancao | 9 |

The tables are a bit wonky as a result of copying them to the webpage. Any help or advice will be much appreciated.
One way to handle this is to exit when c goes back to 0:

c { print; if (--c == 0) exit }; /XCHT/{c=10}

or more concisely,

c; c && !--c { exit }; /XCHT/{c=10}

GNU grep can do something similar:

grep -m1 -A10 XCHT

(but this will show the first line matching “XCHT” as well).
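For reference, the full invocation of the first variant against a file would be (the file name formulas.md is a placeholder):

awk 'c { print; if (--c == 0) exit }; /XCHT/ { c = 10 }' formulas.md

Because the script exits as soon as the counter reaches zero, only the 10 lines after the first match are printed, and awk doesn't even read the rest of the file.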
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/688818", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/512523/" ] }