source_id | question | response | metadata
---|---|---|---|
704,181 | I just tried to open a new terminal window and this error message was displayed: Failed to open PTY: No space left on device It seems I can't open a terminal window anymore unless I close an existing one (or reboot). I don't have any other problem in my system. My system: Debian Buster (xfce4), Linux debian 4.19.0-18-amd64 #1 SMP Debian 4.19.208-1 (2021-09-29) x86_64 GNU/Linux Storage usage:

```
Filesystem      Size  Used Avail Use% Mounted on
udev            3.9G     0  3.9G   0% /dev
tmpfs           786M  9.5M  776M   2% /run
/dev/sda4       320G  244G   62G  80% /
tmpfs           3.9G  315M  3.6G   9% /dev/shm
tmpfs           5.0M  4.0K  5.0M   1% /run/lock
tmpfs           3.9G     0  3.9G   0% /sys/fs/cgroup
tmpfs           786M   32K  786M   1% /run/user/1000
```

Inode usage:

```
Filesystem     Inodes IUsed IFree IUse% Mounted on
udev             978K   455  978K    1% /dev
tmpfs            982K   872  981K    1% /run
/dev/sda4         21M  7.2M   14M   36% /
tmpfs            982K   394  982K    1% /dev/shm
tmpfs            982K     5  982K    1% /run/lock
tmpfs            982K    17  982K    1% /sys/fs/cgroup
tmpfs            982K    34  982K    1% /run/user/1000
```

I'm pretty sure there isn't any problem with storage or inode count. I have closed all open programs; after that I can open a few more terminal windows, but I am still getting the error message. | You are looking in completely the wrong place. Storage devices have nothing to do with PTYs. A PTY is a "pseudo-terminal interface", responsible for creating terminal connections that are not tied to physical hardware: for example, when you use xterm or ssh, a new PTY master channel is created on the machine. The maximum number of PTYs (or such connections) is defined in /proc/sys/kernel/pty/max . Its counterpart, /proc/sys/kernel/pty/nr , shows how many PTYs are currently in use. For a more detailed (and more official) explanation, see man 7 pty . | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/704181",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/213154/"
]
} |
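A quick way to check this diagnosis from a shell, using the two files named in the answer (a minimal sketch; the sysctl name mirrors the /proc path, and raising the limit is only needed if nr really does reach max):

```
# Allocated PTYs vs. the kernel limit
cat /proc/sys/kernel/pty/nr /proc/sys/kernel/pty/max

# The live PTY slaves; every terminal window or ssh session holds one
ls /dev/pts

# If nr keeps hitting max, the limit can be raised at runtime
sudo sysctl -w kernel.pty.max=8192
```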
704,315 | I have a single hard drive. I want to use a filesystem that will give me less storage space, but as a tradeoff, give me checksums or any other method to help preserve data integrity. It is my understanding that something like ext4 or xfs will not do this, and thus you can suffer from silent data corruption, aka bitrot. zfs looks like an excellent choice, but everything I have read says you need more than one disk to use it. Why is this? I realize having only one disk will not tolerate a single disk failure, but that is what multiple backup schemes are for. What backups won't help with is something like bitrot. So can I use zfs on a single hard drive for the single purpose of preventing bitrot? If not, what do you recommend? | You could use either ZFS or btrfs. Both of them are copy-on-write filesystems with error detection (and correction too, if there's sufficient redundancy to repair the original data - e.g. mirror drives or RAID-Z), transparent compression, snapshots, etc. ZFS allows you to set the copies attribute on a dataset to keep more than one copy of a file - e.g. on ZFS you can run zfs set copies=2 pool/dataset to tell ZFS to keep two copies of everything on that particular dataset - see man zfsprops and search for copies= . I think btrfs has a similar feature, but it's been a long time since I used btrfs and can't find it in the docs. These extra copies do provide redundancy for error correction (in case of bitrot) but won't protect you from disk failure. You'll need at least a mirror vdev (i.e. RAID-1) for that, or make regular backups (but you should be doing that anyway - RAID or RAID-like tech like ZFS or btrfs is NOT a substitute for backups). Backing up could be as simple as using zfs snapshot and zfs send / zfs receive to send the initial and then incremental backup to a single-drive zfs pool plugged in via USB. Or to a pool on another machine over the network. Even using zfs send to store the backup in files on a non-ZFS filesystem is better than nothing. If your machine has the physical space and hardware to support a second drive, you should add one. You can do this when you first create a pool, or you can add a mirror drive to any single-drive or mirror vdev at any time with zpool attach pool device new-device . NOTE: it's important to use zpool attach , not zpool add for this. attach adds a mirror to an existing drive in a vdev, while add adds another vdev to an existing pool. Adding a single-drive vdev to an existing pool will effectively make a RAID-0 with the other vdevs in the pool, putting ALL of the data at risk. This is a fairly common mistake, and (if the pool contains any RAID-Z vdevs), the only fix is to backup the entire pool, destroy it, re-create it from scratch, and restore. If the pool only has mirror or single-drive vdevs (i.e. no RAID-Z vdevs), it is possible to use zpool remove to remove an accidentally added single drive. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/704315",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/527648/"
]
} |
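A minimal sketch of the snapshot-based backup flow described in the answer (the pool names tank and backup and the device path /dev/sdX are hypothetical placeholders):

```
# One-time: create a single-drive pool on, say, a USB disk
zpool create backup /dev/sdX

# Keep two copies of everything on the dataset, per the answer
zfs set copies=2 tank/data

# Initial full backup, then periodic incrementals
zfs snapshot tank/data@2022-06-01
zfs send tank/data@2022-06-01 | zfs receive backup/data

zfs snapshot tank/data@2022-06-15
zfs send -i @2022-06-01 tank/data@2022-06-15 | zfs receive backup/data
```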
704,325 | My system is Arch Linux 5.17.9 running i3wm. I am trying to start a VM which is configured in Virt-Manager and Qemu. I have tried the command, sudo virsh list --all this brings up the installed VMs on Virt-Manager in the terminal. I have then tried, sudo virsh start "VM-Name" This apparently starts the VM, but no window opens and I have to open the Virt-Manager machine manually. I want to run a single command and have the VM appear in a window. Eventually I want to attach this command to a key binding on my i3 install and have it open with one keystroke. | | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/704325",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/342438/"
]
} |
704,362 | I have two large tab-delimited files (>10GB) and I know that when they're sorted, they're identical in content. However, I'm interested in the order of rows and the index of the swapped ones when they share the same "key" (key here being defined as rows grouped based on Source and Location columns). In other words, rows between these two files should only be compared against each other when they come from the same group (i.e. when they share the same Source and Location). So for example, in the example below, rows 4, 5, 6 from file1.tsv should be compared against 4, 5, 6 from file2.tsv. Note: files are normal TSV. Additional spaces are only added here to make columns center- and right-aligned for better visibility. These spaces are not part of the original files.

file1.tsv

```
Identifier    Position  Source  Location
AY1:2301            87     ch1        14
BC1U:4010          105     ch1        14
AC44:1230           90     ch1        15
AJC:93410           83     ch1        16
ABYY:0001          101     ch1        16
ABC:01              42     ch1        16
HH:A9CX            413     ch1        17
LK:9310              2     ch1        17
JFNE:3410          132     ch1        18
MKASDL:11           14     ch1        18
MKDFA:9401          18     ch1        18
MKASDL1:011        184     ch2        50
LKOC:AMC02          18     ch2        50
POI:1100           900     ch2        53
MCJE:09HA           11     ch2        53
ABYCI:1123          15     ch2        53
MNKA:410             1     ch2        53
```

file2.tsv

```
Identifier    Position  Source  Location
AY1:2301            87     ch1        14
BC1U:4010          105     ch1        14
AC44:1230           90     ch1        15
ABC:01              42     ch1        16
ABYY:0001          101     ch1        16
AJC:93410           83     ch1        16
HH:A9CX            413     ch1        17
LK:9310              2     ch1        17
MKASDL:11           14     ch1        18
JFNE:3410          132     ch1        18
MKDFA:9401          18     ch1        18
MKASDL1:011        184     ch2        50
LKOC:AMC02          18     ch2        50
MNKA:410             1     ch2        53
POI:1100           900     ch2        53
ABYCI:1123          15     ch2        53
MCJE:09HA           11     ch2        53
```

I want to do something similar to a "diff" but at the 'group' level (where rows are only compared when they share the same Source and Location). I want to extract the original "row numbers" when the order of rows is 'swapped' within the same "Source/Location" "group" (or key). The whole row should match in terms of content. But I have no idea how to go about this. I can only think of writing a for loop, which would be extremely inefficient when my original dataset has millions of rows.

Expected result:

```
Group_Source:Location df1.index df2.index
ch1:16 4 6
ch1:16 6 4
ch1:18 9 10
ch1:18 10 9
ch2:53 14 15
ch2:53 15 17
ch2:53 17 14
```

Assumptions:
Both dataframes have the same number of rows.
Both dataframes are identical (only the order of rows is swapped, so if both are sorted by Source, then Location, then Position and then Identifier, they will be exactly identical).
'Swapped' rows always match exactly in terms of content in all columns. | This is one of those rare occasions when I'd probably use getline due to the size of your input files so we only save a handful of lines in memory at a time instead of >10G:

```
$ cat tst.awk
BEGIN {
    OFS = "\t"
    print "Group_Source:Location", "df1.index", "df2.index"
}
NR != FNR { exit }
{ srcLoc = $3 ":" $4 }
srcLoc != prevSrcLoc {
    if ( NR > 1 ) { diff() }
    prevSrcLoc = srcLoc
}
{
    file1[$1,$2] = FNR - 1
    if ( (getline < ARGV[2]) > 0 ) {
        file2[$1,$2] = FNR - 1
    }
}
END { diff() }

function diff(   idPos) {
    for ( idPos in file1 ) {
        if ( file1[idPos] != file2[idPos] ) {
            print prevSrcLoc, file1[idPos], file2[idPos]
        }
    }
    delete file1
    delete file2
}
```

```
$ awk -f tst.awk file1.tsv file2.tsv
Group_Source:Location   df1.index   df2.index
ch1:16  6   4
ch1:16  4   6
ch1:18  10  9
ch1:18  9   10
ch2:53  17  14
ch2:53  15  17
ch2:53  14  15
```

For more info on getline, please read http://awk.freeshell.org/AllAboutGetline . The above would work even if an Identifier and/or Position was repeated within the input since it's comparing all 4 fields between the 2 files. It does assume that the Source and Location values are in the same order between the 2 files as shown in the sample input. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/704362",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/304576/"
]
} |
704,489 | I need to check that a directory (let's call it dir) contains exactly one of two files (let's call them filea and fileb): not none, and not both. The ideal solution would be to use an XOR operation between the predicates:

```
if [ -f dir/filea ] ^ [ -f dir/fileb ]
then
    echo Structure ok
    # do stuff
fi
```

However the shell does not support ^ as a XOR operator, and the [ command does not have the options -X or --xor like it has -a and -o ... Using a negated equality did not work either:

```
if ! [ -f dir/filea -eq -f dir/fileb ]
# or
if ! [ -f dir/filea = -f dir/fileb ]
```

Is there some way to achieve this, without resorting to a full-blown AND/OR expression like

```
if { [ -f dir/filea ] || [ -f dir/fileb ]; } && ! { [ -f dir/filea ] && [ -f dir/fileb ]; }
```

? The last expression is becoming unreadable, and of course my actual paths are much longer than dir/fileX. EDIT: I am targeting a POSIX-compliant version of sh, but I am open to extensions specific to other shells (out of curiosity mostly, but also because I use bash or ksh93 on other projects and this could be useful there) | The exit code of a test ... or [ ... ] command is the test result. You can use variables to store the result of individual tests and compare them later.

```
[ -f dir/filea ]
testA=$?
[ -f dir/fileb ]
testB=$?

if [ "$testA" -ne "$testB" ]
then
    echo "exactly one file"
else
    echo "both files or none"
fi
```

It might be possible that [ results in different non-zero exit codes for the two tests. According to the specification in https://pubs.opengroup.org/onlinepubs/007904875/utilities/test.html , an exit code >1 means "An error occurred." You have to define what should happen if [ reports an error. To avoid this you can use conditional variable assignments similar to Kusalananda's answer ...

```
testA=0
testB=0
[ -f dir/filea ] || testA=1
[ -f dir/fileb ] || testB=1

if [ "$testA" -ne "$testB" ]
then
    echo "exactly one file"
else
    echo "both files or none"
fi
```

... or use negation (as mentioned in comments) to make sure the value is either 0 or 1. (See "2.9.2 Pipelines" - "Exit Status" in https://pubs.opengroup.org/onlinepubs/9699919799/utilities/V3_chap02.html#tag_18_09_02 )

```
! [ -f dir/filea ]
testA=$?
! [ -f dir/fileb ]
testB=$?

if [ "$testA" -ne "$testB" ]
then
    echo "exactly one file"
else
    echo "both files or none"
fi
```

Both variants handle an error the same as "file does not exist". | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/704489",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/148149/"
]
} |
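As an alternative not shown in the answer above (my addition, under the same POSIX sh assumption): arithmetic expansion includes C's bitwise XOR operator, so once the results are normalized to 0/1 the test reads almost like the ideal version from the question:

```
a=0; b=0
[ -f dir/filea ] && a=1
[ -f dir/fileb ] && b=1

if [ "$((a ^ b))" -eq 1 ]
then
    echo "exactly one file"
fi
```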
704,530 | I want to download part of a large (199GB) .tar.gz file from here . To start, I used the following command to list all of the files in the .tar.gz file: wget -qO- https://www.cs.cornell.edu/projects/megadepth/dataset/Megadepth_v1/MegaDepth_v1.tar.gz | tar -tz Next, I tried to download the contents of a folder in the .tar.gz using the command: wget -qO- https://www.cs.cornell.edu/projects/megadepth/dataset/Megadepth_v1/MegaDepth_v1.tar.gz | tar -xz phoenix/S6/zl548/MegaDepth_v1/0000 However, this takes too long because the tar command searches depth-first and recursively through each of the folders below phoenix/S6/zl548/MegaDepth_v1 . I am only interested in the contents of the folder phoenix/S6/zl548/MegaDepth_v1/0000 . Is there a way to download the contents of this folder without searching through the sub-folders of the other folders, such as phoenix/S6/zl548/MegaDepth_v1/0162phoenix/S6/zl548/MegaDepth_v1/0001phoenix/S6/zl548/MegaDepth_v1/0132 In other words, is there a faster way to download the contents of the folder phoenix/S6/zl548/MegaDepth_v1/0000 ? Some references for the above commands: How to extract specific file(s) from tar.gz How to download an archive and extract it without saving the archive to disk? https://stackoverflow.com/q/2700306/13809128 | tar writes a file header, then the file contents, then the next file header, the next file contents and so on. There is no order associated with the entries and the only optimization you could come up with is skipping the contents of a file, to get to the next header, directly seeking it. For that you need to have a seekable file. But your .gz is compressed, so you have no reliable way to skip ahead to the next entry, meaning that you will have to read (download) the whole file to get the contents. That's the answer: no, you cannot avoid reading/downloading the whole file. So, since you will have to fully download it anyway, you might as well do it once and then solve everything in the local filesystem. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/704530",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/435923/"
]
} |
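Given that conclusion, a minimal sketch of the one-download workflow (the URL and member path are the ones from the question; -c lets an interrupted download resume, provided the server supports range requests):

```
# Download once, resumably
wget -c https://www.cs.cornell.edu/projects/megadepth/dataset/Megadepth_v1/MegaDepth_v1.tar.gz

# Then extract just the folder of interest from the local copy
tar -xzf MegaDepth_v1.tar.gz phoenix/S6/zl548/MegaDepth_v1/0000
```

Extraction still has to scan the whole archive to find the member, but reading a local file is far cheaper than re-downloading 199 GB on every attempt.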
704,534 | Before switching to fish shell, I frequently used various commands in zsh with which some_command . An example might be: $ file `which zsh`/opt/local/bin/zsh: Mach-O 64-bit executable arm64/bin/zsh: Mach-O universal binary with 2 architectures: [x86_64:Mach-O 64-bit executable x86_64- Mach-O 64-bit executable x86_64] [arm64e:Mach-O 64-bit executable arm64e- Mach-O 64-bit executable arm64e]/bin/zsh (for architecture x86_64): Mach-O 64-bit executable x86_64/bin/zsh (for architecture arm64e): Mach-O 64-bit executable arm64e When I try to do this with fish it fails: $ which zsh/opt/local/bin/zsh$ file `which zsh``which: cannot open ``which' (No such file or directory)zsh`: cannot open `zsh`' (No such file or directory) Any idea of why this doesn't work fish as opposed to other more bash-like shells? | fish does not use backticks for command substitutions. Instead one can use parens: file (which zsh) or (in release 3.4.0 and later) file $(which zsh) . These mean the same thing. Check out fish for bash users for other differences. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/704534",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/47012/"
]
} |
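A short interactive sketch in fish itself (output elided; the $() form assumes fish 3.4.0 or later, as the answer notes):

```
file (which zsh)
file $(which zsh)
set zsh_path (which zsh)   # command substitution works in assignments too
echo $zsh_path
```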
704,573 | I have sound disturbances: a quiet crackling in the background and a loud hissing / crackling when I start Firefox, for example. The problem occurs only when playing through the speakers; the speakers are connected directly via AUX and are powered via USB. The speakers are not the problem; these disturbances do not occur under Windows. What I've tried so far in config /etc/pulse/default.pa : Added tsched=0 to the line load-module module-udev-detect && pulseaudio -k Commented out the following line: load-module module-suspend-on-idle && pulseaudio -k In config /etc/pulse/daemon.conf : Set Pulse default-sample-rate to 48000 && pulseaudio -k killall pulseaudio Unplugged the speakers and plugged them in again. System information: Linux system 5.15.0-33-generic #34-Ubuntu SMP Wed May 18 13:34:26 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux Ubuntu 22.04 LTS ii pulseaudio 1:15.99.1+dfsg1-1ubuntu1 amd64 PulseAudio sound server | | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/704573",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/527466/"
]
} |
704,737 | After updating to kernel 5.10.119, /proc/sys/kernel/random/entropy_avail became stuck at 256 and does not change when moving the mouse. It used to be greater than 3000.

```
# cat /proc/sys/kernel/random/entropy_avail
256
```

Also, /proc/sys/kernel/random/poolsize went down to 256. It used to be 4096. Is this a bug? Can you trust the new random number generator of this kernel with only 256 bits of available entropy? | With no intention to compete with Marcus' complete answer, just to explain what happened and justify that what you are noticing is not a bug. The default poolsize is hardcoded in drivers/char/random.c, but something actually changed in 5.10.119. Up to 5.10.118:

```
#define INPUT_POOL_SHIFT 12
#define INPUT_POOL_WORDS (1 << (INPUT_POOL_SHIFT-5))
...
static int sysctl_poolsize = INPUT_POOL_WORDS * 32;
```

(2^7) x 32 = 4096. Under 5.10.119, poolsize is computed differently:

```
POOL_BITS = BLAKE2S_HASH_SIZE * 8
...
static int sysctl_poolsize = POOL_BITS;
```

with BLAKE2S_HASH_SIZE = 32 as defined in include/crypto/blake2s.h, so 8 x 32 = 256. What you are noticing is not a bug; it's a feature! BTW, it's just a default value; feel free to change it if you know it does not fit your needs. Note: this change, which has been in mainline since 5.17-rc1, was backported to 5.10 starting with 5.10.119, and also to the more recent 5.15 LTS starting with 5.15.44. 5.4 does not seem to be affected (yet?) and of course 5.16 never will be. As opportunely suggested by @TooTea in the comments, the reasons for the move can be read as part of the initial commit. In short: increased security (if the state of the pool leaks, its contents could be controlled and entirely zeroed out) and better performance (up to 225% on high-end CPUs). This is achieved by replacing the 4096-bit LFSR with a direct call to BLAKE2s: "BLAKE2s outputs 256 bits, which should give us an appropriate amount of min-entropy accumulation, and a wide enough margin of collision resistance against active attacks." | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/704737",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/528206/"
]
} |
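To see those numbers on a live system (the /proc paths are from the question; the sysctl names are their standard dotted spellings):

```
cat /proc/sys/kernel/random/poolsize        # 256 on >= 5.10.119: BLAKE2S_HASH_SIZE * 8
cat /proc/sys/kernel/random/entropy_avail
sysctl kernel.random.poolsize kernel.random.entropy_avail
```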
704,956 | I'm still confused about the concept of kernel and filesystem. Filesystems contain a table of inodes used to retrieve the different files and directories in different memories. Is this inode table part of the kernel? I mean, is the inode table updated when the kernel mounts another filesystem? Or is it part of the filesystem itself that the kernel reads by somehow using a driver and inode table address? | There is some confusion here because kernel source and documentation is sloppy with how it uses the term 'inode'. The filesystem can be considered as having two parts -- the filesystem code and data in memory, and the filesystem on disk. The filesystem on disk is self contained and has all the non-volatile data and metadata for your files. For most linux filesystems, this includes the inodes on disk along with other metadata and data for the files. But when the filesystem is mounted, the filesystem code also keeps in memory a cached copy of the inodes of files being used. All file activity uses and updates this in memory copy of the inode, so the kernel code really only thinks about this in memory copy, and most kernel documentation doesn't distinguish between the on disk inode and the in memory inode. Also, the in memory inode contains additional ephemeral metadata (like where the cache pages for the file are in memory and which processes have the file open) that is not contained in the on disk copy of the inode. The in memory inode is periodically synchronized and written back to disk. The kernel does not have all the inodes in memory -- just the ones of files in use and files that recently were in use. Eventually inodes in memory get flushed and the memory is released. The inodes on disk are always there. Because file activity in unix is so tightly tied to inodes, filesystems (like vfat) that do not use inodes still have virtual inodes in kernel memory that the filesystem code constructs on the fly. These in memory virtual inodes still hold file metadata that is synchronized to the filesystem on disk as needed. In a traditional unix filesystem, the inode is the key data structure for a file. The filename is just a pointer to the inode, and an inode can have multiple filenames linked to it. In other filesystems that don't use inodes, a file can typically only have one name and the metadata is tied to the filename rather than an inode. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/704956",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/65878/"
]
} |
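The name-versus-inode relationship described in the last paragraph is easy to see with standard tools (a sketch; stat -c is the GNU coreutils syntax):

```
touch file
ls -i file                            # print the inode number the name points to
ln file file2                         # hard link: a second name, same inode
stat -c '%i links=%h %n' file file2   # identical inode, link count 2
df -i /                               # per-filesystem inode totals live on disk
```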
704,960 | By default, gawk pads the content to the specified length with the space character:

```
root@u2004:~# awk 'BEGIN{printf("|%+5s|\n", "abc")}'
|  abc|
root@u2004:~#
```

Is it possible to specify a custom padding character? For example, how can I get |__abc| ? | | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/704960",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/520451/"
]
} |
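A hedged sketch of one common workaround, not taken from the original thread: awk's printf pads only with spaces (or zeros, for numeric conversions), so pad to width with spaces first and then rewrite the leading run:

```
awk 'BEGIN {
    s = "abc"; width = 5; pad = "_"
    head = ""
    if (length(s) < width) {
        head = sprintf("%" (width - length(s)) "s", "")  # the spaces still needed
        gsub(/ /, pad, head)                             # swap them for the pad char
    }
    printf("|%s%s|\n", head, s)                          # -> |__abc|
}'
```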
704,962 | On RHEL, there is a command lid , which lists group users, no matter primary group or secondary group. [root@192 ~]# id user1uid=1000(user1) gid=1000(user1) groups=1000(user1),1001(g1)[root@192 ~]# id user2uid=1001(user2) gid=1002(user2) groups=1002(user2),1001(g1)[root@192 ~]# id user3uid=1002(user3) gid=1001(g1) groups=1001(g1)[root@192 ~]# lid -g g1 user3(uid=1002) user1(uid=1000) user2(uid=1001)[root@192 ~]# But it doesn't exist on Ubuntu. Is there a similar one? | It does exist in Ubuntu, but it’s provided under a different name: sudo libuser-lid -g g1 It’s part of the libuser package, install that if necessary: sudo apt install libuser The reason it’s not named lid is that lid is provided in the id-utils package and has a different purpose. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/704962",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/520451/"
]
} |
705,141 | What's the fastest way to copy/paste between a non-graphical console (<Ctrl><Alt><F...>) and an X session? Right now: I select the text with the mouse on the console (I've installed gpm). Then I paste the text into a temporary file. Finally I switch over to the X session, open the temporary file, and copy/paste its content. Is there an easier way to do this? Can the primary selections of the non-X console and the X session be merged? Ideally I'd want to select the text in the console, then switch over to the X session and paste it (middle-click). Can this be done? | The "best" way to achieve that sort of thing is probably a matter of opinion. The way I prefer uses the backlog of the native terminal. Knowing that the backlog of tty[N] can be accessed via /dev/vcs[N], I simply fire cat /dev/vcs[N] from my Xterm and do whatever I want with the result displayed. Of course, if your Xterm user is different from the owner of the tty you want to dump, you might need to use sudo. BTW, as wisely reported in the comments, you might be annoyed with the formatting due to the absence of line feeds. man vcs will give you possible workarounds:

```
Note that the output does not contain newline characters, so some
processing may be required, like in

    fold -w 81 /dev/vcs3 | lpr

or (horrors)

    setterm -dump 3 -file /proc/self/fd/1
```

| {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/705141",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/152418/"
]
} |
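To get all the way to the goal in the question (console selection available for middle-click paste in X), the dump can be pushed straight into the PRIMARY selection; this assumes the xclip utility is installed:

```
# tty1's screen contents, ready for middle-click pasting in X
sudo cat /dev/vcs1 | xclip -selection primary

# same, with the line breaks restored first
sudo fold -w 81 /dev/vcs1 | xclip -selection primary
```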
705,151 | I'm having trouble with a VM I was using yesterday. After using it, I closed the guest and updated my host machine (Arch Linux). Today I turned on the host and tried to turn on my guest and this message appeared: Error starting domain: unsupported configuration: chardev 'spicevmc' not supported without spice graphics

```
Traceback (most recent call last):
  File "/usr/share/virt-manager/virtManager/asyncjob.py", line 72, in cb_wrapper
    callback(asyncjob, *args, **kwargs)
  File "/usr/share/virt-manager/virtManager/asyncjob.py", line 108, in tmpcb
    callback(*args, **kwargs)
  File "/usr/share/virt-manager/virtManager/object/libvirtobject.py", line 57, in newfn
    ret = fn(self, *args, **kwargs)
  File "/usr/share/virt-manager/virtManager/object/domain.py", line 1384, in startup
    self._backend.create()
  File "/usr/lib/python3.10/site-packages/libvirt.py", line 1352, in create
    raise libvirtError('virDomainCreate() failed')
libvirt.libvirtError: unsupported configuration: chardev 'spicevmc' not supported without spice graphics
```

I'm using a dedicated Nvidia card on a Manjaro guest OS. I'm not a Linux expert. Any idea what might be happening and how to fix this? Any other info you need, let me know. UPDATE #1: I removed the USB spice redirectors from my VM:

```
<redirdev bus="usb" type="spicevmc">
  <address type="usb" bus="0" port="4"/>
</redirdev>
<redirdev bus="usb" type="spicevmc">
  <address type="usb" bus="0" port="5"/>
</redirdev>
```

And now it works fine... I just can't have USB redirectors on the guest OS now... Anyone know why this is and how to fix it? | | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/705151",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/511243/"
]
} |
705,252 | While trying to understand this other question , I encountered /dev/sda0 being mentioned. I have some experience in Linux and I'm used to this scheme where sda , sdb , … are devices and sda1 , sda2 , … , sdb1 , sdb2 , … are partitions (each inside the respective device). In this scheme sda0 , sdb0 , … do not appear. I don't recall seeing sda0 ever. Still sda0 appears on U&L SE , on Super User and elsewhere. Where it appears, it almost always seems to be the first partition, i.e. the partition I would expect to appear as sda1 in the scheme I'm used to. On the other hand in Debian 10 I can see major,minor numbers as 8,1 for sda1 , 8,2 for sda2 etc. Thus if anyone asked me what sda0 might be, I would say 8,0 which is already assigned to sda . This reasoning would make sda0 equivalent to sda , the whole device. I guess these numbers are specific to Linux and they may be different in a non-Linux Unix(-like) OS, so the reasoning may not apply there. In the Internet I have found few appearances of sda0 used as a whole device. The examples are quite obscure though, they may be due to typos or somebody being wrong. Anyway, the question is: is/was /dev/sda0 a standard thing? If so, what is/was it? (can/could it be a whole device?). Under what circumstances is/was it a standard thing? (e.g. specific OS, some old kernel, specific driver, inside a virtual machine, some obsolete(?) udev config or so). I'm hoping for answers that will give me enough insight, so the next time I see /dev/sda0 I will be able to tell to myself: 'Oh, this guy is probably using …'; or maybe: 'Caution! Custom config ahead'. Side note: I have also found mentions of /dev/hda0 and a scheme that starts enumerating from hda1 . I totally cannot tell if it's closely related (a parallel) to what I observed for /dev/sda* or just a coincidence. | I’m not aware of /dev/sda0 ever being a standard device name, even on other Unix systems. And as far as I can tell, references to sda0 are likely mistakes rather than indications of a custom setup. Even in cases where the devices are named in output, users still confuse the numbers. For example, this forum post explicitly lists sda1 to sda3 in the quoted output, yet the title mentions sda0 . Similarly, Why are my drives referred to as '(hda0,msdos5)' etc in grub instead of (hda0,5), (hda0,sda5) etc that you usually see? refers to hda0 (which has also never been a standard device name, as far as I know) even though Grub uses hd0 etc. The Linux kernel publishes the list of device assignments ; this describes how partitions are handled: For partitions, add to the whole disk device number: Value Device Usage 0 /dev/hd? Whole disk 1 /dev/hd?1 First partition 2 /dev/hd?2 Second partition … … … 63 /dev/hd?63 63rd partition SCSI disks ( /dev/sd? ) are handled in the same way: Partitions are handled in the same way as for IDE disks (see major number 3) except that the limit on partitions is 15. As you can see, there’s no provision for a 0-numbered partition; the “0th” device is the whole disk, represented without a numeric suffix. (Note that “SCSI disks”, aka sd? , are used for many non-SCSI devices; see Why CentOS converts ATA bus to scsi bus? for some of the history.) | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/705252",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/108618/"
]
} |
705,263 | I have a file that has 1000 text lines. I want to sort the 4th column at each 20 lines interval and print the output to another file. Can anybody help me with sorting them with awk or sed? Here is an example of the data structure input 1 1.1350 1092.42 0.0000 2 1.4645 846.58 0.0008 3 1.4760 840.01 0.0000 4 1.6586 747.52 0.0006 5 1.6651 744.60 0.0000 6 1.7750 698.51 0.0043 7 1.9216 645.20 0.0062 8 2.1708 571.14 0.0000 9 2.1839 567.71 0.0023 10 2.2582 549.04 0.0000 11 2.2878 541.93 1.1090 12 2.3653 524.17 0.0000 13 2.3712 522.88 0.0852 14 2.3928 518.15 0.0442 15 2.5468 486.82 0.0000 16 2.6504 467.79 0.0000 17 2.6909 460.75 0.0001 18 2.7270 454.65 0.0000 19 2.7367 453.04 0.0004 20 2.7996 442.87 0.0000 1 1.4962 828.64 0.0034 2 1.6848 735.91 0.0001 3 1.6974 730.45 0.0005 4 1.7378 713.47 0.0002 5 1.7385 713.18 0.0007 6 1.8086 685.51 0.0060 7 2.0433 606.78 0.0102 8 2.0607 601.65 0.0032 9 2.0970 591.24 0.0045 10 2.1033 589.48 0.0184 11 2.2396 553.61 0.0203 12 2.2850 542.61 1.1579 13 2.3262 532.99 0.0022 14 2.6288 471.64 0.0039 15 2.6464 468.51 0.0051 16 2.7435 451.92 0.0001 17 2.7492 450.98 0.0002 18 2.8945 428.34 0.0010 19 2.9344 422.52 0.0001 20 2.9447 421.04 0.0007 expected output: 11 2.2878 541.93 1.1090 12 2.2850 542.61 1.1579 Each n interval has only one highest (unique) value. | Via awk : NR%20==1 {max=$4 ; line=$0}{ if ($4>max) {max=$4;line=$0} }NR%20==0 {print line} | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/705263",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/528753/"
]
} |
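A minimal usage sketch for the script above, assuming it is saved under the hypothetical name maxper20.awk:

```
awk -f maxper20.awk input.txt > output.txt

# or inline:
awk 'NR%20==1 {max=$4; line=$0} {if ($4>max) {max=$4; line=$0}} NR%20==0 {print line}' input.txt > output.txt
```

Note that the NR%20==0 rule means a final block shorter than 20 lines is never printed.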
705,447 | I'm used to using cat > /path/to/file << EOF when I, in a bash script, printed more than one line into a file... I was checking old code of my company and I found the cat EOT instruction instead of the cat EOF I'm used to (please notice the T instead of the F at the end of it) and curiosity bit me. I did some quick research and only found this other question, but I think it was not related to what I wanted to know. I did some tests with the following code:

```
password=hello
cat > ./hello.txt << EOT
authentication {
    auth_type PASS
    auth_pass $password
}
EOT
```

And I get the exact same output as when I use EOF instead of EOT. The output is, as expected:

```
root@test_VM:~# bash test.sh && cat hello.txt
authentication {
    auth_type PASS
    auth_pass hello
}
```

So the questions are: What are the differences between the use of EOT and EOF? When should I use one over the other? | There is no difference, and no particular meaning to those two strings, or any others. It's just an arbitrary terminator and you can use almost any string you like. Of course, the data itself can't contain that particular line, so if your data contains e.g. a shell script that has another here-doc, you'll need to use different terminators in both. Using somewhat descriptive strings may be useful for any future readers of the script. E.g.

```
cat > test.sh <<END_OF_SCRIPT
cat <<EOF
hello
EOF
END_OF_SCRIPT
```

produces test.sh which, when executed through the shell, prints hello. There is a difference if you quote the terminator in the line that starts the here-doc, though; it'll prevent expansions in the here-doc data. This prints $i, not whatever the value of the variable is:

```
cat << 'EOF'
$i
EOF
```

See also: 3.6.6 Here Documents in Bash's manual, 2.7.4 Here-Document in the POSIX Shell Language description, and Here Document in wiki.wooledge.org | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/705447",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/260722/"
]
} |
705,796 | Given this input { "attributes": { "f1": "one", "f2": "two", "f3": "three" }} I'd like to be able to specify on the jq command line which fields to retrieve and spit their values out space-delimited on the same line. jq -r 'some-filter' --argjson fields '["f3","f1"]' test.json Result: three one jq -r 'some-filter' --argjson fields '["f2"]' test.json Result: two I've tried several variations of this filter, with no luck .attributes | $fields | . as $f | "\($f)" | You could do it using variable-symbolic binding (as you attempted) as jq -r --argjson fields '["f3","f1"]' '[$fields[] as $f | .attributes | .[$f]] | join(" ")' test.json or slightly more compactly jq -r --argjson fields '["f3","f1"]' '[$fields[] as $f | .attributes[$f]] | join(" ")' test.json but at least with jq 1.6 it seems you can just pass the $fields[] iterator to a generic object index of the form .[<string>] jq -r --argjson fields '["f3","f1"]' '[.attributes[$fields[]]] | join(" ")' test.json ex. $ jq -r --argjson fields '["f3","f1"]' '[.attributes[$fields[]]] | join(" ")' test.jsonthree one$ jq -r --argjson fields '["f2"]' '[.attributes[$fields[]]] | join(" ")' test.jsontwo | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/705796",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/526704/"
]
} |
705,818 | Very simple hypothetical question, I've reached the limitation of sed and need to change my sed script to perl . So for sed conditional replacement of sed '/condition/ s/xx/yy/' How to do it in perl? For e.g. how to do the following in perl? seq 6 > /tmp/tf$ paste -d '' /tmp/tf /tmp/tf | sed -E '/[135]/s/^(.)(.)$/\1.\2-/'1.1-223.3-445.5-66$ paste -d '' /tmp/tf /tmp/tf | perl -pe 's/$&/$1.$2-/ if /^([135])(.)$/'.-22.-44.-66 | If you aim to reduce typing and to increase the similarity between how to do this in Perl and sed . In that case, both sed and Perl allow the reuse of the most recently matched regular expression by specifying an empty expression. sed '/^\([135]\)\(.\)$/ s//\1.\2-/' perl -pe '/^([135])(.)$/ && s//$1.$2-/' The empty regular expression in the s/// command will reuse the expression from the preceding test (in general, the most recent matching expression). In Perl, we must add && between the test and the substitution to let it act as a short-circuit if -statement. In sed , the first expression simply acts as the address of the substitution command. In general, sed '/condition/ s/xx/yy/' ... would be "the same as" (taking slightly different syntax and regular expression flavors into account) perl -pe '/condition/ && s/xx/yy/' In this case, though, it would be simpler just to apply the substitution directly: sed 's/^\([135]\)\(.\)$/\1.\2-/' perl -pe 's/^([135])(.)$/$1.$2-/' | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/705818",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/374303/"
]
} |
705,842 | I need accomplish a task of filtering activity of bots in the log file. Solution should only show records meeting following criteria user logged in, user changed password, user logged off within same second. those actions (log in, change password, log off) happened one after another with no other entries in between. Input data example [a lot of data]Mon, 22 Aug 2016 13:15:39 +0200|178.57.66.225|fxsciaqulmlk| - |user logged in| -Mon, 22 Aug 2016 13:15:39 +0200|178.57.66.225|fxsciaqulmlk| - |user changed password| -Mon, 22 Aug 2016 13:15:39 +0200|178.57.66.225|fxsciaqulmlk| - |user logged off| -Mon, 22 Aug 2016 13:15:42 +0200|178.57.66.225|faaaaaa11111| - |user logged in| -Mon, 22 Aug 2016 13:15:40 +0200|178.57.66.215|terdsfsdfsdf| - |user logged in| -Mon, 22 Aug 2016 13:15:49 +0200|178.57.66.215|terdsfsdfsdf| - |user changed password| -Mon, 22 Aug 2016 13:15:49 +0200|178.57.66.215|terdsfsdfsdf| - |user logged off| -Mon, 22 Aug 2016 13:15:59 +0200|178.57.66.205|erdsfsdfsdf| - |user logged in| -Mon, 22 Aug 2016 13:15:59 +0200|178.57.66.205|erdsfsdfsdf| - |user changed password| -Mon, 22 Aug 2016 13:15:59 +0200|178.57.66.205|erdsfsdfsdf| - |user logged off| -Mon, 22 Aug 2016 13:17:50 +0200|178.57.66.205|abcbbabab| - |user logged in| -Mon, 22 Aug 2016 13:17:50 +0200|178.57.66.205|abcbbabab| - |user changed password| -Mon, 22 Aug 2016 13:17:50 +0200|178.57.66.205|abcbbabab| - |user changed profile| -Mon, 22 Aug 2016 13:17:50 +0200|178.57.66.205|abcbbabab| - |user logged off| -Mon, 22 Aug 2016 13:19:19 +0200|178.56.66.225|fxsciaqulmla| - |user logged in| -Mon, 22 Aug 2016 13:19:19 +0200|178.56.66.225|fxsciaqulmla| - |user changed password| -Mon, 22 Aug 2016 13:19:19 +0200|178.56.66.225|fxsciaqulmla| - |user logged off| -Mon, 22 Aug 2016 13:20:42 +0200|178.57.67.225|faaaa0a11111| - |user logged in| -[a lot of data] I've written the code below in order to complete the task awk 'BEGIN { FS=" " } { c[$5]++; l[$5,c[$5]]=$0 } END { for (i in c) { if (c[i] == 3) for (j = 1 ; j <= c[i]; j++) print l[i,j] } }' $1 Usage: ./parse_log.sh logfile.log Output: Mon, 22 Aug 2016 13:15:39 +0200|178.57.66.225|fxsciaqulmlk| - |user logged in| -Mon, 22 Aug 2016 13:15:39 +0200|178.57.66.225|fxsciaqulmlk| - |user changed password| -Mon, 22 Aug 2016 13:15:39 +0200|178.57.66.225|fxsciaqulmlk| - |user logged off| -Mon, 22 Aug 2016 13:15:59 +0200|178.57.66.205|erdsfsdfsdf| - |user logged in| -Mon, 22 Aug 2016 13:15:59 +0200|178.57.66.205|erdsfsdfsdf| - |user changed password| -Mon, 22 Aug 2016 13:15:59 +0200|178.57.66.205|erdsfsdfsdf| - |user logged off| -Mon, 22 Aug 2016 13:19:19 +0200|178.56.66.225|fxsciaqulmla| - |user logged in| -Mon, 22 Aug 2016 13:19:19 +0200|178.56.66.225|fxsciaqulmla| - |user changed password| -Mon, 22 Aug 2016 13:19:19 +0200|178.56.66.225|fxsciaqulmla| - |user logged off| - But I want to know what alternative written in Perl or Python (with minimum usage of external libs) would look? | The solution itself is written in Python 3. 
```
#!/usr/bin/env python3
import sys
import re
from collections import defaultdict

column_delimiter = sys.argv[1]
column = int(sys.argv[2]) - 1
records = defaultdict(list)

with open(sys.argv[3]) as inputfile:
    for lines in inputfile:
        line = lines.rstrip('\n')
        row_record = line.split(column_delimiter)
        records[row_record[column]].append(line)

for timestamps in records.values():
    if len(timestamps) == 3:
        for i in range(len(timestamps)):
            if re.search('logged in|changed password|logged off', timestamps[i]):
                print(timestamps[i])
```

Usage: parse_log.py ' ' 5 logfile.log The Python code is much easier to read and to understand. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/705842",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/222727/"
]
} |
705,847 | In less, for navigation purposes, this tutorial, Less Command in Linux, indicates: g Go to the first line in the file. p Go to the beginning of the file. I tested both, and of course the result is the same (using G to go to the bottom). But at first glance, if g and G do the opposite of each other and are enough to go to the first line (top) and last line (bottom) respectively, why is there a p option if it does the same as g? | Heh, this is mischaracterizing what these commands actually do. p is for "percentage". Try typing 20p and you'll jump to 20% of the file length. Nifty! 20g works too, but it goes to the twentieth line. Simply typing g or p just implies 0g or 0p; because the zeroth line and the zeroth byte are both the file's beginning, that works out as the same. You can test this rather easily; I'm assuming you're using zsh:

```
#!/usr/bin/zsh
(for i in {1..1000}; echo $i) | less
```

will display 1000 numbered lines, and 33g will jump to line 33, but 33.3p will jump to line 333 :) | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/705847",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/383045/"
]
} |
705,890 | I was wondering how to indent, recursively, more and more lines of poetry following a custom rule. For instance, let's say we have:

```
OF Mans First Disobedience, and the Fruit
Of that Forbidden Tree, whose mortal tast
Brought Death into the World, and all our woe,
With loss of Eden, till one greater Man
Restore us, and regain the blissful Seat,
Sing Heav'nly Muse, that on the secret top
Of Oreb, or of Sinai, didst inspire
That Shepherd, who first taught the chosen Seed,
In the Beginning how the Heav'ns and Earth
Rose out of Chaos: Or if Sion Hill
Delight thee more, and Siloa's Brook that flow'd
Fast by the Oracle of God; I thence
Invoke thy aid to my adventrous Song,
That with no middle flight intends to soar
Above th' Aonian Mount, while it pursues
Things unattempted yet in Prose or Rhime.
And chiefly Thou O Spirit, that dost prefer
Before all Temples th' upright heart and pure,
Instruct me, for Thou know'st; Thou from the first
Wast present, and with mighty wings outspread.
```

And we want to indent it, adding 3 spaces to each of the three lines that follow the first line of every group of four, in this way:

```
OF Mans First Disobedience, and the Fruit
   Of that Forbidden Tree, whose mortal tast
   Brought Death into the World, and all our woe,
   With loss of Eden, till one greater Man
Restore us, and regain the blissful Seat,
   Sing Heav'nly Muse, that on the secret top
   Of Oreb, or of Sinai, didst inspire
   That Shepherd, who first taught the chosen Seed,
In the Beginning how the Heav'ns and Earth
   Rose out of Chaos: Or if Sion Hill
   Delight thee more, and Siloa's Brook that flow'd
   Fast by the Oracle of God; I thence
Invoke thy aid to my adventrous Song,
   That with no middle flight intends to soar
   Above th' Aonian Mount, while it pursues
   Things unattempted yet in Prose or Rhime.
And chiefly Thou O Spirit, that dost prefer
   Before all Temples th' upright heart and pure,
   Instruct me, for Thou know'st; Thou from the first
   Wast present, and with mighty wings outspread
```

What can be the simplest way to achieve this goal? | If you just want to leave the first line, and then every 4th line after that, unindented and indent the rest, you can use awk:

```
$ awk 'NR % 4 != 1{$0="    "$0};1' file
OF Mans First Disobedience, and the Fruit
    Of that Forbidden Tree, whose mortal tast
    Brought Death into the World, and all our woe,
    With loss of Eden, till one greater Man
Restore us, and regain the blissful Seat,
    Sing Heav'nly Muse, that on the secret top
    Of Oreb, or of Sinai, didst inspire
    That Shepherd, who first taught the chosen Seed,
In the Beginning how the Heav'ns and Earth
    Rose out of Chaos: Or if Sion Hill
    Delight thee more, and Siloa's Brook that flow'd
    Fast by the Oracle of God; I thence
Invoke thy aid to my adventrous Song,
    That with no middle flight intends to soar
    Above th' Aonian Mount, while it pursues
    Things unattempted yet in Prose or Rhime.
And chiefly Thou O Spirit, that dost prefer
    Before all Temples th' upright heart and pure,
    Instruct me, for Thou know'st; Thou from the first
    Wast present, and with mighty wings outspread.
```

Or perl:

```
$ perl -pe 's/^/    / if $. % 4 != 1' file
OF Mans First Disobedience, and the Fruit
    Of that Forbidden Tree, whose mortal tast
    Brought Death into the World, and all our woe,
    With loss of Eden, till one greater Man
Restore us, and regain the blissful Seat,
    Sing Heav'nly Muse, that on the secret top
    Of Oreb, or of Sinai, didst inspire
    That Shepherd, who first taught the chosen Seed,
In the Beginning how the Heav'ns and Earth
    Rose out of Chaos: Or if Sion Hill
    Delight thee more, and Siloa's Brook that flow'd
    Fast by the Oracle of God; I thence
Invoke thy aid to my adventrous Song,
    That with no middle flight intends to soar
    Above th' Aonian Mount, while it pursues
    Things unattempted yet in Prose or Rhime.
And chiefly Thou O Spirit, that dost prefer
    Before all Temples th' upright heart and pure,
    Instruct me, for Thou know'st; Thou from the first
    Wast present, and with mighty wings outspread.
```

In both cases, we are adding 4 spaces to the beginning of the line if the current line number modulo 4 is not equal to 1, which means we will do it for all lines except the 1st, the 5th, the 9th and so on. In awk, NR is the line number, and $0 is the contents of the line, so NR % 4 != 1{$0="    "$0}; means "add 4 spaces to the beginning of the line when the current line number modulo 4 is not equal to 1". The final 1; is just shorthand for "print". In Perl, $. is the current line number, and s/old/new/ is the substitution operator which will replace the first occurrence of old with new. So s/^/    / if $. % 4 != 1 means "replace the beginning of the line ( ^ ) with four spaces if the current line number modulo 4 is not equal to 1". The -p means "print each line of the input file after applying the script provided by -e ". Here is the exact same perl command in a more verbose and easier to understand version:

```
perl -e '
open(my $fileHandle, "<", $ARGV[0]);
my $lineCount = 0;
while (my $line = <$fileHandle>) {
    $lineCount += 1;
    if ( $lineCount % 4 != 1 ){
        ## or $line = "    " . $line
        $line =~ s/^/    /
    }
    print "$line";
}' file
```

Or, almost identical:

```
perl -e '
open(my $fileHandle, "<", $ARGV[0]);
my $lineCount = 0;
while (my $line = <$fileHandle>) {
    $lineCount += 1;
    unless ( $lineCount % 4 == 1 ){
        $line = "    " . $line
    }
    print "$line";
}' file
```

| {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/705890",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/14962/"
]
} |
706,287 | I have a file that looks like this one:

```
10-04-2022 00:39:13,36707,1455008753,32
11-05-2022 00:39:13,36708,1555008753,26
21-05-2022 00:39:13,36708,1555408753,15
12-06-2022 00:39:13,36709,1655008753
```

Because values in the last field are related to the running time of the next line, I would like to shift them down, this way, for use with gnuplot:

```
10-04-2022 00:39:13,36707,1455008753,
11-05-2022 00:39:13,36708,1555008753,32
21-05-2022 00:39:13,36708,1555408753,26
12-06-2022 00:39:13,36709,1655008753,15
```

| A simple awk script could do this:

```
awk 'BEGIN {FS=OFS=","} { tmp=$4; $4=save; print; save=tmp; }' < input > output
```

This saves the fourth field into a temporary variable, replaces the fourth field with the previously-saved value, then prints the new line. Once printed, it saves the previous value of that fourth field for the next iteration. On the first line, the "save"d value is empty (the default awk behavior), which achieves the desired result. Addressing the field as $4 rather than $NF matters on the last line of the sample input, which has only three fields: assigning to $4 appends the saved value as a new fourth field instead of overwriting the third. The "BEGIN" section sets the Field Separator (used for splitting the input) and the Output Field Separator (used when printing the line/fields back out) to a comma. See your local awk man page, various online references, or the Open Group Base Specifications for awk to learn more. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/706287",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/508807/"
]
} |
706,336 | find uses {} to indicate "this file"(ish). You can feed a series of files into myprog , thus: find ./tests/ -name *.in -exec myprog -i {} \; Is there a way to modify the name in {} ? In my case, I use -i to define an input file and -o for the output, and I would like the output to go to a slightly modified file name, such that "a.in" would produce "a.out". Ideally, I would like something to the effect of: find ./tests/ -name *.in -exec myprog -i {} -o {}.out \; Additionally, the output directory may be a different path. In this case, the output would not go to /tests/ but perhaps /tests_20220615/ . I've looked at numerous pages with examples of find , and nothing like this shows up, so perhaps "no"? I know there are ways to do this using loops in bash or zsh, but the list of possible gotchas is great ("nullglob"?!), and if find can do this it seems much safer to this noob. | find won't let you modify the paths of the files, some find implementations would not let you concatenate {} with something else, some don't even support passing {} more than once, but you can always run some command such as a shell that can do transformations: find ./tests/ -name '*.in' -type f -exec sh -c ' ret=0 for file do myprog -i "$file" -o "${file%.*}.out" || ret="$?" done exit "$ret"' sh {} + Above, instead of executing myprog directly, we're executing sh and passing it some inline code as well as the paths of the found file (with {} + instead of {} ';' to pass as many as possible). sh in turn loops over those files, and calls myprog after having applied transformations on them like ${file%.*} to remove the extension. Note the quotes around *.in . Without them the shell you're running that find command in would try to expand it to the list of files in the current directory whose name ends in .in instead of passing that pattern literally to find . Above, we tell sh to exit with a failure exit status if any of the myprog invocations fails. That failure will be reflected in the exit status of find , so you can take action if need be or the script to exit if the errexit option is enabled. It's not possible to abort upon the first failure of myprog though. If using the zsh shell, you could also do the finding internally: set -o errexitfor file (./tests/**/*.in(ND.)) myprog -i $file -o $file:r.out Would exit upon the first failure and would also process the list in lexical order (you can always add the oN glob qualifier to disable that sorting). Another approach is to get find to print the files, pipe it to some command that does the transformations and then pipe to xargs . For instance: find ./tests/ -name '*.in' -type f -print0 | gawk -v RS='\0' -v ORS='\0' -v OFS='\0' ' { filein = fileout = $0; sub(/\.in$/, ".out", fileout) print "-i", filein, "-o", fileout }' | xargs -r0 -n4 myprog Again xargs will return with a non-zero exit status if any of the myprog invocations fails. GNU xargs can run several invocations in parallel with its -P option. Or you could get perl to post-process it and let it do the running: find ./tests/ -name '*.in' -type f -print0 | perl -l -0ne ' system("myprog", "-i", $_, "-o", s/\.in\Z/.out/r) == 0 or $ret = 1; END {exit $ret}' Beware that the approaches that post-process the output of find will mask its failure exit status if any (like when it fails to enter some directories) unless you set the pipefail of your shell (where supported). Using pipes also have an implication on what myprog 's standard input will be (in case it needs to prompt the user for instance). 
GNU xargs opens stdin on /dev/null; some other implementations, and the perl approach, leave it as is, which means it will be the pipe from find / gawk . | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/706336",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/529863/"
]
} |
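To also write the results to a different directory, as the question's tests_20220615 example asks, the same sh -c pattern from the answer needs one more transformation (a sketch; note that it flattens any subdirectory structure under tests/ into the single output directory):

```
find ./tests/ -name '*.in' -type f -exec sh -c '
  outdir=./tests_20220615
  mkdir -p "$outdir"
  for f do
    base=${f##*/}                       # strip the directory part
    myprog -i "$f" -o "$outdir/${base%.in}.out"
  done' sh {} +
```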
706,377 | I have a SanDisk Ultra Plus 64GB microSDXC card that I used to run Ubuntu Server on a Raspberry Pi. Now I have to format this card but I cannot succeed. I've tried many things, including: Formatting via the explorer on Windows: unexpected error. Deleting partitions with the Windows partition manager: unexpected error. Diskpart cmd: unexpected error. Trying to flash another Ubuntu image via Raspberry Pi Imager: unexpected error. Formatting via Rufus: unexpected error. Deleting partitions and making a new one via gparted: tells me about overlapping volumes and returns to the initial state. Listing via fdisk: shows me that there is no overlap; sdb1 ends at 526335 and sdb2 starts at 526336. Deleting partitions via gparted without creating a new one: after saving it shows success, but the disk reloads and the partitions come back. So far as I can tell, the card is not in read-only mode, and I do not use an adapter that has a switch. | Very likely your SD card has failed. SD cards have a limited number of write cycles per block. Most of them use wear leveling, which tries to rearrange blocks to spread writes out evenly across all blocks to extend the SD card's life. But once the number of write cycles has been used up, blocks are no longer writable. A typical failure mode for an SD card is for it to silently stop accepting new writes. It sounds like this is what has happened to your card. The only solution is to replace the SD card. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/706377",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/529922/"
]
} |
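A way to confirm the silent-write failure mode described above — destructive, but only worth running on a card you already consider dead, and /dev/sdX is a placeholder you must triple-check first:

printf 'marker' | sudo dd of=/dev/sdX bs=512 seek=2048 conv=fsync
# unplug the card, plug it back in, then read the same block back:
sudo dd if=/dev/sdX bs=512 skip=2048 count=1 status=none | head -c 6
# a healthy card prints "marker"; a worn-out one returns the old bytes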
706,447 | #!/bin/bash if [ $# -gt 0 ]; then snum=( $@ ); echo $snum; fi When I run the script like ./testscript.sh 1234 4568 the output of the echo command is only 1234 , so I guess I am not building an array of all positional arguments? #!/bin/bash if [ $# -gt 0 ]; then snum="$@"; echo $snum; fi and run ./testscript.sh 1234 4568 the output is 1234 4568 . I am wondering why snum=( $@ ) is only taking the first positional argument? | Use "$@" (incl. double quotes) in a list context to get a list of positional parameters, individually quoted. Use "$*" (incl. double quotes) in a scalar context to get the positional parameters concatenated into a single string with the first character of $IFS (usually a space) as the delimiter. Using $@ unquoted (as in your first example) or using "$@" in a scalar context (as in your second example) rarely makes sense. In the bash shell, using "$@" in a scalar context is the same as using "$*" with the first character of $IFS set to a space. When using snum=( "$@" ) , you create the array snum . If you access the variable as $snum , you will get the first element of the array. It is, in effect, the same as accessing ${snum[0]} . Using "${snum[@]}" gives you a list of the individually quoted elements, in a similar manner as "$@" does. Using "${snum[*]}" gives you the equivalent of "$*" , but for the array snum . Assuming you want to create an array, snum , from the list of positional parameters and then print that array if it's not empty, you may use #!/bin/bash snum=( "$@" ); if [ "${#snum[@]}" -gt 0 ]; then printf '%s\n' "${snum[@]}"; fi This prints the elements of snum on separate lines if the script was given arguments. Example run: bash-5.1$ ./script 1 2 3 "hello world" 4 prints 1 2 3 hello world 4 , each on its own line. Notice that the hello world argument is kept as a single argument, which would not be the case had you forgotten the quotes around $@ . To print the list of positional parameters as a single string, delimited by colons, the colons are inserted between the positional parameters by modifying the $IFS string: #!/bin/bash IFS=:; snum="$*"; if [ -n "$snum" ]; then printf '%s\n' "$snum"; fi The difference here is that snum is now a single string, not an array of elements. The script outputs the string if it is non-empty, i.e., if the script was given at least one non-empty argument. Or, modifying our first example only slightly to keep using snum as an array: #!/bin/bash snum=( "$@" ); if [ "${#snum[@]}" -gt 0 ]; then IFS=: printf '%s\n' "${snum[*]}"; fi Example run: bash-5.1$ ./script 1 2 3 "hello world" 4 prints 1:2:3:hello world:4 | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/706447",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/364311/"
]
} |
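A short demo of the point above — $snum is just ${snum[0]}, and the quoting decides the element count (a sketch, runnable in bash):

set -- 1234 4568 "hello world"
snum=( $@ )            # unquoted: word-splitting gives 4 elements
echo "${#snum[@]}"     # 4
snum=( "$@" )          # quoted: one element per argument, 3 elements
echo "${#snum[@]}"     # 3
echo "$snum"           # 1234 -- same as ${snum[0]}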
706,576 | I am looking for the "right" way to dump the contents of audio CDs to hard disk without losing any information like CD identifiers, cue lists, etc... I am not searching for a all-in-one solution from CD to compressed audio, like ABCDE for example, because I can't be certain at this time about all the possible future audio formats and data structures that I will ever need in the future. It is also not necessary that online CD information sources, like CDDB or Musicbrainz are queried at dump time. The idea is more to get a full, perfect-quality, lossless (obviously) dump of the CDs, in a set of files that I can post-process as many times as I need, with different parameters of various existing or future software, for batch-converting part or all of the library into a particular format. I mainly want to avoid having to play the physical disk-jockey with well over one thousand CDs more than once. What would be the optimal set of programs and options to get a binary dump of the whole audio data, as well as cue times, CD-Text data, CD identifiers, etc... well, anything that is on the disk ? I have programming skills and writing the necessary scripts to batch-process the contents of the dump is not an issue, as long as we are speaking about linear audio (.wav) and text files. I am also wondering if it would be better to get whole-CD audio as a single track or individual tracks. I have many live recordings, for which it is probably more useful to have single-track, because it is usually the way I listen to them. Any advice on that would also be appreciated. So far, I have experimented with cdda2wav and cdrdao, and I found the following set of commands probably give me a lot of the data I need : cdda2wav -D /dev/cdr0 -Bcdda2wav -D /dev/cdr0 -t all -cuefilecdda2wav -D /dev/cdr0 -Jcd-info -C /dev/cdr0cdrdao read-cd toc_file Running all these commands result in a lot of redundant information being dumped, and of course in reading the whole CD more than once. I wasn't able to clearly determine the data provided by one of these commands to be a strict subset of another one, hence my question. I use Linux slackware 15.0 on a desktop with 4 SATA CD drives.In addition to the above, do you think using more than a single CD drive, to dump up to 4 CDs in parallel (saving time) would result in a higher risk of errors (on scratched media, for example) ? | To extract as much information as possible from a CD with audio tracks, on a current CD drive, you should use cdrdao with any subchannel information supported by your drive: cdrdao read-cd --read-raw --read-subchan rw_raw tocfile You may need to specify a different driver with the --driver option, depending on the drive you have; see the cdrdao README file for details. This will include CD-TEXT data if your drive supports it. Note that if you want to write a CD with CD-TEXT data, you may need to explicitly enable driver option 0x10 if you’re using the generic-mmc driver. cdrdao has a database of known drives but it might not include the drive you’re using. If the CD you’re reading isn’t in great condition, or the drive itself isn’t great, you may want to avoid rw_raw to at least have a chance of detecting errors. In general you should read CDs in disk-at-once mode, not individual tracks; DAO will preserve the original tracks, with whatever gaps were present (if any), along with any extra information at the beginning or end of the CD. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/706576",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/530149/"
]
} |
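For the thousand-disc batch the asker describes, a rough per-disc wrapper around the command above (the device path and the directory naming are my own choices, not from the answer, and the second pass can be dropped if the cdrdao image is all you need):

dev=/dev/sr0
dir=rips/$(date +%Y%m%d-%H%M%S)
mkdir -p "$dir" && cd "$dir" || exit 1
cdrdao read-cd --device "$dev" --read-raw --read-subchan rw_raw disc.toc
cdda2wav -D "$dev" -B -cuefile          # per-track WAVs plus .inf/.cue files
eject "$dev"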
706,712 | There is a description of a source command: source is a bash shell built-in command that executes the content of the file passed as argument, in the current shell. It has a synonym in . (period). For example, for the sake of an experiment, I want to export a variable from a different shell zsh in my case (running the command in bash ): $ zsh -c "export test=$(echo "hello world")"$ echo $test$ It does not work because the command runs in a zsh subshell and is not executed directly in bash . If I create and source a script this way: #!/home/linuxbrew/.linuxbrew/bin/zshexport test=$(echo "hello world") $ chmod 777 test.zsh$ source test.zsh$ echo $testhello world It works alright. The question is how to source a command without using a script since source can be run with files only?I want to achieve something like that: source zsh -c "export test=$(echo "hello world")" If it is not possible, please explain why. | The answer to the question you asked is: use eval . eval "export test=$(echo "hello world")" Do take care that the argument to eval is executed as a piece of shell code. It's usually tricky to get right. For example, the code above sets test to the 5-character string hello , not to the 11-character string hello world . This mimics your original example with zsh -c "export test=$(echo "hello world")" , which also sets test to hello . The reason is that in both cases, the outer shell runs the code and determines that it's calling a command with the argument export test=hello world . In one case the command is the external program zsh with a first argument -c , and in the other case the command is the builtin eval . If you wanted to set test to hello world , you'd have to arrange for the inner shell evaluation to get an input like export test="hello world" or export test='hello world' or export test=hello\ world . Note that there is no direct method that works for arbitrary characters in the value: surrounding with single quotes fails if the string contains single quotes; surrounding with double quotes fails if the string contains any of `"\[ ; adding backslashes requires adding them before each of bunch of characters, and doesn't work for newlines. How to work around this difficulty depends on what you want to do. Many answers are to be found among quoting questions on this site. However, this may not be what you want — it's not clear what you actually want to do. Note that the code is executed by the same shell. That's the whole point of source or eval : to execute code inside the same program. You're showing an example with source where you read a file that starts with a shebang, but that shebang line is ignored, since the file isn't being executed: only its contents are. If you want to construct the value of a variable using one program (e.g. zsh) while using another shell (e.g. bash), forget about all that and make the other program print the value. The simple way is: test=$(zsh -c 'echo "hello world"') Note that command substitution removes trailing newlines. So the value of test will be the 11-character string hello world , without a newline at the end. If you want to get the full output from echo with the trailing newline, add another character (and optionally a newline) at the end, then strip it out. test=$(zsh -c 'echo "hello world"'; echo .); test=${test%.} | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/706712",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/365419/"
]
} |
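For the arbitrary-value case warned about above, bash's printf %q produces a string quoted for reuse as shell input, which sidesteps the hand-quoting problem (a sketch):

value='hello world; with "quotes" and a $dollar'
eval "export test=$(printf '%q' "$value")"
printf '%s\n' "$test"    # the value survives intact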
706,717 | I'd like to know if it's possible to just repeat part of a command.I.e. if I do ls /path/to/somewhere -a , I only want to remove ls and -a . I know that if I do !! it repeats the previous command (appending the last command to whichever command you write before it) and that if I do !$ it includes the last part of the string, but I'd like to know if it's possible to re-use only the e.g. middle part of the previous command. | Sure, use !^ e.g. $ ls /path/to/somewhere -als: cannot access '/path/to/somewhere': No such file or directory$ echo !^echo /path/to/somewhere/path/to/somewhere$ Alternatively (incurring an extra keystroke) you could use !:1 . $ ls /path/to/somewhere -als: cannot access '/path/to/somewhere': No such file or directory$ echo !:1echo /path/to/somewhere/path/to/somewhere$ This is fully documented in the Event Designators and Word Designators sections of the bash man page . | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/706717",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/469646/"
]
} |
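For reference, the neighbouring word designators, each used immediately after a command like ls /path/to/somewhere -a -l:

!^       # first argument:   /path/to/somewhere
!$       # last argument:    -l
!:2      # second argument:  -a
!:1-2    # arguments 1..2:   /path/to/somewhere -a
!*       # all arguments:    /path/to/somewhere -a -l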
706,729 | I want to remove from the PATH environment variable all of the entries that contain a certain word. How can I do that? | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/706729",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/530428/"
]
} |
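One way to do this in bash — a sketch of my own, with remove_from_path as a hypothetical helper name:

remove_from_path() {
  local word=$1 dir new=
  local IFS=':'
  set -f                                  # no globbing while we split
  for dir in $PATH; do
    case $dir in
      *"$word"*) ;;                       # drop entries containing the word
      *) new=${new:+$new:}$dir ;;
    esac
  done
  set +f
  PATH=$new
}
remove_from_path conda    # e.g. strip every entry mentioning "conda"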
706,780 | From time to time I see similar framing for comments in bash scripts: #!/bin/bash#===================================================================================## FILE: stale-links.sh## USAGE: stale-links.sh [-d] [-l] [-oD logfile] [-h] [starting directories]## DESCRIPTION: List and/or delete all stale links in directory trees.# The default starting directory is the current directory.# Don’t descend directories on other filesystems.#=================================================================================== Is there any program to generate such a decoration for comments or do people usually create it manually? P.S. After some search, I found similar threads: How can I create a message box from the command line? bash script , echo output in box | I really like Thomas Jensen's boxes . It does a lot more than just the comment boxes you describe, and more than just for shell scripts. It's a command-line utility and it also integrates with several text editors , including my personal favorite. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/706780",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/365419/"
]
} |
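If you would rather not install anything, a throwaway generator is short (the box function name is my own):

box() {
  printf '#%s#\n' "$(printf '=%.0s' {1..75})"
  while IFS= read -r line; do
    printf '# %-73s #\n' "$line"
  done
  printf '#%s#\n' "$(printf '=%.0s' {1..75})"
}
box <<'EOF'
FILE:  stale-links.sh
USAGE: stale-links.sh [-d] [-l] [-oD logfile] [-h] [dirs]
EOF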
706,782 | I have a CSV file with 150+ columns, with the file separator symbol as the field separator. The problem is that one of the columns contains newline characters, and I want to remove those. Input data Output data | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/706782",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/530495/"
]
} |
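Since the sample data above was lost, only a sketch of the usual approach is possible: accumulate physical lines until a logical record reaches the expected field count. It assumes the separator is the ASCII FS character (0x1C) and that complete records have exactly 150 fields — adjust both to your data:

gawk 'BEGIN { FS = "\x1c"; want = 150 }
{
  if (buf == "") { buf = $0; n = NF }
  else           { buf = buf " " $0; n += NF - 1 }   # stray newline -> space
  if (n >= want) { print buf; buf = ""; n = 0 }
}' input > output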
706,828 | I am testing if the passwd command could run if the setuid bit is disabled. I disabled the setuid by running the following command: chmod 0554 /bin/passwd After doing so, I tested if the passwd command would still be able to function. But as expected, it didn't. Instead, it gave me the following errors: passwd: Authentication token manipulation error passwd: password unchanged I tried to look for these error messages in the source code, but I couldn't find them in this file. Can anyone please direct me to find the source file that contains the error messages shown above? | The first error message is from the PAM library, see e.g. https://github.com/linux-pam/linux-pam/blob/master/libpam/pam_strerror.c const char *pam_strerror(pam_handle_t *pamh UNUSED, int errnum){ switch (errnum) {/* ... */ case PAM_AUTHTOK_ERR: return _("Authentication token manipulation error");/* ... */ } return _("Unknown PAM error");} A search in the linked Git repository finds the second error message in https://github.com/shadow-maint/shadow/blob/master/libmisc/pam_pass.c This is the function that prints both error messages: void do_pam_passwd (const char *user, bool silent, bool change_expired){ pam_handle_t *pamh = NULL; int flags = 0, ret; FILE *shadow_logfd = log_get_logfd(); if (silent) flags |= PAM_SILENT; if (change_expired) flags |= PAM_CHANGE_EXPIRED_AUTHTOK; ret = pam_start ("passwd", user, &conv, &pamh); if (ret != PAM_SUCCESS) { fprintf (shadow_logfd, _("passwd: pam_start() failed, error %d\n"), ret); exit (10); /* XXX */ } ret = pam_chauthtok (pamh, flags); if (ret != PAM_SUCCESS) { fprintf (shadow_logfd, _("passwd: %s\n"), pam_strerror (pamh, ret)); fputs (_("passwd: password unchanged\n"), shadow_logfd); pam_end (pamh, ret); exit (10); /* XXX */ } fputs (_("passwd: password updated successfully\n"), shadow_logfd); (void) pam_end (pamh, PAM_SUCCESS);} | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/706828",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/438546/"
]
} |
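To close out the experiment from the question, the setuid bit can be put back (4755 is the usual mode for passwd) and checked:

sudo chmod 4755 /bin/passwd
ls -l /bin/passwd     # should show -rwsr-xr-x ... /bin/passwd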
707,038 | I have the following example: $ a="$(ls)" $ echo $abackups cache crash lib local lock log mail opt run snap spool tmp$$ echo "$a"backupscachecrashliblocallocklogmailoptrunsnapspooltmp Now with printf : $ printf $abackups$$ printf "$a"backupscachecrashliblocallocklogmailoptrunsnapspooltmp Why is the output so different? What do quotes do in this situation?Could someone explain what's going on here? P.S. Found some explanation on the ls behavior: Output from ls has newlines but displays on a single line. Why? https://superuser.com/questions/424246/what-is-the-magic-separator-between-filenames-in-ls-output http://mywiki.wooledge.org/ParsingLs The newline characters can be checked this way: ls | od -c | echo $a is the same as echo backups cache crash lib local lock log mail opt run snap spool tmp whereas echo "$a" is the same as echo 'backupscachecrashliblocallocklogmailoptrunsnapspooltmp' See https://mywiki.wooledge.org/Quotes . The first argument for printf is a formatting string and printf $a is the same as printf backups cache crash lib local lock log mail opt run snap spool tmp so it's using the string backups as the format and discarding the rest since there's nothing like %s in the formatting string to use them in. Just like: $ printf foo whateverfoo$ $ printf '%s\n' foo whateverfoowhatever Don't do a="$(ls)" to try to create a scalar variable holding file names btw as that's fragile, do a=(*) to hold them in an array instead. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/707038",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/365419/"
]
} |
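Following the closing advice above, the array form plays well with printf because the format string is reused for every remaining argument:

a=(*)                      # one element per file name, whitespace-safe
printf '%s\n' "${a[@]}"    # each element gets its own %s\n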
707,042 | This is a virtual machine (vmware) on vCenter Current Kernel 3.10.0-862.el7.x86_64 [root@stage ~]# uname -aLinux stage.dsr.FILTERED.net 3.10.0-862.el7.x86_64 #1 SMP Fri Apr 20 16:44:24 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux [root@stage ~]# hostnamectl Static hostname: stage.dsr.FILTERED.net Icon name: computer-vm Chassis: vm Machine ID: 4121FILTERED1663 Boot ID: b264FILTEREDa2dc Virtualization: vmware Operating System: CentOS Stream 8 CPE OS Name: cpe:/o:centos:centos:8 Kernel: Linux 3.10.0-862.el7.x86_64 Architecture: x86-64 Installed latest kernel sudo dnf --enablerepo=elrepo-kernel install kernel-ml kernel-ml-devel kernel-ml-headers [root@stage ~]# sudo dnf --enablerepo=elrepo-kernel install kernel-ml kernel-ml-devel kernel-ml-headersLast metadata expiration check: 0:36:34 ago on Tue 21 Jun 2022 23:59:55 BST.Package kernel-ml-5.18.5-1.el8.elrepo.x86_64 is already installed.Package kernel-ml-devel-5.18.5-1.el8.elrepo.x86_64 is already installed.Package kernel-ml-headers-5.18.5-1.el8.elrepo.x86_64 is already installed.Dependencies resolved.Nothing to do.Complete! Checking available images in /boot [root@stage boot]# ls -altotal 91212dr-xr-xr-x. 6 root root 272 Jun 22 00:06 .dr-xr-xr-x. 17 root root 224 Jun 22 00:01 ..drwxr-xr-x. 3 root root 17 Aug 30 2018 efidrwxr-xr-x. 2 root root 6 Jun 22 00:07 grubdrwx------. 5 root root 156 Jun 21 23:48 grub2-rw-------. 1 root root 55388571 Aug 30 2018 initramfs-0-rescue-4121a368f8744621872224e7593f1663.img-rw-------. 1 root root 12508372 Jun 7 18:44 initramfs-3.10.0-862.el7.x86_64kdump.img-rw-------. 1 root root 19272908 Jun 22 00:06 initramfs-5.18.5-1.el8.elrepo.x86_64.imgdrwxr-xr-x. 3 root root 21 Jun 21 23:48 loader-rwxr-xr-x. 1 root root 6224704 Aug 30 2018 vmlinuz-0-rescue-4121a368f8744621872224e7593f1663 List default kernel [root@stage boot]# grubby --default-kernel/boot/vmlinuz-0-rescue-4121a368f8744621872224e7593f1663 List out available [root@stage boot]# ls -l /boot/vmlinuz-*-rwxr-xr-x. 1 root root 6224704 Aug 30 2018 /boot/vmlinuz-0-rescue-4121a368f8744621872224e7593f1663 There is only one entry! Listing out RPM kernel packages [root@stage boot]# rpm -qa | grep kernelkernel-ml-devel-5.18.5-1.el8.elrepo.x86_64kernel-ml-headers-5.18.5-1.el8.elrepo.x86_64kernel-ml-modules-5.18.5-1.el8.elrepo.x86_64kernel-ml-5.18.5-1.el8.elrepo.x86_64kernel-ml-core-5.18.5-1.el8.elrepo.x86_64 How do I install and set 5.18.5-1.el8.elrepo.x86_64 to be the new default kernel? Any pointers are most welcome UPDATE 20220622 [root@stage ~]# dracut --forcedracut: Cannot find module directory /lib/modules/3.10.0-862.el7.x86_64/dracut: and --no-kernel was not specified also tried dracut --kver 5.18.5-1.el8.elrepo.x86_64 --forcedracut --regenerate-all --force but I still only have one same vmlinuz- in the the /boot folder [root@stage boot]# ls -aldrwxr-xr-x. 3 root root 17 Aug 30 2018 efidrwxr-xr-x. 2 root root 6 Jun 22 00:07 grubdrwx------. 5 root root 156 Jun 22 16:44 grub2-rw-------. 1 root root 55388571 Aug 30 2018 initramfs-0-rescue-4121a368f8744621872224e7593f1663.img-rw-------. 1 root root 12508372 Jun 7 18:44 initramfs-3.10.0-862.el7.x86_64kdump.img-rw-------. 1 root root 19457348 Jun 22 17:32 initramfs-5.18.5-1.el8.elrepo.x86_64.imgdrwxr-xr-x. 3 root root 21 Jun 21 23:48 loader-rwxr-xr-x. 
1 root root 6224704 Aug 30 2018 vmlinuz-0-rescue-4121a368f8744621872224e7593f1663 | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/707042",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/53782/"
]
} |
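The missing piece in the record above is that /boot holds no vmlinuz for 5.18.5 even though the packages are installed. A plausible — untested — recovery is to let the package scripts repopulate /boot and then point GRUB at the result (EFI systems may keep grub.cfg elsewhere):

sudo dnf reinstall kernel-ml-core-5.18.5-1.el8.elrepo.x86_64
sudo grub2-mkconfig -o /boot/grub2/grub.cfg
sudo grubby --set-default=/boot/vmlinuz-5.18.5-1.el8.elrepo.x86_64
grubby --default-kernel     # verify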
707,085 | I was trying to write a command on Linux to alter a JSON file and write the result to a new file. My JSON file is formatted like so: {"intents": [ { "patterns": "'For the last 8 years of his life, Galileo was under house arrest for espousing this man's theory'", "responses": "Copernicus" }, { "patterns": "'No. 2: 1912 Olympian; football star at Carlisle Indian School; 6 MLB seasons with the Reds, Giants & Braves'", "responses": "Jim Thorpe" }, for about 200k entries like the above. The command that I execute is below: jq --argjson r \ "$( jo tag = "$(curl -Ss "https://www.random.org/integers/?num=1&min=0&max=1000000&col=1&base=10&format=plain&rnd=new")")" \ '.intents[] += $r' \ < intents7.json > intents_super.json I wanted to add a new key name to each entry in the list as a tag, and to fill that key with a random number. The command was executed, but I have been waiting like 30 minutes so far and nothing is output to the file intents_super.json. Note: the CPU is at a constant 100%. Also, in the terminal I was getting these 2 lines, though the command is still running: Argument `tag' is neither k=v nor k@v Argument `17208' is neither k=v nor k@v Does the command do what I want? | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/707085",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/530901/"
]
} |
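Two notes on the record above. First, the jo errors point at the spaces around =: jo tag = value passes three separate arguments, and jo wants tag=value with no spaces (hence "neither k=v nor k@v"); as written, the original would also have stamped the one fetched number onto every entry. Second, a per-entry tag can be attached in jq itself — a sketch of my own, using each entry's index as the tag:

jq '.intents |= [ range(0; length) as $i | .[$i] + { tag: $i } ]' \
   intents7.json > intents_super.json

For genuinely random tags, generating them locally is far faster than hitting random.org per entry; one hedged way to merge a pre-generated list:

shuf -i 0-1000000 -n "$(jq '.intents | length' intents7.json)" |
  jq -n --slurpfile doc intents7.json \
     '[inputs] as $r | $doc[0]
      | .intents |= [ range(0; length) as $i | .[$i] + { tag: $r[$i] } ]' \
  > intents_super.json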
707,112 | I created an ext4 filesystem on a loop device with all the necessary files to boot with, e.g.: /bin /boot /dev /lib /mnt /etc ... Now I want to boot from the loop device as an image (let's say filesystem.img ) with this filesystem. Is it possible to make this loop device the new root filesystem and to boot from it with the GRUB2 bootloader? I also read an article about initrd to perform this with the initial ram disk: https://developer.ibm.com/articles/l-initrd/ | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/707112",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/530931/"
]
} |
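Yes, in principle — GRUB2 can loop-mount the image, and Debian/Ubuntu initramfs-tools understand a loop= root parameter (the mechanism Wubi used). A sketch of a custom menu entry, untested here; the disk/partition names and kernel paths are placeholders for your layout:

menuentry "Boot filesystem.img" {
    loopback loop (hd0,1)/filesystem.img
    linux  (loop)/boot/vmlinuz root=/dev/sda1 loop=/filesystem.img rw
    initrd (loop)/boot/initrd.img
}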
707,301 | The Debian package libreoffice-core (which is described in the Debian repositories as containing " the architecture-dependent core files of LibreOffice," and which is itself a dependency for libreoffice-writer and similar packages) has an absolute dependency (i.e., the relationship of the packages is depends , not recommends or suggests ) on libldap-2.4-2 (described as "the run-time libraries for the OpenLDAP (Lightweight Directory Access Protocol) servers and clients"). Why? How is a word processor whose most common use case by far is editing files stored locally , on the same machine it is running on, so dependent on a protocol for accessing remote directories that it cannot even be configured if the latter is not present? Is this just a dependency classification error (i.e., the relationship should actually be recommends or suggests ), or does libreoffice actually somehow need OpenLDAP installed in order to function? | libreoffice-core ships /usr/lib/libreoffice/program/soffice.bin , and that is linked against libldap_r-2.4.so.2 => /usr/lib/x86_64-linux-gnu/libldap_r-2.4.so.2 (0x00007f55a8c9e000) The package build tools therefore automatically add a dependency on the package providing that library, libldap-2.4-2 . It’s a strong dependency because without it, LibreOffice as built in Debian simply wouldn’t start. Of course LibreOffice could be changed to support dynamically loading LDAP support as needed, but that’s a rather invasive change to make in a package. Another option would be to build it without LDAP support, but some people do actually need it, e.g. to access shared address books , which Writer can use for mail-merges among other things. Presumably the package maintainer chose to provide LDAP-based features for everyone, instead of introducing complexity in order to allow users to choose. The LDAP library adds less than a megabyte of dependencies, which is a very small amount compared to LibreOffice as a whole. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/707301",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/109384/"
]
} |
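The dependency chain described above can be reproduced directly on a Debian-family system:

ldd /usr/lib/libreoffice/program/soffice.bin | grep -i ldap
dpkg -S libldap_r-2.4.so.2      # which package ships that library
apt-cache show libldap-2.4-2 | grep -m1 Installed-Size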
707,670 | I'll start by stating, I'm pretty sure this is a unique mess of my own design, but I hope someone encountered this and might be able to help. The Setup My laptop runs Pop!_OS 22.04 (Based on Ubuntu Jammy). I really like the xscreensaver packages, but the Debian/Ubuntu/Pop!_OS release repos contain an outdated version, and only sid (aka Unstable) contains the updated package * . No fret, that's why pinning exists, and so this is how I have it setup: /etc/apt/preferences.d/unstable-200 file: Package: *Pin: release a=unstablePin-Priority: 200 /etc/apt/preferences.d/xscreensaver-2000 file: Package: xscreensaver*Pin: release a=unstablePin-Priority: 2000 /etc/apt/sources.list.d/debian.sid.list file: deb [arch=amd64] http://http.us.debian.org/debian sid main contrib non-free This actually works, at this point running sudo apt install xscreensaver installs the updated versions.However, there is a strange side-effect. The problem When I run sudo apt update followed by sudo apt upgrade , I get the following output: Reading package lists... DoneBuilding dependency tree... DoneReading state information... DoneCalculating upgrade... DoneThe following packages will be DOWNGRADED: alsa-topology-conf appmenu-gtk-module-common aspell-en ca-certificates chrome-gnome-shell dictionaries-common dns-root-data emacsen-common folks-common fonts-arphic-ukai fonts-noto-cjk fonts-noto-cjk-extra fonts-noto-color-emoji fonts-urw-base35 friendly-recovery gir1.2-flatpak-1.0 gir1.2-gdkpixbuf-2.0 gir1.2-graphene-1.0 gir1.2-gtksource-4 gir1.2-polkit-1.0 gir1.2-secret-1 gir1.2-soup-2.4 gsfonts gsfonts-x11 hunspell-ar hunspell-de-at-frami hunspell-de-ch-frami hunspell-de-de-frami hunspell-en-au hunspell-en-ca hunspell-en-gb hunspell-en-us hunspell-en-za hunspell-es hunspell-fr hunspell-fr-classical hunspell-it hunspell-pt-br hunspell-pt-pt hunspell-ru hyphen-de hyphen-en-gb hyphen-es hyphen-fr hyphen-it hyphen-pt-br hyphen-pt-pt ieee-data javascript-common klibc-utils laptop-detect liba52-0.7.4 libappmenu-gtk2-parser0 libbytesize-common libffi8 libflatpak-dev libgl1 libgles2 libgutenprint-common libgweather-4-0 libio-stringy-perl libjs-jquery libldacbt-abr2 libmpcdec6 libmysofa1 libopengl0 libpolkit-gobject-1-0 libsndio7.0 libsoup-gnome2.4-1 libtermkey1 libvterm0 libwacom-common libxkbcommon0 mythes-ar mythes-de mythes-de-ch mythes-en-au mythes-en-us mythes-es mythes-fr mythes-it mythes-pt-pt mythes-ru neovim-runtime netbase pass policykit-1 poppler-data powermgmt-base printer-driver-all python3-certifi python3-fido2 python3-jinja2 python3-launchpadlib python3-lazr.uri python3-macaroonbakery python3-more-itertools python3-pkg-resources python3-pyatspi python3-rfc3339 python3-setuptools python3-tz python3-wheel python3-ykman sensible-utils sgml-base sgml-data sound-icons ssl-cert tpm-udev ucf update-inetd va-driver-all wamerican wbrazilian wbritish wfrench witalian wngerman wogerman wspanish wswiss xfonts-base xml-core yubikey-manager0 upgraded, 0 newly installed, 125 downgraded, 0 to remove and 0 not upgraded.Need to get 257 MB/283 MB of archives.After this operation, 0 B of additional disk space will be used.Do you want to continue? [Y/n] This also throws off Pop!_OS Shop's update count, with these packages showing as pending Operating System Updates. Troubleshooting Some data I collected while attempting to troubleshoot this. Removing /etc/apt/sources.list.d/debian.sid.list and running sudo apt update resolves the issue, so I know it's just a miscalculation/flawed logic somewhere. 
Focusing on the first package in the list, alsa-topology-conf : Although I know the error is completely superficial, at first I thought apt somehow tracks where (which repo) the package came from, so I removed, cleaned up, then reinstalled the package. Didn't make a difference. sudo apt remove alsa-topology-conf; sudo apt clean; sudo apt update; sudo apt install alsa-topology-conf Running apt policy alsa-topology-conf , the results are: alsa-topology-conf: Installed: 1.2.5.1-2 Candidate: 1.2.5.1-2 Version table: *** 1.2.5.1-2 200 200 http://http.us.debian.org/debian sid/main amd64 Packages 100 /var/lib/dpkg/status 1.2.5.1-2 501 501 http://us.archive.ubuntu.com/ubuntu jammy/main amd64 Packages 501 http://us.archive.ubuntu.com/ubuntu jammy/main i386 Packages It seems that both sid and jammy have the exact same version, and for some reason, apt matches the package to the 200 priority entry, instead of the 501 priority entry. With /etc/apt/sources.list.d/debian.sid.list removed, the output looks like this: alsa-topology-conf: Installed: 1.2.5.1-2 Candidate: 1.2.5.1-2 Version table: *** 1.2.5.1-2 501 501 http://us.archive.ubuntu.com/ubuntu jammy/main amd64 Packages 501 http://us.archive.ubuntu.com/ubuntu jammy/main i386 Packages 100 /var/lib/dpkg/status Related questions The following are related questions with similar situations, but none of the answers there helped me understand or resolve this. apt pinning priority restricted Debian 10: Why some SSL packages will be downgraded? How to get rid of "Packages were downgraded and -y was used without --allow-downgrades" apt message I've tried all of the answers in the above questions, but none of them seems to be relevant or to work out. My question Does anyone have any suggestion on how to reconcile this so that the system will not constantly think that these packages need to be DOWNGRADED ? | The basic answer is that you're doing something that you shouldn't, namely mixing repositories across releases (and distributions) . Pulling in Debian packages in an Ubuntu-based distribution is a bad idea. xscreensaver is available in later versions of Ubuntu , which would be less dangerous to use, but even that's a bad idea. Given all the investigation you've done, and the detail you've provided, it's worth explaining the behaviour you're seeing here. All the packages that are offered for "downgrade" have the shared property of being available in the same version in Debian and Ubuntu; however, they are not the same packages, since all packages imported from Debian are rebuilt in Ubuntu. The first feature of apt which comes into play here is that pin-priorities only choose versions . For any package available in different versions in your repositories, the pin-priorities will distinguish between them. For any package available in the same version in your repositories, they won't. The next feature then applies: when multiple repositories provide the same version, the first one listed wins . This combines with another feature of apt , which is that a package installed with a given hash will be replaced by a repository package with the same version if the hashes don't match (there's a Q&A about that somewhere here, but I can't find it right now). The result of all this is that for all packages provided by Pop!_OS (Ubuntu under the hood), whose versions in Jammy exactly match the current version in Debian unstable, apt will consider replacing them with the Debian version. I'm not sure why it identifies them as downgrades.
If you were to go ahead with this, you’d replace a number of Pop!_OS packages with their Debian “equivalents”; there’s a decent chance that that would actually work, but there’s also the possibility that subtle differences in the libraries used would cause problems. You’d end up with a wholly untested setup. To undo this, you should remove sid.list , update your repositories, and explicitly re-install any package you “downgraded”: sudo apt reinstall alsa-topology-conf | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/707670",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/138012/"
]
} |
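The undo sequence from the answer, spelled out end to end (the reinstall list should be every package from the question's "DOWNGRADED" output, abbreviated here):

sudo rm /etc/apt/sources.list.d/debian.sid.list
sudo apt update
sudo apt reinstall alsa-topology-conf appmenu-gtk-module-common aspell-en  # ...and the rest
apt-cache policy alsa-topology-conf   # the sid line should now be gone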
707,748 | So leaving details aside last night i stumbled across "administrator root" in a live instance of Zorin. First off, is Administrator Root even a thing? If so, where does it fall into the hierarchy? As i understood it there were three users in linux - root, admin, and user. Picture is attached. Thank you. | On unix, there are two kinds of users: root everyone else. root is any username with a UID of 0. The username is typically root , but doesn't have to be, and it is possible to have more than one root account with a different username (e.g. FreeBSD systems have root with /bin/csh as its shell, and toor with /bin/sh instead. They are both root, because they are both UID 0 - it is the UID which is important, not the name. There is only one root, but root can have aliases). Ignoring security environments like SELINUX (which can limit what root is able to do), root (UID 0) can do anything on the system that the system is capable of - read, write, delete any file or directory, change ownership or permissions of files etc, bind a network socket to ports 0-1023, shutdown or reboot the system, etc. All other users can only do what their permissions and group memberships allow. And programs like sudo or su , can give them some access to some root capabilities by temporarily elevating them to root, or via other mechanisms such as capabilities on Linux. NOTE: There is a sub-class of non-root users that are often called "system users". Despite what the name suggests, they are ordinary users. They just happen to be created for special purposes like running a particular daemon and owning that daemon's files and directories. e.g. user lp for a printer daemon, ftp for ftpd, postgres for the postgresql database, and many more. They usually have a disabled password and their shell set to /bin/false or /usr/sbin/nologin or similar (user postgres is a notable exception because it's fairly common to su to user postgres to run psql for maintenance tasks). Quite often, system users have a matching or associated "system group", and sometimes a "system group" exists without an associated "system user" (e.g. group disk has group ownership of device nodes for disks and partitions in /dev, and those devices are RW by group - meaning that any member of the disk group can read & write and even re-partition or re-format those block devices). Most Linux systems reserve UIDs and GIDs 1 to 1000 (or 1 to 500, or similar) for system users (and most, but not all, system users have uids & gids within that reserved range). Non-system users, i.e. normal login accounts, get UIDs and GIDs above that reservation. This is a matter of OS vendor or even local site policy, not inherent to unix itself. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/707748",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/531622/"
]
} |
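The account classes described above are easy to enumerate from /etc/passwd:

awk -F: '$3 == 0 { print $1 }' /etc/passwd                       # every root alias (UID 0)
awk -F: '$3 > 0 && $3 < 1000 { print $1, $3, $7 }' /etc/passwd   # typical system users
id -u                                                            # your own UID; 0 means root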
707,791 | 1598427@931PDD 220624P00051000 ohlc=0,0,0,0 vol=0 oi=424 nbbo=69@2316/113@532 nbbo2=69@145/113@95PDD 220617C00051000 ohlc=0,0,0,0 vol=0 oi=434 nbbo=530@1921/710@1496 nbbo2=530@31/710@115PDD 220722P00051000 ohlc=0,0,0,0 vol=0 oi=15 nbbo=285@1436/405@1772 nbbo2=230@15/455@15PDD 220708C00051000 ohlc=0,0,0,0 vol=0 oi=17 nbbo=785@864/935@894 nbbo2=785@15/935@15PDD 220624C00051000 ohlc=0,0,0,0 vol=0 oi=392 nbbo=645@771/795@947 nbbo2=645@83/795@80PDD 220729C00051000 ohlc=0,0,0,0 vol=0 oi=0 nbbo=870@902/1190@677 nbbo2=820@15/1195@20PDD 220708P00051000 ohlc=0,0,0,0 vol=0 oi=32 nbbo=200@1413/320@2273 nbbo2=200@15/320@356PDD 220722C00051000 ohlc=0,0,0,0 vol=0 oi=140 nbbo=795@1630/1175@1544 nbbo2=795@51/1175@21PDD 220729P00051000 ohlc=0,0,0,0 vol=0 oi=11 nbbo=254@3/450@3 nbbo2=254@2/570@1CSCO 220715C00090000 ohlc=0,0,0,0 vol=0 oi=739 nbbo=0@0/4@1056 nbbo2=0@0/4@121CSCO 220617C00090000 ohlc=0,0,0,0 vol=0 oi=203 nbbo=0@0/1@2 nbbo2=0@0/0@0CSCO 220617P00090000 ohlc=0,0,0,0 vol=0 oi=0 nbbo=4685@654/4730@1155 nbbo2=4685@33/4730@33CSCO 240119P00090000 ohlc=0,0,0,0 vol=0 oi=0 nbbo=4695@202/4770@193 nbbo2=4695@75/4770@33 I have a file that looks like the above. I want to find all lines that contain the word CSCO , or has length <= 15 . What command can I use to do this? | With grep -E for Extended regexps you can use alternation ( | ). $ grep -E 'CSCO|^.{0,15}$' file1598427@931CSCO 220715C00090000 ohlc=0,0,0,0 vol=0 oi=739 nbbo=0@0/4@1056 nbbo2=0@0/4@121CSCO 220617C00090000 ohlc=0,0,0,0 vol=0 oi=203 nbbo=0@0/1@2 nbbo2=0@0/0@0CSCO 220617P00090000 ohlc=0,0,0,0 vol=0 oi=0 nbbo=4685@654/4730@1155 nbbo2=4685@33/4730@33CSCO 240119P00090000 ohlc=0,0,0,0 vol=0 oi=0 nbbo=4695@202/4770@193 nbbo2=4695@75/4770@33 If you want to match "CSCO" only when it's at the beginning of the line and followed by white-space (spaces, tabs, etc): $ grep -E '^CSCO[[:space:]]|^.{0,15}$' file Or use the end-of-word boundary marker, \> (I can't remember if this is a GNU extension or if it's "standard", and it's a difficult thing to google for. It definitely works in GNU grep, and maybe others): $ grep -E '^CSCO\>|^.{0,15}$' file Note that GNU grep's info documentation defines a "word" character as [_[:alnum:]] , which is different to the definition in the perlre man page, perl also recognises some connector punctuation and unicode characters as "words". If you're using GNU grep, that version also understands perl's \s (any whitespace) and \b (word boundary marker), even when using -E rather than -P . GNU grep's -P option for PCRE support adds recognition of \h for horizontal whitespace. e.g. $ grep -E '^CSCO\s|^.{0,15}$' file$ grep -E '^CSCO\b|^.{0,15}$' file$ grep -P '^CSCO\h|^.{0,15}$' file | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/707791",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/531676/"
]
} |
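The same test is arguably clearer in awk, where the length check needs no regex at all:

awk 'length($0) <= 15 || /CSCO/' file
awk 'length($0) <= 15 || $1 == "CSCO"' file   # CSCO only as the first field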
707,944 | I want to create a sed command that will remove all of these strange characters from a given document: sed -n 's/\|®MD-IT¯\|®MD\+BO¯\|®MDNM¯®LL\.8LI,0LI¯\|®LL0LI,0LI¯\|®MD\+IT¯\|®LL.8LI,0LI¯®MDIT¯\|®MDNM¯®FL¯®LL.8LI,0LI¯\|®FL¯®MD-BO¯\|®FL¯®MD-BO¯\|®MD-BO¯\|¯®OF1IN,1IN¯®FC¯®LL1LI,0LI¯\|\|®SF1,1¯\|®FM1FT=0LI,LR=1;\|®MDSU¯®FN1¯\|®MDNM¯¯\|®IV-RTF\|\.\.\.\.\.\.\.\.\.\.\.\.\.\.\.\.\.\.\.\.\.\.\.\.\.\.\.\.\.\.\.\.\.\.\.\.\.\.\.\.\.\.\.\.\.\.\.\.\.\.\.\.\.\.\.\.\.\.\.\.\.\.\.\.\|¯®BF0¯\|®FS1\|-------------------------------------\|¯®FW1\|\|//gp' These codes were all created in another application Nota Bene and I have many files with such codes that I would like to convert to plain text and possibly even markdown. The problem is that the characters are not substituted. I have tried doing this in Sublime Text and was successful in stripping the document using find-replace (regex). It would be better for me to create a sed script than to use Sublime for this task. I also tried using Ed but it too did not pick up the replacements. Here is a sample nb file when opened in `Sublime Text: ®SSDEFAULTS¯®LR1¯®JU¯®MD+BO¯®UFTimes New Roman¯®SZ12Pt¯Glossary®MD+BO¯®TS.5IN,1IN,1.5IN,2IN,2.5IN,3IN,3.5IN,4IN,4.5IN,5IN,5.5IN,6IN¯ ®MD-BO¯®NJ¯®LR1¯®LL.5LI,0LI¯®MD+BO¯®LL0LI,0LI¯®MDNM¯®LR1¯®LL.5LI,0LI¯A fortiori proposition: If X is true, then how much greater is Y true? To move logically from a stronger argument to establish a weaker argument. The weaker argument is sometimes presented by the speaker as the stronger argument.®LL0LI,0LI¯®LR1¯®LL.5LI,0LI¯®LL0LI,0LI¯®LR1¯®LL.5LI,0LI¯Accusative of motion/direction - Indicates movement to the noun marked by the accusative and is to be distinguished from the accusative of local determination which indicates location without motion (Joüon and Muraoka 2006, 428).Anadiplosis - A figure of speech in which the word that a colon ends with, or a like sounding word, is the word that begins the next colon ®GC|CI:R#=47;AU=Brown, Raymond E.;YR=1990;TI=New Jerome biblical commentary;PG=245;XT=;F[=;F]=;F#=;ID=;XX=Print;CT=;FL=¯(Brown, Fitzmyer, Murphy, et al. 1990, 245)®GC¯.®LL0LI,0LI¯®LR1¯®LL.5LI,0LI¯®LL0LI,0LI¯®LR1¯®LL.5LI,0LI¯Anaphoric use of the article - When the article is used to indicate that the word to which it is attached is the one previously mentioned (Williams and Beckman 2007, 36). ®LL0LI,0LI¯®LR1¯®LL.5LI,0LI¯®LL0LI,0LI¯®LR1¯®LL.5LI,0LI¯Anaptyxis - The insertion of a vowel into a word to avoid a consonant cluster.®LL0LI,0LI¯®LR1¯®LL.5LI,0LI¯®LL0LI,0LI¯®LR1¯®LL.5LI,0LI¯Aoristic perfect - I use the phrase 'aoristic perfect' to refer to one of the ways the qatal form can be rendered into English. Aoristic perfect denotes a past situation the implications of which are no longer felt in the present. The situation may have extended over a period of time and it may have occurred more than once. It may have occurred in the recent or distant past but from the standpoint of the speaker it is to be regarded as a fact having occurred and hence as a fact belonging to the past (Joüon and Muraoka 2006, 337; Driver 1998, 12). The term 'aoristic perfect' and indeed the other categorizations of perfect in this grammar, all relate to the interpretation of qatal verbs in their given contexts. The qatal form in and of itself does not convey these meanings. ®LL0LI,0LI¯®LR1¯®LL.5LI,0LI¯®LL0LI,0LI¯®LR1¯®LL.5LI,0LI¯Beth essentiae - ®LAHebrew¯ÿHá®LAEnglish¯ that is used to indicate the predicate of a clause or a word used predicatively (Joüon and Muraoka 2006, 458). 
This is how I would like the text to read: Glossary A fortiori proposition: If X is true, then how much greater is Y true? To move logically from a stronger argument to establish a weaker argument. The weaker argument is sometimes presented by the speaker as the stronger argument.Accusative of motion/direction - Indicates movement to the noun marked by the accusative and is to be distinguished from the accusative of local determination which indicates location without motion (Joüon and Muraoka 2006, 428).Anadiplosis - A figure of speech in which the word that a colon ends with, or a like sounding word, is the word that begins the next colon (Brown, Fitzmyer, Murphy, et al. 1990, 245).Anaphoric use of the article - When the article is used to indicate that the word to which it is attached is the one previously mentioned (Williams and Beckman 2007, 36). Anaptyxis - The insertion of a vowel into a word to avoid a consonant cluster.Aoristic perfect - I use the phrase 'aoristic perfect' to refer to one of the ways the qatal form can be rendered into English. Aoristic perfect denotes a past situation the implications of which are no longer felt in the present. The situation may have extended over a period of time and it may have occurred more than once. It may have occurred in the recent or distant past but from the standpoint of the speaker it is to be regarded as a fact having occurred and hence as a fact belonging to the past (Joüon and Muraoka 2006, 337; Driver 1998, 12). The term 'aoristic perfect' and indeed the other categorizations of perfect in this grammar, all relate to the interpretation of qatal verbs in their given contexts. The qatal form in and of itself does not convey these meanings. |> sed -n l Glossary.NB\256SSDEFAULTS\257\256LR1\257\256JU\257\256MD+BO\257\256UFTimes New R\oman\257\256SZ12Pt\257Glossary\256MD+BO\257\256TS.5IN,1IN,1.5IN,2IN,2\.5IN,3IN,3.5IN,4IN,4.5IN,5IN,5.5IN,6IN\257\t\256MD-BO\257\r$\256NJ\257\256LR1\257\256LL.5LI,0LI\257\256MD+BO\257\256LL0LI,0LI\257\\256MDNM\257\256LR1\257\256LL.5LI,0LI\257A fortiori proposition: If X\ is true, then how much greater is Y true? To move logically from a s\tronger argument to establish a weaker argument. The weaker argument \is sometimes presented by the speaker as the stronger argument.\r$\256LL0LI,0LI\257\256LR1\257\256LL.5LI,0LI\257\256LL0LI,0LI\257\256LR\1\257\256LL.5LI,0LI\257Accusative of motion/direction - Indicates mov\ement to the noun marked by the accusative and is to be distinguished\ from the accusative of local determination which indicates location \without motion (Jo\374on and Muraoka 2006, 428).\r$Anadiplosis - A figure of speech in which the word that a colon ends \with, or a like sounding word, is the word that begins the next colon\ \256GC|CI:R#=47;AU=Brown, Raymond E.;YR=1990;TI=New Jerome biblical \commentary;PG=245;XT=;F[=;F]=;F#=;ID=;XX=Print;CT=;FL=\257(Brown, Fit\zmyer, Murphy, et al. 1990,\240245)\256GC\257.\r$\256LL0LI,0LI\257\256LR1\257\256LL.5LI,0LI\257\256LL0LI,0LI\257\256LR\1\257\256LL.5LI,0LI\257Anaphoric use of the article - When the articl\e is used to indicate that the word to which it is attached is the on\e previously mentioned (Williams and Beckman 2007, 36). 
\r$\256LL0LI,0LI\257\256LR1\257\256LL.5LI,0LI\257\256LL0LI,0LI\257\256LR\1\257\256LL.5LI,0LI\257Anaptyxis - The insertion of a vowel into a wo\rd to avoid a consonant cluster.\r$\256LL0LI,0LI\257\256LR1\257\256LL.5LI,0LI\257\256LL0LI,0LI\257\256LR\1\257\256LL.5LI,0LI\257Aoristic perfect - I use the phrase 'aoristic \perfect' to refer to one of the ways the qatal form can be rendered i\nto English. Aoristic perfect denotes a past situation the implicatio\ns of which are no longer felt in the present. The situation may have\ extended over a period of time and it may have occurred more than on\ce. It may have occurred in the recent or distant past but from the s\tandpoint of the speaker it is to be regarded as a fact having occurr\ed and hence as a fact belonging to the past (Jo\374on and Muraoka 20\06, 337; Driver 1998, 12). The term 'aoristic perfect' and indeed the\ other categorizations of perfect in this grammar, all relate to the \interpretation of qatal verbs in their given contexts. The qatal form\ in and of itself does not convey these meanings. \r$\256LL0LI,0LI\257\256LR1\257\256LL.5LI,0LI\257\256LL0LI,0LI\257\256LR\1\257\256LL.5LI,0LI\257Beth essentiae - \256LAHebrew\257\377H\341\256\LAEnglish\257 that is used to indicate the predicate of a clause or a\ word used predicatively (Jo\374on and Muraoka 2006, 458).\r$\256LL0LI,0LI\257\256LR1\257\256LL.5LI,0LI\257\256LL0LI,0LI\257\256LR\1\257\256LL.5LI,0LI\257Classic perfect - I use the phrase 'classic pe\rfect' to refer to one of the ways the qatal form can be rendered int\o English. Classic perfect refers to the continuing present relevance\ of a past situation from the perspective of the speaker (Comrie 1976\, 52). By perfect I do not necessarily imply that a previous situatio\n has resulted in a state but that the situation has implications rel\evant to the present. The situation is not merely past and over but s\omehow persists and continues to intrude into the present. Such verbs\ are usually translated into English using the perfect or present ten\se. I have included under this definition quasi-stative verbs which r\efer to attributes which were acquired before, but which are assumed \to continue in some way up to the present moment (Driver 1998, 11; Jo\\374on and Muraoka 2006, 333; Waltke and O'Connor 1990, 487). In some\ grammars these are treated separately. However, that creates too man\y functions for the one perfect form. The term 'classic perfect' and \indeed the other categorizations of perfect in this grammar all relat\e to the \256MD+IT\257interpretation \256MD-IT\257of qatal verbs in t\heir given contexts. The qatal form by itself does not convey these m\eanings.\r$\256LL0LI,0LI\257\256LR1\257\256LL.5LI,0LI\257\256LL0LI,0LI\257\256LR\1\257\256LL.5LI,0LI\257Cohortative of praise. The cohortative is ofte\n used in Psalms to indicate that praise, freely undertaken, has begu\n. This usage is close to the cohortative of resolve but not identica\l with it. The emphasis falls not on what the writer is intending to \do, but what he has already undertaken. \r$Cohortative of resolve - The cohortative mood normally expresses the \will of the speaker, but when the speaker has the ability to carry ou\t what he wants it takes on the coloring of resolve (Van der Merwe et\ al. 
1997, 152; Waltke and O'Connor 1990, 573).\r$\256LL0LI,0LI\257\256LR1\257\256LL.5LI,0LI\257\256LL0LI,0LI\257\256LR\1\257\256LL.5LI,0LI\257Concluding \256LAHebrew\257\377h\353\377H\351\\256LAEnglish\257 - A special use of the word \256LAHebrew\257\377h\\353\377H\351\256LAEnglish\257 found towards the end of several Psalm\s and approximating in meaning to: the conclusion of the matter is th\at\205\r$\256LL0LI,0LI\257\256LR1\257\256LL.5LI,0LI\257\256LL0LI,0LI\257\256LR\1\257\256LL.5LI,0LI\257Conjunctive waw - Waw used to connect clauses \ | Sed can also be used as a script (easier to devel): create a file "nb2txt" with #!/usr/bin/sed -Efs/®[^¯]*¯//gs/-{20,}//gs/\.{20,}//g and: $ chmod 755 nb2txt$ nb2txt file.nb | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/707944",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/321709/"
]
} |
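One extra hint for the record above: the sed -n l dump shows bytes like \256 and lines ending \r$, which suggests the files are Latin-1 encoded with CR line endings — a UTF-8 ® in a sed pattern would then never match those bytes. Assuming that reading is right, convert first and strip the carriage returns, batching over all files:

for f in *.NB; do
  iconv -f latin1 -t utf-8 "$f" | ./nb2txt | tr -d '\r' > "${f%.NB}.txt"
done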
708,001 | This doesn't necessarily have to be a Linux problem but I'll ask it here anyway. I'm using a workstation mainly for training deep learning and machine learning models. I run training codes on both CPU and GPU. CPU: AMD Ryzen 9 5950X 16-Core Processor GPU: NVIDIA GeForce RTX 3090 OS: Ubuntu 22.04 LTS The libraries that I use (PyTorch, XGBoost, LightGBM and etc.) utilize swap memory a lot for data loading. While working on big datasets, swap memory accumulates slowly and exceeds the limit (2GB). When that happens, all of the cores go crazy and CPU overheats. Workstation shuts down itself couple seconds later. I'm a data scientist and I'm not good with hardware. It took couple weeks for me to figure out why my workstation was keep shutting itself down. I have to find a way to prevent this since I can't progress on my own tasks anymore. What are your suggestions? To give you more details, this wasn't happening 3-4 months ago. It started very recently. Edit : Added nvidia-smi and sensors outputs while training two models (UNet and YOLOv6) simultaneously. nvidia-smi +-----------------------------------------------------------------------------+| NVIDIA-SMI 510.73.05 Driver Version: 510.73.05 CUDA Version: 11.6 ||-------------------------------+----------------------+----------------------+| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC || Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. || | | MIG M. ||===============================+======================+======================|| 0 NVIDIA GeForce ... Off | 00000000:0A:00.0 Off | N/A ||100% 79C P2 338W / 350W | 14171MiB / 24576MiB | 100% Default || | | N/A |+-------------------------------+----------------------+----------------------+ +-----------------------------------------------------------------------------+| Processes: || GPU GI CI PID Type Process name GPU Memory || ID ID Usage ||=============================================================================|| 0 N/A N/A 1361 G /usr/lib/xorg/Xorg 56MiB || 0 N/A N/A 1568 G /usr/bin/gnome-shell 10MiB || 0 N/A N/A 27955 C python 2743MiB || 0 N/A N/A 31692 C python 11355MiB |+-----------------------------------------------------------------------------+ sensors nvme-pci-0300Adapter: PCI adapterComposite: +74.8°C (low = -273.1°C, high = +84.8°C) (crit = +84.8°C)Sensor 1: +74.8°C (low = -273.1°C, high = +65261.8°C)Sensor 2: +74.8°C (low = -273.1°C, high = +65261.8°C)iwlwifi_1-virtual-0Adapter: Virtual devicetemp1: +57.0°C k10temp-pci-00c3Adapter: PCI adapterTctl: +87.8°C Tccd1: +89.2°C Tccd2: +79.5°C | First, absolutely make sure your PSU is powerful enough - instant shutdowns like yours could indicate an issue with it. Maybe replace it. RTX 3090 can have spikes up to 500W and that means, along with your CPU, that your PSU must be rated at the very least 850W. Speaking of your temps. Your CPU is running close to its rated maximum , which is 90C, which means you'd better improve your case cooling by installing case fans e.g. 120mm (140mm are beter - quieter and more powerful) and probably installing a better cooler on your CPU along with changing thermal paste - my preferred one is Arctic MX-4 (MX-5 in theory provides better performance but it's a lot more cumbersome to apply). Installing proper case cooling might prove enough since your GPU is definitely increasing your CPU temps. Don't forget to update your EFI BIOS as well. 
You can also use a software-only solution: enter your BIOS and either decrease your CPU PPT (maximum wattage) or set a maximum temperature for it, e.g. 85C. Both will result in somewhat decreased multithreaded performance, but not by much. You may get more help here: https://www.reddit.com/r/Amd/ | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/708001",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/531889/"
]
} |
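On the software side, the clocks can also be capped from within Linux while the cooling is sorted out (support for the knob depends on which cpufreq driver is in use):

sudo cpupower frequency-set --max 3400MHz   # trade peak clocks for less heat
watch -n 2 sensors                          # keep an eye on Tctl meanwhile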
708,115 | I've set up my sudoers so that it asks the root password instead of the user password everytime I use sudo . Mainly because I believe it makes sense that if you want to execute a root command, you should know the root password. However, could this be considered a security risk? And if is not, why isn't this the default configuration in most distros? Edit:I am running a personal Linux Machine, where I am the only user. Does the rationale make more sense in this context?I do think that this may not apply to multi-user systems. Context:My experience with sudo was on systems where sudo was simply a "synonym" for su . One could run any root command by simply typing their user password, which I thought defeated the purpose of root to begin with. Hence my reasoning to have it ask you for the root password.Having said that I was unaware of the power of sudoers , some users mentioned that you could specify which commands can be run with sudo (while leaving out some commands restricted to the root user only). This I think is a great middle ground | Some would consider this a security risk because it undermines two of the main purposes of using sudo rather than su , which are: sudo makes it easy to allow users to run some, but not all, commands as root, and You don't have to give out the root password . Having the root password is potentially far more dangerous than just being allowed to run certain commands as root. Once someone has the root password, they can either login as root or use it with su . It is also harder to revoke root access from just one person - you have to change the root password and let everyone know what the new password is. With sudo 's default configuration, you only have to change the sudoers file and/or remove the user from the sudo group. This is why it's not the default configuration for sudo . I strongly recommend that you revert back to the default behaviour as it is almost certainly more well thought out than your belief that it "makes sense" that you should have to know the root password. "common sense" is usually neither "common" nor "sensible". sudo was written, at least in part, to avoid the problems caused when everyone who needed some ability to do some root-level sysadmin tasks had to know the root password. In practice, this proved to be extremely problematic, especially in large environments like universities or corporations where people changed roles a lot. I've worked in several environments over the years where people had moved on to other roles in the same organisations years before (or even left the organisation completely) but still had root access on machines that they shouldn't even still have a valid login on. There's also the issue that people often get upset when you do the actually right thing and remove root access and/or disable accounts when those things are no longer needed. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/708115",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/531949/"
]
} |
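For the middle ground mentioned in the question's edit, a sudoers fragment restricted to specific commands looks like the sketch below; the username and command paths are illustrative assumptions, and the file should always be edited through visudo so a syntax error can't lock you out:

$ sudo visudo -f /etc/sudoers.d/alice
alice ALL=(root) /usr/bin/systemctl restart nginx.service, /usr/sbin/reboot

With this in place, alice can run exactly those two commands via sudo (authenticating with her own password) and nothing else as root.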
708,276 | A section of grep --help is: -E, --extended-regexp PATTERNS are extended regular expressions -F, --fixed-strings PATTERNS are strings There are plenty of examples online for -E , but I can't find any for -F . I'm not very good with regex and avoid it if I possibly can, so this -F looks like a better option. What is the syntax to grep something and see if it contains 3 different strings? | To get the lines in test.txt that have at least one of the strings hello or goodbye , you'd use: grep -F -e hello -e goodbye test.txt That is, the patterns to look for are given as arguments to the -e options, and the -F option tells grep to treat them as fixed strings. Well, that doesn't matter with the above, but e.g. a pattern like a.*b would look for a , a dot, an asterisk, and a b , instead of the regex interpretation which would be a and b with anything in between. Alternatively, you could put the looked-for strings in a file, one per line, and use the -f option to give the filename: $ grep -F -f patterns.txt test.txtyou say hello when you arrivegoodbye when you leavebut rarely say hello and goodbye at the same time (Note that it'd be much harder to do the "all given strings" test. It's rather awkward even with regexes.) | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/708276",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/188451/"
]
} |
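For the harder "contains all given strings" case mentioned at the end of the answer, the simplest fixed-string approach is to chain greps, one per required string (the words and file name are just the question's examples):

$ grep -F hello test.txt | grep -F goodbye | grep -F welcome

Each stage narrows the result down to lines containing its string, so only lines containing all three survive, regardless of the order they appear in.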
708,717 | I am looking to learn the setup for RAID on my OS (Pop OS Linux) and also backup my laptop. I want a proper backup scheme in place on one external drive (drive A) and I want a RAID 1 setup between another external drive (drive B). Neither drive A nor B will mirror my hard drive, but I would like them to mirror one another for redundancy in backups. I tried setting up RAID 1 for them, but they sought to mirror my boot drive, which isn't what I seek. Is RAID an appropriate tool for mirroring external drives in such a manner? Or is there a better tool? Do the drives have to be present at boot? I hit a bump with needing the drives at all times when rebooting the computer without the drives present. | From a high-level perspective, using a RAID of external disks as a backup device ... ... has the following benefits: Logically, you only have to backup data once (the RAID layer handles the redundancy when you copy data) Some configurations can detect bit-rot and auto-correct it (btrfs-raid, md-raid + dm-integrity) ... and the following disadvantages: If one of the disks is not present (e.g. if you forgot to plug in one of the data or power cables), you're unable to cleanly assemble the RAID device If one of the disks fails, or is disconnected during operation for whichever reason, you have to rebuild the RAID device If the filesystem is faulty, all disks contain faulty data, because the faulty data is replicated by the RAID layer (true for md-raid, lvm-raid; false for btrfs-raid, zfs-raid) - If, for example, you used a md(adm)-RAID-1 with a btrfs-filesystem on top of it, and the next kernel update (which includes the btrfs code) comes with a bug in the btrfs code, and this bug corrupts btrfs-filesystems, both disks would contain a valid md-RAID-1 device with a corrupted btrfs-filesystem on top. If the RAID-layer code contains a bug, both disks are corrupted, too - The same argumentation as for filesystem bugs applies My advice is to not use a RAID of multiple external disks as a backup device and to instead use the disks independently with independent filesystems and execute your backup solution serially for each one of them. IMHO, RAID should be used to provide high availability. Backup needs redundancy, and this includes redundancy on a filesystem level (multiple separate filesystem instances). My advice is to create independent filesystems on each one of your external disks and backup your data with a (e.g. rsync ) script to each one of them. You can run multiple instances of the script in parallel (one per disk) to speed up the backup process... I am looking to learn the setup for RAID A good way to do this is using a (e.g. qemu kvm ) virtual machine. This allows for creating as many virtual disks as you want to experiment with. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/708717",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/255749/"
]
} |
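A minimal sketch of the "independent filesystems, one rsync per disk, run in parallel" idea from the answer; the source directory and mount points are assumptions, and --delete makes each backup an exact mirror (so deletions propagate too):

#!/bin/sh
for disk in /mnt/backup-a /mnt/backup-b; do
    rsync -a --delete /home/me/data/ "$disk/data/" &
done
wait

Because the two rsync runs are completely independent, a filesystem or tooling bug that corrupts one copy leaves the other untouched, which is the whole point of avoiding a RAID here.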
708,719 | I am trying to setup a PXE server on my laptop on CentOS 7 to connect to a physical test client, following the tutorial on: https://www.linuxtechi.com/configure-pxe-installation-server-centos-7/#comment-35567 All of the configuration files and setup procedures are from this website.On “Step: 6 Start and enable xinetd, dhcp, and vsftpd service.”,The commands: “systemctl start xinetd” and “systemctl enable xinetd” work, but when I run the command: “systemctl start dhcpd.service”, I receive the following error message: Job for dhcpd.service failed because the control process exited with error code. See “systemctl status dhcpd.service” and “journalctl -xe” for details. When I run “systemctl status -l dhcpd.service”, I receive the following error message: systemctl status -l dhcpd.service dhcpd.service - DHCPv4 Server Daemon Loaded: loaded (/usr/lib/systemd/system/dhcpd.service; disabled; vendor preset: disabled) Active: failed (Result: exit-code) since Tue 2022-07-05 11:18:07 EDT; 1min 12s ago Docs: man:dhcpd(8) man:dhcpd.conf(5) Process: 11655 ExecStart=/usr/sbin/dhcpd -f -cf /etc/dhcp/dhcpd.conf -user dhcpd -group dhcpd --no-pid (code=exited, status=1/FAILURE) Main PID: 11655 (code=exited, status=1/FAILURE)Jul 05 11:18:07 localhost.localdomain dhcpd[11655]: to which interface virbr0 is attached. **Jul 05 11:18:07 localhost.localdomain dhcpd[11655]: Jul 05 11:18:07 localhost.localdomain dhcpd[11655]: Jul 05 11:18:07 localhost.localdomain dhcpd[11655]: No subnet declaration for enp0s20f0u13 (10.249.6.154).Jul 05 11:18:07 localhost.localdomain dhcpd[11655]: ** Ignoring requests on enp0s20f0u13. If this is not whatJul 05 11:18:07 localhost.localdomain dhcpd[11655]: you want, please write a subnet declarationJul 05 11:18:07 localhost.localdomain systemd[1]: dhcpd.service: main process exited, code=exited, status=1/FAILUREJul 05 11:18:07 localhost.localdomain systemd[1]: Failed to start DHCPv4 Server Daemon.Jul 05 11:18:07 localhost.localdomain systemd[1]: Unit dhcpd.service entered failed state.Jul 05 11:18:07 localhost.localdomain systemd[1]: dhcpd.service failed. Also here is the Dhcpd.conf file: ## DHCP Server Configuration file.# see /usr/share/doc/dhcp*/dhcpd.conf.example# see dhcpd.conf(5) man page## DHCP Server Configuration file.ddns-update-style interim;ignore client-updates;authoritative;allow booting;allow bootp;allow unknown-clients;# internal subnet for my DHCP Serversubnet 172.168.1.0 netmask 255.255.255.0 {range 172.168.1.21 172.168.1.151;option domain-name-servers 172.168.1.11;option domain-name "pxe.example.com";option routers 172.168.1.11;option broadcast-address 172.168.1.255;default-lease-time 600;max-lease-time 7200;# IP of PXE Servernext-server 172.168.1.11;filename "pxelinux.0";} What do I need to change in my dhcpd.conf file to make the command “systemctl start dhcpd.service” work so I can finish going through the PXE server tutorial? | The key lines in the journal are No subnet declaration for enp0s20f0u13 (10.249.6.154) (and the similar complaint about the libvirt bridge virbr0 ). dhcpd will only start if, for at least one interface it is asked to serve, it finds a subnet declaration in /etc/dhcp/dhcpd.conf matching the network that interface is actually on. Your only declaration is for 172.168.1.0/24, but no interface on the laptop has an address in that network, so dhcpd gives up and exits. You have two reasonable fixes. Either give the interface that will face the PXE client a static address in the declared subnet, e.g. 172.168.1.11/24 on enp0s20f0u13 (that is the server address the tutorial assumes: it matches routers , domain-name-servers and next-server in your dhcpd.conf), then restart the service. Or rewrite the subnet declaration to match the network the interface is really on (a subnet 10.249.6.0 netmask 255.255.255.0 { ... } block with a suitable range and next-server 10.249.6.154 ) - but be careful: if 10.249.6.0/24 is a shared office or campus network, running your own authoritative DHCP server on it will hand out bogus leases to other machines, so on anything but an isolated lab link the first option is the safer one. Two more tips: 172.168.1.0/24 is not actually private address space (the RFC 1918 range ends at 172.31.255.255), so consider something like 192.168.1.0/24 instead; and you can check the configuration before starting the service with dhcpd -t -cf /etc/dhcp/dhcpd.conf . | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/708719",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/529574/"
]
} |
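If the laptop must keep several active interfaces (the journal shows dhcpd also inspecting virbr0 ), you can tell the CentOS 7 unit to listen only on the PXE-facing one by appending the interface name to ExecStart via a systemd drop-in; a sketch, using the interface name from the question:

# systemctl edit dhcpd
[Service]
ExecStart=
ExecStart=/usr/sbin/dhcpd -f -cf /etc/dhcp/dhcpd.conf -user dhcpd -group dhcpd --no-pid enp0s20f0u13
# systemctl daemon-reload && systemctl restart dhcpd

The empty ExecStart= line clears the original command so the second line replaces rather than adds to it, and dhcpd then ignores every interface not named on its command line.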
708,733 | July 2022 mac os Monterey V12.1 awk --version 20200816 GNU bash, version 3.2.57(1)-release (x86_64-apple-darwin21) Why does awk -F work for most letters, but NOT for the letter t ? I have the solution, but I would like to understand why awk fails for the letter t . # Count 'e's% echo "tweeter" | awk -F "e" '{print NF-1}'3# Count 'r's% echo "tweeter" | awk -F "r" '{print NF-1}'1# (Attempt to) count 't's% echo "tweeter" | awk -F "t" '{print NF-1}'0 <=== ????# Use gsub()% echo "tweeter" | awk '{print gsub(/t/, "")}'2 | Because: Normally, any number of blanks separate fields. In order to set the field separator to a single blank, use the -F option with a value of [ ] . If a field separator of t is specified, awk treats it as if \t had been specified and uses <TAB> as the field separator. In order to use a literal t as the field separator, use the -F option with a value of [t] . That's from the FreeBSD awk man page , and the utilities that come with macOS are usually some old FreeBSD versions or such. $ printf 'foo\tbar\n' | awk -F t '{print NF-1}'1$ echo total | awk -F '[t]' '{print NF-1}'2 In a way, that seems like a useful shorthand for files with tab-separated values, but what with other letters taken as-is, it's confusing. It only works like that with -F , using -v FS=t doesn't do it. The feature is non-POSIX, as POSIX says that -F x is the same as -v FS=x . Most other awks I tested treated t as the literal letter (some versions of gawk, mawk and Busybox). The version of awk that e.g. Debian has in the original-awk package ("One True AWK" or "BWK awk" presumably from Brian W. Kernighan's initials) does support it, though, and at least Wikipedia seems to indicate that would be the same software FreeBSD uses. This one appears to be based on the version described in the 1988 book "The AWK Programming Language", but I'm not an expert on awk lineages and don't know if it has evolved significantly since then. That one is on github , but the documentation there doesn't seem to describe the feature. The special case can be seen in the code (where it's described as "a wart" in a comment). You can get the same behaviour with GNU awk in BWK-awk compatibility mode, though: As a special case, in compatibility mode (see section Command-Line Options), if the argument to -F is ‘t’, then FS is set to the TAB character. If you type ‘-F\t’ at the shell, without any quotes, the ‘\’ gets deleted, so awk figures that you really want your fields to be separated with TABs and not ‘t’s. | {
"score": 7,
"source": [
"https://unix.stackexchange.com/questions/708733",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/532632/"
]
} |
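To see the compatibility-mode behaviour mentioned at the end of the answer side by side with GNU awk's default (which treats the t literally):

$ printf 'foo\tbar\n' | gawk --traditional -F t '{print NF-1}'
1
$ printf 'foo\tbar\n' | gawk -F t '{print NF-1}'
0

In traditional mode -F t means a TAB separator, so the line splits into two fields; by default the t is a literal letter, and since there is no letter t in foo or bar the whole line stays one field.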
708,783 | I want to detect errors in application's execution logic. E.g.: forgot to call free() on address returned by malloc() did not close file handle returned by open() invalid flags passed to open() invalid file handle passed to poll() write() called on fd that wasn't opened for writing pass invalid flags to open() e.g. open("/etc/fstab", 4) calling close() on an invalid fd ... I think there are hundreds more. Maybe the tool can be run similar to ftrace or strace , but a kernel log containing the faulty calls would be sufficient too. | forgot to call free() on address returned by malloc() Well, malloc and free aren't kernel calls! What malloc() (which is a libc library function, normal user process code!) does is look up in the memory pool it keeps whether there's an available chunk of the requested size; if so, mark it as used and return it to the program; if not, call sbrk (or, equivalently, mmap anonymous memory, which was common) to ask the kernel for an amount of new virtual memory pages, add these to the pool, and then satisfy the program's request. free just takes a piece of memory previously returned through malloc and marks it as unused in the memory pool. (If it wasn't such a piece of memory, undefined behaviour happens, but most libcs will abort at that point.) Most implementations of free don't ever try to even return the memory to the OS! Now, if you want memory sanitation, there are tools (valgrind, gcc -fsanitize and more) that watch these free and malloc calls, and even trace whether the address of a malloced piece of memory is still "saved" somewhere in the program, or whether, e.g. at the end of a function, the pointer holding that address just ceases to exist, so that nobody can possibly remember that the memory was allocated. That would be an actual fault; just not immediately freeing memory, or deferring the freeing to the end of the program, is not a problem at all. The whole point of malloc is that you get memory with a potentially infinite lifetime! (hint: if you worry about these kinds of things, and you'd be right to, don't write C. Write in a language that allows for object lifetimes to be tracked properly. That would be languages like Rust, or C++, but the latter only if not taught by someone who thinks of C++ as an extension to C . I have large programs where I never once used new or worse, malloc in my C++ code. Smart pointers can take a lot of the pitfalls off your shoulders, even in C++, which very much allows you to do manual memory control, but in modern variants also very much encourages you not to by offering zero-cost object lifetime tracking. ) did not close file handle returned by open() That's not a problem! Even more than with memory, it's perfectly acceptable and even sensible to keep files open till the end of a program; for example, locks on files wouldn't work if you relinquish them right away. And a control interface would need to be kept open until the program shuts down. Again, if you're worried that within your program's control flow, you might be opening thousands of files and forget to close them, don't write in C, but in a language where a file handle has a lifetime and can close the underlying file descriptor when not needed anymore. Just: "there's a file opened, it has not yet been closed" simply isn't a problem, especially not on POSIX systems, where concurrent file access is normal and in many aspects even well-defined. invalid flags passed to open() How do you know that, other than by things that return an error code anyway?
I mean, it's very normal for a library to check whether a file can be opened "write + append" mode, but it's not a problem if it can't be. If you want to observe any time a system call is made, get its arguments and what it "returns" to user land, the ptrace syscall is your friend, as e.g. used by the popular strace program. Other options involve writing eBPF probes or uprobes, which can be used for very efficient and even "intelligently filtering" logging of such things. invalid file handle passed to poll() Same problem as before, this might just be your program checking whether a file handle can be polled; that's not the case for all (pseudo-) file systems. Additionally, poll is actually also the name of a wrapper function (symbol) supplied at least by glibc if necessary, and "invalid" to that might be different than "invalid" to the poll syscall. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/708783",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/376049/"
]
} |
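To put the ptrace suggestion into practice, strace can log exactly the calls listed in the question ( ./myprog is a placeholder for your binary):

$ strace -f -e trace=openat,close,read,write,poll -o myprog.trace ./myprog

and for the memory-lifetime class of bugs, compile-time instrumentation or valgrind do the bookkeeping for you:

$ gcc -g -fsanitize=address -o myprog myprog.c && ./myprog
$ valgrind --leak-check=full ./myprog

These report leaked malloc() blocks, double frees and similar errors at exit, which is far more reliable than trying to spot them in a raw syscall log.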
708,958 | My env: zsh, macOS The commands in question: echo 'hi' | tee > a b c echo 'hi' > a b c Command 1 creates files named a , b and c with content hi . Command 2 creates a file named a with content hi b c . AFAIK, only the usage of Command 1 without > is documented in the manpage of tee : echo 'hi' | tee a b c I want some help to understand why adding > to the above code (i.e., Command 1) still creates multiple files, whereas Command 2 creates only one file. | Redirection ( > in this case) “consumes” the following argument as the target of the redirection; everything else is left alone. So echo 'hi' | tee > a b c is equivalent to echo 'hi' | tee b c > a tee duplicates its input to b , c , and standard output which goes to a . echo 'hi' > a b c is equivalent to echo 'hi' b c > a and outputs hi b c to standard output, which goes to a . | {
"score": 7,
"source": [
"https://unix.stackexchange.com/questions/708958",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/532499/"
]
} |
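A quick way to see the rearrangement described above for yourself (run in an empty scratch directory; a, b and c are the question's file names):

$ echo hi | tee b c > a
$ head a b c
==> a <==
hi

==> b <==
hi

==> c <==
hi

head labels each file, confirming that all three received the line even though a was filled by the shell redirection rather than by tee's argument list.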
708,967 | On a shared machine somehow a directory ; (with a lot of stuff in it) was created. I only have shell access to this machine and so I can't use any GUI file explorer to delete that file. rm -rf ; results in strange behavior since the semicolon probably acts as a command separator and is not evaluated as the directory's name. rm -rf ./;rm: refusing to remove '.' or '..' directory: skipping './' rm: refusing to remove '.' or '..' directory: skipping './' Furthermore I do not want to delete other directories and so I am not willing to play around with wildcards and stuff. How can I securely remove that semicolon directory recursively? | Quote its name. Quoting the name will stop the shell from interpreting it as a command terminator. rm -rf ';' The same goes for working with files with names containing other characters that the shell usually treats specially, like filename globbing characters, & , < , > , ( , ) , { , } , newlines, tabs, spaces, etc., and any other character that may be part of the shell's grammar. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/708967",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/134224/"
]
} |
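If quoting ever feels fragile (say, inside a script that builds the name dynamically), find can locate and delete the directory without the name passing through the shell's parser at all; hedged as an alternative, not a necessity:

$ find . -maxdepth 1 -type d -name ';' -exec rm -rf {} +

The same pattern also handles names beginning with a dash, where you would otherwise need rm -rf -- ';' to stop the name from being parsed as an option.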
709,082 | So as the question suggests, I want to get the version of a command inside a bash script, instead of using the terminal. I know that all available commands are present as files inside the directory /usr/bin . But that does not give the versions, and it gives some funny information when you use cat on one of the (command) files. For example, cat /usr/bin/man gives a screenful of binary garbage (screenshot omitted). What is this and how do I get the version? I am using Ubuntu 20.04. I know it's a stupid question to ask here but I wasn't able to find anything :/ Supposedly the final edit - the accepted answer works for me, I am using the function. I did also realise later that something like foo --version will work too. But if you are using a variable to store the version then you need to be careful about the syntax; I was doing it the wrong way. Thanks to @Romeo Ninov for pointing out that foo --version works too. I am really sorry for these silly mistakes. | If you are on a Debian based distribution, you can use the package manager to figure it out: $ dpkg -S '*bin/man'man-db: /usr/bin/man This tells us man-db is the package which owns /usr/bin/man . $ dpkg -l man-dbDesired=Unknown/Install/Remove/Purge/Hold| Status=Not/Inst/Conf-files/Unpacked/halF-conf/Half-inst/trig-aWait/Trig-pend|/ Err?=(none)/Reinst-required (Status,Err: uppercase=bad)||/ Name Version Architecture Description+++-==============-============-============-=================================ii man-db 2.10.2-1 amd64 tools for reading manual pages Then we ask what version of man-db is installed. This is man-db 's upstream version 2.10.2 released by the distribution with an extra -1 . The -1 represents a patch done by the distribution. This may just include build rules, but could also include fixes to 2.10.2 ported from later versions. dpkg-query can be used to extract the version: $ dpkg-query -W -f '${Version}\n' man-db2.10.2-1 Putting all this together: $ dpkg-query -W -f '${Version}\n' $(dpkg -S '*bin/man' | cut -d: -f1)2.10.2-1 Or a function which could go into .bashrc : ver() { package=$(dpkg -S "*bin/$1" | cut -d: -f1) version=$(dpkg-query -W -f '${Version}' "$package") printf "%s\n" "$version"}ver man2.10.2-1 (Behind the scenes, dpkg -S is really dpkg-query -S , but dpkg-query -S can't be combined with -W -f .) | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/709082",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/532942/"
]
} |
709,091 | Trying to remove the percentage symbol '%' from this shell script: df -h | awk '$NF=="/"{printf "Percentage: %s \n", $5}' Output: Percentage: 2% I want to make the output look like this: Percentage: 2 Sorry for my bad English. | You can strip the % inside awk itself before printing. Either remove it with sub() : df -h | awk '$NF=="/"{sub(/%/,"",$5); printf "Percentage: %s \n", $5}' or force the field into numeric context, which discards the trailing % : df -h | awk '$NF=="/"{printf "Percentage: %d \n", $5+0}' Both print Percentage: 2 . (A third option is to leave your command as it is and append | tr -d '%' , but doing the cleanup inside awk avoids the extra process.) | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/709091",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/532950/"
]
} |
709,131 | Here is my 1.file id a1 a2 a3 a4 Here is my 2.file DW 1 2 3 4 KD 2 3 4 5 LBJ 4 4 4 4 I want to get my final file id a1 a2 a3 a4 DW 1 2 3 4 KD 2 3 4 5 LBJ 4 4 4 4 And I try to cat 1.file |tr "\n" "\t"|sed -e 's/,$/\n/' and then cat 1.file 2.file >> fina.file but I want to get the awk way | $ column -t <( paste -s 1.file ) 2.fileid a1 a2 a3 a4DW 1 2 3 4KD 2 3 4 5LBJ 4 4 4 4 The lines of 1.file are turned into a single line of headers by means of paste -s and then column -t is used to align these headers with the data in 2.file . The above assumes that you are using a shell that understands process substitutions with <(...) . If you are not, then use the following instead: paste -s 1.file | column -t /dev/stdin 2.file | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/709131",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/530936/"
]
} |
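Since the question explicitly asked for "the awk way", here is a sketch doing the whole merge in a single awk invocation (piped through column -t only to get the same alignment as above):

$ awk 'NR==FNR { h = (FNR==1 ? $0 : h " " $0); next } FNR==1 { print h } 1' 1.file 2.file | column -t
id   a1  a2  a3  a4
DW   1   2   3   4
KD   2   3   4   5
LBJ  4   4   4   4

While reading 1.file (where NR==FNR) it accumulates the header fields into h; on the first line of 2.file it prints that header, and the final 1 prints every data line unchanged.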
709,186 | I'm just using the free version of RHEL in my homelab, so do not have support with Red Hat. Same fix for CentOS should apply here... I shouldn't have even tried to upgrade Python3, but hindsight is (as always) 20/20. This is what I'm getting when I try to use YUM: [root@RHEL7 ~]# yum update-bash: /usr/bin/yum: /usr/bin/python: bad interpreter: No such file or directory[root@RHEL7 ~]# yum-bash: /usr/bin/yum: /usr/bin/python: bad interpreter: No such file or directory From reading about this error it seems I am going to have to completely reinstall Python2 and/or 3, but am wondering if there is any other fix. Both Python2 and Python3 actually still work fine (at least in the REPL): [root@RHEL7 ~]# python3Python 3.6.8 (default, Aug 13 2020, 07:46:32)[GCC 4.8.5 20150623 (Red Hat 4.8.5-39)] on linuxType "help", "copyright", "credits" or "license" for more information.>>>[root@RHEL7 ~]# python2Python 2.7.5 (default, Aug 13 2020, 02:51:10)[GCC 4.8.5 20150623 (Red Hat 4.8.5-39)] on linux2Type "help", "copyright", "credits" or "license" for more information.>>> This is what I get when I try to set Python2 as default (nothing returned, issue persists): [root@RHEL7 bin]# update-alternatives --install /usr/bin/python python /usr/bin/python2.7 1[root@RHEL7 bin]#[root@RHEL7 bin]# yum-bash: /usr/bin/yum: /usr/bin/python: bad interpreter: No such file or directory Oddly enough Python2 already seems to be aliased to python but when I try just running python nothing happens...? The symlink is definitely already present: [root@RHEL7 bin]# pwd/usr/bin[root@RHEL7 bin]# python-bash: python: command not found[root@RHEL7 bin]# ln -s python2 pythonln: failed to create symbolic link ‘python’: File exists[root@RHEL7 bin]# ls -l pythonlrwxrwxrwx 1 root root 24 Jul 9 11:30 python -> /etc/alternatives/python | yum can't find /usr/bin/python . It should look like this: [root@centos7 ~]# ls -l /usr/bin/pythonlrwxrwxrwx. 1 root root 7 Jul 9 11:08 /usr/bin/python -> python2[root@centos7 ~]# If python2 itself is still present but the symlink itself is missing, reinstate using: [root@centos7 ~]# cd /usr/bin[root@centos7 bin]# ln -s python2 python[root@centos7 bin]# ls -l pythonlrwxrwxrwx. 1 root root 7 Jul 9 11:10 python -> python2[root@centos7 bin]# Note the solution by Romeo will likely solve it, but it will leave it looking a little different from the original symlink. For the purposes of your homelab, this might not matter a jot. [root@centos7 ~]# update-alternatives --install /usr/bin/python python /usr/bin/ python2.7 1[root@centos7 ~]# ls -l /usr/bin/pythonlrwxrwxrwx. 1 root root 24 Jul 9 11:11 /usr/bin/python -> /etc/alternatives/python[root@centos7 ~]# ls -l /etc/alternatives/pythonlrwxrwxrwx. 1 root root 18 Jul 9 11:11 /etc/alternatives/python -> /usr/bin/python2.7[root@centos7 ~]# | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/709186",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/221128/"
]
} |
709,272 | I am trying to learn operating system concepts. Here are two simple Python code snippets: while True: pass and this one: from time import sleepwhile True: sleep(0.00000001) Question : Why, when running the first code, is CPU usage 100%, but when running the second one it is about 1% to 2%? I know it may sound stupid, but why can we not implement something like sleep in user space mode without using the sleep system call? NOTE: I have tried to understand the sleep system call from the Linux kernel but TBH I didn't understand what happens there. I also searched about the NOP assembly instruction and it turns out that it is not really doing nothing but doing something useless (like xchg eax, eax) and maybe this is the cause of 100% CPU usage, but I am not sure. What exactly is the assembly code behind the sleep system call that we can't run in user space mode? Is it something like HLT? I also tried to use the HLT assembly instruction in code like this: section .textglobal _start _start: hlthalter: jmp _startsection .datamsg db 'Hello world',0xa len equ $ - msg but after running this code I see a kernel general protection fault like this: [15499.991751] traps: hello[22512] general protection fault ip:401000 sp:7ffda4e83980 error:0 in hello[401000+1000] I don't know; maybe this is related to protection rings, or my code is wrong? The other question here is whether the OS is using HLT or other privileged assembly instructions beneath the sleep system call or not. | Why, when running the first code, is CPU usage 100%, but when running the second one it is about 1% to 2%? Because the first is a "busy loop": You are always executing code. The second tells the OS that this particular process wants to pause (sleep), so the OS deschedules the process, and if nothing else is using CPU, the CPU becomes idle. I also searched about the NOP assembly instruction and it turns out that it is not really doing nothing Well, NOP = no operation: It is actively executing code that has no effect. Which you can use to pad code, but not to put the CPU in a low power state. What exactly is the assembly code behind the sleep system call that we can't run in user space mode? Modern OS on x86 CPUs use mwait . Other CPU architectures use other commands. but after running this code I see a kernel general protection fault like this That's because the OS is supposed to do this in supervisor mode. As I wrote above, the OS needs to be able to keep scheduling processes, so a process itself isn't allowed to put the CPU into idle mode. The other question here is whether the OS is using HLT or other privileged assembly instructions beneath the sleep system call Yes, it does. Though it's executed not during the sleep call, but inside the scheduler loop, when the scheduler detects that there are no processes that want to run. One question for the first part. If I use a very small slot of time, i.e. sleep(0.0000000000000001), does the scheduler still go to the next process? For the actual OS syscalls, see man 3 sleep (resolution in seconds), man usleep (resolution in microseconds), and man nanosleep (resolution in nanoseconds). No matter what floating point number you use in your python code, you won't get a better resolution than the syscall used by python (whichever variant it is). The manpages say "suspends execution of the calling thread for (at least) usec microseconds." etc., so I'd assume it gets descheduled even if the delay is zero (and then immediately rescheduled), but I didn't test that, nor did I read the kernel code. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/709272",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/169862/"
]
} |
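You can watch the syscall boundary from the question directly: tracing a Python sleep shows the single kernel call that parks the process (on a recent Python and glibc this is clock_nanosleep; older combinations show nanosleep or select instead):

$ strace -e trace=clock_nanosleep python3 -c 'import time; time.sleep(0.5)'

During those 0.5 s the process is simply not runnable, so the scheduler gives the CPU to another process or lets it idle, which is why the sleeping loop sits at 1-2% CPU while the busy loop pins a core at 100%.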
709,327 | I am getting a lot of spam messages from a certain country. They all have the same pattern in the source. I want to write a procmail rule to automatically move all those emails to my Spam folder. The source of a spam email might look like this ("[REDACTED]" added by me to protect my privacy): Return-Path: <>X-Original-To: [REDACTED]Delivered-To: [REDACTED]Received: from [REDACTED] ([REDACTED] [REDACTED]) by [REDACTED] (Postfix) with ESMTPS id 2AC8E731E799DC for <[REDACTED]>; Sat, 9 Jul 2022 20:16:41 +0000 (UTC)Received: from [REDACTED].org ([REDACTED].ru [REDACTED]) by [REDACTED] (Postfix) with ESMTP id 6F1865ECD8 for <[REDACTED]>; Sat, 9 Jul 2022 20:16:40 +0000 (UTC)[...] What I want to do is look at the "Received" headers and throw everything that comes from .ru TLD into Spam. My attempt is this: :0 H* ^Received:*\.ru.Spam/ However, I'm new to writing procmail rules. How can I go about testing my new rule so I know it's correct? | procmail accepts the mail message from standard input, so will act on anything you pipe to it. Ideally what you pipe to it should be the same as what the Mail Transport Agent will send. Also to test use a custom rules file before including the new rules in the .procmailrc proper: # rm output# cat testmessageFrom [email protected]: [email protected]: footest# cat testrules:0 H* ^output# procmail testrules < testmessage# cat outputFrom [email protected]: [email protected]: footest# | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/709327",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/533232/"
]
} |
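One thing worth fixing before deploying the rule from the question: in procmail's regexp grammar, ^Received:*\.ru. means "Received" followed by zero or more colons, which is probably not what was intended. A closer sketch (the trailing character class is an assumption meant to stop .ru from also matching .ruXX-style domains; tune it to your traffic):

:0 H
* ^Received:.*\.ru[^.a-z0-9-]
.Spam/

Test it exactly as shown above: save a real spam message to a file, put only this recipe in a test rules file, and pipe the message through procmail with that file.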
709,403 | For as long as I can remember, I've personally used the following method for changing the color of output text when using bash: red=$'\033[1;31m' nc=$'\033[0m'echo "${red}This is red text$nc" It's always worked for me no matter if used on macOS or Linux. Out of curiosity, to better understand why it works, I've done a bit of research and have found some answers/examples that don't include $ ( red='\033[1;31m' ). After some testing, if I don't use $ , the output color doesn't change to red, rather it just prints the string. Though, when executing it using sh instead of bash , the text does change color. Can someone help me understand why this is? I've additionally tested it with double quotes instead of single quotes, and it only seems to work when using sh . I'll post examples below. This is the script and code I used: #!/bin/bash red=$'\033[1;31m' nc=$'\033[0m' echo "${red}Test 1$nc" echo "" ########################## red=$"\033[1;31m" nc=$"\033[0m" echo "${red}Test 2$nc" echo "" ########################## red='\033[1;31m' nc='\033[0m' echo "${red}Test 3$nc" echo "" ########################## red="\033[1;31m" nc="\033[0m" echo "${red}Test 4$nc" echo "" Output on macOS with both bash and sh: Output on Linux with both bash and sh: | The main thing here is that for the terminal to recognize the control sequence, the output sent the terminal must contain the ESC character, then [1;31m . That ESC is a control character, and we don't usually present it as-is in a shell script, but instead we use the backslash escape \033 to represent that character using its octal character code (it's 27 decimal, 033 octal, or 0x1b hex). The rest are just regular characters. Now, that backslash-escape needs to be changed to the raw ESC character at some point. And there's two places where this could happen: the assignment to the variable and the command that prints it. In shells that support it, the $'...' type of quotes interpret the escape already, so after red=$'\033[...' , the variable itself contains the raw ESC character. That's not a standard POSIX feature, and in shells that don't support it, that would assign the raw dollar sign, then backslash, 033[ and so on. On the other hand, red='\033[...' would just assign backslash, 033[ etc., never interpreting the escape. See What is the difference between the "...", '...', $'...', and $"..." quotes in the shell? and Which shells support ANSI-C quoting? e.g. $'string' ( $"..." , and "..." never interpret the escape, and the one with the dollar is also non-standard so might leave the literal dollar sign in shells not supporting it.) Then, the second step is the printing. In some shells echo expands backslash escapes. In others it doesn't. Some of those have echo -e that again does. Bash's echo by default doesn't do it, but with the xpg_echo setting set... it does. Zsh's echo does interpret them. See Why is printf better than echo? So, in a usual Bash on your usual Linux desktop distribution, where xpg_echo isn't set, red=$'\033[1;31m'; echo "${red}text" produces the ESC on the assignment and prints red text, but red='\033[1;31m'; echo "${red}text" doesn't produce it in either the assignment or the echo and just prints \033[1;31mtext in normal color. On Debian/Ubuntu, sh is Dash, where the $'...' quotes aren't supported but echo does interpret backslashes, so red=$'\033[1;31m'; echo "${red}text" prints a normal-colored $ , and then the red text. This is what your run with sh and Linux looks like. 
You might get a different result in some other Linux system. And on a Mac, the shell installed as sh is actually either Bash with xpg_echo set (the one installed as bash has it disabled), or zsh in newer macOS releases. Even though starting either as sh tries to make them more POSIX-compatible, they still support the $'...' quotes, so both red=$'\033[1;31m'; echo "${red}text" and red='\033[1;31m'; echo "${red}text" print red text. Confused yet? red='\033[1;31m'; printf "%b\n" "${red}text"; should work in all POSIX-like shells. The %b tells it to process backslash escapes in the argument, so any in "text" would get processed. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/709403",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/241691/"
]
} |
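A variant that sidesteps the escape-processing question entirely is to let terminfo produce the control sequences, so neither the assignment nor the printing has to interpret backslashes (works in POSIX-like shells; requires the tput utility):

red=$(tput setaf 1) reset=$(tput sgr0)
printf '%sThis is red text%s\n' "$red" "$reset"

tput setaf 1 emits whatever "red foreground" sequence the current terminal uses, which also makes the script work on terminals that don't speak the hard-coded ESC [ codes.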
709,407 | I love (the way) how Linux & Co. lets users install many packages from different repositories. AFAIK, they also come with source packages, so you can compile them by yourself. But why even bother to "keep/offer" pre-compiled packages, when you could just compile them yourself? What are the intentions of keeping/offering them? Is it possible to configure Linux to only download source packages and let the OS do the rest? (Just like a pre-compiled package installation?) Thank you for your answers. | It’s a trade-off: distributions which provide pre-built packages spend the time building them once (in all the configurations they support), and their users can then install them without spending the time to build them. The users accept the distributions’ binaries as-is. If you consider the number of package installations for some of the larger distributions, the time saved by not requiring recompilation everywhere is considerable. There are some distributions which ship source and the infrastructure required to build it, and rely on users to build everything locally; see for example Gentoo . This allows users to control exactly how their packages are built. If you go down this path, even with the time savings you can get by simplifying package builds, you should be ready to spend a lot of time building packages. I don’t maintain the most complex packages in Debian, but one of my packages takes over two hours to build on 64-bit x86 builders, and over twelve hours on slower architectures! | {
"score": 7,
"source": [
"https://unix.stackexchange.com/questions/709407",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/518817/"
]
} |
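On a Debian or Ubuntu system you can get most of the way to the "build it all yourself" workflow for a single package without switching distributions; a sketch, assuming deb-src lines are enabled in your APT sources and using man-db purely as an example package:

$ apt-get source man-db
$ sudo apt-get build-dep man-db
$ cd man-db-*/ && dpkg-buildpackage -us -uc -b

This downloads the distribution's source package, installs its build dependencies, and produces installable .deb files, so you can spot-check how long "compile everything myself" really takes before committing to it.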
709,487 | I'm having trouble understanding what I need to escape when using sh -c . Let's say I want to run the for loop for i in {1..4}; do echo $i; done . By itself, this works fine. If I pass it to eval , I need to escape the $ : eval "for i in {1..4}; do echo \$i; done" , but I cannot make it work for sh -c "[...]" : $ sh -c "for i in {1..4}; do echo $i; done"4$ sh -c "for i in {1..4}; do echo \$i; done"{1..4}$ sh -c "for i in \{1..4\}; do echo \$i; done"{1..4}$ sh -c "for i in \{1..4\}\; do echo \$i\; done"sh: 1: Syntax error: end of file unexpected Where can I find more information about this? | The usual wisdom is to define the script (after the -c ) inside single quotes. The other part you need is a shell where the {1..4} construct is valid: $ bash -c 'for i in {1..4}; do echo $i; done' # also works with ksh and zsh One alternative to get it working with dash (your sh ) is to do the expansion in the shell you are using interactively (I am assuming that you use bash or zsh as your interactive shell): $ dash -c 'for i do echo $i; done' mysh {1..4}1234 | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/709487",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/210906/"
]
} |
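If you want the values to come from outside the quoted script rather than from brace expansion, sh -c accepts positional parameters after the script (the word right after it, here sh, becomes $0):

$ sh -c 'for i in "$@"; do echo "$i"; done' sh 1 2 3 4
1
2
3
4

This keeps the inner script in single quotes with no escaping at all, which is usually the least error-prone way to feed data into sh -c.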
709,500 | I use Linux Debian 11 with cinnamon desktop and I have some icons on the left side of the panel and when I click on one of them I would like that icon to stay fixed to its place and the new window to be opened next to the right, but instead of this a new window opens in the place of the icon which I clicked and then I can't open a new window of this program by clicking on that icon, because this icon becomes the representation of a new window itself. I can't find any solution in the panel configuration nor in the preferences of that program that is represented by the icon on the left side of the panel (right click on icon -> preferences -> configuration, but I can't find any way to change it). Is there a way to make icons on the panel to launch a new window next to the right instead of the representation of a new window on the panel to be appearing in the place of the primary icon? | What you are describing is how Cinnamon's "Grouped window list" applet (the default panel applet in Cinnamon 4 and later, which Debian 11 ships) behaves: it merges each launcher icon with the windows of that application, the way a dock does. If you want launcher icons that stay in place and open windows that appear as separate entries to their right, replace it with the two older, separate applets: right-click the panel, choose Applets, remove "Grouped window list", then add "Panel launchers" (for the fixed icons) and "Window list" (for the open windows). With that combination, clicking a launcher always starts a new instance, and the resulting window shows up further to the right in the window list. If you would rather keep the grouped applet, middle-clicking one of its icons should open a new window of that application, and the click behaviour can be tuned in the applet's settings. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/709500",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/533075/"
]
} |
709,531 | Tools like fdupes are ridiculous overkill when dealing with jpg or h264 compressed files. Two such files having the exact same filesize is already a pretty good indication of them being identical. If, say, in addition to that, 16 equidistant chunks of 16 bytes are extracted and compared, and they are the same as well, that would be plenty of evidence for me to assume that they are identical. Is there something like that? (By the way I am aware that filesize alone can be a rather unreliable indicator since there are options to compress to certain target sizes, like 1MB or 1 CD/DVD. If the same target size is used on many files, it is quite reasonable that some different files will have the exact same size.) | czkawka is an open source tool which was created to find duplicate files (and images, videos or music) and present them through command-line or graphical interfaces, with an emphasis on speed. This part from the documentation may interest you: Faster scanning for big number of duplicates By default, a partial hash (a hash of only 2KB of each file) is computed for all files grouped by the same size. Such a hash is usually computed very quickly, especially on an SSD with a fast multicore processor. But when scanning hundreds of thousands or millions of files with an HDD or a slow processor, this step can typically take a long time. With the GUI version, hashes will be stored in a cache so that searching for duplicates later will be way faster. Examples: Create some test files: We generate random images, then copy a.jpg to b.jpg in order to have a duplicate. $ convert -size 1000x1000 plasma:fractal a.jpg$ cp -v a.jpg b.jpg'a.jpg' -> 'b.jpg'$ convert -size 1000x1000 plasma:fractal c.jpg$ convert -size 1000x1000 plasma:fractal d.jpg$ ls --sizetotal 1456364 a.jpg 364 b.jpg 364 c.jpg 364 d.jpg Check only the size: $ linux_czkawka_cli dup --directories /run/shm/test/ --search-method sizeFound 2 files in 1 groups with same size(may have different content) which took 361.76 KiB:Size - 361.76 KiB (370442) - 2 files /run/shm/test/b.jpg/run/shm/test/a.jpg Check files by their hashes: $ linux_czkawka_cli dup --directories /run/shm/test/ --search-method hashFound 2 duplicated files in 1 groups with same content which took 361.76 KiB:Size - 361.76 KiB (370442) - 2 files /run/shm/test/b.jpg/run/shm/test/a.jpg Check files by analyzing them as images: $ linux_czkawka_cli image --directories /run/shm/test/Found 1 images which have similar friends/run/shm/test/a.jpg - 1000x1000 - 361.76 KiB - Very High/run/shm/test/b.jpg - 1000x1000 - 361.76 KiB - Very High | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/709531",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/533419/"
]
} |
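If you only want the cheap size-based pre-filter described in the question, without installing anything, here is a sketch with GNU find (the -printf is a GNUism, and file names containing tabs or newlines would confuse it):

find . -type f -printf '%s\t%p\n' | sort -n |
awk -F'\t' '$1 == prev { if (!shown) print prevline; print; shown = 1; next }
            { prev = $1; prevline = $0; shown = 0 }'

This prints only the files whose size is shared with at least one other file, i.e. the candidate set you would then confirm with chunk comparisons or full hashes.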
709,590 | I have millions of rows of data my boss has asked me to reformat for them. The format is: 06/28/2022,04:00,142.01,142.38,141.59,142.15,315106/28/2022,04:01,142.1,142.1,142.1,142.1,196 I need to reformat the first date field to: 20220628,04:00,142.01,142.38,141.59,142.15,315120220628,04:01,142.1,142.1,142.1,142.1,196 %Y%m%d I have the following: gawk -F"," '{OFS=","; $1=strftime("%Y%m%d", $1); print $0}' AAPL.txt > AAPL.csv but the weird thing is it works, but produces a date in 1969. 19691231,04:00,142.01,142.38,141.59,142.15,315119691231,04:01,142.1,142.1,142.1,142.1,196 I don't understand why. I chose gawk because awk on MacOS doesn't have strftime and calling date externally creates a huge performance hit. | Your code does not do what you expect it to do because the GNU awk strftime() expects a Unix timestamp as its second argument. It's unable to parse an arbitrary datetime string. However, we don't really need strftime() here. $ awk -F , 'BEGIN { OFS=FS } { split($1,a,"/"); $1 = a[3] a[1] a[2] }; 1' file20220628,04:00,142.01,142.38,141.59,142.15,315120220628,04:01,142.1,142.1,142.1,142.1,196 This treats each line of input as simple comma-delimited fields and splits the first such field up on / into the array a . The first field is then re-formed as the elements of the array concatenated in the wanted order. The lone 1 at the end of the awk code causes the modified record to be output. This would work with the default awk on macOS. It does not need special date formatting functions as it treats the input date as a string and simply reorganises it. The only assumption about the date is that it always is in the MM/DD/YYYY format in the input and that it should be in the YYYYMMDD format in the output. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/709590",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/533469/"
]
} |
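The same rearrangement works with sed if you ever need it outside awk, and it also runs fine with the stock macOS tools (the pattern assumes the MM/DD/YYYY field always starts the line):

$ sed -E 's|^([0-9]{2})/([0-9]{2})/([0-9]{4})|\3\1\2|' AAPL.txt > AAPL.csv

The capture groups pick up month, day and year, and the replacement writes them back in YYYYMMDD order, leaving the rest of each line untouched.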
709,593 | I am running cmake version 3.23.2 on Debian testing. If I run - $cmake . it generates the output, but that's too fast for me to read. If I do something like - cmake . > cmake-build.txt it captures only some of the lines. I never come to know/see if there are any errors, warnings, etc. that need to be reported. Updated and this is what I get - Platform: LinuxTouchscreen input: No (Enable by param -DENABLE_TOUCHSCREEN=ON)Metaserver: No (Enable by param -DENABLE_METASERVER=ON)Doxygen documentation: No (Enable by param -DENABLE_DOC=ON)Game development files: No (Enable by param -DENABLE_DEV=ON)Upx packer: No (Enable by param -DENABLE_UPX=ON)X11: Found and enabled (Disable by param -DWITH_X11=OFF)==================================-- Configuring done-- Generating done-- Build files have been written to: /home/shirish/games/Wyrmgus-master Let me share what I did - ~/games/Wyrmgus-master$ mkdir build~/games/Wyrmgus-master$ cd build/~/games/Wyrmgus-master/build$ cmake ../ | less The third is the crucial command I guess. Here, it just gives me some but not everything; at least something is missing, e.g. there is something before - -- The C compiler identification is GNU 11.3.0-- The CXX compiler identification is GNU 11.3.0-- Detecting C compiler ABI info-- Detecting C compiler ABI info - done I want the few lines before that to be also captured :( | Redirecting with > only captures standard output, and CMake deliberately splits its output across the two streams: status lines (the -- ... messages) go to stdout, while plain message() output from the project, warnings and errors all go to stderr. That is why cmake . > cmake-build.txt leaves most of the interesting lines on the terminal, and why cmake ../ | less showed you the -- lines but not the project's banner printed before them: the pipe only carried stdout. Merge the two streams when you redirect: cmake .. > cmake-build.txt 2>&1 or, to watch the output and keep a copy at the same time: cmake .. 2>&1 | tee cmake-build.txt The same applies when paging ( cmake .. 2>&1 | less ) and to the actual build step afterwards ( make 2>&1 | tee build.log , or cmake --build . 2>&1 | tee build.log ). | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/709593",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/50490/"
]
} |
709,731 | The following files exist in the current directory GCF_000901975.1_ViralProj181986_genomic.fna.gz GCF_001885505.1_ViralProj344311_genomic.fna.gz GCF_000901995.1_ViralProj181990_genomic.fna.gz GCF_001041015.1_ViralProj287961_genomic.fna.gz and I want to rename the current files like this GCF_000901975.1 GCF_001885505.1 GCF_000901995.1 GCF_001041015.1 I'm using the below script to get it, but it fails for file in `ls | grep .gz` do newfile=`echo $file | awk -F "_" '{print $1,"_",$2,".gz"| sed 's/ //g"` mv $file $newfile done Can anybody give me some advice? Or maybe I should try "split"? Appreciate it | First of all, avoid parsing the output of ls . Next, even if you have a good reason to parse ls output (you don't, here), there is no reason to pass it through grep : ls *gz will list only the file and directory names ending with gz (but note that it will also list the contents of directories whose names end with gz unless you use ls -d ) and, unlike ls | grep .gz will not match files like not.a.gz.file , and will match files with newlines in their names. In any case, you don't need or want ls at all here, all you want is for file in *gz which is a far better approach since it can deal with arbitrary file names (as long as you properly quote your variables). So, your loop could be much better written as: for file in *.gz; do newfile=$(echo "$file" | awk -F "_" '{print $1"_"$2".gz"}' | sed 's/ //g') mv -- "$file" "$newfile" done Note how I also fixed your awk and sed commands since you hadn't closed the awk part before opening the sed part and how all variables are quoted. Also note how I removed the , in your awk print statement since those would be adding a space (or whatever you set the OFS variable to) between each printed item. The -- is used to indicate the end of command line options and ensures the command will work with file names starting with - . Next, you don't really need awk or sed at all. You could use the shell: for f in *gz; do echo mv -- "$f" "${f%%_V*}"done The ${variable%%pattern} syntax ("${f%%_V*}") means "return the value of $variable after removing the longest string matching $pattern from the end". So, in this case, it means "remove everything from the first _ which comes before a V". You can read more about it here : ${parameter%%word} The word is expanded to produce a pattern and matched according to the rules described below (see Pattern Matching). If the pattern matches a trailing portion of the expanded value of parameter, then the result of the expansion is the value of parameter with the shortest matching pattern (the ‘%’ case) or the longest matching pattern (the ‘%%’ case) deleted. If parameter is ‘@’ or ‘*’, the pattern removal operation is applied to each positional parameter in turn, and the expansion is the resultant list. If parameter is an array variable subscripted with ‘@’ or ‘*’, the pattern removal operation is applied to each member of the array in turn, and the expansion is the resultant list. Once you are satisfied that it works as expected, remove the echo and run it again to actually rename the files.
Finally, if you have perl-rename (called rename on Debian-based Linux distributions), you can also do: $ rename -n -- 's/_V.*//s' *gzGCF_000901975.1_ViralProj181986_genomic.fna.gz -> GCF_000901975.1GCF_000901995.1_ViralProj181990_genomic.fna.gz -> GCF_000901995.1GCF_001041015.1_ViralProj287961_genomic.fna.gz -> GCF_001041015.1GCF_001885505.1_ViralProj344311_genomic.fna.gz -> GCF_001885505.1 If that looks OK, remove the -n to actually rename the files. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/709731",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/530936/"
]
} |
709,738 | On a Solaris server, how can I get the total size of this output? Could you help me? I need to learn the total size of the last day's files root@test:# find . -mtime -1 -type f | xargs du -sh226M ./10/01.tar209M ./10/02.tar198M ./10/03.tar202M ./10/04.tar193M ./10/05.tar193M ./10/06.tar193M ./10/07.tar204M ./10/08.tar222M ./10/09.tar244M ./10/10.tar24G ./10/00.tar17G ./10/01.tar11G ./10/02.tar8.3G ./10/03.tar6.5G ./10/04.tar5.8G ./10/05.tar6.0G ./10/06.tar8.3G ./10/07.tar | You can't add up du -h 's human-readable numbers ( 226M , 24G , ...) directly, so ask du for a single fixed unit and let awk do the sum. With Solaris du , -k reports sizes in kilobytes: find . -mtime -1 -type f | xargs du -k | awk '{ total += $1 } END { printf "%.1f GiB\n", total/1024/1024 }' It doesn't matter that xargs may run du several times for a long file list, because awk simply sums every line it sees. If any of the file names can contain whitespace, prefer find . -mtime -1 -type f -exec du -k {} + over the xargs form. And if you have GNU du available (e.g. under /usr/gnu/bin on newer Solaris releases), du -shc would print a grand-total line directly, but the awk sum works with the stock tools. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/709738",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/431914/"
]
} |
709,882 | I came across something funny when testing my script. I can ls my directory from the shell manually if I run $ ls ~/db_backups/test1 test2$ However, if I assign a shell variable a dir location as such with the tilde, it doesn't work. I tried this with both single and double quotes. $ backupfolder='~/db_backups'$ echo $backupfolder~/db_backups$ ls $backupfolderls: cannot access '~/db_backups': No such file or directory$ What is happening with the tilde substitution inside the shell variable? Why can't I ls the directory through the variable like I can manually with the tilde in the dir name? | The bash shell will not expand ~ when the tilde is part of the result of a variable expansion. The unquoted tilde prefix ( ~ , ~+ , or ~username for the user named username ) is only expanded to the corresponding home directory when it is at the start of a word or immediately following a = or : , optionally followed by / and other path elements. In your case, it would be easier to do the expansion when assigning to your variable backupfolder . This would happen if you left the tilde unquoted (using neither single nor double quotes): backupfolder=~/db_backupsls "$backupfolder" ... or if you used $HOME instead (without single quotes): backupfolder=$HOME/db_backupsls "$backupfolder" In both these assignments, the shell would expand the value to the right of the assignment operator to a pathname of something called db_backups in the current user's home directory. In general, it's often easier to work with $HOME in scripts and leave the use of ~ to interactive sessions where it may serve as a handy shortcut. The HOME variable always behaves as a variable . | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/709882",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/30038/"
]
} |
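A minimal sketch contrasting the two assignments described in the answer above (the directory and the /home/user path are hypothetical):
$ backupfolder=~/db_backups      # unquoted: tilde expanded at assignment time
$ echo "$backupfolder"
/home/user/db_backups
$ backupfolder='~/db_backups'    # quoted: the two literal characters ~/ are stored
$ echo "$backupfolder"
~/db_backups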
709,900 | I would like to see the logs for a microservice with the latest version and send its content to a file according to the date.
user@MacBook-Pro ~ % kubectl -n bci-api get pods | grep ms-example-microservices
ms-example-microservices-neg-re-v1-7-56bfd9f6c7-kjb24   1/1   Running   1 (6h39m ago)   6h47m
ms-example-microservices-neg-re-v2-0-66d88b48fb-9ttcf   1/1   Running   0               5h14m
ms-example-microservices-neg-re-v2-1-6d8749dfb8-d42jk   1/1   Running   0               6h26m
ms-example-microservices-neg-re-v2-2-849c97f6c-dnp45    1/1   Running   0               4h53m
ms-example-microservices-neg-re-v2-3-db6dc776c-x45jl    1/1   Running   0               5h50m
user@MacBook-Pro ~ % kubectl logs -f -n bci-api ms-example-microservices-neg-re-v2-3-db6dc776c-x45jl > pf_v2-3.2022-07-14.log
I would like to select the latest version (for this example, v2-3 ), and later create a file with the date. Is it possible to do it in a single line?
user@MacBook-Pro ~ % kubectl -n bci-api get pods | grep ms-example-microservices | tail -n 1 | awk '{print $1}'
ms-example-microservices-neg-re-v2-3-db6dc776c-x45jl
user@MacBook-Pro ~ %
Now creating the date-time for the name:
user@MacBook-Pro ~ % $(date +"%Y%m%d_%H%M%S")
zsh: command not found: 20220714_172238
user@MacBook-Pro ~ %
I was trying the nested command with:
kubectl logs -f -n bci-api $(kubectl -n bci-api get pods | grep ms-example-microservices | tail -n 1 | awk '{print $1}') > "pf_$(date +"%Y%m%d_%H%M%S").log"
Unfortunately I don't know how to select the version v2-3 for the name. |  | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/709900",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/117555/"
]
} |
709,933 | Here I found the file with ..:; name. mkdir '..:;' worked fine. But in PATH directory names are split by : . How to add this directory to PATH? | The POSIX standard explicitly mentions that it's impossible to use directories with : in their names in the PATH variable's value. See the entry about the PATH environment variable in the section entitled Other Environment Variables : Since <colon> is a separator in this context, directory names that might be used in PATH should not include a <colon> character. In the zsh shell, you would be able to add the directory to your search path and have it work as expected by modifying your path array variable (which is tied to PATH ): path+=( '/some/path/..:;' ) or to add the entry first rather than last: path=( '/some/path/..:;' $path ) However, after doing this, modifying the shell's search path using PATH rather than via the path array will cause the ..:; entry to be split on the : . Also, note that although the modified path may work in the zsh shell, it is unlikely to work as expected in another shell or in an application started from that shell. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/709933",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/470570/"
]
} |
709,952 | I want to grep a search pattern but only succeed (and output the matching line) if there is only one unique match. If two lines match, grep should fail or output nothing. | You can't do this with grep , but you can simply count the matches. I don't know what shell, what grep or what operating system you are using, but here's an example of a bash function that can do that:
maxOne() (
    pattern="$1"
    file="$2"
    IFS=$'\n'
    set -f
    results=( $(grep -m2 -- "$pattern" "$file") )
    if [ "${#results[@]}" -eq 1 ]; then
        printf -- '%s\n' "${results[@]}"
        return 0
    else
        return 1
    fi
)
Add those lines to your ~/.bashrc or just paste them into a terminal with a running bash session, and you can then do maxOne foo file to search for foo in file . Note that the -m option (maximum results), which is used here for efficiency to make grep exit after two matches, isn't supported by all versions of grep , so if it gives you an error, just remove it. It isn't needed, it just speeds things up. Important : this will not work for multi-line search strings which you can use with grep -z if your grep supports that. If you need to be able to handle multi-line search patterns, you will need a different approach. Also, this will not work with patterns that match empty lines (e.g. grep '^$' file ). Stéphane's solution will handle empty lines, so that would be a better option if this is an issue. His will also work on multiple files, unlike mine, which is a nice perk. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/709952",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/240014/"
]
} |
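A shorter sketch of the same "exactly one match" idea, counting matching lines with grep -c instead of collecting them in an array (the same caveats about multi-line and empty-line patterns apply; the file name is hypothetical):
if [ "$(grep -c -- "$pattern" file)" -eq 1 ]; then
    grep -- "$pattern" file
fi
This runs grep twice, so it is slower on large files, but it avoids the word-splitting setup ( IFS , set -f ) entirely.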
709,978 | Linux has shred , from the GNU coreutils package, to securely overwrite data in-place when removing files. What is the equivalent on BSD systems (and specifically on macOS)? | You can install the GNU coreutils with brew install coreutils in macOS (using Homebrew), pkg install coreutils in FreeBSD, or pkg_add coreutils in OpenBSD and then run gshred , which will behave exactly like shred in Linux. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/709978",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/533887/"
]
} |
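A hypothetical usage sketch after installing the package on macOS with Homebrew:
$ brew install coreutils
$ gshred -v -n 3 -u secrets.txt
Here -n 3 overwrites the file three times, -v reports progress, and -u truncates and removes the file afterwards; without -u only the contents are overwritten.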
709,987 | I am trying to build a new bootloader, regrettably without knowing too much about the subject. I'm following the instructions in Problems booting installer to UEFI system via PXE . I first run:
grub-mknetdir --net-directory=/srv/tftp/ --subdir=/boot/grub
Netboot directory for x86_64-efi created. Configure your DHCP server to point to /srv/tftp/boot/grub/x86_64-efi/core.efi
This seemed to work, so I go on to the next command:
root@vogon:~# grub-mkimage -O x86_64-efi /srv/tftp/boot/grub/x86_64-efi/core.efi --prefix='tftp,192.168.50.9)/boot/grub' efinet tftp
grub-mkimage: error: cannot open `/srv/tftp/boot/grub/x86_64-efi/core.efi.mod': No such file or directory.
I can see the .mod files in /srv/tftp/boot/grub/x86_64-efi/ :
root@vogon:~# file /srv/tftp/boot/grub/x86_64-efi/*
/srv/tftp/boot/grub/x86_64-efi/acpi.mod: ELF 64-bit LSB relocatable, x86-64, version 1 (SYSV), not stripped
/srv/tftp/boot/grub/x86_64-efi/adler32.mod: ELF 64-bit LSB relocatable, x86-64, version 1 (SYSV), not stripped
/srv/tftp/boot/grub/x86_64-efi/affs.mod: ELF 64-bit LSB relocatable, x86-64, version 1 (SYSV), not stripped
/srv/tftp/boot/grub/x86_64-efi/afs.mod: ELF 64-bit LSB relocatable, x86-64, version 1 (SYSV), not stripped
...
The efinet.mod and tftp.mod both exist, but no core.efi.mod , of course. The core.efi file is different from the .mod files:
root@vogon:~# file /srv/tftp/boot/grub/x86_64-efi/core.efi
/srv/tftp/boot/grub/x86_64-efi/core.efi: PE32+ executable (EFI application) x86-64 (stripped to external PDB), for MS Windows
How do I get past this issue? |  | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/709987",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/364511/"
]
} |
710,499 | I've looked online but couldn't get a straight answer. Plus, there are no mentions of package managers in the Unix books that I've read. One would imagine that someone/something needed to manage the installation/update/removal of programs, like we have today. But did the people at Bell Labs have a package manager? Did they have a centralized repository with a bunch of programs? Or was it like Slackware today, where each person manages their own packages? Thanks in advance | No. A package manager requires a repository to be useful: some storage space where packages wait to be installed. A repository requires a network, or some other kind of external storage. The earliest "package manager" (which was not called a "package manager") was an "installation manager" that was part of the install CD. And that was already the 90s... I'm not sure which distribution had that feature first; I met it in the middle of the 90s in FreeBSD. It was a set of four CDs with the OS on the first, necessary tools on the second, and the last two had a lot of user-level applications (text and graphic editors, games, etc). A user could re-start the installation utility at any time (on a working OS) and install an app from these CDs; it felt very weird, but was useful. Internet-based repositories, and package managers for them, I first met only in this century. Before that: get an app.tar.gz file, unpack it, run configure; make; make install . If it failed, take your favorite text editor, look for the problem, try to fix it, and compile again. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/710499",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/531949/"
]
} |
710,568 | I have 120 files (genomes.faa) that all have headers between each sequence
>GENOME1_00001 HYPOTHETICAL PROTEIN A
NQFTIAQSQVGLEDALLDL
>GENOME1_00002 HYPOTHETICAL PROTEIN B
NQFTIAQSQVGLEDALLDL
>GENOME1_00003 HYPOTHETICAL PROTEIN C
NQFTIAQSQVGLEDALLDL
etc.
I am trying to remove the "_0000X " after the name and replace it with a "|"
>GENOME1|HYPOTHETICAL PROTEIN A
NQFTIAQSQVGLEDALLDL
>GENOME1|HYPOTHETICAL PROTEIN B
NQFTIAQSQVGLEDALLDL
>GENOME1|HYPOTHETICAL PROTEIN C
NQFTIAQSQVGLEDALLDL
etc.
I have tried doing this:
for file in *.faa
do
sed -r 's/_.*$/|/g' $file > $file.1
done
This does not keep the 'HYPOTHETICAL PROTEIN A' afterwards, resulting in
>ERR1156171|
MMRQSVQTVLP
instead of
>ERR1156171|HYPOTHETICAL PROTEIN A
MMRQSVQTVLP
Any help is appreciated! | I think you were very close to a working command. This worked for me on the few examples you gave:
sed -E 's/_[0-9]+ /|/' "$file" > "$file.1"
I changed the match expression from _.* to _[0-9]+ to limit the match to only the underscore, numeric digits, and space character. I removed the $ because that matches at the end of the line, not the end of the first word. I changed the end of the substitute command from /g to / because your examples have only one place in each line that needs editing, rather than multiple places. Also, rather use -E than -r for extended regular expressions, as the former is more compatible with other versions of sed; and quote the variable expansions in case any filenames contain whitespace or special characters. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/710568",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/534500/"
]
} |
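Putting the corrected command back into the loop from the question, as a sketch:
for file in *.faa; do
    sed -E 's/_[0-9]+ /|/' "$file" > "$file.1"
done
Each "$file.1" then holds the edited copy, and the originals are left untouched.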
710,571 | I'm trying to download some package binaries using dget but I keep getting a "No public key" error.
dget http://deb.debian.org/debian/pool/main/g/gl-image-display/gl-image-display_0.10-2.dsc
Returns the output
dget: retrieving http://deb.debian.org/debian/pool/main/g/gl-image-display/gl-image-display_0.10-2.dsc
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  2243  100  2243    0     0   9080      0 --:--:-- --:--:-- --:--:--  9080
dget: using existing gl-image-display_0.10.orig.tar.gz
dget: using existing gl-image-display_0.10-2.debian.tar.xz
dscverify: gl-image-display_0.10-2.dsc failed signature check:
gpg: WARNING: no command supplied. Trying to guess what you mean ...
gpg: Signature made Wed 06 Apr 2022 04:57:07 PM MDT
gpg: using RSA key B5E2FA190FDF9AFE218889CFACC7C2CF30941188
gpg: Can't check signature: No public key
Validation FAILED!!
I tried to use the configuration variable DGET_VERIFY=no to disable checking signatures of downloaded source packages (see the documentation ), but I still get the same error. How can I either fix this error or disable the validation check? |  | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/710571",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/142412/"
]
} |
710,577 | I need to create a reusable ("utility") script handling the entire set of operations with temporary files for any of my other ("application") scripts:
creation
tracking of the temporaries created
trapping exit (regular completion and interruption by user) and removing everything created
Each application script needs to "include" the utility script on its own, not rely on some "main" script calling it once for all the subscripts it uses. The key requirement is that application scripts using the utility script can call each other, each of them in turn also needing to use temporary files. Calls between scripts are both ./some_script.sh and . ./some_script.sh . Some scripts exit with an error code; others just finish without an explicit exit . The application scripts themselves can be called from the command line using subshells, like:
echo "Parameter" |./feed_parameter_to_script.sh script_using_tmps.sh
Ideally, application scripts could invoke creating temporaries from subshells within their own code, but that's not mandatory. The only solution I found is for each script to perform its tmp removal on its own, which does not fit item 3 in my requirements. What is best practice for requirements like mine? |  | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/710577",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/301190/"
]
} |
710,744 | I have a file that has this pattern in Linux and Unix environments.
# file1
text1 98432230
text2 123412
text3 10
line4 0
line5 0
line6 40000
... ...
line10 20
... ...
I am trying to exclude lines that end with 0 , but not exclude lines ending with 10 , 20 , 40000 , 98432230 and so on. I have tested |grep -v "\0$" , | grep -v "[[:space:]]0$" , | grep -v " 0" , | sed '/0/d' , | sed "/0$/d" but none of them work, as they exclude any line that ends in 0, including 10, 20, 40000 and 98432230. Any suggestions? | Don't use grep , do it in awk instead:
$ awk '$NF!=0' file
text1 98432230
text2 123412
text3 10
line6 40000
... ...
line10 20
... ...
In awk , the variable NF is the number of fields, so $NF is the last field. Expressions that evaluate to true mean "print this line", so $NF!=0 means "print every line whose last field is not 0". You could even simplify it further to just: awk '$NF' file | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/710744",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/534664/"
]
} |
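One nuance of the awk answer above: $NF!=0 is a numeric comparison, so values that merely look like zero are dropped too. A sketch demonstrating this with hypothetical input:
$ printf 'x 0.0\ny 10\n' | awk '$NF!=0'
y 10
If a literal string comparison is wanted instead, $NF!="0" would keep the x 0.0 line.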
710,800 | I'm trying to insert a space between letters only, not numbers or other characters. hello woRLd 12ab34 should become h e l l o w o R L d 12a b34 sed 's/\([a-zA-Z]\)\([a-zA-Z]\)/\1 \2/g' file.txt results in h el lo w or LD 12a b34 I can't insert a space after every letter, as that doesn't check if the one after that will be a letter also. I could run the sed command twice, which solves the problem but is not elegant. I need to solve this problem using sed , if possible. | You don't need to run the sed command twice, you can simply run the s ubstitute command inside the sed script twice. sed has an elegant way to do this: The empty pattern // repeats the previous pattern: sed 's/\([a-zA-Z]\)\([a-zA-Z]\)/\1 \2/g;s//\1 \2/g' file.txt For the sake of readability, I suggest to use extended regular expressions: sed -E 's/([a-zA-Z])([a-zA-Z])/\1 \2/g;s//\1 \2/g' file.txt | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/710800",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/534710/"
]
} |
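If perl is available, a lookahead avoids applying the substitution twice, because the letter after the inserted space is only tested, not consumed (a sketch, not part of the original answer):
perl -pe 's/([a-zA-Z])(?=[a-zA-Z])/$1 /g' file.txt
On the question's input this also yields h e l l o w o R L d 12a b34 in a single pass.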
710,901 | What is the meaning of chmod -R a-x,a=rX,u+w ? chmod changes file permissions, and -R makes it be done recursively, but what are a-x , a=rX and u+w here? | a-x clears the executable bit for everyone (user, group, other). a=rX sets the read bit for everyone, and the executable bit on directories; see What is a capital X in posix / chmod? for details. It clears all other bits. (This can't be combined with a-x , because X here would set the executable bit for any non-directory with an executable bit set too; applying a-x first ensures that only directories get their executable bit set.) u+w sets the write bit for the user. The result is that all directories end up with 755 permissions, and everything else with 644 permissions. Here are a few examples:
Step   Regular file   Executable   Directory
a-x    ??-??-??-      ??-??-??-    ??-??-??-
a=rX   r--r--r--      r--r--r--    r-xr-xr-x
u+w    rw-r--r--      rw-r--r--    rwxr-xr-x
If we leave out the a-x step, one of the executable's x bits would be set (otherwise it wouldn't be an executable), and the a=rX step would handle it like a directory. If you prefer reasoning in terms of "read, write, execute", then a=r,u+w,a+X might be easier to understand:
Step   Regular file   Executable   Directory
a=r    r--r--r--      r--r--r--    r--r--r--
u+w    rw-r--r--      rw-r--r--    rw-r--r--
a+X    rw-r--r--      rw-r--r--    rwxr-xr-x
This would also work better on at least some versions of macOS where X is only recognised with + operations, not = . See Understanding UNIX permissions and file types for more context, and Combine find-chmod for directories and find-chmod for regular files for other approaches. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/710901",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/534795/"
]
} |
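A sketch reaching the same end state (755 directories, 644 for everything else) with find , for readers who find the X semantics too subtle:
find . -type d -exec chmod 755 {} +
find . ! -type d -exec chmod 644 {} +
Unlike the single chmod -R , this makes two passes, but each pass states its target permissions explicitly.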
710,917 | Is it possible to change the background color depending on which application is running currently? My vim theme background differs with the terminal but I don't want to change the background of my terminal permanently just because of vim. |  | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/710917",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/388466/"
]
} |
710,998 | Here is my file, which contains three columns separated by "\t"; the delimiter within the second column is a blank
1 a b tom
2 a b c sim
3 a mary
4 o l shey
5 c bob
I want to get a first file in which the second column contains multiple elements
1 a b tom
2 a b c sim
4 o l shey
and then I want to get a second file like this
1 a tom
1 b tom
2 a sim
2 b sim
2 c sim
4 o shey
4 l shey
Actually, I have tried
awk -F\\t 'BEGIN {OFS=FS} {n=split($2,aa,",");for (i=1;i<=n;i++) {$2=aa[i]; printf "%s\n" $0 }}'
but it looks like it didn't work. Can you give me some advice? Thanks. | $ awk 'NF>3' file
 1 a b tom
 2 a b c sim
 4 o l shey
$ awk -v OFS='\t' 'NF>3{for (i=2;i<NF;i++) print $1, $i, $NF}' file
1 a tom
1 b tom
2 a sim
2 b sim
2 c sim
4 o shey
4 l shey
As for why your code didn't work - the most obvious problem is that split($2,aa,",") is trying to split $2 at commas when you said and show that it's separated by blanks. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/710998",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/530936/"
]
} |
711,170 | GNU sort on Linux is not giving expected results on my csv file. Can you please help to resolve the issue?
Input file
[nscruser]$ cat cemp1.txt
10,30
50,900
20,1050
Objective
I need to do a numeric sort on the first field of the above file
[nscruser]$ sort -t',' -k1 -n cemp1.txt
10,30
50,900
20,1050
Expected output
But I expected the output as below, as I am doing a numeric sort on the first column
10,30
20,1050
50,900
Can someone please let me know why the discrepancy? | Looking at the man page of sort (from GNU coreutils 8.32),
-k, --key=KEYDEF
sort via a key; KEYDEF gives location and type
...
KEYDEF is F[.C][OPTS][,F[.C][OPTS]] for start and stop position, where F is a field number and C a character position in the field; both are origin 1, and the stop position defaults to the line's end. If neither -t nor -b is in effect, characters in a field are counted from the beginning of the preceding whitespace. OPTS is one or more single-letter ordering options [bdfgiMhnRrV], which override global ordering options for that key. If no key is given, use the entire line as the key. Use --debug to diagnose incorrect key usage.
First, you can use --debug as suggested,
$ sort -t',' -k1 -n --debug cemp1.txt
sort: text ordering performed using ‘en_IE.UTF-8’ sorting rules
sort: key 1 is numeric and spans multiple fields
10,30
_____
50,900
______
20,1050
_______
That gives us a clue: "key 1 is numeric and spans multiple fields". As the man page says, "the stop position defaults to the line's end". So you need to add a stop position:
$ sort -t',' -k1,1 -n cemp1.txt
10,30
20,1050
50,900 | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/711170",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/535046/"
]
} |
711,338 | Currently I'm using the following zsh snippet to select small batches of files for further processing
for f in $(ls /some/path/*.txt | head -2) ; do
    echo unpacking $f
    ./prepare.sh $f && rm -v $f
done
Is there a better alternative to $(ls ... | head -2) in zsh? General overview of my task: I'm creating a data set to train a neural network. Details of that ML task are not important here. The task of the dataset creation requires me to manually process a large bunch of files. To do it, I've copied them to a separate directory. Then I randomly choose several files (the first two from ls output in this example), call some preprocessing routine, review its results, move some of them to the data set being created and remove the rest. After this cleanup I again execute the command above. Additionally, I'd like to improve my skills in shell programming and learn something new :) The order in which these "first" files are chosen does not matter, since all of them will be processed in the end. In other words, I'm working together with the PC inside a for loop and want it to pause after several iterations and wait for me. Pseudocode:
for f in /some/path/*.txt ; do
    echo unpacking $f
    ./prepare $f
    if human wants to review ; then
        human is reviewing, then cleans, and PC waits
    fi
done
The reason for such a weird procedure is that preprocessing of one "source" .txt file creates several dozen other files; I need to view all of them and select a few samples (usually 1-2) suitable to train a network. I could run for f in /some/path/*.txt ; do ./prepare $f ; done but this command would create several hundred files, and that amount overwhelms. | Glob qualifiers
Glob qualifiers can replace most uses of ls or find to enumerate files. They're a unique feature of zsh. For example, $(ls /some/path/*.txt | head -2) (enumerate files in lexicographic order, keep only the first two files) is equivalent 1 to /some/path/*.txt(N[1,2]) in zsh. The N qualifier ensures that the list is empty if there are no matches, and the [ from , to ] qualifier limits the matches to the specified range. Without the N qualifier, under default options, your script will exit with an error message if there are no matching files. You can use the o or O qualifier to control the order of the files. For example, /some/path/*.txt(Nom[1,2]) takes the two newest files. 1 There are some slight differences, generally to zsh's benefit. Using ls tends to be problematic with file names containing special characters such as spaces or newlines or invalid byte sequences, whereas zsh's built-in features work reliably on all file names. Error management is different in corner cases. Here, as you forgot the -d option to ls , you'd also have problems if some of those *.txt files were of type directory, as ls would list their contents instead. I don't see how taking two files helps with your overall objective, though. If you want to have a way to process all files, but allow a human to review the first few, you could show a step/continue/abort prompt. Something like this:
pause=1
for f in /some/path/*.txt ; do
    print -ru2 unpacking $f
    ./prepare $f
    if ((pause)); then
        print -ru2 -- "$f output is ready for review."
        c=
        while [[ $c != [anq] ]]; do
            read -k1 "c?Process (N)ext, (A)ll, (Q)uit? " && c=${c:l}
        done
        echo
        case $c in
            a) pause=0;;
            q) break;;
        esac
    fi
done | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/711338",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/216325/"
]
} |
711,343 | How do I grep for a range of unicode characters? I've seen an example for one character: How to grep characters with their unicode value? I'm interested in a method other than the shell substitution method because shell substitution seems to be a bit limited, e.g. it doesn't seem to work for a non-graphical unicode character like the codepoint \u80. I can get that method to work for a range, but only to an extent, since it won't cover non-graphical characters like \u80 (unicode codepoint 80)
$ echo grep [$'\u41'-$'\u45']
grep [A-E]
$ echo 4142434445|xxd -r -p
ABCDE
$ echo 4142434445|xxd -r -p | grep [$'\u41'-$'\u45']
ABCDE
The $ method uses substitution at the shell level, so it won't work to e.g. look for characters from \u0080-\uFFFF or \u0080 upwards, 'cos if the shell can't display a character, it won't work. ugrep is available through apt-get for debian, though not for the ubuntu version I have on my VPS, and I have to test it some more. NOTE: Turns out the shell substitution method does work for control characters, so it would even work for a range of them or any unicode characters, as no doubt would ugrep. Initially, when I tried shell substitution with grep, I was unknowingly feeding in wrong bytes. E.g. echo 418042| xxd -r -p displays A▒B so I thought great, that worked, and I was trying to grep on that. So I was piping wrong data to grep: 80 is not utf-8 for \u80. An echo of high characters e.g. £ clearly shows it is outputting utf-8: echo £ | xxd -p shows c2a30a , and c2a3 is utf-8 for £. When I fed in correct bytes it worked, e.g. c280 is \u80, and even echo $'\u80' worked. This page is good for showing utf-8 mappings to unicode codepoints: https://www.utf8-chartable.de/ While shell substitution does work, I'm glad I have an answer that uses a method other than shell substitution, as having an alternative is good. | In gnu-grep and similar you can use the PCRE option -P and the \x{HHHH} syntax
$ grep -o -P '[\x{0410}-\x{042F}]+'    # same as: grep -o -P '[А-Я]+'
абвгдеёжзийклмнопрстуфхцчшщъыьэюяАБВГДЕ
=> АБВГДЕ | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/711343",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/9475/"
]
} |
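A sketch for the exact range asked about in the question, matching runs of anything outside ASCII up to U+FFFF (requires a grep built with PCRE support and UTF-8 input matching the locale):
grep -o -P '[\x{0080}-\x{FFFF}]+' file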
711,456 | Hello, I have 2 files, with the first file containing a few values, for example
powershell
vectormaps
JuniperSA
and the second file containing values and an ID
appid uid
SplunkforSnort 340
powershell 610
vectormaps 729
JuniperSA 826
postfix 933
SplunkforJuniperSRX 929
TA-barracuda_webfilter 952
TA-dragon-ips 954
dhcpd 392
So I'm trying to run a while loop with awk to get the values and their corresponding IDs, but the output file seems to contain something else. This is how I'm trying to run the while loop:
while read $line;do
awk '/'$line'/ {print $0}' file2.csv > new
done < file1
My expected output should be
powershell 610
vectormaps 729
JuniperSA 826
but my output is
appid uid
SplunkforSnort 340
powershell 610
vectormaps 729
JuniperSA 826
postfix 933
SplunkforJuniperSRX 929
TA-barracuda_webfilter 952
TA-dragon-ips 954
dhcpd 392
It seems as if nothing is happening. What am I missing here? | Using awk
$ awk 'FNR==NR {a[$1]=$2; next} {$(NF+1)=a[$1]}1' file2 file1
powershell 610
vectormaps 729
JuniperSA 826 | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/711456",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/469475/"
]
} |
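When the IDs only need to be matched verbatim rather than as regular expressions, a grep sketch prints the matching lines of file2 directly: -F treats the patterns as fixed strings, -w matches whole words, and -f reads the patterns from file1:
grep -Fwf file1 file2
Note that, like the original while loop, this would also match an ID appearing in the uid column; the awk answer above avoids that by keying on $1 only.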
711,595 | Is it possible to check for a file existence in a crontab oneliner, and only execute a script if that file existed? Pseudocode: * * * * * <if /tmp/signal.txt exists> run /opt/myscript.sh | Use an ordinary test for existence, then run the script if the test succeeds. * * * * * if [ -e /tmp/signal.txt ]; then /opt/myscript.sh; fi or * * * * * if test -e /tmp/signal.txt; then /opt/myscript.sh; fi Or, using the short-circuit syntax. Doing it this way would cause the job to fail if the file does not exist (which may trigger an email from the cron daemon): * * * * * [ -e /tmp/signal.txt ] && /opt/myscript.sh or * * * * * test -e /tmp/signal.txt && /opt/myscript.sh You could use the -f test instead of the -e test if you want to additionally ensure that /tmp/signal.txt is a regular file and not a directory, named pipe, or some other type of file. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/711595",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/249878/"
]
} |
711,610 | I am trying to write a script that will download, enable and add a task with cron, and then add auto update and upgrade tasks to the machine. What I have until now is
sudo apt install cron
sudo systemctl enable cron
Until here all good. Then I add (after some research) the following commands:
<(crontab -l) <(echo '50 19 * * * sudo apt update -y') | crontab -
<(crontab -l) <(echo '00 20 * * * sudo apt upgrade -y') | crontab -
When I check the file with crontab -l I see that the script did write the task like it should, but it's not running (I tried to run an apt install every minute to see if it's working). But when I write this command
50 19 * * 3 root sudo apt update -y
with nano in the file /etc/crontab , it worked. I tried to add root permission with crontab -e but it's still not working. Any solution? Is there a way to add a text line to /etc/crontab with a script? (I couldn't find a way online.) Thank you all! |  | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/711610",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/535113/"
]
} |
711,697 | I have a list of domains without subdomains in a text file. I need the TLD and SLD removed. Input
google.uk
example.com
amazon.co.uk
domain.ca.uk
education.edu.it
Expected output:
google
example
amazon
domain
education | You don't need awk or sed for this, this is the job that cut exists to do:
$ cut -d'.' -f1 file
google
example
amazon
domain
education | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/711697",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/533874/"
]
} |
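For a single value already in a shell variable, parameter expansion does the same truncation without starting a new process (a sketch; the variable name is hypothetical):
domain='amazon.co.uk'
printf '%s\n' "${domain%%.*}"    # prints: amazon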
711,705 | I need all 2-3 character words completely capitalized. 1 character and 4+ character words need to remain untouched. Input:
cat Example
Dog
I
Fish
su
Su adm
Amd Cat ignore
Expected output:
CAT Example
DOG
I
Fish
SU
SU ADM
AMD CAT ignore | Using GNU sed
$ sed -E 's/\<[[:alpha:]]{2,3}\>/\U&/g' input_file
CAT Example
DOG
I
Fish
SU
SU ADM
AMD CAT ignore | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/711705",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/533874/"
]
} |
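\U in the replacement is a GNU sed extension; where only perl is available, a sketch with the same effect:
perl -pe 's/\b[[:alpha:]]{2,3}\b/\U$&/g' input_file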
711,755 | I have a React app where some files are in .ts and some are in .tsx . Currently, in order to update the contents of both .ts and .tsx files, I have to run 2 separate commands:
internal-web main % cd src
src main % find . -name '*.tsx' -print0 | xargs -0 sed -i "" "s/SomeThing/SOME_THING/g"
src main % find . -name '*.ts' -print0 | xargs -0 sed -i "" "s/SomeThing/SOME_THING/g"
Is there a way I can combine this into one command so that I can update both .ts and .tsx files at once? | You can do:
find . \( -name '*.tsx' -o -name '*.ts' \) -print0 | xargs -0 sed -i "" "s/SomeThing/SOME_THING/g"
Or, simpler and more portable (and avoids a sed error if no file is found):
find . \( -name '*.tsx' -o -name '*.ts' \) -exec sed -i "" "s/SomeThing/SOME_THING/g" {} +
In any case, note that -i is not a standard option to sed . The sed -i "" usage suggests the FreeBSD implementation of sed (also found on macOS). Most other sed implementations either don't support -i or need the "" to be omitted. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/711755",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/44136/"
]
} |
711,806 | After setting password expiration via:
sudo chage -d 0 username
then changing the password and logging in as that user, when I type passwd and try to set the original password I receive the message "Password Policy - BAD PASSWORD: The password is just rotated old one". I've had a look in the following file but can't see a policy line item that could cause this behaviour:
sudo nano /etc/pam.d/common-password
# here are the per-package modules (the "Primary" block)
password        requisite                       pam_pwquality.so retry=3
password        [success=2 default=ignore]      pam_unix.so obscure use_authtok try_first_pass yescrypt
password        sufficient                      pam_sss.so use_authtok
# here's the fallback if no module succeeds
password        requisite                       pam_deny.so
# prime the stack with a positive return value if there isn't one already;
# this avoids us returning an error just because nothing sets a success code
# since the modules above will each just jump around
password        required                        pam_permit.so
# and here are more per-package modules (the "Additional" block)
password        optional        pam_gnome_keyring.so
password        optional        pam_ecryptfs.so
# end of pam-auth-update config
What is causing the "BAD PASSWORD: The password is just rotated old one" error message? Cheers! | pam_pwquality causes this, having an exact match for the error message:
case PWQ_ERROR_ROTATED:
    return _("The password is just rotated old one");
There does not appear to be an option to disable this feature via the pwquality.conf(5) configuration file. And even if pam_pwquality were disabled (probably a bad idea, attackers love it when passwords do not change, or do not change much), then pam_unix might also reject the password for reasons of its own. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/711806",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/535690/"
]
} |
711,812 | Consider a dir garbage containing many files and directories. If I run rm -rf garbage , but some files or directories within garbage are kept busy by the OS/NFS/etc., rm -rf will fail for them. Will it delete the rest? Or will it stop deleting upon the first failure? The current OS is Ubuntu 20.04, but it's of interest whether this behavior is standard, or depends on the (version of the) OS. | rm 's behaviour is specified in detail in POSIX (see also What is the actual sequence of steps during rm -Rf on a very large folder? which reproduces the full sequence). In all cases where it can't remove the current file or it doesn't exist, it must go on to any remaining files. This means that even in case of failure, it will continue deleting files (if it can). This applies both to files specified as arguments to rm -rf , and to files found while processing directories. Note that in some cases it will permanently wait for the result of an operation ( e.g. if it tries to remove a file on an NFS mount whose server isn't responding); it will end up in uninterruptible sleep (state D ), but if that state ever resolves, it will continue deleting files. It never actually stops deleting until it's run out of files to delete. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/711812",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/185188/"
]
} |
711,852 | I want to print a line from one file and use that as the input for a line replacement in another file.
sed -n '12p' FILE1 | sed -i '12c\ STDIN' FILE2
Is this possible? | I'm not aware of any implementation of sed that supports reading from standard input for the c command, but in GNU sed you could use the r command with the pseudo-filename /dev/stdin , followed by d to delete the original line:
... | sed -i '12{r /dev/stdin
d}' FILE2
or, more compactly, using -e to chain expressions
... | sed -i -e '12{r /dev/stdin' -e 'd;}' FILE2
The braces group the r and d commands so that they are both executed for the address 12 . You can't simply use 12{r /dev/stdin; d;} because the ; d;} will be parsed as part of the argument to r . | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/711852",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/1932/"
]
} |
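Combining this with the first command from the question gives the complete one-liner; a sketch assuming GNU sed and the hypothetical FILE1/FILE2 names from the question:
sed -n '12p' FILE1 | sed -i -e '12{r /dev/stdin' -e 'd;}' FILE2
Line 12 of FILE1 arrives on the second sed 's standard input and replaces line 12 of FILE2 in place.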
712,094 | Given this file:
$ cat fruits.json
[
  { "name": "apple" },
  { "name": "banana\nfofanna" },
  { "name": "my kiwi" }
]
how can one use jq to retrieve a list of fruit names which the shell can use as data, for example to populate an array? Something equivalent to the assignment to the fruits array below:
$ fruits=( 'apple'
'banana
fofanna'
'my kiwi'
)
$ for f in "${fruits[@]}" ;do echo "<$f>"; done
<apple>
<banana
fofanna>
<my kiwi>
None of the following work:
$ fruits=( $(jq -r ' .[].name ' fruits.json))
$ fruits=( $(jq -r ' .[].name | @sh ' fruits.json))
$ fruits=( $(jq ' .[].name | @sh ' fruits.json)) | The first two attempts at the end of the question do not work because they rely on the shell splitting the output of jq on whitespace. Since newline is one type of whitespace, you lose the newlines (and tabs and the original spaces) in the data. The attempts additionally fail to quote the string data, so you would get filename globbing happening if any string contained filename globbing characters. The last attempt fails for similar reasons but additionally does not properly decode the data, leaving encoded newlines in place. Using the built-in @sh operator in jq to quote the data and build an array assignment:
eval "$( jq -r '"fruits=(" + (map(.name)|@sh) + ")"' fruits.json )"
For the given JSON data, this would cause the following assignment to be evaluated by the current shell, creating the array fruits :
fruits=('apple' 'banana
fofanna' 'my kiwi')
After evaluating that statement,
$ printf '<%s>\n' "${fruits[@]}"
<apple>
<banana
fofanna>
<my kiwi>
As an alternative, the following would append each name element's value to the shell array:
$ jq -r '"fruits+=(" + (.[].name | @sh) + ")"' fruits.json
fruits+=('apple')
fruits+=('banana
fofanna')
fruits+=('my kiwi')
$ unset -v fruits
$ eval "$(jq -r '"fruits+=(" + (.[].name | @sh) + ")"' fruits.json)"
$ printf '<%s>\n' "${fruits[@]}"
<apple>
<banana
fofanna>
<my kiwi> | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/712094",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/42620/"
]
} |
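An alternative sketch that avoids eval altogether by NUL-delimiting the names (needs bash 4.4+ for mapfile -d ):
mapfile -d '' fruits < <(jq -j '.[].name + "\u0000"' fruits.json)
jq -j prints raw output without adding newlines, so each name, embedded newlines included, arrives intact and terminated by a NUL byte.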
712,115 | I was attempting to run a script in bash: cc.sh . To run the bash script:
../cc.sh ../seq/ seqf
where ../seq/ and seqf are both required arguments. I wanted to save the command I used to run the script (the command above), together with the output/result from the script, into a log file. I managed to print the output into a log file with the tee command:
../cc.sh ../seq/ seqf 2<&1 | tee -a cc.log
cat cc.log:
"here is where the output shown"
As for the command itself, the closest I got is the script command. However, it seemed to save the command I used in a binary file. Is there another, better way to record the command other than script ? Or better, print the command used to run the script file, together with the output/result, into a log file? Expected output for the log file:
COMMAND: ../cc.sh ../seq/ seqf
"here is where the output shown" |  | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/712115",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/345181/"
]
} |
712,126 | I had made a bash shell script which includes the while command, but when I run the script using the source command in the terminal, it gives a syntax error message. I need to use source because I have to set an environment variable in the terminal, which can't be used without source . echo $shell gives: /bin/csh . The shell is not interactive. The output of ps -p $$ gives CMD: tcsh . The script is:
#! /bin/bash
i=1
while read line
do
echo "$line $i"
echo
i=$((i+1))
done < seed.txt
The error is:
i=1: Command not found.
while: Expression Syntax. | This error?
$ tcsh
tcsh> source while.sh
i=1: Command not found.
while: Expression Syntax.
tcsh> exit
Csh/tcsh is a different shell than POSIX sh or Bash. Trying to run a script in sh syntax in (t)csh is not going to work. "I need to use source because I have to set the environment variable on the terminal, which can't be used without source ." Make it an actual exported environment variable with setenv :
tcsh> cat hello.sh
echo "hello, $name"
tcsh> bash hello.sh
hello,
tcsh> setenv name vikas
tcsh> bash hello.sh
hello, vikas | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/712126",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/536060/"
]
} |
712,142 | I want to install some software (Veeam) on my CentOS 7 virtual machine, but the thing is, all I have on that VM is the shell, so there's no GUI. How can I install software on this VM? I am a newbie coming into Linux and VMs. Do I have to download an ISO file and upload it to the VM or something of the like? If so, what are the steps I'd have to follow? |  | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/712142",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/532591/"
]
} |
712,272 | While downloading MySQL Workbench for my Debian-based Linux system, I encountered two packages. The first one is called mysql-workbench-community_8.0.30-1ubuntu22.04_amd64.deb while the second one is mysql-workbench-community-dbgsym_8.0.30-1ubuntu22.04_amd64.deb , the latter being larger in size. What are the differences between the two packages? Which one should I use? | The dbgsym package contains debug symbols. TLDR: Ignore the dbgsym package. Details: If your program (mysql in this case) is written in a compiled language such as C, C++, Go, Rust, etc., and converted into an executable, then debug information allows a certain amount of referencing locations and values from the executable back to the original source code. For example, it can say that bytes 300 through 312 and 340 through 356 came from line 127 of file src/main.c. It can also say that the variable foo is stored in register %r12 when the program counter is between 500 and 512. If the program crashes, perhaps due to doing a divide by zero, then being able to say that you were executing a particular source code line and using a particular variable can make things much simpler to get a fix. However, all of this information takes space. As most people are not going to want to debug programs such as mysql, distributions frequently split the debug information out into an additional package so that people who are not interested don't pay the costs of downloading and storing it. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/712272",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/504271/"
]
} |
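The split is done mechanically at package build time; a sketch of the underlying steps for a hypothetical self-built binary shows the idea:
gcc -g -o prog prog.c                          # compile with debug info
objcopy --only-keep-debug prog prog.debug      # extract it (this is what a -dbgsym package ships)
strip --strip-debug prog                       # drop it from the binary users install
objcopy --add-gnu-debuglink=prog.debug prog    # let gdb locate the detached file later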
712,302 | I'm getting nowhere with this. I simply want to make a bash script that loops 10 times and breaks if the output contains the string "connection successful". I tried something like ret=$? and then if [$ret -ne 0] but I kept getting the else statement happen, even if ret is 1. So now I'm trying to use grep to search for the words "connection successful", but I don't know the syntax for that. So I want something like:
for i in {1..10}
do
    ret=bluetoothctl connect 26:EE:F1:58:92:AF | grep "connection successful"
    if [$ret -ne ""]; then
        break
    fi
done
but obviously with the right syntax for ret=bluetoothctl connect 26:EE:F1:58:92:AF | grep "connection successful" . Any help would be greatly appreciated. | It is rarely necessary to store the output from grep when you just want to test whether some text matches a pattern. Instead, just invoke grep with its -q option and act on its exit status:
#!/bin/sh

tries=10

while [ "$tries" -gt 0 ]; do
    if bluetoothctl connect '26:EE:F1:58:92:AF' | grep -q 'connection successful'
    then
        break
    fi

    tries=$(( tries - 1 ))
done

if [ "$tries" -eq 0 ]; then
    echo 'failed to connect' >&2
    exit 1
fi
If bluetoothctl returns a sane exit status on failure and success, then you don't even need grep and can shorten the if -statement in the loop into the following:
if bluetoothctl connect '26:EE:F1:58:92:AF' >/dev/null
then
    break
fi
In fact, you might as well make bluetoothctl part of the loop condition (assuming you switch from a for loop to using a while loop like I'm showing here):
#!/bin/sh

tries=10

while [ "$tries" -gt 0 ] && ! bluetoothctl connect '26:EE:F1:58:92:AF'
do
    tries=$(( tries - 1 ))
done >/dev/null

if [ "$tries" -eq 0 ]; then
    echo 'failed to connect' >&2
    exit 1
fi
Consider using https://www.shellcheck.net to verify the syntax of your shell scripts. For the script in the question, it would point out that the test needs spaces within the [ ... ] :
if [$ret -ne ""]; then
   ^-- SC1009 (info): The mentioned syntax error was in this if expression.
   ^-- SC1035 (error): You need a space after the [ and before the ].
   ^-- SC1073 (error): Couldn't parse this test expression. Fix to allow more checks.
               ^-- SC1020 (error): You need a space before the ].
               ^-- SC1072 (error): Missing space before ].
Fix any mentioned problems and try again. Testing for a non-empty string is done with [ -n "$ret" ] or [ "$ret" != "" ] . Note that -ne is an arithmetic test. You also do not assign the output of the pipeline correctly to ret . Unfortunately, the syntax is absolutely correct but does something completely different, so ShellCheck would not pick up on it. What you intended to use was
ret=$( bluetoothctl ... | grep ... ) | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/712302",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/492275/"
]
} |
712,424 | In a script, I have a long list of commands that print to stdout. I want to hide all output. So instead of redirecting each command, I put exec >/dev/null at the beginning. What options do I have to "temporarily ignore" the general redirection and to have one echo call in between actually print to stdout? | Any echo would print to stdout. It's just that your stdout now points to /dev/null . The point being that the original stdout is in no way special, or more "true", than the stdout you have after a redirection. If you want to keep a copy of where stdout pointed to originally, you can duplicate the file descriptor to another number and send there any output you want to keep:
exec 3>&1          # duplicate original stdout to fd 3
exec 1>/dev/null   # send stdout to /dev/null
printf "what\n"            # this goes to stdout = /dev/null
printf "hello " >&3        # this goes to fd 3 = original stdout
# optionally:
exec 1>&3          # put the original stdout back
exec 3>&-          # close fd 3
printf "there\n"   # to current stdout = original stdout again
Or in Bash/ksh93/zsh with a dynamically allocated fd (I hope I got this right):
exec {orig}>&1     # duplicate original stdout to some fd,
                   # store number in $orig
exec 1>/dev/null   # send stdout to /dev/null
printf "what\n"              # this goes to stdout = /dev/null
printf "hello " >&"$orig"    # this goes to fd in $orig = original stdout
# optionally:
exec 1>&"$orig"    # put the original stdout back
exec {orig}>&-     # close fd in $orig
printf "there\n"   # to current stdout = original stdout again
In both cases, ksh93 will mark the fd (3 or $orig ) with the close-on-exec flag; bash / zsh won't, though with bash 5.2 or newer, you can do shopt -s varredir_close for the close-on-exec flag to be added to fds created with the exec {var}>... syntax. When that flag is not set, that means that fd 3 will leak to other commands. That could be a problem in practice if the original stdout was going to a pipe for instance and you run a command that starts a background process that could end up keeping that pipe open. Running cmd 3>&- for those can work around the problem. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/712424",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/491702/"
]
} |