Columns: source_id (int64, range 1 to 74.7M), question (string, lengths 0 to 40.2k), response (string, lengths 0 to 111k), metadata (dict)
502,208
I am trying to compare today's date with the last modified date from a file.

DATE=$(date +"%F")
LASTMOD=$(stat $i -c %y);
LASTMOD_DATE=$(cut -d' ' -f1 <<<"$LASTMOD")
if [ "$LASTMOD_DATE" -ge "$DATE" ]; then
    printf "%-19s | " "$DATE"
else
    printf "%-19s | " "NO RECENT MOD"
fi

Currently this does not compare them properly, and I think it's because LASTMOD_DATE is not actually a datetime, so I get the error: "integer expression expected".
You can use the timestamp format date +%s and the -r option:

-r, --reference=FILE
    display the last modification time of FILE

like

if [ $(date +%s -r file) -ge $(date +%s) ]; then
    # do something
fi
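Applied to the snippet in the question, a minimal sketch might look like the following (assumptions: GNU date, and $i loops over the files as in the original code; comparing against midnight today mimics the original date-only comparison):

NOW=$(date -d "$(date +%F)" +%s)    # midnight today, GNU date
for i in *; do
    if [ "$(date +%s -r "$i")" -ge "$NOW" ]; then
        printf "%-19s | " "$(date +%F)"
    else
        printf "%-19s | " "NO RECENT MOD"
    fi
done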
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/502208", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/338211/" ] }
502,257
I want to run an alias inside a bash -c construct. The bash manual says:

    Aliases are not expanded when the shell is not interactive, unless the expand_aliases shell option is set using shopt

In this example, why is the alias hi not found when setting expand_aliases explicitly?

% bash -O expand_aliases -c "alias hi='echo hello'; alias; shopt expand_aliases; hi"
alias hi='echo hello'
expand_aliases    on
bash: hi: command not found

I'm running GNU bash, version 5.0.0(1)-release (x86_64-pc-linux-gnu).

Context: I want to be able to run an alias at idle priority, e.g. a script containing:

#!/bin/bash
exec chrt -i 0 nice -n 19 ionice -c 3 bash -c ". ~/.config/bash/aliases; shopt -s expand_aliases; $(shell-quote "$@")"

I want to avoid using bash -i as I don't want my .bashrc to be read.
It doesn't seem to work if you set the alias on the same line as it's used. Probably something to do with how aliases are expanded really early in the command line processing, before the actual parsing stage. On an interactive shell:

$ alias foo
bash: alias: foo: not found
$ alias foo='echo foo'; foo    # 2
bash: foo: command not found
$ alias foo='echo bar'; foo    # 3
foo
$ foo
bar

Note how the alias used is one line late: on the second command it doesn't find the alias just set, and on the third command it uses the one that was previously set. So, it works if we put a newline within the -c string:

$ bash -c $'shopt -s expand_aliases; alias foo="echo foo";\n foo'
foo

(You could also use bash -O expand_aliases -c ... instead of using shopt within the script, not that it helps with the newline.)

Alternatively, you could use a shell function instead of an alias; they're much better in other ways, too:

$ bash -c 'foo() { echo foo; }; foo'
foo
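A sketch of the same newline trick applied to the idle-priority wrapper from the question (this assumes ~/.config/bash/aliases and the shell-quote helper exist, as in the question; note the literal newline inside the -c string, so the aliased command ends up on the line after the one that defines the aliases):

#!/bin/bash
exec chrt -i 0 nice -n 19 ionice -c 3 bash -c "shopt -s expand_aliases; . ~/.config/bash/aliases
$(shell-quote "$@")"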
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/502257", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/143394/" ] }
502,460
I'm confused by this experiment (in Bash):

$ mkdir 'foo\n'
$ find . -print0 | od -c
0000000   .  \0   .   /   f   o   o   \   n  \0
0000012

As you can see, "find" is correctly delimiting the output with null characters, but it escapes the newline in the directory name as "foo\n" with a backslash "n". Why is it doing this? I told it "-print0", which says "This allows file names that contain newlines ... to be correctly interpreted by programs that process the find output." The escaping should not be necessary, since "\0" is the delimiter, not "\n".
The problem is not in find, but in how you're creating this directory. The single-quoted string 'foo\n' is actually a 5-character string, of which the last two are a backslash and a lowercase "n". Double-quoting it doesn't help either, since double-quoted strings in shell use backslash as an escape character, but don't really interpret any of the C-style backslash sequences.

In a shell such as bash or zsh, etc. (but not dash from Debian/Ubuntu), you can use $'...', which interprets those sequences:

$ mkdir $'foo\n'

(See bash's documentation for this feature, called "ANSI-C Quoting").

Another option, that should work in any shell compatible with the Bourne shell, is to insert an actual newline:

$ mkdir 'foo
'

That's an actual Return at the end of the first line, only closing the single quote on the second line.
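To check that the directory name really contains a newline this time, the experiment from the question can be repeated (illustrative, in a fresh directory; od's spacing may differ):

$ mkdir $'foo\n'
$ find . -print0 | od -c
0000000   .  \0   .   /   f   o   o  \n  \0
0000011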
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/502460", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/118662/" ] }
502,492
When the file owner is part of several groups, how does ls -l decide which group to show? For example, on MacOS, I see

drwx------+  48 flow2k  staff         1536 Feb  5 10:11 Documents
drwxr-xr-x+ 958 flow2k  _lpoperator  30656 Feb 22 16:07 Downloads

Here the groups shown for the two directories are different (staff and _lpoperator); what is this based on? I am a member of both groups.
I think this question stems from a misunderstanding of how groups work. The groups listed in ls -l are not the group that the user is potentially in, but the group that the file is owned by. Each file is owned by a user and a group. Often, this user is in the group, but this is not necessary.

For example, my user is in the following groups:

$ groups
audio uucp sparhawk plugdev

but not in, say, the group cups. Now, let's create a file.

$ touch foo
$ ls -l foo
-rw-r--r-- 1 sparhawk sparhawk 0 Feb 23 21:01 foo

This is owned by the user sparhawk, and the primary group for me, which is also called sparhawk. Let's now change the group owner of the file.

$ sudo chown sparhawk:cups foo
changed ownership of 'foo' from sparhawk:sparhawk to sparhawk:cups
$ ls -l foo
-rw-r--r-- 1 sparhawk cups 0 Feb 23 21:01 foo

You can see that the group that now owns the file is not a group that I am in. This concept allows precise manipulation of file permissions. For example, you could create a group with members X, Y, and Z, and share files between the three of them. You could further give X write permissions, but only give the others (the group) read permissions.
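A rough sketch of that last scenario (the group, user and file names here are made up for illustration):

# create the shared group and add the members
sudo groupadd project
sudo usermod -aG project X    # likewise for Y and Z
# X owns the file and can write; group members can only read; others get nothing
sudo chown X:project shared.txt
sudo chmod 640 shared.txt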
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/502492", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/227169/" ] }
502,540
This error shows up every time I install Kali Linux, whenever I try to boot it. Then it disappears and the screen blacks out. The error is the following:

[drm:vmw_host_log [vmwgfx]] *ERROR* Failed to send host log message.

Here's also a screenshot of the error.
Try changing the display settings and check whether the error persists.
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/502540", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/338462/" ] }
502,659
I have a folder with -wx permissions called folder1, and another folder inside it called folder2 with rwx permissions. I tried to delete folder1 using this command:

rm -r folder1

But I got the following error:

rm: cannot remove 'folder1': Permission denied

The reason I think I got this error is that the rm program needs to first get the content of folder1 (that is, the names of the files and folders inside folder1) in order to be able to delete that content (because you can't delete a file or folder without knowing its name, I think), and only then can the rm program delete folder1 itself. But since folder1 doesn't have the read permission, the rm program can't get its content, and hence it can't delete its content, and since it can't delete its content, it can't delete folder1. Am I correct?
I think your analysis is correct: you cannot delete the directory since it's non-empty, and you cannot empty it since you cannot see its contents. I just gave it a try:

$ mkdir -p folder1/folder2
$ chmod -r folder1
$ rm -rf folder1
rm: cannot remove 'folder1': Permission denied
$ rmdir folder1/folder2
$ rm -rf folder1
$

When I wrote "you", I meant any program you may run. Your rm -r command first sees that folder1 is a directory, so it tries to discover its contents to empty it, but fails for missing read permission; then it tries to delete it but fails because it's non-empty. The "Permission denied" is misleading; I think "Directory not empty" (like rmdir reports) would be more appropriate.
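A simple way out, assuming you own folder1 and just want it gone, is to give yourself the read bit back first:

chmod u+r folder1    # rm can now list the contents again
rm -r folder1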
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/502659", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/231516/" ] }
502,693
Why is SMART overall-health self-assessment test result: PASSED being displayed when the two tests done failed?

sudo smartctl -a /dev/sdc

smartctl 6.6 2018-12-05 r4851 [x86_64-linux-4.14.98] (local build)
Copyright (C) 2002-17, Bruce Allen, Christian Franke, www.smartmontools.org

=== START OF INFORMATION SECTION ===
Model Family:     Western Digital AV-GP (AF)
Device Model:     WDC WD20EURS-63SPKY0
Serial Number:    WD-WMC1T2763021
LU WWN Device Id: 5 0014ee 6addb4b7c
Firmware Version: 80.00A80
User Capacity:    2,000,398,934,016 bytes [2.00 TB]
Sector Sizes:     512 bytes logical, 4096 bytes physical
Device is:        In smartctl database [for details use: -P show]
ATA Version is:   ACS-2 (minor revision not indicated)
SATA Version is:  SATA 3.0, 3.0 Gb/s (current: 3.0 Gb/s)
Local Time is:    Sun Feb 24 13:43:30 2019 GMT
SMART support is: Available - device has SMART capability.
SMART support is: Enabled

=== START OF READ SMART DATA SECTION ===
SMART overall-health self-assessment test result: PASSED

General SMART Values:
Offline data collection status:  (0x00) Offline data collection activity was never started.
                                        Auto Offline Data Collection: Disabled.
Self-test execution status:      ( 117) The previous self-test completed having the read element of the test failed.
Total time to complete Offline data collection: (27240) seconds.
Offline data collection capabilities:   (0x7b) SMART execute Offline immediate.
                                        Auto Offline data collection on/off support.
                                        Suspend Offline collection upon new command.
                                        Offline surface scan supported.
                                        Self-test supported.
                                        Conveyance Self-test supported.
                                        Selective Self-test supported.
SMART capabilities:            (0x0003) Saves SMART data before entering power-saving mode.
                                        Supports SMART auto save timer.
Error logging capability:        (0x01) Error logging supported.
                                        General Purpose Logging supported.
Short self-test routine recommended polling time:      (   2) minutes.
Extended self-test routine recommended polling time:   ( 275) minutes.
Conveyance self-test routine recommended polling time: (   5) minutes.
SCT capabilities:              (0x70b5) SCT Status supported.
                                        SCT Feature Control supported.
                                        SCT Data Table supported.

SMART Attributes Data Structure revision number: 16
Vendor Specific SMART Attributes with Thresholds:
ID# ATTRIBUTE_NAME          FLAG   VALUE WORST THRESH TYPE     UPDATED WHEN_FAILED RAW_VALUE
  1 Raw_Read_Error_Rate     0x002f 200   200   051    Pre-fail Always  -           0
  3 Spin_Up_Time            0x0027 180   179   021    Pre-fail Always  -           5991
  4 Start_Stop_Count        0x0032 100   100   000    Old_age  Always  -           113
  5 Reallocated_Sector_Ct   0x0033 200   200   140    Pre-fail Always  -           0
  7 Seek_Error_Rate         0x002e 200   200   000    Old_age  Always  -           0
  9 Power_On_Hours          0x0032 092   092   000    Old_age  Always  -           6354
 10 Spin_Retry_Count        0x0032 100   100   000    Old_age  Always  -           0
 11 Calibration_Retry_Count 0x0032 100   253   000    Old_age  Always  -           0
 12 Power_Cycle_Count       0x0032 100   100   000    Old_age  Always  -           56
192 Power-Off_Retract_Count 0x0032 200   200   000    Old_age  Always  -           46
193 Load_Cycle_Count        0x0032 200   200   000    Old_age  Always  -           66
194 Temperature_Celsius     0x0022 122   114   000    Old_age  Always  -           28
196 Reallocated_Event_Count 0x0032 200   200   000    Old_age  Always  -           0
197 Current_Pending_Sector  0x0032 200   200   000    Old_age  Always  -           0
198 Offline_Uncorrectable   0x0030 100   253   000    Old_age  Offline -           0
199 UDMA_CRC_Error_Count    0x0032 200   200   000    Old_age  Always  -           0
200 Multi_Zone_Error_Rate   0x0008 200   200   000    Old_age  Offline -           1

SMART Error Log Version: 1
No Errors Logged

SMART Self-test log structure revision number 1
Num  Test_Description    Status                   Remaining  LifeTime(hours)  LBA_of_first_error
# 1  Short offline       Completed: read failure        50%             6354  4377408
# 2  Extended offline    Completed: read failure        90%             6354  4377408

SMART Selective self-test log data structure revision number 1
 SPAN  MIN_LBA  MAX_LBA  CURRENT_TEST_STATUS
    1        0        0  Not_testing
    2        0        0  Not_testing
    3        0        0  Not_testing
    4        0        0  Not_testing
    5        0        0  Not_testing
Selective self-test flags (0x0):
  After scanning selected spans, do NOT read-scan remainder of disk.
If Selective self-test is pending on power-up, resume after 0 minute delay.
Because your SMART attributes are all in good shape and there were No Errors Logged.

Please read: ATA drive is failing self-tests, but SMART health status is 'PASSED'. What's going on?

If the drive fails a self-test, but still has 'PASSED' SMART health status, this usually means that there is a corrupted (uncorrectable=UNC) sector on the disk. This means that the ECC data stored at that sector is not consistent with the user data stored at that sector, and an attempt to read the sector fails with a UNC error. This can be a one-time transient effect: a sudden power failure while the disk was writing to the sector corrupted the ECC code or data, but the sector could correctly store new data. Or it can be a permanent effect: the magnetic media has been damaged by a bit of dust, and the sector could not correctly store new data.

If the disk can read the sector of data a single time, and the damage is permanent, not transient, then the disk firmware will mark the sector as 'bad' and allocate a spare sector to replace it. But if the disk can't read the sector even once, then it won't reallocate the sector, in hopes of being able, at some time in the future, to read the data from it. A write to an unreadable (corrupted) sector will fix the problem. If the damage is transient, then new consistent data will be written to the sector. If the damage is permanent, then the write will force sector reallocation. Please see the Bad block HOWTO for instructions about how to force this sector to reallocate (Linux only).

The disk still has passing health status because the firmware has not found other signs of trouble, such as a failing servo. Such disks can often be repaired by using the disk manufacturer's 'disk evaluation and repair' utility. Beware: this may force reallocation of the lost sector and thus corrupt or destroy any file system on the disk. See the Bad block HOWTO for generic Linux instructions.

You can try to fix your unreadable sector, either with dd or with some kind of "repair utility". Back up your drive first!
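A heavily hedged sketch of that "write to the unreadable sector" step, assuming hdparm is available and that the sector in question is the LBA reported in the self-test log above (4377408). This destroys whatever was stored in that sector and can corrupt the filesystem on top of it, so back up first:

# confirm the sector really is unreadable (this should fail with an I/O error)
sudo hdparm --read-sector 4377408 /dev/sdc
# overwrite it with zeros, forcing the drive to remap it if the damage is permanent
sudo hdparm --write-sector 4377408 --yes-i-know-what-i-am-doing /dev/sdc
# then re-run a short self-test and check the log again
sudo smartctl -t short /dev/sdc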
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/502693", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/124109/" ] }
502,826
Quite often we see that the file we are trying to save in vim after editing is reported to be read-only. The way around this is to use :wq!. I am trying to figure out what goes on internally that allows the vim program to gain enough permission to write the read-only file. Is there an internal flag which is switched, or does vim temporarily gain the privileges for some time?
When you do w! in Vim, what actually happens depends on who owns the file.

If you (the current user) are the owner of the file, Vim will change the permissions to be writable before rewriting the file. It then removes the write permissions to restore the permission bits to what they were from the start.

If you are not the owner of the file, but you have write permissions in the current directory, Vim will delete the original file and write the document to a new file with the same name. The new file will then be assigned the same permissions as the original file, but will be owned by you.

At no time does Vim gain elevated privileges to be able to write to the file.

The mechanics described above are the available options that any program that needs to write to a read-only file has to choose from (i.e. either temporarily change the permissions while writing to the file, or delete the file and create a new one), and what Vim ends up choosing to do may in the end depend on a number of configurable settings.

As seen in comments below, there is some confusion about the above. If you want to see for yourself what actually happens with your setup of Vim on your particular brand of Unix, I'd recommend tracing the system calls that Vim does while writing to a read-only file. How this is done depends on what Unix you are using. On Linux, this is likely done through e.g. strace vim file (then editing the file, saving it with w! and exiting).

This is the first case (output from ktrace + kdump on OpenBSD):

13228 vim      CALL  chmod(0x19b1d94b4b10,0100644<S_IRUSR|S_IWUSR|S_IRGRP|S_IROTH|S_IFREG>)
13228 vim      NAMI  "file"
13228 vim      RET   chmod 0
13228 vim      CALL  lseek(3,0x1000,SEEK_SET)
13228 vim      RET   lseek 4096/0x1000
13228 vim      CALL  write(3,0x19b1e0aa9000,0x1000)

This changes permissions on the file so that it's writable (the S_IWUSR flag used with chmod()) and writes the buffer to it. It then sets the original permissions:

13228 vim      CALL  fchmod(4,0100444<S_IRUSR|S_IRGRP|S_IROTH|S_IFREG>)
13228 vim      RET   fchmod 0
13228 vim      CALL  close(4)
13228 vim      RET   close 0

For the other case: it first unlinks (deletes) the file and then recreates it (before writing to the file and changing permissions later):

44487 vim      CALL  unlink(0x79fdbc1f000)
44487 vim      NAMI  "file"
44487 vim      RET   unlink 0
44487 vim      CALL  open(0x79fdbc1f000,0x201<O_WRONLY|O_CREAT>,0644<S_IRUSR|S_IWUSR|S_IRGRP|S_IROTH>)
44487 vim      NAMI  "file"
44487 vim      RET   open 4
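On Linux, a narrower trace along those lines might look like this (a sketch; the exact set of system calls Vim makes can vary with version and settings):

strace -f -o vim.trace -e trace=chmod,fchmod,unlink,openat,rename vim file
# edit, save with w!, quit, then inspect what was recorded
grep -E 'chmod|unlink|openat|rename' vim.trace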
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/502826", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/73029/" ] }
502,857
I need this for a unit test. There's a function that does lstat on the file path passed as its parameter. I have to trigger the code path where the lstat fails (because the code coverage has to reach 90%).

The test can run only under a single user, therefore I was wondering if there's a file in Ubuntu that always exists, but normal users have no read access to it, or to its folder. (So lstat would fail on it unless executed as root.) A non-existent file is not a solution, because there's a separate code path for that, which I'm already triggering.

EDIT: Lack of read access to the file only is not enough. With that, lstat can still be executed. I was able to trigger it (on my local machine, where I have root access) by creating a folder in /root, and a file in it, and setting permission 700 on the folder. So I'm searching for a file that is in a folder that is only accessible by root.
On modern Linux systems, you should be able to use /proc/1/fdinfo/0 (information for file descriptor 0 (stdin) of the process with id 1 (init in the root pid namespace, which should be running as root)).

You can find a list with (as a normal user):

sudo find /etc /dev /sys /proc -type f -print0 |
  perl -l -0ne 'print unless lstat'

(remove -type f if you don't want to restrict to regular files).

/var/cache/ldconfig/aux-cache is another potential candidate if you only need to consider Ubuntu systems. It should work on most GNU systems, as /var/cache/ldconfig is created read+write+searchable to root only by the ldconfig command that comes with the GNU libc.
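A quick way to confirm that a candidate path behaves as intended on a given machine (the exact error wording may differ):

stat /proc/1/fdinfo/0         # as a normal user: "Permission denied", i.e. lstat fails
sudo stat /proc/1/fdinfo/0    # as root it succeeds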
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/502857", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/195913/" ] }
502,979
Under the /home folder we have many subfolders, like the following:

/home/user1
/home/user2/user_sub_2
/home/user3/user_sub_3/info_sub
/home/user4/INFO_FOLDER

We want to copy all the files under /home, recursively, to the /tmp/calculation folder. All files should be placed directly into the target folder (no subdirectories should be created). If two or more files have the same name, then the most recently modified file should be copied to /tmp/calculation. What is the right approach to do this action?
find /home ! -type d -exec bash -c '
    for pathname do
        if [ "$pathname" -nt "/tmp/calculation/${pathname##*/}" ]
        then
            cp "$pathname" /tmp/calculation
        fi
    done' bash {} +

This would find all non-directory files under /home, and for batches of these it would call a short bash script. The short bash script would loop over the current batch of pathnames, and for each would test with the -nt test whether the current file is newer than the copy in the target directory (or whether a copy does not exist there). If the file in the target directory is older or if it does not exist, cp is used to copy the current file to the target directory.

The parameter expansion ${pathname##*/} would remove any directory path before the actual filename, leaving only the filename portion of the pathname. It could be replaced by $(basename "$pathname").

Related: Understanding the -exec option of `find`

Mostly unrelated: The -nt test is a non-standard test. This is why I chose to use bash for the internal script that find calls. Using sh -c instead of bash -c would probably have worked, but the semantics of the test may differ slightly between shells that may masquerade as sh. For example, in the bash, zsh and ksh shells, the -nt test is true if the first operand has a modification timestamp that is newer than that of the second operand, or if the second operand does not exist. In the dash shell, however, both files must exist and the first file has to be newer than the second for the test to be true (according to the documentation). This difference would not have been an issue in this case. In the yash shell it's not specified in the manual what happens if either file does not exist. It is therefore safest to use a specific shell when using a non-standard facility, even if it, in this specific case, would probably have worked with sh -c anyway. (The downside with using bash in this instance is that it only has a one-second resolution in the timestamps that it compares, but that's another story.)
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/502979", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/237298/" ] }
503,127
I'm trying to get a JSON document with the top 5 processes by memory. This JSON I want to send to Zabbix and draw the top 5 processes by memory. I get the top 5 processes by memory with the following command:

ps axho comm --sort -rss | head -5
node
mongod
kubelet
dockerd
systemd-journal

How to convert the output of ps + head to JSON with key {#PROCNAME} to get this structure:

{
  "data": [
    { "{#PROCNAME}": "node" },
    { "{#PROCNAME}": "mongod" },
    { "{#PROCNAME}": "kubelet" },
    { "{#PROCNAME}": "dockerd" },
    { "{#PROCNAME}": "systemd-journal" }
  ]
}

https://www.zabbix.com/documentation/current/manual/config/macros/lld_macros

There is a type of macro used within the low-level discovery (LLD) function: {#MACRO}
If your jq has the inputs function, and assuming {#PROCNAME} is just a string, you can use the following:

ps axho comm --sort -rss | head -5 | jq -Rn '{data: [inputs|{"{#PROCNAME}":.}]}'

The inputs function lets jq read all the input strings. The rest is decoration to get the wanted format. The option -R makes jq read raw strings as input. The option -n feeds jq a null input; that way inputs gets all the strings at once.
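For a quick check without ps, the same filter can be fed the sample names from the question; this should reproduce the JSON structure shown there (jq's default pretty-printing spreads each object over several lines):

printf '%s\n' node mongod kubelet dockerd systemd-journal | jq -Rn '{data: [inputs|{"{#PROCNAME}":.}]}'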
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/503127", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/265661/" ] }
503,161
I'd like ripgrep to search only paths matching a specified pattern, e.g.

rg PATTERN --path REGEX

where PATTERN is the pattern to grep and REGEX is the path-matching pattern. I have searched through the documentation and I am unsure if this functionality is baked in.
Use the -g/--glob flag, as documented in the guide. It uses globbing instead of regexes, but accomplishes the same thing in practice. For example:

rg PM_RESUME -g '*.h'

finds occurrences of PM_RESUME only in C header files in my checkout of the Linux kernel.

ripgrep provides no way to use a regex to match file paths. Instead, you should use xargs if you absolutely need to use a regex:

rg --files -0 | rg '.*\.h$' --null-data | xargs -0 rg PM_RESUME

Breaking it down:

- rg --files -0 prints all of the files it would search, on stdout, delimited by NUL.
- rg '.*\.h$' --null-data only matches lines from the file list that end with .h. --null-data ensures that we retain our NUL bytes.
- xargs -0 rg PM_RESUME splits the arguments that are NUL delimited, and hands them to ripgrep, which precisely corresponds to the list of files matching your initial regex.

Handling NUL bytes is necessary for full correctness. If you don't have whitespace in your file paths, then the command is simpler:

rg --files | rg '.*\.h$' | xargs rg PM_RESUME
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/503161", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/64547/" ] }
503,184
Before I lend my laptop for a while to my young and linux-savvy nephew, I want to make sure he's not able to carve into my personal data in the blank space of the drive. I have saturated the blank space in the drive several times with

sudo cat /dev/urandom > some-file

Note the use of sudo, so that the 5% of blank space reserved for root is ignored and the file grows until there is an error. However, when I execute photorec on that partition, hundreds of old files pop out into existence.

So, at least out of curiosity, where are those files stored and why does the random noise not overwrite them? (The only explanation I have so far is that they might be in the empty space between the end of a file and the end of the sector that contains it. Could that be?)
There may be several misunderstandings here, so the command does not do what you perhaps expect it to.

sudo is superfluous since you don't need sudo to read from /dev/urandom. The > some-file part is a shell redirection and thus not covered by sudo at all. So your sudo is super ineffective. (Note: in this particular case, sudo might work as intended regardless, see comments. However, not using sudo this way is a pattern, as it bites you in other cases.)

Then, you're writing into a regular file. That does fill up free space - of the filesystem that file happens to reside on. If you have multiple filesystems (one for /, one for /home, boot and swap partitions, etc.) then those are unaffected.

At best this only overwrites free space. There is no guarantee that it will cover everything (it depends on filesystem internals, root reserve, journal, otherwise packed/reserved/etc. sectors), and it does not overwrite any file that is still there regularly (and those can include files hidden away in trashcan / thumbnail / cache folders or just some subdirectory you forgot about). All of those will still be picked up by photorec since they are never overwritten.

Furthermore, writing this file has to be completed first. So instead of deleting it directly afterwards, you'd have to sync first to make sure all that random data actually hit the disk, and not just some RAM write buffer from which it never gets written.

So with this method, there is no guarantee for anything. At the same time it's dangerous, as the filesystem will run out of free space, which in turn can cause write failures for all other programs and thus result in unintentional data loss.
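A rough sketch of the per-filesystem approach implied above (assumptions: /home is one of the filesystems you care about and fillfile is a scratch name; repeat for each mounted filesystem, and expect other programs to misbehave while the filesystem is full):

# run as root so the root-reserved blocks get filled as well; dd stops when the filesystem is full
dd if=/dev/urandom of=/home/fillfile bs=1M
sync              # make sure the random data actually reached the disk
rm /home/fillfile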
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/503184", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/104233/" ] }
503,241
What does the concept of disk label mean? Does it mean the same as partition table type (MBR, GPT, loop, etc.)? (as I suspected from the following output of parted, and in my previous post) Or does it mean a name given to a disk? Thanks.

$ sudo parted -l
Model: ATA TOSHIBA MQ01ABF0 (scsi)
Disk /dev/sda: 500GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags:

Number  Start   End    Size   File system  Name                  Flags
 1      1049kB  538MB  537MB  fat32        EFI System Partition  boot, esp
 2      538MB   500GB  500GB                                     lvm

Model: Linux device-mapper (linear) (dm)
Disk /dev/mapper/lubuntu--vg-swap: 4295MB
Sector size (logical/physical): 512B/512B
Partition Table: loop
Disk Flags:

Number  Start  End     Size    File system     Flags
 1      0.00B  4295MB  4295MB  linux-swap(v1)

Error: /dev/mapper/lubuntu--vg-home: unrecognised disk label
Model: Linux device-mapper (linear) (dm)
Disk /dev/mapper/lubuntu--vg-home: 444GB
Sector size (logical/physical): 512B/512B
Partition Table: unknown
Disk Flags:

Model: Linux device-mapper (linear) (dm)
Disk /dev/mapper/lubuntu--vg-root: 51.5GB
Sector size (logical/physical): 512B/512B
Partition Table: loop
Disk Flags:

Number  Start  End     Size    File system  Flags
 1      0.00B  51.5GB  51.5GB  ext4
Yes, it's confusing:

1. There's the label inside partitions (more correctly, inside filesystems), just called LABEL by lsblk -f. [On all disks, but not for special partitions like swap, procfs, sysfs]
2. There's the label outside partitions but in the partition table, called PARTLABEL by lsblk -f. [Only gpt disks have this capacity]
3. There's the label outermost, which as you rightly suspect is more usually called 'partition table'. This last terminology is more used in other Unix cultures, e.g. OpenBSD, Oracle and BSD. Unfortunately the 'unrecognised disk label' that you've stumbled on seems to be this case.

Some etymology/history

Early filesystems did not agree on labels or even having labels. Also, remaking a filesystem would lose the (FS) label. So a level of label outside FSes but inside the partition table was added in gpt disks. If we were starting over, the PARTLABEL would be called LABEL and the (old-fashioned) LABEL maybe InternalLabel or Docu or something else or absent altogether. We don't have that luxury because

- Facts of history are not negotiable (most of us don't have access to a time machine!)
- Many of us are still using the old (MBR) hardware right now

Nevertheless, labeling a bottle inside the bottle is confusing. In the case of the outermost label, treat it as closer to the English word "format" than "label", i.e. you buy a new disk and prepare it for use by the OS. Nowadays we say: format the disk. Earlier *nixers said: label the disk.

Why Confusion

Every Linux user (or at least Linux user who administers his own machine) needs to deal with 4 levels, which can be confusing enough!

1. Hardware disk
2. Partition table (gpt table)
3. Partitions
4. File systems

Each level n+1 nests inside the level n above. By using LVs you are adding more level(s), which can be levels of confusion.

My Friendly Advice

Until you get the above, don't use LVs.
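To see the three levels side by side on a given machine, something like this should work (the PARTLABEL and PTTYPE columns assume a reasonably recent util-linux):

# filesystem label, GPT partition label, and partition-table type (the outermost "disk label")
lsblk -o NAME,FSTYPE,LABEL,PARTLABEL,PTTYPE
# parted reports the same outermost level as "Partition Table"
sudo parted -l | grep 'Partition Table'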
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/503241", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/674/" ] }
503,303
We usually use $@ to represent all of the arguments except $0. However, I don't know what data structure $@ is. Why does it behave differently from $* when included in double quotes? Could anyone give me an interpreter-level explanation?

It can be iterated in a for loop, so it seems to be an array. However, it can also be echoed entirely with a simple echo $@; if it were an array, only the first element would be shown. Due to the limitations of the shell, I cannot write more experiment code to find out.

Difference from this post: that post shows how $@ behaves differently from $*. But I am wondering about the data type of $@. The shell, as an interpreted language like Python, should represent data according to a series of fundamental types. Or in other words, I want to know how $@ is stored in computer memory. Is it a string, a multi-line string or an array? If it is a unique data type, is it possible to define a custom variable as an instance of this type?
That started as a hack in the Bourne shell. In the Bourne shell, IFS word splitting was done (after tokenisation) on all words in list context (command line arguments or the words the for loops loop on). If you had:

IFS=i var=file2.txt
edit file.txt $var

That second line would be tokenised in 3 words, $var would be expanded, and split+glob would be done on all three words, so you would end up running ed with t, f, le.txt, f, le2.txt as arguments. Quoting parts of that would prevent the split+glob. The Bourne shell initially remembered which characters were quoted by setting the 8th bit on them internally (that changed later when Unix became 8bit clean, but the shell still did something similar to remember which byte was quoted).

Both $* and $@ were the concatenation of the positional parameters with space in-between. But there was a special processing of $@ when inside double-quotes. If $1 contained foo bar and $2 contained baz, "$@" would expand to:

foo bar baz
^^^^^^^ ^^^

(with the ^s above indicating which of the characters have the 8th bit set). Where the first space was quoted (had the 8th bit set) but not the second one (the one added in-between words). And it's the IFS splitting that takes care of separating the arguments (assuming the space character is in $IFS as it is by default). That's similar to how $* was expanded in its predecessor the Mashey shell (itself based on the Thomson shell, while the Bourne shell was written from scratch).

That explains why in the Bourne shell initially "$@" would expand to the empty string instead of nothing at all when the list of positional parameters was empty (you had to work around it with ${1+"$@"}), why it didn't keep the empty positional parameters and why "$@" didn't work when $IFS didn't contain the space character. The intention was to be able to pass the list of arguments verbatim to another command, but that didn't work properly for the empty list, for empty elements or when $IFS didn't contain space (the first two issues were eventually fixed in later versions).

The Korn shell (on which the POSIX spec is based) changed that behaviour in a few ways:

- IFS splitting is only done on the result of unquoted expansions (not on literal words like edit or file.txt in the example above)
- $* and $@ are joined with the first character of $IFS, or space when $IFS is empty, except that for a quoted "$@", that joiner is unquoted like in the Bourne shell, and for a quoted "$*" when IFS is empty, the positional parameters are appended without separator.
- it added support for arrays, with ${array[@]} and ${array[*]} reminiscent of Bourne's $* and $@ but starting at index 0 instead of 1, and sparse (more like associative arrays), which means $@ cannot really be treated as a ksh array (compare with csh/rc/zsh/fish/yash where $argv/$* are normal arrays).
- The empty elements are preserved.
- "$@" when $# is 0 now expands to nothing instead of the empty string, and "$@" works when $IFS doesn't contain spaces, except when IFS is empty.
- An unquoted $* without wildcards expands to one argument (where the positional parameters are joined with space) when $IFS is empty.

ksh93 fixed the remaining few problems above. In ksh93, $* and $@ expand to the list of positional parameters, separated regardless of the value of $IFS, and then further split+globbed+brace-expanded in list contexts; $* is joined with the first byte (not character) of $IFS; "$@" in list contexts expands to the list of positional parameters, regardless of the value of $IFS. In non-list context, like in var=$@, $@ is joined with space regardless of the value of $IFS.

bash's arrays are designed after the ksh ones. The differences are:

- no brace-expansion upon unquoted expansion
- first character of $IFS instead of first byte
- some corner case differences like the expansion of $* when non-quoted in non-list context when $IFS is empty.

While the POSIX spec used to be pretty vague, it now more or less specifies the bash behaviour.

It's different from normal arrays in ksh or bash in that:

- Indices start at 1 instead of 0 (except in "${@:0}" which includes $0 (not a positional parameter, and in functions gives you the name of the function or not depending on the shell and how the function was defined)).
- You can't assign elements individually
- it's not sparse, you can't unset elements individually
- shift can be used.

In zsh or yash, where arrays are normal arrays (not sparse, indices start at one like in all other shells but ksh/bash), $* is treated as a normal array. zsh has $argv as an alias for it (for compatibility with csh). $* is the same as $argv or ${argv[*]} (arguments joined with the first character of $IFS but still separated out in list contexts). "$@", like "${argv[@]}" or "${*[@]}", undergoes the Korn-style special processing.
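A short demonstration of the practical difference in bash, showing how "$@" keeps the parameters separate while "$*" joins them with the first character of $IFS:

set -- "foo bar" baz
IFS=/
printf '<%s>\n' "$@"    # two arguments: <foo bar> then <baz>
printf '<%s>\n' "$*"    # one argument:  <foo bar/baz>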
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/503303", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/299590/" ] }
503,312
I have an array of users who need to just upload files to their set home directories. I think sftp would suffice, but I don't want them to log in via a shell. So is it possible? My platform is CentOS 7, and the users' home dirs are stored in, let's say, /personal/$user. I created a user with these settings:

useradd -m -d /personal/user1 -s /sbin/nologin

I assigned the user a password; then when I use sftp to log in to the machine, it says it cannot connect.
I like the following setup for managing SSH access, which I've used to manage a group of users on small fleets of servers. Security and ease of management is high on the list of my priorities. Its key features are easily managing SSH rights through Unix group membership, having tightly defined permissions, and being secure by default.

Setting up

Install software (optional but useful):

yum install members    # or: apt install members

Add groups:

addgroup --system allowssh
addgroup --system sftponly

In /etc/ssh/sshd_config, ensure that the following settings are set to no:

PermitRootLogin no
PubkeyAuthentication no
PasswordAuthentication no

And at the end of /etc/ssh/sshd_config, add these two stanzas:

Match Group allowssh
    PubkeyAuthentication yes

Match Group sftponly
    ChrootDirectory %h
    DisableForwarding yes
    ForceCommand internal-sftp

(don't forget to restart SSH after editing the file)

Explanation

So, what does all this do?

- It always disables root logins, as an extra security measure.
- It always disables password-based logins (weak passwords are a big risk for servers running sshd).
- It only allows (pubkey) login for users in the allowssh group.
- Users in the sftponly group cannot get a shell over SSH, only SFTP.

Managing who has access is then simply done by managing group membership (group membership changes take effect immediately, no SSH restart required; but note that existing sessions are not affected). members allowssh will show all users that are allowed to log in over SSH, and members sftponly will show all users that are limited to SFTP.

# adduser marcelm allowssh
# members allowssh
marcelm
# deluser marcelm allowssh
# members allowssh
#

Note that your sftp users need to be members of both sftponly (to ensure they won't get a shell), and of allowssh (to allow login in the first place).

Further information

Please note that this configuration does not allow password logins; all accounts need to use public key authentication. This is probably the single biggest security win you can get with SSH, so I argue it's worth the effort even if you have to start now.

If you really don't want this, then also add PasswordAuthentication yes to the Match Group allowssh stanza. This will allow both pubkey and password auth for allowssh users. Alternatively, you can add another group (and Match Group stanza) to selectively grant users password-based logins.

This configuration limits any sftponly user to their home directory. If you do not want that, remove the ChrootDirectory %h directive. If you do want the chrooting to work, it's important that the user's home directory (and any directory above it) is owned by root:root and not writable by group/other. It's OK for subdirectories of the home directory to be user-owned and/or writable. Yes, the user's home directory must be root-owned and unwritable to the user. Sadly, there are good reasons for this limitation. Depending on your situation, ChrootDirectory /home might be a good alternative.

Setting the shell of the sftponly users to /sbin/nologin is neither necessary nor harmful for this solution, because SSH's ForceCommand internal-sftp overrides the user's shell. Using /sbin/nologin may be helpful to stop them logging in via other ways (physical console, samba, etc.) though.

This setup does not allow direct root logins over SSH; this forms an extra layer of security. If you really do need direct root logins, change the PermitRootLogin directive. Consider setting it to forced-commands-only, prohibit-password, and (as a last resort) yes.

For bonus points, have a look at restricting who can su to root; add a system group called wheel, and add/enable auth required pam_wheel.so in /etc/pam.d/su.
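Applied to the setup in the question (CentOS 7, home directories under /personal), a rough sketch of creating one upload-only user could look like this; the group names follow the scheme above, and the chown/chmod lines are only needed if you keep the ChrootDirectory %h restriction:

groupadd --system allowssh
groupadd --system sftponly
useradd -m -d /personal/user1 -s /sbin/nologin user1
usermod -aG allowssh,sftponly user1
# for the chroot: the home directory itself must be root-owned and not group/other-writable
chown root:root /personal/user1
chmod 755 /personal/user1
mkdir /personal/user1/uploads && chown user1:user1 /personal/user1/uploads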
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/503312", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/117409/" ] }
503,320
I have a wonderful ISP blocking all UDP traffic (except DNS to its own DNS servers). However, I want to use UDP for my VPN solution. I have root on both VPN endpoints, and both of them are running Linux. My idea is to simply overwrite the protocol field in my outgoing UDP packets so they look like TCP, and do the reverse on the server side. Thus, the routers/firewall of my wonderful ISP will see bad TCP packets, while my VPN processes will be able to communicate over UDP. I strongly suspect that the firewall of the ISP is not smart enough to detect that something is not okay. Of course it would be a dirty trick, but not more dirty than simply forbidding the second most used IP protocol and selling this as an ordinary internet connection. As far as I know, there are some iptables rules for that, but which ones?
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/503320", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/52236/" ] }
503,502
I know that Linux is available and has been ported to many different platforms such as x86, ARM, PowerPC, etc. However, in terms of porting, what is required exactly? My understanding is that Linux is software written in C. Therefore, when porting Linux originally from x86 to ARM or others, for example, is it not just a matter of re-compiling the code with the compiler for the specific target architecture? Putting device drivers for different peripherals aside, what else would need to be done when porting Linux to a new architecture? Does the compiler not take care of everything for us?
Even though most of the code in the Linux kernel is written in C, there are still many parts of that code that are very specific to the platform where it's running and need to account for that.

One particular example of this is virtual memory, which works in similar fashion on most architectures (a hierarchy of page tables) but has specific details for each architecture (such as the number of levels in each architecture, and this has been increasing even on x86 with the introduction of new larger chips). The Linux kernel code introduces macros to handle traversing these hierarchies that can be elided by the compiler on architectures which have fewer levels of page tables (so that code is written in C, but takes details of the architecture into consideration).

Many other areas are very specific to each architecture and need to be handled with arch-specific code. Most of these involve code in assembly language, though. Examples are:

Context Switching: Context switching involves saving the value of all registers for the process being switched out and restoring the registers from the saved set of the process scheduled into the CPU. Even the number and set of registers is very specific to each architecture. This code is typically implemented in assembly, to allow full access to the registers and also to make sure it runs as fast as possible, since the performance of context switching can be critical to the system.

System Calls: The mechanism by which userspace code can trigger a system call is usually specific to the architecture (and sometimes even to the specific CPU model; for instance, Intel and AMD introduced different instructions for that, older CPUs might lack those instructions, so details for those will still be unique).

Interrupt Handlers: Details of how to handle interrupts (hardware interrupts) are usually platform-specific and usually require some assembly-level glue to handle the specific calling conventions in use for the platform. Also, primitives for enabling/disabling interrupts are usually platform-specific and require assembly code as well.

Initialization: Details of how initialization should happen also usually include details that are specific to the platform and often require some assembly code to handle the entry point to the kernel. On platforms that have multiple CPUs (SMP), details on how to bring other CPUs online are usually platform-specific as well.

Locking Primitives: Implementation of locking primitives (such as spinlocks) usually involves platform-specific details as well, since some architectures provide (or prefer) different CPU instructions to efficiently implement those. Some will implement atomic operations, some will provide a cmpxchg that can atomically test/update (but fail if another writer got in first), others will include a "lock" modifier to CPU instructions. These will often involve writing assembly code as well.

There are probably other areas where platform- or architecture-specific code is needed in a kernel (or, specifically, in the Linux kernel). Looking at the kernel source tree, there are architecture-specific subtrees under arch/ (with the architecture-specific headers under arch/*/include/) where you can find more examples of this. Some are actually surprising; for instance, you'll see that the number of system calls available on each architecture is distinct and some system calls will exist in some architectures and not others. (Even on x86, the list of syscalls differs between a 32-bit and a 64-bit kernel.)

In short, there are plenty of cases a kernel needs to be aware of that are specific to a platform. The Linux kernel tries to abstract most of those, so higher-level algorithms (such as how memory management and scheduling works) can be implemented in C and work the same (or mostly the same) on all architectures.
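If you have a kernel source checkout handy, a quick way to get a feel for how much of this is per-architecture (the paths below are from recent kernel versions and may differ in older trees):

# list the supported architectures and the amount of arch-specific code in each
ls arch/
du -sh arch/*/ | sort -h | tail
# compare the 64-bit and 32-bit x86 syscall tables mentioned above
wc -l arch/x86/entry/syscalls/syscall_64.tbl arch/x86/entry/syscalls/syscall_32.tbl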
{ "score": 7, "source": [ "https://unix.stackexchange.com/questions/503502", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/323971/" ] }
503,586
I would like to apply the print only-matching option ( -o ) to one pattern specified by grep -e 'PATTERN' syntax, while another similarly specified pattern should display the whole line containing the match (i.e. default behavior). Can this be done?
This will select only the matches for one pattern and the full line for another:

grep -oe 'this_pattern' -e '^.*that_pattern.*$' file

This also works and makes it a bit cleaner:

grep -Eoe 'this_pattern|^.*that_pattern.*$' file
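An illustrative run against made-up input (the pattern names are placeholders):

$ printf 'foo this_pattern bar\nbaz that_pattern qux\n' | grep -Eoe 'this_pattern|^.*that_pattern.*$'
this_pattern
baz that_pattern qux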
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/503586", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/339346/" ] }
503,601
We want to calculate the sum of the first-column numbers that we get from du:

du -b /tmp/*
6       /tmp/216c6f99-6671-4865-b8bc-7205f5388752_resources
668669  /tmp/hadoop7887078727316788325.tmp
6       /tmp/hadoop-hdfs
42456   /tmp/hive
32786   /tmp/hsperfdata_hdfs
6       /tmp/hsperfdata_hive
32786   /tmp/hsperfdata_root
262244  /tmp/hsperfdata_yarn

So the final sum will be

sum=6+668669+6+42456+32786+6+32786+262244
echo $sum

How can we do it with awk or perl one-liners?
In AWK:

{ sum += $1 }
END { print sum }

So:

du -b /tmp/* | awk '{ sum += $1 } END { print sum }'

Note that the result won't be correct if the directories under /tmp have subdirectories themselves, because du produces running totals on directories and their children. du -s will calculate the sum for you correctly (on all subdirectories and files in /tmp, including hidden ones):

du -sb /tmp

and du -c will calculate the sum of the listed directories and files, correctly too:

du -cb /tmp/*
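With the numbers shown in the question, the pipeline would print their total, which is easy to sanity-check by hand:

du -b /tmp/* | awk '{ sum += $1 } END { print sum }'
# 6 + 668669 + 6 + 42456 + 32786 + 6 + 32786 + 262244 = 1038959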
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/503601", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/237298/" ] }
503,644
I tagged a bunch of messages in mutt index. How can one jump to the next tagged message in mutt? Is it possible to just show all tagged messages?
TL;DR:

Search next tagged message: / then ~T.
Show tagged messages only: l then ~T.

Now, there's an exhaustive answer: You Tagged a set of messages. Now, you want to go to/select the next Tagged message: use / then ~T and you're all set.

In case you temporarily want to be presented with only your Tagged messages, press l (limit results) then ~T. To revert and show all messages again, limit l to all.

See also (neo)mutt pattern modifiers for more info on how to modify your "Searching, Limiting and Tagging" operations.

Goody ahead: mutt bindings below.

macro index I "<search>~T\n" "Search for next Tagged"
macro pager I "<exit><search>~T\n<display-message>" "Jump to next Tagged"
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/503644", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/259089/" ] }
503,679
I have a question regarding making my own unit (service) file for systemd. I've read the documentation and had some questions. After searching around, I found this very helpful answer that gives some detail about some of the questions I was having: How to write a systemd .service file running systemd-tmpfiles

Although I find that answer useful, there is still one part that I do not understand. Mainly this part:

    Since we actually want this service to run later rather than sooner, we then specify an "After" clause. This does not actually need to be the same as the WantedBy target (it usually isn't)

My understanding of After is that it is pretty straightforward. The service (or whatever you are defining) will run after the unit listed in After. Similarly, WantedBy seems pretty straightforward. You are defining that the unit you list has a Want for your service. So for a target like multi-user or graphical, your unit should be run in order for systemd to consider that target reached.

Now, assuming my understanding of how these declarations work is correct so far, my question is this: Why would it even work to list the same unit in the After and WantedBy clauses? For example, defining a unit that is After multi-user.target and also WantedBy multi-user.target seems to me like it would lead to an impossible situation where the unit needs to be started after the target is reached, but also it needs to be started for the target to be considered "reached". Am I misunderstanding something?
The systemd manual discusses the relationship between Before/After and Requires/Wants/BindsTo in the Before=, After= section:

    Note that this setting is independent of and orthogonal to the requirement dependencies as configured by Requires=, Wants= or BindsTo=. It is a common pattern to include a unit name in both the After= and Requires= options,

After does not imply Wants or WantedBy, nor does it conflict with those settings. If both units are triggered to start, After will affect the order, regardless of the dependency chain. If the unit listed in After is not somewhere in the dependency chain, it won't be loaded, since After does not imply any dependency.
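As an illustration (a made-up unit, not from the question), the following combination is perfectly valid: the unit is pulled in by multi-user.target through the Wants link created by WantedBy=, while the After= line only delays its start until the target's own job has completed, so no cycle arises because Wants= does not impose ordering in the other direction:

[Unit]
Description=Example service that starts late (illustrative)
After=multi-user.target

[Service]
ExecStart=/usr/local/bin/example.sh

[Install]
WantedBy=multi-user.target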
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/503679", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/216854/" ] }
503,682
I want to perform shuf --zero-terminated on multi-line strings with a here-document.
Here-documents in Bash and dash don't support this. You can't store a null in a variable, they are removed from command substitutions, you can't write one in literally, and you can't use ANSI-C quoting inside the here-document. Neither shell is null-friendly, and nulls are generally treated as (C-style) string terminators if one does get in. You have a few options: use a real file, use zsh, use process substitution, or use standard input.

You can do exactly what you want in zsh, which is much more null-friendly.

zsh% null=$(printf '\x00')
zsh% hexdump -C <<EOT
heredoc> a${null}b${null}
heredoc> EOT
00000000  61 00 62 00 0a                                    |a.b..|
00000005

Note though that heredocs have an implicit terminating newline, which may not be desirable (it'll be an extra field for shuf after the final null).

For Bash, you can use process substitution almost equivalently to your heredoc, in combination with printf or echo -e to create nulls inline:

bash$ hexdump -C < <(printf 'item 1\x00item\n2\x00')
00000000  69 74 65 6d 20 31 00 69 74 65 6d 0a 32 00        |item 1.item.2.|
0000000e

This is not necessarily entirely equivalent to a here-document, because those are often secretly put into real files by the shell (which matters for seekability, among other things). Since you probably want to suppress terminating newlines, you can't even use a heredoc internally within the commands there - it has to be printf/echo -ne if safe to get fine-grained control over the output.

You can't do process substitution in dash, but in any shell you could pipe in standard input from a subshell:

dash$ (printf 'item 1\x00'
printf 'item\n2\x00') | hexdump -C
00000000  69 74 65 6d 20 31 00 69 74 65 6d 0a 32 00        |item 1.item.2.|
0000000e

shuf is happy to read from standard input by default, so that should work for your concrete use case as I understand it. If you have a more complex command, being on the right-hand side of a pipeline can introduce some confounding elements with scoping.

Finally, you could write your data into a real file using printf and use that instead of a here-document. That option has been covered in the other answer. You'll need to make sure you clean up the file afterwards, and may want to use mktemp or similar if available to create a safe filename if there are any live security concerns.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/503682", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/259284/" ] }
503,686
Opensuse tumbleweed comes with glibc2.29, and so everything else in the system is dependant on it. However, I have CrashPlanDesktop and it needs 2.27 max. I found an opensuse repo with 2.27 easily. However, trying to install it results in this: rpm: /lib64/libc.so.6: version `GLIBC_2.27' not found (required by /usr/lib64/libpopt.so.0) How can I solve this? The CrashPlanDesktop has to communicate to a service. Can I use chroot or something else? Can I extract the rpm into the folder with the executable? Additionally I got this message, but don't know what it means Code: d6 21 12 e3 c4 a7 81 1d 7a 48 5f 26 5f 37 b8 f1 ed f5 f8 7c 86 e8 25 4c a5 5a 29 b7 45 41 0c cc a7 76 95 b4 93 d9 d8 5e 4c b8 f4 95 11 c4 9f 2c fc 6d a0 1d 3c 50 4a e0 5a 6b 48 18 f7 b9 ab
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/503686", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/122303/" ] }
503,702
I accidentally enabled SELINUX and reboot the system without knowing it's consequence. Now, I can't access the login system in my CENTOS 7 unit. What I've tried so far: https://serverfault.com/questions/501304/disable-selinux-permanently kernel /boot/vmlinuz-2.6.32-358.2.1.el6.x86_64 ro root=/dev/xvda1 rd_NO_LUKS rd_NO_LVM rd_NO_MD rd_NO_DM LANG=en_US.UTF-8 SYSFONT=latarcyrheb-sun16 KEYBOARDTYPE=pc KEYTABLE=us crashkernel=auto console=tty0 selinux=0 and this # cat /etc/grub.conf........ root (hd0,0) kernel /vmlinuz-2.6.32-279.el6.x86_64 root=/dev/md3 selinux=0 initrd /initramfs-2.6.32-279.el6.x86_64.img......... but after I reboot the system, I still can't login. Also what is the purpose of root=/dev/xda or /dev/md3 . Update: I access the kernel boot and said that I should set selinux=0in grub.cfg but when I went to grub.cfg it is readonly and the source pathfrom the article is different from the path of the grub.cfg.
I thought I need to type the information on the source in grub. What I did is very simple,I just type Ctrl+X then add selinux=0 on the edited selected kernel version. Spent hourslooking for solution and exploring at boot loader to edit grub.cfg. Sorry I'm a newbie to not thinking that the selinux=0 will just add in Ctrl+X.
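For reference, appending selinux=0 at the boot menu only disables SELinux for that one boot. Once you can log in again, a common way to make the change persistent on CentOS 7 (assuming the stock file locations) is either of these:

sudo sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config
# or keep the kernel-parameter approach for every installed kernel:
sudo grubby --update-kernel=ALL --args="selinux=0"

followed by a reboot.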
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/503702", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/339440/" ] }
503,732
Given I have in a bash script ev=USER How can I get the environment variable value for $USER using ev? Tried naively doing: echo ${"$"$ev} which results in bad substitution. I'd expect to get back whatever the value of $USER is.
By using an indirect expansion (also sometimes called "variable indirection"),

ev=USER
printf '%s\n' "${!ev}"

This is described in the bash (5.0) manual, in the section titled "Parameter Expansion". Or, by making ev a name reference (requires bash 4.3+),

declare -n ev=USER
printf '%s\n' "$ev"

This is described in the bash (5.0) manual, just before the section called "Positional Parameters".
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/503732", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/237494/" ] }
503,738
I have a scenario of doing a curl request with payload from a file to my server. Here I need to replace values in my file incrementing values by 1000 and repeating the same for 25 times. I am able to replace the values by 'sed' but I am not able to loop it for 25 times.Here is what I implemented for one time. curl -H "text/xml" --data-binary "@/home/miracle/email/somainput1.xml" https://x.x.x.x:5550 --insecure -u admin:xxxxx >> somaoutput1.xml my input file has the following code.. <soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/"><soapenv:Body><dp:request domain="HUB" xmlns:dp="http://www.datapower.com/schemas/management"><dp:b2b-query-metadata><dp:query><dp:query-condition evaluation="property-equals"><dp:property-name>ResultCode</dp:property-name><dp:value>0</dp:value></dp:query-condition><dp:query-condition evaluation="logical-and"><dp:query-condition evaluation="property-greater-than"><dp:property-name>InputTime</dp:property-name><dp:value>2019-02-19 23:00:00</dp:value></dp:query-condition><dp:query-condition evaluation="property-less-than"><dp:property-name>InputTime</dp:property-name><dp:value>2019-02-20 11:00:00</dp:value></dp:query-condition></dp:query-condition></dp:query><dp:result-constraints><dp:max-rows>1000</dp:max-rows>**<dp:start-index>18001</dp:start-index>**<dp:include-properties><dp:property-name>SenderName</dp:property-name><dp:property-name>ReceiverName</dp:property-name><dp:property-name>ResultCode</dp:property-name></dp:include-properties></dp:result-constraints></dp:b2b-query-metadata></dp:request></soapenv:Body></soapenv:Envelope> and I can able to replace it by sed using sed -i '23s/18001/19001/g' b2bsoapinput.xml I need to do the loop and send the same curl request 25 times.
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/503738", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/339469/" ] }
503,830
I want to check for the existence of multiple directories, say, dir1 , dir2 and dir3 , in the working directory. I have the following if [ -d "$PWD/dir1" ] && [ -d "$PWD/dir2" ] && [ -d "$PWD/dir3" ]; then echo Trueelse echo Falsefi But I suspect there is a more elegant way of doing this. Do not assume that there is a pattern in the names of the directories. The goal is to check for the existence of a few directories and for the nonexistence of others. I'm using Bash, but portable code is preferred.
I would loop:

result=True
for dir in \
        "$PWD/dir1" \
        "$PWD/dir2" \
        "$PWD/dir3"
do
    if ! [ -d "$dir" ]; then
        result=False
        break
    fi
done
echo "$result"

The break causes the loop to short-circuit, just like your chain of &&
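If you also need to verify that certain other directories do not exist (as the question mentions), the same pattern extends naturally; dir4 and dir5 below are placeholders for whatever must be absent:

result=True
for dir in "$PWD/dir1" "$PWD/dir2" "$PWD/dir3"; do
    [ -d "$dir" ] || { result=False; break; }
done
if [ "$result" = True ]; then
    # these directories must NOT exist (names are examples)
    for dir in "$PWD/dir4" "$PWD/dir5"; do
        [ ! -d "$dir" ] || { result=False; break; }
    done
fi
echo "$result"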
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/503830", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/339541/" ] }
503,851
I have a configuration as below in my ~/.ssh/config file: Host xxx HostName 127.0.0.1 Port 2222 User gigi ServerAliveInterval 30 IdentityFile ~/blablabla # CertificateFile ~/blablabla-cert.pub which works fine but I'm curious about how would one generate the CertificateFile if really wants to use it? Consider one already has the private and public RSA keys generated with e.g. openssl req -newkey rsa:2048 -x509 [...] .
The certificate model of authentication used by SSH is a variation of the public key authentication method. With certificates, each user's (or host's) public key is signed by another key, known as the certificate authority (CA). The same CA can be used to sign multiple user or host keys. The user or the host can then trust a single CA instead of having to trust each individual user/host key. Because this is a change in the authentication model, implementing certificates requires changes on both the client and the server side. Also, do note that the certificates used by SSL (the ones generated by openssl ) are different from the ones used by SSH. This topic is explained by these QAs at the Security SE: What is the difference between SSL & SSH? , Converting keys between OpenSSL and OpenSSH . Now, since the question is about how a client could connect to a server using an SSH certificate, let's look at that approach. The manual page for ssh-keygen has some relevant information: ssh-keygen supports signing of keys to produce certificates that may be used for user or host authentication. Certificates consist of a public key, some identity information, zero or more principal (user or host) names and a set of options that are signed by a Certification Authority (CA) key. Clients or servers may then trust only the CA key and verify its signature on a certificate rather than trusting many user/host keys. Note that OpenSSH certificates are a different, and much simpler, format to the X.509 certificates used in ssl (8). ssh-keygen supports two types of certificates: user and host. User certificates authenticate users to servers, whereas host certificates authenticate server hosts to users. To generate a user certificate: $ ssh-keygen -s /path/to/ca_key -I key_id /path/to/user_key.pub The resultant certificate will be placed in /path/to/user_key-cert.pub . A host certificate requires the -h option: $ ssh-keygen -s /path/to/ca_key -I key_id -h /path/to/host_key.pub The host certificate will be output to /path/to/host_key-cert.pub . The first thing we'll need here is a CA key. A CA key is a regular private-public key pair, so let's generate one as usual: ssh-keygen -t rsa -f ca The -f ca option simply specifies the output filename as 'ca'. This results in the two files being generated - ca (private key) and ca.pub (public key). Next, we'll sign our user key with the CA's private key (following the example from the manual): ssh-keygen -s path/to/ca -I myuser@myhost -n myuser ~/.ssh/id_rsa.pub This will generate a new file named ~/.ssh/id_rsa-cert.pub which contains the SSH certificate. The -s option specifies the path to the CA private key, the -I option specifies an identifier that is logged at the server-side, and the -n option specifies the principal (username). The contents of the certificate can be verified by running ssh-keygen -L -f ~/.ssh/id_rsa-cert.pub . At this point, you're free to edit your configuration file (~/.ssh/config) and include the CertificateFile directive to point to the newly generated certificate. As the manual indicates , the IdentityFile directive must also be specified along with it to identify the corresponding private key. The last thing to do is to tell the server to trust your CA certificate. You'll need to copy over the public key of the CA certificate to the target server. This is done by editing the /etc/ssh/sshd_config file and specifying the TrustedUserCAKeys directive: TrustedUserCAKeys /path/to/ca.pub Once that is done, restart the SSH daemon on the server. 
On my CentOS system, this is done by running systemctl restart sshd . After that, you will be able to log in to the system using your certificate. Tracing your ssh connection using the verbose flag ( -v ) will show the certificate being offered to the server and the server accepting it. One last thing to note here is that any user key signed with the same CA key will now be trusted by the target server. Access to the CA keys must be controlled in any practical scenario. There are also directives such as AuthorizedPrincipalsFile that can be used to limit the access from the server side. See the manual for sshd_config for more details. On the client side, the certificates can also be created with tighter specifications. See the manual for ssh-keygen for those details.
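Tying this back to the client configuration from the question, the ~/.ssh/config stanza might end up looking like this (the Host block values are the ones from the question; the key and certificate paths follow the signing example above):

Host xxx
    HostName 127.0.0.1
    Port 2222
    User gigi
    ServerAliveInterval 30
    IdentityFile ~/.ssh/id_rsa
    CertificateFile ~/.ssh/id_rsa-cert.pub

Running ssh -v xxx should then show the certificate being offered to the server during authentication.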
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/503851", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/227120/" ] }
503,902
I want to insert this cat <<EOF >> /etc/security/limits.conf* soft nproc 65535 * hard nproc 65535 * soft nofile 65535 * hard nofile 65535root soft nproc 65535root hard nproc 65535root soft nofile 65535root hard nofile 65535EOF into the second to last line of the file, before the # End of file line. I know I could use other methods to insert this statement without the use of EOF but for visual candy I wanted to maintain this format as well for readability.
You can use ex (which is a mode of the vi editor) to accomplish this. You can use the :read command to insert the contents into the file. That command takes a filename, but you can use the /dev/stdin pseudo-device to read from standard input, which allows you to use a <<EOF marker. The :read command also takes a range, and you can use the $- symbol, which breaks down into $ , which indicates the last line of the file, and - to subtract one from it, getting to the second to last line of the file. (You could use $-1 as well.) Putting it all together:

$ ex -s /etc/security/limits.conf -c '$-r /dev/stdin' -c 'wq' <<EOF
* soft nproc 65535
* hard nproc 65535
* soft nofile 65535
* hard nofile 65535
root soft nproc 65535
root hard nproc 65535
root soft nofile 65535
root hard nofile 65535
EOF

The -s is to make it silent (not switch into visual mode, which would make the screen blink.) The $-r is abbreviated (a full $-1read would have worked as well) and finally the wq is how you write and quit in vi . :-)

UPDATE: If instead of inserting before the last line, you want to insert before a line with specific contents (such as "# End of file"), then just use a /search/ pattern to do so. For example:

$ ex -s /etc/security/limits.conf -c '/^# End of file/-1r /dev/stdin' -c 'wq' <<EOF
...
EOF
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/503902", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/152242/" ] }
503,903
I always use the following commands to yank an inner word and then paste it in the line above: yiw -> O -> Esc -> p Obviously P by itself (without using O to insert a line above) doesn't work, because there's no new line character, so instead that just pastes it before the cursor. Is there an easier way to do this?
Two suggestions to paste the contents on a line of its own: You can use the :put! command , since it always works linewise. The version with the ! inserts the contents of the register before (rather than after) the current line. (You can abbreviate it to :pu! .) You can use O , Ctrl + R , " , Esc to insert a line above with the contents of the latest yank. See help on i_CTRL-R for the Ctrl + R part. And " is the "unnamed" register, which is where yanks and deletes go by default. This is not necessarily "easier" than O , Esc , p , but it has the advantage that it's a single command, so it's repeatable with . and the whole action can be undone at once. If this is a frequent enough operation for you, consider creating a mapping for it, that would be surely the easiest one to type. :-)
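If you do this often enough, a mapping reduces it to a single keystroke. Something along these lines could go in your vimrc; the <Leader>P key is an arbitrary choice:

nnoremap <Leader>P :put!<CR>
" or, to keep the O / Ctrl-R approach described above:
nnoremap <Leader>P O<C-r>"<Esc>

Either variant puts the contents of the unnamed register on a new line above the cursor.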
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/503903", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/339595/" ] }
503,944
xargs and basename work together as I would expect:

$ printf '%s\n' foo/index.js bar/index.js baz/index.js | xargs basename
index.js
index.js
index.js

xargs and dirname , though, appear not to work together:

$ printf '%s\n' foo/index.js bar/index.js baz/index.js | xargs dirname
usage: dirname path

I would expect

foo
bar
baz

as output. What am I missing? I'm on Darwin 18.2.0 (macOS 10.14.3).
dirname on macOS only takes a single pathname, whereas basename is able to work with multiple pathnames. It is however safest to call basename with a single pathname so that it does not accidentally try to remove the second pathname from the end of the first, as in

$ basename some/file e
fil

When calling these utilities from xargs you may ask xargs to run the utility with a single newline-delimited string at a time:

printf '%s\n' some arguments | xargs -I {} basename {}

or,

printf '%s\n' some arguments | xargs -I {} dirname {}

You could also use xargs -L 1 utility rather than xargs -I {} utility {} .
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/503944", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/22289/" ] }
503,951
For each line in my file, if the line ends with / I want to remove it. How to do this?My attempt: sed -e "s/$\/$//" myfile.txt > myfile_noslash.txt Did not work.
Your command simply has an errant dollar sign. Fixed: sed -e 's/\/$//' myfile.txt > myfile_noslash.txt
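If you would rather not escape the slash, sed also lets you pick a different delimiter for the s command, so either of the following should behave identically:

sed 's|/$||' myfile.txt > myfile_noslash.txt
sed 's,/$,,' myfile.txt > myfile_noslash.txt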
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/503951", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/299440/" ] }
503,975
I've seen that make is useful for large projects, especially with confusing dependencies described in a Makefile , and also helping with workflow. I haven't heard any advantages for using make for small projects. Are there any?
As opposed to what? Suppose you have a program that you have split into two files, which you have imaginatively named file1.c and file2.c . You can compile the program by running

cc file1.c file2.c -o yourprogram

But this requires recompiling both files every time, even if only one has changed. You can decompose the compilation steps into

cc -c file1.c
cc -c file2.c
cc file1.o file2.o -o yourprogram

and then, when you edit one of the files, recompile only that file (and perform the linking step no matter what you changed). But what if you edit one file, and then the other, and you forget that you edited both files, and accidentally recompile only one? Also, even for just two files, you've got about 60 characters' worth of commands there. That quickly gets tedious to type. OK, sure, you could put them into a script, but then you're back to recompiling every time. Or you could write a really fancy, complicated script that checks what file(s) had been modified and does only the necessary compilations. Do you see where I'm going with this?
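For comparison, a minimal Makefile for this two-file example might look like the following (the compiler name and absence of flags are only for illustration; recipe lines must start with a tab character):

yourprogram: file1.o file2.o
	cc file1.o file2.o -o yourprogram

file1.o: file1.c
	cc -c file1.c

file2.o: file2.c
	cc -c file2.c

After editing only file2.c, running make rebuilds file2.o and relinks yourprogram while leaving file1.o alone, which is exactly the bookkeeping described above, done for you.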
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/503975", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/339667/" ] }
504,043
I would like to have a list of all the timezones in my system's zoneinfo database (note : system is a debian strecth linux) The current solution I have is : list all paths under /usr/share/zoneinfo/posix , which are either plain files or symlinks cd /usr/share/zoneinfo/posix && find * -type f -or -type l | sort I am not sure, however, that each and every known timezone is mapped to a path under this directory. Question Is there a command which gives the complete list of timezones in the system's current zoneinfo database ?
On Debian 9, your command gave me all of the timezones listed here: https://en.wikipedia.org/wiki/List_of_tz_database_time_zones Additionally, systemd provides timedatectl list-timezones , which outputs a list identical to your command. As far as I know, the data in tzdata is provided directly from IANA: This package contains data required for the implementation of standard local time for many representative locations around the globe. It is updated periodically to reflect changes made by political bodies to time zone boundaries, UTC offsets, and daylight-saving rules. So just keep the tzdata package updated.
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/504043", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/33732/" ] }
504,057
I have a curl command that sends a string of text to the server and I've been trying to figure out how to either have the string of text come from a file or from a bash variable. The command looks like this: curl -X POST -u "apikey:<apikey>"--header "Content-Type: application/json"--data '{"text": "<variable>"}'"<url>" I can't figure out how to get a variable in there. I've tried replacing with $variable and $(< file) but I don't know how to get those to spit out text without an echo and I can't echo in a curl.
Stop the single quoted string, follow with the variable expansion, posibly double quoted, and resume the single quoted string: --data '{"text": "'"$variable"'"}' ( $variable should still expand to something that together with the surroundings forms legal JSON, or else the other side probably won't be very happy :) .)
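If the value of $variable may contain quotes or newlines and jq happens to be installed, you can let it build the JSON and do the escaping for you; the URL and credentials below are the same placeholders as in the question:

curl -X POST -u "apikey:<apikey>" \
  --header "Content-Type: application/json" \
  --data "$(jq -n --arg text "$variable" '{text: $text}')" \
  "<url>"

Here jq -n --arg produces {"text": "..."} with the value correctly escaped as a JSON string.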
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/504057", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/339751/" ] }
504,063
At some point, in some teaching material (from Linux Foundation) on Linux that I came across, the following is mentioned: ip command is more versatile and more efficient than ifconfig because it uses netlink sockets rather than ioctl system calls. Can anyone elaborate a bit on this because I cannot understand what's going on under the hood? P.S. I am aware of this topic on those tools but it does not address this specific difference on how they operate
The ifconfig command on operating systems such as FreeBSD and OpenBSD was updated in line with the rest of the operating system. It nowadays can configure all sorts of network interface settings on those operating systems, and handle a range of network protocols. The BSDs provide ioctl() support for these things. This did not happen in the Linux world. There are, today, three ifconfig commands: ifconfig from GNU inetutils jdebp % inetutils-ifconfig -lenp14s0 enp15s0 lojdebp % inetutils-ifconfig lolo Link encap:Local Loopback inet addr:127.0.0.1 Bcast:0.0.0.0 Mask:255.0.0.0 UP LOOPBACK RUNNING MTU:65536 Metric:1 RX packets:9087 errors:0 dropped:0 overruns:0 frame:0 TX packets:9087 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:51214341 TX bytes:51214341jdebp % ifconfig from NET-3 net-tools jdebp % ifconfig -lifconfig: option -l' not recognised.ifconfig: --help' gives usage information.jdebp % ifconfig lolo: flags=73<UP,LOOPBACK,RUNNING> mtu 65536 inet 127.0.0.1 netmask 255.0.0.0 inet6 ::1 prefixlen 128 scopeid 0x10<host> inet6 ::2 prefixlen 128 scopeid 0x80<compat,global> inet6 fe80:: prefixlen 10 scopeid 0x20<link> loop txqueuelen 1000 (Local Loopback) RX packets 9087 bytes 51214341 (48.8 MiB) RX errors 0 dropped 0 overruns 0 frame 0 TX packets 9087 bytes 51214341 (48.8 MiB) TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0jdebp % ifconfig from (version 1.40 of) the nosh toolset jdebp % ifconfig -lenp14s0 enp15s0 lojdebp % ifconfig lolo link up loopback running link address 00:00:00:00:00:00 bdaddr 00:00:00:00:00:00 inet4 address 127.0.0.1 prefixlen 8 bdaddr 127.0.0.1 inet4 address 127.53.0.1 prefixlen 8 bdaddr 127.255.255.255 inet6 address ::2 scope 0 prefixlen 128 inet6 address fe80:: scope 1 prefixlen 10 inet6 address ::1 scope 0 prefixlen 128jdebp % sudo ifconfig lo inet4 127.1.0.2 aliasjdebp % sudo ifconfig lo inet6 ::3/128 aliasjdebp % ifconfig lolo link up loopback running link address 00:00:00:00:00:00 bdaddr 00:00:00:00:00:00 inet4 address 127.0.0.1 prefixlen 8 bdaddr 127.0.0.1 inet4 address 127.1.0.2 prefixlen 32 bdaddr 127.1.0.2 inet4 address 127.53.0.1 prefixlen 8 bdaddr 127.255.255.255 inet6 address ::3 scope 0 prefixlen 128 inet6 address ::2 scope 0 prefixlen 128 inet6 address fe80:: scope 1 prefixlen 10 inet6 address ::1 scope 0 prefixlen 128 jdebp % As you can see, the GNU inetutils and NET-3 net-tools ifconfig s have some marked deficiencies, with respect to IPv6, with respect to interfaces that have multiple addresses, and with respect to functionality like -l . The IPv6 problem is in part some missing code in the tools themselves. But in the main it is caused by the fact that Linux does not (as other operating systems do) provide IPv6 functionality through the ioctl() interface. It only lets programs see and manipulate IPv4 addresses through the networking ioctl() s. Linux instead provides this functionality through a different interface, send() and recv() on a special, and somewhat odd, address family of sockets, AF_NETLINK . The GNU and NET-3 ifconfig s could have been adjusted to use this new API. The argument against doing so was that it was not portable to other operating systems, but these programs were in practice already not portable anyway so that was not much of an argument. But they weren't adjusted, and remain as aforeshown to this day. (Some people worked on them at various points over the years, but the improvements, sad to say, never made it into the programs. 
For example: Bernd Eckenfels never accepted a patch that added some netlink API capability to NET-3 net-tools ifconfig , 4 years after the patch had been written.) Instead, some people completely reinvented the toolset as an ip command, which used the new Linux API, had a different syntax, and combined several other functions behind a fashionable command subcommand -style interface. I needed an ifconfig that had the command-line syntax and output style of the FreeBSD ifconfig (which neither the GNU nor the NET-3 ifconfig has, and which ip most certainly does not have). So I wrote one. As proof that one could write an ifconfig that uses the netlink API on Linux, it does. So the received wisdom about ifconfig , such as what you quote, is not really true any more. It is now untrue to say that " ifconfig does not use netlink.". The blanket that covered two does not cover three. It has always been untrue to say that "netlink is more efficient". For the tasks that one does with ifconfig , there isn't really much in it when it comes to efficiency between the netlink API and the ioctl() API. One makes pretty much the same number of API calls for any given task. Indeed, each API call is two system calls in the netlink case, as opposed to one in the ioctl() system. And arguably the netlink API has the disadvantage that on a heavily-used system it explicitly incorporates the possibility of the tool never receiving an acknowledgement message informing it of the result of the API call. It is, furthermore, untrue to say that ip is "more versatile" than the GNU and NET-3 ifconfig s because it uses netlink . It is more versatile because it does more tasks, doing things in one big program that one would do with separate programs other than ifconfig . It is not more versatile simply by dint of the API that it uses internally for performing those extra tasks. There's nothing inherent to the API about this. One could write an all-in-one tool that used the FreeBSD ioctl() API, for example, and equally well state that it is "more versatile" than the individual ifconfig , route , arp , and ndp commands. One could write route , arp , and ndp commands for Linux that used the netlink API, too. Further reading Jonathan de Boyne Pollard (2019). ifconfig . nosh Guide . Softwares. Eduardo Ferro (2009-04-16). ifconfig: reports wrong ip address / initial patch . Debian bug #359676.
{ "score": 7, "source": [ "https://unix.stackexchange.com/questions/504063", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/136100/" ] }
504,082
I think a example would best explain what I need _v1="windows"_v2_windows="/mnt/d"_v2_osx="/Volumes/d"echo $_v2_`echo $_v1` I want to echo the value of _v2_windows but using _v1 to determine which of the two v2 s to get. I know can use a case statement to solve the problem but I'm trying to avoid that.
With zsh :

${(P)varname} expands to the value of the variable whose name is stored in $varname . So if $varname contains var , ${(P)varname} expands to the same thing as $var .

${(e)var} expands to the content of $var but also performs parameter expansions, command substitution and arithmetic expansion within. So if $var contains $othervar , ${(e)var} expands to the same thing as $othervar .

You can nest variable expansion operators, so things like ${(P)${var:-something}} work.

${:-content} is one way to have a parameter expansion expand to arbitrary text (here content ) (see the manual for details).

So you could do:

_v1=windows
_v2_windows=/mnt/d
printf '%s\n' ${(P)${:-_v2_$_v1}}

Or:

printf '%s\n' ${(e)${:-\$_v2_$_v1}}

Or do it in two steps:

varname=_v2_$_v1
printf '%s\n' ${(P)varname}

Or:

expansions_to_evaluate=\$_v2_$_v1
printf '%s\n' ${(e)expansions_to_evaluate}

Or you could use the standard POSIX syntax:

eval 'printf "%s\n" "${_v2_'"$_v1"'}"'

Beware that if the value of $_v1 is not under your control, all those amount to an arbitrary command injection vulnerability, you'd need to sanitize the value first. Also note that zsh supports associative arrays (and has long before bash ), so you can do:

typeset -A mnt
mnt=(
  windows /mnt/d
  osx /Volumes/d
)
os=windows
printf '%s\n' $mnt[$os]

Which would be a lot more legible, and wouldn't have any of those security implications.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/504082", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/65536/" ] }
504,083
So far on Windows 10 I used mainly the PuTTy bundle (PuTTygen, Pageant and PuTTy, etc) to SSH into remote Linux machines. I use Pageant as a private key holder to hold to private key (and possible passphrase) in memory as long as the PC isn't rebooted. I'd like to use WSLs OpenSSH as client more often as with this command: ssh USER@IP -vvv -L 22:localhost:22 # Or [email protected]; Yet I didn't find a Pageant like behavior on WSL OpenSSH: Holding the SSH key in memory for all possible WSL shells (even in shell termination) until the machine itself is rebooted from whatever reason How could I have a Pageant-like behavior in WSL OpenSSH?
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/504083", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/-1/" ] }
504,091
This is not about File Manager as I can copy/paste or type the full path in the address bar. This question is about other apps, e.g. FeatherPad text editor. If you have a file buried deep within your hard drive, navigating its full path can take forever. You have to click directory one by one in order to get to the file location. In Microsoft Windows, you can navigate easily by typing the full path on the address bar. Unfortunately, this is not possible in Linux. As example, I've been trying to open a file via FeatherPad text editor. As you can see in the address bar, it's not possible to type the path in address bar or copy/paste. This is screenshot from Window's notepad. As you can see, it is possible to type full path or copy/paste in it's address bar. Would it be possible to do that? If yes, please let me know how. If not, what is the alternative?
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/504091", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/-1/" ] }
504,105
I have these thin clients that are not in use I think they’d do perfect for a security camera video wall—just feed them an RTSP/RTMP URL and be done, GUI not needed. I’ve been lazily looking for candidates between projects but I can only find stuff for Raspberry Pie devices, furthermore, the clients are rather constrained and upgrading a single component in one of them probably costs more than all of them. They can’t be donated either because it’s hard to find an OS without expired system root CAs, up-to-date or able to be up-to-date with current cryptography, it’d do more harm than good. What I’ve done & specs They all have only 1GB of mem, 2GB of storage space but it can’t be used “too much” because they have some sort of RAM disk trickery. The only current-ish OSes I’ve managed to install are Porteus and Lubuntu on a USB stick-maybe I could even modify this to clear out some resources so the video player is able to keep playing smoothly for days. The original image is an embedded version of Windows-2009 I believe-with great tools for kiosk deployment from HP but I’d still need a player and the certificate thing is a real issue. BTW, the CPU is actually a i586 processor; this is a big part of the reason why I’ve had such a hard time finding something.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/504105", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/290539/" ] }
504,138
I am using bash to write a script to make it easier to set breakpoints. I am trying to see if I can use echo and pipe to send set breakpoints commands to the java debugger jdb. The command I have strung together successfully sets a breakpoint in jdb, but afterwards it immediately closes the debugger. I am piping the breakpoint to jdb as follows.... (echo -n; sleep 5; echo "stop at MainActivity:77") | jdb -sourcepath app/src/main/java -attach localhost:7777 the output is as follows... Initializing jdb ...> Set breakpoint saf.mobilebeats2.MainActivity:77> Input stream closed.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/504138", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/220721/" ] }
504,150
In bash, how do I encode zero-width sequences into PS1, when those sequences are coming from stdout of an external process or function? How do I implement writes-prompt-sequences-to-stdout so that it can emit multi-colored text to the prompt? PS1='$( writes-prompt-sequences-to-stdout )' I know that, when writing a bash PS1 prompt, I must wrap zero-width sequences in \[ \] so bash can compute correct prompt width. PS1='\[\e[0;35m\]$ \[\e[00m\]' bash does not print the \[ \] and understands the prompt is only 2 characters wide. How do I move those sequences into an external function? The following does not work, my prompt looks like \[\]$ \[\] , even though I can run render-prompt and see it writing the correct sequence of bytes to stdout. PS1='$( render-prompt )'function render-prompt { printf '\[\e[0;35m\]$ \[\e[00m\]'} Moving the printf call into PS1 does work: PS1='$( printf '"'"'\[\e[0;35m\]$ \[\e[00m\]'"'"' )' I theorized, perhaps bash is scanning the PS1 string before execution to count the number of zero-width bytes. So I tried tricking it by encoding [] sequences that aren't printed, but it correctly ignores the trick. PS1='$( printf '"'"'$$$$$'"'"' '"'"'\[\e[00m\]'"'"' )' My question: How do I write \[ \] sequences to stdout from a function or binary that is invoked via PS1?
I figured it out. Bash special-cases \e , \[ , and \] within PS1. It converts \e to an escape byte, \[ to a 1 byte, and \] to a 2 byte. External commands must write 1 and 2 bytes to stdout. According to ASCII, these encode "start of heading" and "start of text." http://www.columbia.edu/kermit/ascii.html Here's a working example, which relies on printf converting \ escapes within the first positional parameter into the correct bytes:

PS1='$( render-prompt )'
function render-prompt {
  printf '\1\033[0;35m\2$ \1\033[00m\2'
}

render-prompt | hexdump -C

00000000  01 1b 5b 30 3b 33 35 6d 02 24 20 01 1b 5b 30 30  |..[0;35m.$ ..[00|
00000010  6d 02                                            |m.|
00000012
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/504150", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/339823/" ] }
504,163
For a long time I've used ipconfig in Windows, and ifconfig in Unix to find out my local IPv4, for different purposes. There are times when your screen is small, or you have an extensive amount of network adapters connected to your computer, making this list really extensive. I know you can pipe it into less , in order to avoid scrolling, and filter with grep , but that's rather cumbersome. I was wondering if there was an easier way to find the basic information your DHCP provides you (gateway, IPv4, and subnet mask), without having to squint your eyes in order to find the numbers you are looking for, and without having to look up a command in your notes or Google.
ip addr - list IPv4 and IPv6 addresses
ip -4 addr - list only IPv4 addresses ( ip -c -4 addr for color)
ip -6 addr - list only IPv6 addresses ( ip -c -6 addr for color)
ip route - IPv4 routing table
ip -6 route - IPv6 routing table
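If you want just one piece of that information for use in a script, a few narrower variants help; eth0 below is only an example interface name, and the -brief flag needs a reasonably recent iproute2:

ip -4 -o addr show dev eth0 | awk '{ print $4 }'   # address/prefix, e.g. 192.168.1.10/24
ip -4 -brief addr show                             # one compact line per interface
ip route show default                              # default gateway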
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/504163", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/334455/" ] }
504,187
For example, the only thing in file.txt looks like this: xxxxxxxxxHAHAxxxxxxHOHOxxxxxxx I hope to replace HAHA with seq 1 3 and replace HOHO with seq 5 7, so the output should be: xxxxxxxxx1xxxxxx5xxxxxxx xxxxxxxxx2xxxxxx6xxxxxxx xxxxxxxxx3xxxxxx7xxxxxxx What I did: for i in $(seq 1 3) do sed "s/HAHA/$i/g" file.txt for i in $(seq 5 7) do sed "s/HOHO/$i/g" file.txt done done > new.txt But new.txt doesn't show what I expected. How should I change the code?
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/504187", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/339837/" ] }
504,208
The logical volume(aka LV), centos-home is created automatically when installing CentOS 7 by default, but I didn't use it manually. Now, I have mounted an empty directory, work to centos-home . /home/anselmo/work ==> /dev/mapper/centos-home The following are the results of df -h after mount. [anselmo@anselmo-centos7 ~]$ df -hFilesystem Size Used Avail Use% Mounted on/dev/mapper/centos_anselmo--centos7-root 50G 45G 5.2G 90% /devtmpfs 63G 0 63G 0% /devtmpfs 63G 302M 63G 1% /dev/shmtmpfs 63G 43M 63G 1% /runtmpfs 63G 0 63G 0% /sys/fs/cgroup/dev/sdb3 1014M 358M 657M 36% /boot/dev/sdc1 200M 12M 189M 6% /boot/efi/dev/mapper/centos_anselmo--centos7-home 2.6T 1.7T 948G 65% /hometmpfs 13G 92K 13G 1% /run/user/1000/dev/mapper/centos-home 65G 8.8G 56G 14% /home/anselmo/work Though I mounted an empty directory, the LV had already used space 8.8G . How can I find what uses this space?
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/504208", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/318407/" ] }
504,262
I'm trying to figure out what words does the -i nteractive option of cp accepts as input. For your convenience, here's code that sets up files for experimentation. touch example_file{1..3}mkdir example_dircp example_file? example_dircp -i example_file? example_dir The shell then asks interactively for each file whether it should be overwritten. It seems to accept all sorts of random input. cp: overwrite 'example_dir/example_file1'? qcp: overwrite 'example_dir/example_file2'? wcp: overwrite 'example_dir/example_file3'? e I tried looking into the source code of cp , but I don't know C and searching for overwrite is of no help. As far as I can tell it accepts some words as confirmation for overwriting, and everything else is taken as a no. The problem is even words like ys seem to be accepted as yes , so I don't know what works and what doesn't. I'd like to know how exactly does this work and to have some proof of it by means of documentation or intelligible snippets of source code.
The POSIX standard only specifies that the response need to be "affirmative" for the copying to be carried out when -i is in effect. For GNU cp ,the actual input at that point is handled by a function called yesno() . This function is defined in the lib/yesno.c file in the gnulib source distribution, and looks like this: boolyesno (void){ bool yes;#if ENABLE_NLS char *response = NULL; size_t response_size = 0; ssize_t response_len = getline (&response, &response_size, stdin); if (response_len <= 0) yes = false; else { /* Remove EOL if present as that's not part of the matched response, and not matched by $ for example. */ if (response[response_len - 1] == '\n') response[response_len - 1] = '\0'; yes = (0 < rpmatch (response)); } free (response);#else /* Test against "^[yY]", hardcoded to avoid requiring getline, regex, and rpmatch. */ int c = getchar (); yes = (c == 'y' || c == 'Y'); while (c != '\n' && c != EOF) c = getchar ();#endif return yes;} If NLS ("National Language Support") is not used, you can see that the only reply that the function returns true for is a response that starts with an upper or lower case Y character. Any additional or other input is discarded. If NLS is used, the rpmatch() function is called to determine whether the response was affirmative or not. The purpose of the rpmatch() NLS library function is to determine whether a given string is affirmative or not (with support for internationalisation). On BSD systems, the corresponding function is found in src/bin/cp/utils.c : /* * If the file exists and we're interactive, verify with the user. */intcopy_overwrite(void){ int ch, checkch; if (iflag) { (void)fprintf(stderr, "overwrite %s? ", to.p_path); checkch = ch = getchar(); while (ch != '\n' && ch != EOF) ch = getchar(); if (checkch != 'y' && checkch != 'Y') return (0); } return 1;} This is essentially the same as the non-NLS code path in the GNU code.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/504262", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/328323/" ] }
504,305
Trying to open a python3 virtual environment I have created with python3 -m venv myVenv by doing source myVenv/bin/activate as I do in Linux, but I get ksh: source: not found which mean it is not in my path/installed. When I try to add it with pkg_add , it just tell me it can't find it. Does OpenBSD use something else that allows me to use venv or what should I do?
You are using the Forsyth PD Korn shell, the usual login shell on OpenBSD. The PD Korn shell does not have a source command. The source built-in command is only available in some shells. The command that you want is the . command. Further reading What is the difference between '.' and 'source' in shells?
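With the path from the question, that is simply:

. myVenv/bin/activate

The . command is the standard, portable equivalent of source and is available in pdksh/OpenBSD ksh as well as bash and other POSIX-style shells.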
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/504305", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/116744/" ] }
504,326
I have a text file called "shoplist.txt" which one have: drinks water cola fantafruit banana orange And I want to get how many items per line I have. I'm able to extract drinks and fruit with function "cut" but how can I count how many words I have in each line? My actually code is: fileLine=`cat file.txt`#Here I get each line saving it to fileLinefor line in $fileLine; do echo((aux++))done But this code dosen't work because it save to %fileLine each work (drinks, then water,then cola,...) How can I get the first line and then count the words on that line?
If you can use awk , NF is the number of fields in the current line (by default, a field is a word delimited by any amount of whitespace). Use

awk '{ print NF, $0 }' inputfile

With your sample input, this will print

4 drinks water cola fanta
3 fruit banana orange
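If you only want the count for one particular line, or the counts without the line contents, small variations of the same idea work (the file name is the one from your question):

awk 'NR == 1 { print NF; exit }' shoplist.txt   # word count of the first line only
awk '{ print NF }' shoplist.txt                 # one count per line

Note that NF also counts the leading category word ("drinks", "fruit"); use NF-1 if you only want the items that follow it.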
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/504326", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/335761/" ] }
504,342
My nginx unitfile is following, [root@arif ~]# cat /usr/lib/systemd/system/nginx.service[Unit]Description=The nginx HTTP and reverse proxy serverAfter=network.target remote-fs.target nss-lookup.target[Service]Type=forkingPIDFile=/run/nginx.pid# Nginx will fail to start if /run/nginx.pid already exists but has the wrong# SELinux context. This might happen when running `nginx -t` from the cmdline.# https://bugzilla.redhat.com/show_bug.cgi?id=1268621ExecStartPre=/usr/bin/rm -f /run/nginx.pidExecStartPre=/usr/sbin/nginx -tExecStart=/usr/sbin/nginxExecReload=/bin/kill -s HUP $MAINPIDKillSignal=SIGQUITTimeoutStopSec=5KillMode=processPrivateTmp=true[Install]WantedBy=multi-user.target Here, in the [Service] portion, the value of Type is equal to forking which means from here , The process started with ExecStart spawns a child process that becomes the main process of the service. The parent process exits when the startup is complete. My questions are, Why a service does that? What are the advantages for doing this? What's wrong is Type=simple or other similar options?
Why a service does that? Services generally do not do that, in fact. Aside from the fact that it isn't good practice, and the idea of "dæmonization" is indeed fallacious, what services do isn't what the forking protocol requires . They get the protocol wrong, because they are in fact doing something else , which is being shoehorned into the forking protocol, usually unnecessarily. What are the advantages for doing this? There aren't any. Better readiness notification protocols exist, and no-one actually speaks this protocol properly. This service unit is not doing this because it is advantageous. What's wrong is Type=simple or other similar options? Nothing. It is in fact generally the use of the forking readiness protocol that is wrong. This is not best practice, as claimed in other answers. Quite the reverse. The simple fact is that this is the best of a bad job, a bodge to cope with a behaviour of nginx that still cannot be turned off. Most service softwares nowadays, thanks to a quarter of a century of encouragement from the IBM SRC, daemontools, and other serious service management worlds, have gained options for, or even changed their default behaviours to, not attempting to foolishly "dæmonize" something that is already in dæmon context . This is still not the case for nginx, though. daemon off does not work, sadly. Just as many softwares used to erroneously conflate "non-dæmonize" mode with debug mode (but often no longer do, nowadays), nginx unfortunately conflates it with other things, such as not handling its control signals. People have been pushing for this for 5 years, so far. Further reading Jonathan de Boyne Pollard (2015). Readiness protocol problems with Unix dæmons . Frequently Given Answers. Adrien CLERC (2013-10-27). nginx: Don't use type=forking in systemd service file . Debian Bug #728015. runit and nginx Jonathan de Boyne Pollard (2001). " Don't fork() in order to 'put the dæmon into the background'. ". Mistakes to avoid when designing Unix dæmon programs . Frequently Given Answers. Jonathan de Boyne Pollard (2015). You really don't need to daemonize. Really. . The systemd House of Horror. Numerous examples of readiness protocol mismatches here at StackExchange: https://unix.stackexchange.com/a/401611/5132 https://unix.stackexchange.com/a/200365/5132 https://unix.stackexchange.com/a/194653/5132 https://unix.stackexchange.com/a/211126/5132 https://unix.stackexchange.com/a/336067/5132 https://unix.stackexchange.com/a/283739/5132 https://unix.stackexchange.com/a/242860/5132
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/504342", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/171196/" ] }
504,367
Any ideas how to group and sum data on the below with the command line script? 2018-02-01 102018-02-03 122018-03-01 12018-03-01 122018-04-12 9 2019-01-12 213 expected result from the above data set 2018-02 222018-03 132018-04 92019-01 213
Try this

$ awk '{a[substr($0,0,7)]+=$2}END{for(b in a){print b,a[b]}}' myfile
2018-02 22
2019-01 213
2018-03 13
2018-04 9
$

For sorted, add sort

$ awk '{a[substr($0,0,7)]+=$2}END{for(b in a){print b,a[b]}}' myfile | sort
2018-02 22
2018-03 13
2018-04 9
2019-01 213
$
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/504367", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/340051/" ] }
504,381
Somehow but not quite building upon the older question "ntpd vs. systemd-timesyncd - How to achieve reliable NTP syncing?" , I'd like to ask about the differences between chrony and systemd-timesyncd in terms of an NTP client . I know that systemd-timesyncd is a more or less minimal ntp client implementation whereas chrony is a full fledged NTP daemon solution that happens to include an NTP client. The ubuntu Bionic Beaver release notes state the following: For simple time sync needs the base system already comes with systemd-timesyncd. Chrony is only needed to act as a time server or if you want the advertised more accurate and efficient syncing. I like the idea of using a minimal preinstalled tool to do the job and I am pretty sure systemd-timesyncd will do the job for my use cases, still I am curious: What are the real world differences between the two in terms of accuracy? What are the differences in efficiency? What are a "non simple" time sync needs aka the use-cases for chrony as NTP client?
The announcement of systemd-timesyncd in the systemd NEWS file does a good job of explaining the differences of this tool in comparison with Chrony and tools like it. (emphasis mine): A new "systemd-timesyncd" daemon has been added for synchronizing the system clock across the network. It implements an SNTP client . In contrast to NTP implementations such as chrony or the NTP reference server this only implements a client side, and does not bother with the full NTP complexity, focusing only on querying time from one remote server and synchronizing the local clock to it . Unless you intend to serve NTP to networked clients or want to connect to local hardware clocks this simple NTP client should be more than appropriate for most installations. [...] This setup is a common use case for most hosts in a server fleet. They will usually get synchronized from local NTP servers, which themselves get synchronized from multiple sources, possibly including hardware. systemd-timesyncd tries to provide an easy-to-use solution for that common use case. Trying to address your specific questions: What are the real world differences between the two in terms of accuracy? I believe you can get higher accuracy by getting synchronization data from multiple sources, which is specifically not a supported use case for systemd-timesyncd. But when you're using it to get synchronization data from central NTP servers connected to your reliable internal network, using multiple sources isn't really that relevant and you get good accuracy from a single source. If you're synchronizing your server from a trusted server in a local network and in the same datacenter , the difference in accuracy between NTP and SNTP will be virtually non-existent. NTP can take RTT into account and do timesmearing, but that's not that beneficial when your RTT is really small, which is the case of a fast local network and a nearby machine. You also don't need multiple sources if you can trust the one you're using. What are the differences in efficiency? Getting synchronization from a single source is much simpler than getting it from multiple sources, since you don't have to make decisions about which sources are better than others and possibly combine information from multiple sources. The algorithms are much simpler and will require less CPU load for the simple case. What are a "non simple" time sync needs aka the use-cases for chrony as NTP client? That's addressed in the quote above, but in any case these are use cases for Chrony that are not covered by systemd-timesyncd: running NTP server (so that other hosts can use this host as a source for synchrnoization); getting NTP synchronization information from multiple sources (which is important for hosts getting that information from public servers on the Internet); and getting synchronization information from the local clock, which usually involves specialized hardware such as GPS devices which can get accurate time information from satellites. These use cases require Chrony or ntpd or similar.
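If you do stay with systemd-timesyncd and just want to point it at your own servers, the configuration is small. On Ubuntu the file is /etc/systemd/timesyncd.conf; the server names below are placeholders for your local NTP servers:

[Time]
NTP=ntp1.example.internal ntp2.example.internal
FallbackNTP=ntp.ubuntu.com

Then restart the service and check the result:

sudo systemctl restart systemd-timesyncd
timedatectl status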
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/504381", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/87704/" ] }
504,412
I am trying to manipulate a file which contains numbers in scientific notation, but without the e symbol, i.e. 1.2e+3 is written as 1.2+3 . The easiest thing I thought of doing with awk was to replace + with e+ , using the gsub function and do my calculation in the new file. The same goes for the minus case. So a simple fix could be done using the following command awk '{gsub("+", "e+", $1); print $1, $2, $3, $4, $5}' file_in and do the same in all the columns. However the file contains also negative numbers which makes things a bit more complicated. A sample file can be seen bellow 1.056000+0 5.000000-1 2.454400-3 2.914800-2 8.141500-6 2.043430+1 5.000000-1 2.750500-3 2.698100-2-2.034300-4 3.829842+1 5.000000-1 1.969923-2 2.211364-2 9.499900-6 4.168521+1 5.000000-1 1.601262-2 3.030919-2-3.372000-6 6.661784+1 5.000000-1 5.250575-2 3.443669-2 2.585500-5 7.278104+1 5.000000-1 2.137055-2 2.601701-2 8.999800-5 9.077287+1 5.000000-1 1.320498-2 2.961020-2-1.011600-5 9.248130+1 5.000000-1 3.069610-3 2.786329-2-6.317000-5 1.049935+2 5.000000-1 4.218794-2 3.321955-2-5.097000-6 1.216283+2 5.000000-1 1.432105-2 3.077165-2 4.300300-5 Any idea on how to manipulate and calculations with such a file?
Is this output correct? 1.056000e+0 5.000000e-1 2.454400e-3 2.914800e-2 8.141500e-6 2.043430e+1 5.000000e-1 2.750500e-3 2.698100e-2-2.034300e-4 3.829842e+1 5.000000e-1 1.969923e-2 2.211364e-2 9.499900e-6 4.168521e+1 5.000000e-1 1.601262e-2 3.030919e-2-3.372000e-6 6.661784e+1 5.000000e-1 5.250575e-2 3.443669e-2 2.585500e-5 7.278104e+1 5.000000e-1 2.137055e-2 2.601701e-2 8.999800e-5 9.077287e+1 5.000000e-1 1.320498e-2 2.961020e-2-1.011600e-5 9.248130e+1 5.000000e-1 3.069610e-3 2.786329e-2-6.317000e-5 1.049935e+2 5.000000e-1 4.218794e-2 3.321955e-2-5.097000e-6 1.216283e+2 5.000000e-1 1.432105e-2 3.077165e-2 4.300300e-5 Code: perl -lne 's/(\.\d+)(\+|\-)/\1e\2/g; print' sample Explanation: -lne take care of line endings, process each input line, execute the code that follows s/(\.\d+)(\+|\-)/\1e\2/g : substitute ( s ) (.\d+)(\+|\-) find two groups of (a dot and numbers) and (a plus or minus) \1e\2 substitute them with the first group then e then the second group g globally - don't stop at the first substitution in each line, but process all possible hits print print the line sample input file This one adds space if it's missing. In fact it puts space between the numbers regardless. Ie. if there were two spaces in some case, there would be only one in the output. perl -lne 's/(\.\d+)(\+|\-)(\d+)(\s*)/\1e\2\3 /g; print' sample Most of it is similar to the previous one. The new thing is the (\d+) group nr 3 and the (\s*) group nr 4. * here means optional. In the substitution no \4 is used. There's a space instead. The output is this: 1.056000e+0 5.000000e-1 2.454400e-3 2.914800e-2 8.141500e-6 2.043430e+1 5.000000e-1 2.750500e-3 2.698100e-2 -2.034300e-4 3.829842e+1 5.000000e-1 1.969923e-2 2.211364e-2 9.499900e-6 4.168521e+1 5.000000e-1 1.601262e-2 3.030919e-2 -3.372000e-6 6.661784e+1 5.000000e-1 5.250575e-2 3.443669e-2 2.585500e-5 7.278104e+1 5.000000e-1 2.137055e-2 2.601701e-2 8.999800e-5 9.077287e+1 5.000000e-1 1.320498e-2 2.961020e-2 -1.011600e-5 9.248130e+1 5.000000e-1 3.069610e-3 2.786329e-2 -6.317000e-5 1.049935e+2 5.000000e-1 4.218794e-2 3.321955e-2 -5.097000e-6 1.216283e+2 5.000000e-1 1.432105e-2 3.077165e-2 4.300300e-5
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/504412", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/209617/" ] }
504,444
There is JSON data which contains some numeric values. How can I convert all the numeric values to strings (i.e. wrap them in quotes)? Example: { "id":1, "customer":"user", "plate":"BMT-216-A", "country":"GB", "amount":1000, "pndNumber":20000, "zoneNumber":4} should become { "id":"1", "customer":"user", "plate":"BMT-216-A", "country":"GB", "amount":"1000", "pndNumber":"20000", "zoneNumber":"4"}
$ jq 'map_values(tostring)' file.json{ "id": "1", "customer": "user", "plate": "BMT-216-A", "country": "GB", "amount": "1000", "pndNumber": "20000", "zoneNumber": "4"} Redirect to a new file and then move that to the original filename. For a more thorough conversion of numbers in non-flat structures into strings, consider jq '(..|select(type == "number")) |= tostring' file.json This would examine every value recursively in the given document, and select the ones that are numbers. The selected values are then converted into strings. It would also, strictly speaking, look at the keys, but since these can't be plain numbers in JSON, no key would be selected. Example: $ jq . file.json{ "a": { "b": 1 }, "b": null, "c": [ 1, 2, "hello", 4 ]} $ jq '(..|select(type == "number")) |= tostring' file.json{ "a": { "b": "1" }, "b": null, "c": [ "1", "2", "hello", "4" ]} To additionally quote the null , change the select() to select(type == "number" or type == "null")
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/504444", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/329780/" ] }
504,652
I am running interactive R in a terminal, however it doesn't use all of the width of the terminal. It only uses 72 characters out of 226. It is very uncomfortable to read any data with a lot of columns displayed in interactive R. I am using urxvt on Debian 9.8.
See ?option : ‘width’: controls the maximum number of columns on a line used in printing vectors, matrices and arrays, and when filling by ‘cat’. Columns are normally the same as characters except in East Asian languages. You may want to change this if you re-size the window that R is running in. Valid values are 10...10000 with default normally 80. (The limits on valid values are in file ‘Print.h’ and can be changed by re-compiling R.) Some R consoles automatically change the value when they are resized. To query the value: R> getOption("width")[1] 80 To change the value (add this to ~/.Rprofile to change it permanently): options("width"=200)
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/504652", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/267716/" ] }
504,801
I have 2 different machines - one running RHEL7 and one running CentOS-7.5. find --version reports version 4.5.11 on each. I've created the following directory structure on each: ./dir/some-file ./.hidden/dir/some-file When I run find -name some-file on the RHEL7 machine, I get output which matches the above. But when I run find on my CentOS-7.5 machine, my results are listed in reversed order. Why is this?
The order in which find traverses the directory structures of its search paths is probably the order in which the readdir() library function returns the directory entries in. These entries are not further ordered by find and will therefore likely depend on the order in which the directory entries were created in the filesystem, and maybe even on the order in which other files and directories on the same partition were created and deleted, depending on the filesystem implementation. You will get the same ordering in the output of ls -f .
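If you need the results in a stable order regardless of the underlying filesystem, sort them yourself; for example (a small sketch; use the NUL-separated variant if your file names may contain newlines):

find . -name some-file | sort
find . -name some-file -print0 | sort -z | tr '\0' '\n'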
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/504801", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/140533/" ] }
504,804
I have a file called foo.txt . I want to associate my own program with the mime-type .txt so that my program opens a terminal and shows the contents of foo.txt as standard output. I would prefer Ruby, but BASH scripting will also be OK. An working example: I can open an HTML file with firefox . I want to open txt files with my own executable the same way. I can't figure out how can I actually get it working? Example 2: I can open a .txt file with Geany/Mousepad/Atom/Code etc.Let's suppose I have made a tool just like mousepad. How should my program handle the .txt mimetype? So far I have made a small GUI program with Ruby and made it executable and tried to open foo.txt with my program (I used the Nemo file manager). I have captured arguments, and stdins in my Ruby program so it will show the Argument and STDINs if any. But my program doesn't even show up the window if I open a .txt file with it! How am I supposed to achieve the result?
Introduction When a file is opened with an application, the file is just passed to the application as an argument. So when you open firefox-nightly located in /bin/ with p.html located in /home/user/ , it's basically similar to run /bin/firefox-nightly /home/user/p.html . Creating an Executable As mentioned in the question: So far I have made a small GUI program with Ruby and made it executable and tried to open foo.txt with my program (I used the Nemo file manager). Let us create a Ruby program as asked by the OP that will copy the contents of a file passed as an argument to /tmp/tempfile-#{n} . Note that any programming language will work if it can accept command line arguments. #!/usr/bin/env ruby# ErrorsERR_NO_FILE = 2FILE = $*[0]begin tempfile, counter = File.join(%W(/ tmp tempfile)), 0 tempfile.replace(File.join(%W(/ tmp tempfile-#{counter += 1}))) while File.exist?(tempfile) IO.write(tempfile, IO.read(FILE))rescue Errno::ENOENT exit!(ERR_NO_FILE)rescue Interrupt, SystemExit, SignalException exit!(0)rescue Exception abort($!.backtrace.join(?\n))end if FILE And let's call our program copycat.rb , and move it to /tmp/ directory. We can surely run the program on a terminal like this: /tmp/copycat.rb /tmp/AFile A this will copy all the contents of /tmp/AFile to /tmp/tempfile-#{n} Where #{n} is the count in case any duplicate tempfile exists. Creating an Application Entry Now to open this with our program from the file manager, we need to create an Application entry. To do that, we have 2 options. The first options is : Create a file called copycat.desktop in $HOME/.local/share/applications , with the following content: [Desktop Entry]Version=1.0Type=ApplicationName=CopyCatComment=Copy the contents of a file to /tmp/tempfile#{n}Exec=/tmp/copycat.rb %uIcon=edit-copyPath=Terminal=falseStartupNotify=falseActions=Categories=Copy;Utility Don't forget to add the %u option to the line starts with Exec. Testing Well, to test, let's create a simple file called with the content 'hello world' or anything you want. Open your file manager, and click your secondary mouse button on the file, and select "Open With" or similar option. Because it's GUI related, I will add some sample pictures. Nautilus, "Open With Other Application": Nautilus, "View All Applications": "CopyCat": When done, you can see the tempfile-#{n} created in /tmp/ The file manager I used here is Nautilus, but this should work with other file managers as well, just the text might differ. The second option is to make the application available for all users. To do that, you have to move the file from $HOME/.local/share/applications/copycat.desktop to /usr/share/applications and change the ownership to root. This is how the open with a custom executable works in Linux.A similar GUI app can be created and opened in the same way.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/504804", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/274717/" ] }
504,808
Edit: Important note before reading further: since Kusalananda's anwser , my question is now quite useless, since he pointed out a bug in another script I used. Now this bug is fixed, the below errors I described no longer occur! There are two issues in your code which causes the value in $extension to be expanded The second issue is the function itself TL;DR I was stuck when I started writing the question, but found a solution that works (using shell noglob ), BUT I still have a question of "would it be possible to write". I'm sure this has already been answered, but I get lost in hundreds of topics about bash expansion. Here is my case: I want to run a script (lets say ffmpeg with option -pattern_type glob -i mode that accepts a regexp like *.JPG ) taking an argument (lets say *.JPG ) argument coming from a variable (lets say $extension ) variable coming from script parameters (lets say $1 ) but I do not want shell to expand $extension as by default for a pattern like *.JPG Note that all this actions take place in a script script.sh . Thus, the user could run: script.sh 'JPG' ; # where $1='JPG' Note: in script.sh , I concatenate *. and $1 to get a more final regexp like *.JPG I read (from here https://stackoverflow.com/a/11456496/912046 ) that shell expansion occurs before ffmpeg receives arguments ( ffmpeg even does not know Shell expansion took place) I tried to play with quotes, double quotes, or backslashes around ${extension} in different ways, unsuccessfully. Here are some tries: ffmpeg ... "${extension}" leads to: ffmpeg 1.jpg 2.jpg [...] (expansion occurs) ffmpeg ... ${extension} same ffmpeg ... '${extension}' leads to no file match because of searching pattern ${extension} (variable name is used literally) extension="\'${extension}\'" ; # wrapping the single quotes (preventing expansion) directly in variableffmpeg ... ${extension} leads to no file match because of searching pattern '*.jpg' (including single quotes) ffmpeg ... \"${extension}\" same but searching for "*.jpg" (including double quotes) I finally succeeded using Shell noglob option. Here is the command for curious: set -f ; # disable glob (prevent expansion on '*.jpg')ffmpeg -pattern_type glob -i ${extension} movie.mp4 ;set +f ; # restore, but what if glob was already disabled? I should not restore it then? But for my curiosity, is there another way of preventing expansion on a script argument coming from a variable we WANT to be resolved? (thus, the way I know of preventing expansion (using single quotes) makes variable no longer resolved/replaced). Something like : (but I read string concatenation should be avoided for safety, injection) ffmpeg '${extension}' || | 3 || 2 | 1 where : 1: would open single quote for final result being ffmpeg '*.jpg' 2: would inject and resolve our variable extension to .jpg 3: would close single quote for ffmpeg to receive argument as if I manually had written '*.jpg' Edit: after comment about how extension variable is assigned: local extension="${2}" ;echo $extension ;extension="${extension:=jpg}" ; # Default on "jpg" extension# wrapp extension type in single quotes + prefix with "*."extension="*.${extension}" ; Then, using it with: runprintcommand ffmpeg -loglevel verbose -pattern_type glob -i ${extension} "$movieName" ; Where the runprintcommand function/script being: function runprintcommand() { echo "Command to run: (Note: '\\' escape character may not be printed)" ; echo "$*" ; $* ; }
An unquoted variable would undergo variable expansion, word-splitting (on spaces, tabs and newlines by default), and each generated word would in turn undergo filename generation (globbing). A variable within double quotes would have its value expanded, but the shell would not do word-splitting or globbing on the expanded value. A variable within single quotes would not be expanded at all. Example: $ lsscript.sh $ var='* *' $ echo $varscript.sh script.sh $ echo "$var"* * $ echo '$var'$var There are two issues in your code which causes the value in $extension to be expanded to the glob pattern that it contains and also causes the glob pattern to be matched against filenames prematurely (you want to hand it as-is to ffmpeg -i which does its own glob expansion internally). The first is your call to your function: runprintcommand ffmpeg -loglevel verbose -pattern_type glob -i ${extension} "$movieName" ; Here, ${extension} is unquoted, so you would definitely get (word-splitting and) filename globbing happening on its value. The second issue is the function itself: function runprintcommand() { echo "Command to run: (Note: '\\' escape character may not be printed)" ; echo "$*" ; $* ; } Here, you use $* unquoted, which again would (split the value up into words and then) expand the glob, even if you double-quoted ${extension} in the call to the function. Instead of a bare $* , use "$@" (with the double quotes). This would expand to the individually quoted positional parameters of the function. This is the difference between "$@" and "$*" : "$*" is a single double-quoted string . This can generally not be used to execute a command with arguments. "$@" is a list of double quoted words . This could be used to execute a command with arguments.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/504808", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/168003/" ] }
504,892
I have a big 4TB text file exported from Teradata records, and I want to know how many records (= lines in my case) there are in that file. How may I do this quickly and efficiently?
If this information is not already present as meta data in a separate file (or embedded in the data, or available through a query to the system that you exported the data from) and if there is no index file of some description available, then the quickest way to count the number of lines is by using wc -l on the file. You can not really do it quicker. To count the number of records in the file, you will have to know what record separator is in used and use something like awk to count these. Again, that is if this information is not already stored elsewhere as meta data and if it's not available through a query to the originating system, and if the records themselves are not already enumerated and sorted within the file.
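As an illustration (assuming the records are newline-terminated, or terminated by some known single-byte separator; the \036 used below is only an assumed example):

wc -l < bigfile.txt                                      # count newline-terminated lines
awk 'END { print NR }' bigfile.txt                       # same result, more flexible
awk 'BEGIN { RS="\036" } END { print NR }' bigfile.txt   # count records split on a custom single-byte record separator

For a 4 TB file all of these are I/O bound, so wc -l is about as fast as it gets.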
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/504892", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/310604/" ] }
504,896
I am using arrays in bash and one particular array is behaving unusually. I am using a function and calling an external script which returns a value to be appended to an array as follows: function get_unit_coverage() { for sub_unit in "$@" do extracted_value=$( ./external_script.sh $file $sub_unit ) my_array+=$extracted_value done} I pass this function an array and expect a value to be appended to the array on each iteration. However the return of: echo "${my_Array[0]}" is 52.5500%66.6400%16.4300%47.8800%40.6600%45.6800%43.3400%74.5100%87.4600%45.6300%65.6100%58.0900%%47.5800%5.9500%7.6500%1.8000% The external_script.sh simply echoes these values, is this a potential issue?
To append new elements to an array: array+=( new elements here ) In your case: my_array+=( "$extracted_value" ) When you do array+=$variable you are appending to the first element of the array. It is the same as array[0]+=$variable Also note that in extracted_value=$( ./external_script.sh $file $sub_unit ) the values $file and $sub_unit will be split on whitespace and undergo filename globbing. To prevent this, use "$file" and "$sub_unit" instead (i.e. double-quote the variable expansions). Likewise, saying my_array+=( $extracted_value ) would split the value of $extracted_value into multiple words, and each word would undergo filename globbing to generate new element in the array. That would be better written (as already mentioned), my_array+=( "$extracted_value" ) This is general advice and there's no reason to not do this regardless of whether you know your values are already single words containing no globbing characters.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/504896", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/340538/" ] }
504,902
I want to redirect the output of rsync to a particular directory with the date and time stamp. Example: rsync -r source-dir dest-dir/current-date-time Is there any way the "current-date-time" folder can be created automatically? My main aim is to run the rsync command in a cron job and I want the output to be stored in multiple directories (with the date and time) under destination. Is that possible in single rsync command? I do understand -t preserves modification time so I may use rsync -avH -t <source> <dest> but is the directory creation (with date and time) possible at the destination?
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/504902", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/340544/" ] }
504,911
I don't know how nor why but my Debian systemd stopped starting any service, particularly sshd service so I can't access to my machine. It's a headless machine without monitor and moreover I can't connect one. I know systemd is working because when I boot with a USB with puppy, and I chroot the debian partition, I can read logs with journalclt. Everything seems working OK, but the system suddenly gets stucked receiving watchdog events ad infinitum . I paste the tail of the output: [...]mar 06 19:36:26 DEBIAN-LXDE systemd[1]: [email protected]: Child 560 belongs to [email protected] 06 19:36:26 DEBIAN-LXDE systemd[1]: [email protected]: Main process exited, code=exited, status=0/SUCCESSmar 06 19:36:26 DEBIAN-LXDE systemd[1]: [email protected]: Failed to destroy cgroup /system.slice/[email protected], ignoring: Device or resource busymar 06 19:36:26 DEBIAN-LXDE systemd[1]: [email protected]: Changed running -> exitedmar 06 19:36:32 DEBIAN-LXDE systemd[1]: systemd-udevd.service: Got notification message from PID 293 (WATCHDOG=1)mar 06 19:36:44 DEBIAN-LXDE ntpdate[729]: step time server 158.227.98.15 offset 9.571738 secmar 06 19:36:44 DEBIAN-LXDE systemd[1]: Time has been changedmar 06 19:36:44 DEBIAN-LXDE systemd[1]: Set up TFD_TIMER_CANCEL_ON_SET timerfd.mar 06 19:36:44 DEBIAN-LXDE systemd[1]: Received SIGCHLD from PID 722 (ntpdate).mar 06 19:36:44 DEBIAN-LXDE systemd[1]: Child 722 (ntpdate) died (code=exited, status=0/SUCCESS)mar 06 19:36:44 DEBIAN-LXDE systemd[1]: [email protected]: Child 722 belongs to [email protected] 06 19:36:44 DEBIAN-LXDE systemd[1]: [email protected]: cgroup is emptymar 06 19:36:44 DEBIAN-LXDE systemd[1]: systemd-journald.service: Received EPOLLHUP on stored fd 19 (stored), closing.mar 06 19:36:44 DEBIAN-LXDE systemd[1]: Got cgroup empty notification for: /system.slice/[email protected] 06 19:36:44 DEBIAN-LXDE systemd[1]: [email protected]: cgroup is emptymar 06 19:37:02 DEBIAN-LXDE systemd[1]: systemd-udevd.service: Got notification message from PID 293 (WATCHDOG=1)mar 06 19:37:22 DEBIAN-LXDE systemd[1]: systemd-udevd.service: Got notification message from PID 293 (WATCHDOG=1)mar 06 19:37:42 DEBIAN-LXDE systemd[1]: systemd-udevd.service: Got notification message from PID 293 (WATCHDOG=1)mar 06 19:38:02 DEBIAN-LXDE systemd[1]: systemd-udevd.service: Got notification message from PID 293 (WATCHDOG=1)mar 06 19:38:02 DEBIAN-LXDE systemd[1]: systemd-journald.service: Got notification message from PID 237 (WATCHDOG=1)mar 06 19:38:22 DEBIAN-LXDE systemd[1]: systemd-udevd.service: Got notification message from PID 293 (WATCHDOG=1)mar 06 19:38:42 DEBIAN-LXDE systemd[1]: systemd-udevd.service: Got notification message from PID 293 (WATCHDOG=1)mar 06 19:39:02 DEBIAN-LXDE systemd[1]: systemd-udevd.service: Got notification message from PID 293 (WATCHDOG=1)mar 06 19:39:22 DEBIAN-LXDE systemd[1]: systemd-udevd.service: Got notification message from PID 293 (WATCHDOG=1)mar 06 19:39:42 DEBIAN-LXDE systemd[1]: systemd-udevd.service: Got notification message from PID 293 (WATCHDOG=1)mar 06 19:39:42 DEBIAN-LXDE systemd[1]: systemd-journald.service: Got notification message from PID 237 (WATCHDOG=1)mar 06 19:40:02 DEBIAN-LXDE systemd[1]: systemd-udevd.service: Got notification message from PID 293 (WATCHDOG=1)mar 06 19:40:22 DEBIAN-LXDE systemd[1]: systemd-udevd.service: Got notification message from PID 293 (WATCHDOG=1)mar 06 19:40:42 DEBIAN-LXDE systemd[1]: systemd-udevd.service: Got notification message from PID 293 (WATCHDOG=1)mar 06 19:41:02 DEBIAN-LXDE systemd[1]: 
systemd-udevd.service: Got notification message from PID 293 (WATCHDOG=1)mar 06 19:41:02 DEBIAN-LXDE systemd[1]: systemd-journald.service: Got notification message from PID 237 (WATCHDOG=1)mar 06 19:41:22 DEBIAN-LXDE systemd[1]: systemd-udevd.service: Got notification message from PID 293 (WATCHDOG=1)mar 06 19:41:42 DEBIAN-LXDE systemd[1]: systemd-udevd.service: Got notification message from PID 293 (WATCHDOG=1)mar 06 19:42:02 DEBIAN-LXDE systemd[1]: systemd-udevd.service: Got notification message from PID 293 (WATCHDOG=1)mar 06 19:42:22 DEBIAN-LXDE systemd[1]: systemd-udevd.service: Got notification message from PID 293 (WATCHDOG=1)mar 06 19:42:42 DEBIAN-LXDE systemd[1]: systemd-udevd.service: Got notification message from PID 293 (WATCHDOG=1)[...] Nevertheless the logs in /var/log/ were written few days ago. I thought the problem could be because some usb drive attached but I disconnected all of them and the result is the same. I read a lot in the web but I could't find anyting similar. Only about specific services. I tried to write a check service as: [Unit]Description=Avisa cuando se arranca el sistemaRequires=network.targetAfter=network.target[Service]Type=simpleRemainAfterExit=noExecStart=/usr/bin/mail -s "AVISOOOO" [email protected][Install]WantedBy=default.target But the mail is never sent. How could I find out what the problem is? UPDATE: I've finally managed to start the system plugin all the usb devices again. I think it was that point because I tried several thigs: reinstall grub, check the main partition, etc. Thank you for all of you that took a moment to give a hand. Thanks.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/504911", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/340526/" ] }
504,913
I have a large set of files in a directory. The files contain arbitrary text. I want to search for the file name inside that particular file's text. To clarify, I have file1.py.txt (yes, two dots: .py.txt) and file2.py.txt and both contain text. I want to search for the existence of the string @code prefix.file1.py inside file1.py.txt and for the string @code prefix.file2.py inside file2.py.txt How can I customize grep such that it goes through every file in the directory and searches for the string in each file using that particular file name? EDIT: The output I am looking for is written in a separate file, result.txt which contains: filename (if a match is found), the line text (where the match is found)
With GNU awk : gawk ' BEGINFILE{search = "@code prefix." substr(FILENAME, 3, length(FILENAME) - 6)} index($0, search)' ./*.py.txt Would report the matching lines. To print the file name and matching line, change index($0, search) to index($0, search) {print FILENAME": "$0} Or to print the file name only: index($0, search) {print FILENAME; nextfile} Replace FILENAME with substr(FILENAME, 3) to skip outputting the ./ prefix. The list of files is lexically sorted. The ones whose name starts with . are ignored (some shells have a dotglob option to add them back; with zsh , you can also use the (D) glob qualifier).
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/504913", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/299440/" ] }
504,963
I've installed Kubuntu 18.04 on a desktop with an ethernet connection. During the installation, updates were downloaded and the internet was working fine. Once the operating system is installed, every time I try to ping any website I get the following error: Temporary failure in name resolution I've tried the ethernet cable on different computers and it works, so I don't know what I should do next.
There are different possible reasons for a failure in name resolution. You don't have any internet connectivity. Try ping -c4 8.8.8.8 If you get answers, then your internet connection works. Else find out why it doesn't You have the wrong resolver. Type cat /etc/resolv.conf You should see at least one line nameserver a.b.c.d The a.b.c.d is typically the address of your router. If there is no such line, add one. If there is such a line, but it doesn't work, of if you don't know the address of your router, try nameserver 8.8.8.8 . This uses the Google DNS servers at 8.8.8.8 .
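Note that on Ubuntu 18.04, /etc/resolv.conf is normally a symlink managed by systemd-resolved, so a manual edit can be overwritten again. A more persistent way (sketched here under the assumption that the wired interface is handled by netplan and is called eth0; adjust to your real device name) is to declare the DNS servers in a netplan file such as /etc/netplan/01-netcfg.yaml:

network:
  version: 2
  ethernets:
    eth0:
      dhcp4: true
      nameservers:
        addresses: [8.8.8.8, 8.8.4.4]

and then run sudo netplan apply. On a desktop where NetworkManager manages the connection, set the DNS servers in the connection editor of your desktop environment instead.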
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/504963", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/340614/" ] }
504,972
Bitdefender Antivirus for Mac keeps notifying me that it has blocked App:/bin/rm with the following message: An app you previously chose to block attempted to access your protected files again. We blocked the app to prevent it from altering the content of your protected files. Should I remove or freeze this App entirely or should I free it up to perform as needed by OSX? If I need to remove it, please guide me with proper command sequence. I blocked this awhile back on an alert from BitDefender, and it quietly continues to block it in the background. It was only after checking Bitdefender Notifications that I saw this oft repeated sequence. Having just gone into my BD settings to make adjustments, I now see /bin/rm and /bin/rmdir blocked in the App Access menu. I have unblocked them now.
It is a friend. It is a very, very good friend. The type you invite into your house and let them keep a copy of your keys for you. The /bin/rm command is one of the basic tools of the operating system and the default command used when you want to delete a file. Removing it will most likely break your system since there are bound to be various programs that expect it to exist and use it 1 . I am guessing (and I stress that this is a guess, I don't know how bitdefender works) that bitdefender blocked something, possibly malware, which was attempting to delete a file from your computer. Presumably, it tried to delete it using /bin/rm , and so bitdefender blocked /bin/rm . However, completely disabling this essential utility is sort of like making rocks illegal because someone threw one at you once. The rm utility is a tool, one that can be used safely or dangerously, depending on how it is used. 1 I actually just tried this on an Ubuntu Virtual Machine. To my surprise, deleting /bin/rm didn't stop the machine from shutting down and rebooting normally. I can also still delete files from the GUI file manager. Nevertheless, deleting this sort of basic utility is not a good idea and I still expect it to cause problems at some point.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/504972", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/340620/" ] }
505,008
How can signal-desktop messages be exported? I want to backup my correspondence. Is it possible at all?
Yes, it is possible. Just save this in a file <yourFilename> : sigBase="${HOME}/.config/Signal/";key=$( /usr/bin/jq -r '."key"' ${sigBase}config.json );db="${HOME}/.config/Signal/sql/db.sqlite";clearTextMsgs="${sigBase}clearTextMsgs.csv";/usr/bin/sqlcipher -list -noheader "$db" "PRAGMA key = \"x'"$key"'\";select json from messages;" > "$clearTextMsgs"; and call it via bash <yourFilename> . Or render it executable with chmod 700 <yourFilename> and call it directly: ./<yourFilename> This script uses sqlcipher and jq with signal-desktop's database key to open, decrypt and extract all messages in JSON format into clearTextMsgs.csv inside your signal-desktop folder ~/.config/Signal . Besides the key extraction by filtering JSON with jq (from ~/.config/Signal/config.json ), the crucial bit happens here: sqlcipher -list -noheader <DB> <SQL> where <SQL> contains the PRAGMA key definition and the actual SQL statement ( SELECT json FROM messages; ). One can then use jq to access any key/value from the messages backup. You have to install sqlcipher and jq for that: sudo apt install sqlcipher jq Note: While this does extract all messages, we need to specify that " all " in signal-desktop has the meaning of " all messages actually loaded ". So, in order to extract every single message, the slider of the active contact has to be slid way up, then signal-desktop will load previously not availalble messages (lather rinse repeat until satisfied). Do so as far in the past you would want your messages loaded. This gets tedious quite quickly. Remember to do so for all of your contacts' histories. Having that said, it is technically feasable to backup your message history, in practice it is a manual job. A way around this might be a cron job backing up all recent messages, maybe once a day. Then this is likely to contain duplicates and might miss messages in case signal-desktop has been restarted. In any case, this method is working fine if the (not too far -- read: a couple of months maybe) history is to be searched programmatically once in a while.
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/505008", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/259089/" ] }
505,036
A router is configured with DD-WRT and features remote SSH access. A raspberry pi computer (Raspbian) is connected to the router's LAN. Once logged into the router via SSH, attempts to SSH from the router into the pi returns: ssh [email protected] ssh: connection to [email protected]:22 exited: no matching algo kex Shelling into other LAN connected DD-WRT routers from the DD-WRT Gateway is successful. Questions: What is the cause of the error message and what can be done to enable SSH access? What exactly does no matching algo kex imply? UPDATES ssh [email protected] -oKexAlgorithms=+diffie-hellman-group1-sha1 WARNING: Ignoring unknown argument '-oKexAlgorithms=+diffie-hellman-group1-sha1' ssh: connection to [email protected]:22 exited: no matching algo kex user@raspberrypi:~ $ ssh -V OpenSSH_6.7p1 Raspbian-5+deb8u2, OpenSSL 1.0.1t 3 May 2016 DD-WRT is running dropbear v0.52: user@DD-WRT:~ $ ssh -V WARNING: Ignoring unknown argument '-v' Dropbear client v0.52 Usage: ssh [options] [user@]host[/port][,[user@]host/port],...] [command] Options are: -p -l -t Allocate a pty-T Don't allocate a pty-N Don't run a remote command-f Run in background after auth-y Always accept remote host key if unknown-s Request a subsystem (use for sftp)-i (multiple allowed)-L Local port forwarding-g Allow remote hosts to connect to forwarded ports-R Remote port forwarding-W (default 24576, larger may be faster, max 1MB)-K (0 is never, default 0)-I (0 is never, default 0)-B Netcat-alike forwarding-J Use program pipe rather than TCP connection
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/505036", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/182280/" ] }
505,112
I know that you can display interfaces by doing ip a show . That only displays the interfaces that the host can see, but virtual interfaces configured by containers don't appear in this list. I've tried using ip netns as well, and they don't show up either. Should I recompile another version of iproute2 ? In /proc/net/fib_trie , you can see the local/broadcast addresses used, I assume, for the forwarding database. Where can I find any of this information, or a command to list all interfaces including containers? To test this out, start up a container. In my case, it is an LXC container on snap. Do an ip a or ip l . It will show the host machine's view, but not the container-configured interface. I'm grepping through procfs , since containers are just cgrouped processes, but I don't get anything other than the fib_trie and the arp entry. I thought it could be due to a netns namespace obfuscation, but ip netns also shows nothing. You can use conntrack -L to display all incoming and outgoing connections that are established, because lxd needs to connection track the forwarding of the packets, but I'd like to list all IP addresses that are configured on the system, like how I'd be able to tell using netstat or lsof .
An interface, at a given time, belongs to one network namespace and only one. The init (initial) network namespace, except for inheriting physical interfaces of destroyed network namespaces has no special ability over other network namespaces: it can't see directly their interfaces. As long as you are still in init's pid and mount namespaces, you can still find the network namespaces by using different informations available from /proc and finally display their interfaces by entering those network namespaces. I'll provide examples in shell. enumerate the network namespaces For this you have to know how those namespaces are existing: as long as a resource keep them up. A resource here can be a process (actually a process' thread), a mount point or an open file descriptor (fd). Those resources are all referenced in /proc/ and point to an abstract pseudo-file in the nsfs pseudo-filesystem enumerating all namespaces. This file's only meaningful information is its inode, representing the network namespace, but the inode can't be manipulated alone, it has to be the file. That's why later we can't just keep only the inode value (given by stat -c %i /proc/some/file ): we'll keep the inode to be able to remove duplicates and a filename to still have an usable reference for nsenter later. process (actually thread) The most common case: for usual containers. Each thread's network namespace can be known via the reference /proc/pid/ns/net : just stat them and enumerate all unique namespaces. The 2>/dev/null is to hide when stat can't find ephemeral processes anymore. find /proc/ -mindepth 1 -maxdepth 1 -name '[1-9]*' | while read -r procpid; do stat -L -c '%20i %n' $procpid/ns/netdone 2>/dev/null This can be done faster with the specialized lsns command which deals with namespaces, but appears to handle only processes (not mount points nor open fd as seen later): lsns -n -u -t net -o NS,PATH (which would have to be reformatted for later as lsns -n -u -t net -o NS,PATH | while read inode path; do printf '%20u %s\n' $inode "$path"; done ) mount point Those are mostly used by the ip netns add command which creates permanent network namespaces by mounting them, thus avoiding them disappearing when there is no process nor fd resource keeping them up, then also allowing for example to run a router, firewall or bridge in a network namespace without any linked process. Mounted namespaces (handling of mount and perhaps pid namespaces is probably more complex but we're only interested in network namespaces anyway) appear like any other mount point in /proc/mounts , with the filesystem type nsfs . There's no easy way in shell to distinguish a network namespace from an other type of namespace, but since two pseudo-files from the same filesystem (here nsfs ) won't share the same inode, just elect them all and ignore errors later in the interface step when trying to use a non-network namespace reference as network namespace. Sorry, below I won't handle correctly mount points with special characters in them, including spaces, because they are already escaped in /proc/mounts 's output (it would be easier in any other language), so I won't bother either to use null terminated lines. 
awk '$3 == "nsfs" { print $2 }' /proc/mounts | while read -r mount; do stat -c '%20i %n' "$mount"done open file descriptor Those are probably even more rare than mount points except temporarily at namespace creation, but might be held and used by some specialized application handling multiple namespaces, including possibly some containerization technology. I couldn't devise a better method than search all fd available in every /proc/pid/fd/ , using stat to verify it points to a nsfs namespace and again not caring for now if it's really a network namespace. I'm sure there's a more optimized loop, but this one at least won't wander everywhere nor assume any maximum process limit. find /proc/ -mindepth 1 -maxdepth 1 -name '[1-9]*' | while read -r procpid; do find $procpid/fd -mindepth 1 | while read -r procfd; do if [ "$(stat -f -c %T $procfd)" = nsfs ]; then stat -L -c '%20i %n' $procfd fi donedone 2>/dev/null Now remove all duplicate network namespace references from previous results. Eg by using this filter on the combined output of the 3 previous results (especially from the open file descriptor part): sort -k 1n | uniq -w 20 in each namespace enumerate the interfaces Now we have the references to all the existing network namespaces (and also some non-network namespaces which we'll just ignore), simply enter each of them using the reference and display the interfaces. Take the previous commands' output as input to this loop to enumerate interfaces (and as per OP's question, choose to display their addresses), while ignoring errors caused by non-network namespaces as previously explained: while read -r inode reference; do if nsenter --net="$reference" ip -br address show 2>/dev/null; then printf 'end of network %d\n\n' $inode fidone The init network's inode can be printed with pid 1 as reference: echo -n 'INIT NETWORK: ' ; stat -L -c %i /proc/1/ns/net Example (real but redacted) output with a running LXC container,an empty "mounted" network namepace created with ip netns add ... having an unconnected bridge interface, a network namespace with an other dummy0 interface, kept alive by a process not in this network namespace but keeping an open fd on it, created with: unshare --net sh -c 'ip link add dummy0 type dummy; ip address add dev dummy0 10.11.12.13/24; sleep 3' & sleep 1; sleep 999 < /proc/$!/ns/net & and a running Firefox which isolates each of its "Web Content" threads in an unconnected network namespace (all those down lo interfaces): lo UNKNOWN 127.0.0.1/8 ::1/128 eth0 UP 192.0.2.2/24 2001:db8:0:1:bc5c:95c7:4ea6:f94f/64 fe80::b4f0:7aff:fe76:76a8/64 wlan0 DOWN dummy0 UNKNOWN 198.51.100.2/24 fe80::108a:83ff:fe05:e0da/64 lxcbr0 UP 10.0.3.1/24 2001:db8:0:4::1/64 fe80::216:3eff:fe00:0/64 virbr0 DOWN 192.168.122.1/24 virbr0-nic DOWN vethSOEPSH@if9 UP fe80::fc8e:ff:fe85:476f/64 end of network 4026531992lo DOWN end of network 4026532418lo DOWN end of network 4026532518lo DOWN end of network 4026532618lo DOWN end of network 4026532718lo UNKNOWN 127.0.0.1/8 ::1/128 eth0@if10 UP 10.0.3.66/24 fe80::216:3eff:fe6a:c1e9/64 end of network 4026532822lo DOWN bridge0 UNKNOWN fe80::b884:44ff:feaf:dca3/64 end of network 4026532923lo DOWN dummy0 DOWN 10.11.12.13/24 end of network 4026533021INIT NETWORK: 4026531992
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/505112", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/340756/" ] }
505,188
In zsh , with glob expansion we can use the P flag to prepend each match with some text: $ ls -1barbazfoo$ print -- *(P:--file:)--file bar --file baz --file foo Is there something analogous for array parameters? If I write items=(foo bar baz) is there some flag I can use to get behavior like this? $ print -- ${(...)items}--file foo --file bar --file baz
The best answer depends on what you need the modified array for. If you’re generating command-line arguments (as in the question), and you’re using a tool that accepts flags like --file=foo as an alternative to --file foo , the most idiomatic approach is to use the ${^name} form of expansion: $ items=(foo "two words" baz)$ print -lr -- --file=${^items}--file=foo--file=two words--file=baz I’m using print -lr to demonstrate that the expansion produces an array with three elements. If you call a command like this, the command will see three arguments. If you’re generating command-line arguments, you can put the text to prepend ( --file in this example) in a dummy array and combine that with the arguments array using the ${name:^^arrayname} form of expansion: $ flag=(--file)$ items=(foo "two words" baz)$ print -lr -- ${flag:^^items}--filefoo--filetwo words--filebaz This is an array with six elements. Either this format or the one in (1)—or both—should be acceptable to just about any command-line tool for passing flags with associated values. You can modify the approach in (1) to use a space instead of an equals sign: $ items=(foo "two words" baz)$ print -lr -- --file\ ${^items}--file foo--file two words--file baz (Note that you have to escape the space with a backslash; using double quotes like "--file ${^items}" means something else.) This is an array with three elements but with an embedded space in each element. Few, if any, command line tools will accept flags in this format, but it could be useful for other kinds of text manipulation. You can also use the ${name/pattern/repl} form of expansion. The pattern should start with # (to indicate that it needs to match at the beginning of each array element) but it should otherwise be empty (so that every array element will match and so that no text is actually replaced). $ items=(foo "two words" baz)$ print -lr -- ${items/#/--file }--file foo--file two words--file baz This has the same caveats as (3), and it’s also less idiomatic than that approach. Note that all four of these approaches will do the right thing (i.e., they won’t suddenly split strings on spaces) when the array elements contain spaces.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/505188", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/88560/" ] }
505,453
Why does cron require MTA for logging? Is there any particular advantage to this? Why can't it create a log file like most other utilities?
Consider that the traditional "standard" way of logging data is syslog , where the metadata included in the messages are the "facility code" and the priority level. The facility code can be used to separate log streams from different services so that they can be split into different log files, etc. (even though the facility codes are somewhat limited in that they have fixed traditional meanings.) What syslog doesn't have, is a way to separate messages for or from different users, and that's something that cron needs on a traditional multi-user system. It's no use collecting the messages from all users' cron jobs to a common log file where only the system administrator can see them. On the other hand, email naturally provides for sending messages to different users, so it's a logical choice here. The alternative would be for cron to do the work manually, and to create logfiles to each users' home directory, but a traditional multi-user Unix system would be assumed to have a working MTA, so implementing it in cron would have been mostly a futile exercise. On modern systems, there might be alternative choices, of course.
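If, for a particular job, you prefer a plain log file over mail, you can do the redirection yourself in the crontab entry; a small sketch (the script and log paths are placeholders):

MAILTO=""
*/10 * * * * /home/user/bin/backup.sh >> /home/user/backup.log 2>&1

The empty MAILTO suppresses the mail for the jobs that follow it, and 2>&1 sends both stdout and stderr of the job into the log file.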
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/505453", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/220462/" ] }
505,466
Is there a way to persist the command history and search it or group it by command? In other words, I would like to list, say every grep command I have done over the last 3 months. Is that capability possible somehow?
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/505466", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/47542/" ] }
505,527
When the CPU is in user mode, the CPU can't execute privileged instructions and can't access kernel space memory. And when the CPU is in kernel mode, the CPU can execute all instructions and can access all memory. Now in Linux, a user mode program can access all memory (using /dev/mem ) and can execute the two privileged instructions IN and OUT (using iopl() I think). So a user mode program in Linux can do most things (I think most things) that can be done in kernel mode. Doesn't allowing a user mode program to have all this power defeats the purpose of having CPU modes?
So a user mode program in Linux can do most things (I think most things) that can be done in kernel mode. Well, not all user mode programs can, only those with the appropriate privileges. And that's determined by the kernel. /dev/mem is protected by the usual filesystem access permissions, and the CAP_SYS_RAWIO capability. iopl() and ioperm() are also restricted through the same capability. /dev/mem can also be compiled out of the kernel altogether ( CONFIG_DEVMEM ). Doesn't allowing a user mode program to have all this power defeats the purpose of having CPU modes? Well, maybe. It depends on what you want privileged user-space processes to be able to do. User-space processes can also trash the whole hard drive if they have access to /dev/sda (or equivalent), even though that defeats the purpose of having a filesystem driver to handle storage access. (Then there's also the fact that iopl() works by utilizing the CPU privilege modes on i386, so it can't well be said to defeat their purpose.)
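To see how restricted /dev/mem and raw I/O actually are on a given machine, something along these lines can be used (a sketch; the config path assumes a distribution that installs /boot/config-<version>, and capsh comes from libcap):

grep -E 'CONFIG_DEVMEM|CONFIG_STRICT_DEVMEM|CONFIG_IO_STRICT_DEVMEM' /boot/config-"$(uname -r)"   # is /dev/mem compiled in, and is access through it restricted?
capsh --print | grep -i 'cap_sys_rawio'                                                           # does the current process hold CAP_SYS_RAWIO?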
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/505527", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/341099/" ] }
505,558
My team is working on a CI environment. A ko file, named x.ko , is always generated from the CI environment at a regular time everyday and its type is ELF 64-bit LSB relocatable . Today, I found that the type of this ko file became data . I'm trying to figure out the reason. I try to cat this ko file but the output is nothing. Then, I try to cat -et x.ko , and it gives me lots of ^@^@^@^@^@^@^@^@^@^@^@^@^@^@ ... Do you know what ^@^@^@^@^@^@^@^@^@ means?
Your file is full of nulls, rather than empty. A regular cat will print the nulls to standard output, but your terminal will generally display them each as nothing, while cat -v represents them as ^@ . ^@ represents a null byte because the byte value of "@" (0x40, or 64) xor 64 (flip bit 7) is zero. Why it's suddenly full of nulls, we can't tell from here. This related question may be informative about the caret representation: Are ASCII escape sequences and control characters pairings part of a standard?
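You can confirm from the shell that the file really contains nothing but NUL bytes, for example (a small sketch):

od -An -tx1 x.ko | head          # dump the raw bytes; here they will all be 00
tr -d '\0' < x.ko | wc -c        # prints 0 if nothing remains once the NULs are removed

A file of the expected size that is all zeros often points to a copy or build step that was interrupted, or to a filesystem that lost the data blocks after a crash, which is something to check on the CI machine.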
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/505558", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/145824/" ] }
505,629
In the current situation, a certain script 'calling.sh' launches another script 'called.sh' in the background, performs other operations, sleeps for a while, and then terminates 'called.sh' with a pkill called.sh . This works fine. Then, I would also like to launch 'called.sh' from other terminals as a standalone script at any other time, whether before or after launching calling.sh. These independent instances should not be killed by 'calling.sh'. How can I achieve this? Intuition says that the calling script should be able to tell the process it started from any other namesakes that are running in the meantime. As a variant, 'calling.sh' may also launch 'called' which is a symbolic link to 'called.sh'. Does this complicate managing the above situation? Which specific cautions and adjustments does using a symbolic link require?
Don't use the name to kill it. Since the calling.sh script is calling the process you later want to kill, just use $! (from man bash ): ! Expands to the process ID of the job most recently placed into the background, whether executed as an asynchronous command or using the bg builtin So, if you're calling.sh is like this: called.sh &## do stuffpkill called.sh Change it to this: called.sh &calledPid=$!# do stuffkill "$calledPid"
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/505629", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/132913/" ] }
505,735
My Makefile looks like this: %.foo: %.bar cp $< $@test: *.foo echo *.foo I have 2 files in the directory: a.bar and b.bar . When I run make test it outputs: cp b.bar *.fooecho *.foo*.foo It also creates a file *.foo in the current directory. I am actually expecting to see this: cp a.bar a.foocp b.bar b.fooecho *.fooa.foo b.foo And also creating a.foo and b.foo . How to achieve that?
In this case you need to handle wildcards explicitly, with the wildcard function (at least in GNU Make ): %.foo: %.bar cp $< $@foos = $(patsubst %.bar,%.foo,$(wildcard *.bar))test: $(foos) echo $(foos) $(wildcard *.bar) expands to all the files ending in .bar , the patsubst call replaces .bar with .foo , and all the targets are then processed as you’d expect.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/505735", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/106945/" ] }
505,764
I'm trying to install MySQL on Ubuntu 18.04. To do that, first I have donwload the package with the command: wget -c https://dev.mysql.com/get/mysql-apt-config_0.8.12-1_all.deb And then, run the command: sudo dpkg -i mysql-apt-config_0.8.12-1_all.deb But I've got a problem with dependencies and I don't know what are these dependencies. This is part of the message that I've got: Configuring mysql-community-server (8.0.15-1ubuntu18.04) ...Job for mysql.service failed because the control process exited with error code.See "systemctl status mysql.service" and "journalctl -xe" for details.Job for mysql.service failed because the control process exited with error code.See "systemctl status mysql.service" and "journalctl -xe" for details.invoke-rc.d: initscript mysql, action "start" failed.● mysql.service - MySQL Community Server Loaded: loaded (/lib/systemd/system/mysql.service; enabled; vendor preset: enabled) Active: failed (Result: exit-code) since Tue 2019-03-12 01:06:40 CET; 17ms ago Docs: man:mysqld(8) http://dev.mysql.com/doc/refman/en/using-systemd.html Process: 12302 ExecStart=/usr/sbin/mysqld (code=exited, status=1/FAILURE) Process: 12263 ExecStartPre=/usr/share/mysql-8.0/mysql-systemd-start pre (code=exited, status=0/SUCCESS) Main PID: 12302 (code=exited, status=1/FAILURE) Status: "SERVER_BOOTING"mar 12 01:06:39 R2D2 systemd[1]: Starting MySQL Community Server...mar 12 01:06:40 R2D2 systemd[1]: mysql.service: Main process exited, code=exited, status=1/FAILUREmar 12 01:06:40 R2D2 systemd[1]: mysql.service: Failed with result 'exit-code'.mar 12 01:06:40 R2D2 systemd[1]: Failed to start MySQL Community Server.dpkg: error processing package mysql-community-server (--configure): installed mysql-community-server package post-installation script subprocess returned error exit status 1dpkg: dependency problems prevent configuration of mysql-server: mysql-server depends on mysql-community-server (= 8.0.15-1ubuntu18.04); however: Package mysql-community-server is not configured yet.dpkg: error processing package mysql-server (--configure): dependency problems - leaving unconfigureddpkg: dependency problems prevent configuration of mysql-community-server-dbgsym: mysql-community-server-dbgsym depends on mysql-community-server (= 8.0.15-1ubuntu18.04); however: Package mysql-community-server is not configured yet.dpkg: error processing package mysql-community-server-dbgsym (--configure): dependency problems - leaving unconfiguredNo apport report written because the error message indicates its a followup error from a previous failure.Errors were encountered while processing: mysql-community-server mysql-server mysql-community-server-dbgsymE: Sub-process /usr/bin/dpkg returned an error code (1) This is the log that I get consulting with journalctl -xe : josecarlos@R2D2:~/Descargas$ journalctl -xemar 12 01:35:01 R2D2 CRON[13062]: pam_unix(cron:session): session closed for user rootmar 12 01:36:01 R2D2 org.gnome.Shell.desktop[1874]: (/usr/lib/firefox/firefox:10916): dconf-WARNING **: 01:36:01.2mar 12 01:36:39 R2D2 sudo[13085]: josecarlos : TTY=pts/0 ; PWD=/home/josecarlos/Descargas ; USER=root ; COMMAND=/umar 12 01:36:39 R2D2 sudo[13085]: pam_unix(sudo:session): session opened for user root by (uid=0)mar 12 01:36:40 R2D2 systemd[1]: Starting MySQL Community Server...-- Subject: Unit mysql.service has begun start-up-- Defined-By: systemd-- Support: http://www.ubuntu.com/support-- -- Unit mysql.service has begun starting up.mar 12 01:36:40 R2D2 audit[13150]: AVC apparmor="STATUS" 
operation="profile_replace" info="same as current profilemar 12 01:36:40 R2D2 kernel: audit: type=1400 audit(1552351000.784:89): apparmor="STATUS" operation="profile_replamar 12 01:36:41 R2D2 systemd[1]: mysql.service: Main process exited, code=exited, status=1/FAILUREmar 12 01:36:41 R2D2 systemd[1]: mysql.service: Failed with result 'exit-code'.mar 12 01:36:41 R2D2 systemd[1]: Failed to start MySQL Community Server.-- Subject: Unit mysql.service has failed-- Defined-By: systemd-- Support: http://www.ubuntu.com/support-- -- Unit mysql.service has failed.-- -- The result is RESULT.mar 12 01:36:41 R2D2 sudo[13085]: pam_unix(sudo:session): session closed for user root Output of sudo apt update && sudo apt upgrade : Hit:1 http://es.archive.ubuntu.com/ubuntu bionic InReleaseHit:2 http://repo.mysql.com/apt/ubuntu bionic InRelease Hit:3 http://archive.canonical.com/ubuntu bionic InRelease Hit:4 https://deb.nodesource.com/node_11.x bionic InRelease Hit:5 http://packages.microsoft.com/repos/vscode stable InRelease Reading package lists... Done Building dependency tree Reading state information... DoneAll packages are up to date.Reading package lists... DoneBuilding dependency tree Reading state information... DoneCalculating upgrade... Done0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.3 not fully installed or removed.After this operation, 0 B of additional disk space will be used.Do you want to continue? [Y/n] YSetting up mysql-community-server (8.0.15-1ubuntu18.04) ...Job for mysql.service failed because the control process exited with error code.See "systemctl status mysql.service" and "journalctl -xe" for details.Job for mysql.service failed because the control process exited with error code.See "systemctl status mysql.service" and "journalctl -xe" for details.invoke-rc.d: initscript mysql, action "start" failed.* mysql.service - MySQL Community Server Loaded: loaded (/lib/systemd/system/mysql.service; enabled; vendor preset: enabled) Active: failed (Result: exit-code) since Tue 2019-03-12 12:17:46 CET; 8ms ago Docs: man:mysqld(8) http://dev.mysql.com/doc/refman/en/using-systemd.html Process: 8839 ExecStart=/usr/sbin/mysqld (code=exited, status=1/FAILURE) Process: 8800 ExecStartPre=/usr/share/mysql-8.0/mysql-systemd-start pre (code=exited, status=0/SUCCESS) Main PID: 8839 (code=exited, status=1/FAILURE) Status: "SERVER_BOOTING"mar 12 12:17:45 R2D2 systemd[1]: Starting MySQL Community Server...mar 12 12:17:46 R2D2 systemd[1]: mysql.service: Main process exited, code=exited, status=1/FAILUREmar 12 12:17:46 R2D2 systemd[1]: mysql.service: Failed with result 'exit-code'.mar 12 12:17:46 R2D2 systemd[1]: Failed to start MySQL Community Server.dpkg: error processing package mysql-community-server (--configure): installed mysql-community-server package post-installation script subprocess returned error exit status 1dpkg: dependency problems prevent configuration of mysql-server: mysql-server depends on mysql-community-server (= 8.0.15-1ubuntu18.04); however: Package mysql-community-server is not configured yet.dpkg: error processing package mysql-server (--configure): dependency problems - leaving unconfigureddpkg: dependency problems prevent configuration of mysql-community-server-dbgsym: mysql-community-server-dbgsym depends on mysql-community-server (= 8.0.15-1ubuntu18.04); however: Package mysql-community-server is not configured yet.dpkg: error processing package mysql-community-server-dbgsym (--configure): dependency problems - leaving unconfiguredNo apport report written because 
the error message indicates its a followup error from a previous failure.No apport report written because the error message indicates its a followup error from a previous failure.Errors were encountered while processing: mysql-community-server mysql-server mysql-community-server-dbgsymE: Sub-process /usr/bin/dpkg returned an error code (1) I have make a big mistake and this is that I've got installed mysql-server-5.5 previously in my laptop. I have remove and purge everything and reinstall again, but it doesn't work. The log of the command sudo apt install mysql-server is: josecarlos@R2D2:~$ LANG=C sudo apt install mysql-serverReading package lists... DoneBuilding dependency tree Reading state information... DoneThe following additional packages will be installed: libmecab2 mecab-ipadic mecab-ipadic-utf8 mecab-utils mysql-client mysql-common mysql-community-client mysql-community-client-core mysql-community-server mysql-community-server-coreThe following NEW packages will be installed: libmecab2 mecab-ipadic mecab-ipadic-utf8 mecab-utils mysql-client mysql-common mysql-community-client mysql-community-client-core mysql-community-server mysql-community-server-core mysql-server0 upgraded, 11 newly installed, 0 to remove and 0 not upgraded.Need to get 58,3 MB of archives.After this operation, 418 MB of additional disk space will be used.Do you want to continue? [Y/n] YGet:1 http://es.archive.ubuntu.com/ubuntu bionic/universe amd64 libmecab2 amd64 0.996-5 [257 kB]Get:2 http://repo.mysql.com/apt/ubuntu bionic/mysql-8.0 amd64 mysql-common amd64 8.0.15-1ubuntu18.04 [84,4 kB]Get:3 http://es.archive.ubuntu.com/ubuntu bionic/universe amd64 mecab-utils amd64 0.996-5 [4.856 B]Get:4 http://es.archive.ubuntu.com/ubuntu bionic/universe amd64 mecab-ipadic all 2.7.0-20070801+main-1 [12,1 MB]Get:5 http://repo.mysql.com/apt/ubuntu bionic/mysql-8.0 amd64 mysql-community-client-core amd64 8.0.15-1ubuntu18.04 [1.450 kB]Get:6 http://repo.mysql.com/apt/ubuntu bionic/mysql-8.0 amd64 mysql-community-client amd64 8.0.15-1ubuntu18.04 [2.310 kB]Get:7 http://repo.mysql.com/apt/ubuntu bionic/mysql-8.0 amd64 mysql-client amd64 8.0.15-1ubuntu18.04 [81,0 kB]Get:8 http://repo.mysql.com/apt/ubuntu bionic/mysql-8.0 amd64 mysql-community-server-core amd64 8.0.15-1ubuntu18.04 [17,6 MB]Get:9 http://es.archive.ubuntu.com/ubuntu bionic/universe amd64 mecab-ipadic-utf8 all 2.7.0-20070801+main-1 [3.522 B]Get:10 http://repo.mysql.com/apt/ubuntu bionic/mysql-8.0 amd64 mysql-community-server amd64 8.0.15-1ubuntu18.04 [24,2 MB]Get:11 http://repo.mysql.com/apt/ubuntu bionic/mysql-8.0 amd64 mysql-server amd64 8.0.15-1ubuntu18.04 [81,0 kB]Fetched 58,3 MB in 2s (30,7 MB/s) Preconfiguring packages ...Selecting previously unselected package mysql-common.(Reading database ... 
821063 files and directories currently installed.)Preparing to unpack .../00-mysql-common_8.0.15-1ubuntu18.04_amd64.deb ...Unpacking mysql-common (8.0.15-1ubuntu18.04) ...Selecting previously unselected package mysql-community-client-core.Preparing to unpack .../01-mysql-community-client-core_8.0.15-1ubuntu18.04_amd64.deb ...Unpacking mysql-community-client-core (8.0.15-1ubuntu18.04) ...Selecting previously unselected package mysql-community-client.Preparing to unpack .../02-mysql-community-client_8.0.15-1ubuntu18.04_amd64.deb ...Unpacking mysql-community-client (8.0.15-1ubuntu18.04) ...Selecting previously unselected package mysql-client.Preparing to unpack .../03-mysql-client_8.0.15-1ubuntu18.04_amd64.deb ...Unpacking mysql-client (8.0.15-1ubuntu18.04) ...Selecting previously unselected package libmecab2:amd64.Preparing to unpack .../04-libmecab2_0.996-5_amd64.deb ...Unpacking libmecab2:amd64 (0.996-5) ...Selecting previously unselected package mysql-community-server-core.Preparing to unpack .../05-mysql-community-server-core_8.0.15-1ubuntu18.04_amd64.deb ...Unpacking mysql-community-server-core (8.0.15-1ubuntu18.04) ...Selecting previously unselected package mysql-community-server.Preparing to unpack .../06-mysql-community-server_8.0.15-1ubuntu18.04_amd64.deb ...Unpacking mysql-community-server (8.0.15-1ubuntu18.04) ...Selecting previously unselected package mecab-utils.Preparing to unpack .../07-mecab-utils_0.996-5_amd64.deb ...Unpacking mecab-utils (0.996-5) ...Selecting previously unselected package mecab-ipadic.Preparing to unpack .../08-mecab-ipadic_2.7.0-20070801+main-1_all.deb ...Unpacking mecab-ipadic (2.7.0-20070801+main-1) ...Selecting previously unselected package mecab-ipadic-utf8.Preparing to unpack .../09-mecab-ipadic-utf8_2.7.0-20070801+main-1_all.deb ...Unpacking mecab-ipadic-utf8 (2.7.0-20070801+main-1) ...Selecting previously unselected package mysql-server.Preparing to unpack .../10-mysql-server_8.0.15-1ubuntu18.04_amd64.deb ...Unpacking mysql-server (8.0.15-1ubuntu18.04) ...Setting up mysql-common (8.0.15-1ubuntu18.04) ...Setting up libmecab2:amd64 (0.996-5) ...Setting up mysql-community-client-core (8.0.15-1ubuntu18.04) ...Setting up mysql-community-server-core (8.0.15-1ubuntu18.04) ...Processing triggers for libc-bin (2.27-3ubuntu1) ...Processing triggers for man-db (2.8.3-2) ...Setting up mecab-utils (0.996-5) ...Setting up mysql-community-client (8.0.15-1ubuntu18.04) ...Setting up mecab-ipadic (2.7.0-20070801+main-1) ...Compiling IPA dictionary for Mecab. This takes long time...reading /usr/share/mecab/dic/ipadic/unk.def ... 40emitting double-array: 100% |###########################################| /usr/share/mecab/dic/ipadic/model.def is not found. skipped.reading /usr/share/mecab/dic/ipadic/Noun.nai.csv ... 42reading /usr/share/mecab/dic/ipadic/Interjection.csv ... 252reading /usr/share/mecab/dic/ipadic/Adj.csv ... 27210reading /usr/share/mecab/dic/ipadic/Noun.others.csv ... 151reading /usr/share/mecab/dic/ipadic/Others.csv ... 2reading /usr/share/mecab/dic/ipadic/Noun.place.csv ... 72999reading /usr/share/mecab/dic/ipadic/Suffix.csv ... 1393reading /usr/share/mecab/dic/ipadic/Noun.proper.csv ... 27327reading /usr/share/mecab/dic/ipadic/Adverb.csv ... 3032reading /usr/share/mecab/dic/ipadic/Noun.demonst.csv ... 120reading /usr/share/mecab/dic/ipadic/Filler.csv ... 19reading /usr/share/mecab/dic/ipadic/Noun.adverbal.csv ... 795reading /usr/share/mecab/dic/ipadic/Symbol.csv ... 208reading /usr/share/mecab/dic/ipadic/Postp.csv ... 
146reading /usr/share/mecab/dic/ipadic/Conjunction.csv ... 171reading /usr/share/mecab/dic/ipadic/Noun.name.csv ... 34202reading /usr/share/mecab/dic/ipadic/Noun.csv ... 60477reading /usr/share/mecab/dic/ipadic/Noun.verbal.csv ... 12146reading /usr/share/mecab/dic/ipadic/Noun.adjv.csv ... 3328reading /usr/share/mecab/dic/ipadic/Prefix.csv ... 221reading /usr/share/mecab/dic/ipadic/Adnominal.csv ... 135reading /usr/share/mecab/dic/ipadic/Auxil.csv ... 199reading /usr/share/mecab/dic/ipadic/Noun.number.csv ... 42reading /usr/share/mecab/dic/ipadic/Noun.org.csv ... 16668reading /usr/share/mecab/dic/ipadic/Verb.csv ... 130750reading /usr/share/mecab/dic/ipadic/Postp-col.csv ... 91emitting double-array: 100% |###########################################| reading /usr/share/mecab/dic/ipadic/matrix.def ... 1316x1316emitting matrix : 100% |###########################################| done!update-alternatives: using /var/lib/mecab/dic/ipadic to provide /var/lib/mecab/dic/debian (mecab-dictionary) in auto modeSetting up mysql-client (8.0.15-1ubuntu18.04) ...Setting up mecab-ipadic-utf8 (2.7.0-20070801+main-1) ...Compiling IPA dictionary for Mecab. This takes long time...reading /usr/share/mecab/dic/ipadic/unk.def ... 40emitting double-array: 100% |###########################################| /usr/share/mecab/dic/ipadic/model.def is not found. skipped.reading /usr/share/mecab/dic/ipadic/Noun.nai.csv ... 42reading /usr/share/mecab/dic/ipadic/Interjection.csv ... 252reading /usr/share/mecab/dic/ipadic/Adj.csv ... 27210reading /usr/share/mecab/dic/ipadic/Noun.others.csv ... 151reading /usr/share/mecab/dic/ipadic/Others.csv ... 2reading /usr/share/mecab/dic/ipadic/Noun.place.csv ... 72999reading /usr/share/mecab/dic/ipadic/Suffix.csv ... 1393reading /usr/share/mecab/dic/ipadic/Noun.proper.csv ... 27327reading /usr/share/mecab/dic/ipadic/Adverb.csv ... 3032reading /usr/share/mecab/dic/ipadic/Noun.demonst.csv ... 120reading /usr/share/mecab/dic/ipadic/Filler.csv ... 19reading /usr/share/mecab/dic/ipadic/Noun.adverbal.csv ... 795reading /usr/share/mecab/dic/ipadic/Symbol.csv ... 208reading /usr/share/mecab/dic/ipadic/Postp.csv ... 146reading /usr/share/mecab/dic/ipadic/Conjunction.csv ... 171reading /usr/share/mecab/dic/ipadic/Noun.name.csv ... 34202reading /usr/share/mecab/dic/ipadic/Noun.csv ... 60477reading /usr/share/mecab/dic/ipadic/Noun.verbal.csv ... 12146reading /usr/share/mecab/dic/ipadic/Noun.adjv.csv ... 3328reading /usr/share/mecab/dic/ipadic/Prefix.csv ... 221reading /usr/share/mecab/dic/ipadic/Adnominal.csv ... 135reading /usr/share/mecab/dic/ipadic/Auxil.csv ... 199reading /usr/share/mecab/dic/ipadic/Noun.number.csv ... 42reading /usr/share/mecab/dic/ipadic/Noun.org.csv ... 16668reading /usr/share/mecab/dic/ipadic/Verb.csv ... 130750reading /usr/share/mecab/dic/ipadic/Postp-col.csv ... 91emitting double-array: 100% |###########################################| reading /usr/share/mecab/dic/ipadic/matrix.def ... 
1316x1316emitting matrix : 100% |###########################################| done!update-alternatives: using /var/lib/mecab/dic/ipadic-utf8 to provide /var/lib/mecab/dic/debian (mecab-dictionary) in auto modeSetting up mysql-community-server (8.0.15-1ubuntu18.04) ...dpkg: error processing package mysql-community-server (--configure): installed mysql-community-server package post-installation script subprocess returned error exit status 1dpkg: dependency problems prevent configuration of mysql-server: mysql-server depends on mysql-community-server (= 8.0.15-1ubuntu18.04); however: Package mysql-community-server is not configured yet.dpkg: error processing package mysql-server (--configure): dependency problems - leaving unconfiguredNo apport report written because the error message indicates its a followup error from a previous failure. Errors were encountered while processing: mysql-community-server mysql-serverE: Sub-process /usr/bin/dpkg returned an error code (1) Righ now, these are the package of mysql-server installed in my laptop: I don't know what am I doing wrong right now.
This fixed it for me (MySQL 8.0 - Ubuntu 20.04) sudo apt-get purge mysql\* libmysql\*sudo apt autoremove But the package " mysql-client-core-8.0 " doesn't uninstall, so... sudo apt --fix-broken installsudo apt-get --reinstall install mysql-client-core-8.0sudo apt-get purge mysql\* libmysql\*sudo apt autoremovesudo apt updatesudo apt install mysql-server No more errors!
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/505764", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/307606/" ] }
505,768
I'm using a blackbox CLI based on Bash and I'm not entirely sure what stuff I can use.Brace expansion doesn't work, and with it goes my ability to do loops without listing the arguments explicitly, which is something I was trying to avoid by looping to start with. for x in {1..5}do for y in {a..c} do echo $HOME$x$y donedone How do I run something like this without brace expansion and without listing the arguments explicitly? Environment variables should also work, that's why I appended a random $HOME to the example. Please feel free to provide different alternatives (AWK, sed) as I'm not entirely sure what will and what won't work.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/505768", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/341177/" ] }
505,815
If I know the index node (inode) of a file, but I don't know its path (or any of its paths), is it possible to create a hard link to that inode directly? I could find the file using sudo find / -inum 123546 and then create a hardlink, but that would be way too slow for my application. N.B. I'm using an ext4 filesystem.
AFAIK, not with the kernel API. If such an interface existed, it would have to be limited to the super-user as otherwise that would let anyone access files in directories they don't have search access to. But you could use debugfs on the file system (once it's unmounted) to do it (assuming you have write access to the block device). debugfs -w /dev/block/device (replace /dev/block/device with the actual block device the file system resides in). Then, at the prompt of debugfs , enter stat < 123 > (with the angle brackets, replacing 123 with the actual inode number) to check that the file exists (inode has a link count greater than 0) and is not a directory. If all good, enter: ln < 123 > path/to/newfile to create the hardlink (note that the path is relative to the root of the file system). Followed by: mi < 123 > to increment the link count (press Enter for all the fields except the link count where you'll want to add 1 to the current value).
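Put together, the debugfs session described above would look roughly like this (the inode number 123 and the device path are placeholders):
# debugfs -w /dev/block/device
debugfs:  stat <123>              # confirm link count > 0 and that it is not a directory
debugfs:  ln <123> path/to/newfile
debugfs:  mi <123>                # add 1 to the "Links count" field, Enter for everything else
debugfs:  quit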
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/505815", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/334204/" ] }
505,828
Suppose a program cook takes one argument: the pathname of a text file containing the recipe of the food to cook. Suppose I wish to call this program from within a bash script, also suppose I already have the recipe in a string variable: #!/bin/bashthe_recipe="$(cat << EOFwash cucumberswash knifeslice cucumbersEOF)"cook ... # What should I do here? It expects a file, but I only have a string. How can I pass the recipe to the command when it expects a filename argument? I thought about creating a temporary file just for the purpose passing a file, but I wish to know if there are alternative ways to solve this problem.
You can use the "fake" filename /dev/stdin which represents the standard input. So execute this: echo "$the_recipe" | cook /dev/stdin The echo command and the pipe sends the contents of the specified variable to the standard input of the next command cook , and that opens the standard input (as a separate file descriptor) and reads that.
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/505828", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/217242/" ] }
505,908
Something that's always bugged me and I've been unable to find good information on: How can you or why can't you forward an entire desktop over SSH ( ssh -X )? I'm very familiar with forwarding individual windows using ssh -X . But there are times when I'd like to use my linux laptop as a dumb terminal for another linux machine. I've always thought that it should be possible to shut down the desktop environment on my laptop and then from the command line ssh into another machine and startup an desktop environment forwarded to my laptop. Searches online come up with a bunch of third party tools such as VNC and Xephyr, or they come up with the single window ssh commands and config. But that's NOT what I'm looking for. I'm looking to understand a little of the the anatomy (of xwindows?, wayland?, gdm?) to understand how you'd go about doing this, OR why it's not possible. Note: Xephyr isn't what I'm looking for because it tries to run the remote desktop in a window VNC isn't what I'm looking for for a whole bunch of reasons, not least because it's not X11 forwarding but forwarding bitmaps.
How can you? I've been using the below method from the (now suspended) Xmodulo site to remote into my entire Raspberry Pi desktop from any Ubuntu machine. Works with my original RPi, RPi2 and RPi3. Of course you have to mod sshd_config to allow X11 forwarding on the remote machine (I'd say client/host, but I believe they are different in X11 from other uses and I may confuse myself). Mind the spaces -- they break this procedure frequently when I can't type. You then have the entire desktop and can run the machine as if physically connected. I switch to Ubuntu using CTRL+ALT+F7, then back to RPi with CTRL+ALT+F2. YMMV. A quirk: You must physically release CTRL+ALT before hitting another function key when switching back and forth. Original link: http://xmodulo.com/2013/12/remote-control-raspberry-pi.html Original work attributed to: Kristophorus Hadiono. The referenced pictures are, sadly, lost. ============== 8< ============== Method #3: X11 Forwarding for Desktop over SSH With X11+SSH forwarding, you can actually run the entire desktop of Raspberry Pi remotely, not just standalone GUI applications. Here I will show how to run the remote RPi desktop in the second virtual terminal (i.e., virtual terminal 8) via X11 forwarding. Your Linux desktop is running by default on the first virtual terminal, which is virtual terminal #7. Follow instructions below to get your RPi desktop to show up in your second virtual terminal. Open your konsole or terminal, and change to root user.$ sudo su Type the command below, which will activate xinit in virtual terminal 8. Note that you will be automatically switched to virtual terminal 8. You can switch back to the original virtual terminal 7 by pressing CTRL+ALT+F7. # xinit -- :1 & After switching to virtual terminal 8, execute the following command to launch the RPi desktop remotely. Type pi user password when asked (see picture below). # DISPLAY=:1 ssh -X [email protected] lxsession You will bring to your new virtual terminal 8 the remote RPi desktop, as well as a small terminal launched from your active virtual terminal 7 (see picture below). Remember, do NOT close that terminal. Otherwise, your RPi desktop will close immediately. You can move between first and second virtual terminals by pressing CTRL+ALT+F7 or CTRL+ALT+F8. To close your remote RPi desktop over X11+SSH, you can either close a small terminal seen in your active virtual terminal 8 (see picture above), or kill su session running in your virtual terminal 7.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/505908", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/20140/" ] }
506,028
I need to create a bash script that would transform the text file content separated by | and ] ... Text File Content for example: Col1|Col2|Col3|P1]P2]P3|D1]D2]D3||Col4 Col3|ColA|ColA|PA]PB]|DA]DB]|ColD|| The desired output would be: Col1 Col2 Col3 P1 D1 0 Col4Col1 Col2 Col3 P2 D2 0 Col4Col1 Col2 Col3 P3 D3 0 Col4Col3 ColA ColA PA DA ColD 0Col3 ColA ColA PB DB ColD 0Col3 ColA ColA 0 0 ColD 0 EDITED: blank column and blank data after ] would all be replaced by 0 Thank you.
You can do it via sample script (mine is not optimal but will work) awk -F'[]|]' '{ print $1,$2,$3,$4,$7,$10 print $1,$2,$3,$5,$8,$10 print $1,$2,$3,$6,$9,$10 }' input_filename Or awk -F'[]|]' '{ for (i = 4; i <= 6; i++) print $1,$2,$3,$i,$(i+3),$10}' input_filename You can change the output field separator (space by default) by adding -v OFS=',' . And thanks to @steeldriver one more flexible way (with internal separation of field) to do the job: awk -F'|' '{ split($3,a,/]/); n = split($4,b,/]/); for(i=1;i<=n;i++) print $1,$2,a[1],a[i+1],b[i],$5}' input_filename As per edited question if you want to replace empty field with 0 (zero) you can do it with script like: awk -F'[]|]' '{ for (i = 1; i <= 11; i++) if ($i == "") $i=0} { print $1,$2,$3,$4,$7,$10,$11 print $1,$2,$3,$5,$8,$10,$11 print $1,$2,$3,$6,$9,$10,$11 }' input_filename From your comment the script should look like: awk -F'|' -v OFS="\t" '{ n = split($4,D,"]"); split($5,E,"]"); for (i = 1; i <= n; i++) { if (D[i] == "") D[i]=0; if (E[i] == "") E[i]=0;} print $1,$2,$3,D[i],E[i],$6,$7 }' input_file
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/506028", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/341513/" ] }
506,039
I have a file with the following file mode bits ( a+rw ): [0] mypc<johndoe>:~>sudo touch /tmp/test [0] mypc<johndoe>:~>sudo chmod a+rw /tmp/test[0] mypc<johndoe>:~>ls -l /tmp/test-rw-rw-rw- 1 root root 0 Mar 13 11:09 /tmp/test Why can't I remove the file? [0] mypc<johndoe>:~>rm /tmp/testrm: cannot remove '/tmp/test': Operation not permitted
The /tmp directory is conventionally marked with the restricted deletion flag, which appears as a permission letter t or T in ls output. Restricted deletion implies several things. In the general case, it implies that only the owner of the file, or the owner of /tmp itself, can delete a file/directory in /tmp . You can not delete the file, because you are not the owner, which is root . Try running rm with sudo which you probably forgot. sudo rm /tmp/test More specifically to Linux alone, the restricted deletion flag (on a world-writable directory such as /tmp ) also enables the protected_symlinks , protected_hardlinks , protected_regular , and protected_fifos restrictions, which in such directories respectively prevent users from following symbolic links that they do not own, prevent users making hard links to files that they do not own, prevents users opening FIFOs that they do not own, and prevents users from open existing files that they do not own when they expected to create them. This will surprise you with permissions errors when doing various further things as root when you do use sudo . More on these at question like " Hard link permissions behavior different between CentOS 6 and CentOS 7 " , " Symbolic link not working as expected when changes user ", and " Group permissions for root not working in /tmp ".
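For reference (typical output shown here, not taken from the original answer; exact values vary by distribution), the flag is the trailing t on /tmp itself, and the extra Linux restrictions are plain sysctls:
$ ls -ld /tmp
drwxrwxrwt 15 root root 4096 Mar 13 11:09 /tmp
$ sysctl fs.protected_symlinks fs.protected_regular
fs.protected_symlinks = 1
fs.protected_regular = 2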
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/506039", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/271136/" ] }
506,064
I have a partition sdb1 of size 3.3 GB. Of those 3.3 GB only 1.9 GB is used and 1.2 GB is empty space. I want to create an image of this partition, but I want the image to only have the used space, i.e. 1.9 GB. Now when I run a command using dd: dd if=/dev/sdb1 of=/home/data/Os.img bs=1M status=progress I saw that dd is making an image of 3.3 GB, while I don't want the unallocated space to be part of the OS image. So I tried different solutions. I found one solution: dd if=/dev/sdb1 of=/home/data/OS.img bs=1M count=1946 status=progress This solution created an image of 1.9 GB, as I have defined a block size of 1M and a count of 1946, which gives 1.9 GB in total. Question: I am unable to determine whether this dd command only made an image of the used space, or whether it just created an image of size 1.9 GB in which there is both used and unused space. And is there any other way in which the image can be created for the used space only?
This has been answered here . First make an image of the partition. Then use losetup to mount the image. As it's not a disk but a partition, you don't need the partscan flag. # lsblk# losetup --find foo.img Lookup the newly created loop device # lsblkloop1 7:1 0 4096M 0 loop Zero out the remaining space by writing a large empty file. # mount /dev/loop1 /mnt# dd if=/dev/zero of=/mnt/filler conv=fsync bs=1M# rm /mnt/filler# umount /dev/loop1 Now compress the image. # ls -s4096M foo.img# gzip foo.img# ls -s11M foo.img.gz So far for the partition image. If there is an ext4 file system on the partition, you can resize it with resize2fs to the minimum size. This moves all data to the first blocks. e2fsck -f /dev/loop1resize2fs -M /dev/loop1
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/506064", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/333769/" ] }
506,070
I frequently find my self in this scenario. I'm in the middle of typing a command and I need to check something else before I complete it. Is there a way to open a subshell of some sort with my current input so far being remembered, then when I exit this subshell I'm back to where I was? $ mylongcommand -n -e <SOME KEY COMBINATION WHICH OPENS A SUBSHELL>$ date...$ exit$ mylongcommand -n -e <BACK TO WHERE I WAS> I'm using zsh
There is the key combination Esc Q which saves the command buffer and allows to enter a new command. After running the command the buffer contains what you typed before. If you have to run another command before finishing this you can type Esc Q again. (I didn't try to open a subshell after pressing Esc Q yet.) See http://zsh.sourceforge.net/Intro/intro_10.html and search for "esc-q"
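For what it's worth, Esc Q runs the push-line widget of the emacs keymap; a related widget, push-line-or-edit, also handles continuation lines, and could presumably be bound over the default in ~/.zshrc:
bindkey -M emacs '\eq' push-line-or-edit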
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/506070", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/16985/" ] }
506,172
We installed a service on Red Hat 7, but we no longer need the service application. Is it possible to disable the start of the service? I don't mean disabling it on the next reboot; what we mean is to avoid starting the service at all, even though the service stays installed.
systemctl disable servicename . Running systemctl disable removes the symlink to the service in /etc/systemd/system/* . From now on, that service won't start on boot anymore.
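If the goal is also to stop a currently running instance, or to prevent the service from being started at all (even manually or as a dependency of another unit), these variants may be closer to what is wanted:
systemctl disable --now servicename   # disable it and stop it in one step
systemctl mask servicename            # point the unit at /dev/null so nothing can start it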
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/506172", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/237298/" ] }
506,341
IMPORTANT: I am able to restore the old halt behavior if I do: apt install sysvinit-core Plus: old Red Hat based versions are able to power off with halt too. I am trying to understand why in Debian 9 you should not use halt to turn off your computer anymore. That behavior was removed from systemd. I am a bit worried, but I think there must be a good logical reason. Do other Unixes or Unix-likes behave the same way, or is Linux turning into something different? Now, if you want to turn off the machine after the system shutdown, you should use poweroff instead of halt... Does somebody know how this command is interpreted in other Unix-likes? This is not a duplicate, because Debian 7 can power off with halt, for example.
Debian When using SysVinit in Debian(esque) systems, /etc/defaults/halt includes a variable that defines whether the system will run halt or poweroff at the end of transition to runlevel 0. The default setting is HALT=poweroff . Before SysVinit 2.74 you were not supposed to run halt directly, and starting from that version, the SysVinit halt command will just call shutdown -h unless the current runlevel is 0 or 6. This is documented in halt(8) man page. At the end of transition to runlevel 0 the runlevel scripts will run $HALT , which is equal to poweroff by default. RHEL/CentOS RHEL/CentOS 5 was the last version to use SysVinit: version 6 used upstart and version 7 uses systemd . In RHEL 5.11, the last script to run when transitioning to runlevel 0 is /etc/init.d/halt , and its last lines are: [ "$INIT_HALT" != "HALT" ] && HALTARGS="$HALTARGS -p"exec $command $HALTARGS Since the same script is also run at the end of transition to runlevel 6 (reboot), the actual command to run is defined by the variable $command , and it will be either /sbin/halt or /sbin/reboot . According to the value of the $INIT_HALT variable, the script will decide whether or not add the -p option. That variable is set by /sbin/shutdown . The RHEL 5.11 shutdown(8) man page says: HALT OR POWEROFF The -H option just sets the init environment variable INIT_HALT to HALT, and the -P option just sets that variable to POWEROFF. The shutdown script that calls halt(8) as the last thing in the shutdown sequence should check these environment variables and call halt(8) with the right options for these options to actually have any effect. Debian 3.1 (sarge) supports this. (Yes, the RHEL 5.11 man page mentions Debian 3.1! I guess someone at RedHat porting patches from various sources to RHEL missed one reference...) It looks like RedHat has decided to code the above-mentioned test in the /etc/init.d/halt script in such a way that switching off the power (using /sbin/halt -p at the end of shutdown) is the default halt action: the only way to achieve halt without poweroff is to use the upper-case -H option of the shutdown command to explicitly request it, e.g. shutdown -hH now But again, the default powerdown is triggered by the runlevel scripts customized by the Linux distribution, so it's not really a feature of the SysVinit halt command. Historical note Old SysVinit-using systems (both Linux and non) used to have several commands that were not intended to be used directly, but only as part of the appropriate shutdown/reboot scripts. Before SysVinit 2.74, the /sbin/halt command with no options would have done the same as halt -f of modern SysVinit does, i.e. a brutal, immediate kernel shutdown without stopping any services or unmounting filesystems. For the same reason, being used to the Linux killall command can be dangerous on other Unixes. Its man page even has this ominous warning: Be warned that typing killall name may not have the desired effect on non-Linux systems, especially when done by a privileged user. This is because the classic SystemV killall was one of those commands designed to be used only as part of a shutdown script. In Linux distributions with SysVinit, the classic version of the command may be found as killall5 . It literally kills all processes except kernel threads and processes in its own session: its intended use is in a shutdown script after shutting down all services, just before unmounting the local filesystems, to kill of any other processes that might delay or prevent the unmounting. 
(How I know this, you ask? Well, I once made the mistake of running killall <something> as root on a Solaris 2.6 system. A very effective learning experience.)
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/506341", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/170792/" ] }
506,344
My question is similar to this question: Where to find the Linux changelog of minor versions But I would like to search all changelogs from 4.18.0 to 4.20.16 for any reference to a specific word, such as sama5d3, mmc0, or other term. I can search the individual changelogs but didn't see a way to search a set at the same time?
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/506344", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/155754/" ] }
506,347
I have read what is multi-user.target and the systemd documentation , which states that the multi-user.target is a special target. Further, a lot of the systemd examples contain that line. Why do so many example services contain that line? What would happen if they did not contain WantedBy=multi-user.target? Could you give me an example of when it would actually be advisable to not include that line in a service file definition? Along the same lines, when is it a good idea to keep that line in?
1.) multi-user.target is basically the closest equivalent of classic SysVinit runlevel 3 that systemd has. When a systemd system boots up, systemd is trying to make the system state match the state specified by default.target - which is usually an alias for either graphical.target or multi-user.target . multi-user.target normally defines a system state where all network services are started up and the system will accept logins, but a local GUI is not started. This is the typical default system state for server systems, which might be rack-mounted headless systems in a remote server room. graphical.target is another possible alias for default.target . Normally it's defined as a superset of the multi-user.target : it includes everything the multi-user.target does, plus the activation of a local GUI login. So kind of like runlevel 5 in classic SysVinit. The line WantedBy=multi-user.target in a service is essentially the same as specifying "this service should start in runlevels 3, 4 and 5" in SysVinit systems: it tells systemd that this service should be started as part of normal system start-up, whether or not a local GUI is active. However, WantedBy is separate from the enabled/disabled state: so in another sense, it's sort of a "preset": it determines under what conditions the automatic start may happen, but only when the service is enabled in the first place. 2.) if you omit the WantedBy=multi-user.target line and no other enabled service includes a Requires=your.service or Wants=your.service in its service definition, your service will not be started automatically. systemd works on dependencies, and at boot time, if nothing Requires or Wants your service, it won't be started even if the service is enabled. Sure, you could edit your default.target to add or delete Requires or Wants lines for any services you want started at boot time - but so that you can just drop a new service file into the system and have it work by default (which makes things very easy for software package managers), systemd has the WantedBy and RequiredBy keywords which can be used to insert Wants and Requires -type dependencies (respectively) from "the other end". 3.) You should omit the line if you don't want the service to be ever started automatically at boot time, or this service is a part of a chain of dependencies you've defined explicitly. For example, you might be refactoring server application A and for some reason or another decide to split some optional functionality off it into a separate service B, to allow the user the choice of not installing it if it isn't needed. You could then make service B a separate service-B.rpm , and define B.service with WantedBy=A.service to make systemd start up service B automatically whenever service A is started - but only when service-B.rpm is actually installed. Note that a Wants or WantedBy only says that the system should startup one service whenever another service or target is also started, but it specifies nothing at all about the startup/shutdown order. If you need service B to be already running when service A starts up, you'd need to add Before=A.service in the B.service file to explicitly specify the start-up order dependency. 4.) Anytime you do want the service to have the capability of being started automatically at boot time, and there are no other dependencies already defined.
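As an illustration (the unit and binary names below are made up), the line lives in the [Install] section of a unit file and only matters once the unit is enabled:
# /etc/systemd/system/example.service  -- hypothetical unit
[Unit]
Description=Example daemon
After=network.target

[Service]
ExecStart=/usr/local/bin/example-daemon

[Install]
WantedBy=multi-user.target
Running systemctl enable example.service then creates the symlink under /etc/systemd/system/multi-user.target.wants/ , which is what actually injects the Wants-type dependency described above.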
{ "score": 8, "source": [ "https://unix.stackexchange.com/questions/506347", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/7684/" ] }
506,400
Both Linux and BSD have common programs like ls and cat and echo and kill . Do they come from the same source code, or do Linux and BSD each have their own unique source code for these programs?
Linux is a kernel. It does not have the code for applications programs in the first place. Linux -based operating systems do not even necessarily use the same source code as one another , let alone the same code as on the BSDs. There are famously multiple implementations of several fairly basic programs. These include, but are not limited to: ifconfig had 2 implementations, one in GNU inetutils and the other in NET-3 net-tools . It now has 3, the third being mine. (See https://unix.stackexchange.com/a/504084/5132 .) su has 2 implementations, one in util-linux and the other in shadow . Debian switched from one to the other in 2018, making several old questions and answers here on this WWW site wrong. (See https://unix.stackexchange.com/a/460769/5132 and, for one example, " su vs su - (on Debian): why is PATH the same? ".) There are seemingly umpteen (actually 4 on Debian/Ubuntu) possible places whence one can obtain the mailx command: GNU Mailutils, BSD mailx, NMH, and s-nail. (See https://unix.stackexchange.com/a/489510/5132 .) The BSDs are operating systems. They do contain the code for these programs. However, there is no single BSD operating system, and the code for such programs does sometimes vary amongst NetBSD, FreeBSD, OpenBSD, and DragonFly BSD. Moreover, it is definitely different to the code used for the several Linux-based operating systems. Famously, Apple/NeXT used BSD applications softwares in MacOS/NeXTSTEP but enhanced several programs to support ACLs in ways different to the ways that the (other) BSDs did. One sets access controls using the chmod command, for example. So the Darwin versions of these commands are different yet again. There are three added twists. Programs like kill and echo are usually shell builtins. So the code for these commands varies according to what shell you are using, rather than what operating system. Then there are BusyBox and ToyBox, available both for Linux-based operating systems and the BSDs and even used as the primary implementations of such commands on a few of the former, which have their own implementations of many commands. Then there is OpenSolaris, from which come the likes of Illumos and Schillix, with the Solaris implementations of all of these tools, which is different yet again. There are whole histories here, encompassing the original split between BSD and AT&T Unix, through the efforts to "PD" clone many Unix programs in the late 1980s and 1990s, around three decades of shuffling around after that, the whole open-source release of the code for Solaris, and OpenBSD's reimplementations of several things. Even the histories of tools that one might be misled into thinking have one implementation such as cron (which a lot of people erroneously think is the original Unix tool, or erroneously think is at least one single flavour written as "PD cron " by Paul Vixie in 1987, or do not realize has workalike replacements written by other people in the years since) are non-trivial.
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/506400", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/341875/" ] }
506,503
When I log in, I get these messages: -bash: $'\r' : command not found-bash: $'\r' : command not found-bash: $'\r' : command not found It is quite clear that this is caused by Windows-style line endings in some startup script(s), so my question is: can I track down the script that causes this, and how?
Bash reads a number of different files on startup, even depending on how it's started ( see the manual for the description ). Then there's stuff like /etc/profile.d/ that aren't directly read by the shell, but can be referenced from the other startup files in many distributions. You'll have to go through all of those but luckily, you can just grep for the carriage return. Try e.g. something like: grep $'\r' ~/.bashrc ~/.profile ~/.bash_login ~/.bash_profile /etc/bash.bashrc /etc/profile /etc/profile.d/* See also Is it possible to find out which files are setting/adding to environment variables, and their order of precedence? for a similar issue.
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/506503", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/341951/" ] }
506,504
Let's say I ran quite a few ip commands and ended up with the required network configuration, but I didn't save the command history. Instead of rewriting the commands in a file or script again, is there a way to dump/save the state of the network configuration, like iptables-save or mysqldump, so that we can restore it later? I see a similar thing is possible with netsh in Windows (not sure if it's exactly the kind of solution I am looking for.. haven't gone through it, but it seems like it dumps the network configuration state). But I can't find any option in Linux (especially CentOS/RHEL).
Some support does exist for saving addresses, routes and rules, using iproute2's ip command. For obvious reason, this doesn't exist for links, even if one could imagine the possibility to save some virtual links, not all ("saving" a single side of a veth-pair link with its peer accross an other network namespace? not gonna happen...), or being able to save a bridge's and bridge's ports configurations including vlan etc., this doesn't appear to exist currently. Existing commands are: ip address saveip address restoreip route saveip route restoreip rule saveip rule restore The dump format is binary and the commands will refuse to save to or restore from a tty. I suggest restoring addresses before routes (rules can be done at any order), or most saved routes won't be restored because they can't satisfy routing conditions depending on addresses. Warning: of course all flush commands below will likely disrupt network connectivity until the restore is done, so this should be avoided from remote network access (or be done in other network namespaces). ip address save / ip address restore So to copy addresses from a simple network namespace orig 's configuration having a dummy0 interface (to keep the example simple) to a namespace copy : ip netns add origip netns exec orig sh << 'EOF'ip link add dummy0 type dummyip address add dev dummy0 192.0.2.2/24ip address add dev dummy0 2001:db8:0:1::2/64ip link set dummy0 upip address save > /tmp/addressEOFip netns add copyip netns exec copy sh << 'EOF'ip link add dummy0 type dummyip link set dummy0 upip address restore < /tmp/addressip -br addressEOF will give for example this result: lo DOWN dummy0 UNKNOWN 192.0.2.2/24 2001:db8:0:1::2/64 fe80::68e3:bdff:feb0:6e85/64 fe80::e823:d1ff:fe8c:3a15/64 Note: that previous automatic IPv6 link-local ( scope link ) address was also saved, and is thus restored, leading to an additional (and wrong) IPv6 link-local address, because the link/ether address (here in orig 6a:e3:bd:b0:6e:85 ) on which is based the IPv6 link-local address is not saved and thus not restored (leaving here the in copy the other random MAC ea:23:d1:8c:3a:15 on dummy0 ). So care should actually be done to separately save and copy the MAC address of such virtual interfaces if it really matters, or prune after some addresses for physical interfaces. You should probably flush all addresses before restoring them to avoid leaving old ones if the environment wasn't a "clean slate". Contrary to routes below, I couldn't find a simple way to flush all of them in one command without having to state an interface. Using those two should be good enough: ip address flush permanentip address flush temporary On the same principle, routes and rules can be saved and restored: ip route save / ip route restore There's a trick. ip route save will save only the main table, which is good for common use cases, but not with policy routing's additional routing tables. You can state a specific table (eg ip rule save table 220 ) if needed. But the special table 0 represents all tables, using ip route save table 0 will save all of them (including for each route the table it belongs to, like would be displayed with ip route show table 0 ) at once. 
Before restoring routes, it should be preferable to flush all existing routes: ip route flush table 0 all Example showing any routing table can be saved without having to know its value beforehand: # ip route add table 220 unreachable 10.0.0.0/8 metric 9999# ip route show table 220unreachable 10.0.0.0/8 metric 9999 # ip route save table 0 > /tmp/route# ip route flush table 0 all# ip route show table 220## ip route restore table 0 < /tmp/route# ip route show table 220unreachable 10.0.0.0/8 metric 9999 Of course all routes from other tables, including table 254 aka main , are also saved and restored. ip rule save / ip rule restore This one is also tricky because if not flushed before it will add duplicates without complaining, and flushing the rules never flushes rule prio 0, so rule priority 0 has to be explicitly deleted: ip rule fluship rule delete priority 0 So to save and restore: ip rule save > /tmp/rule [...] just deleting, or switching to some other environment etc. ip rule fluship rule delete priority 0ip rule restore < /tmp/rule I hope you can find some usage of this, for example for automatization with multiple network namespaces.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/506504", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/74221/" ] }
506,526
I have the following node in an XML document: <client-version>1.2.8</client-version> How can I replace the node's value, 1.2.8 , with 1.2.9 ? Wanted output: <client-version>1.2.9</client-version>
You would use an XML parser to do this. For example xmlstarlet (a command line XML tool): $ xmlstarlet ed -u '//client-version' -v '1.2.9' file.xml<?xml version="1.0"?><client-version>1.2.9</client-version> The above command would locate all occurrences of the client-version document node and change their values to the string 1.2.9 . To only change the ones that are 1.2.8 , you would use xmlstarlet ed -u '//client-version[text() = "1.2.8"]' -v '1.2.9' file.xml Redirect the output to a new file, inspect it and rename it to the original filename, or run xmlstarlet with its -L or --inplace options to edit the file in-place. Using xq , from yq , from https://kislyuk.github.io/yq/ , which allows you to use jq expressions to modify XML documents: xq -x '(..|."client-version"? // empty) |= "1.2.9"' file.xml This updates the value of each client-version node to 1.2.9 regardless of where in the document it is located. The string 1.2.9 could be inserted from a variable like so: new_version=1.2.9xq -x --arg ver "$new_version" '(..|."client-version"? // empty) |= $ver' file.xml
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/506526", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/341965/" ] }
506,660
$ uname -aLinux laptop 4.19.0-2-amd64 #1 SMP Debian 4.19.16-1 (2019-01-17) x86_64 GNU/Linux I need to use cv::face::createLBPHFaceRecognizer() , which is not a part of the core OpenCV but rather a contributed module. $ dpkg -l libopencv-contrib-devii libopencv-contrib-dev:amd64 3.2.0+dfsg-6 amd64 development files for libopencv-contrib3.2 everything fine ... no: src/cmd.cpp:150: error: ‘cv::face’ has not been declared const auto model = cv::face::createLBPHFaceRecognizer(); ^~~~ OK. Let's then include the needed headers manually: $ dpkg -S libopencv-contrib-devlibopencv-contrib-dev:amd64: /usr/share/doc/libopencv-contrib-devlibopencv-contrib-dev:amd64: /usr/share/doc/libopencv-contrib-dev/changelog.Debian.gzlibopencv-contrib-dev:amd64: /usr/share/doc/libopencv-contrib-dev/copyrightlibopencv-contrib-dev:amd64: /usr/share/doc/libopencv-contrib-dev/README.Debian Nothing! Is this a packager's mistake (this is Debian testing after all)? An OpenCV peculiarity? A minor oversight on my side? I would like to continue using the package manager, instead of compiling the whole thing myself.
The package is fine; you're using the wrong dpkg option: dpkg -L libopencv-contrib-dev will list all the files in the libopencv-contrib-dev package, which is what you're after (and will show all the files listed here ), whereas dpkg -S libopencv-contrib-dev searches all installed packages for files with libopencv-contrib-dev in their path, which only matches the four files you've listed.
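As a quick follow-up for the actual problem (the missing cv::face headers), a couple of illustrative lookups; the exact header path opencv2/face.hpp is an assumption about how the contrib package lays out its files:
    # Which files in the contrib -dev package mention "face"?
    dpkg -L libopencv-contrib-dev | grep -i face
    # Going the other way: which installed package ships a given header?
    dpkg -S opencv2/face.hpp
    # Or search the whole archive, including packages that are not installed
    # (needs: apt install apt-file && apt-file update):
    apt-file search opencv2/face.hpp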
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/506660", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/20506/" ] }
506,807
I want to replace the second, third, fourth and fifth dots in this string 2019-03-17T11:32:28.143343Z;1234.5678;901.234;567.89012;3456.78;192.168.0.1 with commas, to get this result: 2019-03-17T11:32:28.143343Z;1234,5678;901,234;567,89012;3456,78;192.168.0.1 The first dot and the sixth (and any after that) should stay the same. I found this command, which I could execute multiple times (but that is probably not the best practice): echo "$tmp" | sed 's/\./\,/2' How can I get this done in one command?
Your data consists of six ; -delimited fields, and you'd like to replace the dots in fields 2 through to 5 (not 1 or 6) with commas. This is easiest done with awk : awk -F ';' 'BEGIN { OFS=FS } { for (i=2; i<=5; ++i) gsub("\\.", ",", $i); print }' file With the example data given, this produces 2019-03-17T11:32:28.143343Z;1234,5678;901,234;567,89012;3456,78;192.168.0.1 The code simply iterates of the ; -delimited fields of each input line and calls gsub() to do a global search and replace (as you would do with s/\./,/g or y/./,/ in sed ) on the individual fields that the loop iterates over. The modified line is then printed. The -F option sets the input field separator to a semicolon, and we use the BEGIN block to also set the output field separator to the same value (you would otherwise get space-separated fields). Using sed , you might do something like sed 's/\./,/2; s/\./,/2; s/\./,/2; s/\./,/2' file I.e., replace the 2nd dot four times (which one is the 2nd dot will change with each substitution, since you substitute them). This does however assume that the number of values within each field remains static. To work around this in case you at some point have more than two dot-delimited things in a field, you can do sed 'h; s/^[^;]*;//; s/;[^;]*$//; y/./,/; G;H;x; s/;[^\n]*\n/;/; s/\n.*;/;/' file In short, these commands do Copy the original line to the hold space. Remove the first and last fields in the pattern space. Change all dots to commas in the pattern space (that's the y command). All dots that should change into commas have now been changed. Now we must reassemble the line from the middle bit in the pattern space and the original data in the hold space. Make (with G;H;x ) the pattern space contain The original string, followed by a newline, The modified middle bit, followed by a newline The original string again. So now the pattern space contains three lines . Remove everything but the first field on the first line, and the newline, and replace that removed bit with a ; . Do a similar thing with the last line, i.e. remove the (now lone) newline and everything up to the last ; , and replace with a ; . Done. Or you could just use the awk code.
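Since the original attempt operates on a shell variable, here is a pure-bash sketch of the same field-wise idea (it assumes the line is in $tmp and has six ;-separated fields like the example):
    #!/bin/bash
    tmp='2019-03-17T11:32:28.143343Z;1234.5678;901.234;567.89012;3456.78;192.168.0.1'
    IFS=';' read -r -a f <<< "$tmp"       # split the fields into an array
    for i in 1 2 3 4; do                  # fields 2..5 (0-based indices 1..4)
        f[i]=${f[i]//./,}                 # replace every dot in that field
    done
    (IFS=';'; printf '%s\n' "${f[*]}")    # re-join with semicolons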
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/506807", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/215754/" ] }
506,815
This question is strongly related to this and this question. I have a file that contains several lines where each line is a path to a file. Now I want to pair each line with each different line (not itself). Also a pair A B is equal to a B A pair for my purposes, so only one of these combinations should be produced. Example files.dat reads like this in shorthand notation, where each letter is a file path (absolute or relative) abcde Then my result should look something like this: a ba ca da eb cb db ec dc ed e Preferably I would like to solve this in bash. Unlike the other questions, my file list is rather small (about 200 lines), so using loops and RAM capacity pose no problems.
Use this command: awk '{ name[$1]++ } END { PROCINFO["sorted_in"] = "@ind_str_asc" for (v1 in name) for (v2 in name) if (v1 < v2) print v1, v2 } ' files.dat PROCINFO is a gawk extension. If your awk doesn’t support it, just leave out the PROCINFO["sorted_in"] = "@ind_str_asc" line and pipe the output into sort (if you want the output sorted). (This does not require the input to be sorted.)
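Since the question states a preference for bash and the list is small, a plain-bash sketch of the same pairing (assuming files.dat holds one path per line; paths may contain spaces but not newlines):
    #!/bin/bash
    mapfile -t paths < files.dat                 # one array element per line
    n=${#paths[@]}
    for ((i = 0; i < n - 1; i++)); do
        for ((j = i + 1; j < n; j++)); do        # j > i, so "a b" is printed but never "b a"
            printf '%s %s\n' "${paths[i]}" "${paths[j]}"
        done
    done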
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/506815", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/342198/" ] }
506,833
Is there a canonical way to get all the logs from journalctl since a service was last restarted? What I want to do is restart a service and immediately see all the logs since I initiated the restart. I came up with: $ unit=prometheus$ sudo systemctl restart $unit$ since=$(systemctl show $unit | grep StateChangeTimestamp= | awk -F= '{print $2}')$ sudo systemctl status -n0 $unit && sudo journalctl -f -u $unit -S "$since" This will probably work, but I was wondering if there is a more concrete way to say: restart and give me all logs from that point onwards.
You can use the invocation id , which is a unique identifier for a specific run of a service unit. It was introduced in systemd v232, so you need at least that version of systemd for this to work. To get the invocation id of the current run of the service: $ unit=prometheus$ systemctl show -p InvocationID --value "$unit"0e486642eb5b4caeaa5ed1c56010d5cf And then to search journal entries with that invocation id attached to them: $ journalctl INVOCATION_ID=0e486642eb5b4caeaa5ed1c56010d5cf + _SYSTEMD_INVOCATION_ID=0e486642eb5b4caeaa5ed1c56010d5cf I found that you need both INVOCATION_ID and _SYSTEMD_INVOCATION_ID to get all the logs. The latter is added by systemd for logs output by the unit itself (e.g. the stdout of the process running in that service), while the former is attached to events logged by systemd itself (e.g. the "Starting" and "Started" messages for that unit). Note that you don't need to filter by the unit name as well. Since the invocation id is unique, filtering by the id itself is sufficient to only include logs for the service you're interested in.
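Putting it together as a small helper script (the unit name is only an example, and the -f follow flag is optional):
    #!/bin/bash
    # Restart a unit and show only the logs belonging to the new invocation.
    unit=${1:-prometheus}
    sudo systemctl restart "$unit"
    id=$(systemctl show -p InvocationID --value "$unit")
    sudo journalctl -f "INVOCATION_ID=$id" + "_SYSTEMD_INVOCATION_ID=$id"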
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/506833", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/339746/" ] }
506,866
dnf search linput and dnf search lgbm don't yield any results. How can I get these in Fedora? Edit: Backstory I'm trying to build a Rust program, but it won't compile because apparently I'm missing some things. It said: = note: /usr/bin/ld: cannot find -lxkbcommon /usr/bin/ld: cannot find -lxkbcommon /usr/bin/ld: cannot find -linput /usr/bin/ld: cannot find -lgbm collect2: error: ld returned 1 exit status I installed lxkbcommon (edit: I actually installed libxkbcommon . Not sure how I missed that.) via dnf install libxkbcommon-devel and then the output looked like this: = note: /usr/bin/ld: cannot find -linput /usr/bin/ld: cannot find -lgbm collect2: error: ld returned 1 exit status So I figured I needed something called linput and lgbm as well, only I cannot find those with dnf search and I'm coming up empty-handed with google.
What you are getting are error messages from the linker ( ld ), which is complaining that the libraries you are looking for are not available. A message such as /usr/bin/ld: cannot find -linput actually means it was looking for a file named libinput.so . The -l flag is a command-line argument (to ld or to gcc ) that expects a library name to follow; that name is used to form the file name, adding the lib prefix and the .so suffix (for a dynamically loadable library, which is what most distributions, Fedora included, typically use). So it turns out that the files you need are libinput.so and libgbm.so . You can then use dnf provides to search for those files. Assuming you're using a 64-bit distribution, these libraries would be in /usr/lib64 , so the full commands would be: $ dnf provides /usr/lib64/libinput.solibinput-devel-1.12.6-3.fc30.x86_64 : Development files for libinputRepo : rawhideMatched from:Filename : /usr/lib64/libinput.so$ dnf provides /usr/lib64/libgbm.somesa-libgbm-devel-19.0.0~rc7-1.fc30.x86_64 : Mesa libgbm development packageRepo : rawhideMatched from:Filename : /usr/lib64/libgbm.so If you don't know the exact directory, you can also use dnf provides '*/libinput.so' or other wildcards if you have even less information about the files you want to search for (and are willing to sort through more search results looking for something useful). In your case, it seems what you need is: $ sudo dnf install libinput-devel mesa-libgbm-devel From that point on, dnf should also bring in all the other dependencies you need. Hopefully this is all you're missing to build your software. But if you run into further missing libraries, you can use this information to find the packages that ship them, assuming they're available in Fedora.
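If more libraries turn up missing later, the same lookup can be scripted; a sketch that maps each missing -lNAME flag to the package shipping libNAME.so (the lib<name>.so naming follows the convention described above):
    #!/bin/bash
    for name in input gbm; do
        dnf provides "*/lib${name}.so"
    done
    # then install the matching -devel packages, e.g.:
    # sudo dnf install libinput-devel mesa-libgbm-devel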
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/506866", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/289438/" ] }
506,891
Let's say I have an associative array in bash , declare -A hashhash=( ["foo"]=aa ["bar"]=bb ["baz"]=aa ["quux"]=bb ["wibble"]=cc ["wobble"]=aa) where both keys and values are unknown to me (the actual data is read from external sources). How may I create an array of the keys corresponding to the same value, so that I may, in a loop over all unique values, do printf 'Value "%s" is present with the following keys: %s\n' "$value" "${keys[*]}" and get the output (not necessarily in this order) Value "aa" is present with the following keys: foo baz wobbleValue "bb" is present with the following keys: bar quuxValue "cc" is present with the following keys: wibble The important bit is that the keys are stored as separate elements in the keys array and that they therefore do not need to be parsed out of a text string. I could do something like declare -A seenseen=()for value in "${hash[@]}"; do if [ -n "${seen[$value]}" ]; then continue fi keys=() for key in "${!hash[@]}"; do if [ "${hash[$key]}" = "$value" ]; then keys+=( "$key" ) fi done printf 'Value "%s" is present with the following keys: %s\n' \ "$value" "${keys[*]}" seen[$value]=1done But it seems a bit inefficient with that double loop. Is there a piece of array syntax that I've missed for bash ? Would doing this in e.g. zsh give me access to more powerful array manipulation tools? In Perl, I would do my %hash = ( 'foo' => 'aa', 'bar' => 'bb', 'baz' => 'aa', 'quux' => 'bb', 'wibble' => 'cc', 'wobble' => 'aa');my %keys;while ( my ( $key, $value ) = each(%hash) ) { push( @{ $keys{$value} }, $key );}foreach my $value ( keys(%keys) ) { printf( "Value \"%s\" is present with the following keys: %s\n", $value, join( " ", @{ $keys{$value} } ) );} But bash associative arrays can't hold arrays... I'd also be interested in any old school solution possibly using some form of indirect indexing (building a set of index array(s) when reading the values that I said I had in hash above?). It feels like there ought to be a way to do this in linear time.
zsh to reverse keys <=> values In zsh , where the primary syntax for defining a hash is hash=(k1 v1 k2 v2...) like in perl (newer versions also support the awkward ksh93/bash syntax for compatibility though with variations when it comes to quoting the keys) keys=( "${(@k)hash}" )values=( "${(@v)hash}" )typeset -A reversedreversed=( "${(@)values:^keys}" ) # array zipping operator Or using the Oa parameter expansion flag to reverse the order of the key+value list: typeset -A reversedreversed=( "${(@kvOa)hash}" ) or using a loop: for k v ( "${(@kv)hash}" ) reversed[$v]=$k The @ and double quotes are to preserve empty keys and values (note that bash associative arrays don't support empty keys). As the expansion of elements in associative arrays is in no particular order, if several elements of $hash have the same value (which will end up being a key in $reversed ), you can't tell which key will be used as the value in $reversed . for your loop You'd use the R hash subscript flag to get elements based on value instead of key, combined with e for exact (as opposed to wildcard) match, and then get the keys for those elements with the k parameter expansion flag: for value ("${(@u)hash}") print -r "elements with '$value' as value: ${(@k)hash[(Re)$value]}" your perl approach zsh (contrary to ksh93 ) doesn't support arrays of arrays, but its variables can contain the NUL byte, so you could use that to separate elements if the elements don't otherwise contain NUL bytes, or use the ${(q)var} / ${(Q)${(z)var}} to encode/decode a list using quoting. typeset -A seenfor k v ("${(@kv)hash}") seen[$v]+=" ${(q)k}"for k v ("${(@kv)seen}") print -r "elements with '$k' as value: ${(Q@)${(z)v}}" ksh93 ksh93 was the first shell to introduce associative arrays in 1993. The syntax for assigning values as a whole means it's very difficult to do it programmatically, unlike in zsh , but at least it's somewhat justified in ksh93 in that ksh93 supports complex nested data structures. In particular, here ksh93 supports arrays as values for hash elements, so you can do: typeset -A seenfor k in "${!hash[@]}"; do seen[${hash[$k]}]+=("$k")donefor k in "${!seen[@]}"; do print -r "elements with '$k' as value ${seen[$k][@]}"done bash bash added support for associative arrays over a decade later, copied the ksh93 syntax, but not the other advanced data structures, and doesn't have any of the advanced parameter expansion operators of zsh. In bash , you could use the quoted list approach mentioned in the zsh section, using printf %q or with newer versions ${var@Q} . typeset -A seenfor k in "${!hash[@]}"; do printf -v quoted_k %q "$k" seen[${hash[$k]}]+=" $quoted_k"donefor k in "${!seen[@]}"; do eval "elements=(${seen[$k]})" echo -E "elements with '$k' as value: ${elements[@]}"done As noted earlier however, bash associative arrays don't support the empty value as a key, so it won't work if some of $hash 's values are empty. You could choose to replace the empty string with some placeholder like <EMPTY> or prefix the key with some character that you'd later strip for display.
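For reference, a runnable demo of the bash approach above, filled in with the hash from the question (the order in which the value groups are printed is unspecified):
    #!/bin/bash
    declare -A hash=( [foo]=aa [bar]=bb [baz]=aa [quux]=bb [wibble]=cc [wobble]=aa )
    declare -A seen=()
    for k in "${!hash[@]}"; do
        printf -v quoted_k '%q' "$k"
        seen[${hash[$k]}]+=" $quoted_k"    # append the quoted key to its value's group
    done
    for v in "${!seen[@]}"; do
        eval "keys=(${seen[$v]})"          # safe to eval: every element was %q-quoted above
        printf 'Value "%s" is present with the following keys: %s\n' "$v" "${keys[*]}"
    done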
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/506891", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/116858/" ] }
506,892
I am trying to understand some performance issues related to sed and awk , and I did the following experiment, $ seq 100000 > test$ yes 'NR==100001{print}' | head -n 5000 > test.awk$ yes '100001{p;b}' | head -n 5000 > test.sed$ time sed -nf test.sed testreal 0m3.436suser 0m3.428ssys 0m0.004s$ time awk -F@ -f test.awk testreal 0m11.615suser 0m11.582ssys 0m0.007s$ sed --versionsed (GNU sed) 4.5$ awk --versionGNU Awk 4.2.1, API: 2.0 (GNU MPFR 3.1.6-p2, GNU MP 6.1.2) Here, since the test file only contains 100000 lines, all the commands in test.sed and test.awk are no-ops. Both programs only need to match the line number with the address (in sed ) or NR (in awk ) to decide that the command does not need to be executed, but there is still a huge difference in the time cost. Why is this the case? Is there anyone with different versions of sed and awk installed who gets a different result on this test? Edit : The results for mawk (as suggested by @mosvy), original-awk (the name for "one true awk" on Debian-based systems, suggested by @GregA.Woods) and perl are given below, $ time mawk -F@ -f test.awk testreal 0m5.934suser 0m5.919ssys 0m0.004s$ time original-awk -F@ -f test.awk testreal 0m8.132suser 0m8.128ssys 0m0.004s$ yes 'print if $.==100001;' | head -n 5000 > test.pl$ time perl -n test.pl testreal 0m33.245suser 0m33.110ssys 0m0.019s$ mawk -W versionmawk 1.3.4 20171017$ perl --versionThis is perl 5, version 28, subversion 1 (v5.28.1) built for x86_64-linux-thread-multi Replacing -F@ with -F '' does not make any observable change in the case of gawk and mawk . original-awk does not support empty FS . Edit 2 The test by @mosvy gives different results, 21s for sed and 11s for mawk ; see the comment below for details.
awk has a wider feature set than sed , with a more flexible syntax. So it's not unreasonable that it'll take longer both to parse its scripts and to execute them. As your example command (the part inside the braces) never runs, the time-sensitive part should be your test expression. awk First, look at the test in the awk example: NR==100001 and see the effects of that in gprof (GNU awk 4.0.1): % cumulative self self total time seconds seconds calls s/call s/call name 55.89 19.73 19.73 1 19.73 35.04 interpret 8.90 22.87 3.14 500000000 0.00 0.00 cmp_scalar 8.64 25.92 3.05 1000305023 0.00 0.00 free_wstr 8.61 28.96 3.04 500105014 0.00 0.00 mk_number 6.09 31.11 2.15 500000001 0.00 0.00 cmp_nodes 4.18 32.59 1.48 500200013 0.00 0.00 unref 3.68 33.89 1.30 500000000 0.00 0.00 eval_condition 2.21 34.67 0.78 500000000 0.00 0.00 update_NR ~50% of the time is spent in "interpret", the top-level loop to run the opcodes resulting from the parsed script. Every time the test is run (i.e. 5000 script lines * 100000 input lines), awk has to: Fetch the built-in variable "NR" ( update_NR ). Convert the string "100001" ( mk_number ). Compare them ( cmp_nodes , cmp_scalar , eval_condition ). Discard any temporary objects needed for the comparison ( free_wstr , unref ). Other awk implementations won't have the exact same call flow, but they will still have to retrieve variables, automatically convert, then compare. sed By comparison, in sed , the "test" is much more limited. It can only be a single address, an address range, or nothing (when the command is the first thing on the line), and sed can tell from the first character whether it's an address or a command. In the example, it's 100001 ...a single numerical address. The profile (GNU sed 4.2.2) shows % cumulative self self total time seconds seconds calls s/call s/call name 52.01 2.98 2.98 100000 0.00 0.00 execute_program 44.16 5.51 2.53 1000000000 0.00 0.00 match_address_p 3.84 5.73 0.22 match_an_address_p[...] 0.00 5.73 0.00 5000 0.00 0.00 in_integer Again, ~50% of the time is in the top-level execute_program . In this case, it's called once per input line, then loops over the parsed commands. The loop starts with an address check, but that's not all it does in your example (see later). The line numbers in the input script were parsed at compile-time ( in_integer ). That only has to be done once for each address number in the input, i.e. 5000 times, and doesn't make a significant contribution to the overall running time. That means that the address check, match_address_p , only compares integers that are already available (through structs and pointers). further sed improvements The profile shows that match_address_p is called 2*5000*100000 times, i.e. twice per script-line*input-line. This is because, behind the scenes, GNU sed treats the "start block" command 100001{...} as a negated branch to the end of the block 100001!b end; ... :end This address match succeeds on every input line, causing a branch to the end of the block ( } ). That block-end has no associated address, so it's another successful match. That explains why so much time is spent in execute_program . So that sed expression would be even faster if it omitted the unused ;b , and the resulting unnecessary {...} , leaving only 100001p .
% cumulative self self total time seconds seconds calls s/call s/call name 71.43 1.40 1.40 500000000 0.00 0.00 match_address_p 24.49 1.88 0.48 100000 0.00 0.00 execute_program 4.08 1.96 0.08 match_an_address_p That halves the number of match_address_p calls, and also cuts most of the time spent in execute_program (because the address match never succeeds).
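The effect can also be seen without a profiler by simply timing the two sed variants side by side (a sketch reusing the generator commands from the question; absolute numbers will differ per machine):
    #!/bin/bash
    seq 100000 > test
    yes '100001{p;b}' | head -n 5000 > test-block.sed   # original form, with the block
    yes '100001p'     | head -n 5000 > test-plain.sed   # simplified form, no block
    time sed -nf test-block.sed test
    time sed -nf test-plain.sed test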
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/506892", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/259023/" ] }
506,894
Goal: boot up from a fully installed Debian 9 USB drive. I installed Debian onto my USB drive but it fails to show up in the boot options. This is the link where I downloaded the iso file. Below is what I have tried: I burned the iso onto my first USB drive using Rufus. I then booted from that live USB and ran through the setup process, installing the OS onto my second USB. During the process, it did not show any errors (it was able to detect my second USB). However, after I finished the setup, the second USB did not show up as one of the boot options. I checked the boot sequence; my first priority is the USB port. I repeated all the steps mentioned above on my friend's laptop (Windows 10) and it worked on his laptop. Update: 1) I unplugged the internal hard drive and set up Debian without it; it just tells me there is no bootable drive. 2) I ran the checksum: C:\Users\PC\Downloads\ISO>certutil -hashfile debian-live-9.8.0-amd64-gnome+nonfree.iso MD5MD5 hash of debian-live-9.8.0-amd64-gnome+nonfree.iso:83436d6e797c75084dbeba203f5a818dCertUtil: -hashfile command completed successfully. It is the same as on the official website. 3) I tried to copy the EFI files /EFI/boot and /EFI/debian from Windows and pasted them to the USB ESP partition. 4) I also took out my second internal hard drive and inserted a new hard drive to install Debian onto it.
Here is how I fixed it. This is what the files in the USB ESP partition should look like: full-install USB ESP partition EFI (directory) Boot (directory) bootx64.efi grubx64.efi fbx64.efi debian (directory) grubx64.efi However, there were NO EFI FILES on my USB after I finished installing (so please check your ESP partition!). So I had to copy the Boot folder and the debian folder from the Windows ESP partition, and copy grubx64.efi into the Boot folder. For details about how to access the EFI partition in Windows, please check out this link . For details about how to access the EFI partition on the USB, please check out this link . NOTE: to access the EFI partition on the USB you have to use some Linux distro's live USB; I was using a Kali live USB.
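A rough sketch of the copy step from a Linux live session; the device names /dev/sda1 (Windows ESP) and /dev/sdb1 (the USB's ESP) are pure assumptions, so check yours with lsblk first:
    #!/bin/bash
    sudo mkdir -p /mnt/win-esp /mnt/usb-esp
    sudo mount /dev/sda1 /mnt/win-esp          # Windows EFI System Partition (assumed)
    sudo mount /dev/sdb1 /mnt/usb-esp          # the USB's EFI System Partition (assumed)
    sudo mkdir -p /mnt/usb-esp/EFI
    sudo cp -r /mnt/win-esp/EFI/Boot   /mnt/usb-esp/EFI/
    sudo cp -r /mnt/win-esp/EFI/debian /mnt/usb-esp/EFI/
    sudo cp /mnt/usb-esp/EFI/debian/grubx64.efi /mnt/usb-esp/EFI/Boot/
    sudo umount /mnt/win-esp /mnt/usb-esp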
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/506894", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/342097/" ] }