source_id (int64: 1 to 74.7M) | question (stringlengths: 0 to 40.2k) | response (stringlengths: 0 to 111k) | metadata (dict) |
---|---|---|---|
663,924 | I have a SQL resultset exported as JSON, in the form of 1:1 mappings in an array. e.g. [ { "subject": "Accounting", "level": "A Level" }, { "subject": "Accounting", "level": "IB" }, { "subject": "Accounting", "level": "University" }, { "subject": "Accounting", "level": "GCSE" }, { "subject": "Accounting", "level": "Mentoring" }, { "subject": "Accounting", "level": "13 Plus" }, { "subject": "Accounting", "level": "11 Plus" },etc..... I would like to coalesce the levels into a single subject key like so: [ "Accounting": ["A Level", "IB", "University"], "Foo": ["Foo Intro", "Foo Basics", "Intermediate Foo"] ] Note: no anonymous objects How can I achieve this with jq? | Given $ jq '.' file.json[ { "subject": "Accounting", "level": "A Level" }, { "subject": "Foo", "level": "IB" }, { "subject": "Accounting", "level": "University" }, { "subject": "Foo", "level": "GCSE" }, { "subject": "Accounting", "level": "Mentoring" }, { "subject": "Accounting", "level": "13 Plus" }, { "subject": "Foo", "level": "11 Plus" }] then borrowing heavily from Reshape a jq array with summarized data $ jq 'reduce .[] as $d (null; .[$d.subject] += [$d.level])' file.json { "Accounting": [ "A Level", "University", "Mentoring", "13 Plus" ], "Foo": [ "IB", "GCSE", "11 Plus" ]} Note: the outer level is an object rather than an array (as in your expected output - which doesn't appear to be valid JSON). | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/663924",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/486108/"
]
} |
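The reduce-based grouping shown above can also be expressed with group_by; a minimal sketch, assuming the same file.json input:

    # group objects by subject, then turn each group into { subject: [levels] }
    jq 'group_by(.subject)
        | map({ key: .[0].subject, value: map(.level) })
        | from_entries' file.json

Unlike the reduce version, group_by sorts its groups, so the subject keys come out in alphabetical order.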
663,936 | I guess this may be a naive question but I can't get my head around so I felt like asking...I was searching for some solution to a problem, when I found this very interesting post about why is using [while|for] loops in bash considered bad practice. There is a very good explanation in the post (see the chosen answer) but I can't find anything that solves the issues that are discussed. I searched extensively: I googled (or duckduckgo-ed) how to read a file in bash and all the results I am getting point towards a solution that, according to the above-mentioned post, is absolutely non-bash style and something that should be avoided. In particular, we have this: while read line; do echo $line | cut -c3done and this: for line in `cat file`; do foo=`echo $line | awk '{print $2}'` echo whatever $foodone that are indicated as very bad examples of shell scripting. At this point I am wondering, and this is the actual question: if the posted while loops should be avoided because they are bad practice and whatever...what am I supposed to do, instead? EDIT: I see that I am already having comments/questions addressing the exact issue with the while loop, so I feel like to widen the question a bit. Basically, what I am understanding is that I need to dig deeper into bash commands, and that is the real thing that I should do. But, when one searches around, it looks like people are, in the general case, using and teaching bash in an improper way (as per my google-ing). | The point of the post you linked to is to explain that using bash to parse text files is a bad idea in general . It isn't specifically about using loops and there is nothing intrinsically wrong with shell loops in other contexts. Nobody is saying that a shell script with while is somehow bad. That other post is saying that you shouldn't try to parse text files using the shell and you should instead use other tools. To clarify, when I say "using the shell" I mean using the shell's internal tools to open the file, extract the data and parse it. For example something like this: while read number; do if [ $number -gt 10 ]; then echo "The number '$number' is greater than 10" else echo "The number '$number' is less than or equal to 10"done < numbers.txt Please read the answers at Why is using a shell loop to process text considered bad practice? for details on why this sort of thing is a bad idea. Here, I will only clarify that that post isn't arguing against shell loops in general, but against using shell loops (or the shell) for parsing files. The reason you don't find suggestions for better ways of doing it with bash is that there are no good ways of doing this with bash or any other shell. No matter what you do, parsing text using a shell will be slow, cumbersome, and error prone. Shells are primarily designed as a way of entering commands to be run by the computer. They can be used as scripting languages but, again, they are at their best when given commands to run and not when used instead of commands designed to handle text parsing. Shells are tools and just like any other tool, they should be used for the purpose they were designed for. The problem is that many people have learned a little bit of shell scripting, so they have a tool, a "hammer". Because all they know is a hammer, every problem they encounter looks like a nail to them and they try and use their hammer on this nail. Sadly, parsing text is not something that the shell was designed to handle, it isn't a "nail", so using a "hammer" is just not a good idea. 
So, the answer to "how should I read a file in bash" is very simply "you should not use bash and instead use a tool that is appropriate for the job". | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/663936",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/352134/"
]
} |
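As a concrete illustration of "use a tool designed for the job", the two loops quoted in the question can usually be replaced by a single invocation of that tool; a sketch, assuming the data is in a file named file:

    # print the 3rd character of every line (replaces the while/read/cut loop)
    cut -c3 file

    # print "whatever" followed by the 2nd field of every line
    # (replaces the for/cat/awk loop)
    awk '{print "whatever", $2}' file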
663,944 | FILE-A has 100,000 lines. FILE-B is 50 search terms. I'm looking to complete the search of FILE-A (CSV or TXT) with the various terms from FILE-B (CSV or TXT) AND -- here is the kicker -- save the results in individual TXT files based off the search terms from FILE-B. Example: FILE-A 1234567812398702349878397423668764196784978991023786723 FILE-B 122378 Results = "1.txt" with all matching lines from FILE-A, "2.txt" with all lines matching from FILE-A, "23.txt", "78.txt" and so on. So if FILE-B has 50 search terms, I would end up with 50 TXT files, named with the search term, assuming at least one hit with said term from FILE-A. I have searched using "fgrep -f FILE-B.txt FILE-A.csv >> output.txt" This puts all of the search terms from FILE-B found in FILE-A into one "output.txt". I'm instead looking to separate them into individual text files. | Grep + Xargs xargs -d '\n' sh -c ' for term; do grep "$term" fileA > "$term.txt"; done' xargs-sh < fileB Improved by cas . Grep + Shell Generally using shell loops to read a file is bad practice , but here fileB is much smaller than fileA so it won't significantly hurt performance. while IFS= read -r term; do grep "$term" fileA > "$term.txt"done < fileB Awk awk 'NR==FNR{pat[$0];next}{for(term in pat){if($0~term){print>term}}}' fileB fileA NR==FNR{pat[$0];next} reads the first file given as an argument and puts each line in the array pat . {for(term in pat){if($0~term){print>term}}} is self-explainable: For each term in the array, test if the current line matches the term and print it to a file named accordingly if yes. Not all Awks will allow for many files to be open at the same time. One way to tackle this, as suggested by Ed Morton , is to use a close statement and to use the append operator: awk 'NR==FNR{pat[$0];next}{for(term in pat){if($0~term){print>>term;close(term)}}}' fileB fileA | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/663944",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/390113/"
]
} |
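Since the question used fgrep, note that the variants above treat each search term as a regular expression; a fixed-string sketch, assuming the same fileA/fileB names, could look like:

    # -F treats the term as a literal string rather than a regex;
    # -e marks it as a pattern even if it happens to start with a dash
    while IFS= read -r term; do
        grep -F -e "$term" fileA > "$term.txt"
    done < fileB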
664,311 | As answered in Highlight the current date in cal the current date in output form cal is automatically highlighted (reverse colors) if the output goes to terminal. That's what I had always been getting. However, with my current Debian GNU/Linux, it is not the case any more, and I'm wondering what the fix is. $ echo $TERMxterm$ lsb_release -a No LSB modules are available.Distributor ID: DebianDescription: Debian GNU/Linux bullseye/sidRelease: testingCodename: bullseye | I believe the correct "Answer" to this question is documented here on GitHub To quote add alias cal="if [ -t 1 ] ; then ncal -b ; else /usr/bin/cal ; fi" into your shell rc file. This is an extremely irritating change. Changing the behavior of a frequently used cli command for at least 17 years to make it "correct" is kind of insane. Now I understand why so many people hate Windows so much but are still reluctant to switch to Linux. I'm pretty sure almost all package maintainer who use cal (actually I think majority of them uses date anyway) are trained to use cal -h to turn off the highlight. Now the change even breaks compatibility with cal -h . The change is documented here A simpler hack to solve the "no highlight" is to alias cal to ncal -b , but it is not 100% correct with the package ncal maintainer's expectation. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/664311",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/374303/"
]
} |
664,396 | I have a script where I dinamically change the arguments which must be passed to a command ( mkvpropedit in this case). Consider the example script below: #!/bin/bashLANG_NAME="eng lish"MYSETTINGS=()MYSETTINGS+=("--edit 1")MYSETTINGS+=("--set \"language=${LANG_NAME}\"")echo "Count = ${#MYSETTINGS[@]}" # should be 2set -x # enable to see the invoked commandmkvpropedit ${MYSETTINGS[@]} When I run this, I get in the console: [~] # ./test.shCount = 2+ mkvpropedit --edit 1 --set '"language=eng' 'lish"' But I would like not having the single quotes on the final invocation of mkvpropedit , like so: + mkvpropedit --edit 1 --set "language=eng lish" I tried also echoing the array into a variable, and echo removes the single quote, but then I'm not able to use the variable as an argument of mkvpropedit because the single quotes appear again... Of course the script has to work also if the variable is a single word, such as LANG_NAME="eng" . My Bash version is 3.2 (Busybox, actually). Updated question Probably the example below better explains what I'm trying to do. I've changed some names to be replicable. #!/bin/bashTITLE_NAME="my title"MYSETTINGS=()MYSETTINGS+=("--edit track:2")MYSETTINGS+=("--set \"name=${TITLE_NAME}\"")set -xmkvpropedit file.mkv ${MYSETTINGS[@]} If I run this script, I get (due to the wrong quote): # ./test.sh+ mkvpropedit file.mkv --edit track:2 --set '"name=my' 'title"'Error: More than one file name has been given ('file.mkv' and 'title"'). While if I run, manually: # mkvpropedit file.mkv --edit track:2 --set "name=my title"The file is being analyzed.The changes are written to the file.Done. So it's definitely a quoting issue; I would like to invoke mkvpropedit using the array in the script. Using eval What seems to work, at the moment, is inserting mkvpropedit and file.mkv into the array and eventually call eval "${MYSETTINGS[@]}" , but is it worth and safe? Isn't eval evil (pun intended)? TITLE_NAME="my title"MYSETTINGS=(mkvpropedit file.mkv)MYSETTINGS+=("--edit track:2")MYSETTINGS+=("--set \"name=${TITLE_NAME}\"")set -xeval "${MYSETTINGS[@]}" Returns: # ./test.sh+ eval mkvpropedit file.mkv '--edit track:2' '--set "name=my title"'++ mkvpropedit file.mkv --edit track:2 --set 'name=my title'The file is being analyzed.The changes are written to the file.Done. | There are no single quotes - that's just the shell's unambiguous representation of the variable's contents when you use set -x . 
You can see that if you instead look at the array elements using declare -p or by printing them one at a time: LANG_NAME="eng lish"MYSETTINGS=()MYSETTINGS+=("--edit 1")MYSETTINGS+=("--set \"language=${LANG_NAME}\"") then $ declare -p MYSETTINGSdeclare -a MYSETTINGS=([0]="--edit 1" [1]="--set \"language=eng lish\"") or $ printf '>>>%s<<<\n' "${MYSETTINGS[@]}">>>--edit 1<<<>>>--set "language=eng lish"<<< However , you almost certainly want to pass --edit , 1 , --set , and language=eng lish as separate tokens to the command, which means quoting each token that contains whitespace or glob characters during array construction, like language="${LANG_NAME}" or "language=${LANG_NAME}" double quoting the array expansion when you use it (to prevent word-splitting and filename generation - aka "split+glob") So LANG_NAME="eng lish"MYSETTINGS=()MYSETTINGS+=(--edit 1)MYSETTINGS+=(--set language="${LANG_NAME}") then mkvpropedit file.mkv "${MYSETTINGS[@]}" Note that you do not need additional double quotes around the variable expansion, because double-quoted "${name[@]}" expands each element of name to a separate word without further tokenization - further quotes like \"name=${TITLE_NAME}\" would be passed to the command literally. See also How can we run a command stored in a variable? | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/664396",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/188792/"
]
} |
664,486 | I have an Ubuntu server 20.04 with an encrypted 50GB LVM root partition and I just realized the filesystem itself only shows 25GB The install was default (apart from the encryption bit) and I don't understand why it didn't use all the space for the root partition? How do I expand the root filesystem? PV VG Fmt Attr PSize PFree /dev/mapper/dm_crypt-0 ubuntu-vg lvm2 a-- 48.48g <24.24g VG #PV #LV #SN Attr VSize VFree ubuntu-vg 1 1 0 wz--n- 48.48g <24.24g LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert ubuntu-lv ubuntu-vg -wi-ao---- 24.24g | why it didn't use all the space for the root partition? When the logical volume was created, only 24.24 GB was allocated for it. That can actually be a good thing: the remainder can be used to create another logical volume if you find you need one for some reason, or you can use the free space to extend an existing logical volume, even while its filesystem is mounted and in use. Having some unallocated space held in reserve can be a good thing, as it allows you to react to unexpected future requirements easily: A filesystem needs more space than expected? No problem, you can extend it on-line. (Extending a filesystem is usually much easier than shrinking one, so lowballing the expected requirements and then extending as needed can be a good strategy.) You need a small filesystem with special mount options for a chroot jail? Just create a new LV for it using some of the unallocated space. How do I expand the root filesystem? For example, to extend it by 5 GB: sudo lvextend --resizefs -L +5G ubuntu-vg/ubuntu-lv Or if you want to use all the remaining unallocated capacity to extend the root filesystem: sudo lvextend --resizefs -l +100%FREE ubuntu-vg/ubuntu-lv If you don't use the --resizefs option, then the command will just extend the logical volume but not the filesystem inside it. Then you must use another command to tell the filesystem to take advantage of the extension: either fsadm resize /dev/mapper/ubuntu--vg-ubuntu--lv or a filesystem-specific command like resize2fs /dev/mapper/ubuntu--vg-ubuntu--lv or xfs_growfs / . (The --resizefs option of lvextend will actually just run the fsadm resize ... command for you once the LV is successfully extended.) | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/664486",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/194276/"
]
} |
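The effect can be checked before and after extending; a short sketch assuming the same ubuntu-vg/ubuntu-lv names:

    sudo vgs ubuntu-vg            # VFree column: unallocated space left in the volume group
    sudo lvs ubuntu-vg/ubuntu-lv  # current size of the logical volume
    df -h /                       # size of the mounted root filesystem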
664,489 | On my windows 10 system I can't boot into a parrot os gpt formatted pendrive. I used rufus to format it. Whenever i choose the pendrive from boot menu i get the grub command line . Then i tried formatting my pendrive as mbr and then i tried booting from it and it booted successfully. But i still can't install it because my hdd is in gpt. I can live boot into parrot os. I also can't use chroot from live boot as i dont have linux installed. I don't know why am i getting the grub command line as I don't have any linux ditro installed in my system. | why it didn't use all the space for the root partition? When the logical volume was created, only 24.24 GB was allocated for it. That can actually be a good thing: the remainder can be used to create another logical volume if you find you need one for some reason, or you can use the free space to extend an existing logical volume, even while its filesystem is mounted and in use. Having some unallocated space held in reserve can be a good thing, as it allows you to react to unexpected future requirements easily: A filesystem needs more space than expected? No problem, you can extend it on-line. (Extending a filesystem is usually much easier than shrinking one, so lowballing the expected requirements and then extending as needed can be a good strategy.) You need a small filesystem with special mount options for a chroot jail? Just create a new LV for it using some of the unallocated space. How do I expand the root filesystem? For example, to extend it by 5 GB: sudo lvextend --resizefs -L +5G ubuntu-vg/ubuntu-lv Or if you want to use all the remaining unallocated capacity to extend the root filesystem: sudo lvextend --resizefs -l +100%FREE ubuntu-vg/ubuntu-lv If you don't use the --resizefs option, then the command will just extend the logical volume but not the filesystem inside it. Then you must use another command to tell the filesystem to take advantage of the extension: either fsadm resize /dev/mapper/ubuntu--vg-ubuntu--lv or a filesystem-specific command like resize2fs /dev/mapper/ubuntu--vg-ubuntu--lv or xfs_growfs / . (The --resizefs option of lvextend will actually just run the fsadm resize ... command for you once the LV is successfully extended.) | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/664489",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/486471/"
]
} |
664,504 | I want to count and sum number of matches in a file with my awk regex. The file file contains: Gra pesgra ndmastraw berryblue Berrybananapeanutschool I need to make a regex for pattern matching, but I am unsure of how to implement AND/OR in regex, despite them having same precedence. I have tried: awk 'tolower($1) ~ /(gra|straw) (pes|berry)|banana|peanut/ {sum+=1} END {print sum+0}' file So it should be either (gra pes, gra berry, straw pes, straw berry) OR banana, peanut and returns 4, since there are 4 matches. I'm assuming my syntax went wrong with the OR banana|peanut, but I am not sure how to fix it. Any ideas on what went wrong? thank you | why it didn't use all the space for the root partition? When the logical volume was created, only 24.24 GB was allocated for it. That can actually be a good thing: the remainder can be used to create another logical volume if you find you need one for some reason, or you can use the free space to extend an existing logical volume, even while its filesystem is mounted and in use. Having some unallocated space held in reserve can be a good thing, as it allows you to react to unexpected future requirements easily: A filesystem needs more space than expected? No problem, you can extend it on-line. (Extending a filesystem is usually much easier than shrinking one, so lowballing the expected requirements and then extending as needed can be a good strategy.) You need a small filesystem with special mount options for a chroot jail? Just create a new LV for it using some of the unallocated space. How do I expand the root filesystem? For example, to extend it by 5 GB: sudo lvextend --resizefs -L +5G ubuntu-vg/ubuntu-lv Or if you want to use all the remaining unallocated capacity to extend the root filesystem: sudo lvextend --resizefs -l +100%FREE ubuntu-vg/ubuntu-lv If you don't use the --resizefs option, then the command will just extend the logical volume but not the filesystem inside it. Then you must use another command to tell the filesystem to take advantage of the extension: either fsadm resize /dev/mapper/ubuntu--vg-ubuntu--lv or a filesystem-specific command like resize2fs /dev/mapper/ubuntu--vg-ubuntu--lv or xfs_growfs / . (The --resizefs option of lvextend will actually just run the fsadm resize ... command for you once the LV is successfully extended.) | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/664504",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/486670/"
]
} |
664,625 | I am using this usb wifi device on Debian running on my DE10-Nano board . Looking at the product details, it seems like this uses the RT5370 chipset which is included in the RT2800USB driver. I have enabled this in the kernel as shown in the screenshot below: However, the wifi device doesn't work unless I install the firmware also with the following command: sudo apt install firmware-ralink My question is - what does the firmware have to do with the driver? Shouldn't the wifi device already have the necessary firmware? What exactly is going on here? I'm new to kernel drivers and devices so trying to understand the magic going on here. My understanding is that to use a device, I just need to make sure the relevant driver is either compiled into the kernel or available as a module that you can load in later. Here is the dmesg output when I run ifup wlan0 . The firmware file rt2870.bin is provided by the package firmware-ralink . [ 78.302351] ieee80211 phy0: rt2x00lib_request_firmware: Info - Loading firmware file 'rt2870.bin'[ 78.311413] ieee80211 phy0: rt2x00lib_request_firmware: Info - Firmware detected - version: 0.36[ 80.175252] wlan0: authenticate with 30:23:03:41:73:67[ 80.206023] wlan0: send auth to 30:23:03:41:73:67 (try 1/3)[ 80.220665] wlan0: authenticated[ 80.232966] wlan0: associate with 30:23:03:41:73:67 (try 1/3)[ 80.257518] wlan0: RX AssocResp from 30:23:03:41:73:67 (capab=0x411 status=0 aid=5)[ 80.270065] wlan0: associated[ 80.503705] IPv6: ADDRCONF(NETDEV_CHANGE): wlan0: link becomes ready | Many hardware device manufacturers do not embed firmware into their devices, they require firmware to be loaded into the device by the operating system's driver. Some other manufacturers embed an old version of the firmware but allow an updated version to be loaded by the driver - quite often the embedded version is ancient and/or buggy (and rarely, if ever, updated in the device itself because that might require changes to the manufacturing or testing process - this is generally a deliberate design decision. The rationale is that the embedded firmware version doesn't have to be good , it just has to resemble something that's minimally functional - updates can and should be loaded by the driver) The firmware files almost always have a license which is incompatible with the GPL (or even no explicit or discernible license, just an implied "right to use" by being distributed with the device itself and the Windows driver it comes with) and thus can not be distributed with the kernel itself, and has to be distributed as a separate package. To get the device working, you need both the driver and the firmware. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/664625",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/324101/"
]
} |
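To see which firmware files a particular driver will ask the kernel to load, modinfo can be queried; a sketch using the rt2800usb module from the question (the /lib/firmware path is the usual location, but may differ per distribution):

    # list the firmware file(s) the module declares it may request at runtime
    modinfo -F firmware rt2800usb

    # after installing firmware-ralink, confirm the file is actually present
    ls -l /lib/firmware/rt2870.bin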
664,925 | Please would someone be able to explain how I can convert all the lower case characters in a text file to upper case and then save it as a new file? My file is called NewFile.txt and contains 500 lines of random characters. | In the POSIX toolchest, there's: <input.txt tr '[:lower:]' '[:upper:]' >output.txt However note that with the GNU implementation, that only works for single-byte characters ; so in locales using the UTF-8 charset for instance, only on abcdefghijklmnopqrstuvwxyz letters without diacritics. <input.txt awk '{print toupper($0)}' >output.txt is also POSIX and works OK with the GNU implementation of awk . <input.txt dd conv=ucase >output.txt is also POSIX but not many implementations will transliterate non-ASCII characters. <input.txt sed 's/.*/\U&/g' > output.txt Works in GNU sed , but GNU sed only (that \U is not standard). With perl : <input.txt perl -Mopen=locale -pe '$_=uc' >output.txt That one doesn't use the locale's toupper rules, so may work better on words like office (converting that one ffi character to the three character FFI ¹). uconv , from the ICU project should be pretty good at handling all sorts of international corner cases, and assuming input / output encoded in UTF-8 (or whatever uconv --default-code returns; though see the -f / --from-code and -t / --to-code options to specify different input and output encodings): <input.txt uconv -x upper >output.txt Within the vim editor, if on the first character of the file ( gg to get there), enter gUG to convert all to uppercase til the end of the file. Then :saveas output.txt to save to output file. Or with any ex or vi implementation (though not all will handle non-ASCII characters): :%s/.*/\U&/ (and :w output.txt to write the edited file to output.txt and :q! to quit without saving the now modified input file). With the zsh shell: zmodload zsh/mapfilemapfile[output.txt]=${(U)mapfile[input.txt]}# or (csh-style):mapfile[output.txt]=$mapfile[input.txt]:u To convert from upper to lower case instead, in case that's not already obvious: tr : swap [:lower:] and [:upper:] awk : change toupper to tolower dd : change ucase to lcase GNU sed / ex / vi : change \U to \L perl : change uc to lc . uconv : change upper to lower vim : change gUG to guG (that's the trick one). zsh : change (U) to (L) , :u to :l . ¹ the C / POSIX toupper() / towupper() API only converts one character to another one at a time, so is limited in how it can change the case of text. See https://unicode-org.github.io/icu/userguide/icu/posix.html#case-mappings about that and more. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/664925",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/487121/"
]
} |
664,953 | My OS is Debian 11 and my printer is an HP LaserJet Pro MFP M28a.After the first install using the hplip package, both printing and scanning were working fine but, for some reason, I had to reinstall the printer and then the scanner became unreachable from XSane at all. I have tried several options and I either get error message code 9 ("error: SANE: Error during device I/O") or 10 ("error: SANE: Out of memory (code=10)") when trying to launch XSane.Is it actually an error from XSane or from the whole configuration on my computer? By the way, here is the output from hp-check: hp-check[76380]: info: :[01mHP Linux Imaging and Printing System (ver. 3.21.2)[0mhp-check[76380]: info: :[01mDependency/Version Check Utility ver. 15.1[0mhp-check[76380]: info: :hp-check[76380]: info: :Copyright (c) 2001-18 HP Development Company, LPhp-check[76380]: info: :This software comes with ABSOLUTELY NO WARRANTY.hp-check[76380]: info: :This is free software, and you are welcome to distribute ithp-check[76380]: info: :under certain conditions. See COPYING file for more details.hp-check[76380]: info: :hp-check[76380]: info: :[01mNote: hp-check can be run in three modes:[0mhp-check[76380]: info: :1. Compile-time check mode (-c or --compile): Use this mode before compiling the HPLIP supplied tarball (.tar.gz or .run) to determine if thehp-check[76380]: info: :proper dependencies are installed to successfully compile HPLIP.hp-check[76380]: info: :2. Run-time check mode (-r or --run): Use this mode to determine if a distro supplied package (.deb, .rpm, etc) or an already built HPLIPhp-check[76380]: info: :supplied tarball has the proper dependencies installed to successfully run.hp-check[76380]: info: :3. Both compile- and run-time check mode (-b or --both) (Default): This mode will check both of the above cases (both compile- and run-timehp-check[76380]: info: :dependencies).hp-check[76380]: info: :hp-check[76380]: info: :Check types:hp-check[76380]: info: :a. EXTERNALDEP - External Dependencieshp-check[76380]: info: :b. GENERALDEP - General Dependencies (required both at compile and run time)hp-check[76380]: info: :c. COMPILEDEP - Compile time Dependencieshp-check[76380]: info: :d. [All are run-time checks]hp-check[76380]: info: :PYEXT SCANCONF QUEUES PERMISSIONhp-check[76380]: info: :hp-check[76380]: info: :Status Types:hp-check[76380]: info: : OKhp-check[76380]: info: : MISSING - Missing Dependency or Permission or Plug-inhp-check[76380]: info: : INCOMPAT - Incompatible dependency-version or Plugin-versionhp-check[76380]: info: :warning: [01mdebian-11 version is not supported. 
Using debian-10.7 versions dependencies to verify and install...[0mhp-check[76380]: info: :hp-check[76380]: info: :---------------hp-check[76380]: info: :| SYSTEM INFO |hp-check[76380]: info: :---------------hp-check[76380]: info: :hp-check[76380]: info: : Kernel: 5.10.0-8-amd64 #1 SMP Debian 5.10.46-4 (2021-08-03) GNU/Linux Host: Aragorn Proc: 5.10.0-8-amd64 #1 SMP Debian 5.10.46-4 (2021-08-03) GNU/Linux Distribution: debian 11hp-check[76380]: info: : Bitness: 64 bithp-check[76380]: info: :hp-check[76380]: info: :-----------------------hp-check[76380]: info: :| HPLIP CONFIGURATION |hp-check[76380]: info: :-----------------------hp-check[76380]: info: :hp-check[76380]: info: :HPLIP-Version: HPLIP 3.21.2hp-check[76380]: info: :HPLIP-Home: /usr/share/hplipwarning: HPLIP-Installation: Auto installation is not supported for debian distro 11 versionhp-check[76380]: info: :hp-check[76380]: info: :[01mCurrent contents of '/etc/hp/hplip.conf' file:[0mhp-check[76380]: info: :# hplip.conf. Generated from hplip.conf.in by configure.[hplip]version=3.21.2[dirs]home=/usr/share/hpliprun=/var/runppd=/usr/share/ppd/hplip/HPppdbase=/usr/share/ppd/hplipdoc=/usr/share/doc/hpliphtml=/usr/share/doc/hplip-docicon=nocupsbackend=/usr/lib/cups/backendcupsfilter=/usr/lib/cups/filterdrv=/usr/share/cups/drvbin=/usr/binapparmor=/etc/apparmor.d# Following values are determined at configure time and cannot be changed.[configure]network-build=yeslibusb01-build=nopp-build=nogui-build=yesscanner-build=yesfax-build=yesdbus-build=yescups11-build=nodoc-build=yesshadow-build=nohpijs-install=yesfoomatic-drv-install=yesfoomatic-ppd-install=nofoomatic-rip-hplip-install=nohpcups-install=yescups-drv-install=yescups-ppd-install=nointernal-tag=3.21.2restricted-build=noui-toolkit=qt5qt3=noqt4=noqt5=yespolicy-kit=yeslite-build=noudev_sysfs_rules=nohpcups-only-build=nohpijs-only-build=noapparmor_build=noclass-driver=nohp-check[76380]: info: :hp-check[76380]: info: :[01mCurrent contents of '/var/lib/hp/hplip.state' file:[0mhp-check[76380]: info: :[plugin]installed = 1eula = 1version = 3.21.2hp-check[76380]: info: :hp-check[76380]: info: :[01mCurrent contents of '~/.hplip/hplip.conf' file:[0mhp-check[76380]: info: :[commands]scan = /usr/bin/xsane -V %SANE_URI%[fax]email_address =voice_phone =[installation]date_time = 08/16/21 17:03:40version = 3.21.2[last_used]device_uri = escl:http://127.0.0.1:60001printer_name = Imprimanteworking_dir = .[polling]device_list =enable = falseinterval = 5[refresh]enable = truerate = 30type = 1[settings]systray_messages = 0systray_visible = 1[upgrade]last_upgraded_time = 1607445828notify_upgrade = truepending_upgrade_time = 0hp-check[76380]: info: : <Package-name> <Package-Desc> <Required/Optional> <Min-Version> <Installed-Version> <Status> <Comment>hp-check[76380]: info: :hp-check[76380]: info: :-------------------------hp-check[76380]: info: :| External Dependencies |hp-check[76380]: info: :-------------------------hp-check[76380]: info: :hp-check[76380]: info: :[31;01m error: cups CUPS - Common Unix Printing System REQUIRED 1.1 - INCOMPAT 'CUPS may not be installed or not running'[0mhp-check[76380]: info: : gs GhostScript - PostScript and PDF language interpreter and previewer REQUIRED 7.05 9.53.3 OK -hp-check[76380]: info: : xsane xsane - Graphical scanner frontend for SANE OPTIONAL 0.9 0.999 OK -hp-check[76380]: info: : scanimage scanimage - Shell scanning program OPTIONAL 1.0 1.0.31 OK -hp-check[76380]: info: : dbus DBus - Message bus system REQUIRED - 1.12.20 OK -hp-check[76380]: info: : policykit 
PolicyKit - Administrative policy framework OPTIONAL - 0.105 OK -hp-check[76380]: info: : network network -wget OPTIONAL - 1.21 OK -hp-check[76380]: info: : avahi-utils avahi-utils OPTIONAL - 0.8 OK -hp-check[76380]: info: :hp-check[76380]: info: :------------------------hp-check[76380]: info: :| General Dependencies |hp-check[76380]: info: :------------------------hp-check[76380]: info: :hp-check[76380]: info: : libjpeg libjpeg - JPEG library REQUIRED - - OK -hp-check[76380]: info: : cups-devel CUPS devel- Common Unix Printing System development files REQUIRED - - OK -hp-check[76380]: info: : cups-image CUPS image - CUPS image development files REQUIRED - - OK -hp-check[76380]: info: : libpthread libpthread - POSIX threads library REQUIRED - b'2.31' OK -hp-check[76380]: info: : libusb libusb - USB library REQUIRED - 1.0 OK -hp-check[76380]: info: : sane SANE - Scanning library REQUIRED - - OK -hp-check[76380]: info: : sane-devel SANE - Scanning library development files REQUIRED - - OK -hp-check[76380]: info: : libavahi-dev libavahi-dev REQUIRED - - OK -hp-check[76380]: info: : libnetsnmp-devel libnetsnmp-devel - SNMP networking library development files REQUIRED 5.0.9 5.9 OK -hp-check[76380]: info: : libcrypto libcrypto - OpenSSL cryptographic library REQUIRED - 1.1.1 OK -hp-check[76380]: info: : python3X Python 2.2 or greater - Python programming language REQUIRED 2.2 3.9.2 OK -hp-check[76380]: info: : python3-notify2 Python libnotify - Python bindings for the libnotify Desktop notifications OPTIONAL - - OK -hp-check[76380]: info: :[31;01m error: python3-pyqt4-dbus PyQt 4 DBus - DBus Support for PyQt4 OPTIONAL 4.0 - MISSING 'python3-pyqt4-dbus needs to be installed'[0mhp-check[76380]: info: :[31;01m error: python3-pyqt4 PyQt 4- Qt interface for Python (for Qt version 4.x) REQUIRED 4.0 - MISSING 'python3-pyqt4 needs to be installed'[0mhp-check[76380]: info: : python3-dbus Python DBus - Python bindings for DBus REQUIRED 0.80.0 1.2.16 OK -hp-check[76380]: info: : python3-xml Python XML libraries REQUIRED - 2.2.10 OK -hp-check[76380]: info: : python3-devel Python devel - Python development files REQUIRED 2.2 3.9.2 OK -hp-check[76380]: info: : python3-pil PIL - Python Imaging Library (required for commandline scanning with hp-scan) OPTIONAL - 8.1.2 OK -hp-check[76380]: info: : python3-reportlab Reportlab - PDF library for Python OPTIONAL 2.0 3.5.59 OK -hp-check[76380]: info: :hp-check[76380]: info: :--------------hp-check[76380]: info: :| COMPILEDEP |hp-check[76380]: info: :--------------hp-check[76380]: info: :hp-check[76380]: info: : libtool libtool - Library building support services REQUIRED - 2.4.6 OK -hp-check[76380]: info: : gcc gcc - GNU Project C and C++ Compiler REQUIRED - 10.2.1 OK -hp-check[76380]: info: : make make - GNU make utility to maintain groups of programs REQUIRED 3.0 4.3 OK -hp-check[76380]: info: :hp-check[76380]: info: :---------------------hp-check[76380]: info: :| Python Extentions |hp-check[76380]: info: :---------------------hp-check[76380]: info: :hp-check[76380]: info: : cupsext CUPS-Extension REQUIRED - 3.21.2 OK -hp-check[76380]: info: : hpmudext IO-Extension REQUIRED - 3.21.2 OK -hp-check[76380]: info: :hp-check[76380]: info: :----------------------hp-check[76380]: info: :| Scan Configuration |hp-check[76380]: info: :----------------------hp-check[76380]: info: :hp-check[76380]: info: :'/etc/sane.d/dll.d/hpaio' not found.hp-check[76380]: info: : hpaio HPLIP-SANE-Backend REQUIRED - 3.21.2 OK 'hpaio found in /etc/sane.d/dll.conf'hp-check[76380]: info: : 
scanext Scan-SANE-Extension REQUIRED - 3.21.2 OK -hp-check[76380]: info: :hp-check[76380]: info: :------------------------------hp-check[76380]: info: :| DISCOVERED SCANNER DEVICES |hp-check[76380]: info: :------------------------------hp-check[76380]: info: :hp-check[76380]: info: :device `escl:http://127.0.0.1:60001' is a HP LaserJet MFP M28a (A7B2AA) (USB) flatbed scannerdevice `hpaio:/usb/HP_LaserJet_MFP_M28-M31?serial=VNC3R77190' is a Hewlett-Packard HP_LaserJet_MFP_M28-M31 all-in-onedevice `hpaio:/net/hp_laserjet_mfp_m28-m31?ip=127.0.0.1&queue=false' is a Hewlett-Packard hp_laserjet_mfp_m28-m31 all-in-onedevice `escl:http://127.0.0.1:60001' is a HP LaserJet MFP M28a (A7B2AA) (USB) flatbed scannerdevice `hpaio:/usb/HP_LaserJet_MFP_M28-M31?serial=VNC3R77190' is a Hewlett-Packard HP_LaserJet_MFP_M28-M31 all-in-onedevice `hpaio:/net/hp_laserjet_mfp_m28-m31?ip=127.0.0.1&queue=false' is a Hewlett-Packard hp_laserjet_mfp_m28-m31 all-in-onehp-check[76380]: info: :hp-check[76380]: info: :--------------------------hp-check[76380]: info: :| DISCOVERED USB DEVICES |hp-check[76380]: info: :--------------------------hp-check[76380]: info: :hp-check[76380]: info: : Device URI Modelhp-check[76380]: info: : ------------------------------------------------- ----------------------------hp-check[76380]: info: : hp:/usb/HP_LaserJet_MFP_M28-M31?serial=VNC3R77190 HP LaserJet MFP M28-M31hp-check[76380]: info: :hp-check[76380]: info: :---------------------------------hp-check[76380]: info: :| INSTALLED CUPS PRINTER QUEUES |hp-check[76380]: info: :---------------------------------hp-check[76380]: info: :hp-check[76380]: info: :hp-check[76380]: info: :[01m[0mhp-check[76380]: info: :[01m[0mhp-check[76380]: info: :Type: Unknownhp-check[76380]: info: :Device URI: implicitclass://HP_LaserJet_MFP_M28a_A7B2AA_USB_/hp-check[76380]: info: :hp-check[76380]: info: :[01m[0mhp-check[76380]: info: :[01m[0mhp-check[76380]: info: :Type: Printerhp-check[76380]: info: :Device URI: hp:/usb/HP_LaserJet_MFP_M28-M31?serial=VNC3R77190hp-check[76380]: info: :Communication status: Goodhp-check[76380]: info: :hp-check[76380]: info: :hp-check[76380]: info: :--------------hp-check[76380]: info: :| PERMISSION |hp-check[76380]: info: :--------------hp-check[76380]: info: :hp-check[76380]: info: :USB Imprimante Required - - OK Node:'/dev/bus/usb/001/006' Perm:' root lp rw- rw- rw- rw- rw- r--'hp-check[76380]: info: :hp-check[76380]: info: :-----------hp-check[76380]: info: :| SUMMARY |hp-check[76380]: info: :-----------hp-check[76380]: info: :hp-check[76380]: info: :[01mMissing Required Dependencies[0mhp-check[76380]: info: :[01m-----------------------------[0merror: 'cups' package is missing or 'cups' service is not running.error: 'libcups2' package is missing/incompatibleerror: 'python3-pyqt4' package is missing/incompatibleerror: 'gtk2-engines-pixbuf' package is missing/incompatiblehp-check[76380]: info: :hp-check[76380]: info: :[01mMissing Optional Dependencies[0mhp-check[76380]: info: :[01m-----------------------------[0merror: 'python3-dbus.mainloop.qt' package is missing/incompatiblehp-check[76380]: info: :hp-check[76380]: info: :Total Errors: 3hp-check[76380]: info: :Total Warnings: 0hp-check[76380]: info: :hp-check[76380]: info: :hp-check[76380]: info: :Done. 
I know there is some warning telling me Debian 11 is not supported and that some required packages are missing (and I did not remove them since then), but I did manage to make the scanner work at first, so there must be a way to get it back running (without messing too much with the packages). | In the POSIX toolchest, there's: <input.txt tr '[:lower:]' '[:upper:]' >output.txt However note that with the GNU implementation, that only works for single-byte characters ; so in locales using the UTF-8 charset for instance, only on abcdefghijklmnopqrstuvwxyz letters without diacritics. <input.txt awk '{print toupper($0)}' >output.txt is also POSIX and works OK with the GNU implementation of awk . <input.txt dd conv=ucase >output.txt is also POSIX but not many implementations will transliterate non-ASCII characters. <input.txt sed 's/.*/\U&/g' > output.txt Works in GNU sed , but GNU sed only (that \U is not standard). With perl : <input.txt perl -Mopen=locale -pe '$_=uc' >output.txt That one doesn't use the locale's toupper rules, so may work better on words like office (converting that one ffi character to the three character FFI ¹). uconv , from the ICU project should be pretty good at handling all sorts of international corner cases, and assuming input / output encoded in UTF-8 (or whatever uconv --default-code returns; though see the -f / --from-code and -t / --to-code options to specify different input and output encodings): <input.txt uconv -x upper >output.txt Within the vim editor, if on the first character of the file ( gg to get there), enter gUG to convert all to uppercase til the end of the file. Then :saveas output.txt to save to output file. Or with any ex or vi implementation (though not all will handle non-ASCII characters): :%s/.*/\U&/ (and :w output.txt to write the edited file to output.txt and :q! to quit without saving the now modified input file). With the zsh shell: zmodload zsh/mapfilemapfile[output.txt]=${(U)mapfile[input.txt]}# or (csh-style):mapfile[output.txt]=$mapfile[input.txt]:u To convert from upper to lower case instead, in case that's not already obvious: tr : swap [:lower:] and [:upper:] awk : change toupper to tolower dd : change ucase to lcase GNU sed / ex / vi : change \U to \L perl : change uc to lc . uconv : change upper to lower vim : change gUG to guG (that's the trick one). zsh : change (U) to (L) , :u to :l . ¹ the C / POSIX toupper() / towupper() API only converts one character to another one at a time, so is limited in how it can change the case of text. See https://unicode-org.github.io/icu/userguide/icu/posix.html#case-mappings about that and more. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/664953",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/205892/"
]
} |
665,272 | I have a json like this { "AgentGroupId": null, "AgentId": null, "CreateType": "Website", "IsPrimary": true, "IsShared": true, "HeaderAuthentication": { "Headers": [ { "Name": "api-key", "Value": "TEST_API_KEY_VALUE-2", "OriginalName": null, "IsReplacedCredentials": false }, { "Name": "Authorization", "Value": "", "OriginalName": null, "IsReplacedCredentials": false } ], "IsEnabled": true }, "IsTimeWindowEnabled": false, "AdditionalWebsites": [], "BasicAuthenticationApiModel": { "Credentials": null, "IsEnabled": false, "NoChallenge": false }, "ClientCertificateAuthenticationSetting": null, "Cookies": null, "CrawlAndAttack": true, "EnableHeuristicChecksInCustomUrlRewrite": true, "ExcludedLinks": [ { "RegexPattern": "gtm\\.js" }, { "RegexPattern": "WebResource\\.axd" }, { "RegexPattern": "ScriptResource\\.axd" } ], "ExcludedUsageTrackers": [], "DisallowedHttpMethods": [], "ExcludeLinks": true, "ExcludeAuthenticationPages": false, "FindAndFollowNewLinks": true, "FormAuthenticationSettingModel": { "Integrations": {}, "CustomScripts": [], "InteractiveLoginRequired": false, "DefaultPersonaValidation": null, "DetectBearerToken": true, "DisableLogoutDetection": false, "IsEnabled": false, "LoginFormUrl": null, "LoginRequiredUrl": null, "LogoutKeywordPatterns": null, "LogoutKeywordPatternsValue": null, "LogoutRedirectPattern": null, "OverrideTargetUrl": false, "Personas": [], "PersonasValidation": null }} My goal is to replace the value of api-key under HeaderAuthentication (it could be index 0 or 1 or 2 or any) I did this jq '.HeaderAuthentication.Headers[] | select(.Name == "api-key") | .Value = "xxx"' scanprofile.json > tmp && mv tmp scanprofile.json The issue is seems jq is returning only the part that replaced, but I need the whole file, what I am doing wrong? this is the content of file after running the command { "Name": "api-key", "Value": "xxx", "OriginalName": null, "IsReplacedCredentials": false} ps. I saw some stackoverflow post using sponge, I can't use sponge in our environment | The jq expression .HeaderAuthentication.Headers[] | select(.Name == "api-key") picks out the Headers array element that has api-key as its Name value. The expression (.HeaderAuthentication.Headers[] | select(.Name == "api-key")).Value |= "NEW VALUE" updates the value of the Value key in that array element to the literal string NEW VALUE . Using a shell variable that holds the new value, from the command line: new_api_key='My new key'jq --arg newkey "$new_api_key" '(.HeaderAuthentication.Headers[] | select(.Name == "api-key")).Value |= $newkey' file.json If the key needs to be base64 encoded, update with the value ($newkey|@base64) in place of just $newkey in the jq expression. To make the change in-place, use something like tmpfile=$(mktemp)cp file.json "$tmpfile" &&jq --arg ...as above... "$tmpfile" >file.json &&rm -f -- "$tmpfile" or, if you don't need to keep the original file's permissions and ownership etc., tmpfile=$(mktemp)jq --arg ...as above... file.json >"$tmpfile" &&mv -- "$tmpfile" file.json | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/665272",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/459881/"
]
} |
665,275 | I wanted to split a file into multiple files based on its first column value, using zcat file2split.gz | awk '{print>$1}' , but encountered the following error: awk: cmd. line:1: (FILENAME=file2split FNR=1666) fatal: can't redirect to `CCTGGCAG_GATATAAC_HAP1' (Operation not permitted) Any idea for that? Thanks! The zip data is 25Mb in size and can be downloaded here: https://drive.google.com/file/d/1Qjq-ibdiyemBfuqpoC2h0VDhw09PS0ao/view?usp=sharing | The jq expression .HeaderAuthentication.Headers[] | select(.Name == "api-key") picks out the Headers array element that has api-key as its Name value. The expression (.HeaderAuthentication.Headers[] | select(.Name == "api-key")).Value |= "NEW VALUE" updates the value of the Value key in that array element to the literal string NEW VALUE . Using a shell variable that holds the new value, from the command line: new_api_key='My new key'jq --arg newkey "$new_api_key" '(.HeaderAuthentication.Headers[] | select(.Name == "api-key")).Value |= $newkey' file.json If the key needs to be base64 encoded, update with the value ($newkey|@base64) in place of just $newkey in the jq expression. To make the change in-place, use something like tmpfile=$(mktemp)cp file.json "$tmpfile" &&jq --arg ...as above... "$tmpfile" >file.json &&rm -f -- "$tmpfile" or, if you don't need to keep the original file's permissions and ownership etc., tmpfile=$(mktemp)jq --arg ...as above... file.json >"$tmpfile" &&mv -- "$tmpfile" file.json | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/665275",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/487494/"
]
} |
665,326 | I have a problem with a remote server having a keyboard layout in the console different from my physical keyboard. I need to copy a @ letter to be able to paste in a browser forum. The server is in a VPN without external access, so a simple googling for 'at symbol' doesn't work. Is there some trick to have a @ printed in the console so I can copy and paste it? Is there a well-known file to simply do a cat and show a @ inside it? A README or similar. | With the bash shell: echo $'\x40' With a POSIX shell: printf '\100' | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/665326",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/167205/"
]
} |
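If neither bash nor a POSIX printf is available, a couple of other ways to produce the character without typing it are:

    # 64 is the decimal ASCII code for the at sign
    awk 'BEGIN { printf "%c\n", 64 }'

    # same idea with tr, using the octal code 100
    echo x | tr x '\100'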
665,427 | I've been arguing about this with my team. In development, we use Windows (CRLF) and on the server we use Linux (LF). Is there a problem if Linux sees a file with CRLF newlines? Should Git handle such a case via the .gitattributes file? | Mostly the linux Kernel itself does not know or care about line endings when you upload files to your server. Though as muru notes CRLF will screw up a shebang . However there is a convention on in Linux that all lines in text files end in a single LF. Many tools will read the CR and treat it as any other regular character (a,b,c,...). This comes from the POSIX definition of a text file . This can cause problems in some languages such as shell scripts (sh, bash, zsh, ksh, ...). If you are lucky the script will fail on a syntax error caused by a spurious extra argument. However in bad cases this can creep into the content of files and file names. This is mostly a problem for tools and languages which are only designed to run under linux / unix. Many platform independent languages and tools auto adapt. So you are unlikely to see a problem an IDE , or code editor. So to attempt to end your argument with your colleagues, no linux does not have a problem with CRLF line endings. However some tools and languages can choke or do strange things if you leave them in. If you are writing code to be run on Linux / Unix platforms then it's generally easier to configure git to strip any CR characters for you leaving you with LF line endings. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/665427",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/487658/"
]
} |
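To have Git do that normalisation for you, the .gitattributes idea from the question is the usual route; a sketch to adapt to the repository:

    # .gitattributes: let Git normalise text files to LF in the repository,
    # force LF for shell scripts, keep CRLF for Windows-only files
    *       text=auto
    *.sh    text eol=lf
    *.bat   text eol=crlf

Existing files can then be re-normalised once with git add --renormalize . (Git 2.16 or newer).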
665,438 | After installing Ubuntu 20.04 LTS I had an issue with my earphones. Initially my mic and my earphones both weren't working. But after a bit of research I was able to fix the speakers but mic is still an issue. I determined the codec with $ cat /proc/asound/card*/codec* | grep CodecCodec: Realtek ALC662 rev3 Then after knowing the codec of my sound card I visited the kernel documentation and picked up my model and added the following line in /etc/modprobe.d/alsa-base.conf : options snd-hda-intel model=alc662-headset-multi This fix seemed to work when I selected Analog Stereo Duplex(unplugged) in Pavucontrol and the audio came in speakers but mic was still dead but somehow it was able to capture all the voices coming out of speaker of headphone. I went to settings but there was no option to select input device device. I installed a audio recorder and there I could find an option to select external mic and it worked quite well. How to fix this problem? I am unable to access mic in any other application Additional Notes: The problem is in front panel only, rear mic and speaker works. My PC is very old so I have two separate jacks for mic and speakers. | Mostly the linux Kernel itself does not know or care about line endings when you upload files to your server. Though as muru notes CRLF will screw up a shebang . However there is a convention on in Linux that all lines in text files end in a single LF. Many tools will read the CR and treat it as any other regular character (a,b,c,...). This comes from the POSIX definition of a text file . This can cause problems in some languages such as shell scripts (sh, bash, zsh, ksh, ...). If you are lucky the script will fail on a syntax error caused by a spurious extra argument. However in bad cases this can creep into the content of files and file names. This is mostly a problem for tools and languages which are only designed to run under linux / unix. Many platform independent languages and tools auto adapt. So you are unlikely to see a problem an IDE , or code editor. So to attempt to end your argument with your colleagues, no linux does not have a problem with CRLF line endings. However some tools and languages can choke or do strange things if you leave them in. If you are writing code to be run on Linux / Unix platforms then it's generally easier to configure git to strip any CR characters for you leaving you with LF line endings. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/665438",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/487616/"
]
} |
665,444 | I hope someone can guide me how do I convert below json to csv that I'm expecting for. Much appreciated in advance. Update: thanks for the solutions provided, but I found that sometimes array does not exist when the 2nd column has only 1 record, example below is "ASite" has only 1 record "unixhost1123" paired to it. source json [ { "results": [ [ "sm-clust001", [ "163slesm02", "163slesm01" ] ], [ "sm-cssl112", [ "ucsbnchac240", "ucsbnchac209", "ucsbnchac241", "ucsbnchac242" ] ], [ "ASite", "unixhost1123" ] ] }] Expecting csv "sm-clust001","163slesm02""sm-clust001","163slesm01""sm-cssl112","ucsbnchac240""sm-cssl112","ucsbnchac209""sm-cssl112","ucsbnchac241""sm-cssl112","ucsbnchac242""ASite","unixhost1123" | .[].results[] is a set of arrays. In each array, the first element is what you want to have in the first column, and the second element is another array that we want to loop over. So let's keep track of the first element in $name (assuming this is a cluster name of some sort), and then output this together with each element of the sub-array: .[].results[] | .[0] as $name | .[1][]? // .[1] | [ $name, . ] | @csv The bit that says .[1][]? // .[1] selects the elements of the sub-array if it exists, otherwise it selects the second element of the array (and assumes that it's a scalar instead). On the command line: jq -r '.[].results[] | .[0] as $name | .[1][]? // .[1] | [ $name, . ] | @csv' file Result, given your example document: "sm-clust001","163slesm02""sm-clust001","163slesm01""sm-cssl112","ucsbnchac240""sm-cssl112","ucsbnchac209""sm-cssl112","ucsbnchac241""sm-cssl112","ucsbnchac242""ASite","unixhost1123" This solution is generalized for any number of columns in my answer to the user's followup question . | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/665444",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/487692/"
]
} |
665,472 | So basically, I ran this: sed -i '/s/icap_infos/icap_servers/g' * instead of this: sed -i 's/icap_infos/icap_servers/g' * Notice the leading forward slash before 's' in the command? Now all my source files have a weird text string scattered across the whole file. How do I fix this? | /s/ is a regular expression address that matches any line containing an s character. In GNU sed, itext is an extension to the standard i\text that i nserts text after the addressed line. So your command inserted text cap_infos/icap_servers/g before any line containing an s . Assuming your original files contained no such files you should be able to reverse it using sed -i.old '\:^cap_infos/icap_servers/g$:d' * to d elete lines matching cap_infos/icap_servers/g exactly. Note the use of alternate delimiter : (introduced by \: ) since the pattern itself contains the default delimiter / . The current files will be backed up with suffix .old in case it doesn't work and you need to try something different. In future get into the habit of dry-running sed commands without -i first and/or making backups using -i.bak instead of plain -i . | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/665472",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/487729/"
]
} |
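As an example of the dry-run habit mentioned above, a substitution can be previewed before reaching for -i; a sketch using the intended command from the question, with file.conf standing in for one of the affected files:

    # show what would change, without touching anything
    sed 's/icap_infos/icap_servers/g' file.conf | diff file.conf -

    # only once the diff looks right, apply in place with a backup
    sed -i.bak 's/icap_infos/icap_servers/g' file.conf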
665,689 | I've installed fzf on debian 11 (bullseye). When I type in ctrl-r to trigger a history search, nothing happens. Works fine on my mac. I'm using zsh. UPDATE: tried adding bindkey '^r' fzf-history-widget to .zshrc but I just get a "no such widget" error. fzf --version reports 0.24 (devel) | OK, found the answer at https://packages.debian.org/bullseye/fzf which says to refer to README file: Bash====Append this line to ~/.bashrc to enable fzf keybindings for Bash: source /usr/share/doc/fzf/examples/key-bindings.bashAppend this line to ~/.bashrc to enable fuzzy auto-completion for Bash: source /usr/share/doc/fzf/examples/completion.bashZsh===Append this line to ~/.zshrc to enable fzf keybindings for Zsh: source /usr/share/doc/fzf/examples/key-bindings.zshAppend this line to ~/.zshrc to enable fuzzy auto-completion for Zsh: source /usr/share/doc/fzf/examples/completion.zshFish====Issue the following commands to enable fzf keybindings for Fish: mkdir -p ~/.config/fish/functions/ echo fzf_key_bindings > ~/.config/fish/functions/fish_user_key_bindings.fishVim===The straightforward way to use fzf.vim is appending this line to your vimrc: source /usr/share/doc/fzf/examples/fzf.vim | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/665689",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/166716/"
]
} |
665,692 | How can I figure out which nixpgs -package provides a given file/command, that may not be installed on the system? Other package managers offer this functionality as follows: apt has apt-file rpm has --whatprovides yum has whatprovides Does nix have a similar feature? Context: I was trying to figure out which package provides grep , before realizing it was provided by the gnugrep package (who would have thought?). I am looking for a systematic way to answer questions like these, as to avoid the guessing game next time. Things I have tried: nix-env -qa only searches through package names https://search.nixos.org/ only searches through package names | You can install the nix-index package, build the index and then use nix-locate command: [nix-shell:~]$ nix-locate '/bin/grep'perl532Packages.grepmail.out 75,569 x /nix/store/i4krsr02b3yymqhzz9kbz066rkjkn5zl-perl5.32.1-grepmail-5.3111/bin/grepmailperl530Packages.grepmail.out 75,569 x /nix/store/vc2iv0zi7kb0fr04gahyx142i30xi0g6-perl5.30.3-grepmail-5.3111/bin/grepmailpatchutils_0_3_3.out 0 s /nix/store/yfl83agm8396xw9hir1rwvdanz13h9w5-patchutils-0.3.3/bin/grepdiffpatchutils.out 0 s /nix/store/wk1yk22f0f6ai478axaqr0yvwy6q7xl5-patchutils-0.3.4/bin/grepdiff(patchutils.out) 248,584 x /nix/store/6jysbyhc43sjvfiyh1bpvi1n3zbz212r-bootstrap-tools/bin/grepipinfo.out 4,293,096 x /nix/store/7p4g03bi15705ipbkrc7vhb42cvgc54f-ipinfo-2.0.1/bin/grepipgrepm.out 1,540 x /nix/store/mfhzjhz2f3mwbg1pq1diblqfdcmcffhs-grepm-0.6/bin/grepmgrepcidr.out 19,576 x /nix/store/7x10lzg5389flnjhfwh4xycqi835knfy-grepcidr-2.0/bin/grepcidrgnugrep.out 271,716 x /nix/store/ba3bf20z5rmd9vgyzsgamvwvb3i1idfn-gnugrep-3.6/bin/grep(unixtools.col.out) 27,660 x /nix/store/0h4ih2jvl9gv3dnmld2vq5iyyv41cy7v-text_cmds-99/bin/grep The Nix Cheatsheet provides a helpful list of corresponding commands in Nix and Ubuntu. Indexing takes some time (depends on network speed and also the size). It can be reduced by using --filter-prefix '/bin/' when you're looking for utilities (8:10 -> 2:50 minutes in my case). Also interaction can be reduced by indexing right after invoking the shell: $ nix-shell -p nix-index --command 'nix-index --version; time nix-index --show-trace --filter-prefix '/bin/'; return'Nixpkgs Files Indexer 0.1.3+ querying available packages+ generating index: 55661 paths found :: 23483 paths not in binary cache :: 08533 paths in queueError: fetching the file listing for store path '/nix/store/siv7varixjdfjs17i3qfrvyc072rx55j-ia-writer-duospace-20180721' failedCaused by: response to GET 'http://cache.nixos.org/siv7varixjdfjs17i3qfrvyc072rx55j.ls' failed to parse (response saved to /run/user/1000/file_listing.json.2)Caused by: expected value at line 1 column 1+ generating index: 66306 paths found :: 23630 paths not in binary cache :: 00000 paths in queue+ wrote index of 2,742,453 bytesreal 2m50,553suser 2m39,151ssys 0m26,571s[nix-shell:~]$ nix-locate /bin/grep...(irccloud.out) 0 s /nix/store/dv7klxqz8pmyml05nrs5f5ddd3hb9nsw-irccloud-0.13.0-usr-target/bin/grepfpc.out 732 r /nix/store/4p5nx2csq7xmag9cbkmg54qzj6kxr71j-fpc-3.2.0/bin/grep.tdfgnugrep.out 257,000 x /nix/store/z3q9q9549ci7kbdgyq99r6crnvrky6v3-gnugrep-3.7/bin/grepgrepm.out 1,596 x /nix/store/746bg318dq0wm1z23lllbg74ymdyac3r-grepm-0.6/bin/grepmgrepcidr.out 22,088 x /nix/store/sja30zvm5nw7ic7gwddc3h89rdgiyza4-grepcidr-2.0/bin/grepcidr... | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/665692",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/87826/"
]
} |
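A compact version of the workflow from the answer above, for reference; the channel attribute name nixpkgs.nix-index is an assumption and may instead be nixos.nix-index on NixOS:

nix-env -iA nixpkgs.nix-index   # install nix-index permanently instead of via nix-shell
nix-index                       # build the file index (slow the first time)
nix-locate 'bin/grep'           # list packages that ship a grep binary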
665,887 | Say I have a file hello : #!/bin/shecho "Hello World!" Provided the executable bit is set on that file, I can execute it by entering its path on the prompt: $ ./helloHello World! Is there a more explicit equivalent to the above? Something akin to: $ execute hello I know I can pass hello as an argument to /bin/sh , but I'm looking for a solution that automatically uses the interpreter specified in the shebang line My use case for this is to execute script files that do not have the executable flag set. These files are stored in a git repository, so I would like to avoid setting their executable flag or having to copy them to another location first. | You can use perl : perl hello From perl docs : If the #! line does not contain the word "perl" nor the word "indir", the program named after the #! is executed instead of the Perl interpreter. This is slightly bizarre, but it helps people on machines that don't do #!, because they can tell a program that their SHELL is /usr/bin/perl, and Perl will then dispatch the program to the correct interpreter for them. ( via ) | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/665887",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/73254/"
]
} |
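A quick way to see the dispatch behaviour described in the answer, using the hello script from the question; the chmod a-x step simply makes sure the executable bit really is off:

cat > hello <<'EOF'
#!/bin/sh
echo "Hello World!"
EOF
chmod a-x hello
perl hello      # perl sees a non-perl #! line and re-runs the script with /bin/sh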
666,033 | In the following json file, { "contacts": [ { "name": "John", "phone": "1234" }, { "name": "Jane", "phone": "5678" } ]} I need to update both phone numbers based on the name and store the whole json in a new file. I tried stuff like: jq '.contacts[] | select(.name == "John") | .phone = "4321"' < contacts.json >updated_contacts.json But then I don't know how to go back to the parent node and change Jane's one, nor retrieve the whole json. I tried to store the root node in a variable with as , but it keeps unchanged. As a temporary workaround I'm just doing this: jq '.contacts[0].number = "4321" | .contacts[1].number = "4321"' < contacts.json >updated_contacts.json But I should not rely on array indexes, but names, as the original json may change. Any idea how could I do it using jq command? | To change one entry, make sure that you use |= and that the left hand side of that update operator is a path in the original document:
jq --arg name John --arg phone 4321 \
   '( .contacts[] | select(.name == $name) ).phone |= $phone' file
You can't use .contacts[] | select(.name == "John") | .phone |= ... since the select() actually extracts a set of elements from the contacts array. You would therefore only change the elements that you extract, separately from the main part of the document. Notice the difference in
( ... | select(...) ).phone |= ...
^^^^^^^^^^^^^^^^^^^^^
path in original document
which works, and
... | select(...) | .phone |= ...
      ^^^^^^^^^^^
      extracted bits
which doesn't work. Using a loop for more than one entry, assuming e.g. bash :
names=( John Jane )
phones=( 4321 4321 )
tmpfile=$(mktemp)
for i in "${!names[@]}"; do
    name=${names[i]}
    phone=${phones[i]}
    jq --arg name "$name" --arg phone "$phone" \
       '( .contacts[] | select(.name == $name) ).phone |= $phone' file >"$tmpfile"
    mv -- "$tmpfile" file
done
That is, I put the names in one array and the new numbers in another, then loop over the indexes and update file for each entry that needs changing, using a temporary file as intermediate storage. Or, with an associative array:
declare -A lookup
lookup=( [John]=4321 [Jane]=4321 )
for name in "${!lookup[@]}"; do
    phone=${lookup[$name]}
    # jq as above
done
Assuming you have some JSON input document with the new phone numbers, such as
{
  "John": 1234,
  "Jane": 5678
}
which you can create using jo John=1234 Jane=5678 Then you can update the numbers in a single jq invocation:
jo John=1234 Jane=5678 |
jq --slurpfile new /dev/stdin \
   '.contacts |= map(.phone = ($new[][.name] // .phone))' file
This reads our input JSON with the new numbers into a structure, $new , that looks like
[
  {
    "John": 1234,
    "Jane": 5678
  }
]
This is used in the map() call to change the phone numbers of any contact that is listed. The // .phone makes sure that if the name isn't listed, the phone number stays the same. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/666033",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/488251/"
]
} |
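If jo is not installed, a similar single-invocation update can be sketched with --argjson alone; the lookup object below (string phone values keyed by name) is an illustrative assumption and not part of the original answer:

jq --argjson new '{"John":"4321","Jane":"4321"}' \
   '.contacts |= map(.phone = ($new[.name] // .phone))' contacts.json > updated_contacts.json

Here $new[.name] looks each contact's name up in the supplied object, and // .phone leaves untouched any contact that has no replacement listed.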
666,053 | img2pdf works very quickly with hundreds of images and creates a pdf out of them with a command like img2pdf *.tif -o out.pdf but the page order is wrong in my case. I ran the command in konsole under Kubuntu 20.04. The image files are named (renamed in Dolphin file manager) in the form
Vol_1.tif
Vol_2.tif
Vol_3.tif
...
Vol_430.tif
The resulting pdf starts with the file/page called Vol_100.tif which makes some sense (100 is seen before 1 or 11). Then, the one called Vol_119.tif is followed by Vol_11.tif , Vol_129.tif is followed by Vol_12.tif ... ... Vol_189.tif is followed by Vol_18.tif . How to proceed? | Your best option is to rename the files so that they all have the same number of zero-padded digits, using the perl rename utility (this has different names on different distros, including perl-rename , prename , file-rename ). For example:
rename -n 's/^(Vol_)(\d+)/sprintf "%s%03i", $1, $2/e' Vol_*.tif
Change %03i to %04i or %05i if three digit zero-padding is not enough. This uses the -n option, so will only show what would be renamed. If/when you're certain it does what you want, either remove the -n to silently rename the files (no output except on errors), or replace it with -v for verbose operation. With -v , you'll see output like this:
$ rename -v 's/^(Vol_)(\d+)/sprintf "%s%03i", $1, $2/e' Vol_*.tif
Vol_1.tif renamed as Vol_001.tif
Vol_2.tif renamed as Vol_002.tif
Vol_50.tif renamed as Vol_050.tif
Vol_60.tif renamed as Vol_060.tif
Another option is to perform a natural sort on the filenames, so you'll need to use GNU find , sort , and xargs (or other versions with support for NUL separators): e.g.
find . -name 'Vol_*.tif' -print0 | sort -z -V | xargs -0r img2pdf -o out.pdf
GNU sort's -V option is a "version" sort, which is their name for a natural sort. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/666053",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/341192/"
]
} |
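Where the perl rename utility is unavailable, a plain-shell sketch of the same zero-padding idea might look like this; printf -v is bash-specific, and the mv -n non-overwrite guard is an addition of this sketch, not part of the original answer:

for f in Vol_*.tif; do
    n=${f#Vol_}; n=${n%.tif}                 # extract the numeric part of the name
    printf -v new 'Vol_%03d.tif' "$n"        # zero-pad it to three digits
    [ "$f" = "$new" ] || mv -n -- "$f" "$new"
done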
666,165 | I have long-running programs that can restart their internal state. I want to see the log file entries only for the most recent state (to load into vim 's quickfix). How can I show all lines after the last occurrence of the string STARTING SESSION ? My current solution (log files are sometimes gigabytes long, so I never look at more than the last 5000 lines): tail -n5000 logfile.log | grep -B5000 -v -e 'STARTING SESSION'> shortened.log This works well when sessions produce a lot of logging, but if I have shorter logs with many restarts, it includes multiple sessions. Essentially, I want something like a --reverse flag that would make grep search from the end of the file instead of the start: grep --reverse --after-context=5000 --max-count=1 'STARTING SESSION' logfile.log Notes: The question is similar to Print line after nth occurrence of a match , but I want the last occurrence. The problem is almost the same as Getting text from last marker to EOF in POSIX.2 except that I don't have a POSIX requirement and my files are large. I'd prefer efficient solutions with GNU utils (I'm using mingw64 ). | Reverse the file, display it until the first occurrence, and reverse the output again: tac logfile.log | sed '/STARTING SESSION/q' | tac tac is efficient when given a regular (seekable) file to process , and since sed exits as soon as it sees the start line, the whole pipeline will only process the end of the log file as far as necessary (rounded up to tac ’s, sed ’s, and the kernel’s buffer sizes). This should scale well to large files. tac is a GNU utility. On non-GNU systems, you can often use tail -r to do the same. If the log file doesn’t have a “STARTING SESSION” line at all, this won’t produce the same behaviour as your grep : it will output the complete log file. To avoid this, a variant of Kusalananda’s approach can be used instead: tac logfile.log | sed -n '/STARTING SESSION/{H;x;p;q;};H' | tail -n +2 | tac The sed expression looks for “STARTING SESSION”, and when matched, append the current line to the hold space, swaps the hold space with the pattern space, outputs it and exits; any other line is appended to the hold space. tail -n +2 is used to skip the first blank line (appending the pattern space to the hold space adds a leading newline). | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/666165",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/21401/"
]
} |
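Wrapping the accepted pipeline in a small function makes it easy to reuse from the quickfix workflow mentioned in the question; the function name and output file are only placeholders:

last_session() {
    tac -- "$1" | sed '/STARTING SESSION/q' | tac
}
last_session logfile.log > shortened.log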
666,213 | 1sh of all, we are talking about Debian/Ubuntu OS only.(apologize if offended, because I'm only familiar with these 2) 2nd of all, we are talking about a non-root user with sudo privilege only. Every time I'm going to run a command with sudo privilege, I always ask myself should I use sudo bash -c 'command' or just sudo command , it's kind of a pain for me. And since that there are some situation that sudo command doesn't work directly, such as, sudo echo "smth" >> /etc/privilegedfile But it seems always work by using sudo sh -c "echo 'smth' >> /etc/privilegedfile" or sudo bash -c "echo 'smth' >> /etc/privilegedfile" (or use another tool, like tee to make it work.) So question is, Is there any occasion(exception) that sudo bash -c 'command' is not capable to execute, but only sudo command can? is it a good idea to use sudo bash -c instead of sudo forever? | sudo bash -c and sudo are significantly different, because sudo can be configured to grant permissions to specific users running specific commands. You may therefore find yourself in a setup where you are allowed to run sudo command (for some value of command ), but not sudo bash . In general it’s a good idea to be as specific as possible, so favour sudo command over sudo bash -c command . | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/666213",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/480847/"
]
} |
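For the specific redirection example in the question, a common alternative that avoids starting a root shell at all is to keep the redirection on the unprivileged side and let tee do the privileged write:

echo "smth" | sudo tee -a /etc/privilegedfile > /dev/null

Only tee runs with elevated rights here, which keeps the sudo rule as narrow as the answer recommends.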
666,351 | I recently installed Debian for CLI purposes.I am looking to install CLI packages, I want to know how to search for packages (CLI packages such as nano)? | To find CLI packages in Debian, you can look for packages tagged as interface::command-line , either using the tag search engine , or on your system by installing debtags and running debtags search interface::command-line Both approaches have options to refine the search. See the Debtags wiki page for more details. This does have limits: packages aren’t all tagged appropriately. You can also look for packages which depend on libncurses6 : apt-rdepends -r libncurses6 | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/666351",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/487583/"
]
} |
666,404 | Let’s say we have three machines: A, B and C. Machine A can't reach machine B from any network, so I can't send files between both. But both A and B can be reached from C (my machine). Today I have to copy a huge file from A to B. Currently I would need to copy it first from A to C and then from C to B. Is there a way to connect or pipe the scp to stream the incoming data to the target machine without need to save in the middle? | man scp writes (with my emphasis) -3 Copies between two remote hosts are transferred through the local host . Without this option the data is copied directly between the two remote hosts. [...] So what you need is a copy command that routes via your local system (I almost always include -p , to preserve timestamps and permissions, so I've done that here too): scp -3p serverA:path/to/file(s) serverB:/path/to/destination Very recent versions of scp have -3 enabled by default since 9 August 2021, and to disable it one must now specify -R . Using this third party copy feature will disable prompts for passphrases and passwords as scp cannot ask for them for both hosts. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/666404",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/182393/"
]
} |
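A concrete invocation of the relayed copy described above; the host names and paths are placeholders:

scp -3p serverA:/data/hugefile serverB:/data/

With -3 the data streams through the local machine and, as the answer notes, scp cannot prompt for passwords on both ends, so key-based authentication to both hosts is typically the practical setup.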
666,539 | I would like to remove all empty lines that are ONLY located at the end of the file using awk I was able to successfully find a way to delete all of the empty lines at the top only with the following command: awk '/^$/ && a!=1 {a=0} !/^$/ {a=1} a==1 {print}' file.txt However, I didn't know how to reverse it so I could remove the bottom lines instead. I know I could just use the command above and pipe it with tac , but I prefer a direct approach using awk command only (if possible). To clarify, a line is considered "empty" if it is "visually empty", i.e. contains at most spaces and/or tabs. | Awk
Since Awk reads the file sequentially, from the first to the last line, without external help (e.g. Tac) it can only figure out whether a block of empty lines is at the end of the file when it actually reaches the end of the file. What you can do is keep a variable with the empty lines (i.e., only newline characters, the default record separator RS ) and print those empty lines whenever you reach a non-empty line:
awk '/^$/{n=n RS}; /./{printf "%s",n; n=""; print}' file
You may wonder why there is a difference between print n and printf n : print appends the output record separator ( ORS , by default a newline) to the expression to be printed. Thus you would get an extra newline if you tried it. You could also write it with a single output statement as in
awk '/^$/{n=n RS}; /./{printf "%s%s%s",n,$0,RS; n=""}' file
Ed or Ex
To print the output (just as Awk did), choose either of
printf '%s\n' 'a' '' '.' '?.?+1,$d' ',p' 'Q' | ed -s file
printf '%s\n' 'a' '' '.' '?.?+1,$d' '%p' 'q!' | ex -s file
To directly apply the changes to the file, choose either of
printf '%s\n' 'a' '' '.' '?.?+1,$d' 'w' 'q' | ed -s file
printf '%s\n' 'a' '' '.' '?.?+1,$d' 'x' | ex -s file
To understand what's going on.
Command substitution
Shells strip trailing newline characters in command substitution:
printf '%s\n' "$(cat file)"
Mind that some shells will not handle large files and error with "argument list too long". Inspired by this answer . | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/666539",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/477132/"
]
} |
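The question counts lines containing only spaces or tabs as empty, while the /^$/ pattern above matches only truly empty lines. A variant handling that case might look like this; it is an added sketch, not part of the original answer:

awk '/^[[:blank:]]*$/ {n = n $0 RS; next}   # buffer blank-looking lines as-is
     {printf "%s", n; n = ""; print}        # flush the buffer before each non-blank line
    ' file

Trailing blank-looking lines stay in the buffer and are never printed, while blank runs in the middle of the file are preserved exactly.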
666,627 | Let's say I accidentally ran sudo dd if=/dev/sda of=/dev/sda instead of sudo dd if=/dev/sda of=/dev/sdb And I didn't want to wait for the 500gb operation to complete to find out if I broke my system. Does anyone know what the outcome would be so I can either wait in peace for it to complete or just interrupt it and start with a fresh install so long? Edit: Here is the resulting output
$ time sudo dd if=/dev/sda of=/dev/sda bs=64K conv=sync,noerror status=progress
512001769472 bytes (512 GB, 477 GiB) copied, 9084 s, 56.4 MB/s
dd: error writing '/dev/sda': No space left on device
7814181+1 records in
7814181+0 records out
512110190592 bytes (512 GB, 477 GiB) copied, 9088.72 s, 56.3 MB/s
real 151m28.774s
user 0m28.459s
sys 5m33.146s
| In isolation, dd if=/dev/sdx of=/dev/sdx would write the same data that is already there. No harm done(*). However, if the device is in active use e.g. by a filesystem, there may be a possible race condition: dd reads data (A), filesystem writes data (B), dd writes data (A), causing (B) to be lost / corrupted. So it could still result in data loss or system crash. Other possible side effects are wasting write cycles if it's SSD storage, and causing sparse files / thin volumes / snapshots to use up more storage space. Also note that if you do the same thing with a regular file instead of a block device, the result would be an empty file and all data lost:
$ dd if=foobar of=foobar
0+0 records in
0+0 records out
0 bytes copied, 0.000179684 s, 0.0 kB/s
That's because without conv=notrunc the first thing dd does is truncate the output file to 0 bytes, and at that time, all it can read from a 0 byte file is 0 bytes too. (*) unless you use additional options that are capable of changing offsets, such as seek , skip , noerror , sync , ... or if you end up using different but same device like if=/dev/sdx of=/dev/sdx1 where such offsets are introduced by partitioning. In this case dd would end up writing the same data pattern repeatedly (by reading what it previously wrote at an offset). It would corrupt everything. There is also a more obscure corner case where a device erratically returns wrong data without properly reporting it as a read error. In this case you'd end up writing corrupt data back to the device. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/666627",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/488872/"
]
} |
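As a pre-flight habit for the clone that was actually intended, double-checking the devices before starting avoids this class of accident; the device names below are just the placeholders from the question:

lsblk -o NAME,SIZE,MODEL,MOUNTPOINT /dev/sda /dev/sdb   # confirm which disk is which and that the target is not mounted
sudo dd if=/dev/sda of=/dev/sdb bs=64K conv=sync,noerror status=progress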
666,657 | I have a directory with lots of json and pdf files that are named in a pattern. I am trying to filter the files on name with the following pattern \d{11}-\d\.(?:json|pdf) in the command. For some reason it is not working. I believe it is due the fact that the xargs take the arguments one big line of string or when the input is split there is some whitespace, \n or null character. ls | xargs -d '\n' -n 1 grep '\d{11}-\d\.(?:json|pdf)' if I try just this ls | xargs -d '\n' -n 1 grep '\d' It selects file names with digits in them, as soon as I specify the multiplicity regex, nothing matches. | First, ls | xargs grep 'pattern' makes grep look for occurrences incontents of files listed by ls , not in list of filenames. To look forfilenames it should be enough to do: ls | grep 'pattern' Second, grep '\d{11}-\d\.(?:json|pdf)' would work only with GNU grepand -P option. Use the following syntax instead - it works with GNU,busybox and FreeBSD implementations of grep: ls | grep -E '[[:digit:]]{11}-[[:digit:]]\.(json|pdf)' Third, parsing ls is not a goodidea . Use GNU find : find . -maxdepth 1 -regextype egrep -regex '.*/[[:digit:]]{11}-[[:digit:]]\.(json|pdf)' or FreeBSD find: find -E . -maxdepth 1 -regex '.*/[[:digit:]]{11}-[[:digit:]]\.(json|pdf)' | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/666657",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/486348/"
]
} |
666,766 | I have heard many times that issuing rm -rf is dangerous since users can accidentally remove the entire system. But sometimes I want to remove a directory recursively without being asked every time. I am thinking about using yes | rm -r instead. The thing I am thinking is that, is yes | rm -r safer than rm -rf ? Or essentially, are they the same? | First, as others have already said, yes | rm -r is very similar but not identical to rm -rf . The difference is that the -f option tells rm to continue past various errors. This means that yes | rm -r will exit on the first error unlike rm -rf which continue on and keep deleting everything it can. This means that yes | rm is slightly less dangerous than rm -f , but not substantially so. So, what do you do to mitigate the risks of rm ? Here are a few habits I've developed that have made it much less likely to run into trouble with rm . This answer assumes you're not aliasing rm to rm -i , which is a bad practice in my opinion. Do not use an interactive root shell. . This immediately makes it much more difficult to do the worst-case rm -rf / . Instead, always use sudo , which should be a visual clue to look very carefully at the command you're typing. If it's absolutely necessary to start a root shell, do what you need there and exit. If something is forcing you to be root most of the time, fix whatever it is. Be wary of absolute paths. If you find yourself typing a path starting with / , stop. It's safer to avoid absolute paths, and instead cd to the directory you intend to delete from, use ls and pwd to look around to make sure you're the right place, and then go ahead. Pause before hitting return on the rm command. I've trained myself to always always always lift my fingers from the keyboard after typing any rm command (and a few other potentially dangerous commands), inspect what I've typed very carefully, and only then put my fingers back to the keyboard and hit return. Use echo rm ... to see what you're asking rm to do. I often do crucial rm commands as a two-step process. First, I type $ echo rm -rf ... this expands all shell globs (ie, * patterns, etc) and shows me the rm command that would have been executed. If this looks good, again after careful inspection, I type ^P (control-P) to get the previous input line back, delete echo , and inspect the command line again, then hit return without changing anything else. Maintain backups. If you're doing most of the above, the odds of having to restore the entire system are very low, but you can still accidentally delete your own files, and it's handy to be able to get them back from somewhere. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/666766",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/-1/"
]
} |
666,770 | Background I'm attempting to configure automatic LUKS unlock on CentOS 8 Stream. I would like to place a keyfile on the unencrypted boot partitionand and use it to unlock the LUKS protected LVM PV (which contains the root filesystem). I understand that this is a strange thing to want to do and undermines much of the value of disk encryption - but please humor me. Here's an overview of the current layout: $ lsblkNAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINTnvme0n1 259:0 0 931.5G 0 disk ├─nvme0n1p1 259:1 0 256M 0 part /boot/efi├─nvme0n1p2 259:2 0 1G 0 part /boot└─nvme0n1p3 259:3 0 930.3G 0 part └─luks-3d33d226-9640-4343-ba5a-b9812dda1465 253:0 0 930.3G 0 crypt └─cs-root 253:1 0 20G 0 lvm /$ sudo e2label /dev/nvme0n1p2boot Today the /etc/crypttab contains the following for booting with a manually entered passphrase (UUIDs redacted for readability) which works just fine: luks-blah UUID=blah none discard In order to achieve automatic unlocking I have generated a keyfile /boot/keys/keyfile and added it as a key on the LUKS partition using luksAddKey . Attempt 1 In my first attempt I changed the crypttab line to this: luks-blah UUID=blah /keys/keyfile:LABEL=boot discard,keyfile-timeout=10s This does result in automatic unlocking and mounting of the root filesystem, but the boot process fails and dumps me into rescue mode as the system cannot mount /boot . The reason is that the boot partition has already been mounted (to a randomish location in order to obtain the keyfile: /run/systemd/cryptsetup/keydev-luks-blah ). Attempt 2 I tried changing crypttab to this: luks-blah UUID=blah /boot/keys/keyfile discard,keyfile-timeout=10s I thought maybe the boot scripts are smart enough to figure out how to access /boot/keys/keyfile without /boot being mounted yet. This didn't work however, and I just get the prompt to manually enter the passphrase. Question Is there a way to unlock the root filesystem using a keyfile stored on a partition that needs to be available for normal mounting? | | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/666770",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/119672/"
]
} |
666,779 | I have a weird issue: sometimes when my monitor is turned off, the fans are running loud, even when there shouldn't be much usage of the CPU on the system as far as I know. But as soon as I move my mouse and start top to try to diagnose this, the activity, whatever it is, stops; with the fans winding down. So I want a script/program/method that I could start at some point in time, leave the computer unattended while this program is recording CPU activity of processes, then when I resume operating the computer I should be able to read the program's report from which I would quickly know what processes are making the fans work hard. EDIT: one chromium process is the one making the fans run loud while the screen is off. No idea why, though. | | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/666779",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/116512/"
]
} |
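As an illustration only (not taken from the original thread), one common way to record per-process CPU usage while the machine is unattended is pidstat from the sysstat package; the interval, sample count and log file name below are arbitrary choices:

pidstat -u 30 120 > ~/cpu-activity.log   # one sample of every active process each 30 s, for about an hour

Afterwards the log can be searched for entries with a high %CPU value to see what was busy while the screen was off.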
666,846 | I hope the title explains this correctly. I am currently trying to print out an array after counting unique values from a spreadsheet. My awk command works correctly: awk -F"," 'NR>1{col[$1,$9]++} END {for (i in col) printf("%s: %d\n", i, col[i])}' my_file.csv | sort When printing though I get a special character that looks like a question mark. How do I print this with a comma + space between the year and the season. eg: 1896, summer: 151 | Awk is treating [$1,$9] as a pseudo multi-dimensional array, and inserting its internal SUBSEP character. This is documented in The GNU Awk User's Guide for example: SUBSEP The subscript separator. It has the default value of "\034" and is used to separate the parts of the indices of a multidimensional array. Thus, the expression ‘foo["A", "B"]’ really accesses foo["A\034B"] (see section Multidimensional Arrays). Ex.
$ echo 'A,A' | gawk -F, '{col[$1,$2]++} END{for(i in col) print i}' | od -to1
0000000 101 034 101 012
0000004
If you want a 1d array indexed by the literal value of the string, you can use [$1 "," $9] or more generally [$1 FS $9] (the latter ensures that the solution will work for data with other separators):
$ echo 'A,A' | gawk -F, '{col[$1 FS $2]++} END{for(i in col) print i}'
A,A
If you want comma+space either use [$1 FS" " $2] or set SUBSEP = FS" " in a BEGIN block. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/666846",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/488399/"
]
} |
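Applying the last suggestion to the exact command from the question, setting SUBSEP in a BEGIN block gives the requested "year, season" output; a small sketch:

awk -F',' 'BEGIN {SUBSEP = ", "}
           NR>1  {col[$1,$9]++}
           END   {for (i in col) printf("%s: %d\n", i, col[i])}' my_file.csv | sort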
666,853 | I want to merge some pre-sorted tab-delimited files: file bygroup.0 : ancient-american mercury 1 164ancient-american mh25 2 8717664ancient-neolith tk11 262 40074321970ancientdna jk21 6936 17069206689ancientdna rm20 11267 372606702813ancientgen ab34 1573 27800468142ancientgen dg11 3516 45081427920ancientgen fa8 7179 462396221983ancientgen mp15 41 10248223517ancientgen mp18 254 1049351143ancientgen rm20 15100 1565340401ancientgen tc9 1695 89861489631 file bygroup.2 : ancient-american mercury 1 160ancient-american mh25 2 10362712888ancient-neolith tk11 264 43842268110ancientdna jk21 6919 16379509855ancientdna rm20 11268 324906365415ancientgen ab34 1577 33947364202ancientgen dg11 3518 48092138390ancientgen fa8 7174 472364587220ancientgen mp15 39 32487920045ancientgen mp18 254 1058177852ancientgen rm20 15104 998615135ancientgen tc9 1692 94858351562 You can see the 2 files have the same number of lines, and are in the same order based on columns 1 and 2, and the entries in those columns are the same. Now I want to merge them so that all the lines with the same values in the first 2 columns are output sequentially. I thought sort -m would be all I'd need, but: $ sort -m bygroup.*ancient-american mercury 1 160ancient-american mercury 1 164ancient-american mh25 2 10362712888ancient-american mh25 2 8717664ancient-neolith tk11 262 40074321970ancientdna jk21 6936 17069206689ancientdna rm20 11267 372606702813ancientgen ab34 1573 27800468142ancientgen dg11 3516 45081427920ancientgen fa8 7179 462396221983ancientgen mp15 41 10248223517ancientgen mp18 254 1049351143ancientgen rm20 15100 1565340401ancientgen tc9 1695 89861489631ancient-neolith tk11 264 43842268110ancientdna jk21 6919 16379509855ancientdna rm20 11268 324906365415ancientgen ab34 1577 33947364202ancientgen dg11 3518 48092138390ancientgen fa8 7174 472364587220ancientgen mp15 39 32487920045ancientgen mp18 254 1058177852ancientgen rm20 15104 998615135ancientgen tc9 1692 94858351562 (I get the same results with other options I added, eg. sort -k 1,2 -ifm .) It does what I expected for ancient-american, but not for the others. What's going on, and is there another fast and efficient way of doing this without resorting to a full sort ( sort works without the -m here). | Awk is treating [$1,$9] as a pseudo multi-dimensional array, and inserting its internal SUBSEP character. This is documented in The GNU Awk User's Guide for example: SUBSEP The subscript separator. It has the default value of "\034" and is used to separate the parts of the indices of a multidimensional array.Thus, the expression ‘foo["A", "B"]’ really accesses foo["A\034B"](see section Multidimensional Arrays). Ex. $ echo 'A,A' | gawk -F, '{col[$1,$2]++} END{for(i in col) print i}' | od -to10000000 101 034 101 0120000004 If you want a 1d array indexed by the literal value of the string, you can use [$1 "," $9] or more generally [$1 FS $9] (the latter ensures that the solution will work for data with other separators): $ echo 'A,A' | gawk -F, '{col[$1 FS $2]++} END{for(i in col) print i}'A,A If you want comma+space either use [$1 FS" " $2] or set SUBSEP = FS" " in a BEGIN block. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/666853",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/489105/"
]
} |
667,101 | I am currently enrolled in a course that is teaching me UNIX fundamentals, such as common commands and such. After doing some digging on UNIX, I came across the rabbit hole of legal battles over who owns UNIX, and the UNIX wars. I have done some research, but the sources are sort of dated (circa 2003 - 2004) and have conflicting information as far as who owns it. Here are a couple of the sources I have found: https://www.zdnet.com/article/who-really-owns-unix/ - states that the Open Group owns it https://www.informit.com/articles/article.aspx?p=175171&seqNum=2 - states that the SCO owns it After reading these sources, it sounds like the Open Group is claiming to own the UNIX trademark, while the SCO claims to own the UNIX source code. Am I understanding that correctly? | TLDR As of today, and talking about the USA, the UNIX trademark is owned by The Open Group (you can see it on the USPTO website ) for "COMPUTER PROGRAMS, * NAMELY, TEST SUITES USED TO DETERMINE COMPLIANCE WITH CERTAIN SPECIFICATIONS AND STANDARDS *" (First use: Dec. 14, 1972, Use in Commerce: Dec. 14, 1972) A bit more Novell transferred the trademarks of Unix to The Open Group in 1993. See message from Chuck Karish on comp.std.unix news group . I quote a piece: Q4. Will Novell continue to control UNIX? A4. No. From today, the APIs which define UNIX will be controlledby X/Open and managed through the company's proven openindustry consensus processes.Novell will continue to own one product (a single implementation of UNIX)which currently conforms to the specification. Novell is clearly free toevolve that product in any way that it chooses, but may only continue to callit UNIX if it maintains conformance to the X/Open specifications. SCO tried to buy UNIX from Novell (again). You may read docketing statement of The SCO Group, Inc. v. Novell, Inc case . I quote a piece: It is therefore ORDERED that SCO’s Renewed Motion for Judgment as aMatter of Law or, in the Alternative, for a New Trial (Docket No. 871)is DENIED. DATED June 10, 2010. BY THE COURT: TED STEWART United States District Judge Then SCO appealed: Party or Parties filing Notice of Appeal/Petition: The SCO Group, Inc.______________________________________________________________________ I. TIMELINESS OF APPEAL OR PETITION FOR REVIEW A. APPEAL FROM DISTRICTCOURT Date notice of appeal filed: July 7, 2010 On August 30, 2011, the Appeals Court affirmed the trial decision. You can read this . A quote: VII. IMPLIED COVENANT OF GOOD FAITH AND FAIR DEALINGSCO argues the district court erred in entering judgment in Novell’s favor on its good faith and fair dealing claim (...). The district court’s conclusion on this point is consistent with the jury verdict on copyright ownership and is supported by evidence in the record.AFFIRMED. Entered by the Court: Terrence L. O’Brien. United States Circuit Judge So Unix is not owned by SCO. In fact, SCO holds some UNIX® certifications issued by The Open Group: UNIX 95 and UNIX 93 . Any system that wants to be called a UNIX® must be certified by The Open Group. A list of certified Unixes can be found on The Open Group official register of UNIX Certified Products page . Some related systems not holding a certification are usually referred to as *nixes or Unix-like systems. You can find out more on Wikipedia article about UNIX, section Branding and article about SCO Group, Inc. v. Novell, Inc. lawsuit . | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/667101",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/489364/"
]
} |
667,139 | In reading through the source to fff to learn more about Bash programming, I saw a timeout option passed to read as an array here : read "${read_flags[@]}" -srn 1 && key "$REPLY" The value of read_flags is set like this : read_flags=(-t 0.05) (The resulting read invocation intended is therefore read -t 0.05 -srn 1 ). I can't quite figure out why a string could not have been used, i.e.: read_flags="-t 0.05"read "$read_flags" -srn 1 && key "$REPLY" This string based approach results in an "invalid timeout specification". Investigating, I came up with a test script parmtest : show() { for i in "$@"; do printf '[%s]' "$i"; done printf '\n'}opt_string="-t 1"opt_array=(-t 1)echo 'Using string-based option...'show string "$opt_string" x y zread "$opt_string"echoecho 'Using array-based option...'show array "${opt_array[@]}" x y zread "${opt_array[@]}" Running this, with bash parmtest ( $BASH_VERSION is 5.1.4(1)-release), gives: Using string-based option...[string][-t 1][x][y][z]parmtest: line 11: read: 1: invalid timeout specificationUsing array-based option...[array][-t][1][x][y][z](1 second delay...) I can see from the debug output that the value of 1 in the array based approach is separate and without whitespace. I can also see from the error message that there's an extra space before the 1 : read: 1: invalid timeout specification . My suspicions are in that area. The strange thing is that if I use this approach with another command, e.g. date , the problem doesn't exist: show() { for i in "$@"; do printf '[%s]' "$i"; done printf '\n'}opt_string="-d 1"opt_array=(-d 1)echo 'Using string-based option...'show string "$opt_string" x y zdate "$opt_string"echoecho 'Using array-based option...'show array "${opt_array[@]}" x y zdate "${opt_array[@]}" (The only differences are the opt_string and opt_array now specify -d not -t and I'm calling date not read in each case). When run with bash parmtest this produces: Using string-based option...[string][-d 1][x][y][z]Wed Sep 1 01:00:00 UTC 2021Using array-based option...[array][-d][1][x][y][z]Wed Sep 1 01:00:00 UTC 2021 No error. I've searched, but in vain, to find an answer to this. Moreover, the author wrote this bit directly in one go and used an array immediately , which makes me wonder. Thank you in advance. Update 03 Sep : Here's the blog post where I've written up what I've learned so far from reading through fff , and I've referenced this question and the great answers in it too: Exploring fff part 1 - main . | The reason is a difference in how the read builtin function and the date command interpret their command-line arguments. But, first things first. In both of your examples, you place - as is recommended - quotes around the dereferencing of your shell variables, be it "${read_flags[@]}" in the array case or "$read_flags" in the scalar case.The main reason why it is recommended to always quote your shell variables is to prevent unwanted word splitting. Consider the following You have a file called My favorite songs.txt with spaces in it, and want to move it to the directory playlists/ . If you store the filename in a variable $fname and call mv $fname playlists/ the mv command will see four arguments: My , favorite , songs.txt and playlists/ and try to move the three nonexistant files My , favorite and songs.txt to the directory playlists/ . Obviously not what you want. 
Instead, if you place the $fname reference in double-quotes, as in mv "$fname" playlists/ it makes sure the shell passes this entire string including the spaces as one word to mv , so that it recognizes it is just one file (albeit with spaces in its name) that needs to be moved. Now you have a situation in which you want to store option arguments in a shell variable. These are tricky, because sometimes they are long, sometimes short, and sometimes they take a value. There are numerous ways on how to specify options that take arguments, and usually how they are parsed is left entirely at the discretion of the programmer (see this Q&A ) for a discussion). The reason why Bash's read builtin and the date command react differently is therefore likely in the internal workings on how these two parse their command-line arguments. However, we may speculate a little. When storing -t 0.05 in a scalar shell variable and passing it as "$opt_string" , the recipient will see this as one string containing a space (see above). When storing -t and 0.05 in an array variable and passing it as "${opt_array[@]}" the recipient will see this as two separate items, the -t and the 0.05 . (1) (2) Many programs will use the getopt() function from the GNU C library for parsing command-line arguments, as is recommended by the POSIX guidelines. The getopt() distinguishes "short" options and "long" option format, e.g. date -u or date --utc in case of the date command. The way option values for an option (say, -o / --option ) are interpreted by getopt is usually -o value or -o value for short options and --option= value or --option value for long options. When passing -t 0.05 as two words to a tool that uses getopt() , it will take the first character after the - as being the option name and the next word as the option value (the -o value syntax). So, read would take t as option name and 0.05 as option value. When passing -t 0.05 as one word, it will be interpreted as the -o value syntax: getopt() will take (again) the first character after the - as the option name and the remainder of the string as option value, so the value would be 0.05 with a leading space . The read command apparently doesn't accept timeout specifications with a leading space. And indeed, if you call read -t " 0.05" -srn 1 where the value is explicitly a string with leading space, read also complains about this. As a conclusion , the date command is obviously written in a more lenient way when it comes to the option value for -d and doesn't care if the value string starts with a space. This is perhaps not unexpected, as the values that the date specifications can take on are very diverse, as opposed to the case of a timeout specification that (clearly) needs to be a number. (1) Note that using the @ (as opposed to * ) makes a great difference here, because when the array reference is quoted, all array elements will then appear as if they were individually quoted and thus could contain spaces themselves without being split further . (2) In principle, there is a third option: Store -t 0.05 in a scalar variable $opt_string , but pass it as $opt_string without the quotes. In this case, we would have word-splitting at the space, and again two items, -t and 0.05 , would be passed separately to the program. However, this is not the recommended way because sometimes your argument value will have explicit whitespaces that need preserving. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/667139",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/87597/"
]
} |
667,204 | I have a symbolic link, sd/common.py -> actual_file , and I want to replace the link with a generated file. However, whenever I do cp /tmp/Star_Wrangler/common.py sd/common.py ... it copies from /tmp/Star_Wrangler/common.py and overwrites the actual_file instead of just replacing the symbolic link as I intend. Every time I forget to delete the symbolic link before copying, this keeps happening. Is there an option to get the behaviour I expect? I look at the manuals, but they all talk about symbolic links at the source, not at the target. | This depends on what Unix you are using. On some BSD systems (OpenBSD, FreeBSD), you will find that cp -f will unlink (remove) the symbolic link and replace it with the source. Using GNU cp , this would not have the same effect, and you would need to use the long option --remove-destination instead. On macOS, use cp -c to, as the manual says, "copy files using clonefile(2) ". On NetBSD, use cp -a ("archive mode", the same as cp -RpP on that system). This doesn't work on GNU, macOS, OpenBSD, or FreeBSD, even though all of these systems have the same or similar -a option for cp (on GNU systems, it's the same as -dR --preserve ). You already mention this yourself: Removing the link before copying the file will solve the issue. The rm utility removes the link rather than the file referenced by the link. This is also the most portable way to replace a symbolic link with a regular file. If you are writing a script, then I suggest that you use rm followed by cp . If you are working interactively and keep forgetting to do this, then it's also likely that you forget to use a specific option with cp for these situations. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/667204",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/350958/"
]
} |
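To make the GNU cp behaviour from the answer concrete with the paths from the question (the ln -sf line only recreates the starting situation of the example):

ln -sf actual_file sd/common.py
cp --remove-destination /tmp/Star_Wrangler/common.py sd/common.py   # replaces the link itself, leaving actual_file untouched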
667,288 | #!/bin/shfoo() { echo "in foo"}type foo checkbashisms.pl obviously does not like type , why? $ checkbashisms.pl foo.shpossible bashism in foo.sh line 7(type):type foo Is it not POSIX? But it's supported by all common shells (i.e. bash , zsh , dash , busybox sh , mksh ; even in ksh ; maybe just csh does not support it), should not there be a way how to suppress this warning? | type is part of POSIX , but as part of the X/Open Systems Interfaces option (XSI). The checkbashisms man page explicitly says Note that the definition of a bashism in this context roughly equates to "a shell feature that is not required to be supported by POSIX"; this means that some issues flagged may be permitted under optional sections of POSIX, such as XSI or User Portability. So type is flagged because it is an optional feature. I’m not aware of any way of disabling specific warnings in checkbashisms , other than removing them from the script. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/667288",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/26407/"
]
} |
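If the script only needs to check that something is callable, command -v is the portable substitute usually suggested for this kind of check (still worth verifying against your checkbashisms version); a sketch with the function from the question:

foo() { echo "in foo"; }
if command -v foo >/dev/null 2>&1; then
    echo "foo is defined"
fi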
667,758 | I have a fasta file namely test.fasta, pas.fasta, cel.fasta as shown below
test.fasta
>tile
ATGTC
>259
TGAT
pas.fasta
>ta
ATGCT
cel.fasta
>787
TGTAG
>yog
TGTAT
>In
NNTAG
I need to print the file name and the total number of fasta sequences as shown below,
test,2
pas,1
cel,3
I have used the following commands but failed to serve my purpose grep ">" test.fasta | wc -l && ls test.fasta Please help me to do the same. Thanks in advance. | That's what the -c option of grep (to c ount) is for:
$ grep -ce '^>' -- *.fasta
cel.fasta:3
pas.fasta:1
test.fasta:2
Note that if there's only one matching file, the file name will not be printed. Some grep implementations have a -H option to force the file name to be printed always:
$ grep -Hce '^>' -- *.fasta
cel.fasta:3
To get your exact expected output, you just need to replace .fasta: with , :
$ grep -Hce '^>' -- *.fasta | sed 's/\.fasta:/,/'
cel,3
pas,1
test,2
(here assuming your file names don't contain other occurrences of .fasta: such as my.fasta:foo.fasta ; of course newline or , or " characters and potentially whitespace characters in file names would also be a problem if the output is meant to be in CSV format) | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/667758",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/357294/"
]
} |
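A per-file loop gives the same table without the sed post-processing and works even when only one file matches; a small sketch along the lines of the answer above:

for f in *.fasta; do
    printf '%s,%s\n' "${f%.fasta}" "$(grep -c '^>' -- "$f")"
done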
667,774 | This week I've tried several different Live USB Linux distros on my Asus X541UAK (4 GB RAM, 500 GB HDD, Inter Core i3-7100, Windows 10 Pro 19043 [latest], BIOS v311 [latest]): Manjaro 21.1.1 , with kernel 5.4 ª, started up instantly and ran flawlessly. Just perfection. a. Kubuntu 20.04 , also with kernel 5.4 (which I tried a year ago, when it just came out), ran flawlessly, with no excess elements before startup. Kubuntu and Ubuntu Studio 20.04.2.0 , both with kernel 5.8 , showed an error tpm_crb MSFT0101:00: [Firmware Bug]: ACPI region does not cover the entire command/response buffer. [mem 0xfed40000-0xfed4087f flags 0x200] vs fed40080 f80 twice (two identical rows) before the respective logo appeared and the system checked all the files in itself, but after that it also ran flawlessly. Kubuntu and Ubuntu Studio 20.04.3 , both with kernel 5.11 , showed the same error screen, checked all their files, and that's all. Kubuntu has a "Try/Install" window before loading into Live session, and pressing Try button caused endless* loading; the rest of buttons were unclickable. Ubuntu Studio (with XFCE) didn't have a welcome window and loaded the desktop instantly. But none of apps (tried Firefox, Ardour, Okular) worked — they crashed with the Crash reporter. The application has a problem and crashed pop up appearing. a. Pop!_OS 20.04 also seems to have kernel 5.11 (the devs don't mark which subversion of Ubuntu it's based on, most probably 20.04.3, since it's downloaded within this week) and behaves identically to Ubuntu Studio. * I didn't wait for more than 5 minutes and turned off the laptop manually None of the images is corrupt, nor the USB sticks. The way of making em bootable (burning with Etcher, Rufus, creating a Ventoy partition) doesn't change anything. The firmware bugs clearly show that there's something with my TPM, but my BIOS (even on latest version) doesn't have TPM even mentioned (although Windows shows that i have TPM 2.0 enabled). Questions: Is it actually about kernels? The most flawless of them, Manjaro, is an Arch derivative, the rest are Ubuntu derivatives. If i'm about to use 5.11 kernel in future (I hope 5.13 or 5.14 won't have such an error), what can be the way to fix it? ª Update: Manjaro (Aug 27th 2021) turned out to have kernel v5.13. I suggest that the error was fixed somewhere in .12 or .13. Sorry for misinformation | That's what the -c option of grep (to c ount) is for: $ grep -ce '^>' -- *.fastacel.fasta:3pas.fasta:1test.fasta:2 Note that if there's only one matching file, the file name will not be printed. Some grep implementations have a -H option to force the file name to be printed always: $ grep -Hce '^>' -- *.fastacel.fasta:3 To get your exact expected output, you just need to replace .fasta: with , : $ grep -Hce '^>' -- *.fasta | sed 's/\.fasta:/,/'cel,3pas,1test,2 (here assuming your file names don't contain other occurrences of .fasta: such as my.fasta:foo.fasta ; of course newline or , or " characters and potentially whitespace characters in file names would also be a problem if the output is meant to be in CSV format) | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/667774",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/491031/"
]
} |
667,812 | Giving that Windows 10 would most likely wipe my Linux EFI boot entry, See the comment after the answer here : Windows 10 will usually "self-heal" its firmware boot entry if you manage to get Windows booting even once. In the process, if there is no existing Windows boot entry in the firmware (i.e. in the efibootmgr list), it will usually usurp Boot0000 for itself, regardless of whether or not it is already in use. I'd like to backup my EFI boot entry before so that I can then easily restore it even Windows 10 wipes it. Seems there is no existing tools that can do it, though https://github.com/rhboot/efibootmgr/issues/10 mentioned the efivar utility, with somewhat manual process. However, I cannot find any further info into that direction. Hence the question. Or, if I have a EFI boot entry like this: Boot0000* debian HD(13,GPT,007a058a-8e5e-45df-8d97-6575b66b5355,0x1afa9000,0x113000)/File(\EFI\debian\grubx64.efi) How to recreate it next time? | It's easy enough to recreate a boot entry from scratch once you know how... and have the efibootmgr tool at hand, of course. Boot0000* debian HD(13,GPT,007a058a-8e5e-45df-8d97-6575b66b5355,0x1afa9000,0x113000)/File(\EFI\debian\grubx64.efi) The 007a058a-8e5e-45df-8d97-6575b66b5355 is the PARTUUID of the ESP partition the \EFI\debian\grubx64.efi is located in. (The 13 may be a partition number, but according to the specification, the PARTUUID is the primary identifier.) The efibootmgr command just needs to know the disk: it will find the ESP partition on that disk, and its PARTUUID, automatically on its own, assuming there is only one ESP per disk. So, let's assume that this PARTUUID belongs to your /dev/sda13 partition (use blkid or lsblk -o +partuuid to check). To recreate the boot entry (or to make an extra copy of it right now): efibootmgr -c -d /dev/sda -L debian -l \\EFI\\debian\\grubx64.efi Backslashes are doubled because backslash is a special escape character for the shell. This command will automatically find the ESP partition on /dev/sda and its PARTUUID, and will build the boot entry for you. efibootmgr will automatically pick the first free BootNNNN number for the boot entry, and will also automatically add it as the first entry in the BootOrder . So if Boot0000 already exists, this would create Boot0001 and set BootOrder to 0001,0000 if it previously was just 0000 . This would be an effective backup of your current boot entries: (lsblk -o +partuuid; efibootmgr -v) > boot-entry-repair-kit.txt | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/667812",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/374303/"
]
} |
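The same recreation command can be written with single quotes, which avoids having to double the backslashes; /dev/sda is the assumption carried over from the answer:

sudo efibootmgr -c -d /dev/sda -L debian -l '\EFI\debian\grubx64.efi'
efibootmgr -v | grep -i debian    # confirm the new entry and its place in BootOrder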
667,860 | There are a few broken symlinks in /etc/systemd/{system,user} like this one: anacron.timer -> /lib/systemd/system/anacron.timer but there is no anacron.timer in /lib/systemd/system . Instead there is anacron.timer in /usr/lib/systemd/system (maybe because of some kind of migration, I don't know) and timer and service works correctly. Is it a good (systemd 247.9) practice to manually remove this broken symlinks from /etc/systemd ? or should it be done by package maintainer ? | This sort of problem is supposed to be fixed by the package maintainer. In anacron ’s case, the bug is filed as #993348 : the previous version of anacron shipped its systemd files in /lib/systemd , the new version moved them to /usr/lib/systemd but didn’t update any symlinks in /etc . However /etc is owned by the system administrator, not the package manager, so it’s fine for you to clean up the mess too. The links aren’t necessary, you might as well delete them. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/667860",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/172002/"
]
} |
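To find any remaining dangling links of this kind before removing them by hand, GNU find's -xtype test is convenient; a small sketch:

find /etc/systemd/system /etc/systemd/user -xtype l            # list broken symlinks
# once reviewed, the same command with -delete removes them:
# find /etc/systemd/system /etc/systemd/user -xtype l -delete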
667,869 | How can I combine the "find" command with the "nano" command? For example, find . -name "helloworld.py" | nano How to open that file (first out of several lets say) after it is found using "nano" (without using a function but a single line of chained commands)? | To edit the first file only, find . -name helloworld.py -exec nano {} \; -quit This looks for files named helloworld.py , and for each such file found, runs nano /path/to/helloworld.py , and then quits (which means that only the first file will be processed). To edit all the matching files, find . -name helloworld.py -exec nano {} + This runs nano with as many files as will fit on the command line. Use Ctrl X to close each file in turn. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/667869",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/395980/"
]
} |
667,940 | Recently Docker announced that Docker Desktop would cease to be free for some kinds of users. See announcement in blog post . I don't need any of the features that are exclusive to Docker Desktop®. I have used docker in a laptop with debian on it and that version is good enough for me. Is there a way to install the linux version of docker in macOS ? I need both the engine and the cli tool, nothing more. I run build commands, push, tag, run, docker-compose, etc. | On my Macbook, I've installed docker via homebrew with brew install docker docker-compose docker-machine xhyve docker-machine-driver-xhyve (though this was way before docker desktop became non-free, but I'd assume it'd still work) This uses xhyve as a virtual machine, so are basically running a Linux distro in xhyve, and then Docker in this Linux distro. You need to do a bit of configuration, I followed this article . My commandline for creating the VM was docker-machine create default --driver xhyve --xhyve-experimental-nfs-share=true --xhyve-disk-size "40000" | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/667940",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/168588/"
]
} |
667,959 | I'm trying to transfer files on my NAS, but I will get this error "The name of a file or a folder within an encrypted shared folder cannot exceed 143 English characters or 47 Asian (CJK) characters" is there a command in the shell to find every file that meets that? | On my Macbook, I've installed docker via homebrew with brew install docker docker-compose docker-machine xhyve docker-machine-driver-xhyve (though this was way before docker desktop became non-free, but I'd assume it'd still work) This uses xhyve as a virtual machine, so are basically running a Linux distro in xhyve, and then Docker in this Linux distro. You need to do a bit of configuration, I followed this article . My commandline for creating the VM was docker-machine create default --driver xhyve --xhyve-experimental-nfs-share=true --xhyve-disk-size "40000" | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/667959",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/491217/"
]
} |
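The response recorded for the entry above repeats the previous answer and does not address the question about over-long names. As an illustrative sketch only (not from the original thread): the last path component can be measured with awk, so a find | awk pipeline lists every file or directory whose name exceeds 143 bytes. The starting directory ./share is a placeholder, LC_ALL=C makes length() count bytes rather than characters, and names containing newlines would confuse it.
# sketch: print entries whose name (not the full path) is longer than 143 bytes
find ./share -print | LC_ALL=C awk -F/ 'length($NF) > 143'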
668,019 | I am wanting to change the name of column number 5 in each file to the file name itself for all files in a given directory. I have 250 files in the directory and the column names are tab-delimited. At the moment, all files have identical column names.Example of file met-d-Glucose.txt : FID IID PHENO CNT SCORESUM 3999347013_R01C01 1 -9 21 -0.217178 3999347013_R01C02 1 -9 21 -0.054835 3999347013_R02C01 1 -9 21 -0.130287 3999347013_R02C02 1 -9 21 0.0062288 3999347013_R03C01 1 -9 21 -0.0933029 3999347013_R03C02 1 -9 21 0.0434727 I want to change the name of column 5 to the file name. e.g. the output for the example file named met-d-Glucose.txt above would be: FID IID PHENO CNT met-d-Glucose.txt 3999347013_R01C01 1 -9 21 -0.217178 3999347013_R01C02 1 -9 21 -0.054835 3999347013_R02C01 1 -9 21 -0.130287 3999347013_R02C02 1 -9 21 0.0062288 3999347013_R03C01 1 -9 21 -0.0933029 3999347013_R03C02 1 -9 21 0.0434727 The original column name is always SCORESUM.The header line is always the first line.There are never columns after the 5th column.SCORESUM does not appear anywhere else. | On my Macbook, I've installed docker via homebrew with brew install docker docker-compose docker-machine xhyve docker-machine-driver-xhyve (though this was way before docker desktop became non-free, but I'd assume it'd still work) This uses xhyve as a virtual machine, so are basically running a Linux distro in xhyve, and then Docker in this Linux distro. You need to do a bit of configuration, I followed this article . My commandline for creating the VM was docker-machine create default --driver xhyve --xhyve-experimental-nfs-share=true --xhyve-disk-size "40000" | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/668019",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/491162/"
]
} |
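The response recorded for the entry above also repeats the previous answer instead of addressing the question. A minimal sketch of one way to do it (not from the original thread, assuming tab-separated .txt files in the current directory) rewrites the header of each file with awk, using the file's own name for the fifth column; the .tmp suffix is arbitrary.
# sketch: replace the 5th header field of every file with the file's own name
for f in ./*.txt; do
  awk -v name="${f##*/}" 'BEGIN{FS=OFS="\t"} NR==1{$5=name} {print}' "$f" > "$f.tmp" && mv -- "$f.tmp" "$f"
done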
668,067 | Citrix workspace app fails to launch after downloading the .ica file. The error says: SSL Error Contact your help desk with the following information: You have not chosen to trust "DigiCert High Assurance EV Root CA", the issuer of the server's security certificate (SSL error 61). I was able to easily fix it on Ubuntu (20.04) by linking the certificates: sudo ln -s /usr/share/ca-certificates/mozilla/* /opt/Citrix/ICAClient/keystore/cacerts But on Fedora 34 (derived from Red Hat) that fix doesn't work. /usr/share/ca-certificates/mozilla does not exist. I have also tried linking the files in /etc/ssl/certs to the ICAClient path as well as: sudo ln -s /etc/pki/ca-trust/extracted/pem/* /opt/Citrix/ICAClient/keystore/cacerts | In your browser go to the site where you launch your citrix session from and click on the padlock widget on the far left part of the url -> click on "Connection is secure" ______ on chrome browser ______ -> click on "Certificate is valid" this will open a popup window -> on chrome click on tab "Details" -> look at the field "Certificate Hierarchy" -> click on the bottom-most line which is the name of your cert -> hit Export (which will download the cert file) ______ on firefox ______ -> click on "More information" this will open a popup window get into its "Security" tab -> click on "View Certificate" -> this will open a page click on the "DigiCert High Assurance EV Root CA" tab -> look at "Miscellaneous" -> Download -> click on "PEM (cert)" and it will download the cert file we are done with the browser; rename the cert file you just downloaded so it ends with .pem ... my freshly downloaded file lives at ~/Downloads/foo.bar.pem in a terminal issue the following commands cd /opt/Citrix/ICAClient/keystore/; sudo mv cacerts cacerts~~ignore; sudo ln -s /etc/ssl/certs cacerts; sudo cp ~/Downloads/foo.bar.pem /opt/Citrix/ICAClient/keystore/cacerts if your box does not have the dir /opt/Citrix/ICAClient/keystore/cacerts then you can identify your correct path by issuing dpkg -L icaclient | grep cacerts finally issue /opt/Citrix/ICAClient/util/ctx_rehash # this engages the new .pem file above PS ... if Citrix is reading this please slurp this up and post it on your Citrix workspace install site to help folks like me who had to struggle for hours the first time, as nowhere is this documented | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/668067",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/491355/"
]
} |
668,262 | When I run a Centos 7 Docker Image like this docker run -it centos:7 bash Running something which uses Process Substitution is fine (as expected, as Bash has supported Process Substitution since the beginning of time - Bash 1.4.x actually). For example: while IFS= read -r test; do echo $test; done < <(cat anaconda-post.log) But when I switch to /bin/sh the same code doesn't work anymore /bin/sh while IFS= read -r test; do echo $test; done < <(cat anaconda-post.log) sh: syntax error near unexpected token `<' Although /bin/sh seems to be Bash /bin/sh --version GNU bash, version 4.2.46(2)-release (x86_64-redhat-linux-gnu) Copyright (C) 2011 Free Software Foundation, Inc. License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html> But then why doesn't process substitution work anymore? Other non-POSIX features seem to work, though echo ${PATH//:/ } /usr/local/sbin /usr/local/bin /usr/sbin /usr/bin /sbin /bin | Yes, bash , when called as sh , runs in POSIX mode, disabling all its bash-only features. From the manual - Invoked with name sh If Bash is invoked with the name sh, it tries to mimic the startup behavior of historical versions of sh as closely as possible, while conforming to the POSIX standard as well $ ls -lrth /bin/sh lrwxrwxrwx. 1 root root 4 Aug 26 2018 /bin/sh -> bash $ /bin/bash -c 'set -o | grep posix' posix off $ /bin/sh -c 'set -o | grep posix' posix on with posix mode enabled, non-standard features like process substitution won't be enabled. See Bash POSIX Mode to see its complete behavior running in that mode. From release 5.1 of the shell, process substitutions are available in POSIX mode. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/668262",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/2726/"
]
} |
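If the goal is simply to make the snippet from the question run under /bin/sh, note that the process substitution there only feeds a file in, so a plain redirection does the same job in any POSIX shell; a sketch of the rewrite:
# POSIX-sh equivalent of:  done < <(cat anaconda-post.log)
while IFS= read -r test; do
  printf '%s\n' "$test"
done < anaconda-post.log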
668,280 | I have file1 , and I need to delete lines matching a pattern. But I would like to save these deleted lines in another file2 . sed '/zz/!d' file1 > file2sed -i '/zz/d' file1 Is there a way to combine these commands into one ? Or is there a more elegant way to do it? | Checked at GNU Sed: sed -ni '/zz/!{p;b};w file2' file1 The flags must go in that order -ni . Explanation:We do not stop the script with the d command, but set the -n flag (silent) and write lines that do not match the template using the p command (Print the current pattern space) and jump with b to the end of the script. Lines matching the pattern reach the w command, which writes the pattern space to the file. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/668280",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/155832/"
]
} |
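The same single pass can also be sketched in awk, assuming it is acceptable to rewrite file1 via a temporary file: lines matching the pattern are appended to file2 and the remaining lines become the new file1.
# sketch: matching lines go to file2, the rest are kept for file1
awk '/zz/ { print > "file2"; next } { print }' file1 > file1.tmp && mv file1.tmp file1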
668,296 | I would like a solution that works on Linux and is portable across shells (not just bash) and filesystems (including drvfs or btrfs) NB: directory names may contain spaces With find I can produce a list of all paths rooted within a folder like this: find -type d../a dir./a dir/20210101./a dir/20210101/bin./a dir/20210101/etc./a dir/20210101/var./a dir/20210101/var/log./a dir/20211201./b dir./b dir/20210212./b dir/20210212/bin./b dir/20210212/etc./c dir./d dir./d dir/20210711 I would however like to exclude "base" or "parent" paths that are already included in the the deepest unique path. Please also help with the correct terms to use to describe this as I feel I am not using the optimal description. I can do it with a basic script but assume there is a more elegant way using one of the following: find ls Here is my script: save_ifs=$IFS;IFS=$'\n';prev_path="";for path in $(find -depth -type d); do if [ ! ${#path} -lt ${#prev_path} ]; then echo $path; fi prev_path=$path;done and its output - which is the desired output ./a dir/20210101/bin./a dir/20210101/etc./a dir/20210101/var/log./a dir/20211201./b dir/20210212/bin./b dir/20210212/etc./c dir./d dir/20210711 | Checked at GNU Sed: sed -ni '/zz/!{p;b};w file2' file1 The flags must go in that order -ni . Explanation:We do not stop the script with the d command, but set the -n flag (silent) and write lines that do not match the template using the p command (Print the current pattern space) and jump with b to the end of the script. Lines matching the pattern reach the w command, which writes the pattern space to the file. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/668296",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/269611/"
]
} |
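The response recorded for the entry above repeats the previous answer and does not address the question. As a sketch of one refinement of the script in the question (not from the original thread): with find -depth a directory is printed immediately after its last descendant, so comparing each line against the previous one with a prefix test keeps only the deepest unique paths. It handles spaces in names, though not names containing newlines.
# sketch: print only directories that are not the parent of the previously printed path
find . -depth -type d | awk 'index(prev, $0 "/") != 1 { print } { prev = $0 }'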
668,320 | fstrim requires the Linux block device to be mounted, and it is not very verbose. blkdiscard could tell, but also that would require a write operation. Can I somehow tell if a block device supports trimming/discarding, without actually trying to trim/discard something on it? | You can check the device’s maximum discard sizes, e.g. $ cat /sys/block/X/queue/discard_max_bytes (replacing X as appropriate). If this shows a value greater than 0, the device supports discards. Strictly speaking, discard_max_hw_bytes indicates what the hardware supports; discard_max_bytes indicates what the software supports, and the latter is usually what‘s relevant: A discard_max_bytes value of 0 means that the device does not support discard functionality. (This is in the discard_max_hw_bytes section, but it’s effectively true for both. The references will be fixed in 5.15 .) This works on many different block devices, not just disks: loop devices, device mapper devices, etc. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/668320",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/52236/"
]
} |
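To survey every block device at once, a small loop over sysfs (a sketch along the lines of the answer) prints the value for each device; anything non-zero supports discard.
# sketch: show discard_max_bytes for all block devices
for f in /sys/block/*/queue/discard_max_bytes; do
  dev=${f#/sys/block/}; dev=${dev%%/*}
  printf '%s\t%s\n' "$dev" "$(cat "$f")"
done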
668,558 | sh -c 'printf "%d " 024'bash -c 'printf "%d " 024'zsh -c 'printf "%d" 024' The above outputs 20 20 24 . Why is zsh not respecting octal notation? Is there a way to change this? | zsh is not a POSIX shell, it was written before the POSIX specification of sh was released, took features and syntax from ksh, csh, tcsh, rc and added many of its own. Its syntax is for a large part Korn-like, but that doesn't mean it's fully compatible with the Korn shell. In any case, it's not and never was intended to be a POSIX compliant interpreter for the sh language (at least not in its default mode). It however has a number of emulation modes (csh, ksh and sh (formerly trying to follow the SysV sh, now following POSIX sh)) that can be used to improve compatibility with other shells and interpret code written for them. That means it can have its own syntax separate from that of sh or any other shell (like csh, rc, fish, perl, python...) and still be able to interpret code written for sh. If invoked as sh , csh , or ksh (or anything starting with s (or b for Bourne), c or k ), or in newer versions with --emulate sh/ksh/csh , it will enter the corresponding emulation mode, but that's not how you would normally use those. To interpret a sh script, you'd run sh , not zsh . You'd rather run emulate sh within zsh to switch the emulation mode to sh when wanting to interpret sh code within zsh, and you can also run emulate sh -c 'some code' to have some code interpreted in sh emulation. So, within zsh : emulate sh -c 'printf "%d\n" 024' would run printf in a more POSIX way. As for printf , a printf utility first appeared in Research Unix Ninth Edition (from AT&T Research / Bell Labs, not in wide use) circa 1986, to get the same text formatting API as available in C with the printf libc function. While the C one takes a format first argument as a pointer to a NUL-delimited array of bytes and extra arguments of different types (pointer, integer, double...), an executable can only take arguments as strings. So to format a number for instance for %d , it needs to be given a text representation of a number and convert it back to a binary integer and printf converts it back to text for output. In the original implementation in V9 , printf recognised the same numbers as in the C language (dec -123 , +123 , octal 0123 , hex 0x123 , float 1.2e-2 , even 'x' or "foo" (where it's the value of the first character that is used)). SVR4 (descended from V7) also had a very dumb printf utility that just did: printf(fmt, argv[2], argv[3], argv[4], argv[5], argv[6], argv[7], argv[8], argv[9], argv[10], argv[11], argv[12], argv[13], argv[14], argv[15], argv[16], argv[17], argv[18], argv[19], argv[20]); So it was only useful for %s -type formatting as printf() was only given pointers to the string arguments. POSIX.2 (written in the late 80s behind (mostly-)closed doors and released in 1992 (not freely available then)), specified a printf command. As the echo utility was very unportable and unreliable, an alternative was much needed. For some reason, they didn't go for ksh 's print builtin but specified a printf utility mostly based on the V9 implementation and with a reference to the C language when it comes to convert strings to numbers. ksh88, the shell on which the POSIX sh specification is based on never had a printf builtin (nor did its pdksh clone). It has its own print builtin as a reliable alternative to echo . A -f option for print was added in ksh93 though along with a printf alias for print -f . 
In ksh, most builtins or constructs that take numbers as input will accept any arithmetic expression, not just numeric constants. So in ksh93 printf '%d\n' '1 + 1' Would print 2 . And in early versions, printf '%d\n' 010 would print 10, not 8. In ksh shell arithmetic expressions, initially, numbers with leading 0s were not considered as octal because in a shell, it's much more common to deal with 0-padded decimal numbers (like in date/time, file names) than it is to deal with octal numbers. However the POSIX specification did sort of require those to be treated as octal which most shells ignored (as the original implementation of the shell on which the standard was based didn't). However after that PASC Interpretation Request which made it clearer, most shells started to switch their behaviour (including ksh93, and zsh , though that was reverted ), causing much pain. If you look at the release history of ksh93 , you'll notice that the handling of 0-prefixed numbers as octal has been on and off. Even today, you'll find that 010 in arithmetic expressions is 8 inside ((...)) or $((...)) , but 10 in most other places, including let '...' , array[...] , [[ ... -eq ... ]] and... printf . set -o posix makes it change to 8 in most places, one notable exception being... printf again . Since zsh never claimed to be a POSIX shell, it just added a new option ( octal_zeroes ) in 2000, enabled in sh emulation, but not otherwise. A printf builtin was added later in 2001, initially using strtod() to convert text to numbers so treating numbers with leading 0 as octal, but later changed to accept any arithmetic expression like in ksh93 so subject to the octal_zeroes options, and allowing more number formats such as 0b10010 , ksh-style arbitrary based numbers ( 12#A001 , 2#10010 ...), 1_000_000 ... So, in zsh , to pass a number octal to printf , the options are: emulate sh and use leading 0: emulate sh -c 'printf %d 024' set the octalzeroes option: set -o octalzeroes; printf %d 024 same in a local scope only: (){ set -o localoptions -o octalzeroes; printf %d 024; } (here in an anonymous function). use the 8#oooo notation which is always recognised: printf %d '8#24' disable the printf builtin, and assume the system's printf utility is POSIX compliant in that regard: disable printf; printf %d 024 . Other ways to invoke that standalone printf : command printf %d 024 (not in sh emulation), =printf %d 024 (not in sh emulation). | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/668558",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/160697/"
]
} |
668,793 | Take the following file : $ cat f1stu vwx yzauvw xyz abcabc def ghidef ghi jklghi jkl mnojkl mno pqrmno pqr stupqr stu vwxstu vwx yza To print all lines from the first one containing abc to the first one containing mno with GNU sed : $ sed -n '/abc/,/mno/p' f1uvw xyz abcabc def ghidef ghi jklghi jkl mno How could I print all lines until the last one containing mno , e.g. how could I get the following result : uvw xyz abcabc def ghidef ghi jklghi jkl mnojkl mno pqrmno pqr stu In other words, is there a way to make GNU sed 's range selection greedy ? Update In my setting : If mno is missing, it should print out everything until the end of the file. mno cannot occur before the first abc . There's always at least one abc , and abc and mno are never on the same line EDIT I just added a dummy stu vwx yza line at the start, so that the file doesn't start with a line including abc (to avoid solutions that start from the first line - they should start from the first line having abc in it) | sed '/abc/,$!d;0,/mno/b;:1;/mno/b;$d;N;b1' file Work algorithm: Two address ranges are used. The first /abc/,$!d; removes everything up to the first pattern match. The second 0,/mno/b; up to a match with the pattern /mno/ , sends each line buffer(pattern space) to the output bypassing the remaining script, thereby preventing the deletion if the pattern is not found in the file. The rest of the script :1;/mno/b;$d;N;b1 works in a loop. In the editor buffer, lines are appended until a pattern match occurs. If a /mno/ pattern is encountered, the entire buffer is sent to the output, bypassing the rest of the script. If no match occurs, the buffer is deleted at the last line. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/668793",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/152418/"
]
} |
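For comparison, the same greedy behavior can be sketched in awk by buffering: lines after the first abc are held back and only flushed each time mno appears, so nothing after the last mno is printed unless mno never occurs at all. The buffer lives in memory, which is the one caveat for very large files.
# sketch: print from the first abc line through the last mno line (or to EOF if mno is absent)
awk '/abc/ { found = 1 }
     found { buf = buf $0 ORS }
     /mno/ { out = out buf; buf = ""; seen = 1 }
     END   { printf "%s", (seen ? out : buf) }' f1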
668,844 | I'm going through the Linux From Scratch 11.0 book . In III. Building the LFS Cross Toolchain and Temporary Tools, ii. Toolchain Technical Notes , there is a bit about Canadian Cross cross-compilation. I do not understand why there need to be 3 stages and 3 machines to get to the end result. The text assumes that we start with computer A and a compiler that runs on A and produces binaries for A. So why don't we just use that compiler to build a compiler that runs on C and builds binaries for C?Why there is so much hassle instead, with building a compiler that runs on A, but builds for B, then a compiler that runs on B, but builds for C, and finally the compiler that runs on C and builds for C? I also found an article on Wikipedia about it - https://en.wikipedia.org/wiki/Cross_compiler . | Scenario described is this Machine A is slow and has a compiler Machine B is fast, but has no compiler Machine C is the target but it is slow, and has no compiler You could build all the binaries for C on A, but it would take a long time because machine A is slow. The author argues that it is worth taking a small amount of time to cross-compile on A a compiler for B. Then the fast machine B could be used to cross-compile all the necessary binaries for slow machine C, resulting in an overall time saving against compiling on A or C. The final step, where a compiler is built on C for C, is simply to remove the dependency on machine B. Although it's slow, machine C can now compile the occasional program itself, directly. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/668844",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/345969/"
]
} |
668,902 | I need to create a script that will execute daily (cron job), calculate the uptime of the system, and if that number is greater than 72 hours, reboot the system. I am getting lost in converting the value of hours to something I can compare to 72. It returns me 6499.04: command not found #!/bin/bashif $(awk '{print $1}' /proc/uptime) / 3600 > 72 ; then echo "yes" fi Thank you in advance for any help! | I would do the whole test in AWK: awk '$1 > (72 * 3600) { print "yes" }' /proc/uptime If you want to use that as a test, use the exit code: if awk '{ exit ($1 < (72 * 3600)) }' /proc/uptime; then echo Need to rebootfi AWK evaluates ($1 < (72 * 3600)) as 0 if the comparison fails, 1 otherwise; 0 indicates success as an exit code, so we invert the condition. If you’re using systemd, another approach would be to use a systemd timer, with OnBootSec=72h (see FelixJN’s answer for details). | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/668902",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/492220/"
]
} |
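Put together as the daily cron job described in the question, the whole script is only a few lines; a sketch (the 72-hour threshold comes from the question, and the exact reboot command and its path may differ per system):
#!/bin/sh
# reboot if the machine has been up for more than 72 hours
if awk '{ exit ($1 < (72 * 3600)) }' /proc/uptime; then
    /sbin/shutdown -r now
fi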
669,004 | I'm running Debian, namely: # uname -ALinux martlins2 5.10.0-8-amd64 #1 SMP Debian 5.10.46-4 (2021-08-03) x86_64 GNU/Linux and for some time I see some errors telling that some parts of some packages uses unknown compression while doing apt update . In particular, the cause of the issue lays in the middle of the dpkg : # apt update(...)# apt upgrade(...)dpkg-deb: error: archive '/var/cache/apt/archives/libdrm-amdgpu1_2.4.107+git2109030500.d201a4~oibaf~i_amd64.deb' uses unknown compression for member 'control.tar.zst', giving upTraceback (most recent call last): File "/usr/share/apt-listchanges/DebianFiles.py", line 124, in readdeb output = subprocess.check_output(command) File "/usr/lib/python3.9/subprocess.py", line 424, in check_output return run(*popenargs, stdout=PIPE, timeout=timeout, check=True, File "/usr/lib/python3.9/subprocess.py", line 528, in run raise CalledProcessError(retcode, process.args,subprocess.CalledProcessError: Command '['dpkg-deb', '-f', '/var/cache/apt/archives/libdrm-amdgpu1_2.4.107+git2109030500.d201a4~oibaf~i_amd64.deb', 'Package', 'Source', 'Version', 'Architecture', 'Status']' returned non-zero exit status 2.The above exception was the direct cause of the following exception:Traceback (most recent call last): File "/usr/bin/apt-listchanges", line 323, in <module> main(config) File "/usr/bin/apt-listchanges", line 104, in main pkg = DebianFiles.Package(deb) File "/usr/share/apt-listchanges/DebianFiles.py", line 358, in __init__ parser.readdeb(self.path) File "/usr/share/apt-listchanges/DebianFiles.py", line 127, in readdeb raise RuntimeError(_("Error processing '%(what)s': %(errmsg)s") %RuntimeError: Error processing '/var/cache/apt/archives/libdrm-amdgpu1_2.4.107+git2109030500.d201a4~oibaf~i_amd64.deb': Command '['dpkg-deb', '-f', '/var/cache/apt/archives/libdrm-amdgpu1_2.4.107+git2109030500.d201a4~oibaf~i_amd64.deb', 'Package', 'Source', 'Version', 'Architecture', 'Status']' returned non-zero exit status 2.dpkg-deb: error: archive '/tmp/apt-dpkg-install-XiLPN8/01-libdrm-amdgpu1_2.4.107+git2109030500.d201a4~oibaf~i_amd64.deb' uses unknown compression for member 'control.tar.zst', giving updpkg: error processing archive /tmp/apt-dpkg-install-XiLPN8/01-libdrm-amdgpu1_2.4.107+git2109030500.d201a4~oibaf~i_amd64.deb (--unpack): dpkg-deb --control subprocess returned error exit status 2(...)Errors were encountered while processing: /tmp/apt-dpkg-install-XiLPN8/01-libdrm-amdgpu1_2.4.107+git2109030500.d201a4~oibaf~i_amd64.deb(...)E: Sub-process /usr/bin/dpkg returned an error code (1) To proove it, I've run the dpkg command (simplified) directly: # dpkg -f /var/cache/apt/archives/libdrm-amdgpu1_2.4.107+git2109030500.d201a4~oibaf~i_amd64.deb 'Package'dpkg-deb: error: archive '/var/cache/apt/archives/libdrm-amdgpu1_2.4.107+git2109030500.d201a4~oibaf~i_amd64.deb' uses unknown compression for member 'control.tar.zst', giving up The file really does use such compression: # file /var/cache/apt/archives/libdrm-amdgpu1_2.4.107+git2109030500.d201a4~oibaf~i_amd64.deb/var/cache/apt/archives/libdrm-amdgpu1_2.4.107+git2109030500.d201a4~oibaf~i_amd64.deb: Debian binary package (format 2.0), with control.tar.zs, data compression zst I do have installed the zstd package: # apt search zstd(...)libzstd1/stable,stable,now 1.4.8+dfsg-2.1 amd64 [installed,automatic] fast lossless compression algorithm(...)zstd/stable,stable,now 1.4.8+dfsg-2.1 amd64 [installed] fast lossless compression algorithm -- CLI tool Furthermore, I found following dpkg bugreport: 
https://bugs.launchpad.net/ubuntu/+source/dpkg/+bug/1764220 saying that zstd support has been added in version 1.18.4ubuntu1.7. My version of dpkg is 1.20.9 : # dpkg --version Debian 'dpkg' package management program version 1.20.9 (amd64).(...) so that may not be an issue. I've also removed the whole contents of /var/cache/apt/archives/* and re-ran update && upgrade . It didn't help. Do you have any tips on what to do about this? Are there any further packages missing? Does the Debian version not have this feature? Is it a configuration issue? Is there any workaround? | Debian’s dpkg package didn’t support zstd compression prior to version 1.21.18 . Support was added just in time for Debian 12 . I’m guessing you’ve added a Ubuntu PPA; you shouldn’t use those with Debian. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/669004",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/443908/"
]
} |
669,062 | I have a variable, var , that contains: XXXX YY ZZZZZ\naaa,bbb,ccc All I want is aaa in the second line.I tried: out=$(echo "$var" | awk 'NR==2{sub(",.*","")}' ) but I get no output. I tried using , as the FS but I can't get the syntax right. I really want to learn awk/regex syntax. I want to use out as a variable "$out" somewhere else -- not to print. | You don't want regexes there. The entire point of awk is to automatically split a line into fields, so just set the field separator to , and print the first field of the second line: $ printf '%s' "$var" | awk -F, 'NR==2{print $1}'aaa Or, if your shell supports <<< : $ awk -F, 'NR==2{print $1}' <<<"$var"aaa If you really want to do it manually and not use awk as intended, you can do: $ awk 'NR==2{sub(/,.*/,""); print}' <<<"$var"aaa You were getting no output because you didn't tell awk to print anything. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/669062",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/475899/"
]
} |
669,089 | I need to ask you about using the grep command in a Bash script on Debian. I have got, for example, a file with these lines: /fruit-/apple.txt /fruit-/banana.txt /fruit-/samples /vegetables-/carrot.txt /vegetables-/garlic.txt I want to select all lines that contain the word fruit- . I can call the command: grep -w "fruit-" file.txt The output will be: /fruit-/apple.txt /fruit-/banana.txt /fruit-/samples But when I use the command: grep -w "fruit" file.txt I also get the same output as above. But that's wrong. The output should be empty, because I don't type the - in the pattern. Why doesn't grep treat the - correctly? | The -w option indeed tells grep to only look for lines that match fruit as a "word", meaning that it must either start at the beginning of the line or be preceded by a "non-word" character, and either end at the end of the line or be followed by a "non-word" character. However, a "word" character as per the man-page of grep is: Word-constituent characters are letters, digits, and the underscore. That means that the - is a "non-word" character, and fruit- will match the "word-search" for fruit as the matching algorithm will stop upon reaching the - . Now, it seems you want to select only those lines where the content between the first two / is exactly fruit , as opposed to containing the pattern fruit . In these cases, you have to make the match more explicit: With grep , you can say: grep "^/fruit/" file.txt This will anchor the pattern to the beginning of the line and only accept those lines where there is no - after the fruit . Alternatively, use awk with the / set as field-separator: awk -F/ '!$1&&$2=="fruit"' file.txt This will only accept lines which have an empty first field (i.e. start right with a / ) and whose second field is exactly fruit . | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/669089",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/423113/"
]
} |
669,100 | I was able to download the qt3 yum package wget http://mirror.centos.org/centos/7/os/x86_64/Packages/qt3-devel-3.3.8b-51.el7.x86_64.rpm but there were some dependencies missing when installing Error: Problem: conflicting requests - nothing provides qt3 = 3.3.8b-51.el7 needed by qt3-devel-3.3.8b-51.el7.x86_64 - nothing provides libmng.so.1()(64bit) needed by qt3-devel-3.3.8b-51.el7.x86_64 - nothing provides libqt-mt.so.3()(64bit) needed by qt3-devel-3.3.8b-51.el7.x86_64 - nothing provides libqui.so.1()(64bit) needed by qt3-devel-3.3.8b-51.el7.x86_64(try to add '--skip-broken' to skip uninstallable packages or '--nobest' to use not only best candidate packages) I need qt3 to compile an old software but I was not able to install it neither using the remote yum repository: sudo yum install qt3 qt3-devel qt3-qtbase-devel As: No match for argument: qt3No match for argument: qt3-develNo match for argument: qt3-qtbase-develError: Unable to find a match: qt3 qt3-devel qt3-qtbase-devel Trying to install qt sends back to qt5 instead | The -w option indeed tells grep to only look for lines that match fruit as a "word", meaning that it must either start at the beginning of the line or be preceded by a "non-word" character, and either end at the end of the line or be followed by a "non-word" character. However, a "word" character as per the man-page of grep is: Word-constituent characters are letters, digits, and the underscore. That means that the - is a "non-word" character, and fruit- will match the "word-search" for fruit as the matching algorithm will stop upon reaching the - . Now, it seems you want to select only those lines where the content between the first two / is exactly fruit , as opposed to containing the pattern fruit . In these cases, you have to make the match more explicit: With grep , you can say: grep "^/fruit/" file.txt This will anchor the pattern to the beginning of the line and only accept those lines where there is no - after the fruit . Alternatively, use awk with the / set as field-separator: awk -F/ '!$1&&$2=="fruit"' file.txt This will only accept lines which have an empty first field (i.e. start right with a / ) and whose second field is exactly fruit . | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/669100",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/324258/"
]
} |
669,111 | I've tried to create a awk script or script in other method. I want a given line from the log file that contains date and time variables (but contains a given word) to be underlined with a specific color. I created something like this in awk, but it only underlines a certain phrase, without a date and time, would it be possible to underline the date and time additionally or the entire line containing that words? awk $'{ gsub(" DEBUG StateMachine\|entr \'NTP:nextGetTimeTimeoutState'", "\033[1;41m&\033[0m");print }' LOG.log this line from LOG.log looks something like this: 2021-08-17 10:16:35,445 DEBUG StateMachine|exit 'NTP:nextGetTimeTimeoutState'2021-08-17 10:16:35,445 DEBUG StateMachine|entr 'NTP:nextIteratorState'2021-08-17 10:16:35,445 INFO StateMachine|task 'NTP:nextIteratorState'2021-08-17 10:16:35,449 DEBUG StateMachine|exit 'NTP:nextIteratorState'2021-08-17 10:16:35,449 DEBUG StateMachine|entr 'NTP:nextGetTimeTimeoutState'2021-08-17 10:16:35,449 INFO StateMachine|wait 60000 NTP:nextGetTimeTimeoutState | Any time you find yourself using $'{...}' around an awk script you are doing something wrong and should ask for help. Never do that as it's never required in a well-written script and causes your script to become fragile as it's inviting the shell to interpret some parts of it before awk even sees it. Some people, when confronted with a problem, think "I know, I'll use regular expressions." Now they have two problems. :-) You're escaping regexp metachars to make your regexp act like it's a string. Don't do that - just use string instead of regexp operators when you want to match a string: awk 'index($0,"DEBUG StateMachine|entr \047NTP:nextGetTimeTimeoutState\047") { $0 = "\033[1;41m" $0 "\033[0m"}1' LOG.log The \047 s instead of ' s are because you can't escape a ' in a ' -delimited string (including scripts) in shell. See http://awk.freeshell.org/PrintASingleQuote . 
To highlight 2 different lines with the same color you could use: awk ' index($0,"DEBUG StateMachine|entr \047NTP:nextGetTimeTimeoutState\047") || index($0,"DEBUG StateMachine|exit \047NTP:nextGetTimeTimeoutState\047") { $0 = "\033[1;41m" $0 "\033[0m" }1' LOG.log and to highlight 2 lines with 2 different colors: awk ' index($0,"DEBUG StateMachine|entr \047NTP:nextGetTimeTimeoutState\047") { $0 = "\033[1;42m" $0 "\033[0m" } index($0,"DEBUG StateMachine|exit \047NTP:nextGetTimeTimeoutState\047") { $0 = "\033[1;41m" $0 "\033[0m" }1' LOG.log Having said that, since you apparently are trying to use different colors based on different parts of the input, now it would be appropriate to use a regexp with capture groups to isolate the relevant parts of the input and then just look at those parts to determine the color to use for each line and here's how I'd implement that using GNU awk for the 3rd arg to match() for capture groups: $ cat tst.awkBEGIN { red = "\033[1;41m" green = "\033[1;42m" yellow = "\033[1;43m" blue = "\033[1;44m" purple = "\033[1;45m" reset = "\033[0m" map["nextGetTimeTimeoutState","entr"] = green map["nextGetTimeTimeoutState","exit"] = red map["nextIteratorState","entr"] = yellow map["nextIteratorState","task"] = blue map["nextIteratorState","exit"] = purple}match($0,/(DEBUG|INFO) StateMachine\|(\S+)\s+\047NTP:([^\047]+)\047/,a) { key = a[3] SUBSEP a[2] if ( key in map ) { $0 = map[key] $0 reset }}{ print } or using any POSIX awk: $ cat tst.awkBEGIN { red = "\033[1;41m" green = "\033[1;42m" yellow = "\033[1;43m" blue = "\033[1;44m" purple = "\033[1;45m" reset = "\033[0m" map["nextGetTimeTimeoutState","entr"] = green map["nextGetTimeTimeoutState","exit"] = red map["nextIteratorState","entr"] = yellow map["nextIteratorState","task"] = blue map["nextIteratorState","exit"] = purple}match($0,/(DEBUG|INFO) StateMachine\|[^[:space:]]+[[:space:]]+\047NTP:[^\047]+\047/) { split($0,a,/[|[:space:]:\047]+/) key = a[9] SUBSEP a[7] if ( key in map ) { $0 = map[key] $0 reset }}{ print } Whichever one you use the output will be: You don't need the intermediate variables red , green , etc. as you could just do: map["nextGetTimeTimeoutState"]["entr"] = "\033[1;42m" map["nextGetTimeTimeoutState"]["exit"] = "\033[1;41m" but I find it helps clarity and ease of future maintenance/updates to have them. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/669111",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/492420/"
]
} |
669,354 | When I do my_script < filename.txt > filename.txt , the file is overwritten and truncated. Is there some way on the Unix command line to specify that redirection is not done concurrently, i.e. the output does not begin until the input has completed? I am trying to write a utility that reads a file, and based on the command line options, regenerates and overwrites it. I realize I could add support in the program for not using stdin/stdout, but I like the flexibility and convenience of redirection. | You can do this: (rm -f foo && yourprogram > foo) < foo E.g.: (rm -f foo && wc > foo) < foo It opens foo for reading. Then it starts a subshell, and removes the i-node of foo while keeping the file open. Finally it opens foo for output, thus creating foo . It requires write permission to the dir, so if you only have write permission to the file, you are out of luck. It will change the i-node (so permission, owner, ctime is lost), but if only the name is important, it should be OK. Contrary to sponge this works even if the output of yourprogram is bigger than memory. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/669354",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/165926/"
]
} |
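For completeness, the sponge utility from moreutils mentioned at the end of the answer is used like this; it soaks up all of its input before writing, which is why the answer contrasts it with the rm trick for outputs larger than memory.
yourprogram < foo | sponge foo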
669,449 | I am executing the below command for 1000 files: ebook-convert <name-of-first-file>.epub <name-of-first-file>.mobiebook-convert <name-of-second-file>.epub <name-of-second-file>.mobi Apparently, instead of manually doing this for 1000 files, one could write a bash script for the job. I was wondering if there is an easier way to do something like this in Linux though, a small command that would look something like ebook-convert *.epub *.mobi Can you use wildcards in a similar way, that works for a scenario like the above? | You can’t do it directly with wildcards, but a for loop can get you there: for epub in ./*.epub; do ebook-convert "${epub}" "${epub%.epub}.mobi"; done Zsh supports a more elegant form of this loop . Instead of a shell script, if your file names don’t contain whitespace characters, and more generally can be safely handled by Make and the shell, you can use GNU Make; put this in a Makefile : all: $(patsubst %.epub,%.mobi,$(wildcard *.epub))%.mobi : %.epub ebook-convert ./$< ./$@ and then run make , which will ensure that all .epub files are converted to a .mobi file. You can run this repeatedly to update files as necessary — it will only build files which are missing or older than their source file. (Make sure that the ebook-convert line starts with a tab, not spaces.) | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/669449",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/433400/"
]
} |
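If GNU parallel happens to be installed, the same conversion can be written even more compactly and runs several conversions at once; a sketch, where {.} stands for the input file name with its extension removed:
parallel ebook-convert {} {.}.mobi ::: ./*.epub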
669,463 | I am trying to build using ./configure . I have Three include directories -I/path1/include-I/path2/include-I/path3/include Two link directories -L/path1/lib-L/path2/lib Two -l flag options -ltensorflow-lasan Two compile flags -O3-g How can I put all these flags effectively as options in ./configure ? | The canonical way to do this is to provide values for various variables in the ./configure invocation: ./configure CPPFLAGS="-I/path1/include -I/path2/include -I/path3/include" \ CFLAGS="-O3 -g" \ LDFLAGS="-L/path1/lib -L/path2/lib" \ LIBS="-ltensorflow -lasan" If the C++ compiler is used, specify CXXFLAGS instead of (or in addition to) CFLAGS . These variables can also be set in the environment, but recommended practice is to specify them as command-line arguments so that their values will be stored for re-use. See Forcing overrides when configuring a compile (e.g. CXXFLAGS, etc.) for details. Note that in most cases it would be unusual to specify that many paths as flags; instead, I would expect to find --with options to tell the configure script where to find various dependencies. For example, --with-tensorflow=/path/to/tensorflow which would then result in the appropriate -I and -L flags being set. Run ./configure --help to see what options are available. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/669463",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/492749/"
]
} |
669,669 | I need to run a script every 64 hours. I couldn't find the answer with cron . Is it possible with it, or should I use a loop in a shell script? | I suggest perhaps using a crontab "front end" like crontab.guru for figuring out crontab if you're a beginner. However, as in your case, the hour setting only allows for values of 0 to 23, so you can't use crontab here. Instead, I'd suggest using at . In your case, I'd probably use something like: at now + 64 hours and then enter your command or echo "<your command>" | at now + 64 hours at the beginning of your script, etc.Basically, you'll be scheduling running the command right when the command has been invoked the last time. Also, if you don't want a time delta, rather the exact time, I suggest doing a bit of time arithmetic, and then use an exact time with at to have the command run. I highly suggest reading the man page of at , as it is fairly comprehensive. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/669669",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/239796/"
]
} |
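A minimal sketch of the self-rescheduling script the answer describes: each run queues the next one 64 hours ahead with at before doing its work. The path to the script is a placeholder.
#!/bin/sh
# queue the next run 64 hours from now, then do the actual work
echo "/usr/local/bin/myjob.sh" | at now + 64 hours
# ... the real job goes here ...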
669,812 | In discussions about swap files ("should I create one?") I often see an obscure mention along the lines "in certain situations swap file can do more harm than good". That comes up often in conjunction with "if you have X GB of RAM or more you don't need a swap", usually backed by credible arguments. Then there are obviously countering arguments such as expressed here . I can imagine a background-running application or service foo whose correct functionality depends on instant memory access. Dropping the memory space foo uses to swap would slow down the access to it slowing foo itself down causing warnings, errors or even a total failure. However that's how far I get, the actual facts remain elusive. For example my Linux box has 64GB of RAM which allows me to run multiple VMs simultaneously. At this point many Linux users decide to not swap at all, but I've reserved 16GB just in case - drive space is cheap but the classic swap = 2x RAM seems exaggerated. Can someone give some real life examples from the UNIX/Linux world when having a swap file in a system with large amounts of RAM actually can cause / did cause unwanted consequences, and what those consequences could be / would have been? Does having a separate swap partition instead of a swap file change the situation? Prompted by user10489's answer: *NIX-like OS:s don't have a separate hiberfile similar to what Windows uses? To clarify: RAM/swap ratio or whether or not swap should be allocated in the first place is out of scope of the question. | Swap can be bad in that it may make some failure cases last longer . Consider a situation where some process starts using excessive amounts of memory, due to a bug or a misconfiguration or other such reason. If there's no swap, it'll eventually run the system out of memory, causing the OS to resolve the issue by eventually killing the process. (But possibly causing other trouble anyway.) But if there is loads of swap space, the process will start consuming swap space, possibly thrashing pages between main memory and swap, and that slows eve-ry-thing down. The system will eventually run out of memory, but you suffer longer before that. I'm mostly thinking slow swap devices, i.e. disks of spinning rust, since that's where I've encountered this... Probably less of an issue with modern high-speed SSDs. (Of course, a proper solution to that would be per-process limits on memory use, but lacking those, as your usual random desktop system probably does, the fact of the swap space existing or not can play an influence.) I'm not commenting on if having swap is good or bad in general , just the one possible situation. I'm also not commenting on ratios and such; they're usually generalizations, and may possibly be based on requirements that aren't valid any more. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/669812",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/90878/"
]
} |
669,947 | A bash script is using a variable Q for some purpose (outside the scope of this question): Q=0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ As this script is used in an environment where each byte counts, this is waste. But some workaround like Q=0$(seq -s "" 9)$(echo {A..Z}|tr -d " ") (for C locale) is even worse. Am I too blind to see the obvious trick to compactly generate such a simple sequence? | For any shell capable of brace expansion: Using printf : $ printf %s {0..9} {A..Z}0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ --> Q=$(printf %s {0..9} {A..Z}) Backticks instead of $() saves one byte. For Bash specifically, printf -v var to printf into a variable is nice but no shorter than backticks. printf -vQ %s {0..9} {A..Z}Q=`printf %s {0..9} {A..Z}` | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/669947",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/216004/"
]
} |
669,956 | What are some possible causes, that a command could not be found in Linux? Other than it is not in the PATH ? Some background info: When trying to execute pdflatex from vscode, I got some troubles, that vscode was not able to find pdflatex. Probably because the PATH is not set correctly. Since I was not able to fix the problem right away, I tried to work around this problem by executing a shell script, which then calls pdflatex: #!/bin/bash export PATH=/usr/bin pdflatex $@ or #!/bin/bash /usr/bin/pdflatex $@ In both cases, the script works as expected when executed over the normal terminal. But when executed in the vscode intern terminal it says pdflatex: command not found As far as I know, the only way that a command can not be found, is if it is not in a directory included by the PATH . Or when the absolute path is wrong. But this seems not to be the case here.So what other factors are used to determine, how a command is searched for? Additional Infos (as requestet) OS: POP OS 21.04 from vscode terminal: $ echo $PATH/app/bin:/usr/bin:/home/flo/.var/app/com.visualstudio.code from a native terminal: $ echo $PATH/opt/anaconda3/bin:/opt/anaconda3/condabin:/home/flo/.local/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/snap/bin Other Commands as ls , which are also in /usr/bin directory do work from the vscode internal terminal (as ls aswell /usr/bin/ls ). properties of pdflatex: $ ls -l /usr/bin/pdflatexlrwxrwxrwx 1 root root 6 Feb 17 2021 /usr/bin/pdflatex -> pdftex or $file /usr/bin/pdflatex/usr/bin/pdflatex: symbolic link to pdftex and pdftex (same behavior as pdflatex): $ ls -l /usr/bin/pdftex-rwxr-xr-x 1 root root 2115048 Mar 13 2021 /usr/bin/pdftex or $ file /usr/bin/pdftex/usr/bin/pdftex: ELF 64-bit LSB pie executable, x86-64, version 1 (SYSV), dynamically linked, interpreter /lib64/ld-linux-x86-64.so.2, BuildID[sha1]=88c89d7d883163b4544f9461668b73383e1ca04e, for GNU/Linux 3.2.0, stripped the following script gives also the same output: #!/bin/bash pdflatex $@ The original (copied, without any edits) script is as follow: #!/bin/bash#export PATH=/usr/bin#printenv PATHpdflatex $@#/usr/bin/pdflatex $@ To test the other scripts, I changed the comments and deleted the irrelevant lines in the post here. /app/bin does not exist. ( /app does not exist) I tried to change the PATH in vscode (inside the LaTeX Workshop extensions) since this is most likely the cause for my problem in the first place. However, I could neither fix the problem nor confirm in any way, that my configs (for the LaTeX Workshop extension) had any effect at all. when adding the following lines to the script ( makeTex.sh is my wrapper script): declare -p LD_LIBRARY_PATH declare -p LD_PRELOAD The outputs are as follows:native Terminal: ./makeTex.sh: line 4: declare: LD_LIBRARY_PATH: not found./makeTex.sh: line 5: declare: LD_PRELOAD: not found vscode Terminal: declare -x LD_LIBRARY_PATH="/app/lib"./makeTex.sh: line 5: declare: LD_PRELOAD: not found The problem occured by using vscode 1.57.1 (installed via flatpak). Other versions of vscode (at least vscodium 1.60.1) do not show the same behavior. | For any shell capable of brace expansion: Using printf : $ printf %s {0..9} {A..Z}0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ --> Q=$(printf %s {0..9} {A..Z}) Backticks instead of $() saves one byte. For Bash specifically, printf -v var to printf into a variable is nice but no shorter than backticks. printf -vQ %s {0..9} {A..Z}Q=`printf %s {0..9} {A..Z}` | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/669956",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/263650/"
]
} |
669,984 | I have the following file (note that the ======== are actually present in the file): start ======== id: 5713start ======== id: 5911start ======== id: 5911end ========= id: 5911start ======== id: 6111end ========= id: 5713start ======== id: 31117 I want to remove any two lines that have the same id and have respectively start and end in them. Based on the above example, the output will be: start ======== id: 5911start ======== id: 6111start ======== id: 31117 How to do this with bash , awk , sed ... ? | Using any awk in any shell on every Unix box this will print as many unpaired start and/or end statements as exist in your input: $ cat tst.awk$1 == "start" { beg[$NF] = $0; delta = 1 }$1 == "end" { end[$NF] = $0; delta = -1 }{ cnt[$NF] += delta }END { for ( key in cnt ) { for (i=1; i<=cnt[key]; i++) { print beg[key] } for (i=-1; i>=cnt[key]; i--) { print end[key] } }} $ awk -f tst.awk filestart ======== id: 5911start ======== id: 6111start ======== id: 31117 To better demonstrate using more comprehensive sample input: $ cat filestart ======== id: 5713start ======== id: 5911start ======== id: 5911start ======== id: 5911end ========= id: 5911start ======== id: 6111end ========= id: 5713end ========= id: 5713start ======== id: 31117 $ awk -f tst.awk fileend ========= id: 5713start ======== id: 5911start ======== id: 5911start ======== id: 6111start ======== id: 31117 | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/669984",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/38789/"
]
} |
670,184 | I would like to count the number of N characters in the second column of a file and then print this count to the third column.Example input file (tab-separated): sample1 TCTNGsample2 CCNGGGGGTNsample3 GGGNNNTC Desired output (tab-separated): sample1 TCTNG 1sample2 CCNGGGGGTN 2sample3 GGGNNNTC 3 I can get a messy version doing the following, but I would like a one-liner, preferably in awk . > awk -F '\t' '{print $2}' file.txt | awk -FN '{print NF-1}' > NCount.txt> paste -d '\t' file.txt NCount.txtsample1 TCTNG 1sample2 CCNGGGGGTN 2sample3 GGGNNNTC 3 | The gsub() function returns the number of made substitutions. You may use this fact to count the number of N characters in the 2nd field and to add this number as a new field on each line: $ awk -F '\t' '{ $3 = gsub("N","N",$2) }; 1' filesample1 TCTNG 1sample2 CCNGGGGGTN 2sample3 GGGNNNTC 3 The output is caused by the trailing 1 (it is equivalent to using { print } or { print $0 } ). Set the value of the special variable OFS to use another field delimiter than the default (space) in the output. Here I'm using whatever the input field delimiter is set to: $ awk -F '\t' 'BEGIN { OFS=FS } { $3 = gsub("N","N",$2) }; 1' filesample1 TCTNG 1sample2 CCNGGGGGTN 2sample3 GGGNNNTC 3 Simirlarly in Perl, but using the tr operator in place of gsub() : $ perl -MEnglish -a -F '\t' -e 'BEGIN { $OFS="\t"; $ORS="\n" } print @F, ($F[1] =~ tr/N/N/)' filesample1 TCTNG 1sample2 CCNGGGGGTN 2sample3 GGGNNNTC 3 | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/670184",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/493463/"
]
} |
670,200 | My script-in-progress displays a multi-column colorized table, but the color-codes are interfering with the formatting. The color codes cannot be moved to the format string because the coloring of some columns is variable from row to row. No solutions are given in related Q&A's (cited below). I've provided pared down examples and work-arounds below. My actual usage is a bash script displaying an 11 column table with different columns in different colors, generated while looping through a bunch of jpg files, analyzing various exif data and outputing results as a row in the table, with some colors also varying from row to row based on the exif analysis results. But as I said, I've provided pared down examples work-arounds below. Here is a simplified display-table snippet that demonstrates the issue: # RedBlk defined in .bashrc.local as RedBlk="^[[0;31;40m"# DefDef defined in .bashrc.local as DefDef="^[[0m"(GrnBlk=$(tput setaf 2) YelBlk="\e[1;33m" echo "123456789 123456789" for ii in {8..13}; do case $ii in 8|9) clr1=""; clr2="";; 11) clr1=$RedBlk; clr2="";; 12) clr1=$YelBlk; clr2=$GrnBlk;; *) clr1=$GrnBlk; clr2=$RedBlk;; esac printf "%6d %6b$DefDef %6b\n" $((ii*2)) $clr1$ii $clr2$((ii*4)) done) .... with output: 123456789 123456789 16 8 32 18 9 36 20 10 40 22 11 44 24 12 48 26 13 52 Note the same problem occurs using "^[[0;31;40m" or $(tput setaf 2) or "\e[1;33m" Here is an oversimplified example: (echo "123456789 123456789 123456789" printf "%20s\n" "Hello""Again" printf "%20s\n" $RedBlk"Hello"$DefDef"Again") .... with output: 123456789 123456789 123456789 HelloAgainHelloAgain Two work-arounds are 1) alter the field width (e.g. from 20 to 20+${#RedBlk}+${#DefDef} = 20+10+4 in the over simplified example), and 2) split up the string and hack the format: (echo "123456789 123456789 123456789" printf "%20s\n" "Hello""Again" printf "%34s\n" $RedBlk"Hello"$DefDef"Again" printf "%s%15s%s%s\n" $RedBlk "Hello" $DefDef "Again") .... with output: 123456789 123456789 123456789 HelloAgain HelloAgain HelloAgain But both work-arounds are ultra clumsy given actual usage. These Q&A's, though related, do not provide a solution: https://stackoverflow.com/questions/67638971/how-to-get-color-and-width-formatting-with-printf https://stackoverflow.com/questions/58519511/bash-printf-formated-output-with-colors https://stackoverflow.com/questions/5412761/using-colors-with-printf How to use printf and %s when there are color codes? What are some solutions that are more simple? This is what I am using: bash --versionGNU bash, version 4.4.12(1)-release (x86_64-pc-linux-gnu)# which printf/usr/bin/printf# /usr/bin/printf --versionprintf (GNU coreutils) 8.26# /usr/bin/xterm -versionXTerm(327) | The gsub() function returns the number of made substitutions. You may use this fact to count the number of N characters in the 2nd field and to add this number as a new field on each line: $ awk -F '\t' '{ $3 = gsub("N","N",$2) }; 1' filesample1 TCTNG 1sample2 CCNGGGGGTN 2sample3 GGGNNNTC 3 The output is caused by the trailing 1 (it is equivalent to using { print } or { print $0 } ). Set the value of the special variable OFS to use another field delimiter than the default (space) in the output. 
Here I'm using whatever the input field delimiter is set to: $ awk -F '\t' 'BEGIN { OFS=FS } { $3 = gsub("N","N",$2) }; 1' filesample1 TCTNG 1sample2 CCNGGGGGTN 2sample3 GGGNNNTC 3 Simirlarly in Perl, but using the tr operator in place of gsub() : $ perl -MEnglish -a -F '\t' -e 'BEGIN { $OFS="\t"; $ORS="\n" } print @F, ($F[1] =~ tr/N/N/)' filesample1 TCTNG 1sample2 CCNGGGGGTN 2sample3 GGGNNNTC 3 | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/670200",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/493471/"
]
} |
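The response recorded for the entry above repeats the previous answer and does not address the question. As an illustrative sketch only (not from the original thread): a common way around the width problem is to pad the field with a plain printf first and only then wrap the already-padded text in the color codes, so the width specifier never sees the escape sequences. The variables used are the ones from the question's loop.
# sketch: pad first, colorize afterwards
field=$(printf '%6s' "$ii")
printf '%6d %s%s%s %6d\n' "$((ii*2))" "$clr1" "$field" "$DefDef" "$((ii*4))"
Because the color codes are applied after padding, the visible columns stay aligned no matter which color (or none) a given row uses.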
670,284 | I have a log file which roughly looks like this: Sep 23 10:28:26 node kernel: em0: device is going DOWNSep 23 10:28:26 node kernel: em0: device is going UPSep 23 10:29:14 node cdsmon: /tmp/instance0 ; core dumpedSep 23 10:29:14 node cdsmon: /tmp/instance0 ; core dumpedSep 23 10:28:26 node kernel: em0: device is going DOWNSep 23 10:29:14 node cdsmon: /tmp/instance1 ; core dumpedSep 23 10:28:26 node kernel: em0: device is going UPSep 23 10:29:14 node cdsmon: /tmp/instance2 ; core dumped I want to detect the lines with cdsmon and then split the line by ; (to get the /tmp/instance0 and the event like core dumped ). For this I used sed as: sed -u -n -e "s/^.*cdsmon: //p" /tmp/dev.log which gives output as: /tmp/instance0 ; core dumped/tmp/instance0 ; core dumped/tmp/instance1 ; core dumped/tmp/instance2 ; core dumped But upon piping this output to awk as shown below, it gives the same output as above: sed -u -n -e "s/^.*cdsmon: //p" /tmp/dev.log | awk -F ";" "{print $1}" The same is observed despite removing the -u option from sed . Can anyone please point out if I'm missing anything? I'm using a FreeBSD box with regular awk/sed and unfortunately cannot install any new package. | The reason for the behavior of awk is that you have enclosed the program in double quotes, which leaves the string open to variable expansion by the shell. That means the shell from which you are running the program will first expand $1 , and since that is likely undefined, it expands to the empty string. So, your program amounts to awk -F ";" "{print}" and this is why the entire line is printed. This is one of the reasons you should always include your awk (and sed ) programs in single quotes. Note that in most cases you don't need to pipe the output from sed into awk or vice versa. In your example, if you want to get the first field after the "event label", you could do the following: sed -E -n 's/^.*cdsmon: ([^;]*).*$/\1/p' /tmp/dev.log This will define a capture group around the string after cdsmon: and up to the first ; , and replace the entire line with the content of that capture group. If you want to print a summary of the events logged by cdsmon , you can expand the sed approach above as: sed -E -n 's/^.*cdsmon: ([^;]*) ; (.*)$/\1 : \2/p' dev.log Alternatively, here is another awk -only approach: awk -F'(cdsmon: | ; )' 'NF==3{printf "%s : %s\n",$2,$3}' dev.log For your example, both will print /tmp/instance0 : core dumped/tmp/instance0 : core dumped/tmp/instance1 : core dumped/tmp/instance2 : core dumped but be aware that the awk approach can stumble on edge cases. It takes the patterns cdsmon: and ; as field separators. When there are three fields (in your example, it can only happen for the cdsmon: entries), it prints the second and third field, corresponding to the instance name after cdsmon: and the reason after ; . | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/670284",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/257183/"
]
} |
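A quick way to see the quoting difference described in the answer above, using throwaway sample data (and assuming $1 is unset in the calling shell):
$ echo 'a;b;c' | awk -F ';' "{print $1}"    # shell expands $1 first, awk is left with {print }
a;b;c
$ echo 'a;b;c' | awk -F ';' '{print $1}'    # single quotes: awk itself sees $1
a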
670,302 | So I installed Ubuntu on a 32GB SD card, and made all the settings and adjustments that I need. Now I want to start to burn this image into a device that has only 8GB memory in its eMMC . The used space on the SD card is just 1.4GB and I want to make an image that is 8GB using dd . I tried different things, but it didn't work. Filesystem Size Used Avail Use% Mounted onudev 464M 0 464M 0% /devtmpfs 100M 1.3M 99M 2% /runoverlay 29G 1.4G 28G 5% /tmpfs 500M 0 500M 0% /dev/shmtmpfs 5.0M 4.0K 5.0M 1% /run/locktmpfs 500M 0 500M 0% /sys/fs/cgrouptmpfs 100M 0 100M 0% /run/user/1000tmpfs 100M 0 100M 0% /run/user/0```:~# fdisk -lDisk /dev/mmcblk0: 29.74 GiB, 31914983424 bytes, 62333952 sectorsUnits: sectors of 1 * 512 = 512 bytesSector size (logical/physical): 512 bytes / 512 bytesI/O size (minimum/optimal): 512 bytes / 512 bytesDisklabel type: dosDisk identifier: 0xc9c537b6Device Boot Start End Sectors Size Id Type/dev/mmcblk0p1 49152 131071 81920 40M b W95 FAT32/dev/mmcblk0p2 131072 2361343 2230272 1.1G 83 Linux/dev/mmcblk0p3 2361344 62333951 59972608 28.6G 83 LinuxDisk /dev/mmcblk1: 7.29 GiB, 7818182656 bytes, 15269888 sectorsUnits: sectors of 1 * 512 = 512 bytesSector size (logical/physical): 512 bytes / 512 bytesI/O size (minimum/optimal): 512 bytes / 512 bytesDisklabel type: dosDisk identifier: 0xc9c537b6Device Boot Start End Sectors Size Id Type/dev/mmcblk1p1 49152 131071 81920 40M b W95 FAT32/dev/mmcblk1p2 131072 2361343 2230272 1.1G 83 Linux/dev/mmcblk1p3 2361344 62333951 59972608 28.6G 83 LinuxDisk /dev/mmcblk1boot1: 4 MiB, 4194304 bytes, 8192 sectorsUnits: sectors of 1 * 512 = 512 bytesSector size (logical/physical): 512 bytes / 512 bytesI/O size (minimum/optimal): 512 bytes / 512 bytesDisk /dev/mmcblk1boot0: 4 MiB, 4194304 bytes, 8192 sectorsUnits: sectors of 1 * 512 = 512 bytesSector size (logical/physical): 512 bytes / 512 bytesI/O size (minimum/optimal): 512 bytes / 512 bytes eMMC is /dev/mmcblk1 . | The reason for the behavior of awk is that you have enclosed the program in double quotes, which leaves the string open to variable expansion by the shell. That means the shell from which you are running the program will first expand $1 , and since that is likely undefined, it expands to the empty string. So, your program amounts to awk -F ";" "{print}" and this is why the entire line is printed. This is one of the reasons you should always include your awk (and sed ) programs in single quotes. Note that in most cases you don't need to pipe the output from sed into awk or vice versa. In your example, if you want to get the first field after the "event label", you could do the following: sed -E -n 's/^.*cdsmon: ([^;]*).*$/\1/p' /tmp/dev.log This will define a capture group around the string after cdsmon: and up to the first ; , and replace the entire line with the content of that capture group. If you want to print a summary of the events logged by cdsmon , you can expand the sed approach above as: sed -E -n 's/^.*cdsmon: ([^;]*) ; (.*)$/\1 : \2/p' dev.log Alternatively, here is another awk -only approach: awk -F'(cdsmon: | ; )' 'NF==3{printf "%s : %s\n",$2,$3}' dev.log For your example, both will print /tmp/instance0 : core dumped/tmp/instance0 : core dumped/tmp/instance1 : core dumped/tmp/instance2 : core dumped but be aware that the awk approach can stumble on edge cases. It takes the patterns cdsmon: and ; as field separators. 
When there are three fields (in your example, it can only happen for the cdsmon: entries), it prints the second and third field, corresponding to the instance name after cdsmon: and the reason after ; . | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/670302",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/493566/"
]
} |
670,307 | I have a file called file.txt containing: MAL TIRRUEZF CR MAL RKZYIOL EX MAL OIY UAE RICF "MAL ACWALRM DYEUPLFWL CR ME DYEU MAIM UL IZL RKZZEKYFLF GH OHRMLZH" I'd like the characters replaced as follows: M = TA = HL = EC = OR = FE = IX = S(Any other letter) = _(Anything else) = (itself) I have the fixed characters covered with: tr MALCREX THEOFIS < file.txt Or: sed 'y/MALCREX/THEOFIS/' < file.txt But how could I enforce the last two rules I mentioned? | I think you could use the fact that for many practical implementations, if a character repeats in the first set to tr , the last instance takes effect. Combined with the repeat syntax, you could do it without having to explicitly list the letters that don't appear in your transformation table. With the GNU version of tr, and whatever FreeBSD based one I have on my Mac, this: tr 'A-ZMALCREX' '[_*26]THEOFIS' turns MAL TIRRUEZF CR MAL RKZYIOL EX MAL OIY UAE RICF "MAL ACWALRM DYEUPLFWL CR ME DYEU MAIM UL IZL RKZZEKYFLF GH OHRMLZH" into THE __FF_I__ OF THE F_____E IS THE ___ _HI F_O_ "THE HO_HEFT __I__E__E OF TI __I_ TH_T _E __E F___I___E_ __ __FTE__" Of course that assumes that A-Z produces exactly 26 characters, and I'm not sure if that applies in every locale with every tr implementation. It should work in the C locale, and e.g. the GNU version of tr doesn't support anything but raw 8-bit characters anyway. The above doesn't work in Busybox, but that appears to be because it doesn't support the repetition syntax. There, you have to do it manually: busybox tr 'A-ZMALCREX' '__________________________THEOFIS' (thats 26 copies of the underscore) Having a repeat character override the earlier instance of the same comes naturally for a simple table-based implementation. If your tr is implemented differently, you'll need to use the solutions from other answers. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/670307",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/-1/"
]
} |
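If a given tr implementation does not honour the last-duplicate-wins behaviour relied on above, a two-pass pipeline reaches the same result: first blank every letter that is not part of the key, then translate the key letters. This sketch assumes the POSIX [c*] repetition syntax and lists the 19 non-key letters explicitly:
tr 'BDFGHIJKNOPQSTUVWYZ' '[_*]' < file.txt | tr 'MALCREX' 'THEOFIS'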
670,636 | I have a USB Zigbee dongle, but I'm unable to connect to it. It briefly shows up in /dev/ttyUSB0 , but then quickly disappears. I see the following output in the console: $ dmesg --follow...[ 738.365561] usb 1-10: new full-speed USB device number 8 using xhci_hcd[ 738.607730] usb 1-10: New USB device found, idVendor=1a86, idProduct=7523, bcdDevice= 2.64[ 738.607737] usb 1-10: New USB device strings: Mfr=0, Product=2, SerialNumber=0[ 738.607739] usb 1-10: Product: USB Serial[ 738.619446] ch341 1-10:1.0: ch341-uart converter detected[ 738.633501] usb 1-10: ch341-uart converter now attached to ttyUSB0[ 738.732348] audit: type=1130 audit(1632606446.974:2212): pid=1 uid=0 auid=4294967295 ses=4294967295 msg='unit=brltty-device@sys-devices-pci0000:00-0000:00:01.3-0000:03:00.0-usb1-1\x2d10 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'[ 738.768081] audit: type=1130 audit(1632606447.007:2213): pid=1 uid=0 auid=4294967295 ses=4294967295 msg='unit=brltty@-sys-devices-pci0000:00-0000:00:01.3-0000:03:00.0-usb1-1\x2d10 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'[ 738.776433] usb 1-10: usbfs: interface 0 claimed by ch341 while 'brltty' sets config #1[ 738.783508] ch341-uart ttyUSB0: ch341-uart converter now disconnected from ttyUSB0[ 738.783521] ch341 1-10:1.0: device disconnected[ 739.955783] input: BRLTTY 6.4 Linux Screen Driver Keyboard as /devices/virtual/input/input35... | The problem here is BRLTTY, a program that "provides access to the Linux/Unix console (when in text mode) for a blind person using a refreshable braille display". If you are not blind, you can disable BRLTTY in two different ways: Remove udev rules BRLTTY uses udev rules to get permissions to mess with the TTYs without being root. You can disable these rules by overriding the rules shipped by your distro with /dev/null : for f in /usr/lib/udev/rules.d/*brltty*.rules; do sudo ln -s /dev/null "/etc/udev/rules.d/$(basename "$f")"donesudo udevadm control --reload-rules Disable service The BRLTTY service is launched by the brltty.path service. This service can be completely prevented from ever starting by running by doing the following: $ sudo systemctl mask brltty.pathCreated symlink /etc/systemd/system/brltty.path → /dev/null. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/670636",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/70735/"
]
} |
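After applying either fix above, one way to confirm that the CH341 adapter is no longer being grabbed is to replug the dongle and check that the ttyUSB device persists:
$ dmesg | tail -n 20      # should show "ch341-uart converter now attached to ttyUSB0" with no disconnect afterwards
$ ls -l /dev/ttyUSB0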
670,765 | I have a script that takes a string input from users. I am looking to check that the string input should have exactly 2 dots. The relevance is only to the dots. The string should not start and end with a dot.There should be no consecutive dots. This is the pattern I am using: ^[^\.]*\.[^\.]*\.[^\.]*$ This is the string I am looking for : abc.def.xyz But in the pattern above, if dots are in front or at the end, then that string gets selected - which I don't want. There should be only two dots in the string. Not wanted: .abc.xyz # no dot at the start abc.xyz. # no dot at the end abc.def.ced.xyz # only two dots not more than that I have tried using (?!\.) for the dot at the start, but it didn't work. | You're not saying how the string is input from the user, but note that if it may contain newline characters, you can't use grep to filter them (unless you use the --null extension) as grep works on one line at a time. Also note that the [^\.] regex matches on characters other than backslash and . and the . regex operator (or [...] ) in many regex implementations will not match on bytes that don't form valid characters in the locale. Here, to check that $string contains 2 and only 2 dots, but not at the start nor end and not next to each other, you can use the standard sh : case $string in (*.*.*.* | .* | *. | *..* ) echo not OK;; (*.*.*) echo OK;; (*) echo not OK;;esac Or with ksh globs, a subset of which can be made available in the bash shell by doing shopt -s extglob : case $string in ( +([!.]).+([!.]).+([!.]) ) echo OK;; (*) echo not OK;;esac bash can also do extended regex matching with the =~ operator inside its [[...]] ksh-style construct, but again, you'll want to fix the locale to C: regex_match_in_C_locale() { local LC_ALL=C [[ $1 =~ $2 ]]}if regex_match_in_C_locale "$string" '^[^.]+\.[^.]+\.[^.]+$'; then echo OKelse echo not OKfi POSIXly, you can do basic regex matching with the expr utility: if LC_ALL=C expr "x$string" : 'x[^.]\{1,\}\.[^.]\{1,\}\.[^.]\{1,\}$' > /dev/nullthen echo OKelse echo not OKfi Or extended regex matching with the awk utility: regex_match_in_C_locale() { LC_ALL=C awk -- 'BEGIN {exit(ARGV[1] !~ ARGV[2])}' "$@"}if regex_match_in_C_locale "$string" '^[^.]+\.[^.]+\.[^.]+$'; then echo OKelse echo not OKfi | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/670765",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/491701/"
]
} |
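In bash specifically, the dot count can also be checked with parameter expansion before applying the positional tests; a small sketch (the variable name string is just illustrative):
dots=${string//[!.]/}        # keep only the dots
if [ "${#dots}" -eq 2 ] && [[ $string != .* && $string != *. && $string != *..* ]]; then
  echo OK
else
  echo not OK
fi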
670,766 | I need to list the sub directories of a certain path. The separation should be with spaces and not new lines. I also do not want the absolute path to the sub directories, just their names. # correctdir1 dir2 dir3# incorrect: separation with new linesdir1dir2dir3# incorrect: absolute paths/home/x/y/dir1 /home/x/y/dir2 /home/x/y/dir3 I've seen a lot of other posts like this SO post , but they do not accomplish my request. I've tried ls -d ~/y but it lists absolute paths and separates with new lines. I guess I could use sed to remove the irrelevant part of the path, and then remove all the new lines. But I couldn't get it to work, and it seems like there should be a better solution | Assuming that you are using GNU tools, you could use GNU basename to get the names of all subdirectories in a particular directory. You could then use paste to format this as a space-delimited list. basename -a /some/path/*/ | paste -d ' ' -s - The above command uses the fact that GNU basename has an -a option to return the filename portion of multiple pathnames given as operands on its command line. We use a file-globbing pattern ending in / to generate the pathnames for GNU basename . Only directories can match such a pattern. In the end, the paste creates the space-separated list from the newline-separated list produced by GNU basename . Note that it would be difficult to parse the generated list of filenames if any of the original names of directories contain space characters. Note that if the directory contains symbolic links, this method will try to follow those symbolic links. Restricting us from using any external tools, we could use an array in the bash shell to store and manipulate the directory paths. shopt -s nullglobtopdir=/some/pathdirpaths=( "$topdir"/*/ )dirpaths=( "${dirpaths[@]#$topdir/}" )dirpaths=( "${dirpaths[@]%/}" )printf '%s\n' "${dirpaths[*]}" The above shell code expands the same globbing pattern as we used in the first part of this answer but stores the resulting directory paths in the array dirpaths . It then deletes the known prefix $topdir/ from each element of the array and the trailing / before printing the array as a single string of space-delimited names. The delimiter used between the names on the last line will be the first character from $IFS , which by default is a space. Using find , you could look for subdirectories in the particular top directory you're interested in while making sure not to return the top directory itself. You would also stop find from progressing into the subdirectories. topdir=/some/pathfind "${topdir%/}/." ! -name . -prune -type d -exec basename {} \; | paste -d ' ' -s - The above command avoids the search starting point using a negated -name test, and it prunes the search tree with -prune so that find does not recurse down into any subdirectories. We call basename for each found directory which outputs the filename of the directories onto separate lines. As the last step, we're piping the result from find through paste to format the output into a space-separated list on a single line. With GNU find , you could write this as find /some/path -mindepth 1 -maxdepth 1 -type d -printf '%f\n' | paste -d ' ' -s - Using find like this will list directories with hidden names, and you will not see any symbolically linked directories. In the zsh shell, you would be able to use a more advanced shell globbing pattern to pick out the filenames of only directories and print them in one go. 
print -r -- /some/path/*(/:t) This command uses a glob qualifier, /:t , consisting of two parts, affecting the preceding globbing pattern /some/path/* . The / makes the pattern only match directories (not symbolically linked ones; for that use -/ ), while :t extracts the "tail" of each generated pathname, i.e., the filename component. The print -r command prints its arguments with spaces as delimiters while avoiding expanding escape sequences like \n or \t in the data. Using -- to delimit the operands from the options (also works with - like in the ksh shell) makes sure directory names resulting from the glob expansion are not taken as options even if they start with - . You could use this from within the bash shell to generate your list. zsh -c 'print -r -- /some/path/*(/:t)' | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/670766",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/494501/"
]
} |
670,979 | OS: Debian 10.10 I am trying to understand why the "usermod" command runs when I launch it with "su -", but when it is launched from "su root" the result is "bash: usermod: command not found". Thanks! | The su command without - keeps your existing environment, and only switches you to the user without loading all of his environment variables. su - will simulate a user login and will not only switch you to the user but also load his environment variables. From man su: -, -l, --login Start the shell as a login shell with an environment similar to a real login: o clears all the environment variables except TERM and variables specified by --whitelist-environment o initializes the environment variables HOME, SHELL, USER, LOGNAME, and PATH o changes to the target user's home directory o sets argv[0] of the shell to '-' in order to make the shell a login shell In this case you probably don't load all the elements in the PATH variable of the root user. Type echo $PATH after you do su root and after you do su - ; you will probably have extra folders in PATH after the su - command. The usermod command should be in /usr/sbin , which is a path only meant to be available to the superuser; commands inside /sbin and /usr/sbin are meant to be used for administration purposes and only run by administrative users, not normal users. You can use type usermod or which usermod and see that usermod is at /usr/sbin/usermod , and you probably won't have /usr/sbin in the output of echo $PATH after su root but will have it inside the PATH variable after the su - command. /sbin Like /bin, this directory holds commands needed to boot the system, but which are usually not executed by normal users. /usr/sbin This directory contains program binaries for system administration which are not essential for the boot process, for mounting /usr, or for system repair. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/670979",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/460369/"
]
} |
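The PATH difference described above is easy to see directly, and calling the command by its full path avoids the problem without starting a login shell:
$ su root -c 'echo $PATH'
$ su - root -c 'echo $PATH'               # should additionally contain /usr/sbin and /sbin
$ su root -c '/usr/sbin/usermod --help'   # full path works even with the shorter PATH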
671,142 | I just copied all the files/subdirectories in my home directory to another user's home directory. Then I did a recursive chown on his home directory, so that he became the owner of all his files/subdirectories. The last thing I need to do is a recursive chgrp on his home directory, so that his username will be the group for all his files/subdirectories, instead of my username. The issue is that there are a couple of subdirectories whose group is "docker". Inside these subdirectories, there are some files/directories whose group is my username, and some other files/directories whose group is "docker". How do I recursively run chgrp on his home directory so that every single file/subdirectory whose group is my username gets changed to his username, but every single file/subdirectory whose group is "docker" stays "docker"? | Use find to exclude anything owned by group docker ; starting from the target home directory: find . ! -group docker -exec chgrp newgroup {} + replacing newgroup as appropriate. Alternatively, look for anything owned by your group: find . -group oldgroup -exec chgrp newgroup {} + replacing oldgroup and newgroup as appropriate. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/671142",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/72548/"
]
} |
671,147 | I have a big size file like 5GB with .gz . Inside that file, we have few XML files that contains values that I want to search and extract just in case if those values are there. For example I want to extract the tags that contains the name NOOSS and also the subcontent of this tags like <pmJobId> , <requestedJobState> , <reportingPeriod> , <jobPriority> from the the .gz file <Pm xmlns="urnCmwPm"> <pmId>1</pmId> <PmJob> <pmJobId>NOOSSCONTROLExample</pmJobId> <requestedJobState>ACTIVE</requestedJobState> <reportingPeriod>FIVE_MIN</reportingPeriod> <jobType>MEASUREMENTJOB</jobType> <jobPriority>HIGH</jobPriority> <granularityPeriod>FIVE_MIN</granularityPeriod> <jobGroup>Sla</jobGroup> <reportContentGeneration>CHANGED_ONLY</reportContentGeneration> <MeasurementReader> <measurementReaderId>mr_2</measurementReaderId> <measurementSpecification struct="MeasurementSpecification"> <measurementTypeRef>Anything</measurementTypeRef> </measurementSpecification> <thresholdRateOfVariation>PER_SECOND</thresholdRateOfVariation> </MeasurementReader> <MeasurementReader> <measurementReaderId>mr_1</measurementReaderId> <measurementSpecification struct="MeasurementSpecification"> <measurementTypeRef>ManagedElement=1,SystemFunctions=1,Pm=1,PmGroup=OSProcessingLogicalUnit,MeasurementType=CPULoad.Total</measurementTypeRef> </measurementSpecification> <thresholdRateOfVariation>PER_SECOND</thresholdRateOfVariation> </MeasurementReader> </PmJob></Pm> I was using cat *gz 1 zgrep -a "PmJobId" but the output only show the <pmJobId> value and not the rest of the information or tags. Please your help, I'm noobie on this. Im using CentOS - RedHat Linux. Thanks | Use find to exclude anything owned by group docker ; starting from the target home directory: find . ! -group docker -exec chgrp newgroup {} + replacing newgroup as appropriate. Alternatively, look for anything owned by your group: find . -group oldgroup -exec chgrp newgroup {} + replacing oldgroup and newgroup as appropriate. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/671147",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/232422/"
]
} |
671,208 | I installed VLC through terminal but it shows: bash: /snap/bin/vlc: No such file or directory I also tried: which vlc and it showed: /usr/bin/vlc When I try to run it through sudo su , it shows this error: VLC is not supposed to be run as root. Sorry.If you need to use real-time priorities and/or privileged TCP portsyou can use vlc-wrapper (make sure it is Set-UID root andcannot be run by non-trusted users first). Any idea how I can fix this issue? I tried using the snap VLC package, which I installed using the terminal, but I couldn't navigate to my Downloads folder. I could only navigate in the "computer" folder, which consists of /bin , /usr , /var , etc. I was able to play the items of the folder I wanted by dragging and dropping. I'm also only able to open VLC through the terminal. Opening it through the start menu doesn't do anything. I'm using Zorin OS 16, which is based on Ubuntu 20.04, if I'm not wrong. | You should run $ /usr/bin/vlc As for why executing vlc looks for /snap/bin/vlc , I wouldn't know.If you had a snap for vlc installed, I guess it should have worked as well. Perhaps you have an alias set in your ~/.bashrc or elsewhere.If you find such an alias, and remove it, you could probably start running vlc without the need for prepending the full path. EDIT To remove the difficulty, you could check if you actually have any file or soft link /snap/bin/vlc .Check with $ type vlc$ ls -al /snap/bin/vlc Also, you could setup your own alias vlc=/usr/bin/vlc in ~/.bashrc .If that is read after the presumed other alias, you would be ok. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/671208",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/509916/"
]
} |
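If type vlc reports a remembered (hashed) location such as /snap/bin/vlc rather than an alias, bash may simply be caching a path that no longer exists (for example after a snap was removed); clearing the command hash is a quick thing to try:
$ hash -r        # forget all remembered command locations for this shell
$ type vlc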
671,284 | I can't decrypt my passwords with pass neither with gpg directly. gpg: encrypted with rsa4096 key, ID id, created creation_date "name <email>" gpg: public key decryption failed: No pinentry gpg: decryption failed: No pinentry It does not show a prompt dialog asking for the master password. It says "no pinentry" but the program is installed: $ ls /usr/bin/pinentry*/usr/bin/pinentry/usr/bin/pinentry-curses/usr/bin/pinentry-emacs/usr/bin/pinentry-gnome3/usr/bin/pinentry-gtk-2/usr/bin/pinentry-qt/usr/bin/pinentry-tty Please, I need help asap because I can't login into nothing withoutmy passwords, which are all encrypted with GPG. | I solved the problem by running the following commands pkill gpg-agentgpg-agent --pinentry-program=/usr/bin/pinentry-gtk-2 --daemon and it worked. I don't know why pinentry wasn't working, but startinga new gpg-agent daemon has worked. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/671284",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/462354/"
]
} |
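To make the pinentry choice persistent instead of passing it on the gpg-agent command line each time, the usual approach is to set it in ~/.gnupg/gpg-agent.conf and let the agent restart on demand:
$ echo 'pinentry-program /usr/bin/pinentry-gtk-2' >> ~/.gnupg/gpg-agent.conf
$ gpgconf --kill gpg-agent        # a new agent picks up the setting the next time gpg needs it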
671,530 | I am not into scripting but manage to create few with the help in this forum. Coming across a problem but not able to get it work (not sure if it is possible) I have a fileY with content lrwxrwxrwx 1 user1 gp 35 2021-09-07 2000 /folder/subfolder1/subfolder2/subfolder3/main/summary.txtlrwxrwxrwx 1 user1 gp 35 2021-09-08 1400 /folder/subfolder1/subfolder2/main/summary.txtlrwxrwxrwx 1 user1 gp 35 2021-09-09 1800 /folder/subfolder1/subfolder2/subfolder3/subfolder4/main/summary.txt I wanted to output the column 3,6,7,8 and concatenate with the folder name before "main" like below user1 2021-09-07 2000 /folder/subfolder1/subfolder2/subfolder3/main/summary.txt subfolder3user1 2021-09-08 1400 /folder/subfolder1/subfolder2/main/summary.txt subfolder2user1 2021-09-09 1800 /folder/subfolder1/subfolder2/subfolder3/subfolder4/main/summary.txt subfolder4 How can i have below sed command as one of the {print} variable for awk command? awk '{print $3,$6,$7,$8}' fileYsed 's/\// /g; s/\./ /g' fileY | awk '{for(i=8;i<=NF;i++){if($i~/^main/){a=i}} print $(a-1)}' | You never need sed when you're using awk. If the directory you want is always 3rd-last in the path as in your examples then all you need is this using any awk: $ awk '{print $3, $6, $7, $8, p[split($8,p,"/")-2]}' fileuser1 2021-09-07 2000 /folder/subfolder1/subfolder2/subfolder3/main/summary.txt subfolder3user1 2021-09-08 1400 /folder/subfolder1/subfolder2/main/summary.txt subfolder2user1 2021-09-09 1800 /folder/subfolder1/subfolder2/subfolder3/subfolder4/main/summary.txt subfolder4 Otherwise using GNU awk for the 3rd arg to match(): $ awk '{match($8,"([^/]+)/main/",a); print $3, $6, $7, $8, a[1]}' fileuser1 2021-09-07 2000 /folder/subfolder1/subfolder2/subfolder3/main/summary.txt subfolder3user1 2021-09-08 1400 /folder/subfolder1/subfolder2/main/summary.txt subfolder2user1 2021-09-09 1800 /folder/subfolder1/subfolder2/subfolder3/subfolder4/main/summary.txt subfolder4 or using any awk: $ awk '{match($8,"[^/]+/main/"); print $3, $6, $7, $8, substr($8,RSTART,RLENGTH-6)}' fileuser1 2021-09-07 2000 /folder/subfolder1/subfolder2/subfolder3/main/summary.txt subfolder3user1 2021-09-08 1400 /folder/subfolder1/subfolder2/main/summary.txt subfolder2user1 2021-09-09 1800 /folder/subfolder1/subfolder2/subfolder3/subfolder4/main/summary.txt subfolder4 | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/671530",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/491876/"
]
} |
671,655 | I installed Debian 11 (Bullseye) onto a device with no internet. I used the "firmware CD" version of the ISO. I have configured the network, so I can do ping 8.8.8.8 . I tried to run sudo apt update , but I discovered that there weren't any sources in the sources.list file (e.g., it was empty). I found this question , but it is for Debian Jessie, not Bullseye. I would also like non-free packages. How can I restore the default repositories, as if I had installed Debian with an internet connection? | You can find all the information about sources.list in the official Debian wiki site , specifically about your question under Example sources.list : deb http://deb.debian.org/debian bullseye main contrib non-freedeb-src http://deb.debian.org/debian bullseye main contrib non-freedeb http://deb.debian.org/debian-security/ bullseye-security main contrib non-freedeb-src http://deb.debian.org/debian-security/ bullseye-security main contrib non-freedeb http://deb.debian.org/debian bullseye-updates main contrib non-freedeb-src http://deb.debian.org/debian bullseye-updates main contrib non-free You can comment or delete the cdrom lines, since they are not useful anymore, and when executing apt update an error will be thrown. Also comment out the deb-src lines unless you actually intend to download and compile source packages in the near future. Commenting them out halves the download time for apt update . Uncomment them if and when you want to recompile a package or examine its source code. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/671655",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/495409/"
]
} |
671,666 | I have this thing that I maintain at work, and it has a pretty arcane DSL that it uses. And the tooling for it is not great. To deal with the poor tooling, I've written some scripts to try to find some issues with the code before I send it to production. The current problem I'm trying to solve has to do with variable names. Variables are named like @@Variable@@ . If there is only 1 @ , or more than 2 @ s, then it is a fatal error. Right now I've got it looping through the files in question, and grepping for @@@ and raising an error when it finds 3 or more consecutive @ 's. So that part is cool. But I'm sort of stuck on the single @ . There can be more than one variable on a line. @@Var1@@ words words words @@Var2@@ #This works@Var1@@ words words words @@Var2@@ #This will fail because Var1 is wrong.@@Var1@ words words words @@Var2@@ #This will fail because Var1 is wrong.@@Var1@@ words words words @Var2@@ #This will fail because Var2 is wrong.@@Var1@@ words words words @@Var2@ #This will fail because Var2 is wrong. There are loads of permutations of the above, and there is no limit to the number of variables on a line. This awk script works if there is a single variable on any given line, but it doesn't work if there is more than 1 variable on a line. awk '/@/ && ! /@@.*@@/' test.txt What I really need to do is match anything where there is only a single instance of @ . In the sample code above, it would match on all lines except line 1. | $ grep -E '(^|[^@])@([^@]|$)|@@@' file@Var1@@ words words words @@Var2@@ #This will fail because Var1 is wrong.@@Var1@ words words words @@Var2@@ #This will fail because Var1 is wrong.@@Var1@@ words words words @Var2@@ #This will fail because Var2 is wrong.@@Var1@@ words words words @@Var2@ #This will fail because Var2 is wrong. or: $ awk '/(^|[^@])@([^@]|$)|@@@/' file@Var1@@ words words words @@Var2@@ #This will fail because Var1 is wrong.@@Var1@ words words words @@Var2@@ #This will fail because Var1 is wrong.@@Var1@@ words words words @Var2@@ #This will fail because Var2 is wrong.@@Var1@@ words words words @@Var2@ #This will fail because Var2 is wrong. or analyzing one field at a time: $ cat tst.awk{ for (i=1; i<=NF; i++) { if ( $i ~ /^@[^@]|[^@]@$|@@@/ ) { print "Failed line:", NR, $0 print "\tbecause of field", i, $i } }} $ awk -f tst.awk fileFailed line: 2 @Var1@@ words words words @@Var2@@ #This will fail because Var1 is wrong. because of field 1 @Var1@@Failed line: 3 @@Var1@ words words words @@Var2@@ #This will fail because Var1 is wrong. because of field 1 @@Var1@Failed line: 4 @@Var1@@ words words words @Var2@@ #This will fail because Var2 is wrong. because of field 5 @Var2@@Failed line: 5 @@Var1@@ words words words @@Var2@ #This will fail because Var2 is wrong. because of field 5 @@Var2@ You don't need anything additional to find @@@ cases, the above includes finding that case too. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/671666",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/21312/"
]
} |
671,689 | It is possible to write on /dev/mem without using mmap? I'm enabling pull-up resistors on a Raspberry Pi inside an LKM and the function void *mmap (caddr_t addr, size_t len, int prot, int flags, int fd, off_t offset) doesn't exists. I've tried to use open (to later convert it into filp_open ) but it does nothing: #include <stdio.h>#include <stdarg.h>#include <stdint.h>#include <stdlib.h>#include <ctype.h>#include <unistd.h>#include <errno.h>#include <string.h>#include <fcntl.h>#include <sys/mman.h>#include <time.h>#include <errno.h>// From https://github.com/RPi-Distro/raspi-gpio/blob/master/raspi-gpio.c#define PULL_UNSET -1#define PULL_NONE 0#define PULL_DOWN 1#define PULL_UP 2#define GPIO_BASE_OFFSET 0x00200000#define GPPUD 37#define GPPUDCLK0 38#define BASE_READ 0x1000#define BASE_SIZE (BASE_READ/sizeof(uint32_t))uint32_t getGpioRegBase(void) { const char *revision_file = "/proc/device-tree/system/linux,revision"; uint8_t revision[4] = { 0 }; uint32_t cpu = 0; FILE *fd; if ((fd = fopen(revision_file, "rb")) == NULL) { printf("Can't open '%s'\n", revision_file); exit(EXIT_FAILURE); } else { if (fread(revision, 1, sizeof(revision), fd) == 4) cpu = (revision[2] >> 4) & 0xf; else { printf("Revision data too short\n"); exit(EXIT_FAILURE); } fclose(fd); } printf("CPU: %d\n", cpu); switch (cpu) { case 0: // BCM2835 [Pi 1 A; Pi 1 B; Pi 1 B+; Pi Zero; Pi Zero W] //chip = &gpio_chip_2835; return 0x20000000 + GPIO_BASE_OFFSET; case 1: // BCM2836 [Pi 2 B] case 2: // BCM2837 [Pi 3 B; Pi 3 B+; Pi 3 A+] //chip = &gpio_chip_2835; return 0x3f000000 + GPIO_BASE_OFFSET; case 3: // BCM2711 [Pi 4 B] //chip = &gpio_chip_2711; return 0xfe000000 + GPIO_BASE_OFFSET; default: printf("Unrecognised revision code\n"); exit(1); }}int writeBase(uint32_t reg_base, uint32_t offset, uint32_t data) { int fd; if ((fd = open("/dev/mem", O_RDWR | O_SYNC | O_CLOEXEC) ) < 0) return -1; if (lseek(fd, reg_base+offset, SEEK_SET) == -1) return -2; if (write(fd, (void*)&data, sizeof(uint32_t)) != sizeof(uint32_t)) return -3; if (close(fd) == -1) return -4; return 0;}int setPull(unsigned int gpio, int pull) { int r; int clkreg = GPPUDCLK0 + (gpio / 32); int clkbit = 1 << (gpio % 32); uint32_t reg_base = getGpioRegBase(); r = writeBase(reg_base, GPPUD, pull); // base[GPPUD] = pull if (r < 0) return r; usleep(10); r = writeBase(reg_base, clkreg, clkbit); // base[clkreg] = clkbit if (r < 0) return r; usleep(10); r = writeBase(reg_base, GPPUD, 0); // base[GPPUD] = 0 if (r < 0) return r; usleep(10); r = writeBase(reg_base, clkreg, 0); // base[clkreg] = 0 usleep(10); return r;}int main(int argc, char *argv[]) { int gpio, r; if (argc!=2) { printf("GPIO pin needed!\n"); return 1; } gpio = atoi(argv[1]); printf("Enabling pull-up on GPIO%d...\n", gpio); r = setPull(gpio, PULL_UP); printf("Return value: %d\n", r); if (r != 0) printf("%s\n", strerror(errno)); return r;} This is a fragment of raspi-gpio that does what I want: #include <stdio.h>#include <stdarg.h>#include <stdint.h>#include <stdlib.h>#include <ctype.h>#include <unistd.h>#include <errno.h>#include <string.h>#include <fcntl.h>#include <sys/mman.h>#include <time.h>// From https://github.com/RPi-Distro/raspi-gpio/blob/master/raspi-gpio.c#define PULL_UNSET -1#define PULL_NONE 0#define PULL_DOWN 1#define PULL_UP 2#define GPIO_BASE_OFFSET 0x00200000#define GPPUD 37#define GPPUDCLK0 38uint32_t getGpioRegBase(void) { const char *revision_file = "/proc/device-tree/system/linux,revision"; uint8_t revision[4] = { 0 }; uint32_t cpu = 0; FILE *fd; if ((fd = fopen(revision_file, 
"rb")) == NULL) { printf("Can't open '%s'\n", revision_file); } else { if (fread(revision, 1, sizeof(revision), fd) == 4) cpu = (revision[2] >> 4) & 0xf; else printf("Revision data too short\n"); fclose(fd); } printf("CPU: %d\n", cpu); switch (cpu) { case 0: // BCM2835 [Pi 1 A; Pi 1 B; Pi 1 B+; Pi Zero; Pi Zero W] return 0x20000000 + GPIO_BASE_OFFSET; case 1: // BCM2836 [Pi 2 B] case 2: // BCM2837 [Pi 3 B; Pi 3 B+; Pi 3 A+] return 0x3f000000 + GPIO_BASE_OFFSET; case 3: // BCM2711 [Pi 4 B] return 0xfe000000 + GPIO_BASE_OFFSET; default: printf("Unrecognised revision code\n"); exit(1); }}volatile uint32_t *getBase(uint32_t reg_base) { int fd; if ((fd = open ("/dev/mem", O_RDWR | O_SYNC | O_CLOEXEC) ) < 0) return NULL; return (uint32_t *)mmap(0, /*chip->reg_size*/ 0x1000, PROT_READ|PROT_WRITE, MAP_SHARED, fd, reg_base);}void setPull(volatile uint32_t *base, unsigned int gpio, int pull) { int clkreg = GPPUDCLK0 + (gpio / 32); int clkbit = 1 << (gpio % 32); base[GPPUD] = pull; usleep(10); base[clkreg] = clkbit; usleep(10); base[GPPUD] = 0; usleep(10); base[clkreg] = 0; usleep(10);}int main(int argc, char *argv[]) { if (argc!=2) { printf("GPIO pin needed!\n"); return 1; } uint32_t reg_base = getGpioRegBase(); volatile uint32_t *base = getBase(reg_base); if (base == NULL || base == (uint32_t *)-1) { printf("Base error"); return 1; } printf("Base: %p\n", base); setPull(base, atoi(argv[1]), PULL_UP); return 0;} And here's the KML fragment that enables the pull-up (I need to remove the mmap part): #include <linux/types.h> // uint_32#include <linux/fs.h> // filp_open/filp_close#include <linux/delay.h> // udelay#define PULL_DOWN 1#define PULL_UP 2#define GPIO_BASE_OFFSET 0x00200000#define GPPUD 37#define GPPUDCLK0 38static uint32_t getGpioRegBase(bool *error) { uint8_t revision[4] = { 0 }; uint32_t cpu = 0; struct file *fd; ssize_t rc = 0; if (IS_ERR(( fd = filp_open("/proc/device-tree/system/linux,revision", O_RDONLY | O_SYNC | O_CLOEXEC, 0) ))) { *error = true; return 0; } if ((rc = kernel_read(fd, revision, sizeof(revision), 0)) == 4) cpu = (revision[2] >> 4) & 0xf; else { *error = true; return 0; } filp_close(fd, NULL); *error = false; switch (cpu) { case 0: // BCM2835 [Pi 1 A; Pi 1 B; Pi 1 B+; Pi Zero; Pi Zero W] return 0x20000000 + GPIO_BASE_OFFSET; case 1: // BCM2836 [Pi 2 B] case 2: // BCM2837 [Pi 3 B; Pi 3 B+; Pi 3 A+] return 0x3f000000 + GPIO_BASE_OFFSET; case 3: // BCM2711 [Pi 4 B] return 0xfe000000 + GPIO_BASE_OFFSET; default: *error = true; return 0; }}static volatile uint32_t *getBase(uint32_t reg_base) { struct file *fd; volatile uint32_t *r; if (IS_ERR(( fd = filp_open("/dev/mem", O_RDWR | O_SYNC | O_CLOEXEC, 0) ))) return NULL; r = (uint32_t*)mmap(0, 0x1000, PROT_READ|PROT_WRITE, MAP_SHARED, fd, reg_base); filp_close(fd, NULL); // TODO the original didn't have this return r;}static void setPull(volatile uint32_t *base, uint32_t gpio, int pull) { int clkreg = GPPUDCLK0 + (gpio / 32); int clkbit = 1 << (gpio % 32); base[GPPUD] = pull; udelay(10); base[clkreg] = clkbit; udelay(10); base[GPPUD] = 0; udelay(10); base[clkreg] = 0; udelay(10);}/** * Equivalent to 'raspi-gpio set <gpio> <pu/pd>' * @param gpio Valid GPIO pin * @param pull PULL_DOWN/PULL_UP */static int setGpioPull(uint32_t gpio, int pull) { bool error; uint32_t reg_base; volatile uint32_t *base; reg_base = getGpioRegBase(&error); if (error) return -1; base = getBase(reg_base); if (base == NULL || base == (uint32_t*)-1) return -1; setPull(base, gpio, pull); return 0;}``` | $ grep -E '(^|[^@])@([^@]|$)|@@@' file@Var1@@ words 
words words @@Var2@@ #This will fail because Var1 is wrong.@@Var1@ words words words @@Var2@@ #This will fail because Var1 is wrong.@@Var1@@ words words words @Var2@@ #This will fail because Var2 is wrong.@@Var1@@ words words words @@Var2@ #This will fail because Var2 is wrong. or: $ awk '/(^|[^@])@([^@]|$)|@@@/' file@Var1@@ words words words @@Var2@@ #This will fail because Var1 is wrong.@@Var1@ words words words @@Var2@@ #This will fail because Var1 is wrong.@@Var1@@ words words words @Var2@@ #This will fail because Var2 is wrong.@@Var1@@ words words words @@Var2@ #This will fail because Var2 is wrong. or analyzing one field at a time: $ cat tst.awk{ for (i=1; i<=NF; i++) { if ( $i ~ /^@[^@]|[^@]@$|@@@/ ) { print "Failed line:", NR, $0 print "\tbecause of field", i, $i } }} $ awk -f tst.awk fileFailed line: 2 @Var1@@ words words words @@Var2@@ #This will fail because Var1 is wrong. because of field 1 @Var1@@Failed line: 3 @@Var1@ words words words @@Var2@@ #This will fail because Var1 is wrong. because of field 1 @@Var1@Failed line: 4 @@Var1@@ words words words @Var2@@ #This will fail because Var2 is wrong. because of field 5 @Var2@@Failed line: 5 @@Var1@@ words words words @@Var2@ #This will fail because Var2 is wrong. because of field 5 @@Var2@ You don't need anything additional to find @@@ cases, the above includes finding that case too. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/671689",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/486449/"
]
} |
671,715 | I have some packages installed not from the official Debian repositories, so I was looking for something that somehow checks the integrity of the system. The debsums packackage seems to do exactly that, by checking the MD5 sums of installed Debian packages. I executed sudo debsums -a | grep -v OK which gave the following output: /usr/lib/x86_64-linux-gnu/libOpenCL.so.1.0.0 FAILED/etc/xdg/lxsession/LXDE/desktop.conf FAILED/etc/ssh/ssh_config FAILED/etc/sysctl.conf FAILED/etc/systemd/journald.conf FAILED/etc/default/ufw FAILED I noticed that these are some of the configs that I have manually touched before. There are lot more configs that I've changed but they are not shown, apparently. Does this mean that the packages of the FAILED configs above are actually altered (potentially maliciously), or an expected behavior? | $ grep -E '(^|[^@])@([^@]|$)|@@@' file@Var1@@ words words words @@Var2@@ #This will fail because Var1 is wrong.@@Var1@ words words words @@Var2@@ #This will fail because Var1 is wrong.@@Var1@@ words words words @Var2@@ #This will fail because Var2 is wrong.@@Var1@@ words words words @@Var2@ #This will fail because Var2 is wrong. or: $ awk '/(^|[^@])@([^@]|$)|@@@/' file@Var1@@ words words words @@Var2@@ #This will fail because Var1 is wrong.@@Var1@ words words words @@Var2@@ #This will fail because Var1 is wrong.@@Var1@@ words words words @Var2@@ #This will fail because Var2 is wrong.@@Var1@@ words words words @@Var2@ #This will fail because Var2 is wrong. or analyzing one field at a time: $ cat tst.awk{ for (i=1; i<=NF; i++) { if ( $i ~ /^@[^@]|[^@]@$|@@@/ ) { print "Failed line:", NR, $0 print "\tbecause of field", i, $i } }} $ awk -f tst.awk fileFailed line: 2 @Var1@@ words words words @@Var2@@ #This will fail because Var1 is wrong. because of field 1 @Var1@@Failed line: 3 @@Var1@ words words words @@Var2@@ #This will fail because Var1 is wrong. because of field 1 @@Var1@Failed line: 4 @@Var1@@ words words words @Var2@@ #This will fail because Var2 is wrong. because of field 5 @Var2@@Failed line: 5 @@Var1@@ words words words @@Var2@ #This will fail because Var2 is wrong. because of field 5 @@Var2@ You don't need anything additional to find @@@ cases, the above includes finding that case too. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/671715",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/494957/"
]
} |
671,747 | I'd like to ban this range of Chinese IP addresses in nginx: '223.64.0.0 - 223.117.255.255' I know how to ban each of /16 range like: deny 223.64.0.0/16; But it will take many lines to include the whole 223.64 - 223.117 range. Is there a shorthand notation to do so in one line? | ipcalc ( ipcalc package on Debian) can help you deaggregate a range into a number of matching CIDR s: $ ipcalc -r 223.64.0.0 - 223.117.255.255deaggregate 223.64.0.0 - 223.117.255.255223.64.0.0/11223.96.0.0/12223.112.0.0/14223.116.0.0/15 Same with that other ipcalc ( ipcalc-ng package and command name on Debian): $ ipcalc-ng -d '223.64.0.0 - 223.117.255.255'[Deaggregated networks]Network: 223.64.0.0/11Network: 223.96.0.0/12Network: 223.112.0.0/14Network: 223.116.0.0/15 That one has more options to vary the output format: $ ipcalc-ng --no-decorate -d '223.64.0.0 - 223.117.255.255'223.64.0.0/11223.96.0.0/12223.112.0.0/14223.116.0.0/15 Including json which gives endless possibilities of reformatting if combined with tools like jq : $ ipcalc-ng -j -d '223.64.0.0 - 223.117.255.255' | jq -r '.DEAGGREGATEDNETWORK[]|"deny " + . + ";"'deny 223.64.0.0/11;deny 223.96.0.0/12;deny 223.112.0.0/14;deny 223.116.0.0/15; $ ipcalc-ng -j -d '223.64.0.0 - 223.117.255.255' | jq -r '"deny " + (.DEAGGREGATEDNETWORK|join(" ")) + ";"'deny 223.64.0.0/11 223.96.0.0/12 223.112.0.0/14 223.116.0.0/15; | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/671747",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/495497/"
]
} |
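With the first ipcalc variant, whose output is a header line followed by the CIDR blocks, a short pipeline can emit ready-made nginx deny directives (this assumes the output format shown above):
$ ipcalc -r 223.64.0.0 - 223.117.255.255 | tail -n +2 | sed 's/^/deny /; s/$/;/'
deny 223.64.0.0/11;
deny 223.96.0.0/12;
deny 223.112.0.0/14;
deny 223.116.0.0/15;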
671,785 | I have a file on my Ubuntu machine where I've marked the start of some lines using '@': @Abbb@Bbbb@Dccc I want to remove the new lines at the end of lines starting '@' so the above file becomes @Abbb@Bbbb@Dccc I can match the lines starting @ using sed: sed /^@/ ... but I'm lost at trying to find the way to remove the new line character at the end of the string.There must be a way of doing this easily using sed without having to manually open the file in a text editor and remove them myself. | $ printf '%s\n' 'g/^@/j' ',p' 'Q' | ed -s file@Abbb@Bbbb@Dccc The above uses a short ed script to join each line in the file that matches the regular expression ^@ with the line after. It then prints all the lines in the editing buffer and quits without saving. $ sed '/^@/ { N; s/\n//; }' file@Abbb@Bbbb@Dccc This sed command appends the next line to the input buffer if the current line matches ^@ . The inserted newline character is removed and the line printed. Both of these variations would miss a line starting with @ if it was immediately preceded by such a line. This means, they would turn @1@2X@3Y into @1@2X@3Y rather than @1@2X@3Y It is unclear whether this is ok or not. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/671785",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/308763/"
]
} |
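For comparison, an awk alternative is to suppress the newline whenever the current line starts with @, so that the following line is appended to it. Unlike the sed/ed versions above, this joins runs of consecutive @ lines into one, and the output may lack a final newline if the file ends with an @ line:
$ awk '{ ORS = /^@/ ? "" : "\n"; print }' file
@Abbb
@Bbbb
@Dccc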
671,787 | Basically I have the same problem as described here ( https://linux.debian.user.narkive.com/csvu5OQJ/shim-init-error-for-debian-live-11-0-0-amd64-kde-iso ) but I don't have a solution. The only difference is that I'm using the firmware-11.0.0-amd64-netinst.iso image. When I try to install Debian 11 on my Fujitsu Lifebook I'll get the error message set_second_stage() failed: Invalid ParameterSomething has gone seriously wrong: shim_init() failed: Invalid Parameter Shortly after that message, my PC turns itself off. Nothing else, just this message. I can't switch to a legacy BIOS mode. The only thing I could do is disabling secure boot, but this doesn't help. I created the USB stick using Rufus, UNetbootin and dd and it doesn't make a difference. Does anybody has an idea how to fix this? Best regards | $ printf '%s\n' 'g/^@/j' ',p' 'Q' | ed -s file@Abbb@Bbbb@Dccc The above uses a short ed script to join each line in the file that matches the regular expression ^@ with the line after. It then prints all the lines in the editing buffer and quits without saving. $ sed '/^@/ { N; s/\n//; }' file@Abbb@Bbbb@Dccc This sed command appends the next line to the input buffer if the current line matches ^@ . The inserted newline character is removed and the line printed. Both of these variations would miss a line starting with @ if it was immediately preceded by such a line. This means, they would turn @1@2X@3Y into @1@2X@3Y rather than @1@2X@3Y It is unclear whether this is ok or not. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/671787",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/481147/"
]
} |
671,932 | I am running this find command: find $HOME * -depth -type d -iname *bowtie* | grep "bowtie*-*1[^2]" My expected output is: /home/user/Documents/scripts/bowtie1.3.0-linux-x86_64/home/user/Documents/scripts/bowtie-1.3.0-linux-x86_64 But it returns this: /home/user/Documents/scripts/bowtie1.3.0-linux-x86_64/home/user/Documents/scripts/bowtie-1.3.0-linux-x86_64Documents/scripts/bowtie1.3.0-linux-x86_64Documents/scripts/bowtie-1.3.0-linux-x86_64 How can I modify the find command to only show the first two results? | You are running find with several search paths that are the same. You are using $HOME and whatever names * generates. The * obviously expands to some directories that, among other things, include the names you are seeking. Suggestion: find "$HOME" -type d -name 'bowtie*1.[!2]*' The above also eliminates your grep by using the -name test more creatively. You will need to quote any patterns that should not be expanded relative to the current directory before invoking find . The pattern matches names that start with bowtie and then contain 1.n , where n is not 2 . In place of [!2] , you could use [13-9] to force the match of an integer other than 2 , not just any character other than 2 . | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/671932",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/454474/"
]
} |
671,949 | I want to search all defined functions in bash for some search string. The below is a start, but I want to then eliminate all terms that are not followed by whitespace on the next line (i.e. eliminate all entries that did not find $1 in the body of that function). fu() { declare -f | grep -e \(\) -e $1; } e.g. This output: ...tt ()untargz ()urlfix ()ver () [ -f /etc/lsb-release ] && RELEASE="$(cat /etc/lsb-release | grep DESCRIPTION | sed 's/^.*=//g' | sed 's/\"//g') ";vi. ()vi.su ()... would reduce to ...ver () [ -f /etc/lsb-release ] && RELEASE="$(cat /etc/lsb-release | grep DESCRIPTION | sed 's/^.*=//g' | sed 's/\"//g') ";... An even much much better way (if possible) would be if every matching function could be determined and displayed in full. I envision that roughly as: Collect the names of the functions with the search string in their body (the name of the function is always a single word on a line before the match, starting at ^ followed by a space then the line ending with ()$ ), then using command -V on each of those names, OR, doing a declare -f again but this time, using those names and matching everything after them from { to } (where { and } are on single lines by themselves at ^ - I know that grep/awk/sed can do amazing things to those that have such knowledge. End result would be running fu awk and it will show me the definition of every function that contains awk in the body of the function. | The following awk command on the receiving end of the pipe comes to mind: declare -f | awk -v srch="pattern" 'NF==2 && $2=="()"{if (m) print buf; buf=""; m=0} buf && index($0,srch){m=1} {buf=buf ORS $0} END{if (m) print buf}' The idea is to store the declaration and body of each function in a buffer buf while parsing the output of declare -f , but only print the buffer if the search string was found. The program will recognize the start of a new function definition if it encounters a line consisting of only two fields (= space-separated "words"), where the second field is () . If a match was found while parsing the previous function (indicated by the flag m being 1), the buffer buf will be printed. Both the buffer and the flag will be reset. The search word is passed to the program as awk variable srch . If it is found on the current line (the index function returns a non-zero result), the m flag is set to 1, but only if we are not on the line where the function declaration starts (indicated by buf not being empty), otherwise matches in the function name would also count. Every line will be appended to the buffer buf , and separated from the previous content by the output record separator ORS (which defaults to newline). At the end, another check is performed if the match was found, and if so, the buffer printed. Without that check the last function definition would never be considered. Note The program performs a full-string search by using the index() function of awk . If you want the search to be based on regular expression matching, you would need to change the condition from index($0,srch) to $0~srch (but, as always, be aware that searching for strings that contain characters which have special meanings for regular expressions becomes more cumbersome). | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/671949",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/441685/"
]
} |
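Wrapped back into the fu function from the question (the same awk program, just laid out for use inside single quotes), usage becomes fu awk, fu sed, and so on:
fu() {
  declare -f | awk -v srch="$1" '
    NF==2 && $2=="()" { if (m) print buf; buf=""; m=0 }
    buf && index($0,srch) { m=1 }
    { buf = buf ORS $0 }
    END { if (m) print buf }'
}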
672,274 | I see that this has the behavior: [root@divinity test]# echo 0 > file.txt[root@divinity test]# cat file.txt0[root@divinity test]# echo 0> file.txt[root@divinity test]# cat file.txt I also noticed that if I include "" then it works as expected: [root@divinity test]# echo 0""> file.txt[root@divinity test]# cat file.txt0 I imagine this is all just part of IO redirection, but I do not quite understand what echo 0> is doing. | In echo 0 > file.txt , with the spaces, > file.txt causes the shell to redirect standard output so that it is written to file.txt (after truncating the file if it already exists). The rest of the command, echo 0 , is run with the redirections in place. When a redirection operator is prefixed with an unquoted number, with no separation, the redirection applies to the corresponding file descriptor instead of the default. 0 is the file descriptor for standard input , so 0> file.txt redirects standard input to file.txt . (1 is standard output, 2 is standard error.) The number is treated as part of the redirection operator, and no longer as part of the command’s arguments; all that’s left is echo . You can see this happening more obviously if you include more content in the echo arguments: $ echo number 0 > file.txt$ echo number 0> file.txtnumber number shows up in the second case because standard output isn’t redirected. This variant of the redirection operator is more commonly done with standard error, 2> file.txt . Redirecting standard input in this way will break anything which actually tries to read from its standard input; if you really want to redirect standard input, while allowing writes, you can do so with the <> operator: cat 0<> file.txt | {
"score": 7,
"source": [
"https://unix.stackexchange.com/questions/672274",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/137794/"
]
} |
672,295 | Is there any way to run a program without pressing enter? I could then have a script that cd's one folder up. Then I could hold down ctrl and every time i would hit a button, that script would run. That could make life easier in the shell as I could go up the folder structure faster. And could even clear the screen each time and run ls. Or do whatever with just a single click of a button, while in the shell. I'm using bash and my terminal emulator is Linux Mint, Xfce's default. | You can do this using bash's .inputrc file, the readline startup configuration file. First, edit the file ~/.inputrc (this means a file named .inputrc in your $HOME directory; create it if it doesn't exist) and add this line: Control-u: "cd ../\n" That sets the keyboard shortcut Ctrl + u to insert cd ../ followed by a newline (enter). Now, open a new terminal and you can use Ctrl + u to move one directory up. So yes, you can hold down Ctrl and then move one directory up every time you press u . | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/672295",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/8563/"
]
} |
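If the goal is also to clear the screen and run ls, as mentioned in the question, bash's bind -x runs a shell command in the current shell (so cd takes effect) instead of inserting text. A sketch for ~/.bashrc; note that the prompt may not show the new directory until it is redrawn:
bind -x '"\C-u":"cd .. && clear && ls"'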
672,341 | I have the following in a log file: [2.09 10:23:56] [23.09 10:3:56] [23.09 10:23:56] Some other thing[23.09 10:23:56] [23.09 10:23:56] [23.09 10:23:5] [23.09 10:23:56] Something[23.09 10:23:56] and would like to remove the "empty" lines (the ones only containing the timestamps) using sed.I've tried the following: sed -i '/\[\d{1,2}\.\d{1,2} \d{1,2}:\d{1,2}:\d{1,2}\] ($|\R)/d' filesed -i '/\[[0-9][0-9]?\.[0-9][0-9]? [0-9][0-9]?:[0-9][0-9]?:[0-9][0-9]?\] \n/d' filesed -i '/\[[0-9][0-9]?\.[0-9][0-9]? [0-9][0-9]?:[0-9][0-9]?:[0-9][0-9]?\] ($|\R)/d' file but nothing seems to do the trick. Any help is appreciated! | sed '/^\[[0-9]\{1,2\}\.[0-9]\{1,2\} [0-9]\{1,2\}:[0-9]\{1,2\}:[0-9]\{1,2\}\] $/ d' sed doesn't support \d . quantifiers {...} must be backslashed (unless you use -E ) the alternative | must be backslashed (unless you use -E ) the optional sign ? must be backslashed (unless you use -E ) | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/672341",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/60175/"
]
} |
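Equivalently, with -E (extended regular expressions) the backslashes before the braces are dropped, as the notes above indicate:
sed -E -i '/^\[[0-9]{1,2}\.[0-9]{1,2} [0-9]{1,2}:[0-9]{1,2}:[0-9]{1,2}\] $/d' file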
672,426 | I have a shell script which logs the performance of various programs passed to it as arguments, to help me choose the most performant one. The relevant snippet: #!/usr/bin/env shfunction cal_perf () { real_t=0 user_t=0 sys_t=0 for (( trial=1 ; trial<=$3 ; ++trial )) ; do shopt -s lastpipe /usr/bin/time -f '%e:%S:%U' $1 $2 |& IFS=":" read real sys user real_t=$(echo "$real + $real_t" | bc) user_t=$(echo "$user + $user_t" | bc) sys_t=$(echo "$sys + $sys_t" | bc) done real_t=$(echo "scale=2 ; $real_t / $3" | bc) user_t=$(echo "scale=2 ; $user_t / $3" | bc) sys_t=$(echo "scale=2 ; $sys_t / $3" | bc) printf "%s\t%d\t%.2f\t%.2f\t%.2f\n" $2 $3 $real_t $user_t $sys_t >> timings_$(date '+%Y%m%d')}# mainprintf "program\t#trials\treal_time_am\tuser_time_am\tsys time_am\n" > timings_$(date '+%Y%m%d')translator=$1shiftwhile [ $# -gt 1 ] ; do cal_perf $translator $1 ${!#} ; shift ; done It's supposed to be run on the command line as follows: perf <translator_progam> <list_of_programs_to_compare> <number_of_trials> ...for instance: suppose I want to compare the performances of xip.py, foo.py, bar.py, bas.py, qux.py --the net content of the working directory--and run them each for 50 times before generating the stats; I'd invoke the script as: perf python *py 50 I think I am missing something obvious here, but when I invoke this script as bash $HOME/bin/perf ... everything works as intended. However, the following two invocations fail (error attached): perf ... or even placing it in the working directory and invoking as ./perf ... changing the shebang to /usr/bin/env bash solves this problem, but /usr/bin/sh points to /usr/bin/bash on my system. | You are running the script with sh as the interpreter, but you are using bash features. Even assuming that sh points to bash , you still cannot do this as bash disables many of its custom features when asked to run as sh . One solution is to declare the script correctly. You're using bash features so declare it as a bash script. #!/bin/bash Or, if you need continue with a dependency on env , #!/usr/bin/env bash Another option is to remove the bash -specific syntax. In either case, don't forget to ensure variables are enclosed inside double-quotes when you use them. For example, if one of your unquoted variables (including script parameters) contains a space it will be split by the shell into two or more words. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/672426",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/459222/"
]
} |
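A short sketch of how the quoting advice in this answer would look when applied to the script's main loop (illustrative only, not a complete rewrite of the script):

#!/usr/bin/env bash
translator="$1"
shift
while [ "$#" -gt 1 ] ; do
    cal_perf "$translator" "$1" "${!#}"
    shift
done

With the parameters double-quoted, program names containing spaces reach cal_perf as single arguments instead of being split into several words by the shell.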
672,609 | what is the command or how would you make a process into a service in Linux? isn't a service basically a daemon? | An example of a user service is the easiest way to describe how to do this. Let's assume that you have a binary or a script, called mytask , that you want to run as a service, and it is located in /usr/local/bin/ . Create a systemd unit file, called my_example.service , in your home directory, ~/.config/systemd/user/ , with the following contents:

[Unit]
Description=[My example task]

[Service]
Type=simple
StandardOutput=journal
ExecStart=/usr/local/bin/mytask

[Install]
WantedBy=default.target

The line ExecStart is the most relevant, as it is in this line that you specify the path to your binary or script that you want to run. To make your service start automatically upon boot, run

systemctl --user enable my_example.service

If you want to start the service immediately, without rebooting, run

systemctl --user start my_example.service

If you want to stop the service, run

systemctl --user stop my_example.service

To check the status of your service, run

systemctl --user status my_example.service
 | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/672609",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/496227/"
]
} |
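Two follow-up commands that commonly accompany the setup described in this answer (standard systemd tooling, not something specific to this answer): because the unit sets StandardOutput=journal, the service's output can be read back from the user journal, and after editing the unit file systemd must be told to re-read it.

journalctl --user -u my_example.service
systemctl --user daemon-reload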
672,612 | I want to find a string ("AAA") in a specific file extension ("*.txt") inside a directory tree (../MyParentFolder), and replace it with the subfolder name (MySubfolder). I know a similar question is asked here but I cannot make the jump to replacing with the subfolder name. | An example of a user service is the easiest way to describe how to do this. Let's assume that you have a binary or a script, called mytask , that you want to run as a service, and it is located in /usr/local/bin/ . Create a systemd unit file, called my_example.service , in your home directory, ~/.config/systemd/user/ , with the following contents:

[Unit]
Description=[My example task]

[Service]
Type=simple
StandardOutput=journal
ExecStart=/usr/local/bin/mytask

[Install]
WantedBy=default.target

The line ExecStart is the most relevant, as it is in this line that you specify the path to your binary or script that you want to run. To make your service start automatically upon boot, run

systemctl --user enable my_example.service

If you want to start the service immediately, without rebooting, run

systemctl --user start my_example.service

If you want to stop the service, run

systemctl --user stop my_example.service

To check the status of your service, run

systemctl --user status my_example.service
 | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/672612",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/496366/"
]
} |
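The response stored for this entry repeats the systemd user-service answer from the previous record and does not actually address the find-and-replace question. A minimal sketch of one possible approach, assuming GNU sed (for -i), file and directory names without newlines, and a replacement string containing no characters special to sed (such as / or &):

find ../MyParentFolder -type f -name '*.txt' | while IFS= read -r f; do
    sub=$(basename "$(dirname "$f")")
    sed -i "s/AAA/$sub/g" "$f"
done

Here every *.txt file under ../MyParentFolder has each occurrence of AAA replaced by the name of the directory that directly contains that file (e.g. MySubfolder).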
672,663 | I am using Kubuntu 20.04. When I run sudoedit /etc/fstab , VS Code opens to a blank document and the CLI immediately returns (see details below). If I run export SUDO_EDITOR=nano , the document opens in the nano editor with the contents of /etc/fstab as expected. If I run export SUDO_EDITOR=/snap/bin/code , it once again opens VS Code with a blank document. What am I doing wrong? Or is this a bug?

kevin@kevcoder00 ~ $ echo $VISUAL

kevin@kevcoder00 ~ $ echo $SUDO_EDITOR

kevin@kevcoder00 ~ $ echo $EDITOR
/snap/bin/code
kevin@kevcoder00 ~ $ sudoedit /etc/fstab 
[sudo] password for kevin: 
sudoedit: /etc/fstab unchanged
 | You need to tell the editor to wait:

SUDO_EDITOR="/snap/bin/code --wait" sudoedit /etc/fstab

Without that option, VS Code forks, or notifies an already-running instance, and control immediately returns to sudoedit . The latter sees that nothing has changed and deletes the temporary copy that is used for editing purposes. (Snap might contribute to the effect, but VS Code on its own requires this.) See also How to properly edit system files (as root) in GUI (and CLI) in Gnu/Linux? | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/672663",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/10651/"
]
} |
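To make the fix from this answer persistent rather than typing the variable each time, the assignment can be exported from a shell startup file; a sketch, assuming a Bourne-style login configuration such as ~/.profile:

export SUDO_EDITOR="/snap/bin/code --wait"

After starting a new login session, a plain sudoedit /etc/fstab should then open the temporary copy in VS Code and write the changes back once the editor tab is closed.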
672,771 | I wish to extract the elapsedTime attribute values from the file. Records look like:

{"realm":"/test","transactionId":"9e26c614","elapsedTime":17,"elapsedTimeUnits":"MILLISECONDS","_id":"9e26c6asdasd"}

The file I am having is in GBs and I want to get the values greater than 10000. I tried to grep but due to the colon grep is not working:

grep -wo --color 'elapsedTime' fileName -> this just prints attribute names
grep -w --color "elapsedTime" fileName -> this just highlights the attribute.
 | The data is in JSON format so it's best to use a parser that understands this format. This will pick out the elapsedTime value from the JSON in the file /tmp/data

jq .elapsedTime /tmp/data
17

This will pick out only those values larger than 10000

jq '.elapsedTime | select(. > 10000)' /tmp/data

If you really cannot use jq then a sed|awk construct can be considered. However, this requires that there must be only one elapsedTime label and associated value per line. There may be other caveats and I really do not recommend it, but if you're desperate here it is,

sed -En 's/^.*"elapsedTime":\s*"?([0-9]+).*$/\1/p' /tmp/data | awk '$1 > 10000'

In response to a follow-up question ( comment ), to pick out two elements you need to filter on a single element from the object, and then display the required elements:

jq -r 'select (.elapsedTime > 10000) | [ .elapsedTime, .transactionId ] | @tsv ' /tmp/data | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/672771",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/333713/"
]
} |
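One further hedged example building on this answer: assuming, as the sample record suggests, one JSON object per line, the number of entries exceeding the threshold can be counted in a single pass without treating the multi-gigabyte file as one document:

jq -n '[inputs | select(.elapsedTime > 10000)] | length' /tmp/data

Here -n stops jq from reading an initial input automatically, inputs then streams the remaining objects one by one, and only the matching records are collected before length counts them.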