source_id (int64) | question (string) | response (string) | metadata (dict) |
---|---|---|---|
467,039 | Given this minimal example ( echo "LINE 1" ; sleep 1 ; echo "LINE 2" ; ) it outputs LINE 1 and then, after one second, outputs LINE 2 , as expected . If we pipe this to grep LINE ( echo "LINE 1" ; sleep 1 ; echo "LINE 2" ; ) | grep LINE the behavior is the same as in the previous case, as expected . If, alternatively, we pipe this to cat ( echo "LINE 1" ; sleep 1 ; echo "LINE 2" ; ) | cat the behavior is again the same, as expected . However , if we pipe to grep LINE , and then to cat , ( echo "LINE 1" ; sleep 1 ; echo "LINE 2" ; ) | grep LINE | cat there is no output until one second passes, and both lines appear on the output immediately, which I did not expect . Why is this happening and how can I make the last version to behave in the same way as the first three commands? | When (at least GNU) grep ’s output is not a terminal, it buffers its output, which is what causes the behaviour you’re seeing. You can disable this either using GNU grep ’s --line-buffered option: ( echo "LINE 1" ; sleep 1 ; echo "LINE 2" ; ) | grep --line-buffered LINE | cat or the stdbuf utility: ( echo "LINE 1" ; sleep 1 ; echo "LINE 2" ; ) | stdbuf -oL grep LINE | cat Turn off buffering in pipe has more on this topic. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/467039",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/249779/"
]
} |
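To see the buffering difference from the answer above for yourself, the three variants below can be run side by side; this assumes GNU grep and the coreutils stdbuf utility, as in the answer.

```bash
# Buffered: nothing appears until the subshell exits, then both lines at once.
( echo "LINE 1"; sleep 1; echo "LINE 2" ) | grep LINE | cat

# Line-buffered grep: each line appears as soon as it is produced.
( echo "LINE 1"; sleep 1; echo "LINE 2" ) | grep --line-buffered LINE | cat

# Same effect by forcing line-buffered stdout on grep with stdbuf.
( echo "LINE 1"; sleep 1; echo "LINE 2" ) | stdbuf -oL grep LINE | cat
```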
467,124 | I have a pdf document which covers half of an A4 page. Is there a way to duplicate the text/images of that document, such that I get a single page A4 document (pdf file) with twice the orginal content, one above each other? Since the original only covers half a page, no scaling should be necessary. | You can use a combination of pdfjam and pdftk to do this: pdfjam --offset '0mm -148.5mm' half-a4.pdf --outfile other-a4.pdfpdftk half-a4.pdf stamp other-a4.pdf output double.pdf pdfjam is being used to shift the page down half a page (A4 = 297mm tall, and 297÷2=148.5). If you need to shift the other way, you'd use -110mm 0mm . Then pdftk puts the two pages on top of each other. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/467124",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/308720/"
]
} |
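The two commands from the answer above, spelled out as a small hedged script; the file names are placeholders, and since A4 is 297 mm tall, shifting up rather than down would simply use a positive 148.5 mm vertical offset.

```bash
#!/bin/bash
# Duplicate a half-height A4 page so the same content appears twice on one page.
in=half-a4.pdf        # placeholder input name

# Shift a copy of the page down by half an A4 page (297 mm / 2 = 148.5 mm).
pdfjam --offset '0mm -148.5mm' "$in" --outfile shifted-a4.pdf

# Overlay the shifted copy onto the original page.
pdftk "$in" stamp shifted-a4.pdf output double.pdf
```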
467,247 | I'm trying to escape the following code with the echo command but I keep getting the actual octet and not the emoji. Also where could I find the octet values of the emojis? I seem to always find the UTF-8 values. #!/usr/bin/env bashUNICORN='\360\237\246\204\n'FIRE=''# this does not work when I run the scriptprintf '\360\237\246\204\n'printf "Riding a ${UNICORN:Q}" echo "Riding a ${UNICORN:Q}" #[Fails]: how to extract the actual emoji? EDIT_1: Just updating the code after reading comments #!/usr/bin/env bash# Note: use hexdump -b to get one-bye octal displayUNICORN_UTF8=$'\360\237\246\204'printf "U1F525\n"|hexdump -b # [ASK]: How to translate the return value to a valid UTF8 ?FIRE_UTF8=$'\125\061\106\065\062\065\012'echo "Riding a ${UNICORN_locale_encoding}"echo "${UNICORN_UTF8} + ${FIRE_UTF8}" EDIT_2: Posting final code. It sort of works. #!/usr/bin/env bash# Author:# Usage:# Note: use hexdump -b to get one-bye octal display of the emoji (needed for when ≠ computers use ≠ commandLine tools) # Ex: printf "U1F525\n"|hexdump -v -e '"\\" 1/1 "%03o"' ; echo UNICORN_UTF8=$'\360\237\246\204'FIRE_UTF8=$'\xF0\x9F\x94\xA5'LEAVE_SPACE=\^[a-zA-Z0-9_]*$\echo "Riding an ${UNICORN_UTF8} ${LEAVE_SPACE} out of a ${FIRE_UTF8} ${LEAVE_SPACE} house." | echo 's syntax is different from the standard C escapes as supported by printf / awk / $'...' ... In standard echo syntax, you need a leading 0 in front of the octal sequence (which can have from 1 to 3 digits)¹: echo '\0360\0237\0246\0204' Note that for bash 's echo builtin to work with that, you need to enable the xpg_echo option²: $ UNICORN_utf8_printf_format='\360\237\246\204'$ UNICORN_utf8_echo='\0360\0237\0246\0204'$ UNICORN_utf8=$'\360\237\246\204'$ printf "$UNICORN_utf8_printf_format\n"$ printf '%s\n' "$UNICORN_utf8"$ shopt -s xpg_echo$ echo "$UNICORN_utf8_echo" Above, only $UNICORN_utf8 contains a character, encoded in UTF8. The other ones contain sequences of backslash and digits that are meant to be expanded by the respective tools. The %b format of the printf utility also understands the same sequences as echo . %b was actually added so we can get rid of echo which is impossible to use portably and reliably . $ printf '%b\n' "$UNICORN_utf8_echo" See also (in zsh and bash ³): UNICORN_locale_encoding=$'\U1f984' Which gets you a Unicorn encoded in the locale's encoding, which would make it work even if the locale's encoding was not UTF-8 and also had that character (probably only GB18030, where is encoded as $'\225\60\330\66' and where $'\360\237\246\204' would be the encoding of 馃 ( \N{CJK UNIFIED IDEOGRAPH-9983}\N{<private use area-E6E9>} )). Some printf implementations (including GNU printf and the printf builtin of zsh , ksh93 and recent versions of bash (4.2 or above)) also support those \UXXXXXXXX escape sequences in their format argument (or arguments to %b except with ksh93); the GNU one needs 8 digits. ¹ GNU coreutils echo and busybox echo support \ooo with -e as an extension (not when POSIXLY_CORRECT is in the environment for GNU echo ) ² other option would be to use the non-standard -e option, but then it wouldn't work when both the posix and xpg_echo options are enabled, like when bash is in UNIX compliance mode. ³ ksh93 and mksh also support that syntax, but encode in UTF-8 regardless of the locale's encoding; in current (2018) versions of FreeBSD sh , you need \U0001f984 and it only works in UTF-8 locales. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/467247",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/309162/"
]
} |
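A compact, hedged demonstration of the forms discussed in the answer above; it assumes bash 4.2 or later for the \U escape in the printf builtin, and a UTF-8 locale.

```bash
#!/bin/bash
# The same unicorn character produced several ways.
UNICORN_utf8=$'\360\237\246\204'     # $'...' expands the octal bytes itself
printf '%s\n' "$UNICORN_utf8"

printf '\360\237\246\204\n'          # printf's format expands \ooo (no leading 0)

printf '%b\n' '\0360\0237\0246\0204' # %b uses echo-style \0ooo sequences

printf '\U0001F984\n'                # bash >= 4.2: \U in the format argument

shopt -s xpg_echo                    # make the echo builtin honour \0ooo
echo '\0360\0237\0246\0204'
```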
467,362 | Booting from a kernel which I recompiled with a custom .config , I got the the following kmsg(ie. dmesg ) message: systemd[1]: File /usr/lib/systemd/system/systemd-journald.service:35 configures an IP firewall (IPAddressDeny=any), but the local system does not support BPF/cgroup based firewalling.systemd[1]: Proceeding WITHOUT firewalling in effect! (This warning is only shown for the first loaded unit using IP firewalling.) What kernel .config options do I need to fix this? | First enable CONFIG_BPF_SYSCALL=y ┌── Enable bpf() system call ─────────────────────────────────┐│ ││ CONFIG_BPF_SYSCALL: ││ ││ Enable the bpf() system call that allows to manipulate eBPF ││ programs and maps via file descriptors. ││ ││ Symbol: BPF_SYSCALL [=y] ││ Type : bool ││ Prompt: Enable bpf() system call ││ Location: ││ -> General setup ││ Defined at init/Kconfig:1414 ││ Selects: ANON_INODES [=y] && BPF [=y] && IRQ_WORK [=y] ││ Selected by [n]: ││ - AF_KCM [=n] && NET [=y] && INET [=y] │└─────────────────────────────────────────────────────────────┘ ^ that allows you to then also enable CONFIG_CGROUP_BPF=y : ┌── Support for eBPF programs attached to cgroups ─────────────────┐│ ││ CONFIG_CGROUP_BPF: ││ ││ Allow attaching eBPF programs to a cgroup using the bpf(2) ││ syscall command BPF_PROG_ATTACH. ││ ││ In which context these programs are accessed depends on the type ││ of attachment. For instance, programs that are attached using ││ BPF_CGROUP_INET_INGRESS will be executed on the ingress path of ││ inet sockets. ││ ││ Symbol: CGROUP_BPF [=y] ││ Type : bool ││ Prompt: Support for eBPF programs attached to cgroups ││ Location: ││ -> General setup ││ -> Control Group support (CGROUPS [=y]) ││ Defined at init/Kconfig:845 ││ Depends on: CGROUPS [=y] && BPF_SYSCALL [=y] ││ Selects: SOCK_CGROUP_DATA [=y] │└──────────────────────────────────────────────────────────────────┘ That's all that's necessary for those systemd messages to go away. When you select the above, this is what happens in .config : Before: # CONFIG_BPF_SYSCALL is not set After: CONFIG_BPF_SYSCALL=y# CONFIG_XDP_SOCKETS is not set# CONFIG_BPF_STREAM_PARSER is not setCONFIG_CGROUP_BPF=yCONFIG_BPF_EVENTS=y Two more options become available: CONFIG_XDP_SOCKETS and CONFIG_BPF_STREAM_PARSER but it's not necessary to enable them. But if you're wondering what they are about: ┌── XDP sockets ────────────────────────────────────────┐│ ││ CONFIG_XDP_SOCKETS: ││ ││ XDP sockets allows a channel between XDP programs and ││ userspace applications. ││ ││ Symbol: XDP_SOCKETS [=n] ││ Type : bool ││ Prompt: XDP sockets ││ Location: ││ -> Networking support (NET [=y]) ││ -> Networking options ││ Defined at net/xdp/Kconfig:1 ││ Depends on: NET [=y] && BPF_SYSCALL [=y] │└───────────────────────────────────────────────────────┘┌── enable BPF STREAM_PARSER ───────────────────────────────────────────┐│ ││ CONFIG_BPF_STREAM_PARSER: ││ ││ Enabling this allows a stream parser to be used with ││ BPF_MAP_TYPE_SOCKMAP. ││ ││ BPF_MAP_TYPE_SOCKMAP provides a map type to use with network sockets. ││ It can be used to enforce socket policy, implement socket redirects, ││ etc. 
││ ││ Symbol: BPF_STREAM_PARSER [=n] ││ Type : bool ││ Prompt: enable BPF STREAM_PARSER ││ Location: ││ -> Networking support (NET [=y]) ││ -> Networking options ││ Defined at net/Kconfig:301 ││ Depends on: NET [=y] && BPF_SYSCALL [=y] ││ Selects: STREAM_PARSER [=m] │└───────────────────────────────────────────────────────────────────────┘ If wondering why CONFIG_BPF_EVENTS=y : ┌── Search Results ───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┐│ ││ Symbol: BPF_EVENTS [=y] ││ Type : bool ││ Defined at kernel/trace/Kconfig:476 ││ Depends on: TRACING_SUPPORT [=y] && FTRACE [=y] && BPF_SYSCALL [=y] && (KPROBE_EVENTS [=n] || UPROBE_EVENTS [=y]) && PERF_EVENTS [=y] │└─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┘ Kernel tested 4.18.5 on a Fedora 28 AppVM inside Qubes OS 4.0 | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/467362",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/-1/"
]
} |
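Before recompiling, it can be worth checking whether the currently running kernel already has these symbols; a hedged check that assumes the config is shipped in /boot or exposed as /proc/config.gz.

```bash
# Installed kernel's build configuration:
grep -E 'CONFIG_BPF_SYSCALL|CONFIG_CGROUP_BPF' /boot/config-"$(uname -r)" 2>/dev/null

# Some kernels expose the configuration here instead:
zgrep -E 'CONFIG_BPF_SYSCALL|CONFIG_CGROUP_BPF' /proc/config.gz 2>/dev/null
```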
467,407 | I have a pipeline like this: command1 | command2 Is there a way to trace both commands simultaneously? | You can get a single trace with: strace -f sh -c 'command1 | command2' The "-f" will "follow" fork calls into the child processes (so you'll also get any sub-commands invoked by command1 or command2, which may or may not be what you want.) Also, you'll get a trace of the sh process too. If you want each processes output in a separate file, the "-ff" option will trace subprocesses and append the PID to the "-o" filename, as in: strace -ff -o trace sh -c 'command1 | command2' This should create separate trace.<PID> files for each forked child. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/467407",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/201820/"
]
} |
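A hedged side-by-side of the two forms from the answer above; command1 and command2 are placeholders for the real pipeline.

```bash
# One combined trace (sh, command1, command2 and any children) in a single file:
strace -f -o combined.trace sh -c 'command1 | command2'

# One trace file per process, named combined.trace.<PID>:
strace -ff -o combined.trace sh -c 'command1 | command2'
ls combined.trace.*
```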
467,412 | I have a file that has a list of paths like so: "1" "/user/bin/share""2" "/home/user/.local""3" "/root/" Is there a way to extract just the paths? I dont want the numbers or quotation marks. How can I sed or grep the paths out of the file? What regex would be required for such a task? | If all the paths start with / , you could just match / followed by a sequence of non- " characters: $ grep -o '/[^"]*' file/user/bin/share/home/user/.local/root/ Alternatively, for a more structured approach, use awk to strip quotes from and print just the second field: awk '{gsub(/"/,"",$2); print $2}' file | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/467412",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/22661/"
]
} |
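For completeness, a hedged sed variant that keeps only the second quoted field; it should behave like the two commands in the answer above on the sample input from the question.

```bash
# Strip the leading number and all quotes, keeping just the path:
sed 's/^"[^"]*" *"\([^"]*\)".*/\1/' file

# Equivalent approaches from the answer:
grep -o '/[^"]*' file
awk '{gsub(/"/,"",$2); print $2}' file
```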
467,474 | I don't know what I was trying to do but I basically deleted yum.conf . I found an old config for yum on github but it still doesn't work. What do I do? I am using Centos 7. | Although I've no idea what was originally in your /etc/yum.conf , try placing this generic/vanilla content in there. $ cat /etc/yum.conf[main]cachedir=/var/cache/yum/$basearch/$releaseverkeepcache=0debuglevel=2logfile=/var/log/yum.logexactarch=1obsoletes=1gpgcheck=1plugins=1installonly_limit=5bugtracker_url=http://bugs.centos.org/set_project.php?project_id=23&ref=http://b ugs.centos.org/bug_report_page.php?category=yumdistroverpkg=centos-release$ | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/467474",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/309569/"
]
} |
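If you would rather recreate the file non-interactively, a hedged here-document version of the same stock CentOS 7 content follows (the bugtracker_url belongs on a single line, without the stray space that crept into the listing above).

```bash
sudo tee /etc/yum.conf > /dev/null <<'EOF'
[main]
cachedir=/var/cache/yum/$basearch/$releasever
keepcache=0
debuglevel=2
logfile=/var/log/yum.log
exactarch=1
obsoletes=1
gpgcheck=1
plugins=1
installonly_limit=5
bugtracker_url=http://bugs.centos.org/set_project.php?project_id=23&ref=http://bugs.centos.org/bug_report_page.php?category=yum
distroverpkg=centos-release
EOF
```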
467,507 | When I try to connect to x2goserver I get error: Any idea how to solve it or what are the possible causes? Both client and remote computers are running Manjaro x64 XFCE and are located in the same LAN network. | It looks like you forgot to create the database. sudo x2godbadmin --createdb | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/467507",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/57885/"
]
} |
467,582 | I am running RHEL6. I have a requirement to have "umask 077" in my /etc/bashrc which I am not allowed to change. We have a folder designated for group collaboration where we would like everyone in the same group to be able to rwx. Therefore, users must set "umask 002" manually or in their local .bashrc file or remember to chmod. They often forget and the administrator gets called upon to "fix" permissions because the owner of the file is not available. Is there a way I can force the folder to "umask 002"?I've read that I should use setfacl but I think umask overrides this. | See How do I force group and permissions for created files inside a specific directory? What I tested was to create a directory /var/test. I set the group to be tgroup01. I made sure anything created under /var/test would be set to the tgroup01 group. I then made sure the default group permissions for anything underneath /var/test were rwx. sudo mkdir /var/testsudo chgrp tgroup01 /var/testsudo chmod 2775 /var/testsudo setfacl -m "default:group::rwx" If I then create a directory foo or touch a file blah, they have the correct permissions ls -al /var/testdrwxrwsr-x+ 3 root tgroup01 .drwxr-xr-x 5 root root ..-rw-rw-r-- 1 userA tgroup01 blahdrwxrwxr-x+ 2 userA tgroup01 foo | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/467582",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/212363/"
]
} |
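The same steps gathered into one hedged setup script; note that setfacl takes the target directory as its final argument (the path is missing from the last command in the answer above), and getfacl can be used to verify the result.

```bash
#!/bin/bash
# Shared collaboration directory: setgid bit plus a default group ACL.
dir=/var/test      # placeholder path
group=tgroup01     # placeholder group

sudo mkdir -p "$dir"
sudo chgrp "$group" "$dir"
sudo chmod 2775 "$dir"                       # setgid: new files inherit the group
sudo setfacl -m "default:group::rwx" "$dir"  # default ACL: group gets rwx on new files

getfacl "$dir"                               # verify the default: entries
```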
467,585 | Just updated my PC from the last LTS Ubuntu distro to 18.04LTS, and the stylus on my Wacom Wireless Bamboo tablet immediately lost all button function. The system seems to be tracking the stylus itself, as moving it over the pad causes the cursor to move around the screen, but touching the stylus to the pad and/or clicking the button on the stylus body gets no response. The touchpad function on the tablet itself, on the other hand, responds to finger drags and taps as cursor movement and clicks, respectively. The A/B buttons at the foot of the tablet do not respond, same as the stylus buttons. Deets as follows: Tablet Model: Wacom Bamboo Pad (Wireless), CTH-300/K System: Ubuntu 18.04.1 LTS, 64-bit Devices > Wacom Tablet: Displays "No stylus found / Please move your stylus to the proximity of the tablet to configure it" and doesn't respond to stylus cursor movement in that area. Tablet > Wacom Bamboo Pad Wireless > Tracking Mode is "Tablet (absolute)". Trying to use "Map Buttons..." to set the A/B tablet buttons doesn't work; they display on screen for mapping but don't respond to being pressed. libwacom-list-local-devices: One point I noticed: Libwacom lists the tablet stylus as 0xfffff;0xffffe , which are the codes for a default standard stylus and a stylus with an eraser. The stylus for the Bamboo Pad doesn't have an eraser or a rocker button, and should be set as type 0xffffd . Not sure if this means it's related to this bug or not. [Device]Name=Wacom Bamboo Pad WirelessDeviceMatch=usb:056a:0319;Class=BambooWidth=4Height=3IntegratedIn=Layout=bamboo-pad.svgStyli=0xfffff;0xffffe;[Features]Reversible=falseStylus=trueRing=falseRing2=falseTouch=trueTouchSwitch=falseStatusLEDs=NumStrips=0Buttons=2[Buttons]Left=Right=Top=Bottom=A;B;Touchstrip=Touchstrip2=OLEDs=Ring=Ring2=EvdevCodes=0x110;0x111;RingNumModes=0Ring2NumModes=0StripsNumModes=0---------------------------------------------------------------[Device]Name=Wacom Bamboo Pad WirelessDeviceMatch=usb:056a:0319;Class=BambooWidth=4Height=3IntegratedIn=Layout=bamboo-pad.svgStyli=0xfffff;0xffffe;[Features]Reversible=falseStylus=trueRing=falseRing2=falseTouch=trueTouchSwitch=falseStatusLEDs=NumStrips=0Buttons=2[Buttons]Left=Right=Top=Bottom=A;B;Touchstrip=Touchstrip2=OLEDs=Ring=Ring2=EvdevCodes=0x110;0x111;RingNumModes=0Ring2NumModes=0StripsNumModes=0--------------------------------------------------------------- xinput --list: Also lists an eraser that isn't there ⎡ Virtual core pointer id=2 [master pointer (3)]⎜ ↳ Virtual core XTEST pointer id=4 [slave pointer (2)]⎜ ↳ Microsoft Microsoft® 2.4GHz Transceiver v7.0 id=9 [slave pointer (2)]⎜ ↳ Microsoft Microsoft® 2.4GHz Transceiver v7.0 id=10 [slave pointer (2)]⎜ ↳ Wacom Wireless Bamboo PAD Pen stylus id=11 [slave pointer (2)]⎜ ↳ Wacom Wireless Bamboo PAD Finger touch id=15 [slave pointer (2)]⎜ ↳ Wacom Wireless Bamboo PAD Pen eraser id=12 [slave pointer (2)]⎣ Virtual core keyboard id=3 [master keyboard (2)] ↳ Virtual core XTEST keyboard id=5 [slave keyboard (3)] ↳ Power Button id=6 [slave keyboard (3)] ↳ Power Button id=7 [slave keyboard (3)] ↳ Microsoft Microsoft® 2.4GHz Transceiver v7.0 id=8 [slave keyboard (3)] ↳ Microsoft Microsoft® 2.4GHz Transceiver v7.0 id=13 [slave keyboard (3)] ↳ Microsoft Microsoft® 2.4GHz Transceiver v7.0 id=14 [slave keyboard (3)] xinput props: Noted that there are far more buttons listed than the stylus actually has. 
/usr/share/libwacom$ xinput --list-props "Wacom Wireless Bamboo PAD Pen stylus"Device 'Wacom Wireless Bamboo PAD Pen stylus': Device Enabled (139): 1 Coordinate Transformation Matrix (141): 1.000000, 0.000000, 0.000000, 0.000000, 1.000000, 0.000000, 0.000000, 0.000000, 1.000000 Device Accel Profile (270): 0 Device Accel Constant Deceleration (271): 1.000000 Device Accel Adaptive Deceleration (272): 1.000000 Device Accel Velocity Scaling (273): 10.000000 Device Node (262): "/dev/input/event5" Wacom Tablet Area (299): 0, 0, 10690, 6680 Wacom Rotation (300): 0 Wacom Pressurecurve (301): 0, 0, 100, 100 Wacom Serial IDs (302): 793, 1, 2, 0, 0 Wacom Serial ID binding (303): 0 Wacom Pressure Threshold (304): 26 Wacom Sample and Suppress (305): 2, 4 Wacom Enable Touch (306): 1 Wacom Hover Click (307): 1 Wacom Enable Touch Gesture (308): 0 Wacom Touch Gesture Parameters (309): 0, 0, 250 Wacom Tool Type (310): "STYLUS" (292) Wacom Button Actions (311): "Wacom button action 0" (312), "Wacom button action 1" (313), "Wacom button action 2" (314), "None" (0), "None" (0), "None" (0), "None" (0), "Wacom button action 3" (315) Wacom button action 0 (312): 1572865 Wacom button action 1 (313): 1572866 Wacom button action 2 (314): 1572867 Wacom button action 3 (315): 1572872 Wacom Pressure Recalibration (316): 1 Wacom Panscroll Threshold (317): 1209 Device Product ID (263): 1386, 793 Wacom Debug Levels (318): 0, 0 xinput test "Wacom Wireless Bamboo PAD Pen stylus": Pen movement: motion a[0]=7676 a[1]=3667 a[2]=0 a[3]=0 a[4]=0 a[5]=-900 motion a[0]=7663 a[1]=3660 a[2]=0 a[3]=0 a[4]=0 a[5]=-900 motion a[0]=7656 a[1]=3650 a[2]=0 a[3]=0 a[4]=0 a[5]=-900 motion a[0]=7657 a[1]=3642 a[2]=0 a[3]=0 a[4]=0 a[5]=-900 motion a[0]=7669 a[1]=3637 a[2]=0 a[3]=0 a[4]=0 a[5]=-900 motion a[0]=7688 a[1]=3635 a[2]=0 a[3]=0 a[4]=0 a[5]=-900 (etc.) Pen "click"/touch: motion a[0]=7658 a[1]=3641 a[2]=31164 a[3]=0 a[4]=0 a[5]=-900 Pen button click: No noticeable response If anyone has any ideas on how to proceed, I'd appreciate it. I'd like to try to remove the extra button mapping, reassign libwacom's styli code for the tablet, and/or find out exactly what the button action codes correspond to, but I'm having difficulty finding information on how to do any of that. (And I'd rather not kill what little functionality I still have.) I've also had issues with assigning persistent settings to this tablet in the past , though I'm not sure if that has anything to do with what's going on now. | See How do I force group and permissions for created files inside a specific directory? What I tested was to create a directory /var/test. I set the group to be tgroup01. I made sure anything created under /var/test would be set to the tgroup01 group. I then made sure the default group permissions for anything underneath /var/test were rwx. sudo mkdir /var/testsudo chgrp tgroup01 /var/testsudo chmod 2775 /var/testsudo setfacl -m "default:group::rwx" If I then create a directory foo or touch a file blah, they have the correct permissions ls -al /var/testdrwxrwsr-x+ 3 root tgroup01 .drwxr-xr-x 5 root root ..-rw-rw-r-- 1 userA tgroup01 blahdrwxrwxr-x+ 2 userA tgroup01 foo | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/467585",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/183905/"
]
} |
467,593 | If I want to delete everything in the current directory except for "myfile", I can use rm -r !("myfile") But if I put this in a script (called cleanup ): #!/bin/bashrm -r !("myfile") I get: pi@raspberrypi:~/tmp $ ./cleanup./cleanup: line 2: syntax error near unexpected token `('./cleanup: line 2: `rm -r !("file2")' If I run ps -p $$ I can see that my terminal is using bash, PID TTY TIME CMD1345 pts/3 00:00:02 bash so I'm unclear on what the problem is. Notes: I realize that if the script actually worked, it would delete itself. So, my script really would be something more like: rm -r !("cleanup"|"myfile") , but the error message is the same either way. As shown by the blockquote, this is on a Raspbian OS (9 - stretch), which is Debian based. I feel like this question is bound to be a duplicate, but I can't find it. There is a similarly named question , but it is in regards to inheriting variables, and so doesn't address my issue. | The !( pattern-list ) pattern is an extended glob . Many distros have it enabled for interactive shells, but not for non-interactive ones. You can check that with $ shopt extglobextglob on$ bash -c 'shopt extglob'extglob off To fix your script, you have to turn it on: add shopt -s extglob at the beginning of it. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/467593",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/194319/"
]
} |
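Putting the fix from the answer above into the script itself, as a hedged sketch; the `--` simply guards against file names that start with a dash.

```bash
#!/bin/bash
# Extended globs such as !(...) are off by default in non-interactive shells.
shopt -s extglob
rm -r -- !("cleanup"|"myfile")
```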
467,618 | I am running Debian 9.4. hostname works. $ sudo strace -f hostnamectl...snipped...connect(3, {sa_family=AF_UNIX, sun_path="/var/run/dbus/system_bus_socket"}, 33) = -1 ENOENT (No such file or directory)...Failed to create bus connection: No such file or directory UPDATE: here are more information: $ sudo systemctl status dbus.service dbus.socketUnit dbus.service could not be found.Unit dbus.socket could not be found.$ ps -p 1 PID TTY TIME CMD1 ? 00:00:47 systemd$ sudo systemctl list-unit-files --state=running0 unit files listed.$ sudo systemctl list-unit-files --state=enabled...snipped...26 unit files listed. | It looks like dbus package is missing. Check if dbus package is installed or not using below command: $ sudo dpkg -l | grep dbusii dbus 1.10.26-0+deb9u1 amd64 simple interprocess messaging system (daemon and utilities)ii libdbus-1-3:amd64 1.10.26-0+deb9u1 amd64 simple interprocess messaging system (library) If dbus package is installed you will get output as above. If output is blank then dbus package is missing. You can install the package using below command: $ sudo apt-get install dbus After installing the package you can check the status: $ sudo systemctl status dbus.service dbus.socket● dbus.service - D-Bus System Message Bus Loaded: loaded (/lib/systemd/system/dbus.service; static; vendor preset: enabled) Active: active (running) since Fri 2018-09-07 23:39:14 EDT; 10s ago Docs: man:dbus-daemon(1) Main PID: 451 (dbus-daemon) Tasks: 1 (limit: 4915) CGroup: /system.slice/dbus.service └─451 /usr/bin/dbus-daemon --system --address=systemd: --nofork --nopidfile --systemd-activation● dbus.socket - D-Bus System Message Bus Socket Loaded: loaded (/lib/systemd/system/dbus.socket; static; vendor preset: enabled) Active: active (running) since Fri 2018-09-07 23:39:14 EDT; 10s ago Listen: /var/run/dbus/system_bus_socket (Stream) | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/467618",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/16377/"
]
} |
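The check-and-install steps from the answer above, condensed into one hedged snippet for Debian 9.

```bash
# Install dbus only if it is not already present:
dpkg -l dbus 2>/dev/null | grep -q '^ii' || sudo apt-get install dbus

# Then confirm the daemon and its socket are running:
sudo systemctl status dbus.service dbus.socket
```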
467,717 | Recently Firefox Quantum has moved into Debian Stable (= Stretch). When updating my system via apt-get upgrade the old Firefox-ESR has been replaced with the new one. Now all my previously working Add-ons have gone. Is there any option to get my Add-ons back (in best case from the official Stable repositories)? NoScript HTTPSEverywhere UBlockOrigin | Firefox 60 no longer supports XUL extensions, so the extensions provided by xul-ext- packages no longer work with it. You’ll need to wait for the equivalent webext- packages to be made available in Debian 9. There are bugs asking for this already, you can subscribe to them to get updates: HTTPS Everywhere (this has been uploaded and is available in Debian 9 ) NoScript (this has been uploaded for Debian 10, but not yet for Debian 9) UBlock Origin (this has been uploaded and is available in Debian 9 ) Alternatively, you can install them manually in the browser. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/467717",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/241507/"
]
} |
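Once the webext- packages are available for your release, installation is a normal apt operation; the package names below follow the Debian webext- naming convention mentioned above and should be treated as assumptions to verify with apt search.

```bash
sudo apt update
apt search webext-        # see which WebExtension packages actually exist
sudo apt install webext-ublock-origin webext-https-everywhere
```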
467,844 | my command: gunzip -c serial2udp.image.gz |sudo dd of=/dev/mmcblk0 conv=fsync,notrunc status=progress bs=4M my output: 15930949632 bytes (16 GB, 15 GiB) copied, 1049 s, 15.2 MB/s 0+331128 records in0+331128 records out15931539456 bytes (16 GB, 15 GiB) copied, 1995.2 s, 8.0 MB/s the card: SanDisk Ultra 32GB MicroSDHC Class 10 UHS Memory Card Speed Up To 30MB/s distribution: 16.0.4 xenial with xfce kernel version: 4.13.0.37-generic i understand taking 17 minutes seems reasonable from what I've read. playing with block size doesn't really seem to make much of a difference (bs=100M still exhibits this behaviour with similar timestamps). why do the updates hang and it doesn't produce a finished report for another 16 minutes?? iotop tells me that mmcqd/0 is still running in the background at this point (at 99% IO), so I figure there is a cache somewhere that is holding up the final 5MB but I thought fsync should make sure that doesn't happeniotop shows no traffic crossing at this time either for dd. ctrl-c is all but useless and i don't want to corrupt my drive after writing to it. | I figure there is a cache somewhere that is holding up the final 5MB but I thought fsync should make sure that doesn't happen conv=fsync means to write back any caches by calling fsync - after dd has written all the data. Hanging at the end is exactly what it will do. When the output file is slower than the input file, the data written by dd can pile up in caches. The kernel cache can sometimes fill a significant fraction of system RAM. This makes for very misleading progress information. Your "final 5MB" was just an artefact of how dd shows progress. If your system was indeed caching about 8GB (i.e. half of the 16GB of written data), then I think you either must have about 32GB of RAM, or have been fiddling with certain kernel options. See the lwn.net link below. I agree that not getting any progress information for 15 minutes is pretty frustrating. There are alternative dd commands you could use. If you want dd to show more accurate progress, you might have to accept more complexity. I expect the following would work without degrading your performance, though maybe reality has other ideas than I do. gunzip -c serial2udp.image.gz |dd iflag=fullblock bs=4M |sudo dd iflag=fullblock oflag=direct conv=fsync status=progress bs=4M of=/dev/mmcblk0 oflag=direct iflag=fullblock avoids piling up kernel cache, because it bypasses it altogether. iflag=fullblock is required in such a command AFAIK (e.g. because you are reading from a pipe and writing using direct IO). The effect of missing fullblock is another unfortunate complexity of dd . Some posts on this site use this to argue you should always prefer to use a different command. It's hard to find another way to do direct or sync IO though. conv=fsync should still be used, to write back the device cache. I added an extra dd after gunzip , to buffer the decompressed output in parallel with the disk write. This is one of the issues that makes the performance with oflag=direct or oflag=sync a bit complex. Normal IO (non-direct, non-sync) is not supposed to need this, as it is already buffered by the kernel cache. You also might not need the extra buffer if you were writing to a hard drive with 4M of writeback cache, but I don't assume an SD card has that much. You could alternatively use oflag=direct,sync (and not need conv=fsync ). This might be useful for good progress information if you had a weird output device with hundreds of megabytes of cache . 
But normally I think of oflag=sync as a potential barrier to performance. There is a 2013 article https://lwn.net/Articles/572911/ which mentions minute-long delays like yours. Many people see this ability to cache minutes worth of writeback data as undesirable. The problem was that the limit on the cache size was applied indiscriminately, to both fast and slow devices. Note that it is non-trivial for the kernel to measure device speed, because it varies depending on the data locations. E.g. if the cached writes are scattered in random locations, a hard drive will take longer from repeatedly moving the write head. why do the updates hang The fsync() is a single system call that applies to the entire range of the file device. It does not return any status updates before it is done. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/467844",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/309873/"
]
} |
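If you want to watch the "piling up" the answer above describes, the kernel's dirty and writeback counters show how much written data is still waiting in the page cache; a hedged one-liner to run in a second terminal while dd is working.

```bash
watch -n1 'grep -E "^(Dirty|Writeback):" /proc/meminfo'
```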
467,851 | I recently came across this page that indicates how to empty a file. How can I do it for all files in a given subfolder ? For instance, by using > file.log to empty a file ? I basically want to create a VM instance image, and I'd like to clear existing log files so when the template is used it starts fresh without lingering data | I figure there is a cache somewhere that is holding up the final 5MB but I thought fsync should make sure that doesn't happen conv=fsync means to write back any caches by calling fsync - after dd has written all the data. Hanging at the end is exactly what it will do. When the output file is slower than the input file, the data written by dd can pile up in caches. The kernel cache can sometimes fill a significant fraction of system RAM. This makes for very misleading progress information. Your "final 5MB" was just an artefact of how dd shows progress. If your system was indeed caching about 8GB (i.e. half of the 16GB of written data), then I think you either must have about 32GB of RAM, or have been fiddling with certain kernel options. See the lwn.net link below. I agree that not getting any progress information for 15 minutes is pretty frustrating. There are alternative dd commands you could use. If you want dd to show more accurate progress, you might have to accept more complexity. I expect the following would work without degrading your performance, though maybe reality has other ideas than I do. gunzip -c serial2udp.image.gz |dd iflag=fullblock bs=4M |sudo dd iflag=fullblock oflag=direct conv=fsync status=progress bs=4M of=/dev/mmcblk0 oflag=direct iflag=fullblock avoids piling up kernel cache, because it bypasses it altogether. iflag=fullblock is required in such a command AFAIK (e.g. because you are reading from a pipe and writing using direct IO). The effect of missing fullblock is another unfortunate complexity of dd . Some posts on this site use this to argue you should always prefer to use a different command. It's hard to find another way to do direct or sync IO though. conv=fsync should still be used, to write back the device cache. I added an extra dd after gunzip , to buffer the decompressed output in parallel with the disk write. This is one of the issues that makes the performance with oflag=direct or oflag=sync a bit complex. Normal IO (non-direct, non-sync) is not supposed to need this, as it is already buffered by the kernel cache. You also might not need the extra buffer if you were writing to a hard drive with 4M of writeback cache, but I don't assume an SD card has that much. You could alternatively use oflag=direct,sync (and not need conv=fsync ). This might be useful for good progress information if you had a weird output device with hundreds of megabytes of cache . But normally I think of oflag=sync as a potential barrier to performance. There is a 2013 article https://lwn.net/Articles/572911/ which mentions minute-long delays like yours. Many people see this ability to cache minutes worth of writeback data as undesirable. The problem was that the limit on the cache size was applied indiscriminately, to both fast and slow devices. Note that it is non-trivial for the kernel to measure device speed, because it varies depending on the data locations. E.g. if the cached writes are scattered in random locations, a hard drive will take longer from repeatedly moving the write head. why do the updates hang The fsync() is a single system call that applies to the entire range of the file device. 
It does not return any status updates before it is done. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/467851",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/150125/"
]
} |
467,873 | I'm running Linux Mint 19 Tara, and trying to follow the instructions here with the goal of installing pgAdmin4 as a desktop app. There seems to be a problem involving the authentication of the repository. The apt-key step seems to work, as I observe PostgreSQL Debian Repository in the apt-key list. I don't have a deb command (I imagine this is a Mint vs Ubuntu difference?), so I used add-apt-repository http://apt.postgresql.org/pub/repos/apt/ tara-pgdg main instead, after which I observe deb http://apt.postgresql.org/pub/repos/apt/ bionic main in /etc/apt/sources.list.d/additional-repositories.list . At this point running either apt-get upgrade or apt-get update shows an error The repository 'http://apt.postgresql.org/pub/repos/apt bionic Release' does not have a Release file. How can I proceed? It seems unlikely that there really isn't a release file; I can see what looks like an authentication list at https://apt.postgresql.org/pub/repos/apt/dists/bionic-pgdg/ . Do I have a path wrong or something? | Open terminal and type: wget --quiet -O - https://www.postgresql.org/media/keys/ACCC4CF8.asc | sudo apt-key add - Open Software Sources and click on "Additional repositories" and paste the following for Linux Mint 19 (it's based on Ubuntu Bionic): deb http://apt.postgresql.org/pub/repos/apt/ bionic-pgdg main or the following for Linux Mint 20 (based on Ubuntu Focal Fossa): deb http://apt.postgresql.org/pub/repos/apt/ focal-pgdg main it should look like this: Press "OK" and that will automatically update cache. Now open terminal and type the following: sudo apt update sudo apt install pgadmin4 That should install pgadmin4. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/467873",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/309900/"
]
} |
467,925 | I'm having some trouble trying to remove write permission from the owning group ( g ), and to add read permission to others ( o ) at the same time. How would I remove some permission from the owning group and add some permission for others in the same line? | Noting the title of your question: Removing and adding permission using numerical notation on the same line With chmod from GNU coreutils, which you probably have on a Linux system, you could use $ chmod -020,+004 test.txt to do that. It works in the obvious way: middle digit for the group, 2 is for write; and last digit for "others", and 4 for read. Being able to use + or - with a numerical mode is a GNU extension , e.g. the BSD-based chmod on my Mac gives an error for +004 : $ chmod +004 test.txtchmod: Invalid file mode: +004 So it would be simpler, shorter, more portable and probably more readable to just use the symbolic form: $ chmod g-w,o+r test.txt | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/467925",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/309927/"
]
} |
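A quick hedged check that the GNU numeric form and the portable symbolic form from the answer above end up with the same mode, using GNU stat to print the octal permissions.

```bash
touch test.txt

chmod 664 test.txt
chmod g-w,o+r test.txt       # portable symbolic form
stat -c '%a %n' test.txt     # expect: 644 test.txt

chmod 664 test.txt
chmod -020,+004 test.txt     # GNU-only numeric +/- form
stat -c '%a %n' test.txt     # expect: 644 test.txt again
```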
467,934 | When a script is reading text file I would like this code to run specific command when it finds a specific string within a specific variable. Example: Let's say that I would like to use the following code to read the output of the command last : #!/bin/bashfor i in `last`; do sleep 0.1 | echo -ne "$i " done The output of the command last is a table in the form of a list of entries, something like: username pts/2 1.2.3.4 via Sun Sep 2 06:40 - 06:40 (00:00). . . . . . . . The variable i in the previous code can be any phrase in the previous table. I would like to apply a specific command e. g. begin new line when the code finds a specific string within the variable i , for example when the variable i contains a closed parenthesis ) I want the code to begin a new line. When the code finishes reading the output of the command last , I want the code to repeat the for loop once again (multiple times) to read if there are any new updates. How can I direct the code to re-run again? For example, is there such a command goto which will force the code to go to specific line? Would you please advice? | If the output from your last is like this: username pts/2 1.2.3.4 via Sun Sep 2 06:40 - 06:40 (00:00) and you want to add the newlines where they originally were, then I would suggest changing the logic a bit. Read the input line-by-line to begin with, and split the line into words only after that. That way, there's no need to explicitly look for the parenthesis. And you can repeat the whole loop by wrapping it inside while true; do ... done . So we have something like this: #!/bin/bashset -fwhile true; do last | while IFS= read -r line; do for word in $line; do sleep .1 printf "%s " "$word" done echo # prints a newline donedone set -f disables filename expansion, which would otherwise possibly happen at the unquoted expansion of $line . Also, I'd use printf instead of echo to print the words, for a number of reasons. If you do explicitly want to look for the closing parenthesis, then you can use the [[ .. ]] test: it allows for pattern matching with glob-like patterns, or with regexes. ( [[ $word =~ $re ]] would be the regex match) #!/bin/bashset -fwhile true; do for word in $(last); do sleep .1 printf "%s " "$word" [[ $word = *')'* ]] && echo donedone Though this one, of course, doesn't add a newline on lines where the final login duration is replaced by something like still logged in . The for word in $whatever construct has the downside that it treats multiple spaces exactly like single spaces, so the output from the script will not have the columns aligned as neatly as in the original. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/467934",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/-1/"
]
} |
468,007 | I tried to have an interactive program in a bash script : my_program And I wish to be able to close it with 'Ctrl + c'.But when I do it my script is closing as well. I know about. trap '' 2my_programtrap 2 But in this case, I just can't close my_program with Ctrl + c. Do you have any idea how to allow Ctrl + c on a program, but not closing the script running it ? EDIT : add example #!/bin/bashmy_programmy_program2 If i use Ctrl + c to close my_program , my_program2 is never executed because the whole script is exited. | You should use trap true 2 or trap : 2 instead of trap '' 2 . That's what "help trap" in a bash shell says about it: If ARG is the null string each SIGNAL_SPEC is ignored by the shell and by the commands it invokes . Example: $ cat /tmp/test#! /bin/shtrap : INTcatecho first cat killedcatecho second cat killedecho done$ /tmp/test <press control-C>^Cfirst cat killed <press control-C>^Csecond cat killeddone | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/468007",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/119603/"
]
} |
468,058 | Could someone please clarify what "vendor preset : disable" means? This option is visible after enabling a package in RHEL7. | If you see a Vendor preset: Disabled, it means when the service first installs it will be disabled on start up and will have to be manually started. If you want the service to start up automatically with boot up, all it takes is to change it's start up setting with systemctl enable <service> , example: systemctl enable httpd . A detailed explanation can be found at RHEL systemctl documentation or systemctl man page itself ● httpd.service - The Apache HTTP Server Loaded: loaded (/usr/lib/systemd/system/httpd.service; enabled; vendor preset: disabled) Active: active (running) since Mon 2018-09-10 09:29:16 MDT; 1h 3min ago Docs: man:httpd(8) man:apachectl(8) Process: 6917 ExecReload=/usr/sbin/httpd $OPTIONS -k graceful (code=exited, status=0/SUCCESS) Main PID: 1261 (httpd) Status: "Total requests: 0; Current requests/sec: 0; Current traffic: 0 B/sec" CGroup: /system.slice/httpd.service ├─1261 /usr/sbin/httpd -DFOREGROUND ├─6936 /usr/sbin/httpd -DFOREGROUND ├─6937 /usr/sbin/httpd -DFOREGROUND ├─6938 /usr/sbin/httpd -DFOREGROUND ├─6939 /usr/sbin/httpd -DFOREGROUND └─6940 /usr/sbin/httpd -DFOREGROUNDSep 10 09:28:51 localhost systemd[1]: Starting The Apache HTTP Server...Sep 10 09:29:16 localhost systemd[1]: Started The Apache HTTP Server.Sep 10 10:21:02 localhost systemd[1]: Reloaded The Apache HTTP Server. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/468058",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/228031/"
]
} |
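Three related commands, shown as a hedged sketch with httpd as the example unit: query the current state, re-apply the distribution preset, or simply enable the unit regardless of it.

```bash
systemctl is-enabled httpd.service    # prints enabled / disabled / static ...
sudo systemctl preset httpd.service   # re-apply the distribution's preset policy
sudo systemctl enable httpd.service   # enable at boot regardless of the preset
```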
468,109 | TL;DR : I know a program creates and then deletes files in /tmp . How can I intercept them for examination ? Context : There's a particular .jar file, which I don't trust; for some reason its source code contains an ftm method and has capability to make connections, which is evident from network-related syscalls in output of strace (and when I mean connection, I don't mean unix domain sockets, it's AF_INET6 ). I've examined with Wireshark and saw no outgoing TCP or UDP connections during it's use. However, I still don't quite trust it. From the output of strace I've seen that it's creating temporary files in /tmp and then deletes them. Is there a way to intercept those files to examine their contents ? | Better yet, if you want to reverse engineer a nefarious Java binary, rather than trying to intercept files, decompile the suspect .jar file. For it, you can use CFR - another java decompiler CFR will decompile modern Java features - up to and including much of Java 9, but is written entirely in Java 6, so will work anywhere To use, simply run the specific version jar, with the class name(s) you want to decompile (either as a path to a class file, or as a fully qualified classname on your classpath). (--help to list arguments). Alternately, to decompile an entire jar, simply provide the jar path, and if you want to emit files (which you probably do!) add --outputdir /tmp/putithere There are no lack of alternatives, however the CFR project seems to be well maintained, having a 2018 update. Disclaimer: I have not done reverse engineering to Java/JAR binaries since 2005 | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/468109",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/85039/"
]
} |
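A hedged invocation sketch of the answer above; cfr.jar stands for whichever CFR release jar you downloaded and suspect.jar for the file you distrust.

```bash
# Decompile the whole jar into a tree of .java sources:
java -jar cfr.jar suspect.jar --outputdir /tmp/decompiled

# Then grep the recovered sources for networking code:
grep -rnE 'Socket|HttpURLConnection|InetAddress' /tmp/decompiled
```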
468,123 | I have a file which looks like this: chr1 HAVANA exon 12613 12721 . + . gene_id "ENSG00000223972.5"; transcript_id "ENST00000456328.2"; gene_type "transcribed_unprocessed_pseudogene"; gene_name "DDX11L1"; transcript_type "processed_transcript"; transcript_name "DDX11L1-202"; exon_number 2; exon_id "ENSE00003582793.1"; level 2; transcript_support_level "1"; tag "basic"; havana_gene "OTTHUMG00000000961.2"; havana_transcript "OTTHUMT00000362751.1";chr1 HAVANA exon 13221 14409 . + . gene_id "ENSG00000223972.5"; transcript_id "ENST00000456328.2"; gene_type "transcribed_unprocessed_pseudogene"; gene_name "DDX11L1"; transcript_type "processed_transcript"; transcript_name "DDX11L1-202"; exon_number 3; exon_id "ENSE00002312635.1"; level 2; transcript_support_level "1"; tag "basic"; havana_gene "OTTHUMG00000000961.2"; havana_transcript "OTTHUMT00000362751.1"; I want to extract gene_id and gene_name values along with first 8 columns(file is tab separated). I have written a script in perl which can do this, but I am looking for a one liner in awk,sed etc which can do this. PS. The file is tab separated and has 9 columns. The 9 th column has values which are separated by spaces. My output should look like this: chr1 HAVANA exon 12613 12721 . + . ENSG00000223972.5 DDX11L1chr1 HAVANA exon 13221 14409 . + . ENSG00000223972.5 DDX11L1 | Better yet, if you want to reverse engineer a nefarious Java binary, rather than trying to intercept files, decompile the suspect .jar file. For it, you can use CFR - another java decompiler CFR will decompile modern Java features - up to and including much of Java 9, but is written entirely in Java 6, so will work anywhere To use, simply run the specific version jar, with the class name(s) you want to decompile (either as a path to a class file, or as a fully qualified classname on your classpath). (--help to list arguments). Alternately, to decompile an entire jar, simply provide the jar path, and if you want to emit files (which you probably do!) add --outputdir /tmp/putithere There are no lack of alternatives, however the CFR project seems to be well maintained, having a 2018 update. Disclaimer: I have not done reverse engineering to Java/JAR binaries since 2005 | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/468123",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/63486/"
]
} |
468,177 | In bash or zsh , I can use following syntax to read from pipe into variables: echo AAA BBB | read X Y ; echo $X which will print AAA Why does the same not work in /bin/sh ? I am using /bin/sh -> /bin/dash dash in Debian | Why does the same not work in '/bin/sh' ? Assigning variables in a pipe does not work as expected in sh and bash because each command of a pipe runs in a subshell. Actually, the command does work , X and Y get declared, but they are not available outside the pipe. The following will work: echo AAA BBB | { read X Y ; echo $X; } But in your case: try this, read X Y <<< "AAA BBB" or read X Y < <(echo "AAA BBB") Some useful links: http://mywiki.wooledge.org/BashFAQ/024 bash: Assign variable from pipe? Read values into a shell variable from a pipe | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/468177",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/43007/"
]
} |
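Worth noting for the answer above: the <<< here-string and <( ) process substitution it suggests are bash/zsh features and are not available in dash; in a plain /bin/sh script a here-document (or the grouped-command form already shown) does the same job.

```sh
#!/bin/sh
read X Y <<EOF
AAA BBB
EOF
echo "$X"    # prints AAA
```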
468,192 | How do I grep this? (Including the special characters) "Limit reached."[\n]" I tried back-slashing the special symbols but end up not working, like this: grep '\"Limit reached\.\"\[\\n\]\" ' I also tried other techniques but also not working. Is there any other syntax you could suggest/advice? | use -F in grep $ cat test.txt"Limit reached."[\n]"test"Limit reached."[\n]"$ grep -F '"Limit reached."[\n]"' test.txt"Limit reached."[\n]""Limit reached."[\n]" As per Manual page, -F, --fixed-strings, --fixed-regexp Interpret PATTERN as a list of fixed strings, separated by newlines, any of which is to be matched. (-F is specified by> POSIX, --fixed-regexp is an obsoleted alias, please do not use it new scripts.) | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/468192",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/308199/"
]
} |
468,321 | I'd guess lspci would be the tool to do this but I can't seem to find any identifying output. Is there a way to know from the command line whether a Thunderbolt port is present on a machine? I have one computer that I know has a Thunderbolt port and lspci shows this: 00:00.0 Host bridge: Intel Corporation Device 3ec2 (rev 07)00:01.0 PCI bridge: Intel Corporation Skylake PCIe Controller (x16) (rev 07)00:02.0 VGA compatible controller: Intel Corporation Device 3e9200:08.0 System peripheral: Intel Corporation Skylake Gaussian Mixture Model00:12.0 Signal processing controller: Intel Corporation Device a379 (rev 10)00:14.0 USB controller: Intel Corporation Device a36d (rev 10)00:14.2 RAM memory: Intel Corporation Device a36f (rev 10)00:15.0 Serial bus controller [0c80]: Intel Corporation Device a368 (rev 10)00:16.0 Communication controller: Intel Corporation Device a360 (rev 10)00:17.0 SATA controller: Intel Corporation Device a352 (rev 10)00:1d.0 PCI bridge: Intel Corporation Device a330 (rev f0)00:1f.0 ISA bridge: Intel Corporation Device a306 (rev 10)00:1f.3 Audio device: Intel Corporation Device a348 (rev 10)00:1f.4 SMBus: Intel Corporation Device a323 (rev 10)00:1f.5 Serial bus controller [0c80]: Intel Corporation Device a324 (rev 10)00:1f.6 Ethernet controller: Intel Corporation Ethernet Connection (7) I219-LM (rev 10)01:00.0 Multimedia video controller: Blackmagic Design DeckLink Mini Recorder02:00.0 Ethernet controller: Intel Corporation I210 Gigabit Network Connection (rev 03) I have remotely logged into another machine, and would like to know if it has a Thunderbolt port, and lspci shows the following: 00:00.0 Host bridge: Intel Corporation Device 191f (rev 07)00:01.0 PCI bridge: Intel Corporation Device 1901 (rev 07)00:02.0 VGA compatible controller: Intel Corporation Device 1912 (rev 06)00:14.0 USB controller: Intel Corporation Device a12f (rev 31)00:14.2 Signal processing controller: Intel Corporation Device a131 (rev 31)00:16.0 Communication controller: Intel Corporation Device a13a (rev 31)00:16.3 Serial controller: Intel Corporation Device a13d (rev 31)00:17.0 RAID bus controller: Intel Corporation 82801 SATA Controller [RAID mode] (rev 31)00:1d.0 PCI bridge: Intel Corporation Device a118 (rev f1)00:1f.0 ISA bridge: Intel Corporation Device a146 (rev 31)00:1f.2 Memory controller: Intel Corporation Device a121 (rev 31)00:1f.3 Audio device: Intel Corporation Device a170 (rev 31)00:1f.4 SMBus: Intel Corporation Device a123 (rev 31)00:1f.6 Ethernet controller: Intel Corporation Ethernet Connection (2) I219-LM (rev 31)01:00.0 Multimedia video controller: Blackmagic Design DeckLink Mini Recorder02:00.0 Ethernet controller: Broadcom Corporation NetXtreme BCM5722 Gigabit Ethernet PCI Express | On my system, at least, with Thunderbolt 3 it's invisible to lshw & lsusb if nothing's plugged into it (I do see the USB3 port). I have a Clevo N131WU laptop, which you can buy from vendors such as System76 and Tuxedo. That said, the modules are loaded: [tara@tuxmonster ~]$ lsmod | grep thundintel_wmi_thunderbolt 16384 0wmi 28672 1 intel_wmi_thunderbolt[tara@tuxmonster ~]$ And I double checked on my desktop computer, which does not have thunderbolt but is running the same distribution & kernel (Arch, on 4.18), and the thunderbolt driver is not loaded on the desktop. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/468321",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/125142/"
]
} |
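Two more hedged checks that complement lspci: the driver module list, and the sysfs bus directory the thunderbolt driver creates when a controller is present.

```bash
lsmod | grep -i thunderbolt                     # driver loaded?
ls /sys/bus/thunderbolt/devices/ 2>/dev/null    # controllers/devices known to the kernel
```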
468,323 | I have three files, as shown below: FILE 1: DATE PGTO_CRED20180801 50.0020180802 150.0020180803 130.0020180804 110.0020180805 200.00FILE 2: DATE PGTO_TOTAL20180801 150.0020180802 300.0020180803 200.0020180804 250.0020180805 400.00FILE 3: DATE PGTO_FEE20180801 35.0020180802 10.0020180803 25.0020180804 140.0020180805 135.00 I need my output file to be something like: DATE PGTO_CRED PGTO_TOTAL PGTO_FEE20180801 50.00 150.00 35.0020180802 150.00 300.00 10.0020180803 130.00 200.00 25.0020180804 110.00 250.00 140.0020180805 200.00 400.00 135.00 How do I do this on Redhat Linux? | Many tools can do that, probably awk is the first which comes to mind, but I recommend the join command, especially if the input is already sorted (as in your example): join file1 <(join file2 file3) | column -t The column -t is just to nicely align the output, you can remove it. Output: DATE PGTO_CRED PGTO_TOTAL PGTO_FEE20180801 50.00 150.00 35.0020180802 150.00 300.00 10.0020180803 130.00 200.00 25.0020180804 110.00 250.00 140.0020180805 200.00 400.00 135.00 | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/468323",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/308307/"
]
} |
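The awk route mentioned in passing in the answer above, as a hedged sketch keyed on the DATE column; it assumes every date is present in all three files and that file1/file2/file3 are the literal file names.

```bash
awk '
    FNR==NR           { cred[$1]=$2;  next }             # file1: DATE -> PGTO_CRED
    FILENAME==ARGV[2] { total[$1]=$2; next }             # file2: DATE -> PGTO_TOTAL
                      { print $1, cred[$1], total[$1], $2 }  # file3 rows: join and print
' file1 file2 file3 | column -t
```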
468,416 | I recently set up both Fedora 28 & Ubuntu 18.04 systems and would like to configure my primary user account on both so that I can run sudo commands without being prompted for a password. How can I do this on the respective systems? | This is pretty trivial if you make use of the special Unix group called wheel on Fedora systems. You merely have to do the following: Add your primary user to the wheel group $ sudo gpasswd -a <primary account> wheel Enable NOPASSWD for the %wheel group in /etc/sudoers $ sudo visudo Then comment out this line: ## Allows people in group wheel to run all commands# %wheel ALL=(ALL) ALL And uncomment this line: ## Same thing without a password%wheel ALL=(ALL) NOPASSWD: ALL Save this file with Shift + Z + Z . Logout and log back in NOTE: This last step is mandatory so that your desktop and any corresponding top level shells are re-execed showing that your primary account is now a member of the wheel Unix group. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/468416",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/7453/"
]
} |
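A hedged alternative to editing the main sudoers file is a drop-in under /etc/sudoers.d/, validated with visudo so a syntax error cannot lock you out.

```bash
echo '%wheel ALL=(ALL) NOPASSWD: ALL' | sudo tee /etc/sudoers.d/wheel-nopasswd
sudo visudo -c -f /etc/sudoers.d/wheel-nopasswd   # syntax check
sudo chmod 0440 /etc/sudoers.d/wheel-nopasswd
```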
468,444 | Using sudo journalctl -u {service} I can see the log of specific service. How to find the associated log file? What is the best way to monitor a log file programmatically? (I mean a program the react based on something appears in the log file) | Systems with s6, runit, perp, nosh, daemontools-encore, et al. doing the service management work this way. Each main service has an individual associated set of log files that can be monitored individually, and a decentralized logging mechanism. systemd however does not work this way. There is no individual "associated log file" for any given service. There is no such file to be monitored. All log output is funneled into a single central dæmon, systemd-journald , and that dæmon writes it as a single stream with all services' log outputs combined to the single central journal in /{run,var}/log/journal/ . The -u option to journalctl is a post-processing filter that filters what is printed from the single central journal, all journal entries being tagged with (amongst other things) the name of the associated service. Everything fans in, and it then has to be filtered to separate it back out to (approximately) how it was originally. The systemd way is to use journalctl -f with appropriate filters added, or write your own program directly using the systemd-specific API for its journal. Further reading https://unix.stackexchange.com/a/294206/5132 | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/468444",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/310383/"
]
} |
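For the "react programmatically" part of the question above, a hedged sketch that follows one unit's journal and acts on matching lines; myservice.service and the ERROR pattern are placeholders.

```bash
#!/bin/bash
journalctl -f -u myservice.service -o cat | while IFS= read -r line; do
    case $line in
        *ERROR*) echo "matched: $line" ;;   # replace with the real reaction
    esac
done
```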
468,446 | I'm running a computer vision model on a headless remote VM (Ubuntu 16.04) over X11Forwarding with good ol' Putty and Xming as my Windows X Server. All is well but seems there is no frame drop if the client-server bandwidth can't keep up, which means my application is slowed down and only renders a few frames a second when it can do hundreds if bandwidth is plenty. Is there a force frame drop option built into X11 forwarding, and if there is, how do i turn it on? | I highly recommend Xpra for this sort of use-case: not only does it provide the ability to disconnect and reconnect to X applications running on a remote host, it also supports a variety of image encodings to provide a decent experience in different circumstances, can accelerate OpenGL applications and use OpenGL in the client for better performance . It has a native Windows client so it should be easy enough to set up. You’ll need to install it on the remote VM too, but that’s as easy as apt install xpra on Ubuntu. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/468446",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/262170/"
]
} |
468,510 | I have a folder where I backup my databases to and I want to delete all backups created between 11 am and 3 pm and it doesn't matter what day it is -- this is the catch! I found this command to be very helpful, but not in my use case: find . -type f -newermt '01 nov 2018 00:00:00' -not -newermt '10 nov 2018 00:00:00' -delete But here it forces me to an interval between two dates! I want to delete only backups created between two specific times. | Simply enough, given that you tagged linux , you have the stat command available, which will extract a file's modification time, and the GNU date command, which will extract the hour from from a given time: find . -type f -exec sh -c ' h=$(date -d @$(stat -c %Y "$1") +%-H); [ "$h" -ge 11 ] && [ "$h" -lt 15 ]' \ sh {} \; -ls If the results look correct, then: find . -type f -exec sh -c ' h=$(date -d @$(stat -c %Y "$1") +%-H); [ "$h" -ge 11 ] && [ "$h" -lt 15 ]' \ sh {} \; -delete Here's a test-run with the -ls version: $ touch -d 'Wed Sep 12 11:00:01 EDT 2018' 11am$ touch -d 'Wed Sep 12 12:00:02 EDT 2018' 12pm$ touch -d 'Wed Sep 12 15:00:03 EDT 2018' 303pm$ find . -type f -exec sh -c 'h=$(date -d @$(stat -c %Y "$1") +%-H); [ "$h" -ge 11 ] && [ "$h" -lt 15 ]' sh {} \; -ls1705096 0 -rw-r--r-- 1 user group 0 Sep 12 2018 ./11am1705097 0 -rw-r--r-- 1 user group 0 Sep 12 2018 ./12pm Credit to Kusalananda for writing the excellent answer I followed, at: Understanding the -exec option of `find` Note that we do not want the {} + version of find here, as we want the -exec results to be per-file, so that we only delete files that match the time range. The embedded shell script has two main pieces: determine the file's "hour" timestamp and then return success or failure based on the range. The first part is itself accomplished in two pieces. The variable is assigned the result of the command substitution; the command should be read inside-out: $(stat -c %Y "$1") -- this (second) command-substitution calls stat on the $1 parameter of the embedded shell script; $1 was assigned by find as one of the pathnames it found. The %Y option to the stat command returns the modification time in seconds-since-the-epoch. date -d @ ... +%-H -- this takes the seconds-since-the-epoch from the above command substitution and asks date to give us the Hours portion of that time; the @ syntax tells date that we're giving it seconds-since-the-epoch as an input format. With the - option in the date output format, we tell GNU date to not pad the value with any leading zeroes. This prevents any octal misinterpretation later. Once we have the $h Hour variable assigned, we use bash's conditional operator [[ to ask whether that value is greater-than-or-equal to 11 and also strictly less than 15. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/468510",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/31699/"
]
} |
468,515 | I have some emails where I would like to mask all characters except the last 2 before the @ sign and mask everything after that. For example: <[email protected]> Desired output: <joebl**@******.***> So far, using perl , I tried: perl -pe 's/(<.)(.*)(@.)(.*)(.\..*>)/$1."*" x length($2).$3."*" x length($4).$5/e' but it is not giving the intended results. | Simply enough, given that you tagged linux , you have the stat command available, which will extract a file's modification time, and the GNU date command, which will extract the hour from from a given time: find . -type f -exec sh -c ' h=$(date -d @$(stat -c %Y "$1") +%-H); [ "$h" -ge 11 ] && [ "$h" -lt 15 ]' \ sh {} \; -ls If the results look correct, then: find . -type f -exec sh -c ' h=$(date -d @$(stat -c %Y "$1") +%-H); [ "$h" -ge 11 ] && [ "$h" -lt 15 ]' \ sh {} \; -delete Here's a test-run with the -ls version: $ touch -d 'Wed Sep 12 11:00:01 EDT 2018' 11am$ touch -d 'Wed Sep 12 12:00:02 EDT 2018' 12pm$ touch -d 'Wed Sep 12 15:00:03 EDT 2018' 303pm$ find . -type f -exec sh -c 'h=$(date -d @$(stat -c %Y "$1") +%-H); [ "$h" -ge 11 ] && [ "$h" -lt 15 ]' sh {} \; -ls1705096 0 -rw-r--r-- 1 user group 0 Sep 12 2018 ./11am1705097 0 -rw-r--r-- 1 user group 0 Sep 12 2018 ./12pm Credit to Kusalananda for writing the excellent answer I followed, at: Understanding the -exec option of `find` Note that we do not want the {} + version of find here, as we want the -exec results to be per-file, so that we only delete files that match the time range. The embedded shell script has two main pieces: determine the file's "hour" timestamp and then return success or failure based on the range. The first part is itself accomplished in two pieces. The variable is assigned the result of the command substitution; the command should be read inside-out: $(stat -c %Y "$1") -- this (second) command-substitution calls stat on the $1 parameter of the embedded shell script; $1 was assigned by find as one of the pathnames it found. The %Y option to the stat command returns the modification time in seconds-since-the-epoch. date -d @ ... +%-H -- this takes the seconds-since-the-epoch from the above command substitution and asks date to give us the Hours portion of that time; the @ syntax tells date that we're giving it seconds-since-the-epoch as an input format. With the - option in the date output format, we tell GNU date to not pad the value with any leading zeroes. This prevents any octal misinterpretation later. Once we have the $h Hour variable assigned, we use bash's conditional operator [[ to ask whether that value is greater-than-or-equal to 11 and also strictly less than 15. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/468515",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/310451/"
]
} |
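For the masking itself, here is a hedged perl sketch; it assumes, based only on the sample output, that everything in the local part except its last two characters stays visible and that the domain is masked around its final dot. Addresses with dots in the local part or multi-label domains would need a more careful pattern, and "file" is a placeholder.
    perl -pe 's/<([^@]*)(..)@([^.>]*)\.([^>]*)>/"<" . $1 . "*" x length($2) . "\@" . "*" x length($3) . "." . "*" x length($4) . ">"/e' file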
468,533 | I am trying to install the following packcage postgresql-server-dev-9.5. Using sudo apt-get install postgresql-server-dev-9.5 Now, after I run this command I get this error: Unable to locate package postgresql-server-dev-9.5. Couldn't find any package by glob 'postgresql-server-dev-9.5'. Couldn't find any package by regex 'postgresql-server-dev-9.5' How can I fix this? I use ubuntu 18.04.1 | Ubuntu 18.04 has PostgreSQL 10 , so the correct package there is postgresql-server-dev-10 : sudo apt install postgresql-server-dev-10 To determine the major PostgreSQL version in a given release of Ubuntu, find the matching entries in the postgresql-common page on Launchpad . Thus: 19.04 has PostgreSQL 11 ( postgresql-server-dev-11 ) 18.04 and 18.10 have PostgreSQL 10 ( postgresql-server-dev-10 ) 16.04 has PostgreSQL 9.5 (the second part of the version number is significant here; postgresql-server-dev-9.5 ) 14.04 has PostgreSQL 9.3 ( postgresql-server-dev-9.3 ) 12.04 has PostgreSQL 9.1 ( postgresql-server-dev-9.1 ) | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/468533",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/310467/"
]
} |
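A quick way to discover which dev package a given release actually ships, instead of guessing the version suffix (sketch, any Debian or Ubuntu system):
    apt-cache search --names-only 'postgresql-server-dev'
    apt list 'postgresql-server-dev-*' 2>/dev/null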
468,647 | I have a file secret.asc containing an ASCII-armored (i.e., plain text and starts with -----BEGIN PGP PRIVATE KEY BLOCK----- ) PGP/GPG secret/private key, and I would like to know its 40-character key fingerprint without importing it into my GPG keyring. Unfortunately, not a single command I've tried has surrendered that information to me. What I've Tried The following failed attempts were run on Ubuntu Xenial 16.04.5 with gpg version 1.4.20 and gpg2 version 2.1.11. The key in question was created solely for experimentation purposes and won't be used in anything, so I don't care if the output reveals too much about it. $ gpg --with-fingerprint secret.ascsec 2048R/161722B3 2018-09-12 uid Testing <[email protected]> Short key ID only, no fingerprint. $ gpg2 --with-fingerprint secret.ascgpg: DBG: FIXME: merging secret key blocks is not anymore availablegpg: DBG: FIXME: No way to print secret key packets here Error. $ gpg --with-fingerprint --no-default-keyring --secret-keyring ./secret.asc --list-secret-keysgpg: [don't know]: invalid packet (ctb=2d)gpg: keydb_search_first failed: invalid packet Error. $ gpg2 --with-fingerprint --no-default-keyring --secret-keyring ./secret.asc --list-secret-keys/home/jwodder/.gnupg/pubring.gpg--------------------------------... This lists the secret keys in my keyring for some reason. $ gpg --dry-run --import -vvvv secret.ascgpg: using character set `utf-8'gpg: armor: BEGIN PGP PRIVATE KEY BLOCKgpg: armor header: Version: GnuPG v1:secret key packet: version 4, algo 1, created 1536783228, expires 0 skey[0]: [2048 bits] skey[1]: [17 bits] skey[2]: [2047 bits] skey[3]: [1024 bits] skey[4]: [1024 bits] skey[5]: [1021 bits] checksum: 386f keyid: 07C0845B161722B3:signature packet: algo 1, keyid 07C0845B161722B3 version 4, created 1536783228, md5len 0, sigclass 0x1f digest algo 2, begin of digest b6 12 hashed subpkt 2 len 4 (sig created 2018-09-12) hashed subpkt 12 len 22 (revocation key: c=80 a=1 f=9F3C2033494B382BEF691BB403BB6744793721A3) hashed subpkt 7 len 1 (not revocable) subpkt 16 len 8 (issuer key ID 07C0845B161722B3) data: [2048 bits]:user ID packet: "Testing <[email protected]>":signature packet: algo 1, keyid 07C0845B161722B3 version 4, created 1536783228, md5len 0, sigclass 0x13 digest algo 2, begin of digest 33 ee hashed subpkt 2 len 4 (sig created 2018-09-12) hashed subpkt 27 len 1 (key flags: 03) hashed subpkt 9 len 4 (key expires after 32d3h46m) hashed subpkt 11 len 5 (pref-sym-algos: 9 8 7 3 2) hashed subpkt 21 len 5 (pref-hash-algos: 8 2 9 10 11) hashed subpkt 22 len 3 (pref-zip-algos: 2 3 1) hashed subpkt 30 len 1 (features: 01) hashed subpkt 23 len 1 (key server preferences: 80) subpkt 16 len 8 (issuer key ID 07C0845B161722B3) data: [2046 bits]gpg: sec 2048R/161722B3 2018-09-12 Testing <[email protected]>gpg: key 161722B3: secret key importedgpg: pub 2048R/161722B3 2018-09-12 Testing <[email protected]>gpg: writing to `/home/jwodder/.gnupg/pubring.gpg'gpg: using PGP trust modelgpg: key 793721A3: accepted as trusted keygpg: key 161722B3: public key "[User ID not found]" importedgpg: Total number processed: 1gpg: imported: 1 (RSA: 1)gpg: secret keys read: 1gpg: secret keys imported: 1 The only fingerprint to be found is that of the revocation key. $ gpg2 --dry-run --import -vvvv secret.asc Same output as above. $ gpg --list-packets secret.asc$ gpg2 --list-packets secret.asc Basically the same output as the --dry-run --import -vvvv commands, only without the gpg: lines. 
| As indicated in the comments, the simplest solution appears to be to first dearmor the key and then run --list-secret-keys on the new file: $ gpg --dearmor secret.asc # Creates secret.asc.gpg$ gpg --with-fingerprint --no-default-keyring --secret-keyring ./secret.asc.gpg --list-secret-keys Annoyingly, although the dearmored key can be written to stdout with the -o - option, neither --secret-keyring - nor --secret-keyring /dev/stdin will allow the second command to read the key from stdin, so combining the two commands into one with a pipe isn't an option. Also, running the second command with gpg2 instead of gpg still fails to give the desired output. A slightly more elaborate approach, but one that seems to work with both versions of gpg , is to import the secret key into a temporary GPG home directory and then list the temp home's private keys: $ mkdir -m 0700 tmphome$ gpg --homedir tmphome --import secret.asc$ gpg --homedir tmphome --with-fingerprint --list-secret-keys | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/468647",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/11006/"
]
} |
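On GnuPG releases newer than the ones in the question (roughly 2.1.14 and later; treat the exact version requirements as an assumption), there are built-in ways to inspect a key file without touching any keyring:
    gpg --with-fingerprint --import-options show-only --import secret.asc
    gpg --show-keys --with-fingerprint secret.asc    # shortcut added in later 2.x releases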
468,703 | On my Linux machine I have 32 vcores, and from lscpu I can see the same. For my CPU, "Thread(s) per core" is 2. So does that mean I actually have 64 vcores? | In the lscpu output, the “CPU(s)” line gives the total number of logical CPUs (aka threads). If it’s run inside a VM, that’s the number of virtual cores assigned to the VM, in your case 32. The other information provided by lscpu gives more detail, and should end up matching the number of logical CPUs: “Thread(s) per core” × “Core(s) per socket” × “Socket(s)”. The characteristics of the physical CPUs in the host don’t have much bearing on the characteristics of the virtual CPUs inside the VM, and they don’t add to the allocations — if your host CPU has two threads per core, that doesn’t multiply the assigned cores inside the VM. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/468703",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/237298/"
]
} |
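A small sketch to check the relationship described above on any machine:
    lscpu | grep -E '^(CPU\(s\)|Thread\(s\) per core|Core\(s\) per socket|Socket\(s\))'
    nproc --all    # should print the same total as the CPU(s) line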
468,734 | I am trying to find all directories in a folder recursively while exclude all git submodules by excluding all path containing .git file. How could I do it? Explanation: .git file exists at the root of every submodules folder. This submodule folder could be included anywhere. Test Case $ mkdir Test$ cd Test$ mkdir a$ mkdir b$ mkdir c$ cd a$ mkdir .git$ cd ..$ cd b$ touch .git$ cd ..$ cd c$ mkdir c1$ mkdir c2$ cd..$ find . -type d \( \( ! -name . -exec [ -e {}/.git ] \; -prune \) -o \( \( \ -name .git\ -o -name .vscode\ -o -name node_modules\ -o -name Image\ -o -name Rendered\ -o -name iNotebook\ -o -name GeneratedTest\ -o -name GeneratedOutput\ \) -prune \) -o -print \) | sort Expected Results ../a./c./c/c1./c/c2 | find actions are also tests, so you can add tests using -exec : find . \( -exec [ -f {}/.git ] \; -prune \) -o \( -name .git -prune \) -o -print This applies three sets of actions: -exec [ -f {}/.git ] \; -prune prunes directories containing a file named .git -name .git -prune prunes directories named .git (so the command doesn’t search inside the main .git directory of a repository) -print prints anything which isn’t caught by the above. To only match directories, add -type d , either just before -print , or (to save time processing files): find . -type d \( \( -exec [ -f {}/.git ] \; -prune \) -o \( -name .git -prune \) -o -print \) This also works when run this on a directory other than . , by changing the find start path: find /some/other/path -type d \( \( -exec [ -f {}/.git ] \; -prune \) -o \( -name .git -prune \) -o -print \) | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/468734",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/220462/"
]
} |
468,764 | I would like to know whether the commands that we call in the shell are functions or programs . | It depends. Commands can fall into multiple categories: builtins, aliases,functions, executables (scripts and binaries in the search path). On the command line, these occupy a single, flat namespace whichmakes overriding possible. There are numerous ways of tellingkinds of programs apart: $ f () { :; }$ alias a=cat$ which ff (){ :} We know that f is a function. $ which aalias a='cat' /usr/bin/cat We know that a is an alias. $ which yes/usr/bin/yes We know that yes is a program. $ builtin echo ; echo $?0 The shell has an echo builtin … $ builtin cat ; echo $?bash: builtin: cat: not a shell builtin1 … but none for cat . If there is a builtin or an aliasbut you insist on calling the program instead, prefix thecommand with a backslash: $ builtin true | printf "%d\n" $?0$ alias true=false$ true ; printf "%d\n" $?1$ \true ; printf "%d\n" $?0 | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/468764",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/310660/"
]
} |
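In bash specifically, type is often a better probe than which, because it is a builtin and knows about aliases, functions and keywords; a short sketch using the f and a definitions from the answer above:
    type -t f      # function
    type -t a      # alias
    type -t yes    # file
    type -t echo   # builtin
    type -a echo   # lists every definition: the builtin and the external binary alike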
468,813 | Based on an answer to another question I am using curl to stream the stdout of one process as the entity of a POST request: myDataGeneratingApp \| curl -H "Content-Type: application/json" -H "Transfer-Encoding: chunked" -X POST -d @- http://localhost:12000 Unfortunately, curl is waiting for EOF from the stdout before it begins sending the data. I know this because I can run my application stand-alone and data comes out to the console immediately but when I pipe to curl there is a significant delay before the service begins receiving data. How can I use curl to stream data immediately as it becomes available from the standard out of the application? If not possible in curl then is there another solution (e.g. wget)? | Looking through the curl code transfer.c it seems that the program is able to repackage request data (from curl to the server) using the chunking protocol, where each chunk of data is prefixed by the length of the chunk in ascii hexadecimal, and suffixed by \r\n . It seems the way to make it use this in a streaming way, after connecting to the server is with -T - . Consider this example: for i in $(seq 5)do date sleep 1done | dd conv=block cbs=512 |strace -t -e sendto,read -o /tmp/e \ curl --trace-ascii - \ -H "Transfer-Encoding: chunked" \ -H "Content-Type: application/json" \ -X POST -T - http://localhost/... This script sends 5 blocks of data, each beginning with the date and padded to 512 bytes by dd , to a pipe, where strace runs curl -T - to read the pipe.In the terminal we can see == Info: Connected to localhost (::1) port 80 (#0)=> Send header, 169 bytes (0xa9)0000: POST /... HTTP/1.1001e: Host: localhost002f: User-Agent: curl/7.47.10048: Accept: */*0055: Transfer-Encoding: chunked0071: Content-Type: application/json0091: Expect: 100-continue00a7: <= Recv header, 23 bytes (0x17)0000: HTTP/1.1 100 Continue which shows the connection, and the headers sent. In particular curl has not provided a Content-length: header, but an Expect: header to which the server (apache) has replied Continue . Immediately after comes the first 512 bytes (200 in hex) of data: => Send data, 519 bytes (0x207)0000: 2000005: Fri Sep 14 15:58:15 CEST 2018 0045: 0085: 00c5: 0105: 0145: 0185: 01c5: => Send data, 519 bytes (0x207) Looking in the strace output file we see each timestamped read from the pipe, and sendto write to the connection: 16:00:00 read(0, "Fri Sep 14 16:00:00 CEST 2018 "..., 16372) = 51216:00:00 sendto(3, "200\r\nFri Sep 14 16:00:00 CEST 20"..., 519, ...) = 51916:00:00 read(0, "Fri Sep 14 16:00:01 CEST 2018 "..., 16372) = 51216:00:01 sendto(3, "200\r\nFri Sep 14 16:00:01 CEST 20"..., 519, ...) = 51916:00:01 read(0, "Fri Sep 14 16:00:02 CEST 2018 "..., 16372) = 51216:00:02 sendto(3, "200\r\nFri Sep 14 16:00:02 CEST 20"..., 519, ...) = 51916:00:02 read(0, "Fri Sep 14 16:00:03 CEST 2018 "..., 16372) = 51216:00:03 sendto(3, "200\r\nFri Sep 14 16:00:03 CEST 20"..., 519, ...) = 51916:00:03 read(0, "Fri Sep 14 16:00:04 CEST 2018 "..., 16372) = 51216:00:04 sendto(3, "200\r\nFri Sep 14 16:00:04 CEST 20"..., 519, ...) = 51916:00:04 read(0, "", 16372) = 016:00:05 sendto(3, "0\r\n\r\n", 5, ...) = 5 As you can see they are spaced out by 1 second, showing that the data is being sent as it is being received. You must just have at least 512 bytes to send, as the data is being read by fread() . | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/468813",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/33532/"
]
} |
468,860 | How to remove the lines in a .csv file whose second field contains the word content ? example : ams-hbase-log4j,content:\n#LicensedtotheApacheSoftwareFoundation(ASF)underone\n#ormorecontributorlicenseagreementsams-log4j,content:\n#\n#LicensedtotheApacheSoftwareFoundation(ASF)underone\n#ormorecontributorlicenseagreements.ams-site,timeline.metrics.cache.size:150, expected output ams-site,timeline.metrics.cache.size:150, | With sed , to remove the lines whose second field contains content , you could do: sed '/^[^,]*,[^,]*content/d' infile or to remove the lines whose second field starts with content : sed '/^[^,]*,content/d' infile | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/468860",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/237298/"
]
} |
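An equivalent sketch with awk, splitting on commas (assuming, as in the sed answer, that the second field is the text between the first and second comma):
    awk -F, '$2 !~ /content/' infile     # drop lines whose 2nd field contains "content"
    awk -F, '$2 !~ /^content/' infile    # drop lines whose 2nd field starts with "content"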
468,879 | I have a file with more than 200 columns. As for example purpose, I am here using a file with less number of columns(9). Below is the input file (a few lines) chr10 181243 225933 1 1 1 10 0 36chr10 181500 225933 1 1 1 106 0 35chr10 226069 255828 1 1 1 57 0 37chr10 243946 255828 1 1 1 4 0 27chr10 255989 267134 1 1 1 87 0 32chr10 255989 282777 1 1 1 61 0 34chr10 267297 282777 1 1 1 61 0 37chr10 282856 283524 1 1 1 92 0 35chr10 282856 285377 1 1 1 1 0 15chr10 283618 285377 1 1 1 72 0 33 I want to rearrange the file such that my last column (here the 9th column) is the 4th column in the output file and then print everything else. So the output I am looking for is chr10 181243 225933 36 1 1 1 10 0chr10 181500 225933 35 1 1 1 106 0chr10 226069 255828 37 1 1 1 57 0chr10 243946 255828 27 1 1 1 4 0chr10 255989 267134 32 1 1 1 87 0chr10 255989 282777 34 1 1 1 61 0chr10 267297 282777 37 1 1 1 61 0chr10 282856 283524 35 1 1 1 92 0chr10 282856 285377 15 1 1 1 1 0chr10 283618 285377 33 1 1 1 72 0 On a file with fewer number of columns, I can use something like this to achieve the above output: awk -v OFS="\t" '{print $1,$2,$3,$9,$4,$5,$6,$7,$8}' If now I have a file with a large number of columns, how can I place the last column of the file as the 4th column and rest I print as it is? | Perl is very concise for this: split each line into words, pop off the last word and insert it at index 3 (0-based) $ perl -lane 'splice @F, 3, 0, pop(@F); print "@F"' file | column -tchr10 181243 225933 36 1 1 1 10 0chr10 181500 225933 35 1 1 1 106 0... | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/468879",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/63486/"
]
} |
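If perl is not at hand, roughly the same rearrangement can be written in awk without listing every column by hand; a sketch that assumes tab-separated output is wanted, as in the question's own attempt:
    awk -v OFS='\t' '{ last = $NF; for (i = NF; i > 4; i--) $i = $(i-1); $4 = last; print }' file
Assigning to a field makes awk rebuild the record with OFS, so the output comes out tab-delimited regardless of the input spacing.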
468,897 | I was able to backup a drive using the following command. pv -EE /dev/sda > disk-image.img This is all well and good, but now I have no way of seeing the files unless I use this command pv disk-image.img > /dev/sda This, of course, writes the data back to the disk which is not what I want to do. My question is what can I do to mount the .img file itself instead of just writing back to a disk? I've tried mounting using loop but it seems to complain about an invalid NTFS. $ mount -o loop disk-image.imgmount: disk-image.img: can't find in /etc/fstab.$ mount -o loop disk-image.img /mnt/disk-image/NTFS signature is missing.Failed to mount '/dev/loop32': Invalid argumentThe device '/dev/loop32' doesn't seem to have a valid NTFS.Maybe the wrong device is used? Or the whole disk instead of apartition (e.g. /dev/sda, not /dev/sda1)? Or the other way around? | You backed up the whole disk including the MBR (512 bytes), and not a simple partition which you can mount, so you have to skip the MBR. Please try with: sudo losetup -o 512 /dev/loop0 disk-image.imgsudo mount -t ntfs-3g /dev/loop0 /mnt Edit: as suggested by @grawity: sudo losetup --partscan /dev/loop0 disk-image.imgsudo mount -t ntfs-3g /dev/loop0 /mnt | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/468897",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/270429/"
]
} |
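If the 512-byte guess does not match (for example the partition does not start at sector 1), the partition table inside the image can be read directly to compute the offset; a sketch:
    fdisk -l disk-image.img            # note the partition's Start sector and the sector size
    sudo losetup -o $((2048 * 512)) /dev/loop0 disk-image.img    # 2048 is just an example start sector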
468,966 | To count lines wider than 80 columns I am, currently, using this command: $ git grep -h -c -v '^.\{,80\}$' **/*.{c,h,p{l,y}} \ |awk 'BEGIN { i=0 } { i+=$1 } END { printf ("%d\n", i) }'44984 Unfortunately, the repo uses tabs for indenting so the grep patternis inaccurate. Is there anyway to have the regex treat tabs at thestandard width of 8 chars like how wc -L does? For the purpose of this question, we may assume the contributors were disciplined enough to indent consistently, or that they have git commit hooks in lieu of discipline. For reasons related to performance, I’d prefer a solution that works inside git-grep(1) or maybe another grep tool, without preprocessing files . | Preprocess the files by piping them through expand . The expand utility will expand tabs appropriately (using the standard tab stops at every 8th character). find . -type f \( -name '*.[ch]' -o -name '*.p[ly]' \) -exec expand {} + |awk 'length > 80 { n++ } END { print n }' | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/468966",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/143505/"
]
} |
469,070 | I would like to install Debian 5 on an older PC, because I expect that the kernel of Debian 5 would work better on this computer. I downloaded the netinstall ISO from debian.org and I tried to install it on a Virtualbox machine. I got this error: Bad mirror . I changed the mirror to archive.debian.org as a hostname, then /debian/ and the problem got resolved. My problem right now is that the installation stucks on Please wait... , on the screen of Select and install (exactly after choosing what to install - only Standard System - at 13%). I don't get any errors. I don't know also how to check logs or something else if there exists some. When I Press CTRL + ALT + F4 , I see the following on the screen: > sep 14 15:36:00 in-target: You should only proceed with the installation if you re certain that> sep 14 15:36:00 in-target: this is what you want to do.> sep 14 15:36:00 in-target:> sep 14 15:36:00 in-target: ispell ibritish wamerican mlocate exim4-config libnfsidmapZ bind9-host> sep 14 15:36:00 in-target: mime-support libidn11 telnet lsof bash-completion dsutils> sep 14 15:36:00 in-target: exim4-daemon-light perl libcap2 mutt reportbug libds58 bc m4 doc-debian> sep 14 15:36:00 in-target: dc at libeuent1 ncurses-term libpcre3 doc-linux-texwhois libsqlite3-0> sep 14 15:36:00 in-target: python2.5 python-minimal libisccc50 procmail time 1ibrpcsecgss3> sep 14 15:36:00 in-target: liblwres50 python ftp pciutils dictionaries-commonpython-central w3m> sep 14 15:36:00 in-target: openbsd-inetd libbind9-50 libxle libgme debian-fafile ucf> sep 14 15:36:00 in-target: perl-modules python2.5-minimal libldap-2.4-2 libiscfg50 libdb4.5> sep 14 15:36:00 in-target: bsd-mailx exim4 libgc1c2 exim4-base patch libisc50 libgssgluel iamerican> sep 14 15:36:00 in-target: portmap nfs-common less libmagicl texinfo liblockfile1> sep 14 15:36:00 in-target:> sep 14 15:36:00 in-target: Do you want to ignore this warning and proceed anyway> sep 14 15:36:00 in-target: To continue, enter "Yes": to abort, enter "No": What is this warning message about? What can I do? Important to note that I had tried to install Debian 9 on a VirtualBox and it worked. I tried to install Debian 6 and had the same problem. | I would like to install Debian 5 on an older PC, because Debian 5's kernel should work well on this computer. Umm... no! That is in fact a Really Bad Idea. There are multiple GNU/Linux distributions available that will run on - and are in fact made for - older 32bit PC's (AntiX, Bodhi etc). You should never run operating systems that have reach end of life, and as such do not recieve security updates in a timely order. And I fail to see why an older kernel should work better than a new one, if it is non PAE you are looking for, there are alternatives (see above). | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/469070",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/177948/"
]
} |
469,253 | The Linux Programming Interface shows the layout of a virtual address space of a process. Is each region in the diagram a segment? From Understanding The Linux Kernel , is it correct that the following means that the segmentation unit in MMU maps the segments and offsets within segments into the virtual memory address, and the paging unit then maps the virtual memory address to the physical memory address? The Memory Management Unit (MMU) transforms a logical address into a linear address by means of a hardware circuit called a segmentation unit; subsequently, a second hardware circuit called a paging unit transforms the linear address into a physical address (see Figure 2-1). Then why does it say that Linux doesn't use segmentation but only paging? Segmentation has been included in 80x86 microprocessors to encourage programmers to split their applications into logically related entities, such as subroutines or global and local data areas. However, Linux uses segmentation in a very limited way. In fact, segmentation and paging are somewhat redundant, because both can be used to separate the physical address spaces of processes: segmentation can assign a different linear address space to each process, while paging can map the same linear address space into different physical address spaces. Linux prefers paging to segmentation for the following reasons: • Memory management is simpler when all processes use the same segment register values—that is, when they share the same set of linear addresses. • One of the design objectives of Linux is portability to a wide range of architectures; RISC architectures, in particular, have limited support for segmentation. The 2.6 version of Linux uses segmentation only when required by the 80x86 architecture. | The x86-64 architecture does not use segmentation in long mode (64-bit mode). Four of the segment registers: CS, SS, DS, and ES are forced to 0, and the limit to 2^64. https://en.wikipedia.org/wiki/X86_memory_segmentation#Later_developments It is no longer possible for the OS to limit which ranges of the "linear addresses" are available. Therefore it cannot use segmentation for memory protection; it must rely entirely on paging. Do not worry about the details of x86 CPUs which would only apply when running in the legacy 32-bit modes. Linux for the 32-bit modes is not used as much. It may even be considered "in a state of benign neglect for several years". See 32-Bit x86 support in Fedora [LWN.net, 2017]. (It happens that 32-bit Linux does not use segmentation either. But you don't need to trust me on that, you can just ignore it :-). | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/469253",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/674/"
]
} |
469,299 | Basically, I am running: nohup ./executable &> /tmp/out.log & In order to make sure the process is running I ran the command: tail -f /tmp/out.log But the only thing I can get from tail is "nohup: ignoring input", and only once I kill the previously started process can I see the contents of out.log | Run your program as: nohup stdbuf -oL ./executable &> /tmp/out.log & stdbuf changes the default buffering: with -oL the program's stdout is line-buffered even when redirected to a file, so output appears in out.log as it is produced. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/469299",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/311038/"
]
} |
469,312 | I would like to figure out which processes are communicating with which websites over a period of time. All what I found programs like ss that list the connections that open this instant and then exit. What I, actually, want is something like wireshark, but one that would log process names. Is there really no such a program? | If you have a recent kernel (preferably at least 4.9, but apparently some things work at 4.2), then you can take advantage of the new dtrace facility that allows you to intercept every tcp connect() call in the kernel and show the process id, remote ip address and port. Since this does not poll, you will not miss any short-lived connections.From the Brendan Gregg blog of 2016 typical output is # tcpconnectPID COMM IP SADDR DADDR DPORT1479 telnet 4 127.0.0.1 127.0.0.1 231469 curl 4 10.201.219.236 54.245.105.25 801469 curl 4 10.201.219.236 54.67.101.145 801991 telnet 6 ::1 ::1 232015 ssh 6 fe80::2000:bff:fe82:3ac fe80::2000:bff:fe82:3ac 22 Further examples are in the bcc-tools package source . Built packages to install are available for several distributions or you can follow the compilation instructions. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/469312",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/311050/"
]
} |
469,347 | I want to remove duplicate lines from a file with words of Syriac script. The source file has 3 lines, 1st and 3rd are identical. $ cat file.txt ܐܒܘܢܢܗܘܐܐܒܘܢ When I use sort and uniq , the result presumes that all the 3 lines are identical, which is wrong: $ cat file.txt | sort | uniq -c 3 ܐܒܘܢ Explicitly setting locale to Syriac doesn't help either. $ LC_COLLATE=syr_SY.utf8 cat file.txt | sort | uniq -c 3 ܐܒܘܢ Why would that happen?I'm using Kubuntu 18 and bash, if that matters. | The GNU implementation of uniq as found on Ubuntu, with -c , doesn't report counts of contiguous identical lines but counts of contiguous lines that sort the same¹. Most international locales on GNU systems have that bug that many completely unrelated characters have been defined with the same sort order most of them because their sort order is not defined at all. Most other OSes make sure all characters have different sorting order. $ expr ܐ = ܒ1 ( expr 's = operator, for arguments that are not numerical, returns 1 if operands sort the same, 0 otherwise). That's the same with ar_SY.UTF-8 or en_GB.UTF-8 . What you'd need is a locale where those characters have been given a different sorting order. If Ubuntu had locales for the Syriac language, you could expect those characters to have been given a different sorting order, but Ubuntu doesn't have such locales. You can look at the output of locale -a for a list of supported locales. You can enable more locales by running dpkg-reconfigure locales as root . You can also define more locales manually using localedef based on the definition files in /usr/share/i18n/locales , but you'll find no data for the Syriac language there. Note that in: LC_COLLATE=syr_SY.utf8 cat file.txt | sort | uniq -c You're only setting the LC_COLLATE variable for the cat command (which doesn't affect the way it outputs the content of the file, cat doesn't care about collation nor even character encoding as it's not a text utility). You'd want to set it for both sort and uniq . You'd also want to set LC_CTYPE to a locale that has a UTF-8 charset. As your system doesn't have syr_SY.utf8 locale, that's the same as using the C locale (the default locale). Actually, here the C locale or C.UTF-8 is probably the locale you'd want to use. In those locales, the collation order is based on code point, Unicode code point for C.UTF-8, byte value for C, but that ends up being the same as the UTF-8 character encoding has that property. $ LC_ALL=C expr ܐ = ܒ0$ LC_ALL=C.UTF-8 expr ܐ = ܒ0 So with: (export LANG=ar_SY.UTF-8 LC_COLLATE=C.UTF-8 LANGUAGE=syr:ar:en unset LC_ALL sort <file | uniq -c) You'd have a LC_CTYPE with UTF-8 as the charset, a collation order based on code point, and the other settings relevant to your region, so for instance error messages in Syriac or Arabic if GNU coreutils sort or uniq messages had been translated in those languages (they haven't yet). If you don't care about those other settings, it's just as easy (and also more portable) to use: <file LC_ALL=C sort | LC_ALL=C uniq -c Or (export LC_ALL=C; <file sort | uniq -c) as @isaac has already shown. ¹ note that POSIX compliant uniq implementations are not meant to compare strings using the locale's collation algorithm but instead do a byte-to-byte equality comparison. That was further clarified in the 2018 edition of the standard (see the corresponding Austin group bug ). 
But GNU uniq currently does use strcoll() , even under POSIXLY_CORRECT ; it also has a -i option for case-insensitive comparison which ironically doesn't use locale information and only works correctly on ASCII input | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/469347",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/256446/"
]
} |
469,441 | I am debugging a program and not quite sure why I can not drop privileges. I have root permissions via sudo and I can call setgid/setuid , but the operation [is] is not supported. Basic code to reproduce (golang): package mainimport ( "fmt" "os" "strconv" "syscall")func main() { if os.Getuid() != 0 { fmt.Println("run as root") os.Exit(1) } uid, err := strconv.Atoi(os.Getenv("SUDO_UID")) check("", err) gid, err := strconv.Atoi(os.Getenv("SUDO_GID")) check("", err) fmt.Printf("uid: %d, gid: %d\n", uid, gid) check("gid", syscall.Setgid(gid)) check("uid", syscall.Setuid(uid))}func check(message string, err error) { if err != nil { fmt.Printf("%s: %s\n", message, err) os.Exit(1) }} Example output: $ sudo ./drop-sudo uid: 1000, gid: 1000gid: operation not supported System info: $ uname -aLinux user2460234 4.15.0-34-generic #37-Ubuntu SMP Mon Aug 27 15:21:48 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux | Your programming language simply does not support such things. It's complex to do this stuff on Linux, because of the architecture of Linux. The C libraries (e.g. GNU and musl) hide this complexity. It continues to be one of the known problems with threads on Linux. The Go language does not replicate the mechanism of the C libraries. The current implementation of those functions is not a system call, and has not been since 2014 . Further reading Jonathan de Boyne Pollard (2010). The known problems with threads on Linux . Frequently Given Answers. Michał Derkacz (2011-01-21). syscall: Setuid/Setgid doesn't apply to all threads on Linux . Go bug #1435 | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/469441",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/280031/"
]
} |
469,537 | The established structure of the .deb file name is package_version_architecture.deb . According to this paragraph: Some packages don't follow the name structure package_version_architecture.deb . Packages renamed by dpkg-name will follow this structure. Generally this will have no impact on how packages are installed by dselect/dpkg, but other installation tools might depend on this naming structure. Question: However, are there any real situations when renaming the .deb package file is highly un recommended? Is it a normal practice to provide a custom .deb file name for my software? Example: My Program for Linux v1.0.0 (Pro).deb — the custom naming my-program_1.0.0-1_amd64.deb — the proper official naming Note: I'm not planning to create a repo, I'm just hosting the .deb package of my software on my website for direct download. | Over the years, I’ve accumulated a large number of .deb packages with non-standard names, and I don’t remember running into any problems. “Famous” packages with non-standard names that people might come across nowadays include google-chrome-stable_current_amd64.deb and steam.deb . (In both cases, the fixed, versionless name ensures that a stable URL can be used for downloads, and a stable name for installation instructions.) However I don’t remember running across any with spaces in their names; that shouldn’t cause issues with tools either, but it might cause confusion for your users (since they’ll need to quote the filename or escape the spaces if they’re using shell-based tools). Another point to note is that using a non-standard name which isn’t the same as your package name (as stored in the control file) could also cause confusion, e.g. when attempting to remove the package (since the package name won’t be the same as the name used to install it). As a result of all this, if you don’t want to stick to the canonical name I would recommend something like my-program.deb or my-program_amd64.deb (depending on whether you want to support multiple architectures). You can make that a symlink to the versioned filename too if you want to allow older versions to be downloaded. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/469537",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/183516/"
]
} |
469,543 | I have seen related questions such as this but they don't provide the exact answer to my question From my experiments as well as this answer , printenv and env pretty much show the same set of system variables. If I set the variables in /etc/bash.bashrc (supposed to be for system-wide system variables) SYSTEM_ENVI=1000 ~/.bashrc (supposed to be for user-specific system variables) USER_ENVI=10 I even logged out and logged in so the /etc/environment takes effect.The following scenario takes place: $echo $SYSTEM_ENVI//outputs 1000$echo $USER_ENVI//outputs 10$CURR_ENVI=1$env | grep USER_ENVI//nothing shows up, the same if I grepped SYSTEM_ENVI or CURR_ENVI$set | grep USER_ENVI//shows up USER_ENVI assignment, the same if I grepped SYSTEM_ENVI or CURR_ENVI My questions are: What system variables do printenv / env print? Should one use set to see all accessible variable (system variable and local variable) instead of printenv or env ? Regarding not duplication justification As far as I am concern, this question and the marked answer helped me realize the facts that: Shell variables are not environment variables Assignments in /etc/bash.bashrc or ~/.bashrc don't create environment variables but rather instruct the interactive non-login-shell processes to create and initialize these shell variables on startups. I think my question is not necessarily different from this one but reading the marked answer on that one doesn't satisfy me as much as the answer given in this post. | env and printenv are printing the list of environment strings (meant to contain environment variable definitions) that is given to them by the command that executes them. The caller will eventually do a: execve("/usr/bin/env", argv, envp); system call where argv and envp are two lists of strings. env / printenv just print the list of strings in envp , one per line. By convention, the strings in envp are in the format var=value , but they don't have to be (I don't know of any execve() implementation that enforces it) and most env , printenv implementations don't care when they display them. When the caller is a POSIX shell, it will include in the envp that it passes to env the list of its shell variables that are marked for export (either because the user called export / typeset -x on it, or because the variable was already in the environment that the shell received on start-up). If some of the environment variables that the shell received on start-up could not be mapped to a shell variable, or if any of the envp strings it received didn't contain a = character, depending on the shell implementation, those strings will be passed along untouched, or the shell will strip them or some of them. Example with bash , using GNU env to pass a list of arbitrary variable names ( env can't pass arbitrary envp strings though, they have to contain a = , and the ones that use setenv() can't pass some that start with = ¹). $ env -i '=foo' '1=x' '+=y' bash -c printenv+=y1=x[...] (the variable with the empty name was removed but not the other ones). Also, if the shell received multiple envp strings for the same variable name, depending on the shell, they will all be passed along, or only the first one, or only the last one. set in POSIX shells prints the list of shell variables, including non-scalar ones for shells that support array/hash types, whether they've been marked for export or not. In POSIX shells you can also use export -p to list the variables that have been marked for export. 
Contrary to env / printenv , that also lists variables that have been marked for export but not be given any value yet. In Korn-like shells like ksh , zsh or bash , you can also use typeset to get more information including attributes of variables, and list variables by type (like typeset -a to list the array variables). Here, by adding USER_ENVI=10 to your ~/.bashrc , you're configuring the interactive non-login invocations of the bash shell to define a USER_ENVI shell variable on start-up. Since you've not used export , that variable stays a shell variable (unless it was in the environment when bash started), so it's not passed as environment variables to commands executed by that shell. /etc/environment , itself, on Ubuntu 16.04 is read by the pam_env.so pluggable authentication module. Applications that log you in like login , sshd , lightdm will read those files if configured to use pam_env.so in /etc/pam.d and pass the corresponding environment variables (nothing to do with shell variables here) to the command they start in your name after you authenticate (like your login shell for login / sshd , or your graphical session manager for lightdm ...). Since the environment is inherited by default, when your session manager executes a terminal emulator which in turn executes your login shell, those environment variables will be passed along at each step, and your shell will map them to shell variables which you can expand in command line with things like echo "$VAR" . pam_env env files like /etc/environment look like shell scripts, but pam_env doesn't invoke a shell to parse them and understands only a subset of the shell syntax and only allows defining variables whose name is made of one or more ASCII alpha-numeric characters or underscores (it does let you define a 123 variable though which is not a valid POSIX shell variable name). ¹, to pass a list of arbitrary env strings, you can also call execve() directly like with: perl -e 'require "syscall.ph"; $cmd = "/bin/zsh"; $args = pack("p*x[p]", "sh", "-c", "printenv"); $env = pack("p*x[p]", "a=b", "a=c", "", "+=+", "=foo", "bar"); syscall(SYS_execve(), $cmd, $args, $env)' here testing with zsh instead of bash | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/469543",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/311201/"
]
} |
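A compact demonstration of the shell-variable versus environment-variable distinction discussed above (works in any POSIX shell):
    FOO=shell-only                 # shell variable, not exported
    export BAR=exported
    set | grep -E '^(FOO|BAR)='    # both appear
    env | grep -E '^(FOO|BAR)='    # only BAR appears
    sh -c 'echo "${FOO-unset} ${BAR-unset}"'   # a child process sees only BAR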
469,547 | I have a file with two columns of data, let's say: kevin n1edwin n2mevin n3 I would like to generate a statement with something like this. --This is kevin and his 'roll number' is n1--This is edwin and his 'roll number' is n2--This is mewin and his 'roll number' is n3 Now, I am unable to do this with awk . It doesn't like dash lines "--" or single quotes (') in the middle of the statement. I would like the output to be in the way I have shown above? | Using awk: awk 'NF{print "--This is " $1 " and his \047roll number\047 is " $2 }' file \047 is the octal code for the single quote ' . Another alternative is to define a variable holding the single quote character: awk -v sq="'" 'NF{print "--This is " $1 " and his "sq"roll number"sq" is " $2 }' file | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/469547",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/311257/"
]
} |
469,550 | I would like to extract data from a file and organize it in a big fixed-widths table. I can expect that this table will have multiple columns, let's say 30 columns. If I create this table using the traditional awk command line, then I will need to write a very long awk command line, something similar to the following: awk '{printf "%-5s %-5s %-5s %-5s %-5s %-5s %-5s %-5s %-5s %-5s %-5s %-5s %-5s %-5s %-5s %-5s %-5s %-5s %-5s %-5s %-5s %-5s %-5s %-5s %-5s %-5s %-5s %-5s %-5s %-5s\n", $1,$2,$3,$4,$5,$6,$7,$8,$9,$10,$11,$12,$13,$14,$15,$16,$17,$18,$19,$20,$21,$22,$23,$24,$25,$26,$27,$28,$29,$30}' Is there anyway to make this linear shorter? For example, I am thinking of implementing an array inside the previous long command. This array will tell awk what are the numbers and the widths of the columns that I would like to create, instead of defining each column separately, something like: awk 'BEGIN {for i in {1..30}; do echo %-5s\n print i} How can I implement that correctly inside awk to create multiple fixed-widths columns? | You can do the print, itself, inside a loop, one field at a time. awk '{for(i=1;i<=NF;i++) { printf "%-5s",$i } ; printf("\n"); }' Note the printing of the newline is needed after the loop to prevent multiple lines all merging into one. e.g echo a b c 32 87 x5 | awk '{for(i=1;i<=NF;i++) { printf "%-5s",$i } ; printf("\n"); }'a b c 32 87 x5 | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/469550",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/-1/"
]
} |
469,561 | Right now my approach is to: find directories containing a.txt find -type f -iname "a.txt" | sed -r 's|/[^/]+$||' > a_paths.txt find directories containing b.txt find -type f -iname b.txt | sed -r 's|/[^/]+$||' > b_paths.txt print the difference comm -23 <(sort a_paths.txt ) <(sort b_paths.txt) Is there some efficient way with a find one-liner? | If your find supports -execdir and -printf : find . -name a.txt -execdir [ ! -e b.txt ] \; -printf %h\\n will look for files named a.txt , check whether they have a sibling b.txt file, and if they don’t, output the containing directory’s name. Without -execdir or -printf : find . -name a.txt -exec sh -c ' for file do dir=${file%/*} [ -e "$dir/b.txt" ] || printf "%s\n" "$dir" done' sh {} + It's also more efficient in that it doesn't run one command per file. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/469561",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/40110/"
]
} |
469,585 | When using the command strace with the flag -T , I would like to know what is the time unit used to display time spent in syscalls? I assume it should be in seconds, but I am not quite sure and it seems to be omitted from the manual. | From the source code : if (Tflag) { ts_sub(ts, ts, &tcp->etime); tprintf(" <%ld.%06ld>", (long) ts->tv_sec, (long) ts->tv_nsec / 1000);} This means that the time is shown in seconds, with microseconds (calculated from the nanosecond value) after the decimal point. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/469585",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/311285/"
]
} |
469,645 | How to delete the complete word where cursor is positioned in nano text editor? Or if cursor is on white space, I assume it should delete the next word? Nano help shows these two functions but they are not bound to any shortcuts: Cut backward from cursor to word startCut forward from cursor to next word start Those don't appear to be what I'm looking for, but if nothing else is available, I'd like to know how to use them (especially with a shortcut key). | Save this file to ~/.nanorc and ctrl + ] cuts the word to the left, and ctrl + \ cuts right This works for me in nano version 2.5 bind ^] cutwordleft mainbind ^\\ cutwordright main This works for me in nano version 2.9.3 bind ^] cutwordleft mainbind ^\ cutwordright main | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/469645",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/15010/"
]
} |
469,736 | I need to run tail -f against a log file, but only for a specific amount of time, for example 20 seconds, and then exit. What would be an elegant way to achieve this? | With GNU timeout: timeout 20 tail -f /path/to/file | {
"score": 7,
"source": [
"https://unix.stackexchange.com/questions/469736",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/256195/"
]
} |
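Where GNU timeout is not available, roughly the same effect can be had from the shell itself (sketch):
    tail -f /path/to/file & pid=$!
    sleep 20
    kill "$pid"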
469,770 | When running top -n1 | head the terminal's cursor disappears.Running top -n1 brings it back. Tested in gnome-terminal and tilix in Ubuntu 16.04 and CentOS 7.5. Running top -n1 | tail doesn't have this issue, so I think, something at the end of top output let the cursor reappear which is not executed when printing the head only. What causes this and how can I get back the cursor more elegantly ? | Best way IMHO is to make top use "batch" mode ( -b flag) which is intended to be used with non-interactive use cases such as piping to another program or to a file. So, this top -n1 -b | head won't be leaving the shell without a cursor. As for the why the cursor disappears ... Since top is an interactive program, it "messes" with the terminal in order to grab input, scroll content, etc, and it hides the cursor. When terminating it has to restore the cursor and the display status it found before being called, and it does so by sending one or more control codes to the terminal itself. By piping the command through head , this control code won't get through ( head prints just the first 10 lines by default, and the output of both top and the control codes to restore the terminal state is always >10 lines). In fact, if you give head enough lines to print, the cursor appears! For example, top -n1 | head -n 100 leaves a cursor on my system. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/469770",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/236063/"
]
} |
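If the cursor has already vanished, it can usually be brought back without rerunning top by asking the terminal to show it again (sketch, relies on the terminfo cnorm capability):
    tput cnorm    # make the cursor visible again
    reset         # heavier option: reinitialise the whole terminal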
469,950 | Recently I've been digging up information about processes in GNU/Linux and I met the infamous fork bomb : :(){ : | :& }; : Theoretically, it is supposed to duplicate itself infinitely until the system runs out of resources... However, I've tried testing both on a CLI Debian and a GUI Mint distro, and it doesn't seem to impact much the system. Yes there are tons of processes that are created, and after a while I read in console messages like : bash: fork: Resource temporarily unavailable bash: fork: retry: No child processes But after some time, all the processes just get killed and everything goes back to normal. I've read that the ulimit set a maximum amount of process per user, but I can't seem to be able to raise it really far. What are the system protections against a fork-bomb? Why doesn't it replicate itself until everything freezes or at least lags a lot? Is there a way to really crash a system with a fork bomb? | You probably have a Linux distro that uses systemd. Systemd creates a cgroup for each user, and all processes of a user belong to the same cgroup. Cgroups is a Linux mechanism to set limits on system resources like max number of processes, CPU cycles, RAM usage, etc. This is a different, more modern, layer of resource limiting than ulimit (which uses the getrlimit() syscall). If you run systemctl status user-<uid>.slice (which represents the user's cgroup), you can see the current and maximum number of tasks (processes and threads) that is allowed within that cgroup. $ systemctl status user-$UID.slice ● user-22001.slice - User Slice of UID 22001 Loaded: loaded Drop-In: /usr/lib/systemd/system/user-.slice.d └─10-defaults.conf Active: active since Mon 2018-09-10 17:36:35 EEST; 1 weeks 3 days ago Tasks: 17 (limit: 10267) Memory: 616.7M By default, the maximum number of tasks that systemd will allow for each user is 33% of the "system-wide maximum" ( sysctl kernel.threads-max ); this usually amounts to ~10,000 tasks. If you want to change this limit: In systemd v239 and later, the user default is set via TasksMax= in: /usr/lib/systemd/system/user-.slice.d/10-defaults.conf To adjust the limit for a specific user (which will be applied immediately as well as stored in /etc/systemd/system.control), run: systemctl [--runtime] set-property user-<uid>.slice TasksMax=<value> The usual mechanisms of overriding a unit's settings (such as systemctl edit ) can be used here as well, but they will require a reboot. For example, if you want to change the limit for every user, you could create /etc/systemd/system/user-.slice.d/15-limits.conf . In systemd v238 and earlier, the user default is set via UserTasksMax= in /etc/systemd/logind.conf . Changing the value generally requires a reboot. More info about this: man 5 systemd.resource-control man 5 systemd.slice man 5 logind.conf http://0pointer.de/blog/projects/systemd.html (search this page for cgroups) man 7 cgroups and https://www.kernel.org/doc/Documentation/cgroup-v1/pids.txt https://en.wikipedia.org/wiki/Cgroups | {
"score": 7,
"source": [
"https://unix.stackexchange.com/questions/469950",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/275586/"
]
} |
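To see the task limit that actually applies to a session without reading cgroup files directly (their paths differ between cgroup v1 and v2), the systemctl view is the portable one; a sketch:
    systemctl show -p TasksMax user-$UID.slice
    systemd-cgtop    # live per-cgroup task counts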
470,031 | I have a file that contains some numbers $ cat file.dat0.0925930.0486310.0279570.0306990.0262500.0381560.0118230.0132840.0245290.0224980.0132170.0071050.0189160.014079 I want to make a new file that contains the difference of the current line with the previous line. Expected output should be $ cat newfile.dat-0.043962-0.0206740.002742-0.0044490.011906-0.0263330.0014610.011245-0.002031-0.009281-0.0061120.011811-0.004837 Thinking this was trivial, I started with this piece of code f="myfile.dat" while read line; do curr=$line prev= bc <<< "$line - $prev" >> newfile.datdone < $f but I realized quickly that I have no idea how to access the previous line in the file. I guess I also need to account for that no subtraction should take place when reading the first line. Any guidance on how to proceed is appreciated! | $ awk 'NR > 1 { print $0 - prev } { prev = $0 }' <file.dat-0.043962-0.0206740.002742-0.0044490.011906-0.0263330.0014610.011245-0.002031-0.009281-0.0061120.011811-0.004837 Doing this in a shell loop calling bc is cumbersome. The above uses a simple awk script that reads the values off of the file one by one and for any line past the first one, it prints the difference as you describe. The first block, NR > 1 { print $0 - prev } , conditionally prints the difference between this and the previous line if we've reached line two or further ( NR is the number of records read so far, and a "record" is by default a line). The second block, { prev = $0 } , unconditionally sets prev to the value on the current line. Redirect the output to newfile.dat to save the result there: $ awk 'NR > 1 { print $0 - prev } { prev = $0 }' <file.dat >newfile.dat Related: Why is using a shell loop to process text considered bad practice? There was some mentioning of the slowness of calling bc in a loop. The following is a way of using a single invocation of bc to do the arithmetics while still reading the data in a shell loop (I would not actually recommend solving this problem in this way, and I'm only showing it here for people interested in co-processes in bash ): #!/bin/bashcoproc bc{ read prev while read number; do printf '%f - %f\n' "$number" "$prev" >&"${COPROC[1]}" prev=$number read -u "${COPROC[0]}" result printf '%f\n' "$result" done} <file.dat >newfile.datkill "$COPROC_PID" The value in ${COPROC[1]} is the standard input file descriptor of bc while ${COPROC[0]} is the standard output file descriptor of bc . | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/470031",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/168566/"
]
} |
470,048 | When I try to sudo ssh ... I get permission denied because ssh looks in /root/.ssh for the keys, not in /home/me/.ssh . But $HOME is still set to /home/me . Why doesn't ssh look in $HOME/.ssh ? | ssh ignores $HOME , it gets the home directory from the user database based on the real ¹ uid (using the pw_dir field of the structure returned by getpwuid() ). Given that ssh could write files in there like the known_hosts one, it's just as well that it does not do is in /home/me/.ssh as you'd end up with a root-owned file there. You can always use sudo ssh -i ~/.ssh/id_rsa ... , or use an authentication agent and make sure you pass the path of the socket to that authentication agent to root: sudo --preserve-env=SSH_AUTH_SOCK ssh ... or sudo SSH_AUTH_SOCK="$SSH_AUTH_SOCK" ssh ... Also, are you sure you need to run ssh as root? Is it to be able to create tunnels or bind local port-forwards on TCP ports below 1024? If it's just to be able to login as root on the remote host, doing ssh root@host should be enough. ¹ you could actually restore the real uid to our original one while preserving the effective uid of 0: sudo perl -e '$<=getpwnam($ENV{SUDO_USER}); exec@ARGV' ssh ... | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/470048",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/24186/"
]
} |
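If you regularly need the agent to be available under sudo, a sketch of a more permanent alternative to passing SSH_AUTH_SOCK on the command line is a sudoers entry (edit with visudo; where exactly you place it depends on your setup):

Defaults env_keep += "SSH_AUTH_SOCK"

After that, a plain sudo ssh ... can reach your user's authentication agent, since sudo no longer strips that variable from the environment.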
470,226 | I'm currently building my dotfiles and I want to list all TODO: ... comments in my project's directory, I created a bash alias to do this: alias mytodo='grep --recursive "TODO: "' It works well, however, it also returns the alias definition. this is the sample output. devs@dotfiles$ mytodo bash/profile.d/aliases/: alias mytodo='grep --recursive "TODO: "' bin/git_branch_status/: # TODO: Add checking of remote branch status. tools/setup/: # TODO: Add search for existing symlinks. how can I specifically exclude that line with the alias definition? | One way to prevent a regexp from matching itself is to enclose a single character in a character class: alias mytodo='grep --recursive "TOD[O]: "' Alternatively (hat-tip to Stéphane Chazelas ), you could save the alias with extra quote-marks in the pattern: alias mytodo='grep --recursive "TO''DO: "' Then the line still won't be found, but the alias contains the original pattern. This is helpful when you want to use eg. a fixed-string match ( grep -F ). | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/470226",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/305401/"
]
} |
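The same character-class trick generalises if the alias should catch more than one marker; a hedged sketch (the -n and -E flags are optional additions, not part of the original alias):

alias mytodo='grep -rEn "(TOD[O]|FIXM[E]): "'

The alias definition itself now contains TOD[O] and FIXM[E], so it still never matches its own line, while the expanded regex matches TODO: and FIXME: in the files being searched.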
470,250 | I have file called foo.txt . This file contains values: abc.tar.gzabc.1.1.tar.gzbca-1.2.tar.gz I would like to get an output like this abc abc.tar.gzabc.1.1 abc.1.1.tar.gzbca-1.2 bca-1.2.tar.gz Same value/text has to appear before original value/text. How to achieve this using regular expressions? %s/^[a-z_-]*./\1/g Above expression I used but I got wrong output. | Capture groups :help /\( let you store what's matched by the pattern inside \(...\) ; you can then reference the match (via \1 for the first group, \2 , and so on) in the replacement (or even afterwards in the pattern itself). One approach (there are many) to your problem is to capture the filename before the .tar.gz extension. In the replacement, put the capture ( \1 ), a space, then the original text ( \0 , or & ): :%substitute/\(.*\)\.tar\.gz$/\1 &/ Alternatively, you can just match the stuff before the extension (ending the match with \ze ), and then duplicate that: :%substitute/.*\ze\.tar\.gz$/& &/ Problems with your attempt You used the backreference \1 , but never captured anything. The [a-z_-] does not match a literal . , but this appears in your example. No escaping of the final . (as \. ); it would match any character. No duplication in the replacement part; you effectively removed text instead of adding. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/470250",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/119746/"
]
} |
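For comparison, the same transformation outside Vim, e.g. with GNU sed, could look like this sketch; & stands for the whole match, just as \0 or & do in Vim's replacement:

$ sed -E 's/(.*)\.tar\.gz$/\1 &/' file.txt
abc abc.tar.gz
abc.1.1 abc.1.1.tar.gz
bca-1.2 bca-1.2.tar.gz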
470,262 | I have more than fifty files with a distinct name in a directory. For example: File1: Type,A,RR,1,CD,2, File2: Type,B,CD,2,FG,3, File3: Type,C,RR,5,FG,8,QR,9, Desired output Type,A,B,C,CD,2,2,,FG,,3,8,QR,,,9,RR,1,,5 I tried with join and paste but no luck... Any suggestions? | Capture groups :help /\( let you store what's matched by the pattern inside \(...\) ; you can then reference the match (via \1 for the first group, \2 , and so on) in the replacement (or even afterwards in the pattern itself). One approach (there are many) to your problem is to capture the filename before the .tar.gz extension. In the replacement, put the capture ( \1 ), a space, then the original text ( \0 , or & ): :%substitute/\(.*\)\.tar\.gz$/\1 &/ Alternatively, you can just match the stuff before the extension (ending the match with \ze ), and then duplicate that: :%substitute/.*\ze\.tar\.gz$/& &/ Problems with your attempt You used the backreference \1 , but never captured anything. The [a-z_-] does not match a literal . , but this appears in your example. No escaping of the final . (as \. ); it would match any character. No duplication in the replacement part; you effectively removed text instead of adding. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/470262",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/270935/"
]
} |
470,266 | I use yum list php-imap list the php-imap: # yum list php-imapLoaded plugins: fastestmirrorLoading mirror speeds from cached hostfile * base: mirrors.zju.edu.cn * epel: ftp.cuhk.edu.hk * extras: mirrors.zju.edu.cn * updates: mirrors.zju.edu.cn * webtatic: sp.repo.webtatic.comInstalled Packagesphp-imap.x86_64 5.4.16-7.el7 @epel But how can I find the location of it? I know I can use find / -name php-imap for searching, but it is every long time, even the command do not have reactivity. In my CentOS 7, the /var/tmp/ is an empty directory. and list the /var/cache/yum/x86_64/7/ are: base epel extras mysql56-community mysql-connectors-community mysql-tools-community timedhosts timedhosts.txt updates webtatic. there is no php-imap . | If you want to know where the rpm file is, depending on your yum config your system may or may not keep it. Check /etc/yum.conf (not sure this is the right location on ALL systems but on my Centos box this is the right place) for the line "cachedir=" and this will tell you where the cache of rpms is located. For example: grep cachedir /etc/yum.conf My system says /var/cache/yum/$basearch/$releasevar In the same file, if keepcache=0 is included, your system will not save the rpms. Change this to keepcache=1 to keep them around. Depending on your storage space you might need to clean this up now and then. If you want to know where the actual software is on your system, do this: rpm -qa | grep php-imap Then take the package name from the result (looks like it might be php-imap.x86_64) and do this rpm -q --filesbypkg <package full name here> | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/470266",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/260857/"
]
} |
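Once the package is installed, two shorter ways to see where its files went (a sketch using standard rpm and yum-utils tooling):

$ rpm -ql php-imap        # list every file the installed package owns
$ rpm -qc php-imap        # list only its configuration files
$ repoquery -l php-imap   # the same listing from repo metadata, works even before installing (needs yum-utils)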
470,288 | What's a good one-liner to generate an easily memorable password, like xkcd's correct horse battery staple or a Bitcoin seed? EDIT 1 : This is not the same as generating a random string since random strings are not at all memorable. Compare to the obligatory xkcd ... | First of all, install a dictionary of a language you're familiar with, using: sudo apt-get install <language-package> To see all available packages: apt-cache search wordlist | grep ^w Note: all installation instructions assume you're on a debian-based OS. After you've installed dictionary run: WORDS=5; LC_ALL=C grep -x '[a-z]*' /usr/share/dict/words | shuf --random-source=/dev/urandom -n ${WORDS} | paste -sd "-" Which will output ex: blasphemous-commandos-vasts-suitability-arbor To break it down: WORDS=5; — choose how many words you want in your password. LC_ALL=C grep -x '[a-z]*' /usr/share/dict/words — choose only words containing lowercase alphabet characters (it excludes words with ' in them or funky characters like in éclair ). LC_ALL=C ensures that [a-z] in the regex won't match letter-like symbols other than lowercase letters without diacritics. shuf --random-source=/dev/urandom -n ${WORDS} — chose as many WORDS as you've requested. --random-source=/dev/urandom ensures that shuf seeds its random generator securely; without it, shuf defaults to a secure seed, but may fall back to a non-secure seed on some systems such as some Unix emulation layers on Windows. paste -sd "-" — join all words using - (feel free to change the symbol to something else). Alternatively you can wrap it in a function: #!/bin/bashfunction memorable_password() { words="${1:-5}" sep="${2:--}" LC_ALL=C grep -x '[a-z]*' /usr/share/dict/words | shuf --random-source=/dev/urandom -n ${words} | paste -sd "$sep"} or #!/bin/shmemorable_password() { words="$1" if [ -z "${words}" ]; then words=5 fi sep="$2" if [ -z "${sep}" ]; then sep="-" fi LC_ALL=C grep -x '[a-z]*' /usr/share/dict/words | shuf --random-source=/dev/urandom -n ${words} | paste -sd "$sep"} Both of which can be called as such: memorable_password 7 _memorable_password 4memorable_password Returning: skipped_cavity_entertainments_gangway_seaports_spread_communiqueevaporated-clashes-bold-presumingexcelling-thoughtless-pardonable-promulgated-forbearing Bonus For a nerdy and fun, but not very secure password, that doesn't require dictionary installation, you can use (courtesy of @jpa): WORDS=5; man git | \ tr ' ' '\n' | \ egrep '^[a-z]{4,}$' | \ sort | uniq | \ shuf --random-source=/dev/urandom -n ${WORDS} | \ paste -sd "-" | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/470288",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/311902/"
]
} |
470,299 | I have a number of variables in a .dat file that I automatically changed with a script. One of these files has a parameter, bar , that is arranged as such in a .dat file: var1 var2 var3 foo barT T T 100 100 I used to use the following lines of a bash script to change the value of bar from an arbitrary initial value to the desired value, in this case 2000. This script would change 'bar' to 2000. LINE1=$(awk '/bar/ {++n;if (n==1) {print FNR}}' data.dat)((LINE1=$LINE1 + 1))OLD1=$(awk '{for(i=1;i<'$LINE1';i++) getline}{print $12}' data.dat)sed -i '' "${LINE1}s/$OLD1/2000/" data.dat However, I now must now change foo alongside bar . In this example, this is setting foo and bar both to 2000. LINE1=$(awk '/foo/ {++n;if (n==1) {print FNR}}' data.dat)((LINE1=$LINE1 + 1))OLD1=$(awk '{for(i=1;i<'$LINE1';i++) getline}{print $12}' data.dat)sed -i '' "${LINE1}s/$OLD1/2000/" data.datLINE1=$(awk '/bar/ {++n;if (n==1) {print FNR}}' data.dat)((LINE1=$LINE1 + 1))OLD1=$(awk '{for(i=1;i<'$LINE1';i++) getline}{print $12}' data.dat)sed -i '' "${LINE1}s/$OLD1/2000/" data.dat This instead only changed the foo to 2000 while leaving bar unchanged. I realize that this is an issue with the way I've described the regular expression, but I have been unable to change both variables with an awk/sed expression. | First of all, install a dictionary of a language you're familiar with, using: sudo apt-get install <language-package> To see all available packages: apt-cache search wordlist | grep ^w Note: all installation instructions assume you're on a debian-based OS. After you've installed dictionary run: WORDS=5; LC_ALL=C grep -x '[a-z]*' /usr/share/dict/words | shuf --random-source=/dev/urandom -n ${WORDS} | paste -sd "-" Which will output ex: blasphemous-commandos-vasts-suitability-arbor To break it down: WORDS=5; — choose how many words you want in your password. LC_ALL=C grep -x '[a-z]*' /usr/share/dict/words — choose only words containing lowercase alphabet characters (it excludes words with ' in them or funky characters like in éclair ). LC_ALL=C ensures that [a-z] in the regex won't match letter-like symbols other than lowercase letters without diacritics. shuf --random-source=/dev/urandom -n ${WORDS} — chose as many WORDS as you've requested. --random-source=/dev/urandom ensures that shuf seeds its random generator securely; without it, shuf defaults to a secure seed, but may fall back to a non-secure seed on some systems such as some Unix emulation layers on Windows. paste -sd "-" — join all words using - (feel free to change the symbol to something else). 
Alternatively you can wrap it in a function: #!/bin/bashfunction memorable_password() { words="${1:-5}" sep="${2:--}" LC_ALL=C grep -x '[a-z]*' /usr/share/dict/words | shuf --random-source=/dev/urandom -n ${words} | paste -sd "$sep"} or #!/bin/shmemorable_password() { words="$1" if [ -z "${words}" ]; then words=5 fi sep="$2" if [ -z "${sep}" ]; then sep="-" fi LC_ALL=C grep -x '[a-z]*' /usr/share/dict/words | shuf --random-source=/dev/urandom -n ${words} | paste -sd "$sep"} Both of which can be called as such: memorable_password 7 _memorable_password 4memorable_password Returning: skipped_cavity_entertainments_gangway_seaports_spread_communiqueevaporated-clashes-bold-presumingexcelling-thoughtless-pardonable-promulgated-forbearing Bonus For a nerdy and fun, but not very secure password, that doesn't require dictionary installation, you can use (courtesy of @jpa): WORDS=5; man git | \ tr ' ' '\n' | \ egrep '^[a-z]{4,}$' | \ sort | uniq | \ shuf --random-source=/dev/urandom -n ${WORDS} | \ paste -sd "-" | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/470299",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/311907/"
]
} |
470,301 | I have a text file, and I would like to add lines to the file arranged as follows; #define ICFGx 0x2y where x is a decimal number that begins at 0 and ends at 255, incrementing by 1 with each line and y is a hexadecimal number that begins at 000 and ends at 3FC, incrementing by 0x004 with each line. #define ICFG0 0x2000#define ICFG1 0x2004#define ICFG2 0x2008#define ICFG3 0x200C I would also like add them from a certain line onward, say line 500. Is there any way to go about this task from the command line? I'm fairly new to using the linux terminal and I haven't done much bash scripting yet. | First of all, install a dictionary of a language you're familiar with, using: sudo apt-get install <language-package> To see all available packages: apt-cache search wordlist | grep ^w Note: all installation instructions assume you're on a debian-based OS. After you've installed dictionary run: WORDS=5; LC_ALL=C grep -x '[a-z]*' /usr/share/dict/words | shuf --random-source=/dev/urandom -n ${WORDS} | paste -sd "-" Which will output ex: blasphemous-commandos-vasts-suitability-arbor To break it down: WORDS=5; — choose how many words you want in your password. LC_ALL=C grep -x '[a-z]*' /usr/share/dict/words — choose only words containing lowercase alphabet characters (it excludes words with ' in them or funky characters like in éclair ). LC_ALL=C ensures that [a-z] in the regex won't match letter-like symbols other than lowercase letters without diacritics. shuf --random-source=/dev/urandom -n ${WORDS} — chose as many WORDS as you've requested. --random-source=/dev/urandom ensures that shuf seeds its random generator securely; without it, shuf defaults to a secure seed, but may fall back to a non-secure seed on some systems such as some Unix emulation layers on Windows. paste -sd "-" — join all words using - (feel free to change the symbol to something else). Alternatively you can wrap it in a function: #!/bin/bashfunction memorable_password() { words="${1:-5}" sep="${2:--}" LC_ALL=C grep -x '[a-z]*' /usr/share/dict/words | shuf --random-source=/dev/urandom -n ${words} | paste -sd "$sep"} or #!/bin/shmemorable_password() { words="$1" if [ -z "${words}" ]; then words=5 fi sep="$2" if [ -z "${sep}" ]; then sep="-" fi LC_ALL=C grep -x '[a-z]*' /usr/share/dict/words | shuf --random-source=/dev/urandom -n ${words} | paste -sd "$sep"} Both of which can be called as such: memorable_password 7 _memorable_password 4memorable_password Returning: skipped_cavity_entertainments_gangway_seaports_spread_communiqueevaporated-clashes-bold-presumingexcelling-thoughtless-pardonable-promulgated-forbearing Bonus For a nerdy and fun, but not very secure password, that doesn't require dictionary installation, you can use (courtesy of @jpa): WORDS=5; man git | \ tr ' ' '\n' | \ egrep '^[a-z]{4,}$' | \ sort | uniq | \ shuf --random-source=/dev/urandom -n ${WORDS} | \ paste -sd "-" | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/470301",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/311910/"
]
} |
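A hedged sketch of one way to generate and insert such lines with standard tools; the file names and the insertion line are only examples:

$ for i in $(seq 0 255); do printf '#define ICFG%d 0x2%03X\n' "$i" "$((i * 4))"; done > icfg_defines.txt
$ head -4 icfg_defines.txt
#define ICFG0 0x2000
#define ICFG1 0x2004
#define ICFG2 0x2008
#define ICFG3 0x200C
$ sed -i '500r icfg_defines.txt' target.txt   # GNU sed: append the generated block after line 500

The format 0x2%03X relies on the offsets staying below 0x1000, which they do here since 255 * 4 = 0x3FC.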
470,408 | Network Manager is posting the correct name servers and search domains into /etc/resolv.conf when I restart the network. However, it is not in an order I like. How can I tell Network Manager to prioritize the nameservers and search domain information of a certain interface over another? Example: What I get: # cat /etc/resolv.conf # Generated by NetworkManagersearch silatria.org relinq.org pripylen.org acarime.orgnameserver 120.052.0.2nameserver 120.052.0.1nameserver 10.66.66.1 What I want # Generated by NetworkManagersearch acarime.org silatria.org relinq.org pripylen.org nameserver 10.66.66.1 nameserver 120.052.0.2nameserver 120.052.0.1 acarime.org & nameserver 10.66.66.1 belongs to my network interface enp3s0120.052.0.2.1, 120.052.0.1 & silatria.org relinq.org pripylen.org belongs to my network interface enp4s0 | Set ipv4.dns-priority of at least one of the profiles, to specify the relative order. For example nmcli connection modify "$PROFILE" ipv4.dns-priority 5 and reactivate the connection. See the manual nm-settings(5) for details. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/470408",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/142792/"
]
} |
470,412 | I have file called file.txt . In this file there are words composed of upper and lowercase letters, also there are words consist of upper or lowercase letters and numbers. I would like to filter this file, so the output is free of the words that contain both upper and lower case letters. For example, the input file.txt : AaaaBbaBAa1212aA123123AbAAAaaa In this file there are words with upper and lowercase letters (e.g. Aaa, aBp), and words contain upper/lower case letters AND digits (e.g. 123Ab). In addition, to words contain only small letters (e.g. aaa), or only capital letters (e.g. AAA). I would like to remove only the words that contain upper AND lowercase letters (e.g. Aaa, aBp), so the output is as follows: Aa1212aA123123AbAAAaaa Any ideas? | grep -Exv '[A-Za-z]*([A-Z][a-z]|[a-z][A-Z])[A-Za-z]*' Explanation The idea is to match the opposite of what you want first, i.e. those lines that contain only upper- and lower-case letters. This uses grep -Ex , i.e. grep with extended regex, match the whole line. The -v flag then negates the regex, i.e. return those lines that do not match the following regex. The central part ([A-Z][a-z]|[a-z][A-Z]) matches a single upper-case letter followed by a lower-case letter, or vice versa. The outer part [A-Za-z]*...[A-Za-z]* means that the rest of the line must comprise upper- or lower-case letters only. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/470412",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/311980/"
]
} |
470,427 | I noticed these 2 command formats give different results: $ sudo route -v add -net <IP> netmask 255.255.255.255 gw <gateway># succeeds without outputting text$ sudo route -v add -net <IP>/32 gw <gateway>SIOCADDRT: Invalid argument The man file for route clearly says CIDR format should work: route [-v] [-A family] add [-net|-host] target [netmask Nm] [gw Gw]... [...] target: the destination network or host. You can provide IP addresses in dotted decimal or host/network names. So what am I missing? Note: also, the verbose option seems to be useless on this command. | The difference lies in how the route command parses its arguments. Arguably the first command should also fail with the error you get from the second one, since you are trying to set a route to a host while declaring it as a route to a network. If you replace -net by -host the second command will be accepted: $ route -v add -host <IP>/32 gw <gateway> In any case I would recommend using the ip command; with it you can add the route in either of these ways: $ ip route add <IP>/32 via <gateway> or $ ip route add <IP> via <gateway> | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/470427",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/230295/"
]
} |
470,438 | In my .bash_profile I have a parse_git_branch function from the internet, and a PS1 to color some of my output. If possible, I would like to make my git branch name colored red, instead of white. I tired changing a few variables, but with no luck. I would like (master) to be red, if possible. | The difference should be in the arguments analysis made by the route command. In my opinion it is probably inappropriate that the result of the first command is not the error that you get in the second one, since you are trying to set a route to a host specifying that is a route to a network. If you replace -net by -host the second command will be acepted: $ route -v add -host <IP>/32 gw <gateway> In any case I would recommend to use the ip command, with it you could add the route in these ways: $ ip route add <IP>/32 via <gateway> or $ ip route add <IP> via <gateway> | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/470438",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/120169/"
]
} |
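A hedged sketch of what the colouring could look like, assuming the parse_git_branch function from the question prints something like "(master)"; 31 is ANSI red and the \[ \] guards keep Bash's prompt-length accounting correct:

export PS1='\u@\h \W \[\e[31m\]$(parse_git_branch)\[\e[0m\]\$ '

Using single quotes keeps $(parse_git_branch) unexpanded in the assignment, so the function runs every time the prompt is drawn rather than once when .bash_profile is sourced.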
470,440 | A few times when I read about programming I came across the "callback" concept. Funnily, I never found an explanation I can call "didactic" or "clear" for this term "callback function" (almost any explanation I read seemed to me enough different from another and I felt confused). Is the "callback" concept of programming existent in Bash? If so, please answer with a small, simple, Bash example. | In typical imperative programming , you write sequences of instructions and they are executed one after the other, with explicit control flow. For example: if [ -f file1 ]; then # If file1 exists ... cp file1 file2 # ... create file2 as a copy of a file1fi etc. As can be seen from the example, in imperative programming you follow the execution flow quite easily, always working your way up from any given line of code to determine its execution context, knowing that any instructions you give will be executed as a result of their location in the flow (or their call sites’ locations, if you’re writing functions). How callbacks change the flow When you use callbacks, instead of placing the use of a set of instructions “geographically”, you describe when it should be called. Typical examples in other programming environments are cases such as “download this resource, and when the download is complete, call this callback”. Bash doesn’t have a generic callback construct of this kind, but it does have callbacks, for error-handling and a few other situations; for example (one has to first understand command substitution and Bash exit modes to understand that example): #!/bin/bashscripttmp=$(mktemp -d) # Create a temporary directory (these will usually be created under /tmp or /var/tmp/)cleanup() { # Declare a cleanup function rm -rf "${scripttmp}" # ... which deletes the temporary directory we just created}trap cleanup EXIT # Ask Bash to call cleanup on exit If you want to try this out yourself, save the above in a file, say cleanUpOnExit.sh , make it executable and run it: chmod 755 cleanUpOnExit.sh./cleanUpOnExit.sh My code here never explicitly calls the cleanup function; it tells Bash when to call it, using trap cleanup EXIT , i.e. “dear Bash, please run the cleanup command when you exit” (and cleanup happens to be a function I defined earlier, but it could be anything Bash understands). Bash supports this for all non-fatal signals, exits, command failures, and general debugging (you can specify a callback which is run before every command). The callback here is the cleanup function, which is “called back” by Bash just before the shell exits. You can use Bash’s ability to evaluate shell parameters as commands, to build a callback-oriented framework; that’s somewhat beyond the scope of this answer, and would perhaps cause more confusion by suggesting that passing functions around always involves callbacks. See Bash: pass a function as parameter for some examples of the underlying functionality. The idea here, as with event-handling callbacks, is that functions can take data as parameters, but also other functions — this allows callers to provide behaviour as well as data. A simple example of this approach could look like #!/bin/bashdoonall() { command="$1" shift for arg; do "${command}" "${arg}" done}backup() { mkdir -p ~/backup cp "$1" ~/backup}doonall backup "$@" (I know this is a bit useless since cp can deal with multiple files, it’s only for illustration.) 
Here we create a function, doonall , which takes another command, given as a parameter, and applies it to the rest of its parameters; then we use that to call the backup function on all the parameters given to the script. The result is a script which copies all its arguments, one by one, to a backup directory. This kind of approach allows functions to be written with single responsibilities: doonall ’s responsibility is to run something on all its arguments, one at a time; backup ’s responsibility is to make a copy of its (sole) argument in a backup directory. Both doonall and backup can be used in other contexts, which allows more code re-use, better tests etc. In this case the callback is the backup function, which we tell doonall to “call back” on each of its other arguments — we provide doonall with behaviour (its first argument) as well as data (the remaining arguments). (Note that in the kind of use-case demonstrated in the second example, I wouldn’t use the term “callback” myself, but that’s perhaps a habit resulting from the languages I use. I think of this as passing functions or lambdas around, rather than registering callbacks in an event-oriented system.) | {
"score": 7,
"source": [
"https://unix.stackexchange.com/questions/470440",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/-1/"
]
} |
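As a tiny illustration of the "callback run before every command" case mentioned in the answer, Bash's DEBUG trap works the same way as the EXIT trap shown above:

trap 'echo "about to run: $BASH_COMMAND" >&2' DEBUG

Every subsequent command in the shell then triggers that one-line callback first; $BASH_COMMAND holds the command about to be executed.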
470,538 | I am trying to install Matlab using Arch linux via the official install script. Everything works perfectly fine and the download starts after selecting all components. Now the problem is that apparently the installer puts the downloaded content into /tmp, which I have assigned 4gb (half of my ram). This has never been a problem until now. About at 25% of the installation/download the installer raises an error, saying that there is no more space left in /tmp.I checked the directory before starting the installation and it had 4gb free space. Can I redirect a different directory to the installer where there is plenty of space, because there is plenty of it free on the drive ? The installer asked for installation directory but didn't give me the option to select this. | If the installer doesn't honor the TMP or TMPDIR environment variables, as @thrig pointed out in their answer, and the /tmp partition / ramdisk by itself is too small, then simply mount something else on it: mkdir "$HOME/matlabdl"mount --bind -o nonempty "$HOME/matlabdl" /tmp Contrary to a normal mount, a --bind mount takes an existing directory and mounts it at a different place, i.e. instead of downloading into the ramdisk that normally is at /tmp the download actually goes into $HOME/matlabdl in this case. -o nonempty makes sure that the mount takes place even if /tmp is not empty, as would normally be required. After the installation completes, unmount /tmp again: umount /tmp This will make the ramdisk visible again. In case some process is still using your overridden /tmp , look for which one it is with tools like lsof . | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/470538",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/243719/"
]
} |
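If the installer does honour the usual environment variables, a simpler sketch (worth trying before the bind mount) is to point them at a directory with enough space; the directory name and installer path are assumptions:

mkdir -p "$HOME/matlab-tmp"
TMPDIR="$HOME/matlab-tmp" TMP="$HOME/matlab-tmp" ./install

Whether the MATLAB installer respects TMP/TMPDIR is not guaranteed, which is why the answer falls back to the bind-mount approach.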
470,561 | Suppose I have some text file and I want to open it with a text editor; how do I do that from the terminal? It needs to work on Red Hat 5.3 Enterprise, without downloading anything; I need the built-in text editor. I need something like: [root@localhost]# open /home/Plompy/Desktop/README_PLOMPY | In Ubuntu there is a command called xdg-open that opens a file or URL in the user's preferred application, so you can open several types of files with their pre-defined default programs. xdg-open hello_word.tiff Opens the file in the default image viewer. xdg-open Template.odt Opens the file with LibreOffice. xdg-open myfile.txt Opens the file with gedit (the text editor). To my knowledge the xdg-utils are already installed on Red Hat. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/470561",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/311733/"
]
} |
470,676 | I recently discovered the neat kitty as some aspects of the themes I'm using under ZSH don't render quite right under Xfce Terminal but do under kitty. Unfortunately I've hit a snag when it comes to using tmux on SSH connections, if SSH from my desktop/server (running Gentoo ) to any of my Raspberry Pis (running Arch Linux ARM ) or my VPS (also running Arch Linux) and start a Tmux session I'm informed.... open terminal failed: missing or unsuitable terminal: xterm-kitty However, I've a laptop which is also running Arch and if I SSH to it from my desktop/server and start an SSH session there are no problems, and vice versa, SSHing from laptop to desktop/server and Tmux runs fine. I should add that I can run Tmux sessions when SSHing to the Raspberry Pis/VPS's that are running Arch Linux if its under an Xfce Terminal. Any ideas as to how I can investigate or solve this such that Tmux sessions work everywhere? | If you receive error messages such as "Terminal unknown, missing or unsuitable terminal" upon logging in, this means the server does not recognize your terminal. The correct solution is to install the client terminal's terminfo file on the server. This tells console programs on the server how to correctly interact with your terminal. You can get info about current terminfo using infocmp and then find out which package owns it. If you cannot install it normally, you can copy your terminfo to your home directory on the server: $ ssh myserver mkdir -p ~/.terminfo/${TERM:0:1}$ scp /usr/share/terminfo/${TERM:0:1}/$TERM myserver:~/.terminfo/${TERM:0:1}/ After logging in and out from the server the problem should be fixed. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/470676",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/39149/"
]
} |
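For completeness, a hedged one-liner that copies the terminfo entry without the intermediate mkdir/scp steps (it assumes infocmp on the client and tic on the server; tic falls back to ~/.terminfo when it cannot write the system database):

$ infocmp -x xterm-kitty | ssh myserver 'tic -x /dev/stdin'

Newer kitty releases also ship an ssh kitten (kitty +kitten ssh myserver) that automates this, though availability depends on your kitty version.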
470,681 | I am running a Perl script on a Linux machine via a cron job. However, from time to time (around 1% of all cases), the script gets stuck and runs indefinitely. If I list processes, I can see its PID. However, I don't want to kill it right away; I would rather know what went wrong. Is there a way to show which lines of the script are being executed? Something like step-by-step debugging of the script based on its PID. | Try to follow these steps: First, find the PID of the process running your script; you may use a command like: ps -ef | grep <your_script_name> Let's store this PID in the shell variable $PID. Find all the child processes of this $PID by running the command: ps --ppid $PID You might find one or more (if, for example, it's stuck in a pipelined series of commands). Repeat this command a couple of times. If the output doesn't change, the script is stuck in a certain command. In this case, you can attach strace to the running child process: sudo strace -p $PID This will show you what is being executed: either an indefinite loop (like reading from a pipe) or waiting on some event that never happens. If instead the output of ps --ppid $PID keeps changing, your script is advancing but stuck somewhere, e.g. in a local loop in the script. The changing commands can give you a hint about where in the script it's looping. Finally, a very simple method to debug a Perl script is to use the Perl debugger: perl -d script.pl | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/470681",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/180442/"
]
} |
470,856 | I have a problem with the timestamps of files copied from my PC or laptop to USB drives: the last modification time of the original file and that of the copied file are different. Therefore, synchronizing files between my PC and my USB drive is quite cumbersome. A step by step description I copy an arbitrary file from my PC/laptop to a USB drive using the GUI or with the command cp -a file.txt /media/gabor/CORSAIR/ I check the last modification time of the original file: $ ls -l --time-style=full-iso file.txt-rw-rw-r-- 1 gabor gabor 0 2018-09-22 15:09:23.317098281 +0200 file.txt I check the last modification time of the copied file: $ ls -l --time-style=full-iso /media/gabor/CORSAIR/file.txt-rw-r--r-- 1 gabor gabor 0 2018-09-22 15:09:23.000000000 +0200 /media/gabor/CORSAIR/file.txt As you can see, the seconds in the last modification time of the copied file are truncated to zero decimal digits. However, if I enter the command if ! [ file.txt -nt /media/gabor/CORSAIR/file.txt ] && ! [ file.txt -ot /media/gabor/CORSAIR/file.txt ]; then echo "The last modification times are equal."; fi I get the output The last modification times are equal. The situation changes if I unmount and remount the USB drive and I execute the last two commands again: $ ls -l --time-style=full-iso /media/gabor/CORSAIR/file.txt-rw-r--r-- 1 gabor gabor 0 2018-09-22 15:09:22.000000000 +0200 /media/gabor/CORSAIR/file.txt$ if [ file.txt -nt /media/gabor/CORSAIR/file.txt ]; then echo "The file is newer on the PC."; fiThe file is newer on the PC. So after remount, the last modification time of the copied file is further reduced by one second. Further unmounting and remounting, however, doesn't affect the last modification time any more. Besides, the test on the files now shows that the file on the PC is newer (although it isn't). The situation is further complicated by the fact that the last modification time of files is shown differently on my PC and on my laptop , the difference being exactly 2 hours, although the date and time setting is the same on my PC and on my laptop! Further information Both my PC and laptop show the behaviour, described above. I have Ubuntu 14.04.5 (trusty) on my PC and Ubuntu 16.04.2 (xenial) on my laptop. My USB drives have vfat file system. The output of mount | grep CORSAIR on my PC is /dev/sdb1 on /media/gabor/CORSAIR type vfat (rw,nosuid,nodev,uid=1000,gid=1000,shortname=mixed,dmask=0077,utf8=1,showexec,flush,uhelper=udisks2) The output of mount | grep CORSAIR on my laptop is /dev/sdb1 on /media/gabor/CORSAIR type vfat (rw,nosuid,nodev,relatime,uid=1000,gid=1000,fmask=0022,dmask=0022,codepage=437,iocharset=iso8859-1,shortname=mixed,showexec,utf8,flush,errors=remount-ro,uhelper=udisks2) My other USB drives show the same behaviour. Question Can the difference in the last modification times be eliminated somehow? For example, using other parameters at mounting/unmounting? Or is it a bug in Ubuntu? I would like to achieve that the timestamps of the original and copied files are exactly the same, so that synchronization can be done more efficiently. Also, I would like to keep the vfat file system on my USB drives, so that I can use them under Windows, too. | The problem with the timestamp seconds changing comes from the fact that a VFAT (yes, even FAT32) filesystem stores the modification time with only 2-second resolution. 
Apparently, as long as the filesystem is mounted, the filesystem driver caches timestamps accurate to 1-second resolution (probably to satisfy POSIX requirements), but once the filesystem is unmounted, the caches are cleared and you'll see what is actually recorded on the filesystem directory. The two-hour difference between the PC and the laptop are probably caused by different timezone settings and/or different default mount options for VFAT filesystem. (I'm guessing that you're located in a timezone whose UTC offset is currently 2 hours, either positive or negative.) Internally, Linux uses UTC timestamps on Unix-style filesystems; but on VFAT filesystems, the (current) default is to use local time on VFAT filesystem timestamps, because that is what MS-DOS did and Windows still does. But there are two mount options that can affect this: you can specify the mount option tz=UTC to use UTC-based timestamps on VFAT filesystems, or you can use time_offset=<minutes> to explicitly specify the timezone offset to be used with this particular filesystem. It might be that the default mount options for VFAT have changed between Ubuntu 14.04 and 16.04, either within the kernel or the udisks removable-media helper service, resulting in the two-hour difference you see. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/470856",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/312333/"
]
} |
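A hedged sketch of mounting the stick with UTC-based timestamps, as described above (the device name, mount point and uid are assumptions that need adjusting, and the mount point must already exist):

$ sudo mount -t vfat -o uid=1000,gid=1000,tz=UTC /dev/sdb1 /mnt/usb

With automatic mounting via udisks2 you would need to arrange the same options another way, e.g. through an /etc/fstab entry for the device, so that both machines use the same convention and the two-hour offset disappears.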
470,871 | I use xinetd and it works for my purposes. However I recently discovered that systemd has something built in called "socket activation". These two seem very similar, but systemd is "official" and seems like the better choice. However before using it, are they really the same? Are there differences I should be aware of? For example, I want to start some dockerised services only when they are first requested - my first thought would be to use xinetd. But is socket activation better / faster / stabler / whatever? | I don’t think systemd socket activation is significantly better than xinetd activation, when considered in isolation; the latter is stable too and has been around for longer. Socket activation is really interesting for service decoupling: it allows services to be started in parallel, even if they need to communicate, and it allows services to be restarted independently. If you have a service which supports xinetd -style activation, it can be used with socket activation: a .socket description with Accept=true will behave in the same way as xinetd . You’ll also need a .service file to describe the service. The full benefits of systemd socket activation require support in the dæmon providing the service. See the blog post on the topic . My advice tends to be “if it isn’t broken, don’t fix it”, but if you want to convert an xinetd -based service to systemd it’s certainly feasible. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/470871",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/291147/"
]
} |
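A minimal sketch of the xinetd-style (Accept=true) case, using a trivial hypothetical echo-like service; with Accept=yes the service unit must be a template (its name ends in @):

# /etc/systemd/system/echod.socket
[Socket]
ListenStream=7777
Accept=yes
[Install]
WantedBy=sockets.target

# /etc/systemd/system/echod@.service
[Service]
ExecStart=/usr/bin/cat
StandardInput=socket

systemctl enable --now echod.socket then gives you one process per connection, just as xinetd would spawn them.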
470,880 | I want a service to start on demand rather than on boot. To do that I could use systemd socket activation (with the service and socket files). But this is a resource limited server, so after some time (e.g. 1 hour) of inactivity, I want to stop the service (until it is triggered again). How can I do that? I looked through some of the documentation but I can't figure out if this is supported. Update: Assuming this is unsupported, the use case is still probably quite common. What would be a good way / workaround to achieve this? | Socket activation in systemd can work in two modes: Accept=true : systemd keeps the listening socket, accepts every incoming connection, spawns a new process for each connection and passes the established socket to it. This case is trivial (each process exits when it's done). Accept=false : systemd creates the listening socket and watches it for incoming connection. As soon as one comes in, systemd spawns the service and passes the listening socket to it. The service then accepts the incoming connection and any subsequent ones. Systemd doesn't track what's happening on the socket anymore, so it can't detect inactivity. In the latter case, I think the only truly clean solution is to modify the application to make it exit when it's idle for some time. If you can't do that, a crude workaround could be to set up cron or a systemd timer to kill the service once an hour. This could be a reasonable approximation if the service is only spawned really infrequently. Note that the use case is probably pretty rare. A process sitting in poll()/select() waiting for a connection doesn't consume any CPU time, so the only resource that's used in that situation is memory. It's probably both easier and more efficient to just set up some swap and let the kernel decide whether it's worth keeping the process in RAM all the time or not. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/470880",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/291147/"
]
} |
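A sketch of the crude timer-based workaround suggested above (the unit names are hypothetical and the schedule is just an example):

# stop-myservice.timer
[Timer]
OnCalendar=hourly
[Install]
WantedBy=timers.target

# stop-myservice.service
[Service]
Type=oneshot
ExecStart=/bin/systemctl stop myservice.service

Since the .socket unit stays active, the next incoming connection simply starts the service again, which approximates "stop when idle" as long as the service is used infrequently.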
470,903 | Using Debian 9.5, fresh install. I want to install mysql-server, but I run into dependency problems. sudo apt-get install mysql-serverReading package lists... DoneBuilding dependency treeReading state information... DoneSome packages could not be installed. This may mean that you haverequested an impossible situation or if you are using the unstabledistribution that some required packages have not yet been createdor been moved out of Incoming.The following information may help to resolve the situation:The following packages have unmet dependencies: mysql-server : Depends: default-mysql-server but it is not going to be installedE: Unable to correct problems, you have held broken packages.apt-get install mysql-server default-mysql-serverReading package lists... DoneBuilding dependency treeReading state information... DoneSome packages could not be installed. This may mean that you haverequested an impossible situation or if you are using the unstabledistribution that some required packages have not yet been createdor been moved out of Incoming.The following information may help to resolve the situation:The following packages have unmet dependencies: default-mysql-server : Depends: mariadb-server-10.1 but it is not going to be installedE: Unable to correct problems, you have held broken packages.sudo apt-get install mysql-server default-mysql-server mariadb-server-10.1Reading package lists... DoneBuilding dependency treeReading state information... DoneSome packages could not be installed. This may mean that you haverequested an impossible situation or if you are using the unstabledistribution that some required packages have not yet been createdor been moved out of Incoming.The following information may help to resolve the situation:The following packages have unmet dependencies: mariadb-server-10.1 : Depends: libdbi-perl but it is not going to be installed Recommends: libhtml-template-perl but it is not going to be installedE: Unable to correct problems, you have held broken packages.sudo apt-get install mysql-server default-mysql-server mariadb-server-10.1 libhtml-template-perlReading package lists... DoneBuilding dependency treeReading state information... DoneSome packages could not be installed. This may mean that you haverequested an impossible situation or if you are using the unstabledistribution that some required packages have not yet been createdor been moved out of Incoming.The following information may help to resolve the situation:The following packages have unmet dependencies: libhtml-template-perl : Depends: libcgi-pm-perl but it is not going to be installed or perl (< 5.19) but 5.26.2-7 is to be installed mariadb-server-10.1 : Depends: libdbi-perl but it is not going to be installedE: Unable to correct problems, you have held broken packages.sudo apt-get install mysql-server default-mysql-server mariadb-server-10.1 libdbi-perlReading package lists... DoneBuilding dependency treeReading state information... DoneSome packages could not be installed. This may mean that you haverequested an impossible situation or if you are using the unstabledistribution that some required packages have not yet been createdor been moved out of Incoming.The following information may help to resolve the situation:The following packages have unmet dependencies: libdbi-perl : Depends: perlapi-5.24.1E: Unable to correct problems, you have held broken packages. 
EDIT1: apt-cache policy : apt-cache policyPackage files: 100 /var/lib/dpkg/status release a=now 500 http://security.debian.org/debian-security stretch/updates/contrib amd64 Packages release v=9,o=Debian,a=stable,n=stretch,l=Debian-Security,c=contrib,b=amd64 origin security.debian.org 500 http://security.debian.org/debian-security stretch/updates/main amd64 Packages release v=9,o=Debian,a=stable,n=stretch,l=Debian-Security,c=main,b=amd64 origin security.debian.org 500 http://deb.debian.org/debian stretch-updates/main amd64 Packages release o=Debian,a=stable-updates,n=stretch-updates,l=Debian,c=main,b=amd64 origin deb.debian.org 500 http://deb.debian.org/debian stretch/main amd64 Packages release v=9.5,o=Debian,a=stable,n=stretch,l=Debian,c=main,b=amd64 origin deb.debian.orgPinned packages: EDIT2: apt policy perl perl-base : apt policy perl perl-baseperl: Installed: 5.26.2-7 Candidate: 5.26.2-7 Version table: *** 5.26.2-7 100 100 /var/lib/dpkg/status 5.24.1-3+deb9u4 500 500 http://deb.debian.org/debian stretch/main amd64 Packages 500 http://security.debian.org/debian-security stretch/updates/main amd64 Packagesperl-base: Installed: 5.26.2-7 Candidate: 5.26.2-7 Version table: *** 5.26.2-7 100 100 /var/lib/dpkg/status 5.24.1-3+deb9u4 500 500 http://deb.debian.org/debian stretch/main amd64 Packages 500 http://security.debian.org/debian-security stretch/updates/main amd64 Packages How can I fix these dependency problems? | As indicated by your apt policy perl perl-base output, and pointed out by jordanm , your system has the Buster version of Perl, not the Debian 9 version. So your system isn’t really a “fresh install” of Debian 9.5; and since Perl is such an important component of a Debian setup, it’s likely there are many other packages which have been upgraded to the Buster version. This Perl mismatch is the reason you can’t install the MySQL packages. I’ll assume this is a recent installation and therefore you don’t have too much invested in it; so jordanm’s recommendation to re-install is probably the best solution in this case. Debian 9 and Buster have diverged quite a bit, so rolling back could become rather complicated, especially since you’ve upgraded Perl. Removing mc certainly won’t be sufficient. In future, don’t mix stable and testing. If you run into a bug which prevents you from using a package, file a bug ( reportbug mc ); if it’s severe enough it could qualify for a stable update. You could also ask for a backport; that would get you the current Buster version of mc , rebuilt for Debian 9. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/470903",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/208148/"
]
} |
470,915 | I'm trying to understand what cloned_interfaces in FreeBSD's rc.conf really does. Manual page says: cloned_interfaces : (str) Set to the list of clonable network interfaces to create on this host. Further cloning arguments may be passed to the ifconfig(8) create command for each interface by setting the create_args_<interface> variable. If an interface name is specified with sticky keyword, the interface will not be destroyed even when rc.d/netif script is invoked with stop argument. This is useful when reconfiguring the interface without destroying it. Entries in cloned_interfaces are automatically appended to network_interfaces for configuration. This doesn't give any useful information of what it does. It is used by for example if_bridge , if_tap and if_epair . What does it actually do? Why do I need it for specific network modules and not for others? Does it create some kind of dummy device? When is it needed? Security implications? Performance implications? | cloned_interfaces is one of the several settings in rc.conf , rc.conf.local , et al. that control the setting up and shutting down of network interfaces. In the Mewburn rc system it is /etc/rc.d/netif that is mostly responsible for using these settings. With nosh system management the external formats import subsystem takes these settings and translates them into a suite of single-shot and long-running services in /var/local/sv . Both systems at their bases run ifconfig a lot and run some long-running dæmons. cloned_interfaces is almost the same as the network_interfaces setting in that it lists network interfaces to be brought up and shut down. The single difference between the twain is that network_interfaces describes network interfaces that pre-exist, because hardware detection (of network interface hardwares) has brought them into existence; whereas cloned_interfaces are network interfaces that are brought into existence by dint of these service startup and shutdown actions alone. A bridge , tap , or epair network interface does not represent actual network interface hardware. Thus an extra step is necessary in startup and shutdown, the point where a new network interface is cloned and destroyed. This is done with, again, the ifconfig command. The first bridge network interface is cloned by running ifconfig bridge0 create , and destroyed with ifconfig bridge0 destroy . Listing bridge0 in the cloned_interfaces list causes this to happen and these commands to be run first and last; whereas listing it in network_interfaces would not, and the system would assume that there was an existing bridge0 device to be manipulated. (Technically, the loopback interface is not hardware, either. It is cloned, too; hence the first cloned loopback interface being lo0 , for those who have ever wondered about the name. But there is special casing for it because it is not optional as bridges, taps, and epairs are.) Other than that, the two sets of interfaces are treated the same. Further reading Jonathan de Boyne Pollard (2017). " Networking ". nosh Guide . Softwares. Andrew Thompson. " Bridging ". FreeBSD Handbook . Brooks Davis (2004). The Challenges of Dynamic Network Interfaces . | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/470915",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/31463/"
]
} |
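A small rc.conf sketch tying the pieces together for a bridge (the member interface em0 is an assumption; adjust it to your NIC):

cloned_interfaces="bridge0"
ifconfig_bridge0="addm em0 up"

On service netif start, rc.d first runs ifconfig bridge0 create because of the cloned_interfaces entry, and only then applies the ifconfig_bridge0 settings, exactly as described above.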
470,922 | I know that the TSTP will stop the process that were executing, and the signal id is 20, and it's equal to ctrl+z. I searched for the abbreviation, but i didn't find anything!anybody know? | Literally " t emporary st o p ". You can find it mentioned in old(er) papers such as Evolving the UNIX System Interface to Support Multithreading Programs (Paul R. McJones and Garret F. Swart, September 28, 1987): If a signal is received for which no handler procedure was registered, a default action takes place. Depending on the signal, the default action is either to do nothing, to terminate the process, to stop the process temporarily, or to continue the stopped process. ... To stop a process, send it a stop signal (e.g., SigTStp; see page 44). To restart a stopped process, send it a continue signal (SigCont). By the way, 4.3BSD's <signal.h> has a different slant: #define SIGTSTP 18 /* stop signal from tty */ | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/470922",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/305541/"
]
} |
470,986 | A shell variable holds a path. How does one get its filename portion? In bash(1), I experimented and found that I can do it with ."${i/*\///}" where i is the name of the environment variable. Such a method is not only ugly but also erroneous in the case where the path does not contain any / character. I'll demonstrate a practical case where such a function is needed. Let's say we want to make a symbolic link for every PDF files in a source directory to the current directory. $ for i in /source/path/*.pdf; do\ ln -s "$i" ."${i/*\///}"; \ done | ${i##*/} This works in Posix shells, including bash , dash , ksh , zsh , etc. From the standard POSIX Parameter Expansion section of the Posix Shell & Utilities specification: ${parameter##[word]} Remove Largest Prefix Pattern. The word shall be expanded to produce a pattern. The parameter expansion shall then result in parameter, with the largest portion of the prefix matched by the pattern deleted. Alternatively, and traditionally, the basename command has been used for this purpose. Caution: Performance can become a significant issue because basename is implemented as an external command (for example, /usr/bin/basename ). Because you're performing this inside a loop, you will call an external command for each file. On a list of 1000 files, this may be the difference between 0.05 seconds (parameter expansion) and 2.0 seconds ( basename command). But for a list of 10,000 files, it could be the difference between 0.5 seconds (expansion) vs 20 seconds ( basename ). The difference in performance becomes more extreme as the number of files increases. For readability as well as performance, you can implement your own basename function, for example: mybasename() { echo "${1##*/}"; } (The selection of a better name for the function, and/or the implementation of the full basename command line interface, are left as an exercise for the reader. :) | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/470986",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/54687/"
]
} |
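Applied to the question's own use case, the loop becomes (a direct substitution of the expansion for the original hack):

$ for i in /source/path/*.pdf; do ln -s "$i" "./${i##*/}"; done

This also behaves sensibly when a value contains no / at all, since the pattern then removes nothing and the whole value is used as the link name.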
471,080 | The following commands work $ ls -1t | head -1git_sync_log20180924_00.txt$ vi git_sync_log20180924_00.txt But this does not $ ls -1t | head -1 | viVim: Warning: Input is not from a terminalVim: Error reading input, exiting...Vim: preserving files...Vim: Finished. How can I accomplish this (open most recently modified file in vi)? | vi -- "$(ls -t | head -n 1)" (that assumes file names don't contain newline characters). Or if using zsh : vi ./*(om[1]) Or: vi ./*(.om[1]) To only consider regular files. vi ./*(.Dom[1]) To also consider hidden files (as if using ls -At ). Those work regardless of what characters or byte values the file names may contain. For a GNU shell&utilities equivalent of that latter one, you can do: IFS= read -rd '' file < <( find . ! -name . -prune -type f -printf '%T@\t%p\0' | sort -zrn | cut -zf2-) && vi "$file" | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/471080",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/285600/"
]
} |
471,116 | Mark wrote a comment for me I don't know offhand how to make cups not spool, that is, how to make the lpr command only exit after the printer driver has run. What does "spool" for printing mean? Google says it is a verb meaning "send (data that is intended for printing or processing on a peripheral device) to an intermediate store." What is the intermediate store that printing spool represents, for example, when printing by lpr command Mark seems to relate the meaning of spool with blocking. But I can't figure that out by looking at the definition given by Google. Thanks. | A print spool is effectively a buffer, managed per job, with a program (the spooler) responsible for receiving jobs from submitting programs and feeding them to one or more printers. The point of a spool is to handle communication between two systems with different speeds, and to control access to shared devices. The former means programs can submit print jobs as fast as they want, and those jobs are dealt with as fast (or slowly) as printers can handle. The latter (as pointed out by RonJohn ) ensures that jobs are handled coherently: thus when printing, jobs aren’t mixed up. Networked printers provide their own spools, and print servers (CUPS, lpd etc.) also implement spools. Most print systems also handle access control, quotas, banners, print options etc. Spools are used in other contexts; for example, tape-based backup servers now spool backup data from networked hosts on a fast disk-based storage system, so that they can then feed modern tape drives at the tremendous speeds they need to avoid tape shoe-shine. In the context of the comment, the relevance of a spool is that it decorrelates the print job submission from its fulfillment. Not spooling would mean that the submission would only complete with the print job, and thus your lpr command would only complete once the job completed. Removing the spool on your computer might not have the desired result though since the printer itself could spool too! | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/471116",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/674/"
]
} |
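A small illustration of the decoupling described in the entry above, assuming a CUPS/BSD-style print system with a default printer configured:

    lpr report.pdf   # returns almost immediately: the job is handed to the spooler
    lpq              # the job now sits in the queue until the printer can take it
    lpstat -o        # CUPS equivalent: list jobs that are still spooled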
471,130 | I would like to know how to install a security update for Python 2.7.5. I was notified that my CentOS 7 Python version had a security patch update. The details are here: https://lists.centos.org/pipermail/centos-announce/2018-July/022964.html I have scoured the internet trying to look for ways to install that specific patch, but I couldn't find. I have tried sudo yum update python-2.7.5-69.el7_5.x86_64.rpm , but you can probably tell that I have no idea what I'm doing. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/471130",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/312523/"
]
} |
471,134 | Is it possible to grep for a string that is hidden within multiple directories from the root directory of my server? For instance, if I SSH into my Ubuntu server and then want to grep for a certain string, but I don't know which subfolder it is in, can I just grep from the root directory? | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/471134",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/306138/"
]
} |
471,258 | I typed |> by mistake but bash didn't print any errors. (But it did create an empty file.) For example, date |> tmp.txt I thought maybe it actually means something ? | That seems to be just a pipeline where the second part is an empty command, only containing the redirection. Writing it as date | >file might make it easier to interpret. The empty command doesn't do anything but process the redirection, creating the file. date >| file on the other hand would act as an override for the noclobber shell option, which prevents the regular > from overwriting existing files. $ touch foo; set -o noclobber$ date > foobash: foo: cannot overwrite existing file$ date >| foo # works | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/471258",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/208887/"
]
} |
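A quick way to reproduce the behaviour described in the entry above (tmp.txt is the question's file name):

    date |> tmp.txt   # parsed as: date | <empty command> > tmp.txt
    wc -c tmp.txt     # prints "0 tmp.txt" - the file was created (truncated) but never written to
    date >| tmp.txt   # the noclobber override: now the file really holds date's output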
471,327 | I've installed some app under /opt/myapp , which has a /opt/myapp/share directory. As I finish installing, it tells me: Note that '/opt/myapp/share' is not in the search path set by the XDG_DATA_HOME and XDG_DATA_DIRS environment variables, so applications may not be able to find it until you set them. The directories currently searched are: - /usr/share/gnome - /home/joeuser/.local/share/flatpak/exports/share - /var/lib/flatpak/exports/share - /usr/local/share - /usr/share What's the right way to add directories to that list - system-wide and as a single user? | The German ubuntuusers wiki has a nice list of files and directories that can be used for that purpose. Setting it globally From my research, appending to that environment variable globally is not trivial, but here are some pointers: If you want to override the existing value, /etc/environment is the easiest way. Depending on the system configuration, /etc/profile might also be a good way, because it is executed by the shell. Other files to try might be /etc/X11/Xsession.d/* and /etc/security/pam_env.conf . Setting it per-user $HOME/.profile (or $HOME/.zprofile for zsh users) is suggested in multiple places, however adding the line XDG_DATA_DIRS="$HOME/.local/xdg:$XDG_DATA_DIRS" in there rendered my desktop completely non-functional upon login. The way that worked for me was to create $HOME/.xsessionrc and put the line export XDG_DATA_DIRS="$HOME/.local/xdg:$XDG_DATA_DIRS" in there. Of course, you have to replace $HOME/.local/xdg with the directory you want to add. Please also note that this will only set the variable for graphical applications, not for the shell (so your value won't be mentioned in echo $XDG_DATA_DIRS ), but that should not be a problem. Recommendation Just execute this line and log in again, and it should work: echo export 'XDG_DATA_DIRS="/opt/myapp/share:$XDG_DATA_DIRS"' >> ~/.xsessionrc If for whatever reason your system is nonfunctional after that, enter Recovery Mode , go into the root shell and type rm /home/<username>/.xsessionrc and then reboot to get back into your system. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/471327",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/34868/"
]
} |
471,345 | I'm trying to use a binary of Qemu that I compiled using this tutorial, since the version of Qemu that's packaged with my OS, Debian, doesn't seem to support OpenGL acceleration with Spice. After a successful compilation, I tried to set the <emulator> tag to the path to the new Qemu executable in /usr/local/bin, but I receive the following error: error: internal error: Failed to probe QEMU binary with QMP: libvirt: error : cannot execute binary /usr/local/bin/qemu-2.12.1/x86_64-softmmu/qemu-system-x86_64: Permission denied The 'emulator' part of my virsh edit configuration file is as follows: <emulator>/usr/bin/kvm</emulator> I have experimented with changing the permissions and ownership of the file, and made sure to allow execution ( chmod a+x ), however none of these seem to work. If there are any other ways of using the OpenGL acceleration feature of Qemu, please let me know. I am currently using Debian Stretch, with the virt-manager, libvirt-daemon and qemu-kvm from the 'testing' repository, on an Intel Core i5-8400, using the integrated GPU. I have compiled Qemu so I could use the OpenGL 3D acceleration feature with 'libvirglrenderer'. | I just solved the same problem on Debian Buster. AppArmor was denying access to my compiled qemu binary. You can check whether AppArmor is enabled on your system using the command: sudo aa-status If your output contains the following lines, then AppArmor is enabled and needs to be configured to allow access to your compiled binary: ...22 profiles are in enforce mode.... /usr/sbin/libvirtd...3 processes are in enforce mode. /usr/sbin/libvirtd (1098)... Add an AppArmor permission for libvirt to execute your qemu binary. You can do it, for example, by placing the following lines in /etc/apparmor.d/usr.sbin.libvirtd just before the last '}' symbol in the config. The end of the config would then look like this: # ... skipped lines /usr/bin/kvm rmix, /usr/local/bin/qemu-2.12.1/x86_64-softmmu/qemu-system-x86_64 rmix,} You will probably also need to add the same AppArmor permissions for qemu in /etc/apparmor.d/abstractions/libvirt-qemu : /usr/bin/kvm rmix, /usr/local/bin/qemu-2.12.1/x86_64-softmmu/qemu-system-x86_64 rmix, You can reload AppArmor rulesets using sudo systemctl reload apparmor . AppArmor rule syntax is described, for example, here . | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/471345",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/312014/"
]
} |
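Before editing profiles it can help to confirm that AppArmor, and not ordinary file permissions, is rejecting the exec - a sketch, log locations vary between setups:

    sudo dmesg | grep -i 'apparmor.*denied'           # kernel log on most Debian systems
    sudo journalctl -k | grep -i 'apparmor.*denied'   # same messages via systemd's journal
    # After editing a profile, it can also be reloaded on its own:
    sudo apparmor_parser -r /etc/apparmor.d/usr.sbin.libvirtd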
471,353 | 5678 []testing,\ group [][testing []ip\ 5.6.7.8 []launch-wizard-1 0.0.0.0/0456dlkjfa []1.2.3.4 []test 1.2.3.4/32 4.3.2.0/23 4.3.2.0/23default 4.3.2.0/23 4.3.2.0/23launch-wizard-2 0.0.0.0/0launch-wizard-3 0.0.0.0/02.3.4.5/32 [] I would like to get the first column of the above but the catch is that, I need to treat \ (backslash space) as a part of the column, so awk '{print $1}' should give me 5678testing,\ group[testingip\ 5.6.7.8launch-wizard-1456dlkjfa1.2.3.4testdefaultlaunch-wizard-2launch-wizard-32.3.4.5/32 | with gnu awk ( gawk ) you can use some zero-length assertions like \< or \> : $ echo 'a\ b c' | gawk 'BEGIN{FS="\\> +"} {print $1}'a\ b but unfortunately not the full-blown ones from perl or pcre (eg. (?<!\\) , (?<=\w) , etc): $ echo 'a\ b, c' | perl -nle '@a=split /(?<!\\)\s+/, $_; print $a[0]'a\ b, | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/471353",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/224025/"
]
} |
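The perl variant from the entry above applied to a whole file rather than a single echoed line (groups.txt stands in for the question's input file):

    perl -nle '@a = split /(?<!\\)\s+/, $_; print $a[0]' groups.txt
    # splits only on whitespace that is NOT preceded by a backslash, so
    # fields such as "testing,\ group" and "ip\ 5.6.7.8" stay intact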
471,397 | I am trying to find out what the following command means: sed 's/-\([0-9.]\+\)/(\1)/g' inputfile I know it has to do with non-digit characters. | In GNU sed, it looks for a dash followed by some digits or dots, and replaces that with the same digits in parenthesis. That is, it turns -123.45 into (123.45) . In some other sed s, like the BSD-based one on macOS, it looks for a dash, a digit or dot, and a literal plus sign, and then removes the dash and surrounds the rest in parenthesis. That is, it turns -1+ into (1+) , but leaves stuff like -123 as-is. The difference is because \+ does not have a standard meaning in basic regular expressions. GNU interprets it as the same as + in extended regexes, i.e. "one or more of the previous", while others take it as literal + . As for the other parts in the left-hand pattern, the dash matches itself, [0-9.] matches any one digit or dot, and the \( .. \) capture the part in between. In the replacement, the parenthesis are literal, and \1 puts back whatever was within the \( .. \) . More portably, that should be either in extended RE, assuming your sed supports -E , which many do: sed -E 's/-([0-9.]+)/(\1)/' in basic RE: sed 's/-\([0-9.]\{1,\}\)/(\1)/' Both should replace a dash in front of a number with parenthesis around it. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/471397",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/312661/"
]
} |
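The portable ERE form from the end of the entry above, shown on a sample value (add the g flag back if one line may contain several negative numbers):

    echo 'balance -123.45 end' | sed -E 's/-([0-9.]+)/(\1)/'
    # balance (123.45) end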
471,405 | I am working on AIX unix and trying to remove non-printable characters from file the data looks like Caucasian male lives in Arizona w/ fiancÃÂÃÂÃÂÃÂÃÂ in file when I view in Notepad++ using UTF-8 encoding. When I try to view file in unix I get ^▒▒^▒▒^▒▒^▒▒^▒▒^▒▒ instead of the special characters. I want to replace all those special characters with space. I tried sed 's/[^[:print:]]/ /g' file but it does not remove those characters.My locale are listed below when I run locale -a CPOSIXen_US.8859-15en_US.ISO8859-1en_US I even tried sed -e 's/[^ -~]/ /g' file and it did not remove the characters. I see that others stackflow answers used UTF-8 locale with GNU sed and this worked but I do not have that locale. Also I am using ksh . | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/471405",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/312736/"
]
} |
471,460 | I am trying to write a sed command to find and replace a key=value in a comma delimited string. Example string in file: KEY_1=value_1,KEY_2=value_2,SOMEKEY=lastValue The sed command used: sed -r 's/KEY_2=.*?((?=,)|$)/KEY_2=new_value/' myFile.txt If the Key exists replace it and its value with a new key=value. Most of the values will end with a comma ',' however the outlying case is the last key=value in the string will not have a ,. It's giving me the following error message on a RedHat Linux VM sed: -e expression #1, char 55: Invalid preceding regular expression which I believe is the last '/' I tried /g which would also be acceptable as no key should be duplicated in the original string. | The sed utility does not support Perl-like regular expressions. Instead you may use $ sed 's/KEY_2=[^,]*/KEY_2=new value/' fileKEY_1=value_1,KEY_2=new value,SOMEKEY=lastValue or $ sed 's/\(KEY_2\)=[^,]*/\1=new value/' fileKEY_1=value_1,KEY_2=new value,SOMEKEY=lastValue Or, with awk (without using regular expressions and instead using exact string matches against the keys, which prevents confusion when you have both KEY_2 and SOME_OTHER_KEY_2 ): $ awk -F, -v OFS=, '{ for (i = 1; i <= NF; ++i) if (split($i, a, "=") == 2 && a[1] == "KEY_2") { $i = "KEY_2=new value" break } } 1' fileKEY_1=value_1,KEY_2=new value,SOMEKEY=lastValue | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/471460",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/312780/"
]
} |
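The first sed form from the entry above, run against the question's sample string:

    echo 'KEY_1=value_1,KEY_2=value_2,SOMEKEY=lastValue' |
        sed 's/KEY_2=[^,]*/KEY_2=new_value/'
    # KEY_1=value_1,KEY_2=new_value,SOMEKEY=lastValue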
471,474 | I want to write an alias as such: alias add="java -jar vc.jar name" Is there a way I can use a wildcard for name and thus only have to type: add name - with name being any name of my choice? name being an argument. | I think you are looking for functions . function add() { local name="$1" java -jar vc.jar "${name}" } Add this to your ~/.bashrc or ~/.profile and just call it like this: user@host$ add samplename Alternatively you can trigger an alias expansion by adding a space or tab character at the end of the alias definition. alias add='java -jar vc.jar ' (Note the space at the end of the definition). Then just call it normally: user@host$ add samplename It should work. EDIT: As pointed out by @kusalananda you can omit the space and it will still work just fine. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/471474",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/312791/"
]
} |
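A slight variant of the function in the entry above that forwards every argument, not just the first - useful if vc.jar also takes flags (the --verbose flag below is made up):

    add() {
        java -jar vc.jar "$@"
    }
    add samplename --verbose   # runs: java -jar vc.jar samplename --verbose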
471,476 | I want to try cgroup v2 but am not sure if it is installed on my linux machine >> uname -r4.14.66-041466-generic Since cgroup v2 is available in 4.12.0-rc5, I assume it should be available in the kernel version I am using. https://www.infradead.org/~mchehab/kernel_docs/unsorted/cgroup-v2.html However, it does not seem like my system has cgroup v2 as the memory interface files mentioned in its documentation are not available on my system. https://www.kernel.org/doc/Documentation/cgroup-v2.txt It seems like I still have cgroup v1. /sys/fs/cgroup/memory# lscgroup.clone_children memory.kmem.failcnt memory.kmem.tcp.usage_in_bytes memory.memsw.usage_in_bytes memory.swappinesscgroup.event_control memory.kmem.limit_in_bytes memory.kmem.usage_in_bytes memory.move_charge_at_immigrate memory.usage_in_bytescgroup.procs memory.kmem.max_usage_in_bytes memory.limit_in_bytes memory.numa_stat memory.use_hierarchycgroup.sane_behavior memory.kmem.slabinfo memory.max_usage_in_bytes memory.oom_control notify_on_releasedocker memory.kmem.tcp.failcnt memory.memsw.failcnt memory.pressure_level release_agentmemory.failcnt memory.kmem.tcp.limit_in_bytes memory.memsw.limit_in_bytes memory.soft_limit_in_bytes tasksmemory.force_empty memory.kmem.tcp.max_usage_in_bytes memory.memsw.max_usage_in_bytes memory.stat Follow-up questions Thanks Brian for the help. Please let me know if I should be creating a new question but I think it might be helpful to other if I just ask my questions here. 1) I am unable to add cgroup controllers, following the command in the doc >> echo "+cpu +memory -io" > cgroup.subtree_control However, I got "echo: write error: Invalid argument". Am I missing a prerequisite to this step? 2) I ran a docker container but the docker daemon log complained about not able to find "/sys/fs/cgroup/cpuset/docker/cpuset.cpus". It seems like docker is still expecting cgroupv1. What is the best way to enable cgroupv2 support on my docker daemon? docker -vDocker version 17.09.1-ce, build aedabb7 | You could run the following command: grep cgroup /proc/filesystems If your system supports cgroupv2, you would see: nodev cgroupnodev cgroup2 On a system with only cgroupv1, you would only see: nodev cgroup | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/471476",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/312790/"
]
} |
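Two more quick checks that complement the /proc/filesystems test in the entry above - they show whether a v2 (unified) hierarchy is actually mounted, not just whether the kernel supports it (paths assume a standard systemd layout):

    mount -t cgroup2                # lists a cgroup2 mount if one is active
    stat -fc %T /sys/fs/cgroup/     # prints cgroup2fs on a unified setup, tmpfs on a v1 setup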
471,521 | I want to get only the version of php installed on CentOS. Output of php -v PHP 7.1.16 (cli) (built: Mar 28 2018 13:19:29) ( NTS )Copyright (c) 1997-2018 The PHP GroupZend Engine v3.1.0, Copyright (c) 1998-2018 Zend Technologies I tried this following: php -v | grep PHP | awk '{print $2}' But the output I got was: 7.1.16(c) How can I get only 7.1.16? | Extending Jeff Schaller's answer , skip the pipeline altogether and just ask for the internal constant representation: $ php -r 'echo PHP_VERSION;'7.1.15 You can extend this pattern to get more, or less, information: $ php -r 'echo PHP_MAJOR_VERSION;'7 See the PHP list of pre-defined constants for all available. The major benefit: it doesn't rely on a defined output format of php -v . Given it's about the same performance as a pipeline solution, then it seems a more robust choice. If your objective is to test for the version, then you can also use this pattern. For example, this code will exit 0 if PHP >= 7, and 1 otherwise: php -r 'exit((int)version_compare(PHP_VERSION, "7.0.0", "<"));' For reference, here are timings for various test cases, ordered fastest first: $ time for (( i=0; i<1000; i++ )); do php -v | awk '/^PHP [0-9]/ { print $2; }' >/dev/null; donereal 0m13.368suser 0m8.064ssys 0m4.036s$ time for (( i=0; i<1000; i++ )); do php -r 'echo PHP_VERSION;' >/dev/null; donereal 0m13.624suser 0m8.408ssys 0m3.836s$ time for (( i=0; i<1000; i++ )); do php -v | head -1 | cut -f2 -d' ' >/dev/null; donereal 0m13.942suser 0m8.180ssys 0m4.160s | {
"score": 7,
"source": [
"https://unix.stackexchange.com/questions/471521",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/138782/"
]
} |
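A sketch of the exit-code form from the entry above used as a guard in a deployment script:

    if php -r 'exit((int)version_compare(PHP_VERSION, "7.0.0", "<"));'; then
        echo "PHP >= 7.0.0, continuing"
    else
        echo "PHP is older than 7.0.0, aborting" >&2
        exit 1
    fi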
471,522 | I have read and searched, but I don't understand what's wrong with it. I want to match: a space, the string 00011, and either a space or a new line. sed 's:\(\s\)\(00011\)\([\s\n]\):\1$03\3:g' EDIT: the data looks like this: ADD 00000 00001 00011LSH 00011 00100 01111ADD 00011 10100 00010JSR 00011101000111010101100010 and $03 is just a string to replace the 00011 I want to end up with something like this: ADD 00000 00001 $03LSH $03 00100 01111ADD $03 10100 00010JSR 00011101000111010101100010 Thanks | sed works on a line at a time and it will strip the newlines when processing each line. So, in order to do what you want, you should match the end of line anchor ( $ ) rather than a literal newline character. This should work: sed 's:\(\s\)\(00011\)\(\s\|$\):\1$03\3:g' | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/471522",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/144279/"
]
} |
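The command from the entry above applied to the question's data (program.txt is a placeholder for the input file); it needs GNU sed, since \s and \| are GNU extensions to basic regular expressions:

    sed 's:\(\s\)\(00011\)\(\s\|$\):\1$03\3:g' program.txt
    # ADD 00000 00001 $03
    # LSH $03 00100 01111
    # ADD $03 10100 00010
    # JSR 00011101000111010101100010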
471,533 | I would like to redirect my custom message when checking connectivity from my server to a log file. I am breaking from the condition using exit 1, but how to make it more verbose and redirect to a log file? How can I accomplish that within awk itself to fit nicely the message with the redirect: awk { ... "Connectivity to ${rda} failed, exiting" >> ${log} ... } then My script is as follows: #/bin/bashrda="www.google.com" log="/var/log/connectivity.log"if nc -zw1 ${rda} 22 && echo |openssl s_client -connect ${rda}:22 2>&1 |awk ' handshake && $1 == "Verification" { if ($2=="OK") exit;exit 1; } $1 $2 == "SSLhandshake" { handshake = 1 }'then echo -e "Connectivity works" >> ${log} exit 0fi My awk version is: # awk -W version mawk 1.3.3 Nov 1996, Copyright (C) Michael D. Brennancompiled limits: max NF 32767 sprintf buffer 2040 | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/471533",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/304247/"
]
} |
471,577 | My requirement is to list all files in a directory, except files ending with a ~ (backup files). I tried to use command: ls -l | grep -v ~ I get this output: asdasadasdasad~file_names.txtnormaltest.txttarget_filenametestshell1.shtestshell1.sh~testshell2.shtestshell2.sh~testtwo.txttesttwo.txt~test.txttest.txt~ I want to get only these files: asdasadfile_names.txtnormaltest.txttarget_filenametestshell1.shtestshell2.shtesttwo.txttest.txt | ls -l | grep -v ~ The reason this doesn't work is that the tilde gets expanded to your home directory, so grep never sees a literal tilde. (See e.g. Bash's manual on Tilde Expansion .) You need to quote it to prevent the expansion, i.e. ls -l | grep -v "~" Of course, this will still remove any output lines with a tilde anywhere, even in the middle of a file name or elsewhere in the ls output (though it's probably not likely to appear in usernames, dates or such).If you really only want to ignore files that end with a tilde, you can use ls -l | grep -v "~$" | {
"score": 7,
"source": [
"https://unix.stackexchange.com/questions/471577",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/312867/"
]
} |
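Not part of the answer above, but worth knowing: GNU ls can skip the backup files itself, with no grep involved:

    ls -lB   # -B / --ignore-backups: do not list entries ending with ~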
471,589 | I try to understand Stephen Kitt's answer to this question where he created a temporary directory with the following code: #!/bin/bashscripttmp=$(mktemp -d) # Create a temporary directory (these will usually be created under /tmp or /var/tmp/) Each time I run this command I see a new temporary directory created under /tmp/ (I didn't know it will appear there until reading Roaima's answer here ): IIUC, there is no programmatical difference between a regular directory to a temporary directory (the only difference is in how these directories are used, by means of the time each one stays on the machine). If there is no programmatical difference, why should one prefer mktemp -d over the more minimal mkdir ? | When using mkdir , the script would have to make sure that it creates a directory with a name that does not already exist. It would be an error to use mkdir dirname if dirname is an existing name in the current directory. When creating a temporary directory (i.e. a directory that is not needed much longer than during the lifetime of the current script), the name of the directory is not generally important, and mktemp -d will find a name that is not already taken by something else. mktemp -d makes it easier and more secure to create a temporary directory. Without mktemp -d , one would have to try to mkdir with several names until one succeeded. This is both unnecessarily complicated and can be done wrongly (possibly introducing subtle race conditions in the code). mktemp also gives the user of the script a bit of control in where they want the temporary directory to be created. If the script, for example, produces a massive amount of temporary data that has to be stored in that directory, the user may set the TMPDIR environment variable (before or as they are invoking the script) to point to a writable directory on a partition where there is enough space available. mktemp -d would then create the temporary directory beneath that path. If TMPDIR is not set, mktemp will use /tmp in its place. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/471589",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/-1/"
]
} |
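A typical pattern built on mktemp -d that combines the points above - TMPDIR is honoured automatically and the directory is cleaned up when the script exits (a sketch, not from the original answer):

    scripttmp=$(mktemp -d) || exit 1
    trap 'rm -rf "$scripttmp"' EXIT
    # ... work with files under "$scripttmp" ...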