source_id (int64, 1 to 74.7M) | question (string, 0 to 40.2k chars) | response (string, 0 to 111k chars) | metadata (dict)
---|---|---|---|
433,702 | After running ltrace -S on two programs that were compiled by gcc (version 5.4.0), one that calls vfork() and one that calls fork() , I find that vfork() calls SYS_vfork whilst fork() calls SYS_clone . I could not find any information about this specific behavior anywhere (some sources say each of fork() , vfork() and clone() are implemented by the correspondingly named sys_ call, whilst other sources say all three calls are implemented using sys_clone ). Source code: #include<stdio.h>main(){ int pid; pid=vfork();} Output from ltrace -S : ...__libc_start_main([some stuff here] <unfinished ...>vfork([more stuff] <unfinished ...>SYS_vfork([more stuff])--- SIGCHLD (Child exited) --- Is there a reason libc uses SYS_vfork for vfork() but doesn't use SYS_fork for fork() ? I’ve read Thomas Nyman’s answer to Which file in kernel specifies fork(), vfork()…to use sys_clone() system call , which says: vfork() in turn is implemented via a separate CLONE_VFORK flag, which will cause the parent process to sleep until the child process wakes it via a signal. The child will be the sole thread of execution in the parent's namespace, until it calls exec() or exits. The child is not allowed to write to the memory. The corresponding clone() call could be as follows: clone(CLONE_VFORK | CLONE_VM | SIGCHLD, 0) This seems to be at odds with what I've observed of the output of ltrace -S . Did I mess something up or did the glibc writers deliberately choose to implement vfork() using SYS_vfork instead of SYS_clone for a reason? Is this considered something that could change at any time or can we rely on it? | Did I mess something up or did the glibc writers deliberately choose to implement vfork() using SYS_vfork instead of SYS_clone for a reason? Historically, I think it’s more likely that this is simply a result of vfork not needing to be changed. Both vfork and fork initially used the equivalent system calls. When NPTL threading was implemented, the fork implementation was changed to use clone , because the C library needs to reset the thread id . vfork doesn’t need to worry about threads, because it’s only intended for use with execve (which is going to reset all that state anyway), so it was left untouched. The NPTL design paper explains why the fork system call isn’t sufficient to implement the fork library call when threads are liable to be used: To implement the fork function without memory leaks it is necessary that the memory used for stacks and other internal information of all threads except the thread calling fork is reclaimed. The kernel cannot help in this situation. Is this considered something that could change at any time or can we rely on it? Since you’re using the C library to fork, you can only rely on the C library providing the behaviour documented in the APIs; you can’t rely on a specific implementation. You shouldn’t rely on vfork(3) using the vfork(2) system call instead of clone(2) , nor should you rely on fork(3) using clone(2) instead of fork(2) . Note that the system calls which are used can vary from one architecture to another... If you really need to rely on specific system calls, you should use those directly and forego the C library wrappers. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/433702",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/282845/"
]
} |
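A quick way to reproduce the observation in the row above is to trace only the process-creation syscalls; the strace invocation here is illustrative and not taken from the original post.

```bash
# Trace only the process-creation syscalls of a shell that runs two commands,
# so at least one fork is needed (assumes strace is installed).
strace -f -e trace=fork,vfork,clone sh -c '/bin/true; /bin/true'
# On a glibc/NPTL system the library's fork() typically appears as
# clone(child_stack=NULL, flags=...|SIGCHLD, ...) rather than fork().
```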
433,711 | I have a bash function like so: run_mongo(){ mongo foo bar baz 2>&1 # send stderr to stdout} unfortunately this mongo command doesn't exit with 1 if there is error, so I have to match on stdout/stderr is there some way to exit with code > 0 with grep if there is a first match? something like this: run_mongo | grep -e "if stdout/stderr matches this we exit with 1" I assume the way to do this would be like so: run_mongo | grep -e "if stdout/stderr matches" | killer where killer is a program that dies as soon as it gets its first bit of stdin. | Yes, you can do it with grep -vz which tells grep to find lines that don't match the pattern you give it ( -v ) and to read the entire input at once ( z ), so that one match means the whole thing fails: $ printf 'foo\nbar\n' | grep -zqv foo && echo no || echo yesyes$ printf 'foo\nbar\n' | grep -zq foo && echo no || echo yesno So, in your case, something like: run_mongo(){ mongo foo bar baz 2>&1 | grep -vzq "whatever means failure" }if run_mongo; then echo "Worked!"else echo "Failed!"fi If you want to avoid reading the whole output, just use another tool. Perl, for example: mongo foo bar baz 2>&1 | perl -ne 'exit 1 if /whatever means failure/' | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/433711",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/113238/"
]
} |
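For the "killer" idea in the question above, a hedged variant that stops at the first match instead of reading the whole output: grep -q exits as soon as the pattern is seen, so the logic is inverted in the caller rather than with -v. The command and pattern are the question's placeholders.

```bash
run_mongo() {
    # grep -q exits on the first match; mongo then receives SIGPIPE and stops.
    if mongo foo bar baz 2>&1 | grep -q "whatever means failure"; then
        return 1    # failure pattern seen
    fi
    return 0
}
```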
433,712 | I learned that I can do modprobe brd rd_nr=1 rd_size=4585760 max_part=1 if I want to create a ram block device at /dev/ram0 but lets say I want to flush the device (to free the ram), then delete it and create another. How would I do this running modprobe brd rd_nr=1 rd_size=4585760 max_part=1 again doesn't seem to create another ram device in /dev Recreate steps: 1) create disk: modprobe brd rd_nr=1 rd_size=4585760 max_part=1 2) use the ram disk for some arbitrary task: ex: dd if=/dev/zero of=/dev/ram0 count=1000 3) free up the memory blockdev --flushbufs /dev/ram0 4) delete device file: rm /dev/ram0 5) try to create another one: modprobe brd rd_nr=1 rd_size=4585760 max_part=1 6) ls /dev/ram* gives me an error I know that I can change the rd_nr to be whatever number I desire but I want to be able to create these on the fly. Edit: I don't want to create a tmpfs, my use case requires a block device | You should not delete /dev/ram0 yourself. It will be deleted when you do sudo rmmod brd , which frees the space and removes the module.You can then start again from modprobe . | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/433712",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/266457/"
]
} |
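A full create/use/free cycle for the row above, using the sizes from the question; the dd write is only an example workload.

```bash
sudo modprobe brd rd_nr=1 rd_size=4585760 max_part=1   # creates /dev/ram0
sudo dd if=/dev/zero of=/dev/ram0 bs=1M count=100      # arbitrary use of the device
sudo rmmod brd                                         # frees the RAM and removes /dev/ram0
sudo modprobe brd rd_nr=1 rd_size=4585760 max_part=1   # a fresh device can now be created
ls /dev/ram*
```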
433,765 | Is there a way to avoid doing grep two times in the file and just populate thevariables in one pass? The file is small so it is not that a big deal I was just wondering if I could do it in one pass FIRST_NAME=$(grep "$customer_id" customer-info|cut -f5 -d,)LAST_NAME=$(grep "$customer_id" customer-info|cut -f6 -d,) | You could grep once and split twice using shell string substitution: NAME=$(grep "$customer_id" customer-info | cut -f5,6 -d,)FIRST_NAME=${NAME%,*}LAST_NAME=${NAME#*,} Or, with bash, using process substitution: IFS=, read FIRST_NAME LAST_NAME < <(grep "$customer_id" customer-info | cut -f5,6 -d,) read will split input on IFS and assign the first value to FIRST_NAME and the rest to LAST_NAME . Using process substitution and redirection < <(...) allows you to pass the output of grep ... | cut ... to read without using a subshell. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/433765",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/264385/"
]
} |
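A single-pass variant of the answer above that does the matching and the field split in one awk invocation; it assumes the customer id matches exactly one line, with the field numbers and comma delimiter taken from the question.

```bash
IFS=, read -r FIRST_NAME LAST_NAME < <(
    awk -F, -v id="$customer_id" '$0 ~ id { print $5 "," $6; exit }' customer-info
)
```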
433,782 | The output of the above command when passed through echo is: # echo systemctl\ {restart,status}\ sshd\;systemctl restart sshd; systemctl status sshd; Even if I paste the output to the terminal, the command works. But when I try to directly run the command, I get: # systemctl\ {restart,status}\ sshd\;bash: systemctl restart sshd;: command not found... I have two questions.. What exactly is this method of substitution and expansion called? (So that I can research it and learn more about it and how to use it properly). What did I do wrong here? Why doesn't it work? | It is a form of Brace expansion done in the shell. The brace-expansion idea is right, but the way it was used is incorrect here. When you meant to do: systemctl\ {restart,status}\ sshd\; The shell interprets systemctl restart sshd; as one long command and tries to run it, and it couldn't locate a binary to run it that way. Because at this stage, the shell tries to tokenize the items in the command line before building the complete command with arguments -- but it has not happened yet. For such known expansion values, you could use eval and still be safe, but be sure of what you are trying to expand with it. eval systemctl\ {restart,status}\ sshd\; But I would rather use a loop instead with for , instead of trying to do write a one-liner or use eval : for action in restart status; do systemctl "$action" sshddone | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/433782",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/210853/"
]
} |
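A small demonstration of the tokenisation point made in the answer above: printing each word the shell produces after brace expansion shows why no command is found.

```bash
printf '<%s>\n' systemctl\ {restart,status}\ sshd\;
# <systemctl restart sshd;>
# <systemctl status sshd;>
# Two single words, each containing spaces, so the shell looks for a
# command literally named "systemctl restart sshd;".
```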
433,840 | Suppose I have four very large text files, all compressed with xz. file1.log.xzfile2.log.xzfile3.log.xzfile4.log.xz What I'd like to do is concatenate the uncompressed contents of these four files into a new file file.xz . The thing is, I would ideally like to not have to go through intermediate files. The files are very large log files that are gigabytes in size. Compressed, they're under 100MB, but if I were to expand all four files then re-concatenate, I'd need at least 30GB of storage to store the uncompressed files. I could, of course, then cat all the uncompressed files into xz to recompress them: cat file1.log file2.log file3.log file4.log | xz -ve9 - > newfile.log.xz I know how I could concatenate two files at the command line without an intermediate, assuming one was uncompressed and one was compressed: xz -d -c file2.log.xz | cat file1.log - | xz -ve9 - > files1and2.log.xz But this will only work for one file, and one of them has to already be uncompressed. I'm not sure if I can just cat the various .xz files together - let's assume they may have been compressed with different parameters. On a higher level, the question itself could be asked: can you take the output of multiple (more than two) commands, concatenate those outputs, and pipe them into another process without intermediate files? (Hypothetical scenario: imagine I'm doing some kind of processing on all four very huge files using a script that outputs to stdout, and wanting to put the output into another compressed file.) Is it possible to do this using only shell commands? | The xz documentation says It is possible to concatenate .xz files as is. xz will decompress such files as if they were a single .xz file. From my tests, this works even if the different files are compressed with different options; so in your case cat -- *.log.xz > newfile.log.xz will work fine. To answer your more general question, you can pipe the output of a compound command, e.g. for file in -- *.log.xz; do xzcat -- "$file"; done | xz -ve9 > newfile.log.xz or any subshell. This would allow you to perform any processing you want to on your log files before recompressing them. However in the basic case this isn’t necessary either; you can decompress and recompress all your files by running xzcat -- *.log.xz | xz -ve9 > newfile.log.xz If you add -f this even works with uncompressed files, so xzcat -f -- uncompressed.log *.log.xz | xz -ve9 > newfile.log.xz would allow you to combine uncompressed and compressed logs. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/433840",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/65884/"
]
} |
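A sketch of the concatenation from the answer above, plus a check that the combined archive decompresses to the same bytes as the individual files, still without writing an uncompressed intermediate.

```bash
cat -- file1.log.xz file2.log.xz file3.log.xz file4.log.xz > newfile.log.xz
cmp <(xzcat -- file[1-4].log.xz) <(xzcat -- newfile.log.xz) && echo "archives match"
```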
433,874 | The ports repository (svnweb.freebsd.org/ports/head/) shows haproxy is version 1.7.10 but pkg search haproxy gives me haproxy-1.7.9 Reliable, high performance TCP/HTTP load balancer It suggests that pkg search uses other source instead of FreeBSD ports. Is that true? How can I install the latest version of haproxy-1.7.10 in FreeBSD (preferably using a binary package)? | I guess that you're installing packages from the quarterly branch. The quarterly branch does not always have the latest software, which is why it is regarded more stable than the latest branch. The newest one is 2018Q1 currently and as you can see here the newest haproxy is not there yet. You may switch to the latest branch as described in the " PKG Repository Changed to Quarterly in 10.2? " thread on FreeBSD Forums. Put the following code into /usr/local/etc/pkg/repos/FreeBSD.conf : FreeBSD: { url: "pkg+http://pkg.FreeBSD.org/${ABI}/latest"} It is generally recommended to store your installed software configuration files under /usr/local . This way it is easier to separate those settings from the configuration files of software shipped with FreeBSD. pkg(8) is somewhat special. Although being an essential utility it is not installed by default (it has to be bootstraped using pkg(7) ). This is why its configuration files fit both /etc and /usr/local/etc . See also: pkg.conf(5) | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/433874",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/236188/"
]
} |
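The branch switch described above, spelled out as commands (run as root; the haproxy package name is from the question).

```bash
mkdir -p /usr/local/etc/pkg/repos
cat > /usr/local/etc/pkg/repos/FreeBSD.conf <<'EOF'
FreeBSD: { url: "pkg+http://pkg.FreeBSD.org/${ABI}/latest" }
EOF
pkg update -f          # refetch the repository catalogue from the latest branch
pkg upgrade haproxy    # or: pkg install haproxy
```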
433,894 | Input.txt -------Database alias = ABCNode name = node01Hostname = hostnode01Service name = 12345-------Hostname = hostnode01Service name = 12345-------Database alias = PQRNode name = node01Hostname = hostnode01Service name = 12345-------Hostname = hostnode01Service name = 12345-------Database alias = XYZ ...... Expected Output -------Database alias = ABCNode name = node01Hostname = hostnode01Service name = 12345-------Database alias = PQRNode name = node01Hostname = hostnode01Service name = 12345-------Database alias = XYZ ...... Idea is to remove the Duplicate Values of Hostname & Service Name if repetativeor you can sayRemove lines between '------- ', if there are 2 lines in between. Tried using sed with Multiple Match, but not getting desired o/p sed '/-------/{$!N;/\n.*Hostname/d;}' Input.txt | I guess that you're installing packages from the quarterly branch. The quarterly branch does not always have the latest software, which is why it is regarded more stable than the latest branch. The newest one is 2018Q1 currently and as you can see here the newest haproxy is not there yet. You may switch to the latest branch as described in the " PKG Repository Changed to Quarterly in 10.2? " thread on FreeBSD Forums. Put the following code into /usr/local/etc/pkg/repos/FreeBSD.conf : FreeBSD: { url: "pkg+http://pkg.FreeBSD.org/${ABI}/latest"} It is generally recommended to store your installed software configuration files under /usr/local . This way it is easier to separate those settings from the configuration files of software shipped with FreeBSD. pkg(8) is somewhat special. Although being an essential utility it is not installed by default (it has to be bootstraped using pkg(7) ). This is why its configuration files fit both /etc and /usr/local/etc . See also: pkg.conf(5) | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/433894",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/283032/"
]
} |
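A hedged sketch of one way to drop the two-line Hostname/Service blocks described in the question above, assuming GNU awk for its multi-character record separator.

```bash
gawk 'BEGIN { RS = "-------\n"; ORS = "" }
      $0 == "" { next }                                   # empty record before the first separator
      $0 !~ /^Hostname[^\n]*\nService name[^\n]*\n?$/ {
          print "-------\n" $0                            # keep every other block
      }' Input.txt
```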
433,916 | I am looking for a command which prints all visual output ports a laptops has. I have searched Stack and Google for a while now but I cannot find an answer. The closest I got is xandr : eDP-1 connected primary 1366x768+0+0 (normal left inverted right x axis y axis) 277mm x 156mm 1366x768 60.00*+ 40.00 1360x768 59.80 59.96 1024x768 60.04 60.00 960x720 60.00 928x696 60.05 896x672 60.01 960x600 60.00 960x540 59.99 800x600 60.00 60.32 56.25 840x525 60.01 59.88 800x512 60.17 700x525 59.98 640x512 60.02 720x450 59.89 640x480 60.00 59.94 680x384 59.80 59.96 576x432 60.06 512x384 60.00 400x300 60.32 56.34 320x240 60.05 HDMI-1 disconnected (normal left inverted right x axis y axis)DP-1 disconnected (normal left inverted right x axis y axis)HDMI-2 disconnected (normal left inverted right x axis y axis) Looking for something like HDMI-1 disconnected (normal left inverted right x axis y axis)DP-1 disconnected (normal left inverted right x axis y axis)HDMI-2 disconnected (normal left inverted right x axis y axis) Unfortunately this output doesn't show the information I need. The information from xrandr is not accurate. Tried lspci , dmesg (maybe it is in there but couldn't find it), lshw and possibly some more hardware listing commands. The ideal situation would be VGA x1HDMI x1 or miniDP x1DVI x1 But a finger at the right direction would be greatly appreciated. | So I am messing with trying to change dual monitor setup on my machine and found your post. Because I'm interested in the actual display I'm looking for EDID resource from the attached monitor: find /sys/devices -name "edid" which produces an output like: /sys/devices/pci0000:00/0000:00:02.0/0000:03:00.0/drm/card0/card0-HDMI-A-1/edid/sys/devices/pci0000:00/0000:00:02.0/0000:03:00.0/drm/card0/card0-DVI-D-1/edid/sys/devices/pci0000:00/0000:00:02.0/0000:03:00.0/drm/card0/card0-DP-2/edid/sys/devices/pci0000:00/0000:00:02.0/0000:03:00.0/drm/card0/card0-HDMI-A-2/edid/sys/devices/pci0000:00/0000:00:02.0/0000:03:00.0/drm/card0/card0-DP-1/edid not all of which are valid but if you look at the individual folders in the /sys stuff theres file called status that looks like: cat /sys/devices/pci0000:00/0000:00:02.0/0000:03:00.0/drm/card0/card0-DP-1/statusconnected also more details about the connected display devices (vs the actual video card output) by doing something like: cat /sys/devices/pci0000:00/0000:00:02.0/0000:03:00.0/drm/card0/card0-DP-1/edid | edid-decodeExtracted contents:header: 00 ff ff ff ff ff ff 00serial number: 41 0c 0b 09 cd 0e 00 00 32 1aversion: 01 04basic params: b5 46 28 78 3achroma info: 59 05 af 4f 42 af 27 0e 50 54established: bd 4b 00standard: d1 c0 81 80 81 40 95 0f 95 00 b3 00 81 c0 01 01descriptor 1: 4d d0 00 a0 f0 70 3e 80 30 20 35 00 ba 8e 21 00 00 1adescriptor 2: a3 66 00 a0 f0 70 1f 80 30 20 35 00 ba 8e 21 00 00 1adescriptor 3: 00 00 00 fc 00 50 48 4c 20 33 32 38 50 36 56 0a 20 20descriptor 4: 00 00 00 fd 00 17 50 1e a0 3c 01 0a 20 20 20 20 20 20extensions: 01checksum: 74Manufacturer: PHL Model 90b Serial Number 3789Made week 50 of 2016EDID version: 1.4Digital display10 bits per primary color channelDisplayPort interfaceMaximum image size: 70 cm x 40 cmGamma: 2.20DPMS levels: OffSupported color formats: RGB 4:4:4, YCrCb 4:4:4, YCrCb 4:2:2First detailed timing is preferred timingEstablished timings supported: 720x400@70Hz 640x480@60Hz 640x480@67Hz 640x480@72Hz 640x480@75Hz 800x600@60Hz 800x600@75Hz 1024x768@60Hz 1024x768@75Hz 1280x1024@75HzStandard timings supported: 1920x1080@60Hz 1280x1024@60Hz 1280x960@60Hz 
1440x900@75Hz 1440x900@60Hz 1680x1050@60Hz 1280x720@60HzDetailed mode: Clock 533.250 MHz, 698 mm x 398 mm 3840 3888 3920 4000 hborder 0 2160 2163 2168 2222 vborder 0 +hsync -vsync Detailed mode: Clock 262.750 MHz, 698 mm x 398 mm 3840 3888 3920 4000 hborder 0 2160 2163 2168 2191 vborder 0 +hsync -vsync Monitor name: PHLMonitor ranges (bare limits): 23-80Hz V, 30-160kHz H, max dotclock 600MHzHas 1 extension blocksChecksum: 0x74 (valid)CEA extension blockExtension version: 334 bytes of CEA data Video data block VIC 16 1920x1080@60Hz VIC 31 1920x1080@50Hz VIC 4 1280x720@60Hz VIC 19 1280x720@50Hz VIC 3 720x480@60Hz VIC 18 720x576@50Hz VIC 2 720x480@60Hz VIC 17 720x576@50Hz VIC 1 640x480@60Hz VIC 5 1920x1080i@60Hz VIC 20 1920x1080i@50Hz Audio data block Linear PCM, max channels 2 Supported sample rates (kHz): 48 44.1 32 Supported sample sizes (bits): 24 20 16 Speaker allocation data block Speaker map: FL/FR Vendor-specific data block, OUI 000c03 (HDMI) Source physical address 1.0.0.0 DC_30bit DC_Y444 DVI_Dual Maximum TMDS clock: 600MHz Extended HDMI video details: HDMI VIC 0 3840x2160@30Hz HDMI VIC 1 3840x2160@25Hz HDMI VIC 2 3840x2160@24HzUnderscans PC formats by defaultBasic audio supportSupports YCbCr 4:4:4Supports YCbCr 4:2:21 native detailed modesDetailed mode: Clock 27.000 MHz, 698 mm x 398 mm 720 736 798 858 hborder 0 480 489 495 525 vborder 0 -hsync -vsync Detailed mode: Clock 74.250 MHz, 698 mm x 398 mm 1280 1390 1430 1650 hborder 0 720 725 730 750 vborder 0 +hsync +vsync Detailed mode: Clock 148.500 MHz, 698 mm x 398 mm 1920 2448 2492 2640 hborder 0 1080 1084 1089 1125 vborder 0 +hsync +vsync Detailed mode: Clock 147.170 MHz, 698 mm x 398 mm 2048 2096 2128 2208 hborder 0 1080 1083 1093 1111 vborder 0 +hsync -vsync Checksum: 0x18 (valid)EDID block does NOT conform to EDID 1.3! Detailed block string not properly terminated | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/433916",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/96456/"
]
} |
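The per-connector status files mentioned above can be walked in one short loop; connector names and paths differ per GPU and driver, so treat this as a sketch.

```bash
for c in /sys/class/drm/card*-*; do
    [ -e "$c/status" ] || continue
    printf '%s: %s\n' "${c##*/}" "$(cat "$c/status")"
done
# e.g. card0-HDMI-A-1: disconnected
#      card0-DP-1: connected
```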
433,931 | Case 1: For the below incoming message of syslog, <14>Mar 22 11:12:06 RT_FLOW: RT_FLOW_SESSION_CREATE: session created 1.2.3.4/62963->23.58.169.35/443 0x0 junos-https 6.7.8.9/32359->23.58.169.35/443 0x0 source rule 1 N/A N/A 6 1 Trust Untrust 60471 N/A(N/A) ge-0/0/1.0 UNKNOWN UNKNOWN UNKNOWN syslog-ng successfully adds hostname between timestamp( Mar 22 11:12:06 ) and type of message( RT_FLOW ) Case 2: But , for the below incoming message of syslog, <14> Mar 22 11:04:17 206.133.74.126 03362 auth: User 'sup_ogr' logged in from 206.133.74.127 to SSH session peculiarly, IP address is already existing between timestamp( Mar 22 11:04:17 ) & type of message( auth ). syslog-ng is not able to add host name between time stamp and type of message Configurations used are: use_dns (yes); & keep_hostname (yes); In second case, does syslog message having IP, in compliance with RFC 5424 standards? If yes, then, what configuration is required to set the host name, between time stamp & type of message? | So I am messing with trying to change dual monitor setup on my machine and found your post. Because I'm interested in the actual display I'm looking for EDID resource from the attached monitor: find /sys/devices -name "edid" which produces an output like: /sys/devices/pci0000:00/0000:00:02.0/0000:03:00.0/drm/card0/card0-HDMI-A-1/edid/sys/devices/pci0000:00/0000:00:02.0/0000:03:00.0/drm/card0/card0-DVI-D-1/edid/sys/devices/pci0000:00/0000:00:02.0/0000:03:00.0/drm/card0/card0-DP-2/edid/sys/devices/pci0000:00/0000:00:02.0/0000:03:00.0/drm/card0/card0-HDMI-A-2/edid/sys/devices/pci0000:00/0000:00:02.0/0000:03:00.0/drm/card0/card0-DP-1/edid not all of which are valid but if you look at the individual folders in the /sys stuff theres file called status that looks like: cat /sys/devices/pci0000:00/0000:00:02.0/0000:03:00.0/drm/card0/card0-DP-1/statusconnected also more details about the connected display devices (vs the actual video card output) by doing something like: cat /sys/devices/pci0000:00/0000:00:02.0/0000:03:00.0/drm/card0/card0-DP-1/edid | edid-decodeExtracted contents:header: 00 ff ff ff ff ff ff 00serial number: 41 0c 0b 09 cd 0e 00 00 32 1aversion: 01 04basic params: b5 46 28 78 3achroma info: 59 05 af 4f 42 af 27 0e 50 54established: bd 4b 00standard: d1 c0 81 80 81 40 95 0f 95 00 b3 00 81 c0 01 01descriptor 1: 4d d0 00 a0 f0 70 3e 80 30 20 35 00 ba 8e 21 00 00 1adescriptor 2: a3 66 00 a0 f0 70 1f 80 30 20 35 00 ba 8e 21 00 00 1adescriptor 3: 00 00 00 fc 00 50 48 4c 20 33 32 38 50 36 56 0a 20 20descriptor 4: 00 00 00 fd 00 17 50 1e a0 3c 01 0a 20 20 20 20 20 20extensions: 01checksum: 74Manufacturer: PHL Model 90b Serial Number 3789Made week 50 of 2016EDID version: 1.4Digital display10 bits per primary color channelDisplayPort interfaceMaximum image size: 70 cm x 40 cmGamma: 2.20DPMS levels: OffSupported color formats: RGB 4:4:4, YCrCb 4:4:4, YCrCb 4:2:2First detailed timing is preferred timingEstablished timings supported: 720x400@70Hz 640x480@60Hz 640x480@67Hz 640x480@72Hz 640x480@75Hz 800x600@60Hz 800x600@75Hz 1024x768@60Hz 1024x768@75Hz 1280x1024@75HzStandard timings supported: 1920x1080@60Hz 1280x1024@60Hz 1280x960@60Hz 1440x900@75Hz 1440x900@60Hz 1680x1050@60Hz 1280x720@60HzDetailed mode: Clock 533.250 MHz, 698 mm x 398 mm 3840 3888 3920 4000 hborder 0 2160 2163 2168 2222 vborder 0 +hsync -vsync Detailed mode: Clock 262.750 MHz, 698 mm x 398 mm 3840 3888 3920 4000 hborder 0 2160 2163 2168 2191 vborder 0 +hsync -vsync Monitor name: PHLMonitor ranges (bare limits): 23-80Hz V, 30-160kHz 
H, max dotclock 600MHzHas 1 extension blocksChecksum: 0x74 (valid)CEA extension blockExtension version: 334 bytes of CEA data Video data block VIC 16 1920x1080@60Hz VIC 31 1920x1080@50Hz VIC 4 1280x720@60Hz VIC 19 1280x720@50Hz VIC 3 720x480@60Hz VIC 18 720x576@50Hz VIC 2 720x480@60Hz VIC 17 720x576@50Hz VIC 1 640x480@60Hz VIC 5 1920x1080i@60Hz VIC 20 1920x1080i@50Hz Audio data block Linear PCM, max channels 2 Supported sample rates (kHz): 48 44.1 32 Supported sample sizes (bits): 24 20 16 Speaker allocation data block Speaker map: FL/FR Vendor-specific data block, OUI 000c03 (HDMI) Source physical address 1.0.0.0 DC_30bit DC_Y444 DVI_Dual Maximum TMDS clock: 600MHz Extended HDMI video details: HDMI VIC 0 3840x2160@30Hz HDMI VIC 1 3840x2160@25Hz HDMI VIC 2 3840x2160@24HzUnderscans PC formats by defaultBasic audio supportSupports YCbCr 4:4:4Supports YCbCr 4:2:21 native detailed modesDetailed mode: Clock 27.000 MHz, 698 mm x 398 mm 720 736 798 858 hborder 0 480 489 495 525 vborder 0 -hsync -vsync Detailed mode: Clock 74.250 MHz, 698 mm x 398 mm 1280 1390 1430 1650 hborder 0 720 725 730 750 vborder 0 +hsync +vsync Detailed mode: Clock 148.500 MHz, 698 mm x 398 mm 1920 2448 2492 2640 hborder 0 1080 1084 1089 1125 vborder 0 +hsync +vsync Detailed mode: Clock 147.170 MHz, 698 mm x 398 mm 2048 2096 2128 2208 hborder 0 1080 1083 1093 1111 vborder 0 +hsync -vsync Checksum: 0x18 (valid)EDID block does NOT conform to EDID 1.3! Detailed block string not properly terminated | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/433931",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/62659/"
]
} |
433,987 | My bash script contains many mysqldump blabla > dump.sql and mysq balbla < dump.sql in order to make it possible to run it in dry-run mode. Actually the point is to create a funtion run to run anything I ask it to. run echo 'hello world' run mysqldump blabla > dump.sql run mysql blabla < dump.sql run ssh blabla # etcrun() { if [[ "$(printenv DRY_RUN)" = "yes" ]] then echo "${@}" else ${@} fi} However, this is doesn't work: run "mysqldump -uuser -ppass dbase > dump.sql" I get this error: mysqldump: couldn't find table: ">" | You should use "${@}" instead of ${@} (like with echo "${@}" ) but that is not the reason for your problem. The reason is that redirection takes places very early in command line parsing, before parameter substitution. Thus after the variable has put > in the command line, the shell is not looking for > any more. An important point which I noticed after I published my answer: With a call like run mysqldump blabla > dump.sql the function run does not see > and dump.sql . That is probably not what you want because it prevents you from changing all the redirections with a single environment variable, as the output of echo "${@}" is redirected to the file then, too. Thus, you should use something like run --redirect dump.sql mysqldump blabla , see below. There are two possibilities: Stick with "$@" and use eval . This may take you to a quoting nightmare, of course. You have to quote everything except for the > so that the shell sees a bare > in the command line before it does quote removal. Handle the redirection separately: run --redirect dump.sql mysqldump blablarun() { if [ "$1" == '--redirect' ]; then shift redirect_target="$1" shift else redirect_target='/dev/stdout' # works at least with Linux fi if [[ "$(printenv DRY_RUN)" = "yes" ]] then echo "${@}" else "${@}" > "$redirect_target" fi} You can avoid the redirect_target='/dev/stdout' if you put the if [ "$1" == '--redirect' ] in the else branch of if [[ "$(printenv DRY_RUN)" = "yes" ]] . if [[ "$(printenv DRY_RUN)" = "yes" ]] then if [ "$1" == '--redirect' ]; then # do not output "--redirect file" shift shift fi echo "${@}" else if [ "$1" == '--redirect' ]; then shift redirect_target="$1" shift "${@}" > "$redirect_target" else "${@}" fi fi | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/433987",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/142331/"
]
} |
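Example use of the wrapper from the answer above; the script name is hypothetical, and an input redirection (mysql ... < dump.sql) would need an analogous --stdin option.

```bash
# inside the script:
run --redirect dump.sql mysqldump -uuser -ppass dbase
# invoking it:
DRY_RUN=yes ./backup.sh   # only prints the commands
DRY_RUN=no  ./backup.sh   # actually runs them
```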
433,991 | I know that i can use nmap to see which ports are open on specific machine.But what i need is a way to get it from the host side itself. Currently, if i use nmap on one of my machines to check the other one, i get for an example: smb:~# nmap 192.168.1.4PORT STATE SERVICE25/tcp open smtp80/tcp open http113/tcp closed ident143/tcp open imap443/tcp open https465/tcp open smtps587/tcp open submission993/tcp open imaps Is there a way to do this on the host itself? Not from a remote machine to a specific host. I know that i can do nmap localhost But that is not what i want to do as i will be putting the command into a script that goes through all the machines. EDIT: This way, nmap showed 22 5000 5001 5432 6002 7103 7106 7201 9200 but lsof command showed me 22 5000 5001 5432 5601 6002 7102 7103 7104 7105 7106 7107 7108 7109 7110 7111 7112 7201 7210 11211 27017 | On Linux, you can use: ss -ltu or netstat -ltu To list the l istening T CP and U DP ports. Add the -n option (for either ss or netstat ) if you want to disable the translation from port number and IP address to service and host name. Add the -p option to see the processes (if any, some ports may be bound by the kernel like for NFS) which are listening (if you don't have superuser privileges, that will only give that information for processes running in your name). That would list the ports where an application is listening on (for UDP, that has a socket bound to it). Note that some may only listen on a given address only (IPv4 and/or IPv6), which will show in the output of ss / netstat ( 0.0.0.0 means listen on any IPv4 address, [::] on any IPv6 address). Even then that doesn't mean that a given other host on the network may contact the system on that port and that address as any firewall, including the host firewall may block or mask/redirect the incoming connections on that port based on more or less complex rules (like only allow connections from this or that host, this or that source port, at this or that time and only up to this or that times per minutes, etc). For the host firewall configuration, you can look at the output of iptables-save . Also note that if a process or processes is/are listening on a TCP socket but not accepting connections there, once the number of pending incoming connection gets bigger than the maximum backlog, connections will no longer be accepted, and from a remote host, it will show as if the port was blocked. Watch the Recv-Q column in the output of ss / netstat to spot those situations (where incoming connections are not being accepted and fill up a queue). | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/433991",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/255177/"
]
} |
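To get a bare list of listening port numbers comparable to the lsof output in the question, the ss output can be reduced like this (column layout assumed from current iproute2).

```bash
ss -lntu | awk 'NR > 1 { sub(/.*:/, "", $5); print $5 }' | sort -nu
```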
433,998 | I am reading a book "Linux Command Line", there's -u update option for command mv and `cp' -u, --update When moving files from one directory to another, only move files that either don't exist, or are newer than the existing corresponding files in the destination directory. The option is not included in BSD 'mv' command. What's the alternative options for --update ? | On Linux, you can use: ss -ltu or netstat -ltu To list the l istening T CP and U DP ports. Add the -n option (for either ss or netstat ) if you want to disable the translation from port number and IP address to service and host name. Add the -p option to see the processes (if any, some ports may be bound by the kernel like for NFS) which are listening (if you don't have superuser privileges, that will only give that information for processes running in your name). That would list the ports where an application is listening on (for UDP, that has a socket bound to it). Note that some may only listen on a given address only (IPv4 and/or IPv6), which will show in the output of ss / netstat ( 0.0.0.0 means listen on any IPv4 address, [::] on any IPv6 address). Even then that doesn't mean that a given other host on the network may contact the system on that port and that address as any firewall, including the host firewall may block or mask/redirect the incoming connections on that port based on more or less complex rules (like only allow connections from this or that host, this or that source port, at this or that time and only up to this or that times per minutes, etc). For the host firewall configuration, you can look at the output of iptables-save . Also note that if a process or processes is/are listening on a TCP socket but not accepting connections there, once the number of pending incoming connection gets bigger than the maximum backlog, connections will no longer be accepted, and from a remote host, it will show as if the port was blocked. Watch the Recv-Q column in the output of ss / netstat to spot those situations (where incoming connections are not being accepted and fill up a queue). | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/433998",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/260114/"
]
} |
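For the question above, a hedged stand-in for GNU mv -u that relies only on test's -nt operator, which FreeBSD's sh and test(1) provide; the function name is made up.

```bash
move_update() {
    src=$1 dstdir=$2
    dst=$dstdir/${src##*/}
    if [ ! -e "$dst" ] || [ "$src" -nt "$dst" ]; then
        mv -- "$src" "$dst"
    fi
}
move_update notes.txt /backup    # moves only if missing or newer at the destination
```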
434,055 | Is there a pictorial representation of Linux filesystem, to understand the Linux filesystem. Currently running Ubuntu 16.04 and I want to efficiently re-install, using 2 HDD, 1x250GB and 1x500GB. The 250GB being the faster drive. | Currently FileSystem Hierarchy Standard (FHS) is in version 2.3 . To get an in-depth knowledge about it, Visit this page on Linux Foundation. Also as answered by dr01 , you can have a crisp knowledge about it at Wikipedia : FileSystem Hierarchy Standard . Would like to add this beautiful image from this Source . I reference this image every now and then. But please note that none of the directories should be capitalized. Feel free to add-in more details. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/434055",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/283164/"
]
} |
434,066 | centos@ip-10-0-5-4 ~ $ sudo ls -l /var/solr/data/new_core/_default/*zsh: no matches found: /var/solr/data/new_core/_default/*centos@ip-10-0-5-4 ~ $ sudo ls -l /var/solr/data/new_core/_default/ total 4drwxr-xr-x. 3 root root 4096 Mar 28 07:34 conf | The * is expanded by the shell before sudo is invoked. If you don't have access to that directory, the zsh shell will complain with "no matches found". If the NOMATCH shell option is unset in the zsh shell, the shell would have left the pattern unexpanded and ls would instead generate a "no such file or directory" error (unless there was something with the literal name * in that directory). With NOMATCH set, which it is by default, sudo ls would not even be invoked. You may do this instead: sudo sh -c 'ls -l /var/solr/data/new_core/_default/*' This prevents the current shell from expanding the * and instead invokes sh with the command line that you want to execute as root. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/434066",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/283133/"
]
} |
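An alternative to the sh -c form in the answer above that needs no shell glob at all, since find (running as root) enumerates the directory itself; GNU find options assumed.

```bash
sudo find /var/solr/data/new_core/_default/ -mindepth 1 -maxdepth 1 -exec ls -ld {} +
```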
434,092 | Linux doesn't actually distinguish between processes and threads, and implements both as a data structure task_struct . So what does Linux provide to some programs for them to tell threads of a process from its child processes? For example, Is there a way to see details of all the threads that a process has in Linux? Thanks. | From a task_struct perspective, a process’s threads have the same thread group leader ( group_leader in task_struct ), whereas child processes have a different thread group leader (each individual child process). This information is exposed to user space via the /proc file system. You can trace parents and children by looking at the ppid field in /proc/${pid}/stat or .../status (this gives the parent pid); you can trace threads by looking at the tgid field in .../status (this gives the thread group id, which is also the group leader’s pid). A process’s threads are made visible in the /proc/${pid}/task directory: each thread gets its own subdirectory. (Every process has at least one thread.) In practice, programs wishing to keep track of their own threads would rely on APIs provided by the threading library they’re using, instead of using OS-specific information. Typically on Unix-like systems that means using pthreads. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/434092",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/674/"
]
} |
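The /proc details described above, as concrete commands against an arbitrary process (the PID is a placeholder).

```bash
pid=1234                                          # hypothetical PID
ls /proc/"$pid"/task                              # one sub-directory per thread
grep -E '^(Tgid|Pid|PPid)' /proc/"$pid"/status    # thread group id vs. own/parent pid
ps -L -p "$pid" -o pid,lwp,comm                   # the same view via procps
```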
434,094 | I have a server running Centos 7 which needs to be rebooted to upgrade some software. Some of the physical NICs have around 5-10 VLAN interfaces each. They're subject to change on a weekly/monthly basis so storing the details in /etc/sysconfig/network-scripts to persist across reboots isn't practical. Is there an simple way to take a snapshot of the current networking stack and restore after the reboot? Similar to the way you can save/restore iptables rules? I've found several references to the system-config-network-cmd but I'm wary of using this tool in the event it overwrites the static configs for the physical interfaces we do have in /etc/sysconfig/network-scripts Thanks! | From a task_struct perspective, a process’s threads have the same thread group leader ( group_leader in task_struct ), whereas child processes have a different thread group leader (each individual child process). This information is exposed to user space via the /proc file system. You can trace parents and children by looking at the ppid field in /proc/${pid}/stat or .../status (this gives the parent pid); you can trace threads by looking at the tgid field in .../status (this gives the thread group id, which is also the group leader’s pid). A process’s threads are made visible in the /proc/${pid}/task directory: each thread gets its own subdirectory. (Every process has at least one thread.) In practice, programs wishing to keep track of their own threads would rely on APIs provided by the threading library they’re using, instead of using OS-specific information. Typically on Unix-like systems that means using pthreads. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/434094",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/143817/"
]
} |
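A heavily hedged sketch of one way to snapshot and re-create the VLAN sub-interfaces asked about above using iproute2; it assumes the 8021q module's /proc/net/vlan/config layout and does not capture addresses or routes (those would need e.g. the output of ip addr show saved separately).

```bash
# save: "parent vid" pairs
awk -F'|' 'NR > 2 { gsub(/ /, ""); print $3, $2 }' /proc/net/vlan/config > /root/vlans.txt
# ... reboot, then restore:
while read -r parent vid; do
    ip link add link "$parent" name "$parent.$vid" type vlan id "$vid"
    ip link set "$parent.$vid" up
done < /root/vlans.txt
```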
434,136 | I am reading through Salzman's Linux Kernel Module Programming Guide, and I was wondering about where the file linux/kernel.h is located. I couldn't find it with find . Or rather the files I found did not have any printk priority macros in them. | The linux/kernel.h header which gets used for module builds is the header which is part of the kernel source . When modules are built in the kernel source tree, that’s the version which is used. For external module builds, the build process looks for the header in /lib/modules/$(uname -r)/build/include/linux/sched.h . That file is provided by kernel header packages, e.g. on Debian derivatives, the linux-headers-$(uname -r) package. The /usr/include/linux/kernel.h is intended for user processes, not for kernel modules. The printk priority macros now live in linux/printk.h and linux/kern_levels.h . I’m guessing you’re reading the original guide , which is based on the 2.6 kernel series; for modern kernels you should read the updated guide (currently for 5.6.7 ). | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/434136",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/134933/"
]
} |
434,278 | When I wanted to create a hard link in my /home directory in root mode, Linux showed the following error message: ln: failed to create hard link ‘my_sdb’ => ‘/dev/sda1’: Invalid cross-device link The above error message is shown below: # cd /home/user/# ln /dev/sda1 my_sdb But I could only create a hard link in the /dev directory, and it was not possible in other directories. Now, I want to know how to create a hard link from an existing device file (like sdb1 ) in /home directory (or other directories) ? | But I could only create a hard link in the /dev directory and it was not possible in other directories. As shown by the error message, it is not possible to create a hard link across different filesystems; you can create only soft (symbolic) links. For instance, if your /home is in a different partition than your root partition, you won't be able to hard link /tmp/foo to /home/user/ . Now, as @RichardNeumann pointed out, /dev is usually mounted as a devtmpfs filesystem. See this example: [dr01@centos7 ~]$ dfFilesystem 1K-blocks Used Available Use% Mounted on/dev/mapper/centos_centos7-root 46110724 3792836 42317888 9% /devtmpfs 4063180 0 4063180 0% /devtmpfs 4078924 0 4078924 0% /dev/shmtmpfs 4078924 9148 4069776 1% /runtmpfs 4078924 0 4078924 0% /sys/fs/cgroup/dev/sda1 1038336 202684 835652 20% /boottmpfs 815788 28 815760 1% /run/user/1000 Therefore you can only create hard links to files in /dev within /dev . | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/434278",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/273848/"
]
} |
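Reproducing the behaviour discussed above; the device and target names are just examples, and the last command needs root.

```bash
ln /dev/sda1 ~/my_sdb      # fails with "Invalid cross-device link" when /dev and /home differ
ln -s /dev/sda1 ~/my_sdb   # a symbolic link works across filesystems
ln /dev/sda1 /dev/my_sdb   # a hard link works when both names stay on the devtmpfs
```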
434,289 | This is the output of cat /proc/cpuinfo processor : 0vendor_id : GenuineIntelcpu family : 6model : 78model name : Intel(R) Core(TM) i5-6200U CPU @ 2.30GHzstepping : 3microcode : 0x74cpu MHz : 2400.000cache size : 3072 KBphysical id : 0siblings : 4core id : 0cpu cores : 2apicid : 0initial apicid : 0fpu : yesfpu_exception : yescpuid level : 22wp : yesflags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb invpcid_single pti retpoline intel_pt rsb_ctxsw tpr_shadow vnmi flexpriority ept vpid fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid mpx rdseed adx smap clflushopt xsaveopt xsavec xgetbv1 xsaves dtherm ida arat pln pts hwp hwp_notify hwp_act_window hwp_eppbugs : cpu_meltdown spectre_v1 spectre_v2bogomips : 4800.00clflush size : 64cache_alignment : 64address sizes : 39 bits physical, 48 bits virtualpower management:processor : 1vendor_id : GenuineIntelcpu family : 6model : 78model name : Intel(R) Core(TM) i5-6200U CPU @ 2.30GHzstepping : 3microcode : 0x74cpu MHz : 2400.000cache size : 3072 KBphysical id : 0siblings : 4core id : 1cpu cores : 2apicid : 2initial apicid : 2fpu : yesfpu_exception : yescpuid level : 22wp : yesflags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb invpcid_single pti retpoline intel_pt rsb_ctxsw tpr_shadow vnmi flexpriority ept vpid fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid mpx rdseed adx smap clflushopt xsaveopt xsavec xgetbv1 xsaves dtherm ida arat pln pts hwp hwp_notify hwp_act_window hwp_eppbugs : cpu_meltdown spectre_v1 spectre_v2bogomips : 4800.00clflush size : 64cache_alignment : 64address sizes : 39 bits physical, 48 bits virtualpower management:processor : 2vendor_id : GenuineIntelcpu family : 6model : 78model name : Intel(R) Core(TM) i5-6200U CPU @ 2.30GHzstepping : 3microcode : 0x74cpu MHz : 2400.000cache size : 3072 KBphysical id : 0siblings : 4core id : 0cpu cores : 2apicid : 1initial apicid : 1fpu : yesfpu_exception : yescpuid level : 22wp : yesflags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb invpcid_single pti retpoline intel_pt rsb_ctxsw tpr_shadow vnmi flexpriority ept vpid fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid mpx rdseed adx smap clflushopt xsaveopt xsavec xgetbv1 xsaves dtherm ida arat pln pts hwp hwp_notify hwp_act_window hwp_eppbugs : cpu_meltdown spectre_v1 spectre_v2bogomips : 4800.00clflush size : 64cache_alignment : 64address sizes : 39 
bits physical, 48 bits virtualpower management:processor : 3vendor_id : GenuineIntelcpu family : 6model : 78model name : Intel(R) Core(TM) i5-6200U CPU @ 2.30GHzstepping : 3microcode : 0x74cpu MHz : 2400.000cache size : 3072 KBphysical id : 0siblings : 4core id : 1cpu cores : 2apicid : 3initial apicid : 3fpu : yesfpu_exception : yescpuid level : 22wp : yesflags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb invpcid_single pti retpoline intel_pt rsb_ctxsw tpr_shadow vnmi flexpriority ept vpid fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid mpx rdseed adx smap clflushopt xsaveopt xsavec xgetbv1 xsaves dtherm ida arat pln pts hwp hwp_notify hwp_act_window hwp_eppbugs : cpu_meltdown spectre_v1 spectre_v2bogomips : 4800.00clflush size : 64cache_alignment : 64address sizes : 39 bits physical, 48 bits virtualpower management: Would I be right in assuming I have 4 processors that have 2 cores each?Why then, are the core id's in each processor numbered 1, 0, 1, 0 as seen from the output of cat /proc/cpuinfo | grep 'core id' ? core id : 0core id : 1core id : 0core id : 1 | /proc/cpuinfo 's information might be a bit confusing. These are kernel processors which are not necessarily physical processors. These include cores/modules and threads, too. When you take a look at this: physical id : 0siblings : 4core id : 0cpu cores : 2 physical id is the actual physical processor count. It remains always 0, meaning that you have 1 CPU in your system. siblings designates that it is one of the 4 in total kernel processors ( threads , if you like). core id can be explained as the thread per core meaning you have 2 threads per each core , each with ids 0 and 1. cpu cores is the total number of cores that this processor has. Perhaps it would be easier to use lscpu - information is presented in a more straightforward way. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/434289",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/185275/"
]
} |
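The same sockets/cores/threads arithmetic can be pulled straight out of /proc/cpuinfo; the last line assumes a single socket, as in the output above.

```bash
grep -c '^processor' /proc/cpuinfo                                   # kernel CPUs (threads): 4
awk -F: '/^physical id/ {print $2}' /proc/cpuinfo | sort -u | wc -l  # physical CPUs: 1
awk -F: '/^core id/ {print $2}' /proc/cpuinfo | sort -u | wc -l      # cores: 2
```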
434,361 | How can I do a fast text replace with recursive directories and filenames with spaces and single quotes ? Preferably using standard UNIX tools, or alternatively a well-known package. Using find is extremely slow for many files, because it spawns a new process for each file, so I'm looking for a way that has directory traversing and string replacement integrated as one operation. Slow search: find . -name '*.txt' -exec grep foo {} \; Fast search: grep -lr --include=*.txt foo Slow replace: find . -name '*.txt' -exec perl -i -pe 's/foo/bar/' {} \; Fast replace: # Your suggestion here (This one is rather fast, but is two-pass and doesn't handle spaces.) perl -p -i -e 's/foo/bar/g' `grep -lr --include=*.txt foo` | You'd only want to use the: find . -name '*.txt' -exec cmd {} \; form for those cmd s that can only take one argument. That's not the case of grep . With grep : find . -name '*.txt' -exec grep foo /dev/null {} + (or use -H with GNU grep ). More on that at Recursive grep vs find / -type f -exec grep {} \; Which is more efficient/faster? Now for replacement, that's the same, perl -pi can take more than one argument: find . -name '*.txt' -type f -exec perl -pi -e s/foo/bar/g {} + Now that would rewrite the files regardless of whether they contain foo or not. Instead, you may want (assuming GNU grep and xargs or compatible): find . -name '*.txt' -type f -exec grep -l --null foo {} + | xargs -r0 perl -pi -e s/foo/bar/g Or: grep -lr --null --include='*.txt' foo . | xargs -r0 perl -pi -e s/foo/bar/g So only the files that contain foo be rewritten. BTW, --include=*.txt ( --include being another GNU extension) is a shell glob, so should be quoted. For instance, if there was a file called --include=foo.txt in the current directory, the shell would expand --include=*.txt to that before calling grep . And if not, with many shells, you'd get an error about the glob failing to match any file. So you'd want grep --include='*.txt' | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/434361",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/40335/"
]
} |
434,417 | In the first terminal A, I create a directory, enter the directory, and create a file: $ mkdir test$ cd test$ touch file1.txt$ lsfile1.txt Then in another terminal B, I delete the directory: $ rm -r test$ mkdir test$ cd test$ touch file2.txt And back again the terminal A (not doing any cd ), I try to list the files: $ ls ls doesn't see anything and it doesn't complain either. What happens in the background? How comes that ls doesn't see the problem? And is there a standard, portable, and/or recommended way to find out that something is not right in the terminal A? pwd just prints the seemingly correct directory name. touch file3.txt says no such file or directory which is not helpful. Only bash -c "pwd" gives a two long error lines which somehow give away that something is wrong but are not really descriptive and I'm not sure how portable that is between different systems (I'm on Ubuntu 16.04). cd .. && cd test fixes the problem, but does not really explain what happened. | How comes that ls doesn't see the problem? There is no "problem" in the first place. something is not right in the terminal A There is nothing not right. There are defined semantics for processes having unlinked directories open just as there are defined semantics for processes having unlinked files open. Both are normal things. There are defined semantics for unlinking a directory entry that referenced something (whilst having that something open somewhere) and then creating a directory entry by the original name linking to something else : You now have two of those somethings, and referencing the open description for the first does not access the second, or vice versa. This is as true of directories as it is of files. A process can have an open file description for a directory by dint of: it being the process's working directory; it being the process's root directory; it being open by the process having called the opendir() library function; or it being open by the process having called the open() library function. rmdir() is allowed to fail to remove links to a still-open directory (which was the behaviour of some old Unices and is the behaviour of some non-Unix-non-Linux POSIX-conformant systems), and is required to fail if the still-open directory is unlinked via a name that ends in a pathname component . ; but if it succeeds and removes the final link to the directory the defined semantics are that a still-open but unlinked directory: has no directory entries at all ; cannot have any directory entries created thereafter, even if the attempting process has write access or privileged access. Your operating system is one of the ones that does not return EBUSY from rmdir() in these circumstances, and your shell in the first terminal session has an unlinked but still open directory as its current directory. Everything that you saw was the defined behaviour in that circumstance. ls , for example, showed the empty still open first directory, of the two directories that you had at that point. Even the output of pwd was. When run as a built-in command in that shell it was that shell internally keeping track of the name of the current directory in a shell/environment variable. 
When run as a built-in command in another shell, it was the other shell failing to match the device and i-node number of its working directory to the second directory now named by the contents of the PWD environment variable that it inherited, thus deciding not to trust the contents of PWD , and then failing in the getcwd() library function because the working directory does not have any names any longer, it having been unlinked. Further reading rmdir() . "System Interfaces". The Open Group Base Specifications . IEEE 1003.1:2017. https://unix.stackexchange.com/a/413225/5132 Why can't I remove the '.' directory? Does 'rm .*' ever delete the parent directory? | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/434417",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/58065/"
]
} |
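Two quick checks (Linux-specific, sketched) that reveal the situation described above from inside terminal A.

```bash
readlink /proc/$$/cwd    # the target ends in " (deleted)" once the directory is unlinked
stat -c %h .             # link count drops to 0 instead of the usual 2 or more (GNU stat)
```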
434,532 | When I run these four commands on Xubuntu 16.04, either locally or over ssh, they all seem to do the exact same thing: export DISPLAY=:0.0 #not necessary unless you have logged in over ssh instead of starting a terminal locally gedit & gedit & disown nohup gedit nohup gedit & disown I don't get the the difference between gedit & and gedit & disown because if I kill the parent terminal or log out out of an ssh session, it would seem that gedit is "disowned" in either scenario. As for two and three, the only difference I see is that the command output is logged to a separate file and will continue to be logged to that separate log even if the original shell session that spawned the bg process is killed. As for three and four, I keep reading that there is a technical difference, but cannot understand at all why would you would prefer one over the other. Which one should I use? I have seen all four commands used in tutorials and Q&As, and despite some really great answers describing the technical differences between nohup and disown, I can't seem to get a clear recommendation (except perhaps for logging purposes or shell compatibility) for which one I should use. | When I need to run a script that will run for a long time , and I'm on an ssh session, I want either: The task should continue even when the network breaks or when I pack my laptop and go away. a. The task can finish without interactive input from me. nohup do_my_stuff & b. The task might need something from me on stdin. man tmux history -w tmux do_my_stuff The background process is somehow enhancing my current session and should die together with the session. A rarity. enhance_my_session >>/tmp/enhance.$$.log 2>&1 & I want the thing to spit some logs randomly at my ssh session. Wait, what? No, I would never want that. Thank you disown . Another thing that I never want: convert the process to a fully detached daemon, but avoid starting it automatically at the next boot. I would never want that because I cannot predict when the system will reboot and who will be rebooting it. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/434532",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/273175/"
]
} |
434,608 | I have installed a minimal Debian 9 through CD image. I configured Apt to not install recommended packages and now the system don't have any man page. man apt-get , man mkdir , or man ping doesn't show any man page; instead, I get bash: man: command not found | bash: man: command not found means that you need to install the man-db package . Manpages are installed by default in most cases, because Debian policy strongly encourages them to be shipped in the same package as the commands themselves : Each program, utility, and function should have an associated manual page included in the same package. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/434608",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/283579/"
]
} |
434,611 | I am trying to append and prepend text to every line in a .txt file. I want to prepend: I am a I want to append: 128... [} to every line. Inside of a.txt: fruit, likebike, likedino, like Upon performing the following command: $ cat a.txt|sed 'I am a ',' 128... [}' it does not work how I want it to. I would really want it to say the following: I am a fruit, like 128... [}I am a bike, like 128... [}I am a dino, like 128... [} | Simple sed approach: sed 's/^/I am a /; s/$/ 128... [}/' file.txt ^ - stands for start of the string/line $ - stands for end of the string/line The output: I am a fruit, like 128... [}I am a bike, like 128... [}I am a dino, like 128... [} Alternatively, with Awk you could do: awk '{ print "I am a", $0, "128... [}" }' file.txt | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/434611",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/237911/"
]
} |
434,884 | I did some research, but unfortunately my question was never answered. The closest question to mine is this: Search for bash command What I need: A command to search for installed commands using wildcards. Let's say I want to unmount something but forgot the command. I know it contains "mount". So I would do something like this: searchfor *mount* To find that unmount is the command I need. I know man -k or yum whatprovides . But that's not what I'm looking for. I want to search for all installed commands (that can be found in the directories provided by the $PATH variable). | My favorite way is to use compgen -c . For example, to find allcommands that contain mount : $ compgen -c | grep mountgvfs-mountmmountideviceimagemountergrub-mounthumounthmountumountmountpointmountfusermountumountmount.ntfsmount.lowntfs-3gmount.cifsumount.udisksmount.nfsumount.nfsmountmount.ntfs-3gmount.fuseshowmountrpc.mountdmountespumount.udisks2mountstatsautomount What's good about compgen -c is that it also finds aliases, user functions and Bash built-in commands, for example: $ alias aoeuidhtn='echo hi'$ compgen -c | grep aoeuidhtnaoeuidhtn$ my-great-function() { printf "Inside great function()\n"; }$ compgen -c | grep greatmy-great-function$ compgen -c | grep '^cd$'cd Also, compgen is a part of Bash, so it's always available. As described by help compgen : Display possible completions depending on the options. Intended to be used from within a shell function generating possible completions. If the optional WORD argument is supplied, matches against WORD are generated. Exit Status: Returns success unless an invalid option is supplied or an error occurs. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/434884",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/283845/"
]
} |
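To get the searchfor usage asked for in 434,884, compgen can be wrapped in a tiny function; the function name and the sort -u de-duplication are just one way to do it:
# in ~/.bashrc
searchfor() { compgen -c | grep -i -- "$1" | sort -u; }
# then:
searchfor mount
Since grep matches substrings, no wildcards are needed around the search term.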
434,916 | I just subscribed to a VPN provider. I have Xubuntu 17.10, openvpn 2.4.3. After launching the openvpn command I check the IP (fine) and performed a simple DNS leak test : not fine, it shows my Internet Service Provider! How to fix this DNS leak? I have one preliminary interrogation: is it "fixable" on my side? Or is the remote server wrongly configured? On my side, I tried changing some values in the .ovpn config file for openvpn: Originally there were already these lines, that are expected to work, but nope: script-security 2up /etc/openvpn/update-resolv-confdown /etc/openvpn/update-resolv-conf I changed them according to this reddit answer (explicitly specifying DNS addresses): dhcp-option DNS 208.67.222.222dhcp-option DNS 208.67.220.220dhcp-option DNS 8.26.56.26up "/etc/openvpn/update-resolv-conf foreign_option_1='dhcp-option DNS 208.67.222.222' foreign_option_2='dhcp-option DNS 208.67.220.220' foreign_option_3='dhcp-option DNS 8.26.56.26'"down "/etc/openvpn/update-resolv-conf foreign_option_1='dhcp-option DNS 208.67.222.222' foreign_option_2='dhcp-option DNS 208.67.220.220' foreign_option_3='dhcp-option DNS 8.26.56.26'" Doing that seems to do the job, as the content of /etc/resolvconf gets updated by the up/down scripts: # Dynamic resolv.conf(5) file for glibc resolver(3) generated by resolvconf(8)# DO NOT EDIT THIS FILE BY HAND -- YOUR CHANGES WILL BE OVERWRITTEN# 127.0.0.53 is the systemd-resolved stub resolver.# run "systemd-resolve --status" to see details about the actual nameservers.nameserver 208.67.222.222nameserver 208.67.220.220nameserver 8.26.56.26search lan but DNSleaktest still showing my ISP. So then I learned the existence of the ubuntu package openvpn-systemd-resolved which provides a script similar to update-resolve-conf but makes it work with systemd (here I have no idea what processes use this: network-manager? openvpn?). I installed the package and replaced the script name in my .ovpn file: up "/etc/openvpn/update-systemd-resolved ..."down "..."down-pre Still no luck. [While writing this I just figured out the solution, see my answer below] Then I played a lot with the /etc/resolv.conf file. Normally it should not be changed, so I put my DNS servers addresses into /etc/resolvconf/resolv.conf.d/base , but issuing resolvconf -u did not appear to work. Chatted with a support person from the VPN company, no solution. I tried various solutions like this one , and subsequent unaccepted answers: installing dnsmasq and putting server=... into /etc/dnsmasq.conf ; putting a "supersede" line in the /etc/dhcp/dhclient.conf ( details ); the chattr -based hack . I forgot the other things I tried, then I thought, stackexchange will save me from my misery, and it miraculously did, just by the power of formulating a question. [Edit 1: Not solved! Actually my first answer is not the reason it works] I noticed it after more checking. I can remove the systemd-update-resolved lines and it still works, but only on certain conditions: When the openvpn service is running, I get DNS leaks.If I stop it, and then restart only the service for my client: sudo service openvpn stopsudo service openvpn@client start then it works. Sorry, I suppose I haven't check the openvpn manual thoroughly, but why is that ? Isn't it a security leak? Especially because the openvpn service is activated automatically after installation from apt. How to make the change permanent? (I tried sudo systemctl disable openvpn , but at next startup I still had the same problem). 
[Edit 2: routing tables] Once I stopped openvpn and started openvpn@client , I don't have DNS leaks and the output of route -n is: Kernel IP routing tableDestination Gateway Genmask Flags Metric Ref Use Iface0.0.0.0 91.240.65.1 128.0.0.0 UG 0 0 0 tun00.0.0.0 192.168.1.254 0.0.0.0 UG 100 0 0 eno191.240.64.17 192.168.1.254 255.255.255.255 UGH 0 0 0 eno191.240.65.0 0.0.0.0 255.255.255.224 U 0 0 0 tun0128.0.0.0 91.240.65.1 128.0.0.0 UG 0 0 0 tun0169.254.0.0 0.0.0.0 255.255.0.0 U 1000 0 0 eno1192.168.1.0 0.0.0.0 255.255.255.0 U 100 0 0 eno1 After a sudo service openvpn restart : Kernel IP routing tableDestination Gateway Genmask Flags Metric Ref Use Iface0.0.0.0 91.240.66.1 128.0.0.0 UG 0 0 0 tun00.0.0.0 192.168.1.254 0.0.0.0 UG 100 0 0 eno191.240.64.16 192.168.1.254 255.255.255.255 UGH 0 0 0 eno191.240.66.0 0.0.0.0 255.255.255.224 U 0 0 0 tun0128.0.0.0 91.240.66.1 128.0.0.0 UG 0 0 0 tun0169.254.0.0 0.0.0.0 255.255.0.0 U 1000 0 0 eno1192.168.1.0 0.0.0.0 255.255.255.0 U 100 0 0 eno1 Not working anymore, I get DNS leaks in both cases. I tried installing the package openresolv (which replaces resolvconf), and it seems to work. Here is the new routing table: Kernel IP routing tableDestination Gateway Genmask Flags Metric Ref Use Iface0.0.0.0 91.240.66.161 128.0.0.0 UG 0 0 0 tun00.0.0.0 192.168.1.254 0.0.0.0 UG 100 0 0 eno191.240.64.15 192.168.1.254 255.255.255.255 UGH 0 0 0 eno191.240.66.160 0.0.0.0 255.255.255.224 U 0 0 0 tun0128.0.0.0 91.240.66.161 128.0.0.0 UG 0 0 0 tun0169.254.0.0 0.0.0.0 255.255.0.0 U 1000 0 0 eno1192.168.1.0 0.0.0.0 255.255.255.0 U 100 0 0 eno1 | I had this DNS leak issue on Ubuntu 17.10 and now 18.04 LTS. It must have started when I updated from 16.10 a while back and I never thought to check until now, by accident. None of the above (and other things I found and tried) helped, until I ran into this URL below, reading all the way through the bug report. The comment on adding a dns-priority line worked for me. https://bugs.launchpad.net/network-manager/+bug/1624317 look at comment #103. Look for your installed NetworkManager VPN connections (the ' $ ' is just my system prompt, to show you're at the command line in a terminal window): $ ls -la /etc/NetworkManager/system-connections/* Then choose the one you want to fix and run this command on it (or you can just edit the config file manually, as this command just adds a dns-priority entry under section ipv4): $ sudo nmcli connection modify <vpn-connection-name> ipv4.dns-priority -42 And restart: $ sudo service network-manager restart Note that at least for me, putting it in the OpenVPN .ovpn config file that came from my VPN (ProtonVPN) did not work. For some reason it did not make it into the NetworkManager config when it was installed using the GUI dialog. Only by updating the config after it was installed, and then restarting NetworkManager, did it work. And you need to do this for each installed VPN config you want to use. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/434916",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/152329/"
]
} |
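Before re-running a DNS leak test after the dns-priority fix above (434,916), it can help to confirm the setting and the DNS servers actually in use; the connection name is a placeholder, as in the answer:
nmcli connection show "<vpn-connection-name>" | grep -i dns-priority
systemd-resolve --status | grep -A3 'DNS Servers'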
434,964 | Let's say the variable numbers=$@ where $@ is from user input. The user typed in ./script.sh 901.32.02 and I want to get the first digit 9 and store in another variable. How can I do this? I was told to do for n in `seq 1 $count` do var=${numbers[0]} done but that prints out the whole input if I echo $var instead of just 9 . | In Bash, you can extract the first character using parameter expansion : ${parameter:offset:length} Example: $ var=901.32.02$ first_char="${var:0:1}"$ echo "${first_char}"9 | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/434964",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/274999/"
]
} |
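Applied to the invocation in question 434,964 (./script.sh 901.32.02), the same parameter expansion works directly on the positional parameter:
#!/bin/bash
first="${1:0:1}"
echo "first digit: $first"   # prints 9 for ./script.sh 901.32.02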
435,043 | How do I get perl to properly replace UTF-8 character from a shell? The examples use stdin, but I need something that works for perl ... file too. This is what I expect: $ echo ABCæøåDEF | perl -CS -pe "s/([æøå])/[\\1]/g"ABC[æ][ø][å]DEF This is what I get: $ echo ABCæøåDEF | perl -CS -pe "s/([æøå])/[\\1]/g"ABCæøåDEF Replacing the Unicode characters with ASCII works instantly: $ echo ABC123DEF | perl -CS -pe "s/([123])/[\\1]/g"ABC[1][2][3]DEF My environment: perl 5.18.2Bash 3.2.57LC_ALL=en_US.UTF-8LANG=en_US.UTF-8 | Use this : $ echo 'ABCæøåDEF' | perl -CSD -Mutf8 -pe 's/([æøå])/[$1]/g' Works also for files Output : ABC[æ][ø][å]DEF Note : substitutions: \\1 is for awk , \1 is for sed and in perl we use $1 check perldoc perlrun for -CSD utf8 tricks | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/435043",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/40335/"
]
} |
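Since question 435,043 also asks about perl ... file, the same flags accept a filename argument; the in-place variant is an untested extra, so keep the .bak backup until you have checked the result:
perl -CSD -Mutf8 -pe 's/([æøå])/[$1]/g' file.txt
perl -CSD -Mutf8 -i.bak -pe 's/([æøå])/[$1]/g' file.txt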
435,179 | I have the following lines 3, 3, 1004, 2, 508, 5, 80... and I want the following output line starts at 3 and ends at 3 with value 100line starts at 4 and ends at 2 with value 50line starts at 8 and ends at 5 with value 80... I tried the following: sed 's/^/line starts at /' then applying this command for the output: sed 's/, / and ends at /' then applying this command for the output sed 's/, / with value /' . Is there any way to do it in a single line? | awk is good for this kind of formatted input - formatted output: awk -F, '{printf("line starts at %d and ends at %d with value %d\n", $1, $2, $3)}' file line starts at 3 and ends at 3 with value 100line starts at 4 and ends at 2 with value 50line starts at 8 and ends at 5 with value 80 | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/435179",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/284049/"
]
} |
435,193 | I am running such a program: min_val=1 max_val=100 int=50 if [[ "$int" =~ ^-?[0-9]+$ ]]; then if [[ "$int" -ge "$min_val" && -le "$max_val" ]]; then echo "$int is within $min_val to $max_val." else echo "$int is out of range." fi else echo "int is not an integer." >&2 exit 1 fi It report error $ bash test_integer3.sh test_integer3.sh: line 12: conditional binary operator expectedtest_integer3.sh: line 12: syntax error near `"$max_val"'test_integer3.sh: line 12: ` if [[ "$int" -ge "$min_val" && -le "$max_val" ]]; then' I have examined carefully line-by-line. What might be the problem? | if [[ "$int" -ge "$min_val" && -le "$max_val" ]]; then You will have to compare against $int in both comparisons: if [[ "$int" -ge "$min_val" ]] && [[ "$int" -le "$max_val" ]]; then or, if (( int >= min_val )) && (( int <= max_val )); then | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/435193",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/260114/"
]
} |
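For completeness on 435,193: a single [[ ... ]] with && is also accepted, as long as each comparison names the variable:
if [[ "$int" -ge "$min_val" && "$int" -le "$max_val" ]]; then
    echo "$int is within $min_val to $max_val."
else
    echo "$int is out of range."
fi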
435,233 | I am writing a simple packet processing program. Here is code excerpt: void print_ethernet_header(unsigned char* buffer){ struct ethhdr *eth = (struct ethhdr *)buffer; fprintf(logfile , " |-Protocol : %x \n",eth->h_proto);} This simple function should print to logfile the hex value of protocol type. And indeed it does print value '8'.However, both in source /usr/include/net/ethernet.h and online ( https://en.wikipedia.org/wiki/EtherType ) I see that IP protocol type is defined as 0x0800. So I actually expected to see value 800 (in hex) or 2048 (in dec) to be printed to file, not 8. I thought that maybe this has something to do with endianess and a need to convert from net byte order to host, but have not found anything about this in recvfrom() man page.Here is the call that fills up the buffer variable: sock_raw = socket(AF_PACKET,SOCK_RAW,htons(ETH_P_ALL));//some code here...data_size = recvfrom(sock_raw , buffer , bufsize , 0 , (struct sockaddr*)&saddr , (socklen_t*)&saddr_size); The machine I work on is little-endian (Ubuntu 16.04). Why does the protocol type show 8 ? | The structure definition shows that h_proto is a big-endian 16-bit integer: struct ethhdr { unsigned char h_dest[ETH_ALEN]; /* destination eth addr */ unsigned char h_source[ETH_ALEN]; /* source ether addr */ __be16 h_proto; /* packet type ID field */} __attribute__((packed)); So you do need to process it with ntohs before reading it. Once you do that, you’ll see the correct value, 0x0800. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/435233",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/271829/"
]
} |
435,326 | ConsoleKit is the traditional mechanism for tracking user sessions on Linux. eLogind has similar functionality, but is based on systemd and "independentized". What are the differences in their functionality/feature set? What are their pros and cons? | Aside from the difference in maintainership pointed out by Ortomala Lokni (which I might add is only accurate for the original ConsoleKit, there is a fork called ConsoleKit2 which is actively maintained), there are a handful of mostly minor differences: Configuration is handled differently. ConsoleKit has its own configuration directory, while elogind uses the same configuration locations as systemd-logind. Exact functionality is slightly different. I don't remember all of the specifics here, but it's mostly minor stuff that is not widely used. The DBus APIs are sufficiently different that most software needs to be built to use one or the other. I'm pretty sure most of this is just a change to the name of the DBus endpoints, but there might be a few other things too. ConsoleKit either doesn't support cgroups , or only supports version one cgroups (if using ConsoleKit2), elogind only supports v2 cgroups. Elogind actually needs cgroups , and in fact may have build problems on systems that do not have them configured the way it expects them to be. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/435326",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/34868/"
]
} |
435,413 | The issue of jq needing an explicit filter when the output is redirected is discussed all over the web. But I'm unable to redirect output if jq is part of a pipe chain, even when an explicit filter is in use. Consider: touch in.txttail -f in.txt | jq '.f1'# in a different terminal:echo '{"f1":1,"f2":2}' >> in.txtecho '{"f1":3,"f2":2}' >> in.txt As expected, the output in the original terminal from the jq command is: 13 But if I add any sort of redirection or piping to the end of the jq command, the output goes silent: rm in.txttouch in.txttail -f in.txt | jq '.f1' | tee out.txt# in a different terminal:echo '{"f1":1,"f2":2}' >> in.txtecho '{"f1":3,"f2":2}' >> in.txt No output appears in the first terminal and out.txt is empty. I've tried hundreds of variations but it's an elusive issue. The only workaround I've found , as discovered through mosquitto_sub and The Things Network (which was where I also discovered the issue), is to wrap the tail and jq functions in a shell script: #!/bin/bashtail -f $1 | while IFS='' read line; doecho $line | jq '.f1'done Then: ./tail_and_jq.sh | tee out.txt# in a different terminal:echo '{"f1":1,"f2":2}' >> in.txtecho '{"f1":3,"f2":2}' >> in.txt And sure enough, the output appears: 13 This is with the latest jq installed via Homebrew: $ echo $SHELL/bin/bash$ jq --versionjq-1.5$ brew install jqWarning: jq 1.5_3 is already installed and up-to-date Is this a (largely undocumented) bug in jq or with my understanding of pipe chains? | The output from jq is buffered when its standard output is piped. To request that jq flushes its output buffer after every object, use its --unbuffered option, e.g. tail -f in.txt | jq --unbuffered '.f1' | tee out.txt From the jq manual: --unbuffered Flush the output after each JSON object is printed (useful if you're piping a slow data source into jq and piping jq 's output elsewhere). | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/435413",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/216323/"
]
} |
435,482 | Say I want to have a template somewhere which is multiline string: I have sometext with ${placeholders}in this and some ${different}ones ${here} and ${there} What would be my best way of replacing the placeholders with input from a user? Would here-documents be a good use? | Assuming [a] that no \<newline>, nor the characters \ , $ , or ` are used in the multiline string (or they are properly quoted), a here-document (and variables) is your best option: #!/bin/shplaceholders="value one"different="value two"here="value three"there="value four"cat <<-_EOT_I have sometext with ${placeholders}in this and some ${different}ones ${here} and ${there}._EOT_ If executed: $ sh ./scriptI have sometext with value onein this and some value twoones value three and value four. Of course, correctly playing with qouting, even one variable could do: $ multilinevar='I have some> text with '"${placeholders}"'> in this and some '"${different}"'> ones '"${here}"' and '"${there}"'.'$ echo "$multilinevar"I have sometext with value onein this and some value twoones value three and value four. Both solutions could accept multiline variable placeholders. [a] From the manual: ... the character sequence \<newline> is ignored, and \ must be used to quote the characters \, $, and `. ... | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/435482",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/284297/"
]
} |
435,579 | There is a long list of stats in /proc/net/netstat and /proc/net/snmp , both of which I think come from the net-tools project. Is there any official or unofficial documentation about these fields? Or even a good source of networking terminology that would help identify them? Some seem pretty clear: SyncookiesSentSyncookieFailedTCPTimeoutsTCPKeepalive Others less clear: ActiveOpensPassiveOpens Some fully cryptic to me: EmbryonicRstsRcvPruned ... many more ... Update: I've found definitions in the source but still wondering where these descriptions go. Are they compiled and published anywhere? | The /proc/net/* files are generated by the kernel: the entries are in net/ipv4/proc.c in the kernel source, and the entry list is found in include/uapi/linux/snmp.h . It grabs the values from various MIB databases that the kernel keeps. According to the snmp.h header file, the MIB definitions come from the following documents: RFC 1213 : MIB-II RFC 2011 (updates 1213): SNMPv2-MIB-IP RFC 2863 : Interfaces Group MIB RFC 2465 : IPv6 MIB: General Group draft-ietf-ipv6-rfc2011-update-10.txt : MIB for IP: IP Statistics Tables ActiveOpens is from RFC 1213 (page 47): tcpActiveOpens OBJECT-TYPE SYNTAX Counter ACCESS read-only STATUS mandatory DESCRIPTION "The number of times TCP connections have made a direct transition to the SYN-SENT state from the CLOSED state." ::= { tcp 5 } If you can't find the netstat entry in the RFCs, you'll have to search around.Quite a few of the items are not listed in detail in these documents. If you want more than the brief summary, you'll have to search the kernel source for some of the entries that you described. EmbryonicRsts is modified in net/ipv4/tcp_minisocks.c , at line 796 in Linux 4.16 at least , and appears to count invalid SYN resets on non-fast opened connections . This is probably not likely to occur unless you're in a SYN cookie flood. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/435579",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/227660/"
]
} |
435,621 | I'm trying to copy a file to a different name into the same directory using brace expansion. I'm using bash 4.4.18. Here's what I did: cp ~/some/dir/{my-file-to-rename.bin, new-name-of-file.bin} but I get this error: cp: cannot stat '/home/xyz/some/dir/{my-file-to-rename.bin,': No such file or directory Even a simple brace expansion like this gives me the same error: cp {my-file-to-rename.bin, new-name-of-file.bin} What am I doing wrong? | The brace expansion syntax accepts commas, but it does not accept a space after the comma. In many programming languages, spaces after commas are commonplace, but not here. In Bash, the presence of an unquoted space prevents brace expansion from being performed. Remove the space, and it will work: cp ~/some/dir/{my-file-to-rename.bin,new-name-of-file.bin} While not at all required, note that you can move the trailing .bin outside the braces: cp ~/some/dir/{my-file-to-rename,new-name-of-file}.bin If you want to test the effect of brace expansion, you can use echo or printf '%s ' , or printf with whatever format string you prefer, to do that. (Personally I just use echo for this, when I am in Bash, because Bash's echo builtin doesn't expand escape sequences by default, and is thus reasonably well suited to checking what command will actually run.) For example: ek@Io:~$ echo cp ~/some/dir/{my-file-to-rename,new-name-of-file}.bincp /home/ek/some/dir/my-file-to-rename.bin /home/ek/some/dir/new-name-of-file.bin | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/435621",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/266221/"
]
} |
435,696 | I have seen some screenshots where ranger uses syntax-highlighting in its preview window. So I was wondering how to enable this feature and found out that I need the scope.sh file at /home/user/.config/ranger/scope.sh , which will be generated with the command $ ranger --copy-config=scope . After generating it, however, the preview window completely disappears, although I inserted set use_preview_script true in rc.conf . Q: Does someone know how to enable syntax-highlighting (especially for C/C++) in the ranger preview window? | The scope.sh preview script relies on the highlight package for syntax colouring, so install it: $ sudo apt install highlight then reopen ranger . | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/435696",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/245933/"
]
} |
435,778 | As for the "Spectre" security vulnerability, "Retpoline" was introduced to be a solution to mitigate the risk. However, I've read a post that mentioned: If you build the kernel without CONFIG_RETPOLINE , you can't build modules with retpoline and then expect them to load — because the thunk symbols aren't exported. If you build the kernel with the retpoline though, you can successfully load modules which aren't built with retpoline. ( Source ) Is there an easy and common/generic/unified way to check if kernel is "Retpoline" enabled or not? I want to do this so that my installer can use the proper build of kernel module to be installed. | If you’re using mainline kernels, or most major distributions’ kernels, the best way to check for full retpoline support ( i.e. the kernel was configured with CONFIG_RETPOLINE , and was built with a retpoline-capable compiler) is to look for “Full generic retpoline” in /sys/devices/system/cpu/vulnerabilities/spectre_v2 . On my system: $ cat /sys/devices/system/cpu/vulnerabilities/spectre_v2Mitigation: Full generic retpoline, IBPB, IBRS_FW If you want more comprehensive tests, to detect retpolines on kernels without the spectre_v2 systree file, check out how spectre-meltdown-checker goes about things. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/435778",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/284565/"
]
} |
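For the installer use case in 435,778, the sysfs check from the answer can be scripted directly; the echo lines are placeholders for whatever the installer should actually do:
f=/sys/devices/system/cpu/vulnerabilities/spectre_v2
if [ -r "$f" ] && grep -q 'Full generic retpoline' "$f"; then
    echo "install retpoline build of the module"
else
    echo "install non-retpoline (or fallback) build"
fi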
435,786 | New to Linux - I'm using Debian 9. I want to run a script to install pwndbg , following the tutorial here . I'm using my root account to do so and want to install it to my root account's home directory. The output is as follows: root@My-Debian-PC:~/pwndbg# ./setup.sh+ PYTHON=+ INSTALLFLAGS=+ osx+ uname+ grep -i Darwin+ '[' '' == --user ']'+ PYTHON='sudo '+ linux+ uname+ grep -i Linux+ sudo apt-get update./setup.sh: line 24: sudo: command not found+ true+ sudo apt-get -y install gdb python-dev python3-dev python-pip python3-pip libglib2.0-dev libc6-dbg./setup.sh: line 25: sudo: command not foundroot@My-Debian-PC:~/pwndbg# Evidently the script is presumed to be run as an account with sudo privileges, hence giving the error because the root account can't use the sudo command. So is there a way of removing the errors? Should I simply edit the script and remove the word sudo from lines 24 and 25, or is it bad practice to do so? Or is it possible to add my root account to the sudo user group in case I come across the error with another script in the future? Or should I just run the script as-is, then afterwards run apt-get update and then apt-get -y install gdb python-dev python3-dev python-pip python3-pip libglib2.0-dev libc6-dbg ? Thanks! | The sudo: command not found message means the sudo package simply isn't installed, which is normal on a minimal Debian 9 system; it is not a restriction on the root account. Since you are already root you have two clean options. Either install sudo so the script runs unchanged ( apt-get update , then apt-get install sudo ), after which the script's sudo apt-get ... lines work as expected because root may use sudo, or edit the script and drop the sudo prefix from lines 24 and 25, which is harmless here because root already has every right sudo would grant. Installing sudo is usually the less intrusive choice, since other install scripts you download will make the same assumption. Running the two apt-get commands yourself and then re-running setup.sh also works, but you would have to repeat that for any later sudo line the script contains. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/435786",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/277651/"
]
} |
435,868 | Unix file systems usually have an inode table, and the number of entries in this table is usually fixed at the time the file system is created. This sometimes leads to people with plenty of disk space getting confusing error messages about no free space, and even after they figure out what the problem is, there is no easy solution for what to do about it. But it seems (to me) that it would be very desirable to avoid this whole mess by allocating inodes on demand, completely transparently to users and system administrators. If you're into cute hacks, you could even make the inode table itself be a file, and thus reuse the code you already have that finds free space on the disk. If you're lucky, you might even end up with the inodes near the files themselves, without explicitly trying to achieve this result. But nobody (that I know of) actually does this, so there's probably a catch that I'm missing. Any idea what it might be? | Say you did make the inode table a file; then the next question is... where do you store information about that file? You'd thus need "real" inodes and "extended" inodes, like an MS-DOS partition table. Given, you'd only need one (or maybe a few — e.g., to also have your journal be a file). But you'd actually have special cases, different code. Any corruption to that file would be disastrous, too. And consider that, before journaling, it was common for files that were being written e.g., when the power went out to be heavily damaged. Your file operations would have to be a lot more robust vs. power failure/crash/etc. than they were on, e.g., ext2. Traditional Unix filesystems found a simpler (and more robust) solution: put an inode block (or group of blocks) every X blocks. Then you find them by simple arithmetic. Of course, then it's not possible to add more (without restructuring the entire filesystem). And even if you lose/corrupt the inode block you were writing to when the power failed, that's only losing a few inodes — far better than a substantial portion of the filesystem. More modern designs use things like B-tree variants. Modern filesystems like btrfs, XFS, and ZFS do not suffer from inode limits. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/435868",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/284635/"
]
} |
435,964 | Bash manual says: A login shell is one whose first character of argument zero is‘ -’, or one invoked with the --login option. It defines a login shell in terms of the ways to start a login shell. Alternatively, can a login shell be defined in terms of its intended purpose? For example, can a login shell be defined as a shell which requires user to log in?For example, in an interactive nonlogin bash shell, when I run bash --login to create a bash login shell, I don't have to log in. Is it because my username and password are cached and reused implicitly, or simply it doesn't perform the job of login? If a login shell doesn't necessarily have to perform log in, what is its intended purpose that can characterize a login shell from a nonlogin shell? Thanks. | Login is handled by tools other than the shell, e.g. login itself, or your desktop manager (with the help of PAM and various other tools). The purpose of a login shell isn’t to handle login, it’s to behave appropriately as the first shell in a login session: mainly, that means processing startup files which should only be processed once per login session, and protecting the login session from unwanted interaction with certain system features (job suspension in particular). The specifics of a login shell, at least as implemented in Bash , are as follows: a login shell processes commands from /etc/profile , then the first file it finds among ~/.bash_profile , ~/.bash_login , and ~/.profile (unless it’s a non-interactive login shell started without the --login option); exiting a login shell runs logout instead of exit ; exiting a login shell hangs up all jobs; a login shell can’t be suspended; a login shell sets the HOME variable (except in POSIXly-correct mode); a login shell sets the login_shell shell option. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/435964",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/674/"
]
} |
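A quick check of whether the shell you are sitting in is a login shell in the sense described in 435,964:
shopt -q login_shell && echo "login shell" || echo "not a login shell"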
435,970 | I'm using Debian and today I typed: exec bash in my terminal and somehow the user@xxx changed to bash-4.4 . How do I get back the user@xxx ? I think it's better for me because for example it shows the path to my current folder etc... | exec bash -l This will replace the current shell session with a bash shell started as a login shell. A login shell will read your .bash_profile (or .bash_login or .profile , whichever it finds first) and other files where your prompt may be defined. With exec bash , you replaced the current shell session with an interactive shell. This will read .bashrc from your home directory. If you don't set your prompt there, then you will get the default bash prompt. Without the exec , you would have been able to just exit to get back to your old shell session. With the exec , the old session is now gone. You may also simply exit the shell and start a new one. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/435970",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/284734/"
]
} |
436,102 | I have a line (or many lines) of numbers that are delimited by an arbitrary character. What UNIX tools can I use to sort each line's items numerically, retaining the delimiter? Examples include: list of numbers; input: 10 50 23 42 ; sorted: 10 23 42 50 IP address; input: 10.1.200.42 ; sorted: 1.10.42.200 CSV; input: 1,100,330,42 ; sorted: 1,42,100,330 pipe-delimited; input: 400|500|404 ; sorted: 400|404|500 Since the delimiter is arbitrary, feel free to provide (or extend) an Answer using a single-character delimiter of your choosing. | With gawk ( GNU awk ) for the asort() function : gawk -v SEP='*' '{ i=0; split($0, arr, SEP); len=asort(arr); while ( ++i<=len ){ printf("%s%s", i>1?SEP:"", arr[i]) }; print "" }' infile (replace the * field separator in SEP='*' with your delimiter). For a single line you can also use the following command (it is generally better to avoid shell loops for text-processing purposes): tr '.' '\n' <<<"$aline" | sort -n | paste -sd'.' - (replace the dots . with your delimiter, and add -u to the sort command to remove duplicates). Notes: You may need to use the -g, --general-numeric-sort option of sort instead of -n, --numeric-sort to handle any class of numbers (integer, float, scientific, hexadecimal, etc.). $ aline='2e-18,6.01e-17,1.4,-4,0xB000,0xB001,23,-3.e+11'$ tr ',' '\n' <<<"$aline" |sort -g | paste -sd',' --3.e+11,-4,2e-18,6.01e-17,1.4,23,0xB000,0xB001 With awk no change is needed; it will still handle those. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/436102",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/117549/"
]
} |
436,114 | I've been using GNOME (in Arch Linux ) for a while. There is something that really bothers me (that I used to disable in Ubuntu) and it's the capability to: Maximize windows when dragging to the top of the screen Fill to the half the screen when dragging to the side(s) See Resizing Windows here . Is there any way to disable that in GNOME 3.28.0 ? The answers related with the change on gsettings set org.gnome.shell.extensions.classic-overrides edge-tiling to false don't work for me. | Open Terminal and run gsettings set org.gnome.mutter edge-tiling false You may also have to run gsettings set org.gnome.shell.overrides edge-tiling false | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/436114",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/280674/"
]
} |
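To inspect the current edge-tiling value before changing it, or to restore the GNOME default later (436,114):
gsettings get org.gnome.mutter edge-tiling
gsettings reset org.gnome.mutter edge-tiling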
436,133 | Lets say that I have a directory with the following files (among others): touch doc-mike.txt doc-jane.txt doc-susan.txt I can do something in a script with those files using a construct like: for fname in doc-*.txt; do echo input: ${fname}done But if I want to get the substring of the filename that matches the wildcard I have to jump through some ugly hoops: for fname in doc-*.txt; do wildcard=${fname#doc-} wildcard=${wildcard%.txt} echo input: ${fname} output: output-${wildcard}.resultsdone That works: input: doc-jane.txt output: output-jane.resultsinput: doc-mike.txt output: output-mike.resultsinput: doc-susan.txt output: output-susan.results but I feel like there has to be a better/easier way to get the substring that matches the "*" glob wildcard | The closest I can think of is BASH_REMATCH , since bash stores the results of a regex text in the that variable: $ for fname in doc-*.txt; do [[ $fname =~ doc-(.*).txt ]]; echo "input: ${fname} output: output-${BASH_REMATCH[1]}.results";doneinput: doc-jane.txt output: output-jane.resultsinput: doc-mike.txt output: output-mike.resultsinput: doc-susan.txt output: output-susan.results As (.*) is the first group in the regex, it's in BASH_REMATCH[1] . I think this is the behaviour you want, but with globs, I don't think bash makes that available in any way. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/436133",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/7084/"
]
} |
436,173 | I'm trying to return a 0 exit code if a command exit with the a code 143 (timeout, from the "timeout command"), 1 otherwise. Due to external constraint (CI script), I have to start the command and do the check in the if clause. Here is what I currently use : if timeout -t 10 *command* || [ $? = 143 ]then exit 0else exit 1fi At the moment, it always exit with a 0 code. | Your script is not doing what you want because, when timeout -t 10 command has an exit status of 0 -- which is the exit status that is regarded as true in shell scripting -- the command timeout -t 10 command || [ $? = 143 ] just completes with an exit status of 0 and does not cause the right-hand side of || to be run. Similarly, if you replaced || with && , then any nonzero exit status from the left-hand side would be the exit status of the whole command and the right-hand side would not be run. You should just run them as two separate commands by separating them with a ; or a newline. If necessary, you can still use them both as the if condition when you do that (see below). I am assuming that it is the exit status of timeout -t 10 command , and not command (if different), that you need. Otherwise it is unclear to me what you want; like ilkkachu , I am unfamiliar with a timeout command that accepts a -t option. Normally I would suggest that you do it this way, except that it sounds like your "external constraint" might prohibit it, since it doesn't use if at all: timeout -t 10 command test "$?" -eq 143exit The test / [ command returns an exit code of 0 to indicate true and 1 to indicate false, which is why you don't have to write separate branches with exit 0 and exit 1 . The exit builtin, when run with no arguments, causes the shell to return the exit status of the last command that ran. If you are already at the end of the script, then you can omit exit , because when control flows off the end of a script, it has the same effect as exit : timeout -t 10 command test "$?" -eq 143 (In either case, you can write [ "$?" -eq 143 ] instead of test "$?" -eq 143 if you like. See below for details on [ / test usage.) Although your description made it sound like you cannot use that code exactly, you should nonetheless be able to modify it to take the required form. You said you must not merely do the check but also start the command in the if clause. This seems to prohibit the readable and idiomatic approach suggested by Hauke Laging of running the command before the if . So if you can't do that, then you can include the command in the if condition as you were doing, but separate it from your test / [ command with a ; rather than a || : if timeout -t 10 command ; [ "$?" -eq 143 ]; then exit 0else exit 1fi (You can get away with omitting " " around $? . You can also get away with writing = instead of 143 , though it is confusing because = signifies textual rather than numeric comparison. You can write test "$?" -eq 143 instead of [ "$?" -eq 143 ] if you like.) The reason this works is that you are permitted to write an if condition that consists of multiple commands separated by semicolons or even newlines. That's why the shell grammar requires you to write then to indicate the end of the if condition. Therefore you do not have to attempt circuitous usage of the && or || operator when your goal is to run two commands as an if condition while causing if to itself test only the second. Note that I am only suggesting this because you said it had to all be in the if condition. 
I don't recommend scripting like that when you don't have to. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/436173",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/70379/"
]
} |
436,176 | I need to understand why sed is able to work for 1) and not for 2). Please do not post me any alternative solutions. I have already found them on this forum. I just need to understand the behavior of sed regarding point 1) and 2). 1) sed -i s/\\r//g file.txt On checking od -c file.txt, sed has successfully removed \r 2) sed -i s/\\n//g file.txt On checking od -c file.txt, sed has not removed \n My question here is to just understand why its not working for point-2. Please do not post any alternative solutions. Wish to understand the internals thats it! | From GNU sed manual - How sed Works sed operates by performing the following cycle on each line of input: first, sed reads one line from the input stream, removes any trailing newline , and places it in the pattern space. Then commands are executed; each command can have an address associated to it: addresses are a kind of condition code, and a command is only executed if the condition is verified before the command is to be executed. When the end of the script is reached, unless the -n option is in use, the contents of pattern space are printed out to the output stream, adding back the trailing newline if it was removed . Then the next cycle starts for the next input line. From POSIX spec (thanks steeldriver for the link) In default operation, sed cyclically shall append a line of input, less its terminating newline , into the pattern space. Normally the pattern space will be empty, unless a D command terminated the last cycle. The sed utility shall then apply in sequence all commands whose addresses select that pattern space, and at the end of the script copy the pattern space to standard output (except when -n is specified) and delete the pattern space. Whenever the pattern space is written to standard output or a named file, sed shall immediately follow it with a newline . tl;dr the input record separator (which is newline by default) is removed before executing the commands and then added back while printing the record There are, however, cases where the newline character can be manipulated. Some examples given below: $ # this would still not allow newline of second line to be manipulated$ seq 5 | sed 'N; s/\n/ : /'1 : 23 : 45$ # here ASCII NUL is input record separator, so newline can be freely changed$ seq 5 | sed -z 's/\n/ : /g'1 : 2 : 3 : 4 : 5 : $ # default newline separator, so NUL character can be changed$ printf 'foo\0baz\0xyz\0' | sed 's/\x0/-/g'foo-baz-xyz-$ # NUL character is separator, so it cannot be changed now$ printf 'foo\0baz\0xyz\0' | sed -z 's/\x0/-/g' | cat -Afoo^@baz^@xyz^@ | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/436176",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/284513/"
]
} |
436,211 | At the moment I have the next paths langs="EN GE"dir_EN=/xxdir_GE=/zz As you can see the variable $langs has all possible languages in a single array. I would like to save all those paths in a multilingual (dir_ML) array using a loop which checks what the languages are, and then save the corresponding path. Here is what I have so far for i in $(seq 0 1); do #I used seq because I didn't know how to specify the length of the variable $langs dir_ML[$i]=dir_${langs[$i]}done The output I am looking for is dir_ML[0]=/xx dir_ML[1]=/zz I hope you can understand what I am trying to do!Thanks in advance | langs is not an array but a string in your code. To make it an array and use it: langs=( EN GE )dir_EN=/xxdir_GE=/zzdir_ml=()for i in "${langs[@]}"; do declare -n p="dir_$i" dir_ml+=( "$p" )doneprintf 'dir_ml = "%s"\n' "${dir_ml[@]}" In the above loop, $i will take the values EN and GE in turn. This also introduces a name reference variable p . When the value of p is accessed, the string that was assigned to the variable when it was declared will be interpreted as a variable name and that variable's value is returned. The output of the above will be dir_ml = "/xx"dir_ml = "/zz" To use name references in bash , you will need bash version 4.3 or later. Another (interesting but inferior) possibility: dir_EN=/xxdir_GE=/zz# (can't set dir_ml=() here as its name would be picked up by the loop)unset dir_mlfor i in "${!dir_@}"; do dir_ml+=( "${!i}" )doneprintf 'dir_ml = "%s"\n' "${dir_ml[@]}" Here, $i will take the values of the variable names dir_EN and dir_GE in turn. We then use variable indirection with ${!i} to get the value of that variable. This variation does not need the langs array, but it instead assumes that no other variable is named dir_ -something (which may be considered a bit fragile as a user may easily inject variables with names like these into the script's environment). The output is the same for this code as for the code above. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/436211",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/272793/"
]
} |
436,214 | When migrating data (Debian OS) from one server to another, is SFTP (FTP with SSH) fully encrypted? The file data as well as the file names? | SFTP is not really FTP run within SSH; it is a separate file-transfer protocol that runs entirely inside an established SSH session, so everything it sends, file contents and file names alike, is encrypted. (See the note after this entry for a way to watch the negotiation yourself.) | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/436214",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/284916/"
]
} |
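If you want to see the encryption for yourself (436,214), run the client verbosely; the host is a placeholder, and the exact debug wording varies between OpenSSH versions:
sftp -v user@server.example
# look for the debug1 lines mentioning "kex" and "cipher" during connection setup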
436,289 | I use i3 as my primary user interface. To start applications I use Rofi which I execute as: rofi -show drun -drun-icon-theme MacBuntu-Remix where MacBuntu-Remix is a theme I have installed in ~/.icons. On Gentoo Linux this command causes icons to appear next to application names in Rofi, but under Debian this exact same command, with the exact same ~/.icons folder, produces a Rofi menu without these icons next to applications. Anyone know why? Did the Debian packagers decide not to compile rofi with this option, or? To fix this I've tried switching from using Rofi in the Stretch repositories to using Rofi in the unstable repository, in the hope it was simply an older version of Rofi that was causing the problem, this didn't fix it, however. I have also tried switching icon themes to something I was certain was installed properly (although I was confident MacBuntu-Remix was it appeared fine under GNOME), Adwaita. The icons didn't appear then either. Here's a screenshot of Rofi running under Debian 9, started with the above MacBuntu-Remix theme command: | I wasn't seeing icons on Debian Sid either. Have you tried the -show-icons option? rofi -modi drun,run -show drun -font "DejaVu Sans 10" -show-icons | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/436289",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/27613/"
]
} |
436,305 | I wanted to simply calculate the length of a string (that is hash value). So, I opened terminal and did this: $ apropos length that returned me with a bunch of commands/functions having (3) or (3ssl) appended at the end of them. Now man man gives us information about what these section numbers mean. 3 Library calls (functions within program libraries) Out of curiosity, I just tried with all these commands (in hope at least one would work) strcspn (3) - get length of a prefix substringstrlen (3) - calculate the length of a stringstrnlen (3) - determine the length of a fixed-size stringstrspn (3) - get length of a prefix substringwcslen (3) - determine the length of a wide-character stringwcsnlen (3) - determine the length of a fixed-size wide-character string and got nothing but same error for every command $ strnlen HelloWorld $ strnlen: command not found Well, I know how to find length of string in shell using wc -m , expr length and other workarounds. But, I have 2 questions here : How to use any library calls (3) inside the shell? How to calculate string length using just library calls and not other commands? NOTE : Question focuses in general library calls and their usage in the shell. That makes first question more important to answer. | The apropos command is useful in many ways, but it does give you a lot of "junk" too. Most of the things that you list are C library routines (this is what section 3 of the manual is for), which you can not use directly from the shell. To use them, you would have to write C program that calls them. This falls outside of the range of topics covered by this particular site (it would be on topic at StackOverflow ). These, thus, are the answers to your questions: You can't, they are C library routines. You can't, unless you write a C program. I know you know this, but for the sake of completeness: In the shell, if you have a string in a variable string , you may do string='hello world'printf 'Length of string "%s" is %d\n' "$string" "${#string}" This will print Length of string "hello world" is 11 in the terminal where the 11 comes from ${#string} which expands to the length of the string in $string . Internally, the shell may well be using one of the library calls that you listed to do its length calculation. This is the most efficient way to get the length of a string that is stored in a shell variable, in the shell. Note too that ${#string} is a POSIX shell parameter expansion , it is therefore portable between all shells that claim any degree of POSIX compliance. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/436305",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/259047/"
]
} |
436,314 | I can use ls -l to get the logical size of a file, but is there a way to get the physical size of a file? | ls -l will give you the apparent size of the file, which is the number of bytes a program would read if it read the file from start to finish. du would give you the size of the file "on disk". By default, du gives you the size of the file in number of disk blocks, but you may use -h to get a human readable unit instead. See also the manual for du on your system. Note that with GNU coreutil's du (which is probably what you have on Linux), using -b to get bytes implies the --apparent-size option. This is not what you want to use to get number of bytes actually used on disk. Instead, use --block-size=1 or -B 1 . With GNU ls , you may also do ls -s --block-size=1 on the file. This will give the same number as du -B 1 for the file. Example: $ ls -l file-rw-r--r-- 1 myself wheel 536870912 Apr 8 11:44 file$ ls -lh file-rw-r--r-- 1 myself wheel 512M Apr 8 11:44 file$ du -h file24K file$ du -B 1 file24576 file$ ls -s --block-size=1 file24576 file This means that this is a 512 MB file that takes about 24 KB on disk. It is a sparse file (mostly zeros that are not actually written to disk but represented as logical "holes" in the file). Sparse files are common when working with pre-allocated large files, e.g. disk images for virtual machines or swap files etc. Creating a sparse file is quick, while filling it with zeros is slow (and unnecessary). See also the manual for fallocate on your Linux system. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/436314",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/285003/"
]
} |
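To reproduce the sparse-file example from the answer to 436,314 on your own machine (file name arbitrary; the exact on-disk size depends on the filesystem):
truncate -s 512M file
ls -lh file    # apparent size: 512M
du -h file     # size on disk: close to 0, since nothing has been written yet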
436,443 | I'm wondering if there's a way to turn off the terminal bell for terminal applications such as man and less , e.g. when you're already at the top of the file/man page and press "k" to attempt to scroll up. Normally, I'd just turn off the bell on my terminal emulator altogether, but the popular xset b off command doesn't seem to be working for my setup: I'm running XTerm from Ubuntu 16.04 (specifically, in WSL) over X11 forwarding to Xming. So I'd also appreciate any notes on how to turn off XTerm's bell, too, if that's available. I'm aware of how to turn off readline 's bell by putting set bell-style none in ~/.inputrc , but unfortunately that only helps for input (e.g. multiple available tab completions), not for when scrolling man/less pages. I'm also aware of the -Q command line arg to less which turns off the bell, but I guess I'm hoping that there's a more general setting/command that will apply to both man and less (and possibly others). I figure if I can't turn off XTerm's bell altogether, I'll try and learn how to turn off each application's bell, one by one, until I get at least all of the annoying ones. | man uses your default pager, which on Ubuntu (and most other systems) is less . You can change this, but you would likely know you did. That's why the interface in which you page through man 's formatted output looks and feels like less : it is. After man formats the manpage, it uses less to display it. So what you probably want is to make less always behave as though the -Q option had been passed to it, including when it is used by man and other programs. When less runs, it examines the LESS environment variable for options to use in addition to those passed to it in command-line arguments. So you can put this in one of the scripts that gets sourced when you open a WSL command prompt: export LESS=-Q Or you might prefer this, which preserves any options already present in the LESS variable. Usually this is unnecessary because that variable is not usually defined already anyway, but this still works even if it isn't: export LESS="$LESS -Q" Most Ubuntu users will want to set this and and other environment variables in their ~/.profile file . (There is also a way with ~/.pam_environment that some people prefer, which uses a different syntax .) This is what I would recommend for you, too, if the shell WSL gives you is a login shell , which on recent builds (or if you have configured it to be) it should be . You can check this by running shopt login_shell in the shell provided when you open a WSL command prompt window. If it's not a login shell and you don't want to add -l or --login to the Windows shortcut, then put one of those export commands in .bashrc instead of .profile . | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/436443",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/153578/"
]
} |
436,460 | Every time I detach from all the tmux sessions in the terminal and later try to get back, I get this: [me@CentOS7 ~]$ tmux lserror connecting to /tmp/tmux-1000/default (No such file or directory) It seems the /tmp directory gets cleared in the meantime. It doesn't happen straight away and it's hard to tell when exactly, but usually after a couple of days of running, at which point I know I can't detach without losing the session. Does anyone know how to retain the session? Prevent CentOS from removing the tmux server somehow? (I assume it's CentOS as it never happened to me on Debian-based distros.) | You can attempt to send a SIGUSR1 to the process in order for the tmux server to recreate the socket: pkill -USR1 tmux Source (A note on keeping the socket from being removed in the first place follows this entry.) | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/436460",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/285123/"
]
} |
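To address the second half of question 436,460, keeping CentOS 7 from deleting the socket at all, two sketches; treat both as assumptions to verify rather than tested fixes:
# 1) tell systemd-tmpfiles to skip the tmux socket directories during /tmp cleanup
#    (put this line in /etc/tmpfiles.d/tmux.conf)
x /tmp/tmux-*
# 2) or keep the socket outside /tmp entirely; set this before the tmux server starts,
#    e.g. in ~/.bash_profile
export TMUX_TMPDIR="$HOME/.tmux-sockets"
mkdir -p "$TMUX_TMPDIR"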
436,505 | Does anybody know why bash still has history substitution enabled by default? My .bashrc has included set +H for many many years but some other people are still getting bitten by this feature. Given that pretty much everybody are using terminals with copy-paste features and bash compiled with readline library and history substitution is enabled by default only in interactive shells, is there really any reason to have the feature at all? None of the existing scripts would be broken even if this was disabled by default for all shells. Try this if you do not know why history substitution is broken: $ set +H # disable feature history substitution$ echo "WTF???!?!!?"WTF???!?!!?$ set -H # enable feature history substitution$ echo "WTF???!?!!?"echo WTF???echo WTF???!?!!?WTF???echo WTF???!?!!? (Clearly the feature has major issues if it's disabled by default for all scripting and a feature exists to verify the results before executing: shopt -s histverify .) See also: Why does the exclamation mark `!` sometimes upset bash? Why does Bash history not record this command? How to echo a bang! Bash: History expansion inside single quotes after a double quote inside the same line | If you are already familiar with bash , then dealing with history substitution patterns is not much more likely to bite you than handling any other characters that are special to this shell. However, if one is unfamiliar with the shell or just never used its history substitution features, it will obviously be a surprise when seemingly innocuous unquoted or double-quoted strings triggers it. In an interactive shell with history substitutions enabled, the ! character is special in pretty much the same way as the $ character is special, i.e. everywhere unless escaped with \ or in single-quoted strings. As opposed to $ through, history substitutions do not expand in here-documents, and since they are line-oriented, they additionally will happen on lines where the substitution falls within an unquoted context or a double quoted context (in that line when scanned separately). See this bug report for more info . History substitution is disabled in non-interactive shells (scripts) because the shell's command history capability is not needed there, not because the feature has "major issues". In a script, saving every command to $HISTFILE makes no sense, and history substitution likewise is not something you'd want to rely on in a script. Whether or not it should be enabled by default or not in interactive shells can be debated (though I'm not entirely convinced that a debate here would matter much to the bash developers). You seem to think that most bash users are having problems with history expansions, but neither one of you and me know how common it is to use them. Unix shells allow one to modify the shell's behaviour to fit one's personal needs and taste. If you want to turn off history substitutions for all your interactive shells, continue doing what you are doing with using set +H in your ~/.bashrc file, or lobby the bash developers to change the default (which, I believe, would upset and confuse more people than it would help). | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/436505",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/20336/"
]
} |
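As a hedged illustration of the workaround discussed above (the option names are real bash options; wrapping them in an interactivity check is simply one way to do it), a ~/.bashrc fragment could look like:

    case $- in
      *i*)
        set +H                 # turn off ! history expansion in interactive shells
        # shopt -s histverify  # alternative: keep expansion, but review it before it runs
        ;;
    esac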
436,516 | Trying get awk to look into a file and check if a column has a value. If it has a value of "x" then print "x" into an email (via "| mail -s " ). If it does not match "x" then print "no value" but still send mail. Trying something along the lines of:- awk -F ''{if($3 != 0) {a = ($3); print $0, a;} else if ($3==0) print "No updates"}' file.in | mail...etc | awk '$3 == "x" { print $3 } $3 != "x" { print "no value" }' file.in | mail ... or awk '{ print ($3 == "x" ? $3 : "no value") }' file.in | mail ... or awk '$3 != "x" { $3 = "no value" } { print $3 }' file.in | mail ... Given the file 1 2 32 3 x4 5 x the three awk programs will produce the output no valuexx | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/436516",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/281720/"
]
} |
436,520 | I have a text file that contains some IPs. I want to copy the contents of this text file into /etc/ansible/hosts without showing the output on the terminal (as shown in example 2). Note: the root user is disabled. If I use the following: sudo cat myfile.txt >> /etc/ansible/host It will not work, since sudo only elevates cat and does not affect the redirection (expected). cat myfile.txt | sudo tee --append /etc/ansible/hosts It shows the output in the terminal and then appends it to /etc/ansible/hosts A.A.A.A B.B.B.B C.C.C.C Adding /dev/null interrupts the result (nothing will be copied to /etc/ansible/hosts ). | sudo tee -a /etc/ansible/hosts <myfile.txt >/dev/null Or, if you want to use cat : cat myfile.txt | sudo tee -a /etc/ansible/hosts >/dev/null Either of these should work. It is unclear how you "added" /dev/null when you tried, but this redirects the standard output of tee to /dev/null . | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/436520",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/111780/"
]
} |
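One more common pattern for the same problem, added here as a supplement (file names taken from the question): run the whole command, redirection included, inside a root shell so the >> is performed with elevated privileges:

    sudo sh -c 'cat myfile.txt >> /etc/ansible/hosts'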
436,521 | I'm searching for a specific REGEX, 3 days I'm trying and trying but not founding the right answer. I need to delete specific parts of an xml feed, I tried with sed, awk and it's not working right. What I have : ...Something before<description><![CDATA[Des chercheurs de l'université de Columbia à New York ont mis au point un nouveau moyen de cacher un message dans un texte sans en altérer le sens et sans dépendre d'un format de fichier particulier. Nommée FontCode, cette idée est <a href="https://korben.info/cacher-des-informations-dans-un-texte-grace-a-des-modifications-sur-les-caracteres.html">Passage a la news suivante</a>]]></description>... Other news What I need : ...Something before<description><![CDATA[Des chercheurs de l'université de Columbia à New York ont mis au point un nouveau moyen de cacher un message dans un texte sans en altérer le sens et sans dépendre d'un format de fichier particulier.<a href="https://korben.info/cacher-des-informations-dans-un-texte-grace-a-des-modifications-sur-les-caracteres.html">Passage a la news suivante</a>]]></description>... Other news Select the multiples instances between "<\description></description> Remove the last sentence which is not complete (before a href, "Nommée FontCode, cette idée est ") Thank you for helping ! ;) | sudo tee -a /etc/ansible/hosts <myfile.txt >/dev/null Or, if you want to use cat : cat myfile.txt | sudo tee -a /etc/ansible/hosts >/dev/null Either of these should work. It is unclear how you "added" /dev/null when you tried, but this redirects the standard output of tee to /dev/null . | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/436521",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/268987/"
]
} |
436,603 | env and printenv are both external commands i.e. executable files, not bash builtins. The problem with them is that some environment variables (such as _ , and I also wonder if there are more) which they output don't belong to the shell which invokes them, see here . | declare -x or will list all variables marked for export , as will export : $ declare -xdeclare -x ALTERNATE_EDITOR=""declare -x COLORFGBG="7;0"declare -x COMMAND_MODE="unix2003"...declare -x VISUAL="gvim" It will not list any local variables that are not exported, but will include variables inherited from the shell's outer environment. declare -x -F -p will omit any exported functions. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/436603",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/674/"
]
} |
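A small bash-specific addition to the answer above: compgen can list just the names the shell has marked for export, which is handy when the full declare -x output is too noisy, and export -p is the portable way to list them with values:

    compgen -A export | sort   # exported names only, no values (bash builtin)
    export -p                  # POSIX way to list exported variables with their values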
436,612 | I've been trying to set up a shared folder to store some files for a group of people, for example /home/project . At the moment, I've done the following: Created a group, let's call it "members", and added two users, user1 and user2. When I run cat /etc/group I get the following return: members:x:1005:user1,user2 Which at least seems to be correct. Then I create the directory and addign permissions following, to be honest, some internet guides. mkdir /home/projectsudo chown -R root.members /home/projectsudo chmod 775 /home/projectsudo chmod 2775 /home/project All of that seems to go fine, but when I create a test text file as user2, user1 can read that file, but doesn't have write permissions. What am I doing wrong? | declare -x or will list all variables marked for export , as will export : $ declare -xdeclare -x ALTERNATE_EDITOR=""declare -x COLORFGBG="7;0"declare -x COMMAND_MODE="unix2003"...declare -x VISUAL="gvim" It will not list any local variables that are not exported, but will include variables inherited from the shell's outer environment. declare -x -F -p will omit any exported functions. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/436612",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/11028/"
]
} |
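A hedged note on the directory setup described in the question above: the setgid bit only controls which group owns newly created files; it is the creating user's umask (commonly 022) that strips group write permission. Two usual remedies, written with the group name and path from the question:

    umask 002                                          # per-user/session: keep group write on new files
    sudo setfacl -R -m g:members:rwX /home/project     # or use ACLs on the directory instead
    sudo setfacl -R -d -m g:members:rwX /home/project  # default ACL: inherited regardless of umask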
436,615 | Bash Manual says (manpage, my emphasis): When Bash invokes an external command, the variable $_ is set to the full pathname of the command and passed to that command in its environment. And ( Special Parameters ): _ ( $_ , an underscore.) At shell startup, set to the absolute pathname used to invoke the shell or shell script being executed as passed in the environment or argument list. Subsequently, expands to the last argument to the previous command, after expansion. Also set to the full pathname used to invoke each command executed and placed in the environment exported to that command. When checking mail, this parameter holds the name of the mail file. In a bash shell, I run: $ bash$ export | grep '_=' According to the manual, _ should be an environment variable ofthe new bash shell. export is supposed to output all theenvironment variables of the new bash shell, but it doesn't output _ . So I wonder whether _ is an environment variable of the newbash shell? Actually in any bash shell, the same thing happens $ export | grep '_=' doesn't output anything. So I wonder if _ is ever an environmentvariable of a bash shell? For comparison: $ dash$ export | grep '_=' export _='/bin/dash' My post is inspired by Mike's comment and Stephane's reply . | Yes, _ is an environment variable of the new Bash shell; you can see that by running tr '\0' '\n' < /proc/$$/environ | grep _= inside the shell: that shows the contents of the shell’s initial environment. You won’t see it in the first shell because there wasn’t a previous shell to set it before it started. Expanding $_ inside Bash refers to the _ special parameter, which expands to the last argument of the previous command. (Internally Bash handles this by using a _ shell variable, which is updated every time a command is parsed, but that’s really an implementation detail. It is “unexported” every time a command is parsed. ) export doesn’t show _ because it isn’t a variable which is marked as exported; you can however see it in the output of set . In the first example, the new Bash shell parses and executes the commands in its startup files, so when running export | grep '_=' , _ has already been overwritten and marked as not exported. In the dash example, it doesn't seem to execute any start-up file, so you’re seeing the variable as an environment variable that was set by Bash before running dash . | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/436615",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/674/"
]
} |
436,666 | I made a systemd service that launches a simple .sh to read data from a serial connection. If I run the service manually after booting it works fine, but when I try to run it automatically at boot it fails because the .sh can't yet read from ttyUSB0 ( awk: fatal: cannot open file /dev/ttyUSB0' for reading (No such file or directory ). Is there any way to make the service wait for ttyUSB0 and then run the .sh? I tried something like after=ttyUSB0 but that doesn't work. [Unit] Description=Serial logger[Service] ExecStart=/serial_script.sh[Install] WantedBy=default.target | Consider using an udev rule instead of a systemd service to start your script. Never mind, since starting long-running processes from udev is not recommended, and newer udev versions may actively try and prevent it by having a strict time limit for udev transactions and processes spawned by them. But if you need to do it in udev (e.g. in an old system that has an old version of systemd ), something like this in /etc/udev/rules.d/99-serial-logger.rules should work: SUBSYSTEM=="tty", ACTION=="add", KERNEL=="ttyUSB0", RUN+="/serial_script.sh" When implementing this as a systemd service (the current recommended way), remove the WantedBy=default.target line from your service and make your udev rule like this: SUBSYSTEM=="tty", KERNEL=="ttyUSB0", TAG+="systemd", ENV{SYSTEMD_WANTS}+="your-serial-logger.service" As a result, udev should tell systemd to start your service when the device appears, and to stop it if/when the device is removed. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/436666",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/274656/"
]
} |
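A related sketch, not taken from the answer above: systemd can also bind the service straight to the device unit udev creates for /dev/ttyUSB0 (dev-ttyUSB0.device is the systemd-escaped name of that path), which avoids a custom udev rule entirely:

    [Unit]
    Description=Serial logger
    BindsTo=dev-ttyUSB0.device
    After=dev-ttyUSB0.device

    [Service]
    ExecStart=/serial_script.sh

    [Install]
    WantedBy=dev-ttyUSB0.device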
436,667 | So there's an access log entry file named access_log and I'm supposed to find all of the unique files that were accessed on the web server. access_log is formatted like this, this is just an excerpt: 66.249.75.4 - - [14/Dec/2015:08:25:18 -0600] "GET /robots.txt HTTP/1.1" 404 1012 "-" "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)"66.249.75.4 - - [14/Dec/2015:08:25:18 -0600] "GET /~robert/class2.cgi HTTP/1.1" 404 1012 "-" "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)"66.249.75.4 - - [14/Dec/2015:08:30:19 -0600] "GET /~robert/class3.cgi HTTP/1.1" 404 1012 "-" "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)"202.46.61.93 - - [14/Dec/2015:09:07:34 -0600] "GET / HTTP/1.1" 200 5208 "-" "Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 6.0)" The files, for example on the first one "robots.txt", are either after the word GET, HEAD, or POST. I've tried using the cut command using " as the delimeter which hasn't worked. I literally have no idea how to separate the fields on a file like this, so I can compare them. If anyone could point me in the right direction, I'd really appreciate it. Edit: Figured it out, you were right @MichaelHomer. My syntax was off so that's why cut wasn't working for me. I used space as the delimeter and it worked. | Consider using an udev rule instead of a systemd service to start your script. Never mind, since starting long-running processes from udev is not recommended, and newer udev versions may actively try and prevent it by having a strict time limit for udev transactions and processes spawned by them. But if you need to do it in udev (e.g. in an old system that has an old version of systemd ), something like this in /etc/udev/rules.d/99-serial-logger.rules should work: SUBSYSTEM=="tty", ACTION=="add", KERNEL=="ttyUSB0", RUN+="/serial_script.sh" When implementing this as a systemd service (the current recommended way), remove the WantedBy=default.target line from your service and make your udev rule like this: SUBSYSTEM=="tty", KERNEL=="ttyUSB0", TAG+="systemd", ENV{SYSTEMD_WANTS}+="your-serial-logger.service" As a result, udev should tell systemd to start your service when the device appears, and to stop it if/when the device is removed. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/436667",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/285295/"
]
} |
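For the log format shown in the question above, a hedged one-liner (the field number assumes the common/combined log format, where the requested path is the 7th whitespace-separated field):

    awk '{print $7}' access_log | sort -u                      # unique files that were accessed
    awk '{print $7}' access_log | sort | uniq -c | sort -rn    # same, with hit counts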
436,802 | I've installed postgres 9.5 on ubuntu 16.04 which creates postgresql.service and [email protected] . I understand that the postgresql.service spawns all enabled instances of postgres, and I can call a specific instance with [email protected] but [email protected] is a template file, and I can't see any place where the instance string (represented by %i or %I in the template) would be passed by postgresql.service . How does postgresql.service know which instances are enabled, and how does it pass them to the systemd template file? | To answer this, start with checking the contents of the two files in question. If you aren't sure where to find them, you can search the package contents for systemd files: dpkg -L postgresql-common| grep systemd From looking at the postgresql.service file you can see you not doing much at all: # systemd service for managing all PostgreSQL clusters on the system. This# service is actually a systemd target, but we are using a service since# targets cannot be reloaded.[Unit]Description=PostgreSQL RDBMS[Service]Type=oneshotExecStart=/bin/trueExecReload=/bin/trueRemainAfterExit=on[Install]WantedBy=multi-user.target From the comments, we learn that the file is being used as a systemd "target". Moving on to the template file: # systemd service template for PostgreSQL clusters. The actual instances will# be called "postgresql@version-cluster", e.g. "[email protected]". The# variable %i expands to "version-cluster", %I expands to "version/cluster".# (%I breaks for cluster names containing dashes.)[Unit]Description=PostgreSQL Cluster %iConditionPathExists=/etc/postgresql/%I/postgresql.confPartOf=postgresql.serviceReloadPropagatedFrom=postgresql.serviceBefore=postgresql.service[Service]Type=forking# @: use "postgresql@%i" as process nameExecStart=@/usr/bin/pg_ctlcluster postgresql@%i --skip-systemctl-redirect %i startExecStop=/usr/bin/pg_ctlcluster --skip-systemctl-redirect -m fast %i stopExecReload=/usr/bin/pg_ctlcluster --skip-systemctl-redirect %i reloadPIDFile=/var/run/postgresql/%i.pidSyslogIdentifier=postgresql@%i# prevent OOM killer from choosing the postmaster (individual backends will# reset the score to 0)OOMScoreAdjust=-900# restarting automatically will prevent "pg_ctlcluster ... stop" from working,# so we disable it here. Also, the postmaster will restart by itself on most# problems anyway, so it is questionable if one wants to enable external# automatic restarts.#Restart=on-failure# (This should make pg_ctlcluster stop work, but doesn't:)#RestartPreventExitStatus=SIGINT SIGTERM[Install]WantedBy=multi-user.target The interesting directives are: PartOf=postgresql.serviceReloadPropagatedFrom=postgresql.service If you aren't sure where find the docs for a systemd directive, you can check: man systemd.directives . From there, we find both these directives in man systemd.unit . Your biggest clue comes when you enable the service: sudo systemctl enable [email protected] symlink /etc/systemd/system/multi-user.target.wants/[email protected] → /lib/systemd/system/[email protected]. Putting it all together: The symlink is how systemd knows to boot up PostgreSQL 9.6 when the server starts. The PartOf= and ReloadPropagatedFrom= directives handle making sure that stop , start , restart and reload on the postgresql service end up applying to all the related installed PostgreSQL instances. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/436802",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/285428/"
]
} |
436,849 | I use Ubuntu 16.04 with Nginx and Bash. I know that it's not possible to directly pipe data into zip . For example, if you host websites on Apache/Nginx webserver, this command set would fail after filling in the password: drt="/var/www/html"mysqldump -u root -p --all-databases | zip "$drt/db-$date.zip" What will be your workaround if you really desire the end file to be a zip file? | If you really want to use zip , you can use Jeff Schaller’s trick : drt="/var/www/html"mysqldump -u root -p --all-databases | zip "$drt/db-$date.zip" - This will create a ZIP file containing a file named - whose contents are the database dump. This is mentioned in the zip manpage: zip also accepts a single dash ("-") as the name of a file to be compressed, in which case it will read the file from standard input, allowing zip to take input from another program. For example: tar cf - . | zip backup - You could also use /dev/stdin instead: mysqldump -u root -p --all-databases | zip -FI "$drt/db-$date.zip" /dev/stdin This would result in an archive containing a file named dev/stdin which might be harder to handle properly. - is a common short-hand to tell programs to use standard input or output; it’s not something that the shell handles, it has to be supported by each individual program. In both cases you’d probably want to use funzip to extract the data; it extracts the first member of an archive to its standard output. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/436849",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/273994/"
]
} |
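Two short follow-ups to the answer, reusing the variable names from the question: funzip (mentioned in the answer) pulls the single streamed member back out, and gzip is the usual alternative when the container format is negotiable:

    funzip "$drt/db-$date.zip" > restore.sql                               # recover the dump from the zip
    mysqldump -u root -p --all-databases | gzip > "$drt/db-$date.sql.gz"   # if a .gz is acceptable instead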
436,864 | How does a FIFO (named pipe) differs from a regular pipe (|)? As I understand from Wikipedia , unlike a regular pipe, a FIFO pipe "keeps living" after the process has ended and can be deleted sometime afterwards. But if f the process is based on a shell command containing a pipe ( cat x | grep y ), we could "keep it alive after the process" if we store it in a variable or a file, isn't it then a FIFO? Also, a regular pipe also has the first stdout it gets, as stdin for another command , so isn't it also a kind of first in first out pipe as well? | "Named pipe" is actually a very accurate name for what it is — it is just like a regular pipe, except that it has a name (on a filesystem). A pipe — the regular, un-named ("anonymous") one used in some-command | grep pattern is a special kind of file. And I do mean file, you read and write to it just like you do every other file. Grep doesn't really care¹ that it's reading from a pipe instead of a terminal³ or an ordinary file. Technically, what goes on behind the scenes is that stdin, stdout, and stderr are three open files (file descriptors) passed to every command run. File descriptors (which are used in every syscall to read/write/etc. files) are just numbers; stdin, stdout, and stderr are file descriptors 0, 1, and 2. So when your shell sets up some-command | grep what it does is something this: Asks the kernel for an anonymous pipe. There is no name, so this can't be done with open like for a normal file — instead it's done with pipe or pipe2 , which returns two file descriptors.⁴ Forks off a child process ( fork() creates a copy of the parent process; both sides of the pipe are open here), copies the write-side of the pipe to fd 1 (stdout). The kernel has a syscall to copy around file descriptor numbers; it's dup2() or dup3() . It then closes the read side and other copy of the write side. Finally, it uses execve to execute some-command . Since the pipe is fd 1, stdout of some-command is the pipe. Forks of another child process. This time, it duplicates the read side of the pipe to fd 0 (stdin), and executes grep . So grep will read from the pipe as stdin. Then it waits for both of those children to exit. At this point, the kernel notices that the pipe isn't open any more, and garbage collects it. That's what actually destroys the pipe. A named pipe just gives that anonymous pipe a name by putting it in the filesystem. So now any process, at any point in the future, can obtain a file descriptor for the pipe by using an ordinary open syscall. Conceptually, the pipe won't be destroyed until both all readers/writers have closed it and it's unlink ed from the filesystem.² This, by the way, is how files in general work on Unix. unlink (the syscall behind rm ) just removes one of the names for the file; only when all names are removed and nothing has the file open will it actually be deleted. A couple of answers here explore this: Why do hard links seem to take the same space as the originals? How can a log program continue to log to a deleted file? What is Linux doing differently that allows me to remove/replace files where Windows would complain the file is currently in use? FOOTNOTES Technically this probably isn't true — it's probably possible to do some optimizations by knowing, and actual grep implementations have often been heavily optimized. But conceptually it doesn't care (and indeed a straightforward implementation of grep wouldn't). 
Of course the kernel doesn't actually keep all the data structures around in memory forever, but rather it recreates them, transparently, whenever the first program opens the named pipe (and then keeps them as long as its open). So it's as if they existed as long as the name does. Terminal isn't a common place for grep to read from, but it's the default stdin when you don't specify another. So if you type just grep pattern in to your shell, grep will be reading from the terminal. The only use for this that comes to mind is if you're about to paste something to the terminal. On Linux, anonymous pipes actually are created on a special filesystem, pipefs. See How pipes work in Linux for details. Note that this is an internal implementation detail of Linux. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/436864",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/273994/"
]
} |
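A minimal demonstration of the point above (the path and the pattern are arbitrary examples):

    mkfifo /tmp/demo.fifo                               # a pipe with a name on the filesystem
    grep pattern < /tmp/demo.fifo &                     # the reader blocks until a writer opens the pipe
    echo "a line with pattern in it" > /tmp/demo.fifo   # the writer; grep prints the match
    rm /tmp/demo.fifo                                   # unlink the name; the pipe is gone once both ends close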
436,978 | I have a JSON fragment. The following does not work: VALUE=<<PERSON{ "type": "account", "customer_id": "1234", "customer_email": "[email protected]" }PERSONecho -n "$VALUE" | python -m json.tool The result is: No JSON object could be decoded Doing the same with jq , i. e. echo -n "$VALUE" | jq '.' There is no output. There is the same behavior for the following: VALUE=<<PERSON'{ "type": "account", "customer_id": "1234", "customer_email": "[email protected]" }'PERSONecho -n "$VALUE" | python -m json.tool Response: No JSON object could be decoded But the following works: VALUE='{ "type": "account", "customer_id": "1234", "customer_email": "[email protected]"}'echo -n "$VALUE" | jq '.'echo -n "$VALUE" | python -m json.tool | VALUE=<<PERSONsome dataPERSONecho "$VALUE" No output. A here-document is a redirection , you can't redirect into a variable. When the command line is parsed, redirections are handled in a separate step from variable assignments. Your command is therefore equivalent to (note the space) VALUE= <<PERSONsome dataPERSON That is, it assigns an empty string to your variable, then redirects standard input from the here-string into the command (but there is no command, so nothing happens). Note that <<PERSONsome dataPERSON is valid, as is <somefile It's just that there is no command whose standard input stream can be set to contain the data, so it's just lost. This would work though: VALUE=$(cat <<PERSONsome dataPERSON) Here, the command that receives the here-document is cat , and it copies it to its standard output. This is then what is assigned to the variable by means of the command substitution. In your case, you could instead use python -m json.tool <<END_JSONJSON data hereEND_JSON without taking the extra step of storing the data in a variable. It may also be worth while to look into tools like jo to create the JSON data with the correct encoding: For example: jo type=account customer_id=1234 [email protected] random_data="some^Wdata" ... where ^W is a literal Ctrl+W character, would output {"type":"account","customer_id":1234,"customer_email":"[email protected]","random_data":"some\u0017data"} So the command in the question could be written jo type=account customer_id=1234 [email protected] |python -m json.tool | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/436978",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/264385/"
]
} |
437,002 | Would someone please explain what is happening with below awk command? if it's no error then why notme is not getting print, and why I'm not getting syntax error then for close bracket ...} need close with quote ...}' ? $ awk '{print "me "$0 '"notme"} <<<"printme"me printme Then so I would tried this: $ awk '{print "me "$0 '"\"$(date)"\"} <<<"printme-at "me printme-at Wed Apr 11 16:41:34 DST 2018 Or awk '{print '"\"$(date)\""} <<<"run"Wed Apr 11 16:56:38 DST 2018 Which as it shows means I can do everything using shell command substitution. Is this a bug? Or maybe a special state that I cannot find. | For the first one $ awk '{print "me "$0 '"notme"} <<<"printme" What's happening here is: The part in single quotes is passed to awk verbatim The next part "notme"} is parsed by the shell and then passed to awk as the resulting string notme} awk gets to see this: {print "me "$0 notme} Since the awk variable notme has no value, that's what you get For the second one $ awk '{print "me "$0 '"\"$(date)"\"} <<<"printme-at "me printme-at Wed Apr 11 16:41:34 DST 2018 I'd be more inclined to write it like this, using an awk variable to carry the value of $(date) : awk -v date="$(date)" '{print "me "$0 date}' <<<"printme-at "me printme-at Wed Apr 11 13:43:31 BST 2018 You've asked why there is no syntax error in your version of this one. Let's take it apart: # typed at the shellawk '{print "me "$0 '"\"$(date)"\"} <<<"printme-at "# prepared for awkawk '{print "me "$0 "Wed Apr 11 16:41:34 DST 2018"}' <<<"printme-at "# seen by awk{print "me "$0 "Wed Apr 11 16:41:34 DST 2018"} The double-quoted string "\"$(date)\"" is parsed by the shell before awk gets anywhere near it, and is evaluated (by the shell) as something like the literal string "Wed Apr 11 13:43:31 BST 2018" (including the double-quote marks). I don't see why there needs to be a syntax error as what you have written is valid - although somewhat tortuous to read. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/437002",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/72456/"
]
} |
437,041 | Example: FlightGear (2GB) is installing and I just need to install udftools quickly, without breaking the large FlightGear installation in the process. Windows supports installing two programs simultaneously, but if I try it on Linux, even as a different user and on a different tty, it fails. How do I install two applications simultaneously? | You can’t; APT, just like most other package managers, uses a lock to ensure that a single package management operation is ever in progress at any given time. This is done to enforce consistency: it’s important to keep the state of the package database, and the state of packages, coherent, and the easiest way to do that is to guarantee that they’re never undergoing several concurrent modifications. The locks are always in a fixed place (otherwise they wouldn’t be all that useful), so you can use them yourself to queue work up, using something like lockf : lockf /var/lib/dpkg/lock apt-get update will wait for the lock to be freed (if necessary) before running apt-get update . | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/437041",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/270469/"
]
} |
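On Debian and Ubuntu the answer's lockf is usually spelled flock (from util-linux); a hedged sketch using the package name from the question, plus the built-in wait option that much newer APT releases added:

    sudo flock /var/lib/dpkg/lock apt-get install udftools      # queue behind the current operation
    sudo apt-get -o DPkg::Lock::Timeout=600 install udftools    # APT >= 1.9.11: wait up to 600 s for the lock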
437,111 | I was just thinking about the rm -rf * command. The rm command removes any files following it, the -rf bit indicates to extend the rm command to include directories, and * means everything. So, I thought about what would happen If I did: cd rm -rf * Could this ruin a computer? I'm not very well versed with what everything in the root directory does, but it seems like a lot of it essentially runs the computer. So what would happen if I did this? How bad would it be? Could it break a computer? Further As an interesting additional point, are there any commands as basic as this that can be done in the terminal which would be very damaging? | If you ran the sequence of commands: cdrm -rf * All non-hidden files and directories in your home directory will be deleted. Any contents of any userfs -mounted partitions (networked or otherwise) will be deleted. You may or may not be a very sad panda. Would "this break the computer"? No. Would it cause you to lose any of your files, personally installed applications, desktop configurations, et cetera? Definitely. If you did this (with superuser permissions) in the root directory, the results would be catastrophic. Any (non-hidden) files in the root directory, and the contents of all (non-hidden) directories in the entire filesystem would be deleted. Again, this includes the contents of any remotely mounted media (such as that NAS mount in /mnt/media to your collection of TV shows and movies, for instance). Again - would this "break the computer"? No. Would it render it unusable until a new operating system is installed? Almost definitely unless another ( unmounted ) bootable partition exists. If you are unfortunate enough to have /boot mounted read-write after boot-time, there may be ramifications reaching to the bootability of other operating systems also. Don't do this. Even on a VM. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/437111",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/285657/"
]
} |
437,146 | I have a project where I know a single computer and a single printer will be the only things on the network. What I want to do is detect when the printer is connected to the network. I also know that the computer is 192.168.3.1 . However, with DHCP I won't know the printer address (yes, it could be made static to make it easier but, 'they' don't like that. 'They' want it dynamic) What I have is a script that does the following and it works. nmap -sP 192.168.3.0/24 \ | awk '/192.168.3/ && !/192.168.3.1$/' \ | sed 's/Nmap scan report for //' Nmap output Nmap scan report for 192.168.3.1 Host is up (0.014s latency). Nmap scan report for 192.168.3.100 Host is up (0.012s latency). Nmap done: 256 IP addresses (2 hosts up) scanned in 2.54 seconds Script output 192.168.3.100 It only takes a couple seconds to work but is there a better/cleaner/faster way? | You can accomplish this with the following awk command: nmap -sP 192.168.3.0/24 \ | awk '/192.168.3/ && !/192.168.3.1$/{print $NF}' This is telling awk to print the last field of the matched line(s) | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/437146",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/285288/"
]
} |
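An alternative that avoids parsing the human-readable report at all, sketched for the same network and excluded host as the question (-oG is nmap's grepable output; -sn is the current spelling of -sP):

    nmap -sn -oG - 192.168.3.0/24 | awk '/Status: Up/ && $2 != "192.168.3.1" {print $2}'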
437,242 | When connecting to my development server via ssh , I can forward remote ports to local ports via: ssh [email protected] -L 5432:localhost:5432 However I'd rather use mosh because my connection tends to drop. I tried extending my usual mosh command (that works) with the --ssh parameter: mosh --ssh "ssh -L 5432:localhost:5432" [email protected] Which gets me connected without error - but doesn't do anything for my ports. Is there a way to make port forwarding work when connecting via mosh ? | I found an open issue for this exact feature at Mosh's GitHub . And an open bounty at bountysource currently at $616. So it looks like it's not possible yet. -- As a workaround for my SSH disconnect issue I added the following lines to my server's /etc/ssh/sshd_config : ClientAliveInterval 60 # send null packet every x seconds to clientsClientAliveCountMax 720 # time them out after doing so y times Followed by a restart of the SSH daemon and a re-login via SSH. sudo /etc/init.d/ssh restartsudo service ssh restartsudo systemctl restart ssh This of course doesn't help with situations like changing cell towers on mobile connections like mosh does. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/437242",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/285753/"
]
} |
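The server-side keep-alive settings above have a client-side counterpart; a sketch of an ~/.ssh/config entry carrying both the keep-alives and the port forward from the question (the host alias and address are placeholders):

    Host dev
        HostName 123.123.123.123
        User user
        ServerAliveInterval 60
        ServerAliveCountMax 10
        LocalForward 5432 localhost:5432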
437,261 | $_ is said to be the last argument of the previous command. So I wonder why it is not EDITOR="emacs -nw" but EDITOR in the following example? Why isn't "emacs -nw" part of the last argument? More generally, what are the definitions of an argument, and the last argument? Thanks. $ export EDITOR="emacs -nw"$ echo $_EDITOR | Bash processes variable assignments, when they’re allowed as arguments (with alias , declare , export , local , readonly , and typeset ), before anything else (or rather, it identifies them before anything else — expansion applies to the values assigned to variables). When it gets to word expansion, the remaining command is export EDITOR , so _ is set to EDITOR . Generally speaking, arguments are the “words” remaining after expansion (which doesn’t include variable assignments and redirections). See Simple command expansion in the Bash manual for details. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/437261",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/674/"
]
} |
437,265 | sensors|grep -oP "Core 1:\s*\+\K[0-9]+" >> lmsenreading1.txt Then: sensors|grep -oP "Core 0:\s*\+\K[0-9]+" >> lmsenreading0.txt Then join the two .txt files with a delimiter , .This should give, for example, 65,66 If I use sensors|grep -oP ":\s*\+\K[0-9]+" my output is 276566 The 27 is not required. How do I format the output from sensors|grep -oP ":\s*\+\K[0-9]+" to give: 65,66 | Bash processes variable assignments, when they’re allowed as arguments (with alias , declare , export , local , readonly , and typeset ), before anything else (or rather, it identifies them before anything else — expansion applies to the values assigned to variables). When it gets to word expansion, the remaining command is export EDITOR , so _ is set to EDITOR . Generally speaking, arguments are the “words” remaining after expansion (which doesn’t include variable assignments and redirections). See Simple command expansion in the Bash manual for details. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/437265",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/285495/"
]
} |
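A compact way to get the comma-joined core readings asked about above, sketched with the same grep style the question already uses (restricting the pattern to the Core lines drops other sensors such as the 27, and paste joins the matches with commas):

    sensors | grep -oP 'Core [01]:\s*\+\K[0-9]+' | paste -sd, -   # e.g. 65,66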
437,298 | I'm starting/trying to learn a bit of bash scripting and i'm wondering what is wrong with my method of passing arguments to a function from the terminal (see below) as my method seems similar to many found in internet tutorials. #!/bin/bashfunction addition_example(){ result = $(($1+$2)) echo Addition of the supplied arguments = $result} I call the script as follows: source script_name.sh "20" "20"; addition_example This returns the following: bash: +: syntax error: operand expected (error token is "+") I've also tried: addition_example "$20" "$20" This returns the following: bash: result: command not foundAddition of the supplied arguments = | You are running the addition_example function with no arguments. Therefore, the $1 and $2 variables are empty and what you're actually executing is just result = $((+)) . This gives precisely the error you mention: $ result = $((+))bash: +: syntax error: operand expected (error token is "+") When you run source script_name.sh 20 20 , the shell will source script_name.sh , passing it 20 and 20 as arguments. However, script_name.sh doesn't actually contain any commands, it only has a function declaration. So the arguments are ignored. Then, in a subsequent command, you run addition_example but with no arguments, so you get the error described above. Also, you have a syntax error. You can't have spaces around the assignment operator ( = ) in shell scripts. You should change your script to: function addition_example(){ result=$(($1+$2)) echo "Addition of the supplied arguments = $result"} And then run the function with the desired arguments: $ source script_name.sh; addition_example 20 20Addition of the supplied arguments = 40 | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/437298",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/252422/"
]
} |
437,356 | Recently, i used echo command prefixing it with $. To my surprise what it resulted was an error.My command was something like this. # !/bin/bash$(echo 'a') The error was.. ./test1.sh: line 3: a: command not found Can anyone explain what is happening here.Thanks in advance. | $(echo a) is a "command substitution". The $(...) will be replaced by the output of the command within . The output in this case is a , which the shell then tries to execute. The shell can't locate the command called a and you get the error message. It is unclear what your intention with this was or what you expected to happen. It is highly unusual to want to execute the result of a command substitution. Some programs output strings that should be evaluated by the shell. It is therefore possible to see code like eval "$(ssh-agent)" which evaluates (runs) the output of the given command. These commands have a strictly specified output and are generally considered safe to run in this way. In the example above, ssh-agent will start the SSH agent process and output a few commands that will set the appropriate environment variables that the ssh client later will need for using the agent, for example, SSH_AUTH_SOCK=/tmp/ssh-Ppg1EO5eRIZp/agent.6017; export SSH_AUTH_SOCK;SSH_AGENT_PID=6018; export SSH_AGENT_PID;echo Agent pid 6018; This is then evaluated by eval . eval is used here rather than just simply using $(ssh-agent) since the output of the ssh-agent command is more a compound command. Without eval , the ; command terminators would net be special. Example: $ s='echo hello; echo world'$ $shello; echo world$ eval "$s"helloworld | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/437356",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/283676/"
]
} |
437,383 | 2492 some string continues here I would like to convert this to 2492 in Bash. How would I go about that? This feels close, but is not working: var="2492 some string continues here "echo ${var%[[:space:]]*} | Because there are multiple spaces you want to use ${var%%[[:space:]]*}# ...^^ to remove the longest trailing substring that starts with a space With just a single % you're removing the shortest sequence of a space followed by zero or more characters, which is just the last space in the string. $ echo ">$var<"; echo ">${var%[[:space:]]*}<"; echo ">${var%%[[:space:]]*}<">2492 some string continues here <>2492 some string continues here <>2492< If you're just looking for the first word , you can do this: read -r word rest_of_string <<<"$var"echo "I have: $word" That will take care of leading whitespace, assuming you have not altered the IFS variable. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/437383",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/110431/"
]
} |
437,415 | Here's my log format(simplified for demonstrating) 2018-04-12 14:43:00.000 ERROR hello2018-04-12 14:44:01.000 ERROR world2018-04-12 14:44:03.000 INFO this is a multi-line logNOTICE THIS LINE, this line is also part of the log2018-04-12 14:46:00.000 INFO foo So how to filter the log of [2018-04-12 14:44:00.000, 2018-04-12 14:45:00.000) to produce the following output? 2018-04-12 14:44:01.000 ERROR world2018-04-12 14:44:03.000 INFO this is a multi-line logNOTICE THIS LINE, this line is also part of the log | With awk : awk -v 'start=2018-04-12 14:44:00.000' -v end='2018-04-12 14:45:00.000' ' /^[[:digit:]]{4}-[[:digit:]]{2}-[[:digit:]]{2} / { inrange = $0 >= start && $0 <= end } inrange' < your-file It won't work with mawk which doesn't support POSIX character classes nor interval regexp operators. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/437415",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/208490/"
]
} |
437,426 | I am in a situation where I have the /tmp directory having atleast 25,000 - 50,000 directories in it. I am trying to use the following command to delete the directories which are older than 2 days in that directory. find /path/to/tmp/* -type d -ctime +2 -delete But I keep running into the error that the argument list is too long. How can I specifically limit the number of directories being deleted? I tried using the maxdepth 1 option as well and that didn't seem to work. | With awk : awk -v 'start=2018-04-12 14:44:00.000' -v end='2018-04-12 14:45:00.000' ' /^[[:digit:]]{4}-[[:digit:]]{2}-[[:digit:]]{2} / { inrange = $0 >= start && $0 <= end } inrange' < your-file It won't work with mawk which doesn't support POSIX character classes nor interval regexp operators. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/437426",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/285906/"
]
} |
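The "argument list too long" in the question above comes from the shell expanding /path/to/tmp/* before find ever runs; a hedged sketch that lets find do the traversal instead (note that -delete only removes empty directories, hence -exec rm -rf):

    find /path/to/tmp -mindepth 1 -maxdepth 1 -type d -ctime +2 -exec rm -rf {} +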
437,431 | I am merging two movie libraries and am looking to "de-duplicate" manually via bash scripting. Here is my thought process so far: Find all files with same name regardless of extension Delete smaller file (I have storage for days! and prefer quality!) I could build on this, so if I can somehow make the delete part separate, I can build on it. My though being I could use ffmpeg to inspect the video and pick the better one, but I'm guessing bigger size = best option and simpler to code. I posted of Software Rec but didn't get what I wanted so I realized bash is my best bet, but my "find" knowledge is limited and most of the answers I am finding are way to complicated, I figure this should be a simple thing. Eg: Find files with same name but different content? | This is a nice way I wrote to just find the repeating files ignoring extension: find . -exec bash -c 'basename "$0" ".${0##*.}"' {} \; | sort | uniq --repeated Then I wrapped it in this loop to find the smaller of the two files for each: for i in $(find . -exec bash -c 'basename "$0" ".${0##*.}"' {} \; | sort | uniq --repeated); do find . -name "$i*" -printf '%s %p\n' | sort -n | head -1 | cut -d ' ' -f 2-; done Finally one more loop to (interactively, with rm -i so there's a prompt before every one) delete all those files: for j in $(for i in $(find . -exec bash -c 'basename "$0" ".${0##*.}"' {} \; | sort | uniq --repeated); do find . -name "$i*" -printf '%s %p\n' | sort -n | head -1 | cut -d ' ' -f 2-; done); do rm -i "$j"; done As this involves doing two find s on your directory, surely there is a better way. But this should work for simple cases. It also assumes you're working from the current directory, if you want to perform the command on a different one just change the . argument to both find commands. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/437431",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/130767/"
]
} |
437,468 | I connected a pair of AirPods to everything I could. Android, OSX, Linux Mint, Arch LInux. It sounds great on all of them, but when connected under Arch, I can get get less than half the volume even if I max all volumes I can find. It's strange that Mint gets the volume right. I switched to Linux Mint for a while for this exact reason. But I prefer Arch. It's smoother and faster. Pacman is another easy to use tool.However, I searched for all and any solutions to bluetooth volume, but none worked.Volume on wired headphones and laptop's speakers is loud and clear. Problem only exists in bluetooth device that relies on source to set volume. If the device has own volume buttons, then I can pump up the volume all the way. From Gnome Sound Settings I tried going over 100%, but the sound is distorted. I tried alsamixer and pavucontrol. All volumes are maxed, but I only get Intel card and PulseAudio. should I also have a bluetooth volume? I also found PulseAudio/Troubleshooting - Volume adjustment does not work properly which mentioned the volume cap of 65536. Since sound is clear, I believe this volume limit is the source of my problem. But even if I try to increase the volume as mentioned there, I cannot get past the upper limit of 65536. $ amixer set Master 12345+Simple mixer control 'Master',0 Capabilities: pvolume pswitch pswitch-joined Playback channels: Front Left - Front Right Limits: Playback 0 - 65536 Mono: Front Left: Playback 65536 [100%] [on] Front Right: Playback 65536 [100%] [on] Debugging Bad dB Information of ALSA Drivers describes the same problem, but I could not get any information using this tool. I believe there should be a way to set a config per bluetooth device and set the lower and upper limits.Alternative, maybe setting the volume to dB instead of absolute value might help, but disabling flat-volumes in /etc/pulse/daemon.conf did nothing. The only comparison I was able to make against LinuxMint is that Mint sets dB instead of absolute value. (I have a live USB so I can boot any time in Mint) Any suggestion is welcome. | VMG's answer is subtly wrong; it will technically work, but it will disable all other plugins than a2dp, meaning bluetooth keyboards/mice/gamepads/etc will stop working, when the only plugin causing issues seems to be one called avrcp. Edit /lib/systemd/system/bluetooth.service and change ExecStart=/usr/lib/bluetooth/bluetoothd to ExecStart=/usr/lib/bluetooth/bluetoothd --noplugin=avrcp and run sudo systemctl daemon-reloadsudo systemctl restart bluetooth | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/437468",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/285941/"
]
} |
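The same change can be made as a systemd drop-in, sketched here so that a bluez package upgrade does not overwrite the edited unit (the empty ExecStart= is required to clear the original value before replacing it):

    sudo systemctl edit bluetooth.service
    # in the editor that opens, add:
    #   [Service]
    #   ExecStart=
    #   ExecStart=/usr/lib/bluetooth/bluetoothd --noplugin=avrcp
    sudo systemctl restart bluetooth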
437,469 | The mails sent are waiting in queue with the error below: `(Host or domain name not found. Name service error for name=srvr1.com.my type=MX: Host not found, try again)` However, I have defined the host entry for that domain in /etc/hosts . | VMG's answer is subtly wrong; it will technically work, but it will disable all other plugins than a2dp, meaning bluetooth keyboards/mice/gamepads/etc will stop working, when the only plugin causing issues seems to be one called avrcp. Edit /lib/systemd/system/bluetooth.service and change ExecStart=/usr/lib/bluetooth/bluetoothd to ExecStart=/usr/lib/bluetooth/bluetoothd --noplugin=avrcp and run sudo systemctl daemon-reloadsudo systemctl restart bluetooth | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/437469",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/176232/"
]
} |
437,524 | I am working on a file where I have columns of very large values. (like 40 digits : 646512354651316486246729519751795724672467596754627.06843 and so on ...) I would like to have those numbers in scientific notation but with only 3 or 4 numbers after the dot. Is there a way to use sed or something to do that on every number that appears in my file? | Your shell might have a printf builtin you can use to format numbers. $ type printfprintf is a shell builtin$ printf "%.3e\n" 646512354651316486246729519751795724672467596754627.068436.465e+50$ printf "%.4e\n" 646512354651316486246729519751795724672467596754627.068436.4651e+50$ _ If not, there's often a dedicated printf binary, too. $ which printf/usr/bin/printf$ _ | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/437524",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/285950/"
]
} |
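Since the question asks about every number that appears in the file rather than a single value, here is a hedged awk sketch (it assumes whitespace-separated fields and treats a field as numeric when it survives a numeric self-comparison):

    awk '{ for (i = 1; i <= NF; i++) if ($i + 0 == $i) $i = sprintf("%.3e", $i); print }' file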
437,602 | Does POSIX mandate that stdin is 0, stdout is 1 and stderr is 2 or is this only a convention? Do other systems diverge from that convention or is it a safe assumption? | It seems that they are standardized in the POSIX spec, POSIX.1-2017 by proxy of unistd.h The header shall define the following symbolic constants for file streams: STDERR_FILENO File number of stderr; 2. STDIN_FILENO File number of stdin; 0. STDOUT_FILENO File number of stdout; 1. But also the POSIX docs on " stderr , stdin , stdout - standard I/O streams" state, This volume of POSIX.1-2017 defers to the ISO C standard. The ISO ISO/IEC 9899:201x Standard state only, The three predefined streams stdin, stdout, and stderr are unoriented at program startup. It seems ISO C is relatively mute on this allowing the kernel to assign whatever it wants to the descriptors known as STDOUT , STDERR , and STDIN . But that the POSIX docs on unistd.h are explicit about what they should resolve to at that level. Other Operating Systems Microsoft Windows follows the POISX convention in the "Low Level I/O" interface | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/437602",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/3285/"
]
} |
437,680 | Is there a data structure for bash scripts that can be used similar to how something like a java.util.Set would be used? Basically a collection that even if you add a duplicate element to it won't allow you to add two of the same element? I'm not looking to store anything complicated, just a set of strings. Also, if it does exist, does it require a particular version of bash or is it just a POSIX compliant thing? I'm aware that bash does have arrays, and some versions of bash have hashmaps (but not all versions). | If all you care about is a set of strings, you can just use an associative array ("hashmap"): declare -A hmhm[hello]=1hm[world]=1hm[hello]=1if [ "${hm[hello]}" ] ; then ... ; fi All you care about is whether there's something associated with the key or not. The value doesn’t matter, we only care that there’s a non-empty string there (so you can "delete" an entry by setting it to an empty string). This would be analogous to using a HashMap<String,Object> to represent a set (which is actually what Java’s HashSet does , in fact). These associative arrays are available in Bash 4 and later, and also in zsh and ksh. They don't work in 3-series Bash versions and earlier, including macOS’s Bash 3.2. There is no POSIX equivalent. You could simulate the effect using eval if your strings are suitably restricted, or have a natural transformation to valid variable names: hm_hello=1hm_world=1key=testeval "hm_$key=1"if [ "$(eval hm_$key)" ] ; then ... ; fi You could also use a temporary file and grep , for example, or even lots of temporary files and the filesystem as a key store. It's also possible (perhaps likely) that using some other tool or language is more suitable than shell script. At a minimum, awk is available on all POSIX systems and it does support string-keyed associative arrays. If you really do have complex data-structure needs a conventional general-purpose language may be still more appropriate. Perl and Python are also widely available. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/437680",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/3570/"
]
} |
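A small usage sketch of the associative-array-as-set idea from the answer (bash 4+; the -v test on a subscript needs bash 4.3 or later):

    declare -A seen
    for word in hello world hello; do seen[$word]=1; done
    echo "${!seen[@]}"                # members, duplicates collapsed: hello world
    [[ -v seen[hello] ]] && echo "hello is a member"
    unset 'seen[world]'               # remove an element
    echo "${#seen[@]}"                # cardinality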
437,761 | I have configured the Debian workstations at our department to use Exim 4 for mail delivery. Also I have created an alias such that I receive all the root emails. The Exim 4 configuration (via Ansible and debconf) has those settings: exim4_dc_eximconfig_configtype: internetexim4_dc_readhost: …exim4_dc_smarthost: …exim4_dc_use_split_config: 'true'exim4_dc_hide_mailname: 'true'exim4_dc_mailname_in_oh: 'true' On each of the machines, I can use mailx to send an email to root and it will show up in my inbox just fine. Also some executions of the cron jobs are properly sent to me. However, most cron jobs fail to deliver their emails and instead I get the following email: This message was created automatically by mail delivery software.A message that you sent could not be delivered to one or more of itsrecipients. This is a permanent error. The following address(es) failed: ueding@… (generated from root@echo)Reporting-MTA: dns; echoAction: failedFinal-Recipient: rfc822;ueding@…Status: 5.0.0Return-path: <root@echo>Received: from root by echo with local (Exim 4.89) (envelope-from <root@echo>) id 1f7Jqz-0007jU-7y for root@echo; Sat, 14 Apr 2018 14:00:25 +0200From: root@echo (Cron Daemon)To: root@echoSubject: Cron <root@echo> ansible-pull -U [email protected]:…/….git --private-key /root/.ssh/ansible_pull localhost.ymlMIME-Version: 1.0Content-Type: text/plain; charset=US-ASCIIContent-Transfer-Encoding: 8bitX-Cron-Env: <SHELL=/bin/sh>X-Cron-Env: <HOME=/root>X-Cron-Env: <PATH=/usr/bin:/bin>X-Cron-Env: <LOGNAME=root>Message-Id: <E1f7Jqz-0007jU-7y@echo>Date: Sat, 14 Apr 2018 14:00:25 +0200X-Exim-DSN-Information: Due to administrative limits only headers are returned I really do not understand why this is happening. Either all email delivery fails, or almost all succeed. How can the email from cron fail on most workstations but succeed on others, while the delivery failure emails always get through? The system log regarding exim on the machine, echo, is really sparse: # journalctl -u exim4.service -- Logs begin at Tue 2018-03-06 18:35:11 CET, end at Sat 2018-04-14 17:13:08 CEST. --Apr 02 18:00:30 echo systemd[1]: Starting LSB: exim Mail Transport Agent...Apr 02 18:01:23 echo exim4[27433]: Starting MTA: exim4.Apr 02 18:01:23 echo systemd[1]: Started LSB: exim Mail Transport Agent. Looking into /var/log/exim4/mainlog serves the explanation on a silver platter: 2018-04-14 14:00:25 1f7Jqz-0007jU-7y <= root@echo U=root P=local S=79482018-04-14 14:00:25 1f7Jqz-0007jU-7y ** ueding@… <root@echo> R=dnslookup T=remote_smtp: message is too big (transport limit = 1)2018-04-14 14:00:25 1f7Jqz-0007jW-BM <= <> R=1f7Jqz-0007jU-7y U=Debian-exim P=local S=18562018-04-14 14:00:25 1f7Jqz-0007jU-7y Completed2018-04-14 14:00:26 1f7Jqz-0007jW-BM => ueding@… <root@echo> R=dnslookup T=remote_smtp H=… […] X=TLS1.0:RSA_AES_256_CBC_SHA1:256 CV=yes DN="C=DE,ST=…,L=…,O=…,OU=…,CN=…" C="250 2.0.0 Ok: queued as 6FCA1155FC32"2018-04-14 14:00:26 1f7Jqz-0007jW-BM Completed The error likely is “message is too big (transport limit = 1)”. But that still does not make so much sense since I have 30 identically configured workstations and for some of them the messages get through several days in a row. The length of the message should be the same for each machine (except the length of the hostname), and the two machines which currently get their emails through have longer names. 
| example in log: [email protected] R=smarthost T=remote_smtp_smarthost: message is too big (transport limit = 1) this means a max LINE limit of 998 chars has been hit solution: in "/etc/exim4/exim4.conf.localmacros" add the following line: IGNORE_SMTP_LINE_LENGTH_LIMIT = 1 and then restart exim4. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/437761",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/8251/"
]
} |
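On Debian the steps around the answer's one-line macro would look roughly like this (the macro name comes from the answer; update-exim4.conf regenerates the split configuration mentioned in the question):

    echo 'IGNORE_SMTP_LINE_LENGTH_LIMIT = 1' | sudo tee -a /etc/exim4/exim4.conf.localmacros
    sudo update-exim4.conf
    sudo systemctl restart exim4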
437,812 | Bash uses exclamation marks for history expansions, as explained in the answers to this question (e.g. sudo !! runs the previous command-line with sudo ). However, I can't find anywhere that explains what running the following command (i.e. a single exclamation mark) does: ! It appears to print nothing and exit with 1, but I'm not sure why it does that. I've looked online and in the Bash man page, but can't find anything, apart from the fact that it's a "reserved word" – but so is } , and running this: } prints an error: bash: syntax error near unexpected token `}' | The lone ! at the start of a command negates the exit status of the command or pipeline : if the command exits 0 , it will flip into 1 (failure), and if it exits non-zero it will turn it into a 0 (successful) exit. This use is documented in the Bash manual: If the reserved word ‘!’ precedes the pipeline, the exit status is the logical negation of the exit status as described above. A ! with no following command negates the empty command, which does nothing and returns true (equivalent to the : command ). It thus inverts the true to a false and exits with status 1, but produces no error. There are also other uses of ! within the test and [[ commands, where they negate a conditional test. These are unrelated to what you're seeing. In both your question and those cases it's not related to history expansion and the ! is separated from any other terms. | {
"score": 7,
"source": [
"https://unix.stackexchange.com/questions/437812",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/226269/"
]
} |
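A couple of quick illustrations of that negation in everyday use (nothing here is specific to the question beyond plain bash):

    if ! grep -q root /etc/passwd; then echo "no root entry"; fi
    ! false; echo $?    # prints 0: failure flipped to success
    ! true;  echo $?    # prints 1: success flipped to failure
    !                   # negates the empty command, so it exits 1 with no output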
437,822 | I confess I am not an expert on unix commands but I moved a file to another directory and got an error message - yet the file was moved. Why is this message ' mv: rename to to /Users/billtubbs/Library/Script Libraries/to: No such file or directory ' raised? Is it because Library is a protected folder and I was supposed to use sudo ... ? This is in a Terminal on Mac OS X 10.13.4: BillsMacBookPro:Scripts billtubbs$ ls ~/Library/'Script Libraries'BillsMacBookPro:Scripts billtubbs$ mv FileHandlers.scpt to ~/Library/'Script Libraries'mv: rename to to /Users/billtubbs/Library/Script Libraries/to: No such file or directoryBillsMacBookPro:Scripts billtubbs$ lsAddress Book Scripts Terminal scriptsApplications Test Script read html page .scptBus Data Test script parse html form.scptFirefox scripts TextDataFromFile.scptListHandlers.scpt TextHandlers.scptMail Scripts What Time Is It? Old.scptMorning routine OLD.scpt What Time Is It?.appMorning routine.app What time is it?.scptMorning routine.scpt What time is it?.scptdNumberHandlers.scpt When's the next bus?.scptNumbers scripts When's the next number 19 bus?.scptPOF member details.scpt When's the next number 20 bus?.scptREADME.md When's the next number 25 bus?.scptSafari scripts mail subject line.scptSave mail message to file.scpt mail_read.scptSpeak_time.applescript save mail_copy.scptSpeak_time.scpt search POF script.scptSpeak_time.zipBillsMacBookPro:Scripts billtubbs$ mv TextHandlers.scpt to ~/Library/'Script Libraries'mv: rename to to /Users/billtubbs/Library/Script Libraries/to: No such file or directoryBillsMacBookPro:Scripts billtubbs$ ls ~/Library/'Script Libraries'FileHandlers.scpt TextHandlers.scptBillsMacBookPro:Scripts billtubbs$ ls TextH*ls: TextH*: No such file or directory | Because you said mv (source filename) to (target directory) and Unix commands aren’t English —you don’t say things like mv something to somewhere. mv saw mv (source filename 1 ) (source filename 2 ) (target directory) where (source filename 2 ) was to ,and the error message says that there’s no such file as to . | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/437822",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/286198/"
]
} |
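A short sketch of the corrected invocation, following the explanation above (filenames taken from the question):

    # wrong: "to" is parsed as a second source file, hence the error message
    mv FileHandlers.scpt to ~/Library/'Script Libraries'
    # right: only the source file and the target directory
    mv FileHandlers.scpt ~/Library/'Script Libraries'/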
437,965 | In order to remind myself when I try to use shopt in Zsh instead of setopt , I created the following alias, testing it first at a shell prompt: $ alias shopt='echo "You\'re looking for setopt. This is Z shell, man, not Bash."' Despite the outer single quotes matching and the inner double quotes matching, and the apostrophe being escaped, I was prompted to finish closing the quotes with: dquote > _ What's going on? It appeared that the escaping was being ignored, or that it needed to be double-escaped because of multiple levels of interpretation... So, just to test this theory, I tried double-escaping it (and triple-escaping it, and so on) all the way up until: alias shopt='echo "You\\\\\\\\\\\\\\\\\\\\\\'re looking for setopt. This is Z shell, man, not Bash." ' and never saw any different behavior. This makes no sense to me. What kind of weird voodoo is preventing the shell from behaving as I expect? The practical solution is to not use quotes for echo , since it doesn't really need any, and to use double quotes for alias , and to escape the apostrophe so it is ignored when the text is echo ed. Then all of the practical problems go away. Can you help me? I need resolution to this perplexing problem. | This is zsh , man, not fish . In zsh , like in every Bourne-like shell (and also csh ), single quotes are strong quotes, there is no escaping within them (except by using the rcquotes options as hinted by @JdeBP where zsh emulates rc quotes¹). You cannot have a single quote inside a single-quoted string, you need to first close the single quoted string and enter the literal single quote using another quoting mechanism (like \ or " ): alias shopt='echo "You'\''re looking for setopt. This is Z shell, man, not Bash."' Or: alias shopt='echo "You'"'"'re looking for setopt. This is Z shell, man, not Bash."' Though you could also do: alias shopt="echo \"You're looking for setopt. This is Z shell, man, not Bash.\"" ( "..." are weaker quotes inside which several characters, including \ (here used to escape the embedded " ) are still special). Or: alias shopt=$'echo "You\'re looking for setopt. This is Z shell, man, not Bash."' ( $'...' is yet another kind of quotes from ksh93, where the ' can be escaped with \' ). (and BTW, you can also use the standard set -o in place of setopt in zsh . bash , for historical reasons, has two sets of options, one that you set with set -o one with shopt ; zsh like most other shells has only one set of options). ¹ In `rc`, the shell of Plan9, with a version for unix-likes also available, [single quotes are the only quoting mechanism](/a/296147) (backslash and double quotes are ordinary characters there), the only way to enter a literal single-quote there is with `''` inside single quotes, so with `zsh -o rcquotes`, you could do: alias shopt='echo "You''re looking for setopt. This is Z shell, man, not Bash."' | {
"score": 7,
"source": [
"https://unix.stackexchange.com/questions/437965",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/3358/"
]
} |
438,064 | I'm trying to set up watchman as a user service. I've followed their documentation as closely as possible. This is what I have: The socket file: [Unit]Description=Watchman socket for user %i[Socket]ListenStream=/usr/local/var/run/watchman/%i-state/sockAccept=falseSocketMode=0664SocketUser=%iSocketGroup=%i[Install]WantedBy=sockets.target The service file: [Unit]Description=Watchman for user %iAfter=remote-fs.targetConflicts=shutdown.target[Service]ExecStart=/usr/local/bin/watchman --foreground --inetd --log-level=2ExecStop=/usr/bin/pkill -u %i -x watchmanRestart=on-failureUser=%iGroup=%iStandardInput=socketStandardOutput=syslogSyslogIdentifier=watchman-%i[Install]WantedBy=multi-user.target Systemd attempts to run watchman but is stuck in a restart loop. These are the errors I get: Apr 16 05:41:00 debian systemd[20894]: [email protected]: Failed to determine supplementary groups: Operation not permittedApr 16 05:41:00 debian systemd[20894]: [email protected]: Failed at step GROUP spawning /usr/local/bin/watchman: Operation not permitted I'm 100% sure the group and user I'm enabling this service & socket exists.What am I doing wrong? | I was running into the same issue. Googling I found this thread: https://bbs.archlinux.org/viewtopic.php?id=233035 The problem is with how the service is being started. If you specify the user/group in the unit file then you should start the service as a system service. If you want to start the service as a user service then the User/Group is not needed and can be removed from the unit config. You simply start the service when logged in as the current user passing the --user flag to systemctl. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/438064",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/2781/"
]
} |
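A hedged sketch of the user-service route described above; the unit names (watchman.socket, watchman.service) and file locations are assumptions, not taken from the watchman documentation:

    # 1. Copy the units to ~/.config/systemd/user/ and delete the User=, Group=
    #    and %i specifiers, since a user manager already runs as that user.
    # 2. Reload and start them with the --user flag:
    systemctl --user daemon-reload
    systemctl --user enable --now watchman.socket
    systemctl --user status watchman.service
    journalctl --user -u watchman.service   # check the logs if it still restarts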
438,086 | I often use a colorizer "ccze" which is pretty cool, - it colors text on my shell. I just pipe any output through it. cat /etc/nginx.nginx.conf | ccze -A How can I do this with all commands by default? | I was running into the same issue. Googling I found this thread: https://bbs.archlinux.org/viewtopic.php?id=233035 The problem is with how the service is being started. If you specify the user/group in the unit file then you should start the service as a system service. If you want to start the service as a user service then the User/Group is not needed and can be removed from the unit config. You simply start the service when logged in as the current user passing the --user flag to systemctl. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/438086",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/269003/"
]
} |
438,105 | I have a text file; its content is like below. $ cat file.txt[] [1]foo1 bar1[] [2]foo2 bar2[] [35]foo3 bar3[] [445]foo4 bar4[] [87898]foo5 bar5 I can successfully remove the first column using awk, but I'm unable to remove [num] characters as it is associated with the string. I'm trying to get a output like below $ cat file.txtfoo1 bar1 foo2 bar2foo3 bar3foo4 bar4foo5 bar5 | $ sed 's/.*]//' file.txt | tr -s ' 'foo1 bar1foo2 bar2foo3 bar3foo4 bar4foo5 bar5 The sed removes everything on the line up to (and including) the final ] , and the tr compresses multiple consecutive spaces into single spaces. Alternatively, using only sed : sed -e 's/.*]//' -e 's/ */ /g' file.txt With the given input data, this produces the same output as the first pipeline. This sed first does s/.*]// which deletes everything up to the ] (inclusive). The second expression matches ␣␣* , i.e. a space followed by zero or more spaces, and replaces these with a single space. The second expression is applied across the whole line and has the same effect as tr -s ' ' , i.e. it compresses multiple consecutive spaces into single spaces. Using awk : awk -F '[][:blank:]]*' '{ print $3,$4 }' file.txt Here, we use ] or spaces or tabs as field separators (multiples of these may separate two column, which is why we use * after the [...] ). Given those separators, the wanted data is available in fields 3 and 4 on each line. After the data in the question was edited to remove some spaces between the last two columns, the following will also do the job: cut -d ']' -f 3 file.txt alternatively just sed 's/.*]//' file.txt or awk -F ']' '{ print $3 }' file.txt | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/438105",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/277542/"
]
} |
438,130 | I am trying to understanding the concept of special files on Linux. However, having a special file in /dev seems plain silly when its function could be implemented by a handful of lines in C to my knowledge. Moreover you could use it in pretty much the same manner, i.e. piping into null instead of redirecting into /dev/null . Is there a specific reason for having it as a file? Doesn't making it a file cause many other problems like too many programs accessing the same file? | In addition to the performance benefits of using a character-special device, the primary benefit is modularity . /dev/null may be used in almost any context where a file is expected, not just in shell pipelines. Consider programs that accept files as command-line parameters. # We don't care about log output.$ frobify --log-file=/dev/null# We are not interested in the compiled binary, just seeing if there are errors.$ gcc foo.c -o /dev/null || echo "foo.c does not compile!".# Easy way to force an empty list of exceptions.$ start_firewall --exception_list=/dev/null These are all cases where using a program as a source or sink would be extremely cumbersome. Even in the shell pipeline case, stdout and stderr may be redirected to files independently, something that is difficult to do with executables as sinks: # Suppress errors, but print output.$ grep foo * 2>/dev/null | {
"score": 8,
"source": [
"https://unix.stackexchange.com/questions/438130",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/278472/"
]
} |
438,223 | I try to trace the route: $ traceroute www.google.comtraceroute to www.google.com (64.13.192.76), 64 hops max, 52 byte packets 1 xiaoqiang (192.168.31.1) 5.694 ms 2.697 ms 4.784 ms 2 117.101.192.1 (117.101.192.1) 3.123 ms 6.509 ms 3.693 ms What's xiaoqiang ? It means cockroach literally. | Because it starts with 192.168.*, I guess it might be your router. Probably you purchased a router from China, and the info I found in Chinese websites shows that your router is a “XIAOMI” (the company that makes MI MIX 2) router. This should be configurable. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/438223",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/260114/"
]
} |
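Some generic ways to confirm where the name comes from (standard tools, not part of the original answer); the hostname is normally supplied by a reverse lookup answered by the router's own DNS:

    traceroute -n www.google.com     # -n skips reverse lookups, so the hop shows 192.168.31.1
    getent hosts 192.168.31.1        # shows how the name resolves via /etc/hosts or DNS
    dig -x 192.168.31.1 +short       # explicit reverse (PTR) query, if dig is installed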
438,329 | I often have large directories that I want to transfer to a local computer from a server. Instead of using recursive scp or rsync on the directory itself, I'll often tar and gzip it first and then transfer it. Recently, I've wanted to check that this is actually working so I ran md5sum on two independently generated tar and gzip archives of the same source directory. To my suprise, the MD5 hash was different. I did this two more times and it was always a new value. Why am I seeing this result? Are two tar and gzipped directories both generated with the same version of GNU tar in the exact same way not supposed to be exactly the same? For clarity, I have a source directory and a destination directory. In the destination directory I have dir1 and dir2. I'm running: tar -zcvf /destination/dir1/source.tar.gz source && md5sum /destination/dir1/source.tar.gz >> md5.txttar -zcvf /destination/dir2/source.tar.gz source && md5sum /destination/dir2/source.tar.gz >> md5.txt Each time I do this, I get a different result from md5sum. Tar produces no errors or warnings. | From the looks of things you’re probably being bitten by gzip timestamps ; to avoid those, run GZIP=-n tar -zcvf ... Note that to get fully reproducible tarballs, you should also impose the sort order used by tar : GZIP=-n tar --sort=name -zcvf ... If your version of tar doesn’t support --sort , use this instead: find source -print0 | LC_ALL=C sort -z | GZIP=-n tar --no-recursion --null -T - -zcvf ... | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/438329",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/286596/"
]
} |
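A quick way to verify the fix suggested above, assuming GNU tar 1.28+ (for --sort=name) and GNU gzip (for the GZIP=-n environment variable):

    GZIP=-n tar --sort=name -zcf /tmp/a.tar.gz source
    GZIP=-n tar --sort=name -zcf /tmp/b.tar.gz source
    md5sum /tmp/a.tar.gz /tmp/b.tar.gz   # both hashes should now be identical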
438,341 | I have a use-case to process following file types: 1 - mylog_1.log2 - mylog_2.log.gz I have to run two different text processing commands on each of them as follows: cat mylog_1.log | grep text | sort | uniq -czcat mylog_2.log.gz | grep text | sort | uniq -c (cat, grep, awk and sed are the frequently used commands) Is there a way to process both file types in a single command without unzipping the file? | From the looks of things you’re probably being bitten by gzip timestamps ; to avoid those, run GZIP=-n tar -zcvf ... Note that to get fully reproducible tarballs, you should also impose the sort order used by tar : GZIP=-n tar --sort=name -zcvf ... If your version of tar doesn’t support --sort , use this instead: find source -print0 | LC_ALL=C sort -z | GZIP=-n tar --no-recursion --null -T - -zcvf ... | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/438341",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/27945/"
]
} |
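One plausible single-pipeline approach to the question above (not taken from the stored response): GNU gzip documents that zcat -f copies non-gzip input through unchanged, so plain and compressed logs can share one command:

    zcat -f mylog_1.log mylog_2.log.gz | grep text | sort | uniq -c
    # on systems whose zcat only handles .Z files, gzip -cdf is the equivalent:
    gzip -cdf mylog_1.log mylog_2.log.gz | grep text | sort | uniq -c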
438,356 | I used VirtualBox to set up a small Arch Linux 64 bit x86 machine. Everything is running fine installed on the 8GB big virutal hard disk in the VMDK format. cfdisk shows that I created two partitions: Type: Primary, Type: Linux (ext4), Bootable, First Partition, Size: 7.5G Type: Primary, Type: Swap, Solaris, Bootable, Second Partition, Size: 0.5G I heavily customized Arch Linux and now want to make real bootable media out of it. 1. How could I create a bootable media without the complicated archiso installation process, but directly using a drive image that I create somehow? In fact, I tried already creating my bootable media. I used Ubuntu to convert my os.vmdk to /dev/sdc using the command qemu-img convert os.vmdk -O raw /dev/sdc Once completed, I rebooted to the PC's boot menu but the pendrive was not showing up. I turned off secure boot and enabled Legace Boot Support . Nothing helped. Hence, I entered the Grub command line of my Ubuntu installation and entered set root=(hd1,msdos2) # hd1 is my usb stick, msdos1 is swap, msdos2 is bootableconfigfile /boot/grub/grub.cfg # in order to load the usb's grub 2. Some strange things happened then I put the stick into my laptop -> USB 2.0 port (I have got a USB 3.0 stick) -> Booting... -> Graphical Arch Linux Splash Screen -> Emergency Shell stating something like Can't run fsck. Error: Can't find drive with UUID=... (However, the UUID is absolutely correct. I changed that on Ubuntu and within Arch itself in /etc/fstab! When I do blkid on the emergency shell, the USB drive is simply not there. Instead I get my internal SSD on /dev/sda. I even do not find any usb drive in /dev/* and dmesg states nothing as well!) I put the stick into my laptop -> USB 3.0 port -> The grub command line is not detecting the drive. There is only hd0 but not hd1 . However I sometimes boot from my external hard drive connected via usb 3 using the bios boot menu. I put the stick into my workstation PC on a USB 3.0 port -> Not in BIOS boot menu, started via grub and configfile command -> It booted whereas it did not boot on the USB 3.0 port on my laptop -> and then the same emergency shell as in 1. I put the stick into my workstation PC on a USB 2.0 port -> Started via GRUB -> It boots successfully without any issues. I have no clue, what I should do!? I basically just want to make an image out of my existing virtual Arch Linux that I can transfer to any device by simply copying it without a complicated Arch install. Any help is appreciated. | From the looks of things you’re probably being bitten by gzip timestamps ; to avoid those, run GZIP=-n tar -zcvf ... Note that to get fully reproducible tarballs, you should also impose the sort order used by tar : GZIP=-n tar --sort=name -zcvf ... If your version of tar doesn’t support --sort , use this instead: find source -print0 | LC_ALL=C sort -z | GZIP=-n tar --no-recursion --null -T - -zcvf ... | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/438356",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/285523/"
]
} |
438,460 | I am trying to copy a file (*.crt) from local to remote server. Unfortunately, I don't have rights to open the sshd_config file on remote server. Someone in our team configured an ssh agent for me; I'm not sure where he put this script, but I can connect to this remote server without a problem. Here is the output of the following command: scp -vvv /cygdrive/c/Users/myaccount/Downloads/certs/*.crt user@server:/tmp >$ scp -vvv /cygdrive/c/Users/myaccount/Downloads/certs/*.crt user@server:/tmpExecuting: program /usr/bin/ssh host server, user user, command scp -v -d -t /tmpOpenSSH_7.5p1, OpenSSL 1.0.2k 26 Jan 2017debug2: resolving "server" port 22debug2: ssh_connect_direct: needpriv 0debug1: Connecting to server [124.67.80.20] port 22.debug1: Connection established.debug1: key_load_public: No such file or directorydebug1: identity file /home/myaccount/.ssh/id_rsa type -1debug1: key_load_public: No such file or directorydebug1: identity file /home/myaccount/.ssh/id_rsa-cert type -1debug1: key_load_public: No such file or directorydebug1: identity file /home/myaccount/.ssh/id_dsa type -1debug1: key_load_public: No such file or directorydebug1: identity file /home/myaccount/.ssh/id_dsa-cert type -1debug1: key_load_public: No such file or directorydebug1: identity file /home/myaccount/.ssh/id_ecdsa type -1debug1: key_load_public: No such file or directorydebug1: identity file /home/myaccount/.ssh/id_ecdsa-cert type -1debug1: key_load_public: No such file or directorydebug1: identity file /home/myaccount/.ssh/id_ed25519 type -1debug1: key_load_public: No such file or directorydebug1: identity file /home/myaccount/.ssh/id_ed25519-cert type -1debug1: Enabling compatibility mode for protocol 2.0debug1: Local version string SSH-2.0-OpenSSH_7.5debug1: Remote protocol version 2.0, remote software version OpenSSH_7.4debug1: match: OpenSSH_7.4 pat OpenSSH* compat 0x04000000debug2: fd 3 setting O_NONBLOCKdebug1: Authenticating to server:22 as 'user'debug3: hostkeys_foreach: reading file "/home/myaccount/.ssh/known_hosts"debug3: record_hostkey: found key type ECDSA in file /home/myaccount/.ssh/known_hosts:1debug3: load_hostkeys: loaded 1 keys from serverdebug3: order_hostkeyalgs: prefer hostkeyalgs: [email protected],[email protected],[email protected],ecdsa-sha2-nistp256,ecdsa-sha2-nistp384,ecdsa-sha2-nistp521debug3: send packet: type 20debug1: SSH2_MSG_KEXINIT sentdebug3: receive packet: type 20debug1: SSH2_MSG_KEXINIT receiveddebug2: local client KEXINIT proposaldebug2: KEX algorithms: curve25519-sha256,[email protected],ecdh-sha2-nistp256,ecdh-sha2-nistp384,ecdh-sha2-nistp521,diffie-hellman-group-exchange-sha256,diffie-hellman-group16-sha512,diffie-hellman-group18-sha512,diffie-hellman-group-exchange-sha1,diffie-hellman-group14-sha256,diffie-hellman-group14-sha1,ext-info-cdebug2: host key algorithms: [email protected],[email protected],[email protected],ecdsa-sha2-nistp256,ecdsa-sha2-nistp384,ecdsa-sha2-nistp521,[email protected],[email protected],ssh-ed25519,rsa-sha2-512,rsa-sha2-256,ssh-rsadebug2: ciphers ctos: [email protected],aes128-ctr,aes192-ctr,aes256-ctr,[email protected],[email protected],aes128-cbc,aes192-cbc,aes256-cbcdebug2: ciphers stoc: [email protected],aes128-ctr,aes192-ctr,aes256-ctr,[email protected],[email protected],aes128-cbc,aes192-cbc,aes256-cbcdebug2: MACs ctos: [email protected],[email protected],[email protected],[email protected],[email protected],[email protected],[email protected],hmac-sha2-256,hmac-sha2-512,hmac-sha1debug2: MACs stoc: [email protected],[email 
protected],[email protected],[email protected],[email protected],[email protected],[email protected],hmac-sha2-256,hmac-sha2-512,hmac-sha1debug2: compression ctos: none,[email protected],zlibdebug2: compression stoc: none,[email protected],zlibdebug2: languages ctos:debug2: languages stoc:debug2: first_kex_follows 0debug2: reserved 0debug2: peer server KEXINIT proposaldebug2: KEX algorithms: [email protected],diffie-hellman-group-exchange-sha256,ecdh-sha2-nistp256,ecdh-sha2-nistp384,ecdh-sha2-nistp521debug2: host key algorithms: ssh-rsa,rsa-sha2-512,rsa-sha2-256,ecdsa-sha2-nistp256debug2: ciphers ctos: aes128-ctr,aes192-ctr,aes256-ctrdebug2: ciphers stoc: aes128-ctr,aes192-ctr,aes256-ctrdebug2: MACs ctos: hmac-sha2-256,hmac-sha2-512debug2: MACs stoc: hmac-sha2-256,hmac-sha2-512debug2: compression ctos: none,[email protected]: compression stoc: none,[email protected]: languages ctos:debug2: languages stoc:debug2: first_kex_follows 0debug2: reserved 0debug1: kex: algorithm: [email protected]: kex: host key algorithm: ecdsa-sha2-nistp256debug1: kex: server->client cipher: aes128-ctr MAC: hmac-sha2-256 compression: nonedebug1: kex: client->server cipher: aes128-ctr MAC: hmac-sha2-256 compression: nonedebug3: send packet: type 30debug1: expecting SSH2_MSG_KEX_ECDH_REPLYdebug3: receive packet: type 31debug1: Server host key: ecdsa-sha2-nistp256 SHA256:l19LX/CQNR9zxuvQpVrQn764H6u6wVxoprYFe6Z+Pf0debug3: hostkeys_foreach: reading file "/home/myaccount/.ssh/known_hosts"debug3: record_hostkey: found key type ECDSA in file /home/myaccount/.ssh/known_hosts:1debug3: load_hostkeys: loaded 1 keys from serverdebug3: hostkeys_foreach: reading file "/home/myaccount/.ssh/known_hosts"debug3: record_hostkey: found key type ECDSA in file /home/myaccount/.ssh/known_hosts:1debug3: load_hostkeys: loaded 1 keys from 172.27.40.30debug1: Host 'server' is known and matches the ECDSA host key.debug1: Found key in /home/myaccount/.ssh/known_hosts:1debug3: send packet: type 21debug2: set_newkeys: mode 1debug1: rekey after 4294967296 blocksdebug1: SSH2_MSG_NEWKEYS sentdebug1: expecting SSH2_MSG_NEWKEYSdebug3: receive packet: type 21debug1: SSH2_MSG_NEWKEYS receiveddebug2: set_newkeys: mode 0debug1: rekey after 4294967296 blocksdebug2: key: /home/myaccount/.ssh/id_rsa (0x600072020), agentdebug2: key: /home/myaccount/.ssh/id_rsa (0x0)debug2: key: /home/myaccount/.ssh/id_dsa (0x0)debug2: key: /home/myaccount/.ssh/id_ecdsa (0x0)debug2: key: /home/myaccount/.ssh/id_ed25519 (0x0)debug3: send packet: type 5debug3: receive packet: type 7debug1: SSH2_MSG_EXT_INFO receiveddebug1: kex_input_ext_info: server-sig-algs=<rsa-sha2-256,rsa-sha2-512>debug3: receive packet: type 6debug2: service_accept: ssh-userauthdebug1: SSH2_MSG_SERVICE_ACCEPT receiveddebug3: send packet: type 50debug3: receive packet: type 51debug1: Authentications that can continue: publickeydebug3: start over, passed a different list publickeydebug3: preferred publickey,keyboard-interactive,passworddebug3: authmethod_lookup publickeydebug3: remaining preferred: keyboard-interactive,passworddebug3: authmethod_is_enabled publickeydebug1: Next authentication method: publickeydebug1: Offering RSA public key: /home/myaccount/.ssh/id_rsadebug3: send_pubkey_testdebug3: send packet: type 50debug2: we sent a publickey packet, wait for replydebug3: receive packet: type 60debug1: Server accepts key: pkalg rsa-sha2-512 blen 279debug2: input_userauth_pk_ok: fp SHA256:0Ye9/EO8URVsdDLmSgDFlACsxRCJVSTtTmwNYr8SpZEdebug3: sign_and_send_pubkey: RSA 
SHA256:0Ye9/EO8URVsdDLmSgDFlACsxRCJVSTtTmwNYr8SpZEdebug3: send packet: type 50debug3: receive packet: type 52debug1: Authentication succeeded (publickey).Authenticated to server([124.67.80.20]:22).debug2: fd 5 setting O_NONBLOCKdebug2: fd 6 setting O_NONBLOCKdebug1: channel 0: new [client-session]debug3: ssh_session2_open: channel_new: 0debug2: channel 0: send opendebug3: send packet: type 90debug1: Requesting [email protected]: send packet: type 80debug1: Entering interactive session.debug1: pledge: networkdebug3: receive packet: type 80debug1: client_input_global_request: rtype [email protected] want_reply 0debug3: receive packet: type 91debug2: callback startdebug2: fd 3 setting TCP_NODELAYdebug3: ssh_packet_set_tos: set IP_TOS 0x08debug2: client_session2_setup: id 0debug1: Sending command: scp -v -d -t /tmpdebug2: channel 0: request exec confirm 1debug3: send packet: type 98debug2: callback donedebug2: channel 0: open confirm rwindow 0 rmax 32768debug2: channel 0: rcvd adjust 2097152debug3: receive packet: type 99debug2: channel_input_status_confirm: type 99 id 0debug2: exec request accepted on channel 0debug3: send packet: type 1debug1: channel 0: free: client-session, nchannels 1debug3: channel 0: status: The following connections are open: ^here everything hangs and after few minutes, I click Ctrl+C then comes this: #0 client-session (t4 r0 i0/0 o0/0 fd 5/6 cc -1)debug3: fd 0 is not O_NONBLOCKdebug3: fd 1 is not O_NONBLOCKKilled by signal 2. Where can the problem lie? @roaima Here is result of ls -ld : ╔═myaccount ▷ w00d76:[~]:╚> ls -ld /cygdrive/c/Users/myaccount /Downloads/certs/*.crt-rwx------+ 1 myaccount Domain Users 5037 17. Apr 12:40 /cygdrive/c/Users/myaccount/Downloads/certs/dm.cogist.com_server.crt-rwx------+ 1 myaccount Domain Users 5033 17. Apr 12:37 /cygdrive/c/Users/myaccount/Downloads/certs/dm1.cogist.ch_server.crt-rwx------+ 1 myaccount Domain Users 5037 17. Apr 12:41 /cygdrive/c/Users/myaccount/Downloads/certs/dm2.cogist.ch_server.crt-rwx------+ 1 myaccount Domain Users 5041 17. Apr 12:38 /cygdrive/c/Users/myaccount/Downloads/certs/dm1.cogist.com_server.crt-rwx------+ 1 myaccount Domain Users 5053 17. Apr 12:35 /cygdrive/c/Users/myaccount/Downloads/certs/dm3.cogist.ch_server.crt-rwx------+ 1 myaccount Domain Users 5069 17. Apr 12:36 /cygdrive/c/Users/myaccount/Downloads/certs/dm3.cogist.com_server.crt-rwx------+ 1 myaccount Domain Users 5025 17. Apr 12:30 /cygdrive/c/Users/myaccount/Downloads/certs/dm4.cogist.ch_server.crt-rwx------+ 1 myaccount Domain Users 5025 17. Apr 12:35 /cygdrive/c/Users/myaccount/Downloads/certs/dm5.cogist.ch_server.crt-rwx------+ 1 myaccount Domain Users 5021 17. Apr 12:33 /cygdrive/c/Users/myaccount/Downloads/certs/dm6.cogist.ch_server.crt-rwx------+ 1 myaccount Domain Users 5029 17. Apr 12:39 /cygdrive/c/Users/myaccount/Downloads/certs/dm7.cogist.ch_server.crt-rwx------+ 1 myaccount Domain Users 5025 17. Apr 12:40 /cygdrive/c/Users/myaccount/Downloads/certs/dm8.cogist.ch_server.crt-rwx------+ 1 myaccount Domain Users 5029 17. Apr 12:32 /cygdrive/c/Users/myaccount/Downloads/certs/dm9.cogist.ch_server.crt @roaima I'm logged in using ssh myaccount@server and here was no question. 
╔═myaccount ▷ w00d76:[~]:╚> ssh myaccount@serverLast login: Wed Apr 18 11:38:30 2018 from w00d76.net.ch server.net.ch Inventory number: 25422250 OS responsible: IT245 APPL responsible: IT245 APPL description: Gateway Server Server function: Produktion Red Hat Enterprise Linux Server release 7.4 (Maipo) (x86_64) IT2 Operations [email protected] "akunamatata -> no worries mate .."╔═myaccount ▷ server:[~]:╚> | debug1: Sending command: scp -v -d -t /tmp[...]debug2: exec request accepted on channel 0 SCP works by opening an SSH connection to the remote server, then invoking another copy of the scp program there. The two scp instances communicate with each other through the SSH link. According to the log, your scp client successfully connected to the server, authenticated, and requested for the remote server to invoke scp to receive the files. However, it appears that the remote scp instance didn't actually start up correctly. One of these reasons seem to be likely: You have something in your .bashrc, .profile, or similar file on the remote system which prevented scp from starting. The remote server invokes requested commands using your login shell by running the equivalent of $SHELL -c 'the-requested-command' . Some things which you can put in your shell configuration files will prevent the shell from running the command. For example, if your .bashrc exec'ed a different shell, that would prevent scp from working. Since you authenticated using an SSH key, your probably have an entry for the SSH key in the remote system's .ssh/authorized_keys file. There's a directive named ForceCommand that can be placed in the authorized_keys file. If the key is subject to a forced command, then any request by the client to run a program will invoke the forced command, instead of the command requested by the client. The scp program on the remote system may be malfunctioning. Or perhaps someone has replaced it with a different program. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/438460",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/286743/"
]
} |
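A few hedged checks matching the failure modes listed in the answer; the user, host and paths are the question's placeholders:

    ssh user@server /bin/true ; echo "exit status: $?"             # should print 0 and nothing else; stray output points at .bashrc/.profile
    ssh user@server 'command -v scp'                               # confirms a remote scp exists on the PATH
    ssh user@server 'grep -n "command=" ~/.ssh/authorized_keys'    # a match means a forced command replaces the requested scp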
438,505 | I want a script to run minutely, but stop executing it during the nighttime. I tried OnCalendar=*-*-* 05..00:*:00 but that lead to Failed to parse calendar specification, ignoring: *-*-* 05..00:*:00Timer unit lacks value setting. Refusing Whats wrong here? | You're using an invalid time range. When using BEGIN..END , END must be later than BEGIN . Obviously, 00 is earlier than 05 so 05..00 errors out. You need OnCalendar=*-*-* 05..23:*:00 This will run your script every minute from 05:00 until 23:59 . I assume that was your intent. If instead you wanted to run from 05:00 until 0:59 you would use OnCalendar=*-*-* 00,05..23:*:00 | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/438505",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/170631/"
]
} |
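A minimal timer sketch built around the accepted expression; the unit name myjob is an assumption and a matching myjob.service is presumed to exist:

    # myjob.timer
    [Unit]
    Description=Run myjob every minute between 05:00 and 23:59

    [Timer]
    OnCalendar=*-*-* 05..23:*:00

    [Install]
    WantedBy=timers.target

On a reasonably recent systemd, the expression can be checked before installing it:

    systemd-analyze calendar "*-*-* 05..23:*:00"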