source_id (int64, 1-74.7M) | question (string, 0-40.2k) | response (string, 0-111k) | metadata (dict)
---|---|---|---|
475,922 | I need to edit a file like the following: auto wlx00allow-hotplug wlx00iface wlx000 inet dhcpiface wlx000 inet6 auto post-up sysctl -w net.ipv6.conf.wlx000.accept_ra=2auto wlx000 the goal is to delete the lines starting with 'iface...inet6' and also delete the next few that start with space (can be none or more than one): iface wlx000 inet6 auto post-up sysctl -w net.ipv6.conf.wlx000.accept_ra=2 and keep the rest intact for the following result: auto wlx00allow-hotplug wlx00iface wlx000 inet dhcpauto wlx000 I tried with sed using as follows: sed -i.old -r -e "/iface\s*\w*\s*inet6.*/,\${d;/^\s.*/d;}" /etc/configfile but it removes everything starting at the right place but erasing to the end. I just want to remove lines staring with space after the select iface text. | Try this adaption of your sed one liner: sed '/iface\s*\w*\s*inet6.*/,/^[^ ]/ {/^[^ i]/!d}' file It matches the range from your first pattern to the first line NOT starting with a space char, and deletes the lines starting with space or an "i" (for the leading iface ). Need to rethink should the i be required after the block. Looks like this works: sed -n '/iface\s*\w*\s*inet6.*/ {:L; n; /^[ ]/bL;}; p' file Pls try and report back. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/475922",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/134844/"
]
} |
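A self-contained way to try the accepted sed approach from the row above is to run it against a throwaway copy of the stanza. The file name and exact interface lines here are illustrative, and `\s`/`\w` assume GNU sed:

```sh
cat > interfaces.test <<'EOF'
auto wlx000
allow-hotplug wlx000
iface wlx000 inet dhcp
iface wlx000 inet6 auto
    post-up sysctl -w net.ipv6.conf.wlx000.accept_ra=2
auto wlx000
EOF

# Prints everything except the inet6 stanza and its indented continuation line(s)
sed -n '/iface\s*\w*\s*inet6.*/ {:L; n; /^[ ]/bL;}; p' interfaces.test
```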
475,971 | I would be interested in finding ways to reduce the boot time, specially in embedded-related environments. I've read somewhere of a method to avoid the kernel to load some drivers or modules but I'm completely lost and all the information I find on internet is quite complex and dense. Could anyone please suggest the general steps needed to achieve this? Maybe I'm wrong and this is nothing to do with the kernel. | The arch linux documentation Improving performance/Boot process may help you to learn how to improve the boot performance. Use systemd-analyze blame to check the timing for the enabled services, or systemd-analyze critical-chain to check the critical points then disable the unwanted services through systemctl disable service_name. or removing the un-necessary programs through apt . | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/475971",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/316046/"
]
} |
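For reference, a typical sequence of the commands mentioned in the row above might look like this; the unit name on the last line is only an example to be replaced by whatever the blame output actually shows:

```sh
systemd-analyze                   # total time spent in firmware/loader/kernel/userspace
systemd-analyze blame             # per-unit startup time, slowest first
systemd-analyze critical-chain    # the chain of units that gates boot completion
sudo systemctl disable --now NetworkManager-wait-online.service   # example unit only
```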
475,983 | When .ONESHELL is not used Makefile executes each shell commands in a separate shell. What is the benefit of this? Why doesn't makefile uses the same shell? | One reason for not running all commands associated with a receipt in a single shell instance is that a failure in one of the commands would not be detected by make . Only the final exit status of the shell would be given to make . One would have to additionally set .SHELLFLAGS to -e to get the shell to terminate early upon errors (this is required for multi-command shell invocations even without .ONESHELL if they need to fail at the first error). This is all well and good for when SHELL is a POSIX shell. A Makefile can also set SHELL to e.g. /usr/bin/perl , /usr/bin/python , or some other command interpreter. It may then be appropriate, or not, to use .ONESHELL . Making .ONESHELL the default behaviour in make would likely also break older Makefiles. Even though this is not a question relating to the POSIX standard or the compliance to that standard by GNU make , the Rationale of the POSIX specification for make has this to say about the issue at hand: The default in some advanced versions of make is to group all the command lines for a target and execute them using a single shell invocation; the System V method is to pass each line individually to a separate shell . The single-shell method has the advantages in performance and the lack of a requirement for many continued lines. However, converting to this newer method has caused portability problems with many historical makefiles, so the behavior with the POSIX makefile is specified to be the same as that of System V . It is suggested that the special target .ONESHELL be used as an implementation extension to achieve the single-shell grouping for a target or group of targets. GNU make is POSIX compliant in this respect as it implements the System V behaviour and provides a .ONESHELL target for enabling the alternative behaviour, if wanted. ... which is another reason for GNU make to keep the current behaviour. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/475983",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/220462/"
]
} |
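A quick way to see the per-line-shell behaviour described above, assuming GNU make 3.82 or later (which introduced `.ONESHELL`); the makefiles are throwaway demos:

```sh
printf 'demo:\n\tcd /tmp\n\tpwd\n' > Makefile.demo
make -f Makefile.demo       # pwd prints the starting directory: each line runs in its own shell

printf '.ONESHELL:\ndemo:\n\tcd /tmp\n\tpwd\n' > Makefile.oneshell
make -f Makefile.oneshell   # pwd prints /tmp: the whole recipe runs in a single shell
```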
475,987 | I have a raid5 array with quite large disks, so reconstruction is really slow in case of a power outage. Thankfully, there is the --write-journal option for linux md raid. The man page lists the --write-journal option in the For create, build, or grow: section, so I supposed it should work in grow mode, and tried to add a write journal on the fly: # mdadm --grow /dev/md1 --write-journal /dev/ssd/md1-journalmdadm: :option --write-journal not valid in grow mode Does anyone know whether I can add a write journal to an existing array? And if so, how? | It kind of should work like this: # mdadm --manage /dev/md42 --readonly --add-journal /dev/loop3mdadm: Journal added successfully, making /dev/md42 read-writemdadm: added /dev/loop3 However, currently (using kernel 4.18, mdadm 4.1-rc) that only seems to be possible for arrays that were created with journal in the first place. The above output was procuded after: # mdadm --create /dev/md42 --level=5 --raid-devices=3 /dev/loop[012] --write-journal /dev/loop3mdadm: Defaulting to version 1.2 metadatamdadm: array /dev/md42 started.# mdadm --manage /dev/md42 --fail /dev/loop3 --remove /dev/loop3mdadm: set /dev/loop3 faulty in /dev/md42mdadm: hot removed /dev/loop3 from /dev/md42 Creating an array without journal, all attempts to add a journal fail: # mdadm --create /dev/md42 --level=5 --raid-devices=3 /dev/loop[012]mdadm: Defaulting to version 1.2 metadatamdadm: array /dev/md42 started.# mdadm --manage /dev/md42 --readonly --add-journal /dev/loop3mdadm: /dev/md42 does not support journal device.# mdadm --manage /dev/md42 --readwrite --add /dev/loop3# echo journal > /sys/block/md42/md/dev-loop3/statebash: echo: write error: Invalid argument So it just doesn't seem to be possible yet. I have found a discussion on the linux-raid mailing list that this is a planned feature. If it has been implemented since, I don't see how. Perhaps contact the mailing list yourself to remind mdadm devs there are people who want this to work! You might have to resort to mdadm --create to re-create the raid or edit metadata of the array. Either option is a bit dangerous. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/475987",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/8846/"
]
} |
476,048 | I previously used to create image files using dd , set up a filesystem on them using mkfs and mount them to access them as mounted partitions. Later on, I have seen on the internet that many examples use losetup beforehand to make a loop device entry under /dev , and then mount it. I could not tell why one would practically need an image file to behave as a loop device and have its own /dev entry while the same behaviour can be obtained without all the hassle. Summary: In a real-life scenario, why do we need a /dev/loopX entry to be present at all, when we can just mount the fs image without it? What's the use of a loop device? | Mounts, typically, must be done on block devices. The loop driver puts a block device front-end onto your data file. If you do a loop mount without losetup then the OS does one in the background. eg $ dd if=/dev/zero of=/tmp/foo bs=1M count=100100+0 records in100+0 records out104857600 bytes (105 MB) copied, 0.0798775 s, 1.3 GB/s$ mke2fs /tmp/foomke2fs 1.42.9 (28-Dec-2013)....$ losetup $ mount -o loop /tmp/foo /mnt1 $ losetupNAME SIZELIMIT OFFSET AUTOCLEAR RO BACK-FILE/dev/loop0 0 0 1 0 /tmp/foo$ umount /mnt1$ losetup$ You may need to call losetup directly if your file image has embedded partitions in it. eg if I have this image: $ fdisk -l /tmp/foo2 Disk /tmp/foo2: 104 MB, 104857600 bytes, 204800 sectorsUnits = sectors of 1 * 512 = 512 bytesSector size (logical/physical): 512 bytes / 512 bytesI/O size (minimum/optimal): 512 bytes / 512 bytesDisk label type: dosDisk identifier: 0x1f25ff39 Device Boot Start End Blocks Id System/tmp/foo2p1 2048 204799 101376 83 Linux I can't mount that directly $ mount -o loop /tmp/foo2 /mnt1mount: /dev/loop0 is write-protected, mounting read-onlymount: wrong fs type, bad option, bad superblock on /dev/loop0, missing codepage or helper program, or other error But if I use losetup and kpartx then I can access the partitions: $ losetup -f /tmp/foo2$ losetupNAME SIZELIMIT OFFSET AUTOCLEAR RO BACK-FILE/dev/loop0 0 0 0 0 /tmp/foo2$ kpartx -a /dev/loop0$ mount /dev/mapper/loop0p1 /mnt1$ | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/476048",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/104727/"
]
} |
476,080 | Say I run some processes: #!/usr/bin/env bashfoo &bar &baz &wait; I run the above script like so: foobarbaz | cat as far as I can tell, when any of the processes write to stdout/stderr, their output never interleaves - each line of stdio seems to be atomic. How does that work? What utility controls how each line is atomic? | They do interleave! You only tried short output bursts, which remain unsplit, but in practice it's hard to guarantee that any particular output remains unsplit. Output buffering It depends how the programs buffer their output. The stdio library that most programs use when they're writing uses buffers to make output more efficient. Instead of outputting data as soon as the program calls a library function to write to a file, the function stores this data in a buffer, and only actually outputs the data once the buffer has filled up. This means that output is done in batches. More precisely, there are three output modes: Unbuffered: the data is written immediately, without using a buffer. This can be slow if the program writes its output in small pieces, e.g. character by character. This is the default mode for standard error. Fully buffered: the data is only written when the buffer is full. This is the default mode when writing to a pipe or to a regular file, except with stderr. Line-buffered: the data is written after each newline, or when the buffer is full. This is the default mode when writing to a terminal, except with stderr. Programs can reprogram each file to behave differently, and can explicitly flush the buffer. The buffer is flushed automatically when a program closes the file or exits normally. If all the programs that are writing to the same pipe either use line-buffered mode, or use unbuffered mode and write each line with a single call to an output function, and if the lines are short enough to write in a single chunk, then the output will be an interleaving of whole lines. But if one of the programs uses fully-buffered mode, or if the lines are too long, then you will see mixed lines. Here is an example where I interleave the output from two programs. I used GNU coreutils on Linux; different versions of these utilities may behave differently. yes aaaa writes aaaa forever in what is essentially equivalent to line-buffered mode. The yes utility actually writes multiple lines at a time, but each time it emits output, the output is a whole number of lines. while true; do echo bbbb; done | grep b writes bbbb forever in fully-buffered mode. It uses a buffer size of 8192, and each line is 5 bytes long. Since 5 does not divide 8192, the boundaries between writes are not at a line boundary in general. Let's pitch them together. $ { yes aaaa & while true; do echo bbbb; done | grep b & } | head -n 999999 | grep -e ab -e babbaaaabbbbaaaabaaaabbbaaaabbaaaabbbaaaaabbbbbaaa As you can see, yes sometimes interrupted grep and vice versa. Only about 0.001% of the lines got interrupted, but it happened. The output is randomized so the number of interruptions will vary, but I saw at least a few interruptions every time. There would be a higher fraction of interrupted lines if the lines were longer, since the likelihood of an interruption increases as the number of lines per buffer decreases. There are several ways to adjust output buffering . The main ones are: Turn off buffering in programs that use the stdio library without changing its default settings with the program stdbuf -o0 found in GNU coreutils and some other systems such as FreeBSD. 
You can alternatively switch to line buffering with stdbuf -oL . Switch to line buffering by directing the program's output through a terminal created just for this purpose with unbuffer . Some programs may behave differently in other ways, for example grep uses colors by default if its output is a terminal. Configure the program, for example by passing --line-buffered to GNU grep. Let's see the snippet above again, this time with line buffering on both sides. { stdbuf -oL yes aaaa & while true; do echo bbbb; done | grep --line-buffered b & } | head -n 999999 | grep -e ab -e baabbbbabbbbabbbbabbbbabbbbabbbbabbbbabbbbabbbbabbbbabbbbabbbbabbbb So this time yes never interrupted grep, but grep sometimes interrupted yes. I'll come to why later. Pipe interleaving As long as each program outputs one line at a time, and the lines are short enough, the output lines will be neatly separated. But there's a limit to how long the lines can be for this to work. The pipe itself has a transfer buffer. When a program outputs to a pipe, the data is copied from the writer program to the pipe's transfer buffer, and then later from the pipe's transfer buffer to the reader program. (At least conceptually — the kernel may sometimes optimize this to a single copy.) If there's more data to copy than fits in the pipe's transfer buffer, then the kernel copies one bufferful at a time. If multiple programs are writing to the same pipe, and the first program that the kernel picks wants to write more than one bufferful, then there's no guarantee that the kernel will pick the same program again the second time. For example, if P is the buffer size, foo wants to write 2* P bytes and bar wants to write 3 bytes, then one possible interleaving is P bytes from foo , then 3 bytes from bar , and P bytes from foo . Coming back to the yes+grep example above, on my system, yes aaaa happens to write as many lines as can fit in a 8192-byte buffer in one go. Since there are 5 bytes to write (4 printable characters and the newline), that means it writes 8190 bytes every time. The pipe buffer size is 4096 bytes. It is therefore possible to get 4096 bytes from yes, then some output from grep, and then the rest of the write from yes (8190 - 4096 = 4094 bytes). 4096 bytes leaves room for 819 lines with aaaa and a lone a . Hence a line with this lone a followed by one write from grep, giving a line with abbbb . If you want to see the details of what's going on, then getconf PIPE_BUF . will tell you the pipe buffer size on your system, and you can see a complete list of system calls made by each program with strace -s9999 -f -o line_buffered.strace sh -c '{ stdbuf -oL yes aaaa & while true; do echo bbbb; done | grep --line-buffered b & }' | head -n 999999 | grep -e ab -e ba How to guarantee clean line interleaving If the line lengths are smaller than the pipe buffer size, then line buffering guarantees that there won't be any mixed line in the output. If the line lengths can be larger, there's no way to avoid arbitrary mixing when multiple programs are writing to the same pipe. To ensure separation, you need to make each program write to a different pipe, and use a program to combine the lines. For example GNU Parallel does this by default. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/476080",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/113238/"
]
} |
476,167 | I was reading the manpage for gdb and I came across the line: You can use GDB to debug programs written in C, C@t{++}, Fortran and Modula-2. The C@t{++} looks like a regex but I can't seem to decode it. What does it mean? | GNU hates man pages, so they usually write documentation in another format and generate a man page from that, without really caring if the result is usable. C@t{++} is some texinfo markup which didn't get translated. It wasn't intended to be part of the user-visible documentation. It should simply say C++ (possibly with some special font for the ++ to make it look nice). | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/476167",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/223417/"
]
} |
476,206 | I sometimes need a command like pass in Python in my bash scripts. Like: if grep somethingthen passelse codefi In Python you have: >>> for element in a:... if not element:... pass... print element QUESTION: I always use continue but it gives an error that it should only be used in a for , while or until loop. What would you do in this circumstance? | Your title is fully answered by A do nothing line in a bash script : : or true are effectively equivalent to pass . However in these circumstances I would flip the condition: if ! grep somethingthen codefi and >>> for element in a:... if element:... print element | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/476206",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/36440/"
]
} |
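A minimal sketch of both options from the answer above; the `grep -q` pattern and file name are illustrative:

```sh
if grep -q pattern file.txt; then
    :    # no-op, the shell's equivalent of Python's pass
else
    echo "pattern missing, handling it here"
fi

# usually nicer: invert the test and drop the empty branch
if ! grep -q pattern file.txt; then
    echo "pattern missing, handling it here"
fi
```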
476,253 | I have a function in my .bashrc file. I know what it does, it steps up X many directories with cd Here it is: up(){ local d="" limit=$1 for ((i=1 ; i <= limit ; i++)) do d=$d/.. done d=$(echo $d | sed 's/^\///') if [ -z "$d" ]; then d=.. fi cd $d} But can you explain these three things from it for me? d=$d/.. sed 's/^\///' d=.. Why not just do like this: up(){ limit=$1 for ((i=1 ; i <= limit ; i++)) do cd .. done} Usage: <<<>>>~$ up 3<<<>>>/$ | d=$d/.. adds /.. to the current contents of the d variable. d starts off empty, then the first iteration makes it /.. , the second /../.. etc. sed 's/^\///' drops the first / , so /../.. becomes ../.. (this can be done using a parameter expansion, d=${d#/} ). d=.. only makes sense in the context of its condition: if [ -z "$d" ]; then d=..fi This ensures that, if d is empty at this point, you go to the parent directory. ( up with no argument is equivalent to cd .. .) This approach is better than iterative cd .. because it preserves cd - — the ability to return to the previous directory (from the user’s perspective) in one step. The function can be simplified: up() { local d=.. for ((i = 1; i < ${1:-1}; i++)); do d=$d/..; done cd $d} This assumes we want to move up at least one level, and adds n - 1 levels, so we don’t need to remove the leading / or check for an empty $d . Using Athena jot (the athena-jot package in Debian): up() { cd $(jot -b .. -s / "${1:-1}"); } (based on a variant suggested by glenn jackman ). | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/476253",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/36440/"
]
} |
476,284 | I have data and I want to summarize sentences to generate conclusions. The example below is not related to the data, but just to clarify the idea so I can replicate it. Employee Suzie signed one time.Employee Dan signed one time.Employee Jordan signed one time.Employee Suzie signed one time.Employee Suzie signed one time.Employee Harold signed one time.Employee Sebastian signed one time.Employee Jordan signed one time.Employee Suzie signed one time.Employee Suzan signed one time. I want to make a summary of these sentences, like this: Jordan signed 2 time(s)Dan signed 1 time(s)Suzie signed 4 time(s)Suzan signed 1 time(s)Sebastian signed 1 time(s)Harold signed 1 time(s) I played with awk , but it seems very hard to do it. Then I tried sed , but it didn't work. It seems sed is just for finding and changing things. | The general approach would be $ awk '{ count[$2]++ } END { for (name in count) printf("%s signed %d time(s)\n", name, count[name]) }' <fileHarold signed 1 time(s)Dan signed 1 time(s)Sebastian signed 1 time(s)Suzie signed 4 time(s)Jordan signed 2 time(s)Suzan signed 1 time(s) I.e., use an associative array/hash to store the number of times that a particular name is seen. In the END block, iterate over all the names and print out the summary for each. For slightly nicer formatting, change the %s placeholder in the printf() call to something like %-10s to reserve 10 characters for the names (left-justified). $ awk '{ count[$2]++ } END { for (name in count) printf("%-10s signed %d time(s)\n", name, count[name]) }' <fileHarold signed 1 time(s)Dan signed 1 time(s)Sebastian signed 1 time(s)Suzie signed 4 time(s)Jordan signed 2 time(s)Suzan signed 1 time(s) More fiddling around with the output (because I'm bored): $ awk '{ count[$2]++ } END { for (name in count) printf("%-10s signed %d time%s\n", name, count[name], count[name] > 1 ? "s" : "" ) }' <fileHarold signed 1 timeDan signed 1 timeSebastian signed 1 timeSuzie signed 4 timesJordan signed 2 timesSuzan signed 1 time | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/476284",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/-1/"
]
} |
476,290 | I have a Red Hat 7.3 server running XVnc. On a Windows 10 desktop I have Putty and Xming installed. Putty is configured for X11 forwarding. When I SSH in as my standard/non-privileged user and launch an X application, it displays on my Windows 10 desktop without issue. Now within same session, if I su to a more privileged account and try to run an X application, it fails with "error: can't open display". In my standard user session if I echo $DISPLAY it is automatically set for me as "IP_ADDRESS:10.0". Under my su session, $DISPLAY is null. I tried exporting the DISPLAY variable to the same value but it now a different error appears: "PuTTY X11 proxy: Unsupported authorization protocol Error: Can't open display:server_IP:10.0". How can I configure the X11 forwarding to work under the context of the other user? | The below steps should fix the issue for you. Say it's working for user1 and you want to use it for user2 For user1 : $ xauth list $DISPLAY<output1>$ echo $DISPLAY<outoput2> Switch to other user , i.e user2 $ xauth add <output1> $ export DISPLAY=<output2> Try: $ xclock | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/476290",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/316462/"
]
} |
476,303 | I have network log file like this one: Nmap scan report for 192.168.1.51Host is up.PORT STATE SERVICE80/tcp open http443/tcp open https8080/tcp open http-proxy443/tcp open https8080/tcp open http-proxy8082/tcp filtered redcap8083/tcp filtered https-altNmap scan report for 192.168.1.201Host is up.PORT STATE SERVICE80/tcp open http443/tcp filtered https8281/tcp filtered http-proxy8080/tcp open sedan8801/tcp filtered https-altNmap scan report for 192.168.1.17Host is up.PORT STATE SERVICE80/tcp closed http443/tcp closed https9081/tcp open ecanNmap scan report for 192.168.1.10Host is up.PORT STATE SERVICE80/tcp closed ftp443/tcp open https9081/tcp open standard I want to extract the IP addresses and the counts of open ports for every IP address so the result: 192.168.1.10 - 2192.168.1.201 - 2192.168.1.51 - 5192.168.1.17 - 1 | The below steps should fix the issue for you. Say it's working for user1 and you want to use it for user2 For user1 : $ xauth list $DISPLAY<output1>$ echo $DISPLAY<outoput2> Switch to other user , i.e user2 $ xauth add <output1> $ export DISPLAY=<output2> Try: $ xclock | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/476303",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/-1/"
]
} |
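A minimal awk sketch for the port-counting question above, assuming the nmap log has one entry per line as in a normal run; it counts lines whose second field is `open` under the most recent `Nmap scan report` header:

```sh
awk '/^Nmap scan report for/ { ip = $NF }
     $2 == "open"            { count[ip]++ }
     END { for (ip in count) printf "%s - %d\n", ip, count[ip] }' scan.log
```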
476,351 | I've been getting java.io.IOException: Too many open files while running a Kafka instance and using one topic with 1000 partitions so I started investigating the file descriptors limits in my ec2 vm. I cannot understand which is exactly the limit for open files on a Centos 7 machine since all the following commands produce different results. The commands are: ulimit -a : open files 1024 lsof | wc -l : 298280 cat /proc/sys/fs/file-max : 758881 (which is consistent with /proc/sys/fs/file-nr ) If the actual limit is the one the last command produces then I am well below it ( lsof | wc -l : 298280). But if this is the case, the output of the ulimit command is quite unclear to me since I am well above the 1024 open files. According to the official documentation the best way to check for file descriptors in Centos is the /proc/sys/fs/file-max file but are there all these seemingly "inconsistencies" between these commands? | file-max is the maximum number of files that can be opened across the entire system. This is enforced at the kernel level. The man page for lsof states that: In the absence of any options, lsof lists all open files belonging to all active processes. This is consistent with your observations, since the number of files as reported by lsof is well below the file-max setting. Finally, ulimit is used to enforce resource limits at a user level. The parameter 'number of open files' is set at the user level, but is applied to each process started by that user. In this case, a single Kafka process can have up to 1024 file handles open (soft limit). You can raise this limit on your own up to the hard limit, 4096. To raise the hard limit, root access is required. If Kafka is running as a single process, you could find the number of files opened by that process by using lsof -p [PID] . Hope this clears things up. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/476351",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/316495/"
]
} |
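To tie the three numbers from the answer together for one process, something along these lines can help; the `pgrep` pattern is only a guess at how the Kafka broker process might be located:

```sh
cat /proc/sys/fs/file-max                   # kernel-wide ceiling on open file handles
ulimit -Sn; ulimit -Hn                      # soft and hard per-process limits in this shell
pid=$(pgrep -f kafka.Kafka | head -n1)      # hypothetical way to find the broker PID
grep 'Max open files' /proc/"$pid"/limits   # limits actually applied to that process
ls /proc/"$pid"/fd | wc -l                  # descriptors it currently holds open
```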
476,478 | How to concatenate text from file of lines in format: line1line2... to get results like -o line1:1 -o line2:1 ... I found solution how to concatenate with a separator like this: ds=`cat list.txt`${ds//$'\n'/','} But I can't figure out how to add prefix to each entry. | This depends on what you want to do with the string that you create. It looks like a set of command line options, so I'm going to assume that you want to use it as such together with some utility called util . Here's a solution for /bin/sh : #!/bin/shlistfile=$1set --while IFS= read -r line; do set -- "$@" -o "$line:1"done <$listfileutil "$@" This reads from the file given on the command line of the script and for each line read from that file, it sets the positional parameters to include -o and LINE:1 where LINE is the line read from the file. After reading all the lines, it calls util with the constructed list of command line arguments. By using "$@" (with the double quotes) we ensure that each individual item in the constructed list of arguments is individually quoted. With bash and using a bash array to hold the command line arguments that we create: #!/bin/bashlistfile=$1while IFS= read -r line; do args+=( -o "$line:1" )done <$listfileutil "${args[@]}" In both the examples above, the quoting is important. Likewise is the fact that we create an array of separate items (each -o and each LINE:1 are items in the list). Another way to do it would have been to create a single string -o LINE1:1 -o LINE2:1 etc. , but this would have been interpreted as one single argument if used as util "$string" and would have undergone word splitting and filename globbing if used as util $string (this would have not worked if any line in the input file contained spaces, tabs or filename globbing characters). Both scripts above would be used as $ ./script.sh file where script.sh is the executable script file and file is the input file name to read from. Related: Understanding "IFS= read -r line" | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/476478",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/237668/"
]
} |
476,522 | I have a lot of rar files - Folder/--- Spain.rar--- Germany.rar--- Italy.rar All the files contains no root folder so it's just files. What I want to achieve when extracting is this structure: - Folder/-- Spain/---- Spain_file1.txt---- Spain_file2.txt-- Germany/---- Germany_file1.txt---- Germany_file2.txt-- Italy/---- Italy_file1.txt---- Italy_file2.txt So that a folder with the name of the archive is created and the archive is extracted to it. I found this bash example in another thread but it's not working for me, it's trying to create one folder with all the files as name. #!/bin/bashfor archive in "$(find . -name '*.rar')"; do destination="${archive%.rar}" if [ ! -d "$destination" ] ; then mkdir "$destination"; fi unrar e "$archive" "$destination"done Any ideas how I can do this? | I have a script in my personal archive that does exactly this. More precisely, it extracts e.g. Spain.rar to a new directory called Spain , except that if all the files in Spain.rar are already under the same top-level directory, then this top-level directory is kept. #!/bin/sh# Extract the archive $1 to a directory $2 with the program $3. If the# archive contains a single top-level directory, that directory# becomes $2. Otherwise $2 contains all the files at the root of the# archive.extract () ( set -e archive=$1 case "$archive" in -) :;; # read from stdin /*) :;; # already an absolute path *) archive=$PWD/$archive;; # make absolute path esac target=$2 program=$3 if [ -e "$target" ]; then echo >&2 "Target $target already exists, aborting." return 3 fi case "$target" in /*) parent=${target%/*};; */[!/]*) parent=$PWD/${target%/*};; *) parent=$PWD;; esac temp=$(TMPDIR="$parent" mktemp -d) (cd "$temp" && $program "$archive") root= for member in "$temp/"* "$temp/".*; do case "$member" in */.|*/..) continue;; esac if [ -n "$root" ] || ! [ -d "$member" ]; then root=$temp # There are multiple files or there is a non-directory break fi root="$member" done if [ -z "$root" ]; then # Empty archive root=$temp fi mv -v -- "$root" "$target" if [ "$root" != "$temp" ]; then rmdir "$temp" fi)# Extract the archive $1.process () { dir=${1%.*} case "$1" in *.rar|*.RAR) program="unrar x";; *.tar|*.tgz|*.tbz2) program="tar -xf";; *.tar.gz|*.tar.bz2|*.tar.xz) program="tar -xf"; dir=${dir%.*};; *.zip|*.ZIP) program="unzip";; *) echo >&2 "$0: $1: unsupported archive type"; exit 4;; esac if [ -d "$dir" ]; then echo >&2 "$0: $dir: directory already exists" exit 1 fi extract "$1" "$dir" "$program"}for x in "$@"; do process "$x"done Usage (after installing this script in your $PATH under the name extract and making it executable): extract Folder/*.rar | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/476522",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/316654/"
]
} |
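If the full script above is more than needed, a minimal fix of the loop from the question looks like this, assuming `unrar` is installed and the archives sit in the current directory (`x` keeps any paths stored in the archive, `e` would flatten them):

```sh
for archive in ./*.rar; do
    destination="${archive%.rar}"
    mkdir -p "$destination"
    unrar x "$archive" "$destination/"
done
```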
476,533 | I am trying to create an application with remote controlling a Fingerprint sensor (Guide included on link) for enrolling and identifying fingerprints on are Raspberry PI 3 Model.There is a SDK_DEMO for this particular functionality on Windows only, which you can find in the Guile I mentioned above. SDK_DEMO is written in C++ on Visual studio so I can't manipulate the code to run it on Raspberry Pi 3. From the SDK_DEMO source code I figured out which command I need to send to execute tasks. The Commands CMD_NONE = 0x00,CMD_OPEN = 0x01,CMD_CLOSE = 0x02,CMD_USB_INTERNAL_CHECK = 0x03,CMD_CHANGE_BAUDRATE = 0x04,CMD_MODULE_INFO = 0x06,CMD_CMOS_LED = 0x12,CMD_ENROLL_COUNT = 0x20,CMD_CHECK_ENROLLED = 0x21,CMD_ENROLL_START = 0x22,CMD_ENROLL = 0x23,CMD_ENROLL1 = 0x23,CMD_ENROLL2 = 0x24,CMD_ENROLL3 = 0x25,CMD_IS_PRESS_FINGER = 0x26,CMD_DELETE = 0x40,CMD_DELETE_ALL = 0x41,CMD_VERIFY = 0x50,CMD_IDENTIFY = 0x51,CMD_VERIFY_TEMPLATE = 0x52,CMD_IDENTIFY_TEMPLATE = 0x53,CMD_CAPTURE = 0x60,CMD_GET_IMAGE = 0x62,CMD_GET_RAWIMAGE = 0x63,CMD_GET_TEMPLATE = 0x70,CMD_ADD_TEMPLATE = 0x71,CMD_GET_DATABASE_START = 0x72,CMD_GET_DATABASE_END = 0x73,CMD_FW_UPDATE = 0x80,CMD_ISO_UPDATE = 0x81,CMD_FAKE_DETECTOR = 0x91,CMD_SET_SECURITY_LEVEL = 0xF0,CMD_GET_SECURITY_LEVEL = 0xF1,ACK_OK = 0x30,NACK_INFO = 0x31, SKD_DEMO recognised the FingerPrint sensor as Mass Storage and somehow was running the commands like that. In Ubuntu though when I plug in the usb device I don't get any Mass storage mounting and on lsusb I get this: I have be searching about this and tried to echo "0x12" >> /dev/bus/usb/001/008 But I got a write error for invalid argument. Here are the terminal commands for the echo attempt: Is there a way I can send raw commands with this format and executing actions without needing to write a driver for this USB device on Linux? | I have a script in my personal archive that does exactly this. More precisely, it extracts e.g. Spain.rar to a new directory called Spain , except that if all the files in Spain.rar are already under the same top-level directory, then this top-level directory is kept. #!/bin/sh# Extract the archive $1 to a directory $2 with the program $3. If the# archive contains a single top-level directory, that directory# becomes $2. Otherwise $2 contains all the files at the root of the# archive.extract () ( set -e archive=$1 case "$archive" in -) :;; # read from stdin /*) :;; # already an absolute path *) archive=$PWD/$archive;; # make absolute path esac target=$2 program=$3 if [ -e "$target" ]; then echo >&2 "Target $target already exists, aborting." return 3 fi case "$target" in /*) parent=${target%/*};; */[!/]*) parent=$PWD/${target%/*};; *) parent=$PWD;; esac temp=$(TMPDIR="$parent" mktemp -d) (cd "$temp" && $program "$archive") root= for member in "$temp/"* "$temp/".*; do case "$member" in */.|*/..) continue;; esac if [ -n "$root" ] || ! 
[ -d "$member" ]; then root=$temp # There are multiple files or there is a non-directory break fi root="$member" done if [ -z "$root" ]; then # Empty archive root=$temp fi mv -v -- "$root" "$target" if [ "$root" != "$temp" ]; then rmdir "$temp" fi)# Extract the archive $1.process () { dir=${1%.*} case "$1" in *.rar|*.RAR) program="unrar x";; *.tar|*.tgz|*.tbz2) program="tar -xf";; *.tar.gz|*.tar.bz2|*.tar.xz) program="tar -xf"; dir=${dir%.*};; *.zip|*.ZIP) program="unzip";; *) echo >&2 "$0: $1: unsupported archive type"; exit 4;; esac if [ -d "$dir" ]; then echo >&2 "$0: $dir: directory already exists" exit 1 fi extract "$1" "$dir" "$program"}for x in "$@"; do process "$x"done Usage (after installing this script in your $PATH under the name extract and making it executable): extract Folder/*.rar | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/476533",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/316662/"
]
} |
476,536 | In the following json file, { "email": "xxx", "pass": "yyy", "contact": [ { "id": 111, "name": "AAA" } ], "lname": "YYY", "name": "AAA", "group": [ { "name": "AAA", "lname": "YYY", } ], I need to look for the key "name" and replace its value to "XXX" at all places. Which jq command does that ? | jq's assignment operations can perform an update on as many locations at once as you can name and are made for this sort of situation. You can use jq '(.. | .name?) |= "XXXX"' to find every field called "name" anywhere and replace the value in each all at once with "XXXX", and output the resulting object. This is just the ..|.a? example from the recursive-descent documentation combined with update assignment . It uses the recursive descent operator .. to find every single value in the tree, then pulls out the "name" field from each of them with .name , suppresses any errors from non-matching values with ? , and then updates the object in all those places at once with "XXXX" using the update-assignment operator |= , and outputs the new object. This will work no matter what the file structure is and update every name field everywhere. Alternatively, if the file always has this structure, and it's those particular "name" fields you want to change , not just any old name, you can also just list them out and assign to them as a group as well: jq '(.name, .contact[].name, .group[].name) |= "XXXX"' This does the same assignment to the "name" field of the top-level object; the "name" field of every object in the "contact" array; and the "name" field of every object in the "group" array. all in one go. This is particularly useful if the file might have other name fields in there somewhere unrelated that you don't want to change. It finds just the three sets of locations named there and updates them all simultaneously. If the value is just a literal like it is here then plain assignment with = works too and saves you a character: (..|.name?)="XXXX" - you'd also want this if your value is computed based on the whole top-level object. If instead you want to compute the new name based on the old one, you need to use |= . If I'm not sure what to use, |= generally has slightly nicer behaviour in the corner cases. If you have multiple replacements to do , you can pipe them together: jq '(..|.name?) = "XXXX" | (..|.lname?) = "1234"' will update both the "name" and "lname" fields everywhere, and output the whole updated object once. A few other approaches that may work: You could also be really explicit about what you're selecting with (..|objects|select(has("name"))).name |= "XXXX"` which finds everything, then just the objects, then just the objects that have a "name", then the name field on those objects, and performs the same update as before. If you're running the development version of jq (unlikely) then the walk function can also do the job: walk(.name?="XXXX") . All the other versions will work on the latest released version, 1.5. An alternative multi-update could be jq '(..|select(has("name"))?) += {name: "XXXX", lname: "1234"}' which finds everything with a name and then sets both "name" and "lname" on each object using arithmetic update-assignment *= and the merging behaviour that + has for objects . | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/476536",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/316642/"
]
} |
476,593 | I am basically putting all my settings into my .bashrc and when I was using zsh it was all in my .zshrc. The Rust installer just informed me that it has added the new installation to my PATH by modifying .profile. When should things go into ~/.profile ? Is it only doing that because it doesn't know which shell I am using or should all somewhat general settings be in .profile? | .profile is read by every login shell, .xxxrc is read by every interactive shell after reading .profile . You need to decide yourself depending on what you like to add. A good idea is to put everything that sets exported environment variables and thus propagates to sub shells into .profile. Things that are not propagated should be in .bashrc or whatever your shell looks into. This is e.g. alias and function definitions. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/476593",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/239456/"
]
} |
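As a concrete split following the rule above — the `~/.cargo/bin` line is the kind of thing the Rust installer appends to `~/.profile`, the rest are generic examples:

```sh
# ~/.profile — read by login shells; exported variables propagate to child processes
export PATH="$HOME/.cargo/bin:$PATH"
export EDITOR=vim

# ~/.bashrc — read by each interactive bash; aliases and functions are not inherited
alias ll='ls -l'
mkcd() { mkdir -p "$1" && cd "$1"; }
```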
476,684 | my system arch is amd64 , i installed i386 as foreign arch and ran sudo apt dist-upgrade but after it finished, i keep getting this error while using apt: apt-get: relocation error: /usr/lib/x86_64-linux-gnu/libapt-private.so.0.0: symbol ZN3URIcvNSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEEB5cxx11Ev version APTPKG_5.0 not defined in file libapt-pkg.so.5.0 with link time reference even when i use aptitude: aptitude: relocation error: aptitude: symbol ZN3URIcvNSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEEB5cxx11Ev version APTPKG_5.0 not defined in file libapt-pkg.so.5.0 with link time reference any solution? | This is bug #911090 . To work around it, you need to ensure that the apt and libapt-pkg5.0 packages are kept in sync; you might need to download them manually starting with the links at the top of this page . I’m not sure there’s a fix for aptitude yet. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/476684",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/298117/"
]
} |
476,723 | I am trying to connect my Android phone to the KDE Connect app with my laptop. My system is a Minimal Debian Sid system with just i3wm no other alternate Desktop Environments or Window managers installed. When I initiate a pair request from my phone to the computer I get a notification as seen in the following screenshot. When I click on the notification nothing happens. On desktop environments such as Gnome or KDE, the notification also has an accept or reject button, which is not the case with the default notification handler of i3wm. So how do I get my laptop get paired with KDE Connect now? Any alternate notification handlers which would do the job here? I similar situation occurred to me a few months ago when I was trying to pair a bluetooth speaker to my laptop, which required me to enter a pairing key code which was not possible through a notification setup like this. Details of my setup:Debian GNU/Linux Unstable(sid) WM: i3 After following the instructions by cocoa1231 I tried launching the daemon from /usr/lib/ rajudev@sanganak:/usr/lib/x86_64-linux-gnu/libexec$ ./kdeconnectd kdeconnect.core: KdeConnect daemon startingkdeconnect.core: onStartkdeconnect.core: KdeConnect daemon startedkdeconnect.core: Broadcasting identity packetkdeconnect.core: TCP connection done (i'm the existing device)kdeconnect.core: Starting server ssl (I'm the client TCP socket)kdeconnect.core: TCP connection done (i'm the existing device)kdeconnect.core: Starting server ssl (I'm the client TCP socket)kdeconnect.core: Socket successfully established an SSL connectionkdeconnect.core: It is a new device "xiaomi"kdeconnect.core: Socket successfully established an SSL connectionkdeconnect.core: It is a known device "xiaomi"kdeconnect.core: TCP connection done (i'm the existing device)kdeconnect.core: Starting server ssl (I'm the client TCP socket)kdeconnect.core: TCP connection done (i'm the existing device)kdeconnect.core: Starting server ssl (I'm the client TCP socket)kdeconnect.core: Socket successfully established an SSL connectionkdeconnect.core: It is a known device "xiaomi"kdeconnect.core: Socket successfully established an SSL connectionkdeconnect.core: It is a known device "xiaomi"kdeconnect.core: creating pairing handler for "22d1625020250fbf"kdeconnect.core: Pair requestkdeconnect.core: Sending onNetworkChange to 1 LinkProviderskdeconnect.core: Broadcasting identity packetkdeconnect.core: Starting client ssl (but I'm the server TCP socket)kdeconnect.core: Socket successfully established an SSL connectionkdeconnect.core: It is a known device "xiaomi"Device pairing error "Timed out"kdeconnect.core: TCP connection done (i'm the existing device)kdeconnect.core: Starting server ssl (I'm the client TCP socket)kdeconnect.core: TCP connection done (i'm the existing device)kdeconnect.core: Starting server ssl (I'm the client TCP socket)kdeconnect.core: Socket successfully established an SSL connectionkdeconnect.core: It is a known device "xiaomi"kdeconnect.core: Socket successfully established an SSL connectionkdeconnect.core: It is a known device "xiaomi" | You're going to want to run /usr/lib/kdeconnectd and add that in your i3config exec --no-startup-id /usr/lib/kdeconnectd so that it works every time. And launch the settings through the indicator. Weirdly it doesn't launch directly. 
Gotta launch the indicator and from there launch the settings (during pairing process) For the pairing process, dunst doesn't support interactive notifications, so open up the KDE Connect Indicator and launch the settings from the indicator and when you try to pair you can accept from the configure dialog. Here | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/476723",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/117745/"
]
} |
476,731 | I am looking for a recommendation for virtualization software for IBM P5 series (ppc 64). I have Fedora 23 installed and just want to run 2-3 virtual machines but as I understood, given specific architecture of the computer, not every virtualization software will work. | You're going to want to run /usr/lib/kdeconnectd and add that in your i3config exec --no-startup-id /usr/lib/kdeconnectd so that it works every time. And launch the settings through the indicator. Weirdly it doesn't launch directly. Gotta launch the indicator and from there launch the settings (during pairing process) For the pairing process, dunst doesn't support interactive notifications, so open up the KDE Connect Indicator and launch the settings from the indicator and when you try to pair you can accept from the configure dialog. Here | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/476731",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/316812/"
]
} |
476,852 | I've installed Ubuntu alongside Windows 7. When i try to mount /mnt/sda1 which is Windows part on it, i take error such that; "The device '/dev/sda1' doesn't seem to have a valid NTFS." NTFS signature is missing.Failed to mount '/dev/sda1': Invalid argumentThe device '/dev/sda1' doesn't seem to have a valid NTFS.Maybe the wrong device is used? Or the whole disk instead of apartition (e.g. /dev/sda, not /dev/sda1)? Or the other way around? It is the result when i command fdisk -l; Disk /dev/sda: 298,1 GiB, 320072933376 bytes, 625142448 sectorsUnits: sectors of 1 * 512 = 512 bytesSector size (logical/physical): 512 bytes / 512 bytesI/O size (minimum/optimal): 512 bytes / 512 bytesDisklabel type: dosDisk identifier: 0x29af3b15Device Boot Start End Sectors Size Id Type/dev/sda1 2048 546911727 546909680 260,8G 7 HPFS/NTFS/exFAT/dev/sda2 546912254 625141759 78229506 37,3G 5 Extended/dev/sda5 * 546912256 625141759 78229504 37,3G 83 Linux | To get the exact information about the bootable windows partition before executing ntfsfix : sudo file -s /dev/sda1 Then use ntfsfix to fix this problem: sudo ntfsfix /dev/sda1 Finally mount your partition. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/476852",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/316918/"
]
} |
476,883 | I have this program that can run with both a text user interface and a graphical user interface. It lacks any command line switch to force one or the other, rather I guess it somehow auto-detects whether we are in X or not (e.g. if I run it from a virtual terminal it enters its text mode, and if I run it from an X terminal emulator it opens a separate graphical window). I'd like to force it into text mode and have it run inside the X terminal. How would I go about doing it? | Usually just unset DISPLAY in command-line of the terminal. Some applications are smarter than that, and actually check permissions and type of the console versus pseudoterminal. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/476883",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/271462/"
]
} |
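Two ways to do that for a single invocation without disturbing the rest of the session; `someprogram` stands in for the application in question:

```sh
( unset DISPLAY; someprogram )     # subshell: DISPLAY stays set in your terminal
env -u DISPLAY someprogram         # same effect, without a subshell
```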
476,971 | I want to create a chroot environment that has access to hand-picked programs but is completely isolated from the rest of the system. I created three folders in this chroot folder: bin , lib , lib64 . I then copied an executable, in this case /bin/bash into bin . ldd /bin/bash shows this output: linux-vdso.so.1 => (0x00007ffff01f6000)libtinfo.so.5 => /lib/x86_64-linux-gnu/libtinfo.so.5 (0x00007f35ed501000)libdl.so.2 => /lib/x86_64-linux-gnu/libdl.so.2 (0x00007f35ed2fd000)libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007f35ecf33000)/lib64/ld-linux-x86-64.so.2 (0x00007f35ed72a000) I can copy all of these libraries, except linux-vdso.so.1 . If I sudo find / -name "linux-vdso.so.1" I get no output. What should I do now? | The VDSO is special, it is directly provided by the kernel. You see that it has addresses, even if it doesn't have a file name, so it got mapped fine. You don't need to do anything to get the VDSO in the chroot. The kernel VDSO is a collection of kernel functions that don't always require a mode switch, e.g. reading exact timers is handled by the rdtsc assembler instruction on processors that support it, and by a kernel syscalls on processors that don't. If this were a normal system call, modern processors would have to deal with the syscall overhead for a single non-privileged assembler instruction, and if rdtsc was always inlined, programs would no longer run on older machines. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/476971",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/99810/"
]
} |
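One way to confirm that the vDSO is mapped in by the kernel rather than loaded from a file on disk:

```sh
grep -i vdso /proc/self/maps    # shows a [vdso] mapping with no backing file path
```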
476,972 | Let's say I have this alias in my .bashrc alias somedir="cd /var/www/site" how can I use somedir in say ... a cd command? e.g. cd somedir/app/ doing this currently returns: -bash: cd: somedir/app: No such file or directory Is it even possible to use an alias this way? | The bash shell has a CDPATH shell variable that helps you do this without an alias: $ CDPATH=".:/var/www/site" $ cd app /var/www/site/app If there's a subdirectory of app called doc : $ cd app/doc /var/www/site/app/doc With a CDPATH value of .:/var/www/site , the cd command will first look in the current directory for the directory path given on the command line, and if none is found it will look under /var/www/site . From the bash manual: CDPATH The search path for the cd command. This is a colon-separated list of directories in which the shell looks for destination directories specified by the cd command. A sample value is ".:~:/usr" . Note that CDPATH should not be exported as you usually do not want this variable to affect bash scripts that you run from your interactive session. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/476972",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/226212/"
]
} |
476,997 | i want to convert a PDF to PNG images using convert .The images must fit the 1920x1080 ratio by having a ?x1080 ratio, and have the best quality. Here are many options i can use with convert : https://imagemagick.org/script/command-line-options.php#append First i tried the following command line : convert my.pdf -geometry 1920x1080 -size 1920x1080 -density 1920x1080 my_resized_pdf.png The result of the command gives me an image with the good geometry (763x1080), but a low quality i don't want to get. I use convert command line without the geometry parameter as following : convert my.pdf -size 1920x1080 -density 1920x1080 my_resized_pdf.png The quality of the result is exactly what i want but the resolution is not 1920x1080 ratio, but 842x595. Its does not exactly fit on height the 1920x1080 ratio. Is it possible to get PNG images with a ?x1080 ratio and with a 100% quality from a PDF ?Or is 842x595 the biggest ratio to get a 100% quality image ?Should i set a DPI option to convert ? | This involves some trial & error and in the end, it's debatable which result you might consider to be the "best result". So allow me to just give some generic advice: use the -flatten option to get rid of transparent background. The transparency makes it hard to judge actual quality of the result. If you need the transparency in the final image, you can remove -flatten once you're sure of the quality. use something like -density 300 to get a high DPI result. The main issue with convert is that it uses a very low density by default (72 DPI). This parameter has to be specified before the input file. downscaling a high DPI image might cause additional blur, so perhaps calculating the correct DPI value to achieve the desired resolution is the way to go: $ convert -density 100 file.pdf -flatten file100.png$ file file100.pngfile100.png: PNG image data, 827 x 1169, 8-bit colormap, non-interlaced$ echo $((1080*10000/1169))9238$ convert -density 92.38 file.pdf -flatten file9238.png$ file file9238.pngfile9238.png: PNG image data, 764 x 1080, 8-bit colormap, non-interlaced I'm not sure if there is a way to have convert determine "ideal" DPI value by itself. If you take this question to the ImageMagick IRC channel or forum, I'm sure you'd get some more advice. It helps if you provide the link to the PDF file you're working with. ;) You can also improve quality in other ways, for example by trimming empty borders away. You're losing a lot of resolution if half of the page is white. There are even solutions that re-wrap PDF text to get the most out of available screenspace (e.g. k2pdfopt ). Finally, also try other programs. This is a matter of opinion, but I prefer using Inkscape or GhostScript directly. ImageMagick has characters "glued together", Inkscape has a more balanced result, and GhostScript allows you to render a blur-free pure pixel image (if that's something you like - use pngalpha for the blurry version, which is virtually identical to convert ). ImageMagick: Inkscape: GhostScript: gs -r92.38 -sDEVICE=png48 -sOutputFile=ghostscript.png file.pdf | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/476997",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/310672/"
]
} |
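The density arithmetic from the answer can be automated roughly like this — a sketch assuming a single-page PDF and ImageMagick's `convert`/`identify`:

```sh
convert -density 100 file.pdf -flatten probe.png        # probe render at a known density
h=$(identify -format '%h' probe.png)                    # page height in pixels at 100 dpi
density=$(awk -v h="$h" 'BEGIN { printf "%.2f", 100 * 1080 / h }')
convert -density "$density" file.pdf -flatten out.png   # comes out roughly 1080 px tall
```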
477,115 | I am trying to make script that has two switches -h and -d, -d having a mandatory number argument. After it there will be undetermined number of paths to file. So far, I have this, but the code seems to not recognize invalid switch -r (can be any name) and also does not work when I do not input any switches: while getopts ":hd:" opt; docase $opt in h) echo $usage exit 0 ;; d) shift 2 if [ "$OPTARG" -eq "$OPTARG" ] ; then # ako dalsi argument mame cislo depth=$OPTARG fi ;; \?) shift 1 ;; :) shift 1 ;;esacdoneecho $1 when I type ./pripravne1.sh -d /home/OS/test_pz/test2 I get ./pripravne1.sh: [: /home/OS/test_pz/test2: integer expression expected when I type ./pripravne1.sh -r /home/OS/test_pz/test2 I get only empty string. | [ "$OPTARG" -eq "$OPTARG" ] ... is not the right way to check if $OPTARG is numeric -- it may print a nasty inscrutable error to the user if that's not the case, or it may just return true in all cases (in ksh ), or also return true for an empty $OPTARG (in zsh ). Also, an option taking an argument may be given as either -d12 or -d 12 , so a blind shift 2 won't cut it. And doing a shift inside the loop may badly interract with getopts , which is itself using the live argument list. Taking that into account, this is what I propose: die(){ echo >&2 "$@"; exit 1; }usage(){ echo >&2 "usage: $0 [-h] [-d num] files..."; exit 0; }depth=0while getopts :hd: opt; do case $opt in h) usage ;; d) case $OPTARG in ''|*[!-0-9]*|-|*?-*) die "invalid number $OPTARG" ;; *) depth=$OPTARG ;; esac ;; :) die "argument needed to -$OPTARG" ;; *) die "invalid switch -$OPTARG" ;; esacdoneshift "$((OPTIND - 1))"echo depth="$depth"echo files="$@" | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/477115",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/317155/"
]
} |
477,160 | I have been writing Unix shell scripts, but I'm inexperienced in proper formatting. There were many instances where I had to write long lines to be executed as a single command. Question is: Is there a way that I can split a single long line of a shell command into multiple lines, yet make it execute as a single command? | Either break apart the long string into smaller, and maybe more readable, components or use a trailing "\" to denote a break in the line. from 'man bash': If a \<newline> pair appears, and the backslash is not itselfquoted, the \<newline> is treated as a line continuation (that is, itis removed from the input stream and effectively ignored). | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/477160",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/-1/"
]
} |
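For example, a long command split over several lines with trailing backslashes still executes as one command; the rsync invocation is only an illustration:

```sh
rsync --archive --compress --partial --progress \
      --exclude '*.tmp' \
      /source/dir/ user@host:/dest/dir/
```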
477,168 | I want to list only those directories which are a particular depth from current directory.Let's say depth=2 The directories listed can be: ./abc/abc./xyz/xyz If depth is 3 ./mvd/123/abc etc. | find allows you to specify both a minimal and maximal recursion depth: find . -mindepth 3 -maxdepth 3 -type d | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/477168",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/267386/"
]
} |
477,210 | Below is the curl command output (file information about branch), need script or command to print file name, filetype and size. I have tried with jq but was able fetch single value ( jq '.values[].size' ) { "path": { "components": [], "name": "", "toString": "" }, "revision": "master", "children": { "size": 5, "limit": 500, "isLastPage": true, "values": [ { "path": { "components": [ ".gitignore" ], "parent": "", "name": ".gitignore", "extension": "gitignore", "toString": ".gitignore" }, "contentId": "c9e472ef4e603480cdd85012b01bd5f4eddc86c6", "type": "FILE", "size": 224 }, { "path": { "components": [ "Jenkinsfile" ], "parent": "", "name": "Jenkinsfile", "toString": "Jenkinsfile" }, "contentId": "e878a88eed6b19b2eb0852c39bfd290151b865a4", "type": "FILE", "size": 1396 }, { "path": { "components": [ "README.md" ], "parent": "", "name": "README.md", "extension": "md", "toString": "README.md" }, "contentId": "05782ad495bfe11e00a77c30ea3ce17c7fa39606", "type": "FILE", "size": 237 }, { "path": { "components": [ "pom.xml" ], "parent": "", "name": "pom.xml", "extension": "xml", "toString": "pom.xml" }, "contentId": "9cd4887f8fc8c2ecc69ca08508b0f5d7b019dafd", "type": "FILE", "size": 2548 }, { "path": { "components": [ "src" ], "parent": "", "name": "src", "toString": "src" }, "node": "395c71003030308d1e4148b7786e9f331c269bdf", "type": "DIRECTORY" } ], "start": 0 }} expected output should be something like below .gitignore FILE 224Jenkinsfile FILE 1396 | For the use case provided in the Question, @JigglyNaga's answer is probably better than this, but for some more complicated task, you could also loop through the list items using keys : from file : for k in $(jq '.children.values | keys | .[]' file); do ...done or from string: for k in $(jq '.children.values | keys | .[]' <<< "$MYJSONSTRING"); do ...done So e.g. you might use: for k in $(jq '.children.values | keys | .[]' file); do value=$(jq -r ".children.values[$k]" file); name=$(jq -r '.path.name' <<< "$value"); type=$(jq -r '.type' <<< "$value"); size=$(jq -r '.size' <<< "$value"); printf '%s\t%s\t%s\n' "$name" "$type" "$size";done | column -t -s$'\t' if you have no newlines for the values, you can make it with a single jq call inside the loop which makes it much faster: for k in $(jq '.children.values | keys | .[]' file); do IFS=$'\n' read -r -d '' name type size \ <<< "$(jq -r ".children.values[$k] | .path.name,.type,.size" file)" printf '%s\t%s\t%s\n' "$name" "$type" "$size";done | column -t -s$'\t' | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/477210",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/317218/"
]
} |
477,228 | I have the following JSON data: { "Name": "No.reply", "Email": "[email protected]", "ID": 5930, "details": { "message": "Your name: john doe\nEmail: [email protected]\nSubject: I need help with this\nDescription: I can find the download for the manual but I can only find the free updater for Windows or Mac. Can you help me please as I have a chrome notebook and Moto smart phone. Thank you. John doe" }} The name and email fields from the top level are irrelevant, as they are from the automated email. The information I need is in the message field and in the ID field, which is related to John Doe's info. Anyway, this is what I need to be filtered and how it should be saved to a new file in this order: Name: it should read the lines after this variable, regardless of the text. Email: same as above Subject: same as above Description: same as above ID: same as above So, I need to remove the quotes, the newline character, assign those specific strings to a variable via bash, and read what it's after those strings. I was able to come up with something, but it doesn't work for this JSON output: (only works if the text file is properly formatted) while IFS=''do case "$line" in "Name:"*) uservar="${line#*: }" ;; "Email:"*) emailvar="${line#*: }" ;; "Subject:"*) subject="${line#*: }" ;; "Message:"*) message="${line#*: }" ;; "ID:"*) ticketidvar="${line#*: }" ;; esacdone <<-EOF$(pbpaste)EOF | This assumes that the Description: ... part of the message is a single line, and that the headers are in the canonical form (no " subJECT :hey" , please). It's using jq 's @sh format spec to escape its output in a manner suitable for the shell (with single quotes). Thanks to @Stéphane Chazelas for corrections. parse_json(){ jq=$(jq -r '[.Email,.ID,(.details.message | split("\n")) | @sh] | join(" ")' -- "$1") eval "set -- $jq" email=$1; shift id=$1; shift for l; do case $l in "Your name: "*) name="${l#*: }";; "Subject: "*) subject="${l#*: }";; "Description: "*) description="${l#*: }";; # remove the following line if you want the .Email from json instead "Email: "*) email="${l#*: }";; esac done echo "id={$id}" echo "name={$name}" echo "email={$email}" echo "subject={$subject}" echo "description={$description}"}fz:/tmp% parse_json a.jsonid={5930}name={john doe}email={[email protected]}subject={I need help with this}description={I can find the download for the manual but I can only find the free updater for Windows or Mac. Can you help me please as I have a chrome notebook and Moto smart phone. Thank you. John doe The case ... esac above could be replaced with something that will create variables with the same names as the headers with the non-alphanumeric characters replaced by underscores. This will only work with shells that support ${var//pat/repl} substitutions( bash , zsh , ksh93 ): parse_json(){ jq=$(jq -r '[.Email,.ID,(.details.message | split("\n")) | @sh] | join(" ")' -- "$1") eval "set -- $jq" Email=$1; shift ID=$1; shift for l; do v="${l#*: }"; k="${l%%: *}"; eval "${k//[!a-zA-Z0-9]/_}=\$v" done}show_vars(){ for v in ID Your_name Email Subject Description; do eval "echo \"$v=[\$$v]\"" done}fz:/tmp$ parse_json a.jsonfz:/tmp$ show_varsID=[5930]Your_name=[john doe]Email=[[email protected]]Subject=[I need help with this]Description=[I can find the download for the manual but I can only find the free updater for Windows or Mac. Can you help me please as I have a chrome notebook and Moto smart phone. Thank you. John doe] | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/477228",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/317265/"
]
} |
477,258 | I've installed Oh My Zsh with a few custom plugins, such as zsh-autosuggestions . Now while Oh My Zsh supports automatic updates , this doesn't apply to custom plugins (installed to the custom/ subdirectory). How can I make Oh My Zsh update those as well? | Oh My Zsh upgrades are handled by the $ZSH/tools/upgrade.sh script. To update any custom plugins (assuming those are Git clones), you can add these lines to the end of the script before the exit command: printf "\n${BLUE}%s${RESET}\n" "Updating custom plugins"cd custom/pluginsfor plugin in */; do if [ -d "$plugin/.git" ]; then printf "${YELLOW}%s${RESET}\n" "${plugin%/}" git -C "$plugin" pull fidone Now, whenever Oh My Zsh is updated , your custom plugins will be updated too. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/477258",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/903/"
]
} |
477,330 | #!/bin/bash -xecho This is a script that has debugging turned on This script outputs + echo This is a script that has debugging turned onThis is a script that has debugging turned on I want to get rid of these +'s by deleting them or replacing them. I expected sed could fix my problem ( sed 's/^\++//g' ) -- But this approach doesn't affect the debug output lines. With some more experimenting, I discovered that the debug output seems to be getting written to stderr (inferred this with the command ./test.sh 2>/dev/null which the output then excludes the debug lines) With this new information, I would expect this to work ./test.sh 2>&1 | sed 's/^\++//g' But, alas, I still get the same undesired output: + echo This is a script that has debugging turned onThis is a script that has debugging turned on | The + is the PS4 prompt. Set it to an empty string: #!/bin/bashPS4=''set -xecho 'This is a script that has debugging turned on' Testing: $ bash script.shecho 'This is a script that has debugging turned on'This is a script that has debugging turned on Or, with your original script, set PS4 to an empty string for the script when invoking it: $ PS4='' ./script.shecho This is a script that has debugging turned onThis is a script that has debugging turned on This could be used to insert a timestamp: $ PS4='$(date +"%T: ")' ./script.sh21:08:19: echo 'This is a script that has debugging turned on'This is a script that has debugging turned on21:08:19: echo 'Now sleeping for 2 seconds'Now sleeping for 2 seconds21:08:19: sleep 221:08:21: echo DoneDone | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/477330",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/317332/"
]
} |
477,401 | I tested cp with the following commands: $ lsfirst.html second.html third.html$ cat first.htmlfirst$ cat second.htmlsecond$ cat third.htmlthird Then I copy first.html to second.html : $ cp first.html second.html$ cat second.htmlfirst The file second.html is silently overwritten without any errors. However, if I do it in a desktop GUI by dragging and dropping a file with the same name, it will be suffixed as first1.html automatically. This avoids accidentally overwriting an existing file. Why doesn't cp follow this pattern instead of overwriting files silently? | The default overwrite behavior of cp is specified in POSIX. If source_file is of type regular file, the following steps shall be taken: 3.a. The behavior is unspecified if dest_file exists and was written by a previous step. Otherwise, if dest_file exists, the following steps shall be taken: 3.a.i. If the -i option is in effect, the cp utility shall write a prompt to the standard error and read a line from the standard input. If the response is not affirmative, cp shall do nothing more with source_file and go on to any remaining files. 3.a.ii. A file descriptor for dest_file shall be obtained by performing actions equivalent to the open() function defined in the System Interfaces volume of POSIX.1-2017 called using dest_file as the path argument, and the bitwise-inclusive OR of O_WRONLY and O_TRUNC as the oflag argument. 3.a.iii. If the attempt to obtain a file descriptor fails and the -f option is in effect, cp shall attempt to remove the file by performing actions equivalent to the unlink() function defined in the System Interfaces volume of POSIX.1-2017 called using dest_file as the path argument. If this attempt succeeds, cp shall continue with step 3b. When the POSIX specification was written, there already was a large number of scripts in existence, with a built-in assumption for the default overwrite behavior. Many of those scripts were designed to run without direct user presence, e.g. as cron jobs or other background tasks. Changing the behavior would have broken them. Reviewing and modifying them all to add an option to force overwriting wherever needed was probably considered a huge task with minimal benefits. Also, the Unix command line was always designed to allow an experienced user to work efficiently, even at the expense of a hard learning curve for a beginner. When the user enters a command, the computer is to expect that the user really means it, without any second-guessing; it is the user's responsibility to be careful with potentially destructive commands. When the original Unix was developed, the systems then had so little memory and mass storage compared to modern computers that overwrite warnings and prompts were probably seen as wasteful and unnecessary luxuries. When the POSIX standard was being written, the precedent was firmly established, and the writers of the standard were well aware of the virtues of not breaking backwards compatibility . Besides, as others have described, any user can add/enable those features for themselves, by using shell aliases or even by building a replacement cp command and modifying their $PATH to find the replacement before the standard system command, and get the safety net that way if desired. But if you do so, you'll find that you are creating a hazard for yourself. If the cp command behaves one way when used interactively and another way when called from a script, you may not remember that the difference exists. 
On another system, you might end up being careless because you've become used to the warnings and prompts on your own system. If the behavior in scripts still matches the POSIX standard, you're likely to get used to the prompts in interactive use, then write a script that does some mass copying - and then find you've again inadvertently overwritten something. If you enforce prompting in scripts too, what will the command do when run in a context that has no user around, e.g. background processes or cron jobs? Will the script hang, abort, or overwrite? Hanging or aborting means that a task that was supposed to get done automatically will not be done. Not overwriting may sometimes also cause a problem by itself: for example, it might cause old data to be processed twice by another system instead of being replaced with up-to-date data. A large part of the power of the command line comes from the fact that once you know how to do something on the command line, you'll implicitly also know how to make it happen automatically by scripting. But that is only true if the commands you use interactively also work exactly the same when invoked in a script context. Any significant differences in behavior between interactive use and scripted use will create a sort of cognitive dissonance which is annoying to a power user. | {
"score": 7,
"source": [
"https://unix.stackexchange.com/questions/477401",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/260114/"
]
} |
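A common way to get the safety net discussed in the answer above is an interactive-only alias in the shell startup file, for example in ~/.bashrc:

alias cp='cp -i'
alias mv='mv -i'

Aliases are not expanded in non-interactive shells, so scripts keep the standard silent-overwrite behaviour; an unprompted copy is still available interactively via command cp or \cp.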
477,412 | How I want to display the folder that do not have certain file. But the concern is, the file is same name but different cases. Case study:In tools directory, there are subdirectories that contain readme / README file and some of them does not have. For example /toola/readme/toolb/README/toolc/ (does not have readme file) I want find command to display only toolc folder by using this command. find . -maxdepth 2 ! -name '*readme*' -o ! -name '*README*' | awk -F "/" '{print $$2}' | uniq But it doesn't work. It display all file since toola doesn't have README and toolb doesn't have readme | You can't use find to look for files that do not exist. However, you may use find to look for directories, and then test whether the given filenames exists in those directories. When using find to look for directories, make sure that you use -type d . Then test each of the found directories for the files README and readme . Assuming the following directory hierarchy for some top-directory projects : projects/|-- toola| |-- doc| |-- readme| `-- src|-- toolb| |-- doc| `-- src|-- toolc| |-- README| |-- doc| `-- src`-- toold |-- doc `-- src Using find to find the directories directly under projects that does not contain a README or readme file: $ find projects -mindepth 1 -maxdepth 1 -type d \ ! -exec test -f {}/README ';' \ ! -exec test -f {}/readme ';' -printprojects/toolbprojects/toold Here, we find any directory directly under projects and then use the test utility to determine which one of the found directories do not contain either of the two files. This is exactly equivalent of find projects -mindepth 1 -maxdepth 1 -type d \ -exec [ ! -f {}/README ] ';' \ -exec [ ! -f {}/readme ] ';' -print Another formulation of the above: find projects -mindepth 1 -maxdepth 1 -type d -exec sh -c ' for pathname do if [ ! -f "$pathname/README" ] && [ ! -f "$pathname/readme" ]; then printf "%s\n" "$pathname" fi done' sh {} + Here, we let a small in-line shell script do the actual testing for the two files and print the pathname of the directories that does not contain either of them. The find utility acts like a "pathname generator" of pathnames to directories for the in-line script to iterate over. In fact, if the directory structure is like this, we may choose to not use find at all: for pathname in projects/*/; do if [ ! -f "$pathname/README" ] && [ ! -f "$pathname/readme" ]; then printf '%s\n' "$pathname" fidone Note the trailing slash in the projects/*/ pattern. It's this that makes the pattern only match directories (or symbolic links to directories). A difference between doing it this way and using find is that with the above shell loop, we will exclude hidden directories under project and will include symbolic links to directories. In all cases, we iterate over the pathnames of directories, and we test for the non-existence of the two filenames. The only caveat is that the -f test will also be true for a symbolic link to a regular file. Related: Understanding the -exec option of `find` | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/477412",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/317398/"
]
} |
477,416 | This is not duplicate of delete line in vi , it's asking different question. I'd like to delete a line without cutting it (placing it in clipboard). I'd like to copy part of line, delete a line and then paste just that part of line somewhere else. Using v3w , dd and then p pastes whole line. | You're looking for the black hole register ( :help quote_ ). If you prepend "_ to a delete command, the contents will just be gone. So, to delete and keep the next three words, and then get rid of the entire line, you'd use d3w"_dd . Advanced mapping That use case of keeping a part of the line while removing the complete line is a common one; I've written a set of mappings for that: "["x]dDD Delete the characters under the cursor until the end" of the line and [count]-1 more lines [into register x]," and delete the remainder of the line (i.e. the" characters before the cursor) and possibly following" empty line(s) without affecting a register."["x]dD{motion} Delete text that {motion} moves over [into register x]" and delete the remainder of the line(s) and possibly" following empty line(s) without affecting a register."{Visual}["x],dD Delete the highlighted text [into register x] and delete" the remainder of the selected line(s) and possibly" following empty line(s) without affecting a register.function! s:DeleteCurrentAndFollowingEmptyLines() let l:currentLnum = line('.') let l:cnt = 1 while l:currentLnum + l:cnt < line('$') && getline(l:currentLnum + l:cnt) =~# '^\s*$' let l:cnt += 1 endwhile return '"_' . l:cnt . 'dd'endfunctionnnoremap <expr> <SID>(DeleteCurrentAndFollowingEmptyLines) <SID>DeleteCurrentAndFollowingEmptyLines()nnoremap <script> dDD D<SID>(DeleteCurrentAndFollowingEmptyLines)xnoremap <script> ,dD d<SID>(DeleteCurrentAndFollowingEmptyLines)function! s:DeleteCurrentAndFollowingEmptyLinesOperatorExpression() set opfunc=DeleteCurrentAndFollowingEmptyLinesOperator let l:keys = 'g@' if ! &l:modifiable || &l:readonly " Probe for "Cannot make changes" error and readonly warning via a no-op " dummy modification. " In the case of a nomodifiable buffer, Vim will abort the normal mode " command chain, discard the g@, and thus not invoke the operatorfunc. let l:keys = ":call setline('.', getline('.'))\<CR>" . l:keys endif return l:keysendfunctionfunction! DeleteCurrentAndFollowingEmptyLinesOperator( type ) try " Note: Need to use an "inclusive" selection to make `] include the last " moved-over character. let l:save_selection = &selection set selection=inclusive execute 'silent normal! g`[' . (a:type ==# 'line' ? 'V' : 'v') . 'g`]"' . v:register . 'y' execute 'normal!' s:DeleteCurrentAndFollowingEmptyLines() finally if exists('l:save_selection') let &selection = l:save_selection endif endtryendfunctionnnoremap <expr> dD <SID>DeleteCurrentAndFollowingEmptyLinesOperatorExpression() | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/477416",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/317402/"
]
} |
477,449 | I want to copy files from the copyDest to pastDest that contain number from 20 to 32. What am I dong wrong? cp -r ~/copyDest/*2[0-9]|3[0-2]* ~/pasteDest Thanks. | | is the pipeline operator. cp -r ~/copyDest/*2[0-9]|3[0-2]* ~/pasteDest is the cp command piped to the command whose name is the first file expanded from the 3[0-2]* glob. For | to be a glob alternation operator, it has be within (...) in zsh (but zsh has a dedicated operator for number range matching) and @(...) in ksh (or bash with extglob on). So, with zsh : cp -r ~/copyDest/(*[^0-9]|)<20-32>(|[^0-9]*) ~/pasteDest Without the (*[^0-9]|) , it would also match on foo120 With ksh or bash -O extglob (or use shopt -s extglob within bash ) or zsh -o kshglob ( set -o kshglob within zsh ), the equivalent (except for the order in which the files are copied) would look like: ( LC_ALL=C cp -r ~/copyDest/?(*[^0-9])*(0)@(2[0-9]|3[0-2])?([^0-9]*) ~/pasteDest) With ksh or bash, on most systems and most locales other than C, [0-9] matches a lot more characters than 0123456789, hence the LC_ALL=C (which also affects the glob expansion sorting order). If your file names contain only ASCII characters, you may omit it, as I don't think any locale on any sane system would have ASCII characters other than 0123456789 matched by [0-9] . Other alternative is to replace [0-9] with [0123456789] . Also note that except in zsh -o kshglob , if the pattern doesn't match any file, cp will be called with a literal .../?(*[^0-9])*(0)@(2[0-9]|3[0-2])?([^0-9]*) argument (a valid though unlikely file name) which if it exists would then be copied (or cp would return an error otherwise). In bash , you can use the failglob option to get a behaviour closer to zsh 's saner one (of cancelling the command if the pattern doesn't match). Above we take special care of copying files named foo20.txt , foo00020.txt , but not foo120.txt or foo200.txt (even though their name contains 20). It still copies foo32.12.txt or foo-1E-20.txt or foo0x20.txt files. If you still want to copy foo120 or foo200 files, then it becomes much simpler: zsh : cp -r ~/copyDest/*<20-32>* ~/pasteDest bash -O extglob and co: cp -r ~/copyDest/*@(2[0123456789]|3[012])* ~/pasteDest | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/477449",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/317427/"
]
} |
477,537 | I have a small server which I use for testing and programming. Currently it runs Debian 9.4 stretch with the 4.14.0-0.bpo.3-amd64 kernel. Today I tried to connect through SSH but I couldn't; then I tried to ping it and it was unreachable. Therefore I had to hard-restart it by unplugging the power cable. Then I went to /var/log/syslog and I found a strange line containing exactly 6140 characters like the following ^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@ then nothing else until the new log entries from the system restart. This is actually the first time this has happened. Does someone know what it could be? | The syslog file content you are showing us, all zeros, is indeed corruption from the filesystem/syslog writing. Your system crash caught the system mid-write to the syslog file, and that is the end result. I have already seen it happen several times over the years, in Linux VMs and a couple more times in Raspberries and Banana Pis. Nothing to obsess (too much) about or lose a lot of time investigating for a one-time event. I would be more worried about finding out why it crashed, especially if it is a regular event. PS: getting into anecdotal territory, the last time I had this happening regularly in a Banana Pi R1, I managed to trace the cause to a (faulty) Realtek wifi chipset. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/477537",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/317486/"
]
} |
477,705 | When trying to resolve my public IP address I get an empty string: ip=$(dig +short myip.opendns.com @resolver1.opendns.com) | For some reason opendns is also not working for me at work, i.e. your command is not at fault; it is simply that opendns is not answering that specific query to find the public IP address in some settings. Google also provides a similar service for finding out which public IP address you are using. Do: ip=$(dig TXT +short o-o.myaddr.l.google.com @ns1.google.com) As IPv6 takes precedence when present, to force an IPv4 answer, do: ip=$(dig -4 TXT +short o-o.myaddr.l.google.com @ns1.google.com) | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/477705",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/83275/"
]
} |
477,761 | I'm tidying up my bash exports file and categorizing variables depending on what environment they belong to. For example, HISTIGNORE , PATH , PS1 , etc. are in the "Bash Section" and MANPAGER in the "Man Section". I'm just wondering: what about $EDITOR and/or $VISUAL? I can't seem to find them in the bash man page. | You have misclassified PATH , and both EDITOR and VISUAL belong with it. The idea that these variables belong to particular applications is wrong. They are standardized and usable by potentially any application that needs them. If any application wants to search a path for executable programs, it can use PATH . (And indeed this is the case for any application that calls execvp() .) If any application wants to invoke a shell, it can use SHELL to find the program image file. If any application wants to invoke a line editor, it can use EDITOR . If any application wants to invoke a visual editor, it can use VISUAL . If any application wants to invoke a pager, it can use PAGER . If any application wants to know where the home directory is, it can use HOME . And so on. In contrast, HISTIGNORE and PS1 do not even really need to be environment variables at all; and only the latter is even mentioned (albeit without explanation) in the standard. One can set them as environment variables, in a session-leader process or in some other top-level parent, and rely upon environment inheritance to have them imported by shells. But one can instead just set them as shell variables, in a script automatically executed by every shell (the specifics depending on the shell), and not export them into the environment. For example: I have my ~/.zshrc set PS1 and RPROMPT as shell variables, and they are not exported to be environment variables at all. Further reading "Other environment variables" . Base Definitions . Single UNIX Specification. IEEE 1003.1. 2018. The Open Group. execvp() . System Interfaces . Single UNIX Specification. IEEE 1003.1. 2018. The Open Group. VISUAL vs. EDITOR – what’s the difference? What is the `editor` command in bash? Command for the default in-terminal text editor Which systems have 'pager' shortcut/alias? How to get rid of "nano not found" warnings, without installing nano? Jonathan de Boyne Pollard (2020). Unix editors and pagers . Frequently Given Answers. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/477761",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/305401/"
]
} |
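A minimal sketch of how these variables are typically set, assuming a Bourne-style startup file such as ~/.profile or ~/.bashrc:

export EDITOR=vi       # editor for programs that only need a basic one
export VISUAL=vim      # preferred full-screen editor; many programs try VISUAL first
export PAGER=less

Programs such as crontab -e, git and man then pick these up from the environment.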
477,794 | 'mount -a' works fine as a one-time action. But auto-mount of removable media reverts to settings that were in fstab at the last reboot. How to make the OS actually reload fstab so auto-mounts use the new settings when media is connected? Specific example seen with Raspbian (Debian) Stretch: FAT-formatted SD card; configured fstab to auto-mount; rebooted; volume auto-mounts, but RO Changed umask options in fstab; mount -a while media is connected, and volume is now RW Unmount and re-insert the media; auto-mount works, but using the options in fstab from the last reboot, so volume is RO Reboot; OS loads updated fstab; auto-mount works when media is connected, and volume is RW - how to get this effect without a reboot? FWIW, the (updated) fstab syntax was: /dev/sdb1 /Volumes/boot vfat rw,user,exec,nofail,umask=0000 0 0 | I suspect this is caused by systemd’s conversion of /etc/fstab ; traditional mount doesn’t remember the contents of /etc/fstab . To refresh systemd’s view of the world, including changes to /etc/fstab , run systemctl daemon-reload | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/477794",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/317707/"
]
} |
477,820 | Expanding from this question , we have a use case where we want to pipe the stdout of a command depending on whether that command succeeded or failed. We start with a basic pipe command | grep -P "foo" However we notice that sometimes command does not output anything to stdout, but does have an exit code of 0 . We want to ignore this case and only apply the grep when the exit code is 1 For a working example, we could implement a command like this: OUTPUT=$(command) # exit code is 0 or 1RESULT=$?if [ $RESULT -eq 0 ]; then return $RESULT; # if the exit code is 0 then we simply pass it forwardelse grep -P "foo" <<< $OUTPUT; # otherwise check if the stdout contains "foo"fi but this has a number of disadvantages, namely that you have to write a script, meaning you can't just execute it in the console.It also seems somewhat amateurish. For a more concise syntax, I'm imagining a fictional ternary operator which pipes if the exit code is 1 , otherwise it passes the exit code forwards. command |?1 grep -P "foo" : $? Is there a series of operators and utils that will achieve this result? | Commands in a pipeline run concurrently, that's the whole point of pipes, and inter-process communication mechanism. In: cmd1 | cmd2 cmd1 and cmd2 are started at the same time, cmd2 processes the data that cmd1 writes as it comes. If you wanted cmd2 to be started only if cmd1 had failed, you'd have to start cmd2 after cmd1 has finished and reported its exit status, so you couldn't use a pipe, you'd have to use a temporary file that holds all the data the cmd1 has produced: cmd1 > file || cmd2 < file; rm -f file Or store in memory like in your example but that has a number of other issues (like $(...) removing all trailing newline characters, and most shells can't cope with NUL bytes in there, not to mention the scaling issues for large outputs). On Linux and with shells like zsh or bash that store here-documents and here-strings in temporary files, you could do: { cmd1 > /dev/fd/3 || cmd2 <&3 3<&-; } 3<<< ignored To let the shell deal with the temp file creation and clean-up. bash version 5 now removes write permissions to the temp file after creating it, so the above wouldn't work, you'll need to work around it by restoring the write permission first: { chmod u+w /dev/fd/3 cmd1 > /dev/fd/3 || cmd2 <&3 3<&-; } 3<<< ignored Manually, POSIXly: tmpfile=$( echo 'mkstemp(template)' | m4 -D template="${TMPDIR:-/tmp}/XXXXXX") && [ -n "$tmpfile" ] && ( rm -f -- "$tmpfile" || exit cmd1 >&3 3>&- 4<&- || cmd2 <&4 4<&- 3>&-) 3> "$tmpfile" 4< "$tmpfile" Some systems have a non-standard mktemp command (though with an interface that varies between systems) that makes the tempfile creation a bit easier ( tmpfile=$(mktemp) should be enough with most implementation, though some would not create the file so you may need to adjust the umask ). The [ -n "$tmpfile" ] should not be necessary with compliant m4 implementations, but GNU m4 at least is not compliant in that it doesn't return a non-zero exit status when the mkstemp() call fails. Also note that there's nothing stopping you running any code in the console . Your "script" can be entered just the same at the prompt of an interactive shell (except for the return part that assumes the code is in a function), though you can simplify it to: output=$(cmd) || grep foo <<< "$output" | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/477820",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/317729/"
]
} |
477,823 | EDIT: I removed and rewrote most of the post to make the question more direct and to make the post a lot shorter. You can of course view the edit history to get the previous version. Using bspwm on Arch Linux. playerctl to control media, sxhkd to bind media keys to playerctl commands. I'm trying to find a way to get the latest active media player so when I use the media keys to play/pause a song/movie/.. , my pc automatically controls the latest active mediaplayer. For instance, when Spotify happens to be open in the background and I'm watching something on VLC, it knows to control VLC and not Spotify when I press media keys. Right now, if both are open, VLC always gets priority from playerctl. What I need is a way to ask dbus which mediaplayer is currently playing a song, so I can keep it in a file. EDIT: I found a way to ask each spotify and vlc using: qdbus org.mpris.MediaPlayer2.vlc /org/mpris/MediaPlayer2 org.mpris.MediaPlayer2.Player.PlaybackStatus and qdbus org.mpris.MediaPlayer2.spotify /org/mpris/MediaPlayer2 org.mpris.MediaPlayer2.Player.PlaybackStatus but I'd like to automatically ask all mediaplayers, not each one by name. I tried to do it with org.mpris.MediaPlayer2.* but that doesn't work. Any ideas? | Commands in a pipeline run concurrently, that's the whole point of pipes, and inter-process communication mechanism. In: cmd1 | cmd2 cmd1 and cmd2 are started at the same time, cmd2 processes the data that cmd1 writes as it comes. If you wanted cmd2 to be started only if cmd1 had failed, you'd have to start cmd2 after cmd1 has finished and reported its exit status, so you couldn't use a pipe, you'd have to use a temporary file that holds all the data the cmd1 has produced: cmd1 > file || cmd2 < file; rm -f file Or store in memory like in your example but that has a number of other issues (like $(...) removing all trailing newline characters, and most shells can't cope with NUL bytes in there, not to mention the scaling issues for large outputs). On Linux and with shells like zsh or bash that store here-documents and here-strings in temporary files, you could do: { cmd1 > /dev/fd/3 || cmd2 <&3 3<&-; } 3<<< ignored To let the shell deal with the temp file creation and clean-up. bash version 5 now removes write permissions to the temp file after creating it, so the above wouldn't work, you'll need to work around it by restoring the write permission first: { chmod u+w /dev/fd/3 cmd1 > /dev/fd/3 || cmd2 <&3 3<&-; } 3<<< ignored Manually, POSIXly: tmpfile=$( echo 'mkstemp(template)' | m4 -D template="${TMPDIR:-/tmp}/XXXXXX") && [ -n "$tmpfile" ] && ( rm -f -- "$tmpfile" || exit cmd1 >&3 3>&- 4<&- || cmd2 <&4 4<&- 3>&-) 3> "$tmpfile" 4< "$tmpfile" Some systems have a non-standard mktemp command (though with an interface that varies between systems) that makes the tempfile creation a bit easier ( tmpfile=$(mktemp) should be enough with most implementation, though some would not create the file so you may need to adjust the umask ). The [ -n "$tmpfile" ] should not be necessary with compliant m4 implementations, but GNU m4 at least is not compliant in that it doesn't return a non-zero exit status when the mkstemp() call fails. Also note that there's nothing stopping you running any code in the console . Your "script" can be entered just the same at the prompt of an interactive shell (except for the return part that assumes the code is in a function), though you can simplify it to: output=$(cmd) || grep foo <<< "$output" | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/477823",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/302467/"
]
} |
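A hedged sketch of querying every MPRIS player on the session bus instead of naming each service (service names vary by player, so the list is discovered at run time):

for svc in $(qdbus | grep '^org.mpris.MediaPlayer2.'); do
    status=$(qdbus "$svc" /org/mpris/MediaPlayer2 org.mpris.MediaPlayer2.Player.PlaybackStatus)
    printf '%s\t%s\n' "$svc" "$status"
done

Filtering that output for Playing gives a candidate for the most recently active player to record in a file.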
477,827 | var1="temp-pprod-deployment" Need a shell script for the below use case; if the above variable $var1 value contains "prod" string then execute a print message eg. echo "Found" else echo "Not found" | Commands in a pipeline run concurrently, that's the whole point of pipes, and inter-process communication mechanism. In: cmd1 | cmd2 cmd1 and cmd2 are started at the same time, cmd2 processes the data that cmd1 writes as it comes. If you wanted cmd2 to be started only if cmd1 had failed, you'd have to start cmd2 after cmd1 has finished and reported its exit status, so you couldn't use a pipe, you'd have to use a temporary file that holds all the data the cmd1 has produced: cmd1 > file || cmd2 < file; rm -f file Or store in memory like in your example but that has a number of other issues (like $(...) removing all trailing newline characters, and most shells can't cope with NUL bytes in there, not to mention the scaling issues for large outputs). On Linux and with shells like zsh or bash that store here-documents and here-strings in temporary files, you could do: { cmd1 > /dev/fd/3 || cmd2 <&3 3<&-; } 3<<< ignored To let the shell deal with the temp file creation and clean-up. bash version 5 now removes write permissions to the temp file after creating it, so the above wouldn't work, you'll need to work around it by restoring the write permission first: { chmod u+w /dev/fd/3 cmd1 > /dev/fd/3 || cmd2 <&3 3<&-; } 3<<< ignored Manually, POSIXly: tmpfile=$( echo 'mkstemp(template)' | m4 -D template="${TMPDIR:-/tmp}/XXXXXX") && [ -n "$tmpfile" ] && ( rm -f -- "$tmpfile" || exit cmd1 >&3 3>&- 4<&- || cmd2 <&4 4<&- 3>&-) 3> "$tmpfile" 4< "$tmpfile" Some systems have a non-standard mktemp command (though with an interface that varies between systems) that makes the tempfile creation a bit easier ( tmpfile=$(mktemp) should be enough with most implementation, though some would not create the file so you may need to adjust the umask ). The [ -n "$tmpfile" ] should not be necessary with compliant m4 implementations, but GNU m4 at least is not compliant in that it doesn't return a non-zero exit status when the mkstemp() call fails. Also note that there's nothing stopping you running any code in the console . Your "script" can be entered just the same at the prompt of an interactive shell (except for the return part that assumes the code is in a function), though you can simplify it to: output=$(cmd) || grep foo <<< "$output" | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/477827",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/302964/"
]
} |
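A minimal POSIX sh sketch of the substring test the question above asks for, assuming var1 is already set:

case "$var1" in
    *prod*) echo "Found" ;;
    *)      echo "Not found" ;;
esac

In bash specifically, [[ "$var1" == *prod* ]] can be used for the same check.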
477,876 | I am confused about the outer $ in var3=$[$var1 * $var2] Suppose the following script: $ var1=5; var2=6; var3=$[$var1 * $var2]; echo $var330 If $ is removed, it report error: $ var1=5; var2=6; var3=[$var1 * $var2]; echo $var3-bash: Algorithms: command not found30$ var=[3 * 2]; echo $var-bash: Algorithms: command not found[3*2] I feel it very strange it to declare; $ var=$[3 * 2]; echo $var6 Perform very likely from intuitive perception: $ var=$6; echo $var It's odd. What's the mechanism which force the syntax should do it this way? is it variable substitution? | The syntax $[ ... ] (and the standard form, $(( ... )) ) interpret their contents as arithmetic expressions. Without the $ , it doesn't. It might interpret it as something completely different. You can see this better by echo ing the result directly: $ var1=5; var2=6$ echo $[$var1 * $var2] # This gets interpreted as arithmetic30$ echo [$var1 * $var2] # This doesn't[5 file1.txt file2.txt file3.txt file4.txt file5.txt file6.txt 6] Here, [$var1 and $var2] are treated as completely separate strings, which evaluate to "[5" and "6]" respectively. The * , on the other hand, got interpreted as a filename wildcard and expanded to a list of files in the current directory. Now, in the case of your command: $ var3=[$var1 * $var2]-bash: file1.txt: command not found What's happening is very similar to the above: the * expands to a list of files, so the command is effectively: var3=[5 file1.txt file2.txt file3.txt file4.txt file5.txt file6.txt 6] ...which is interpreted as var=value command arguments... , that is it tries to set the variable var3 to "[5" for the command file1.txt with the arguments "file2.txt", "file3.txt", etc. Except that in your case, the first file (or directory) in the current directory is named "Algorithms" instead of "file1.txt". In either case, it's not a valid command name, so you get a "command not found" error. BTW, as @devWeek pointed out, $[ ] is deprecated, and you should use $(( )) instead. But again, the $ is not optional: $ echo $(($var1 * $var2)) # This gets interpreted as arithmetic30$ echo (($var1 * $var2)) # This doesn't-bash: syntax error near unexpected token `(' BTW2, $6 means something still different; it refers to the sixth argument to the current script/function/whatever. In an interactive shell, there generally aren't any arguments, so it will evaluate to nothing: $ echo $[ 6 ] placeholder # This gets interpreted as arithmetic6 placeholder$ echo $6 placeholder # This doesn'tplaceholder Basic takeaway: shell syntax is extremely picky , and not particularly intuitive. Leave out a symbol or two (or even just add or remove a space in the wrong place), and you change the meaning completely. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/477876",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/260114/"
]
} |
477,940 | Where can I find the official documentation that sysctl.conf is last match based? So, there are two entries in the /etc/sysctl.conf file: vm.swappiness=10vm.swappiness=11 Which will win? The last one? What happens if there are files in the /etc/sysctl.d directory? | I don’t think there is any such official documentation. sysctl entries are handled by procps and systemd; but neither projects’ documentation address how entries are processed within the same configuration file. The short version is that the last entry in sysctl.conf wins, even when other files are present (in /etc/sysctl.d or elsewhere), regardless of which system is used to load the settings. procps To understand how procps processes entries, we need to look at the source code for sysctl . This shows that later entries are processed without knowledge of earlier entries, so the last one wins (look at the Preload function). When multiple configuration files are given on the command line, these are processed in order, as described in the man page : Using this option will mean arguments to sysctl are files, which are read in the order they are specified. Things get a little more complex with the --system option, but at least that’s documented: Load settings from all system configuration files. Files are read from directories in the following list in given order from top to bottom. Once a file of a given filename is loaded, any file of the same name in subsequent directories is ignored. /run/sysctl.d/*.conf /etc/sysctl.d/*.conf /usr/local/lib/sysctl.d/*.conf /usr/lib/sysctl.d/*.conf /lib/sysctl.d/*.conf /etc/sysctl.conf The documentation isn’t quite complete. As mentioned above, entries within a given file are applied in order, and overwrite any value given to the same setting previously. In addition, looking at the PreloadSystem function show that files are processed in name order, and that /etc/sysctl.conf is processed unconditionnally ( i.e. an identically-named file in an earlier directory doesn’t override it). systemd systemd has its own sysctl handler, which is documented in the sysctl.d manpage ; that has a section on precedence: Configuration files are read from directories in /etc/ , /run/ , and /usr/lib/ , in order of precedence. Each configuration file in these configuration directories shall be named in the style of filename .conf . Files in /etc/ override files with the same name in /run/ and /usr/lib/ . Files in /run/ override files with the same name in /usr/lib/ . […] All configuration files are sorted by their filename in lexicographic order, regardless of which of the directories they reside in. If multiple files specify the same option, the entry in the file with the lexicographically latest name will take precedence. It is recommended to prefix all filenames with a two-digit number and a dash, to simplify the ordering of the files. Again, later entries within a single configuration file override earlier entries. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/477940",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/292909/"
]
} |
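A quick way to confirm the last-entry-wins behaviour described above, using the two lines from the question:

sysctl -p /etc/sysctl.conf    # applies the entries in file order
sysctl vm.swappiness          # reports 11, the later of the two values

With sysctl --system the same rule applies once the documented directory ordering has decided which file is read.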
477,941 | I have a config file thus: a: 123b: abcdevice: 1000c: xyz[old]a: 120b: xyzdevice: 200c: abc The section "[old]" and everything below it is not always present. How do I determine if the text "device: 1000" exists in the file BEFORE an optional "[old]" section? I have been messing around with the following (broken) command syntax and I can't get it to do what I need... sed -e '0,/^\[/; /device: /p' configfile ; echo $? ...where 0,/^\[/ was supposed to limit the search between the start of the file and the first occurrence of "[" in the first column. I am trying to get the return code to indicate whether the string was found or not. | I don’t think there is any such official documentation. sysctl entries are handled by procps and systemd; but neither projects’ documentation address how entries are processed within the same configuration file. The short version is that the last entry in sysctl.conf wins, even when other files are present (in /etc/sysctl.d or elsewhere), regardless of which system is used to load the settings. procps To understand how procps processes entries, we need to look at the source code for sysctl . This shows that later entries are processed without knowledge of earlier entries, so the last one wins (look at the Preload function). When multiple configuration files are given on the command line, these are processed in order, as described in the man page : Using this option will mean arguments to sysctl are files, which are read in the order they are specified. Things get a little more complex with the --system option, but at least that’s documented: Load settings from all system configuration files. Files are read from directories in the following list in given order from top to bottom. Once a file of a given filename is loaded, any file of the same name in subsequent directories is ignored. /run/sysctl.d/*.conf /etc/sysctl.d/*.conf /usr/local/lib/sysctl.d/*.conf /usr/lib/sysctl.d/*.conf /lib/sysctl.d/*.conf /etc/sysctl.conf The documentation isn’t quite complete. As mentioned above, entries within a given file are applied in order, and overwrite any value given to the same setting previously. In addition, looking at the PreloadSystem function show that files are processed in name order, and that /etc/sysctl.conf is processed unconditionnally ( i.e. an identically-named file in an earlier directory doesn’t override it). systemd systemd has its own sysctl handler, which is documented in the sysctl.d manpage ; that has a section on precedence: Configuration files are read from directories in /etc/ , /run/ , and /usr/lib/ , in order of precedence. Each configuration file in these configuration directories shall be named in the style of filename .conf . Files in /etc/ override files with the same name in /run/ and /usr/lib/ . Files in /run/ override files with the same name in /usr/lib/ . […] All configuration files are sorted by their filename in lexicographic order, regardless of which of the directories they reside in. If multiple files specify the same option, the entry in the file with the lexicographically latest name will take precedence. It is recommended to prefix all filenames with a two-digit number and a dash, to simplify the ordering of the files. Again, later entries within a single configuration file override earlier entries. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/477941",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/-1/"
]
} |
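A hedged sketch for the question above: truncate the file at the optional [old] header and let grep set the exit status:

sed '/^\[old\]/q' /etc/configfile | grep -q 'device: 1000' && echo found || echo 'not found'

sed stops reading at the first line beginning with [old] (or at end of file when that section is absent), so grep only ever sees the part of the file before the section.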
477,959 | I notice that to set newline IFS should with a $ as prefix IFS=$'\n' but if set a colon, just IFS=: Is \n is a variable? | That $'...' in bash is not parameter expansion, it's a special kind of quote introduced by ksh93 that expands those \n , \x0a , \12 codes to a newline character. zsh also added \u000a / \U0000000a for the characters with the corresponding Unicode code point. ksh93 and bash also have \cj while zsh has \C-J . ksh93 also supports variations like \x{a} . The $ is a cue that it is some form or expansion. But in any case, it differs from other forms of expansions that use $ (like $((1 + 1)) , $param or $(cmd) ) in that it is not performed inside double quotes or here documents ( echo "$'x'" outputs $'x' in all shells though is unspecified per POSIX) and its expansion is not subject to split+glob, it's definitely closer to a quoting operator than an expansion operator. IFS=\n would set IFS to n ( \ is treated as a quoting operator) and IFS="\n" or IFS='\n' would set IFS to the two characters backslash and n . You can also use: IFS='' or IFS="" or IFS=$'' To pass a literal newline, though that's less legible (and one can't see other than using things like set list in vi whether $IFS contains other spacing characters in that code). IFS=: , IFS=':' , IFS=":" , IFS=$':' all set IFS to : so it doesn't matter which you use. $'...' is supported (with variations) by at least: ksh93 , zsh , bash , mksh , busybox sh , FreeBSD sh . ksh93 and bash also have a $"..." form of quotes used for localisation of text though it's rarely used as it's cumbersome to deploy and use portably and reliably. The es and fish shells can also use \n outside of quotes to expand to newline. Some tools like printf , some implementations of echo or awk can also expand those \n by themselves. For instance, one can do: printf '\n'awk 'BEGIN{printf "\n"}'echoecho '\n\c' # UNIX compliant echos only to output of newline character, but note that: IFS=$(printf '\n') won't work because command substitution ( $(...) ) strips all trailing newline characters. You can however use: eval "$(printf 'IFS="\n"')" Which works because the output of printf ends in a " character, not a newline. Now, for completeness, in the rc shell and derivatives (like es or akanga ), $'\n' is indeed the expansion of that \n variable (a variable whose name is the sequence of two characters \ and n ). Those shells don't have a limitation on what characters variable names may contain and only have one type of quotes: '...' . $ rc; '\n' = (foo bar); echo $'\n'foo bar; echo $'\n'(1)foo rc variables are also all exported to the environment, but at least in the Unix variant of rc , for variable names like \n , the environment variable version undergoes a form of encoding: ; env | grep foo | sed -n l__5cn=foo\001bar$ ( 0x5c being the byte value of ASCII \ ; see also how that array variable was encoded with a 0x1 byte as the separator). | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/477959",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/260114/"
]
} |
477,991 | Recently, I have noticed that I cannot access my USB flash drive filesystem (it was of FAT type, as far as I can remember). Fedora did not mount it automatically and lsblk could not see the partition. So, I decided to create a new partition (instead of wiping the device at first) with fdisk . The process was straightforward ( fdisk did not complain a bit), but I noticed that fdisk asks me as a last step about removing a thing called "vfat signature": Created a new partition 1 of type 'Linux' and of size 7.3 GiB.Partition #1 contains a vfat signature.Do you want to remove the signature? [Y]es/[N]o: I am aware that happily creating a new partition in such a case may not be the best thing. I do not know yet what I would like to/should do. However, nonetheless, primarily I am curious, so I would like to know: What is a "vfat signature"? What is the reason that fdisk detects it? What is the purpose that fdisk detects it? Is it somehow related to a term that I have encountered in the context of Windows, "disk signature"? Why would I like to remove it or not? I have searched for a similar question here, but found only this question which does not answer my doubts. | The partition signature is basically a mark/beacon that there is something there, and that it is not empty. It may also identify a partition. It is useful in the context of several utilities/OSes to tell that the partition already has data on it. Resizing/recreating a partition is usually a non-destructive operation up to the point before formatting it. So a signature warning is signalling "There is already data here! ... are you sure you want to go ahead?" As for removing it or not, it depends on whether you are, for instance, resizing a partition or creating a partition anew. If you are creating a partition anew, you obviously may want to remove the signature; if you are resizing a partition, you surely want to keep it. The use of partition signatures is not exclusive to Linux. From How to wipe a signature from a disk device on Linux with wipefs command : Each disk and partition has some sort of signature and metadata/magic strings on it. The metadata is used by the operating system to configure disks or attach drivers and mount disks on your system. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/477991",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/238409/"
]
} |
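The wipefs tool referenced in the answer can list or clear such signatures directly (the device name below is an assumption; substitute your own):

wipefs /dev/sdX1       # list the signatures found on the partition
wipefs -a /dev/sdX1    # erase all of them, e.g. before repartitioning/reformatting

fdisk's prompt is essentially offering to do that second step for you.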
477,998 | ɛ ("Latin epsilon") is a letter used in certain African languages, usually to represent the vowel sound in English "bed". In Unicode it's encoded as U+025B, very distinct from everyday e . However, if I sort the following: ebedɛaɛc it seems that sort considers ɛ and e equivalent: ɛaebɛced What's going on here? And is there a way to make ɛ and e distinct for sort ing purposes? | No, it doesn't consider them as equivalent, they just have the same primary weight. So that, in first approximation, they sort the same. If you look at /usr/share/i18n/locales/iso14651_t1_common (as used as basis for most locales) on a GNU system (here with glibc 2.27), you'll see: <U0065> <e>;<BAS>;<MIN>;IGNORE # 259 e<U025B> <e>;<PCL>;<MIN>;IGNORE # 287 ɛ<U0045> <e>;<BAS>;<CAP>;IGNORE # 577 E e , ɛ and E have the same primary weight, e and E same secondary weight, only the third weight differentiates them. When comparing strings, sort (the strcoll() standard libc function is uses to compare strings) starts by comparing the primary weights of all characters, and only go for the second weight if the strings are equal with the primary weights (and so on with the other weights). That's how case seems to be ignored in the sorting order in first approximation. Ab sorts between aa and ac , but Ab can sort before or after ab depending on the language rule (some languages have <MIN> before <CAP> like in British English, some <CAP> before <MIN> like in Estonian). If e had the same sorting order as ɛ , printf '%s\n' e ɛ | sort -u would return only one line. But as <BAS> sorts before <PCL> , e alone sorts before ɛ . eɛe sorts after EEE (at the secondary weight) even though EEE sorts after eee (for which we need to go up to the third weight). Now if on my system with glibc 2.27, I run: sed -n 's/\(.*;[^[:blank:]]*\).*/\1/p' /usr/share/i18n/locales/iso14651_t1_common | sort -k2 | uniq -Df1 You'll notice that there are quite a few characters that have been defined with the exact same 4 weights. In particular, our ɛ has the same weights as: <U01DD> <e>;<PCL>;<MIN>;IGNORE<U0259> <e>;<PCL>;<MIN>;IGNORE<U025B> <e>;<PCL>;<MIN>;IGNORE And sure enough: $ printf '%s\n' $'\u01DD' $'\u0259' $'\u025B' | sort -uǝ$ expr ɛ = ǝ1 That can be seen as a bug of GNU libc locales. On most other systems, locales make sure all different characters have different sorting order in the end. On GNU locales, it gets even worse, as there are thousands of characters that don't have a sorting order and end up sorting the same, causing all sorts of problems (like breaking comm , join , ls or globs having non-deterministic orders...), hence the recommendation of using LC_ALL=C to work around those issues . As noted by @ninjalj in comments, glibc 2.28 released in August 2018 came with some improvements on that front though AFAICS, there are still some characters or collating elements defined with identical sorting order. On Ubuntu 18.10 with glibc 2.28 and in a en_GB.UTF-8 locale. $ expr $'L\ub7' = $'L\u387'1 (why would U+00B7 be considered equivalent as U+0387 only when combined with L / l ?!). And: $ perl -lC -e 'for($i=0; $i<0x110000; $i++) {$i = 0xe000 if $i == 0xd800; print chr($i)}' | sort > all-chars-sorted$ uniq -d all-chars-sorted | wc -l4$ uniq -D all-chars-sorted | wc -l1061355 (still over 1 million characters (95% of the Unicode range, down from 98% in 2.27) sorting the same as other characters as their sorting order is not defined). See also: What does "LC_ALL=C" do? 
Generate the collating order of a string What is the difference between "sort -u" and "sort | uniq"? | {
"score": 7,
"source": [
"https://unix.stackexchange.com/questions/477998",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/88106/"
]
} |
478,129 | I am trying to set up a virtual machine with passthrough graphics. I am able to get the passthrough working for the UEFI shell, but not the official Windows installer . I can get the Windows installer to work, but only with emulated graphics This boots the windows installer in a QEMU Window: sudo qemu-system-x86_64 --enable-kvm \-name TESTVM,process=TESTVM \-cdrom /media/big-tank-8TB/OSISOS/Windows/WindowsOct2018.iso \-smp 4 \-cpu core2duo \-m 4096 \-vga qxl This also boots the windows installer in a QEMU window (still no passthrough) sudo qemu-system-x86_64 --enable-kvm \-name TESTVM,process=TESTVM \-cdrom /media/big-tank-8TB/OSISOS/Windows/WindowsOct2018.iso \-smp 4 \-cpu core2duo \-m 4096 \-device vfio-pci,host=43:00.0,multifunction=on \-device vfio-pci,host=43:00.1 But if I specify the paths to UEFI firmware, I get the Tiano slpash screen and then the UEFI shell both on the monitor attached to my passed-through video card and in a QEMU window. sudo qemu-system-x86_64 --enable-kvm \-name TESTVM,process=TESTVM \-cdrom /media/big-tank-8TB/OSISOS/Windows/WindowsOct2018.iso \-smp 4 \-cpu core2duo \-m 4096 \-device vfio-pci,host=43:00.0,multifunction=on \-device vfio-pci,host=43:00.1 \-drive if=pflash,format=raw,readonly,file=/usr/share/OVMF/OVMF_CODE.fd \-drive if=pflash,format=raw,file=/usr/share/OVMF/OVMF_VARS.fd Why is the passthrough working only in the absence of the UEFI files? Or, why is specifying the UEFI files preventing me from starting Windows? Edit: Tried downloading a different version of Windows (April 2018 instead of the October one), same problem. Edit: Tried purging and reinstalling OVMF, but no luck. Edit: I can get to the boot manager by typing "exit" in the shell, but selecting the available DVD drive (and all other options) immediately falls back to the boot manager. Edit: Ran this: -name TESTVM,process=TESTVM \-drive file=/media/big-tank-8TB/OSISOS/Windows/Win10_1803_English_x64.iso,index=1,media=cdrom \-drive file=/media/big-tank-8TB/OSISOS/Windows/virtio-win-0.1.160.iso,index=2,media=cdrom \-smp 4 \-cpu core2duo \-m 4096 \-device vfio-pci,host=43:00.0,multifunction=on \-device vfio-pci,host=43:00.1 \-drive if=pflash,format=raw,readonly,file=/usr/share/OVMF/OVMF_CODE.fd \-drive if=pflash,format=raw,file=/usr/share/OVMF/OVMF_VARS.fd When I got too the uefi shell, I typed "exit" to get to the boot manager. In the boot manager, selecting the available DVD drive instantly fell back to the boot manager. I then added another DVD drive by Boot Maintenance Manager > Boot Options > Add boot option, and then selected that in the Boot Manager menu. . This gave me a very brief "press any key to boot from CD". If I am fast enough, this boots into the Windows installer BUT ONLY IN THE QEMU WINDOW. The screen attached to the passed-through card was black with a simple cursor, as opposed to mirroring as with the UEFI/Boot Manager. Edit: I am trying to pass through an NVIDIA GTX1070. Mobo is ASRock x399 Taichi, CPU is Threadripper 1950X. OS is Ubuntu Server with XFCE installed. Edit: If I proceed with the installation, I still have Windows in the QEMU window and just a TianoCore splash screen on the passthrough card. If I go to the device manager, Windows sees the card, but it is stopped for some reason. Edit: I tried using these instructions to get rid of the code 43, to no avail. In order to try this, I used virt-install instead of qemu-system, and when doing this there is no TianoCore splash screen. But still code 43 when I get into Windows. 
Edit: used dmesg to check for memory reservation errors as described here. Found none.Edit: Also from the above link, used ROM parser and confirmed the presence of a "type 3 (EFI)" | You are on the correct track already. GPU Passthrough is not perfect, especially if it's an NVidia Card (Which you don't mention NVidia or AMD). Finish the setup on the Qemu Window. Make sure the Windows Machine is connected to Internet and let Windows Update install the graphics drivers for you. When you come back you should be greeted by a second monitor, if not, reboot. I usually then remove the spice/vnc console and only have the GPU monitor attached. Getting GPU Passthrough to work is all about trial and error. Other things to try: Install Windows without GPU passthrough, then attempt to passthrough GPU. Install drivers via NVidia_drivers.exe Install drivers via Windows Update Bios vs UEFI Q35 vs i440fx Note: Code 43 is a known error w/ NVidia relating to NVidia drivers checking if they are running in a VM. NVidia sells cards specifically for running in a VM environment and attempts to block installation of drivers for consumer grade cards in a VM. You need to make sure to use the following in your domain.xml <kvm> <hidden state='on'/></kvm> See https://passthroughpo.st/apply-error-43-workaround/ and other resources for examples. Here is a screenshot of my config: Here is the "relevant" part of my domain.xml, I can share entire thing if you want, but it's got a bunch of unnecessary things. <os> <type arch='x86_64' machine='pc-i440fx-2.10'>hvm</type> <loader readonly='yes' type='pflash'>/usr/share/edk2/ovmf/OVMF_CODE.fd</loader> <nvram>/var/lib/libvirt/qemu/nvram/Windows10_VARS.fd</nvram> <boot dev='hd'/> </os> <features> <acpi/> <apic/> <kvm> <hidden state='on'/> </kvm> <vmport state='off'/> </features> <clock offset='localtime'> <timer name='rtc' tickpolicy='catchup'/> <timer name='pit' tickpolicy='delay'/> <timer name='hpet' present='no'/> </clock> | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/478129",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/124211/"
]
} |
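A note on the Code 43 workaround above for people driving QEMU directly instead of through libvirt: the same idea (hiding the hypervisor from the guest) can be expressed on the qemu-system command line. The sketch below reuses the VFIO addresses and OVMF paths from the question; the hv_vendor_id string is an arbitrary placeholder, and depending on the QEMU version the property may be spelled hv-vendor-id instead.

# Hide the KVM signature from the guest so the NVIDIA driver does not
# refuse to start with error 43 (kvm=off plus a spoofed Hyper-V vendor id).
sudo qemu-system-x86_64 --enable-kvm \
  -cpu host,kvm=off,hv_vendor_id=0123456789ab \
  -smp 4 -m 4096 \
  -drive if=pflash,format=raw,readonly,file=/usr/share/OVMF/OVMF_CODE.fd \
  -drive if=pflash,format=raw,file=/usr/share/OVMF/OVMF_VARS.fd \
  -device vfio-pci,host=43:00.0,multifunction=on \
  -device vfio-pci,host=43:00.1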
478,217 | I was trying to install bash on a FreeBSD 10.2 system, see How to Install bash on FreeBSD But the install failed because pkg was trying to fetch from a too-new repository. I then tried following the recipe at https://glasz.org/sheeplog/2017/02/freebsd-usrlocalliblibpkgso3-undefined-symbol-utimensat.html , which several sources said was the right thing to do. However, part of the recipe involved uninstalling pkg and reinstalling it. That resulted in the following: # pkg install -y pkgThe package management tool is not yet installed on your system.Do you want to fetch and install it now? [y/N]: yBootstrapping pkg from pkg+http://pkg.FreeBSD.org/FreeBSD:10:amd64/release_2, please wait...Verifying signature with trusted certificate pkg.freebsd.org.2013102301... donepkg-static: warning: database version 34 is newer than libpkg(3) version 31, but still compatiblepkg-static: sqlite error while executing INSERT OR ROLLBACK INTO pkg_search(id, name, origin) VALUES (?1, ?2 || '-' || ?3, ?4); in file pkgdb.c:1542: no such table: pkg_search And so now I'm stuck. Can anybody tell me how I might recover from this state? | You can try removing everything in /var/db/pkg/ directory, but the proper solution is to upgrade to supported FreeBSD release (10.4 or 11.2) | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/478217",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/23623/"
]
} |
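For the quick fix mentioned above (clearing /var/db/pkg/), a rough sketch looks like this; it assumes losing the local package database is acceptable, since pkg will have to be re-bootstrapped and the installed-package list rebuilt afterwards:

# Keep a copy of the old database, clear it, then force a fresh pkg bootstrap.
cp -a /var/db/pkg /var/db/pkg.bak
rm -rf /var/db/pkg/*
pkg bootstrap -f

# The proper solution: move to a supported release, for example
freebsd-update -r 11.2-RELEASE upgrade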
478,335 | I read from an instruction to schedule a script on the last day of the month: Note: The astute reader might be wondering just how you would be able to set a command to execute on the last day of every month because you can’t set the dayofmonth value to cover every month. This problem has plagued Linux and Unix programmers, and has spawned quite a few different solutions. A common method is to add an if-then statement that uses the date command to check if tomorrow’s date is 01: 00 12 * * * if [`date +%d -d tomorrow` = 01 ] ; then ; command1 This checks every day at 12 noon to see if it's the last day of the month, and if so, cron runs the command. How does [`date +%d -d tomorrow` = 01 ] work? Is it correct to state then; command1 ? | Abstract The correct code should be: #!/bin/sh[ "$#" -eq 0 ] && echo "Usage: $0 command [args]" && exit 1[ "$(date -d tomorrow +'%d')" = 01 ] || exit 0exec "$@" Call this script end_of_month.sh and the call in cron is simply: 00 12 28-31 * * /path/to/script/end_of_month.sh command That would run the script end_of_month (which internally will check that the day is the last day of the month) only on the days 28, 29, 30 and 31. There is no need to check for end of month on any other day. Old post. That is a quote from the book "Linux Command Line and Shell Scripting Bible" by Richard Blum, Christine Bresnahan pp 442, Third Edition, John Wiley & Sons ©2015. Yes, that is what it says, but that is wrong/incomplete: Missing a closing fi . Needs space between [ and the following ` . It is strongly recommended to use $(…) instead of `…` . It is important that you use quotes around expansions like "$(…)" There is an additional ; after then How do I know? (well, by experience ☺ ) but you can try Shellcheck . Paste the code from the book (after the asterisks) and it will show you the errors listed above plus a "missing shebang". An script without any errors in Shellcheck is this: #!/bin/shif [ "$(date +%d -d tomorrow)" = 01 ] ; then script.sh; fi That site works because what was written is "shell code". That is a syntax that works in many shells. Some issues that shellcheck doesn't mention are: It is assuming that the date command is the GNU date version. The one with a -d option that accepts tomorrow as a value (busybox has a -d option but doesn't understand tomorrow and BSD has a -d option but is not related to "display" of time). It is better to set the format after all the options date -d tomorrow +'%d' . The cron start time is always in local time, that may make one job start 1 hour earlier of later than an exact day count if the DST (daylight saving time) got set or unset. What we got done is a shell script which could be called with cron. We can further modify the script to accept arguments of the program or command to execute, like this (finally, the correct code): #!/bin/sh[ "$#" -eq 0 ] && echo "Usage: $0 command [args]" && exit 1[ "$(date -d tomorrow +'%d')" = 01 ] || exit 0exec "$@" Call this script end_of_month.sh and the call in cron is simply: 00 12 28-31 * * /path/to/script/end_of_month.sh command That would run the script end_of_month (which internally will check that the day is the last day of the month) only on the days 28, 29, 30 and 31. There is no need to check for end of month on any other day. Make sure the correct path is included. The PATH inside cron will not (not likely) be the same as the user PATH. Note that there is one end of month script tested (as indicated below) that could call many other utilities or scripts. 
This will also avoid the additional problem that cron generates with the full command line: Cron splits the command line on any % even if quoted either with ' or " (only a \ works here). That is a common way in which cron jobs fail. You can test if end_of_month.sh script works correctly on some date (without waiting to the end of the month to discover it doesn't work) by testing it with faketime: $ faketime 2018/10/31 ./end_of_month echo "Command will be executed...."Command will be executed.... | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/478335",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/260114/"
]
} |
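For completeness, the same end-of-month test can also be written inline in a user crontab, without the helper script. Note the escaped percent sign: cron treats an unescaped % as a newline, which is exactly the pitfall mentioned above.

# In a user crontab: run command1 at 12:00 only when tomorrow is the 1st (GNU date)
00 12 28-31 * * [ "$(date -d tomorrow +\%d)" = 01 ] && command1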
478,336 | I learn the command at schedule to run a script at a specified time at -f -m ./test.sh now + 10 minutes How could I use at to schedule a command ? Suppose the situation, I search all the musics but running silently on background find / -name *.mp3 1> ~/desktop/all_musics.md 2>/dev/null & I intent to open all_musics.md ten minutes later automatically. open all_music.md at now + 10 minutes Is it possible to get it done? | Abstract The correct code should be: #!/bin/sh[ "$#" -eq 0 ] && echo "Usage: $0 command [args]" && exit 1[ "$(date -d tomorrow +'%d')" = 01 ] || exit 0exec "$@" Call this script end_of_month.sh and the call in cron is simply: 00 12 28-31 * * /path/to/script/end_of_month.sh command That would run the script end_of_month (which internally will check that the day is the last day of the month) only on the days 28, 29, 30 and 31. There is no need to check for end of month on any other day. Old post. That is a quote from the book "Linux Command Line and Shell Scripting Bible" by Richard Blum, Christine Bresnahan pp 442, Third Edition, John Wiley & Sons ©2015. Yes, that is what it says, but that is wrong/incomplete: Missing a closing fi . Needs space between [ and the following ` . It is strongly recommended to use $(…) instead of `…` . It is important that you use quotes around expansions like "$(…)" There is an additional ; after then How do I know? (well, by experience ☺ ) but you can try Shellcheck . Paste the code from the book (after the asterisks) and it will show you the errors listed above plus a "missing shebang". An script without any errors in Shellcheck is this: #!/bin/shif [ "$(date +%d -d tomorrow)" = 01 ] ; then script.sh; fi That site works because what was written is "shell code". That is a syntax that works in many shells. Some issues that shellcheck doesn't mention are: It is assuming that the date command is the GNU date version. The one with a -d option that accepts tomorrow as a value (busybox has a -d option but doesn't understand tomorrow and BSD has a -d option but is not related to "display" of time). It is better to set the format after all the options date -d tomorrow +'%d' . The cron start time is always in local time, that may make one job start 1 hour earlier of later than an exact day count if the DST (daylight saving time) got set or unset. What we got done is a shell script which could be called with cron. We can further modify the script to accept arguments of the program or command to execute, like this (finally, the correct code): #!/bin/sh[ "$#" -eq 0 ] && echo "Usage: $0 command [args]" && exit 1[ "$(date -d tomorrow +'%d')" = 01 ] || exit 0exec "$@" Call this script end_of_month.sh and the call in cron is simply: 00 12 28-31 * * /path/to/script/end_of_month.sh command That would run the script end_of_month (which internally will check that the day is the last day of the month) only on the days 28, 29, 30 and 31. There is no need to check for end of month on any other day. Make sure the correct path is included. The PATH inside cron will not (not likely) be the same as the user PATH. Note that there is one end of month script tested (as indicated below) that could call many other utilities or scripts. This will also avoid the additional problem that cron generates with the full command line: Cron splits the command line on any % even if quoted either with ' or " (only a \ works here). That is a common way in which cron jobs fail. 
You can test if end_of_month.sh script works correctly on some date (without waiting to the end of the month to discover it doesn't work) by testing it with faketime: $ faketime 2018/10/31 ./end_of_month echo "Command will be executed...."Command will be executed.... | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/478336",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/260114/"
]
} |
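Coming back to the question about at itself: the job simply reads shell commands from standard input (or from a file via -f), so the intended workflow can be sketched roughly as below. Whether open can actually reach your GUI session from an at job depends on how at/atrun is configured on the machine (macOS in particular has it disabled by default).

# Start the long-running search in the background ...
find / -name '*.mp3' > ~/desktop/all_musics.md 2>/dev/null &

# ... and queue a job that opens the result file ten minutes from now.
echo 'open ~/desktop/all_musics.md' | at now + 10 minutes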
478,353 | I have just upgraded my Debian server from Stretch (stable) to Buster (testing). One strange thing I can't seem to resolve: $ ssh [email protected] -p [censored] -o ConnectTimeout=5 -i /home/vlastimil/.ssh/id_rsa -vvv results in: OpenSSH_7.6p1 Ubuntu-4, OpenSSL 1.0.2n 7 Dec 2017debug1: Reading configuration data /etc/ssh/ssh_configdebug1: /etc/ssh/ssh_config line 19: Applying options for *debug2: resolving "192.168.0.102" port [censored]debug2: ssh_connect_direct: needpriv 0debug1: Connecting to 192.168.0.102 [192.168.0.102] port [censored].debug2: fd 3 setting O_NONBLOCKdebug1: connect to address 192.168.0.102 port [censored]: Connection refused ssh: connect to host 192.168.0.102 port [censored]: Connection refused However, if I log in as that user locally (I can even log off then, it just needs one login), it does work. I was able to log in as root. However, only from the one machine, in spite of having public key exchanged. Further, only one root login from that one machine needed, and then it is possible to log in as root from the other machine. Could anyone elaborate as to, how do I debug this issue? Server's config: # grep -v '#' /etc/ssh/sshd_configPort [censored]Protocol 2SyslogFacility AUTHLogLevel INFOLoginGraceTime 120StrictModes yesHostbasedAuthentication noIgnoreRhosts yesPermitEmptyPasswords noChallengeResponseAuthentication noUsePAM yesX11Forwarding yesPrintMotd noPrintLastLog noBanner noneAcceptEnv LANG LC_*Subsystem sftp /usr/lib/openssh/sftp-serverKeyRegenerationInterval 3600ServerKeyBits 4096Ciphers [email protected],[email protected] [email protected] [email protected] sha512Match Address 192.168.0.* PermitRootLogin yesMatch all PermitRootLogin no | Thanks to telcoM 's answer about a Hardware RNG I did an apt search and found the package rng-tools5 and installed it: sudo apt-get install rng-tools5 This resolved the issue on my Intel NUC. Editor's note: My issue on Dell PowerEdge T20 with Xeon CPU was also resolved with this. Additional notes: After installation of the package, please do check if there is a random source with: rngd -v In my case, there is no TPM device, but the CPU has rdrand capability: Unable to open file: /dev/tpm0Available entropy sources: DRNG | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/478353",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/126755/"
]
} |
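If you want to confirm that starved entropy was really the problem before or after installing rng-tools5, a quick diagnostic sketch (standard Linux paths assumed):

# Size of the kernel entropy pool; persistently tiny values around boot are suspicious.
cat /proc/sys/kernel/random/entropy_avail

# Does the CPU advertise RDRAND, which rngd can use as a source?
grep -o -m1 rdrand /proc/cpuinfo

# Is the rngd daemon actually running after the install?
pgrep -a rngd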
478,433 | Scenario: I want to connect from Client A to Client B using SSH/SFTP. I can not open ports on either client. To solve this issue, I got a cheap VPS to use as a relay server. On Client B I connect to the VPS with remote port forwarding as followed: ssh -4 -N -f -R 18822:localhost:22 <user>@<vps-ip> On the VPS I've set up local port forwarding using -g (global) like this: ssh -g -f -N -L 0.0.0.0:18888:localhost:18822 <user>@localhost That way I can connect from Client A directly to Client B at <vps-ip>:18888 . Works great. Now my question is, how safe is this? As far as I know, SSH/SFTP connections are fully encrypted, but is there any chance of making it less secure by using the VPS in the middle? Let's assume these two cases: Case A: The VPS itself is not altered with, but traffic and files are monitored completely. Case B: The VPS is completely compromised, filesystem content can be altered. If I now send a file from Client A to Client B over SFTP, would it be possible for the company hosting the VPS to "intercept" it and read the file's (unencrypted) content? | What you did You used three ssh commands: While inside a B console you did: ssh -4 -N -f -R 18822:localhost:22 <user>@<vps> Command sshd (the server) to open port 18822 , a remote port vps:18822 connected to localhost (B) port 22. While at a vps console you did: ssh -g -f -N -L 0.0.0.0:18888:localhost:18822 <user>@localhost Command ssh (the client) to open port 18888 available as an external ( 0.0.0.0 ) port on ( vps ) that connects to internal port 18822. That opens an internet visible port vps:18888 that redirects traffic to 18822 which, in turn, redirects to B:22 . While at a A console (and the only connection in which A participate): Connect from Client A directly to Client B at vps:18888 . What matters is this last connection. The whole SSH security depends on the authentication of A to B . What it means The SSH protocol SSH provides a secure channel over an unsecured network By using end-to-end encryption End-to-end encryption (E2EE) is a system of communication where only the communicating users can read the messages. In principle, it prevents potential eavesdroppers – including telecom providers, Internet providers, and even the provider of the communication service – from being able to access the cryptographic keys needed to decrypt the conversation. End to end encryption is a concept. SSH is a protocol. SSH implements end to end encryption. So can https, or any other number of protocols with encryption. If the protocol is strong, and the implementation is correct, the only parties that know the encrypting keys are the two authenticated (end) parties. Not knowing the keys and not being able to break the security of the protocol, any other party is excluded from the contents of the communication. If , as you describe: from Client A directly to Client B you are authenticating directly to system B, then , only Client A and client B have the keys. No other. Q1 Case A: The VPS itself is not altered with, but traffic and files are monitored completely. Only the fact that a communication (day, time, end IPs, etc.) is taking place and that some amount of traffic (kbytes, MBytes) could be monitored but not the actual contents of what was communicated. Q2 Case B: The VPS is completely compromised, filesystem content can be altered. It doesn't matter, even if the communication is re-routed through some other sites/places, the only two parties that know the keys are A and B. 
That is: If the authentication at the start of the communication was between A and B. Optionally, check the validity of the IP to which A is connecting, then: use public key authentication (use only once a private-public key pair that only A and B know), done. Understand that you must ensure that the public key used is carried securely to the system B. You can not trust the same channel to carry the keys and then carry the encryption. There are Man-in-the-middle attacks that could break the protocol . Q3 If I now send a file from Client A to Client B over SFTP, would it be possible for the company hosting the VPS to "intercept" it and read the file's (unencrypted) content? No, if the public keys were safely placed on both ends, there is a vanishingly small probability of that happening. Walk with the disk with the public key to the other side to install it, never worry again. Comment From your comment: Q1 So, basically the VPS in my setup does nothing but forward the ports, and is not involved in the actual SSH connection or authentication happening from Client A to B, correct? Kind of. Yes the VPS should not be involved in the authentication. But it is "In-The-Middle", that is, it receives packets from one side and delivers them (if it is working correctly) to the other side. But there is an alternative, the VPS (or anything In-The-Middle) could choose to lie and perform a "Man-In-The-Middle-Attack" . It could lie to Client-A pretending to be Client-B and lie to Client-B pretending to be Client-A. That would reveal everything inside the communication to the "Man-In-The-Middle". That is why I stress the word should above. I should also say that : ...there are no tools implementing MITM against an SSH connection authenticated using public-key method... Password-based authentication is not the public-key method. If you authenticate with a password, you could be subject to a Man-In-The-Middle-Attack. There are several other alternatives but are out of scope for this post. Basically, use ssh-keygen to generate a pair of keys (lets assume on side A), and (for correct security) carry the public part inside a disk to Side B and install it in the Authorized-keys file. Do not use the network to install the public key, that is: do not use the ssh-copy-id over the network unless you really do know exactly what you are doing and you are capable of verifying the side B identity. You need to be an expert to do this securely. Q2 About the public key though, isn't it, well, public? Yes, its public. Well, yes, the entity that generated the public-private pair could publish the public part to anyone (everyone) and have lost no secrets. If anybody encrypts with its public key only it could decrypt any message with the matching (and secret) private key. SSH encryption. By the way, the SSH encryption is symmetric not asymmetric (public). The authentication is asymmetric (either DH ( Diffie-Hellman ) (for passwords) or RSA, DSA, Ed25519 Key strength or others (for public keys)), then a symmetric key is generated from that authentication and used as communication encryption key. Used for authentication. But to SSH, the public key (generated with ssh-keygen) carry an additional secret: It authenticates the owner of the public key. If you receive a public key from the internet: How do you know to whom it belongs? Do you trust whatever that public key claims it is? You should not !! That is why you should carry the public key file to the remote server (in a secure way) and install it there. 
After that, you could trust that (already verified) public key as a method to authenticate you when logging in to that server. Q3 I've connected from the VPS, mostly for testing, to Client B before too, doesn't that exchange the public key already? It exchanges one set of public keys (a set of DH-generated public keys) used for encryption, not the authentication public key generated with ssh-keygen. The key used on that communication is erased and forgotten once the communication is closed. Well, you also accepted (and used) a key to authenticate the IP of the remote server. Ensuring that an IP is secure gets even more complex than simple public-key authentication. My impression was that the public key can be shared, but the private key or passphrase must be kept safe. And your (general) impression is correct, but the devil is in the details ... Whoever generated a key pair can publish his public key without any decrease of his security. Whoever receives a public key must independently confirm that the public key belongs to whom he believes it belongs. Otherwise, the receiver of a public key could be communicating with an evil partner. Generate your key | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/478433",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/296862/"
]
} |
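A minimal sketch of the "generate a key pair and carry the public half over by hand" procedure described above, using the stock OpenSSH tools (file names are the defaults; adjust to taste):

# On client A: create the key pair; the private key never leaves this machine.
ssh-keygen -t ed25519 -f ~/.ssh/id_ed25519

# Move id_ed25519.pub to client B over a channel you trust (e.g. a USB stick),
# then on client B append it to the authorized keys file:
mkdir -p ~/.ssh && chmod 700 ~/.ssh
cat /path/to/id_ed25519.pub >> ~/.ssh/authorized_keys
chmod 600 ~/.ssh/authorized_keys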
478,458 | I follows an instruction to install shtool Download and extract wget ftp://ftp.gnu.org/gnu/shtool/shtool-2.0.8.tar.gztar -zxvf shtool-2.0.8.tar.gz Build the library $ ./configure $ make I could refer to make manual by man make How could I reach the manual about configure | The configure script is a script that will configure the software that it was distributed with for compilation (if applicable) and installation. These scripts are often (as in this case) created by GNU autoconf (a tool used by developers specifically for creating portable configure scripts), which means that it will have at least a minimum of a particular set of options. One of these options is --help . $ ./configure --help`configure' configures this package to adapt to many kinds of systems.Usage: ./configure [OPTION]... [VAR=VALUE]...To assign environment variables (e.g., CC, CFLAGS...), specify them asVAR=VALUE. See below for descriptions of some of the useful variables.Defaults for the options are specified in brackets.Configuration: -h, --help display this help and exit --help=short display options specific to this package --help=recursive display the short help of all the included packages -V, --version display version information and exit -q, --quiet, --silent do not print `checking...' messages --cache-file=FILE cache test results in FILE [disabled] -C, --config-cache alias for `--cache-file=config.cache' -n, --no-create do not create output files --srcdir=DIR find the sources in DIR [configure dir or `..'] (etc.) There is no manual for configure as it's specific to the software package that it was distributed with. Some available options may depend on the software it configures (so it can't be a system-wide tool with its own manual). In particular, there are often --with-xxx and --without-xxx options to configure projects with or without some library xxx , and likewise --enable-xxx and --disable-xxx options to enable or disable certain features (not in this shtool distribution though, it seems). There is often (e.g., in this case) both a README and an INSTALL text file distributed with the source code. These files will describe the software and how to configure and install it. The INSTALL document will often tell you how the authors envisage the installation should happen, and you can refer to the configure --help output for how to customise this to your own needs. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/478458",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/260114/"
]
} |
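For reference, the usual sequence with an autoconf-generated configure script looks roughly like this; the --prefix value is just an example of the kind of option ./configure --help will list for a given package:

tar -zxvf shtool-2.0.8.tar.gz
cd shtool-2.0.8
./configure --prefix="$HOME/local"   # check ./configure --help for available options
make
make install                         # installs under the chosen prefix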
478,480 | I want to check the dialect version in SMB connections. On Windows, Get-SmbConnection will get it. PS C:\Windows\system32> Get-SmbConnectionServerName ShareName UserName Credential Dialect NumOpens---------- --------- -------- ---------- ------- -------savdal08r2 c$ SAVILLTEC... SAVILLTEC... 2.10 1savdalfs01 c$ SAVILLTEC... SAVILLTEC... 3.00 1 on macOS, smbutil statshares -a works well. What should I do on linux? | If you are running a Samba server on Linux, smbstatus should show the protocol version used by each client. If Linux is the client, it depends on which client you're using: if you're using the kernel-level cifs filesystem support, in all but quite new kernels, the answer was that you look into /proc/mounts to see if the mount options for that filesystem include a vers= option; if not, assume it uses SMB 1. SMB protocol autonegotiation in kernel-level CIFS/SMB support is rather recent development, and as far as I know, if you don't specify the protocol version you want, the autonegotiation will only indicate the result if you enable CIFS debug messages. but fortunately the developers made it so the negotiation result will always be shown in /proc/mounts . If you use smbclient or other userspace SMB/CIFS clients (e.g. one integrated to your desktop environment), then it might have its own tools and diagnostics. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/478480",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/318307/"
]
} |
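Concretely, on a Linux CIFS client the negotiated dialect can usually be read from the mount table, and on a Samba server from smbstatus; exact output formats vary between kernel and Samba versions:

# Client side: look for the vers= mount option of each cifs mount
grep cifs /proc/mounts

# Client side, newer kernels: per-session details including the dialect
cat /proc/fs/cifs/DebugData

# Server side: smbstatus lists the protocol version per connection
sudo smbstatus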
478,532 | I have two different machines (home and work) running Ubuntu 18.04. Last night vim froze at home. I was in insert mode and typing and went to save ( esc :w ) and nothing happened. The status bar still reads -- INSERT -- , the cursor is still blinking where it was. I was stuck. I couldn't find a way out. I couldn't type (nothing happened when I type), I couldn't move around (the up and down arrows did nothing). It was stuck in insert mode with the cursor blinking where it was. I was definitely multitasking and probably hit some other keys in there, but I don't know what keys. It was late, though, so I closed the terminal window and tried again (I was entering a git commit message). It happened again partway through my typing so I switched to git commit -m "don't need an editor for this" instead. And then I shut down my computer and stopped working. I figured I was just tired, but then it happened to me today at work on a different laptop altogether. Again I was multitasking and can't swear I didn't type any bizarro key sequence but if I did it was accidental. And other tabs in the same terminal aren't frozen. I'm used to getting trapped in visual mode in vim. That's a trick I've learned. But stuck in insert mode? Any ideas on what I might've done and how to get out of it? Per a comment suggestion I tried looking at .viminfo but the only .viminfo I see is owned exclusively by root and only appears to show things I would have edited with sudo : # Input Line History (newest to oldest):# Debug Line History (newest to oldest):# Registers:# File marks:'0 1 0 /etc/neomuttrc|4,48,1,0,1531789956,"/etc/neomuttrc"'1 1 66 /etc/apt/sources.list.d/signal-bionic.list|4,49,1,66,1530816565,"/etc/apt/sources.list.d/signal-bionic.list"'2 51 0 /etc/apt/sources.list|4,50,51,0,1530816531,"/etc/apt/sources.list"# Jumplist (newest first):-' 1 0 /etc/neomuttrc|4,39,1,0,1531789956,"/etc/neomuttrc"-' 1 66 /etc/apt/sources.list.d/signal-bionic.list|4,39,1,66,1530816565,"/etc/apt/sources.list.d/signal-bionic.list"-' 1 66 /etc/apt/sources.list.d/signal-bionic.list|4,39,1,66,1530816565,"/etc/apt/sources.list.d/signal-bionic.list"-' 51 0 /etc/apt/sources.list|4,39,51,0,1530816531,"/etc/apt/sources.list"-' 51 0 /etc/apt/sources.list|4,39,51,0,1530816531,"/etc/apt/sources.list"-' 51 0 /etc/apt/sources.list|4,39,51,0,1530816531,"/etc/apt/sources.list"-' 51 0 /etc/apt/sources.list|4,39,51,0,1530816531,"/etc/apt/sources.list"-' 1 0 /etc/apt/sources.list|4,39,1,0,1530816447,"/etc/apt/sources.list"-' 1 0 /etc/apt/sources.list|4,39,1,0,1530816447,"/etc/apt/sources.list"-' 1 0 /etc/apt/sources.list|4,39,1,0,1530816447,"/etc/apt/sources.list"-' 1 0 /etc/apt/sources.list|4,39,1,0,1530816447,"/etc/apt/sources.list"# History of marks within files (newest to oldest):> /etc/neomuttrc * 1531789952 0 " 1 0> /etc/apt/sources.list.d/signal-bionic.list * 1530816564 0 " 1 66 ^ 1 67 . 1 66 + 1 66> /etc/apt/sources.list * 1530816454 0 " 51 0 It seems odd that I wouldn't have an unprivileged .viminfo but I did sudo udpatedb and locate .viminfo and still didn't surface more than the one root-owned file. | One key that I frequently fat-finger by mistake is Ctrl S ; that stops all terminal output until a Ctrl Q is typed. That's the XON/XOFF control-flow, which is enabled by default, and ^S and ^Q are the default VSTART and VSTOP keys respectively -- see the stty(1) and termios(3) manpages. You can disable it with: stty -ixon vim will not reenable it as part of its changing the terminal settings. | {
"score": 8,
"source": [
"https://unix.stackexchange.com/questions/478532",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/141494/"
]
} |
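To make that permanent, the stty call can live in the interactive shell startup file; guarding it with a terminal test avoids warnings when the file is sourced without a tty (bash sketch):

# ~/.bashrc: disable XON/XOFF flow control so Ctrl-S no longer freezes the terminal
[ -t 0 ] && stty -ixon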
478,543 | I have this line word1 word2 1234 4567 word3 8901 word4 word5 2541 5142 word5 I want to split this line in order to insert a line break before a numeric field or before an alphanumeric field that is just after a numeric field, so the output would be: word1 word212344567word38901word4 word52541 5142 word5 All alphanumeric fields begin with letters | One key that I frequently fat-finger by mistake is Ctrl S ; that stops all terminal output until a Ctrl Q is typed. That's the XON/XOFF control-flow, which is enabled by default, and ^S and ^Q are the default VSTART and VSTOP keys respectively -- see the stty(1) and termios(3) manpages. You can disable it with: stty -ixon vim will not reenable it as part of its changing the terminal settings. | {
"score": 8,
"source": [
"https://unix.stackexchange.com/questions/478543",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/216688/"
]
} |
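On the actual question asked here (inserting line breaks before numeric fields, and before alphanumeric fields that directly follow a numeric one), one awk sketch that follows the stated rule literally is shown below; depending on how the sample output is meant to be read, the last few fields may need different handling.

awk '{
    out = $1
    for (i = 2; i <= NF; i++) {
        # break before a number, or before a word that follows a number
        if ($i ~ /^[0-9]+$/ || $(i-1) ~ /^[0-9]+$/)
            out = out "\n" $i
        else
            out = out " " $i
    }
    print out
}' file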
478,561 | I am trying to write a script (.awk) that will print out lines that contain a certain string between lines 7-13. I have it partially working however it prints out all lines that contain the string rather than only between 7-13. #!/usr/bin/awk -fBEGIN { (NR>=7) && (NR<=13) }/word/ {print $0} the output when running script.awk filename is all lines that contain the word edit: After trying out what jeff suggested, I get this with his suggestion. /needle/ being the keyword. Code Solved! The issue was that I had {print $0} on another line, used to it in other languages where I like to separate my code out | You've put the line restriction logic in the "BEGIN" block, which is executed before awk reads in any data. Move that logic to the main loop: NR >= 7 && NR <= 13 && /word/ { print } $0 is the default print argument, if none is given... or, even shorter as NR >= 7 && NR <= 13 && /word/ since {print} is the default action, if none is specified. The main body of an awk script is of the form "pattern" "action"; you want the pattern to prefix the action that you want. Here, the pattern requires the three tests to be true, and the action to be to print the line. Putting the print on a separate line means that there's no "action" when "passing" the tests, and there's no "pattern" for printing every line -- resulting in every line being printed. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/478561",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/318369/"
]
} |
478,563 | If I have the following shell script sleep 30s And I hit Ctrl+C when the shell script is running, the sleep dies with it. If I have the following shell script sleep 30s &wait And I hit Ctrl+C when the shell script is running, the sleep continues on, and now has a parent of 1. Why is that? Doesn't bash propagate Ctrl+C to all the children? EDIT:If I have the following script /usr/bin/Xvfb :18.0 -ac -screen 0 1180x980x24 &wait where I am spawning a program, this time Ctrl+C on the main process kills the Xvfb process too. So how/why is Xvfb different from sleep? In the case of some processes I see that they get reaped by init , in some cases they die. Why does sleep get reaped by init? Why does Xvfb die? | tl;dr; the Xvfb process sets a signal handler for SIGINT and exits when it receives such a signal, but the sleep process doesn't, so it inherits the "ignore" state for SIGINT as it was set by the shell running the script before executing the sleep binary. When a shell script is run, the job control is turned off, and background processes (the ones started with & ) are simply run in the same process group, with SIGINT and SIGQUIT set to SIG_IGN (ignored) and with their stdin redirected from /dev/null . This is required by the standard : If job control is disabled (see the description of set -m) when the shell executes an asynchronous list, the commands in the list shall inherit from the shell a signal action of ignored (SIG_IGN) for the SIGINT and SIGQUIT signals. If the signal disposition is set to SIG_IGN (ignore), that state will be inherited through fork() and execve() : Signals set to the default action (SIG_DFL) in the calling process image shall be set to the default action in the new process image. Except for SIGCHLD, signals set to be ignored (SIG_IGN) by the calling process image shall be set to be ignored by the new process image. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/478563",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/17120/"
]
} |
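If the goal is for Ctrl-C in the parent script to also take down the background child, which as explained above starts with SIGINT ignored, a common pattern is to trap INT in the script and terminate the child with SIGTERM instead (sketch):

#!/bin/bash
sleep 30 &
child=$!

# SIGINT is ignored in the child, but SIGTERM is not, so forward that on Ctrl-C.
trap 'kill "$child" 2>/dev/null; exit 130' INT
wait "$child"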
478,568 | I am taking efforts to learn sed and encounter such a situation $ echo "abcd" | sed -n "/b/p"abcd it works properly, $ echo "abcd" | sed -n "→ /b/p"abcd good again, but $ echo "Abcd" | sed -n "/b/p"sed: -e expression #1, char 2: unterminated address regex What's the problem with the error report? | tl;dr; the Xvfb process sets a signal handler for SIGINT and exits when it receives such a signal, but the sleep process doesn't, so it inherits the "ignore" state for SIGINT as it was set by the shell running the script before executing the sleep binary. When a shell script is run, the job control is turned off, and background processes (the ones started with & ) are simply run in the same process group, with SIGINT and SIGQUIT set to SIG_IGN (ignored) and with their stdin redirected from /dev/null . This is required by the standard : If job control is disabled (see the description of set -m) when the shell executes an asynchronous list, the commands in the list shall inherit from the shell a signal action of ignored (SIG_IGN) for the SIGINT and SIGQUIT signals. If the signal disposition is set to SIG_IGN (ignore), that state will be inherited through fork() and execve() : Signals set to the default action (SIG_DFL) in the calling process image shall be set to the default action in the new process image. Except for SIGCHLD, signals set to be ignored (SIG_IGN) by the calling process image shall be set to be ignored by the new process image. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/478568",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/260114/"
]
} |
478,590 | I was just going through the official bash repository (I don't usually do this) for something unrelated but noticed that bash 5 was already in beta. I was just curious about what's going to be new in bash 5 but couldn't find any information. Can someone summarize the changes between versions 4.4 and 5 of Bash? | The changes made to bash between release 4.4 and 5.0 (released 2019-01-07) may be found in the NEWS file in the bash source distribution. Here is a link to it (the changes are too numerous to list here). | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/478590",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/90864/"
]
} |
478,592 | Using shell script I am making a db call on database VM and I am getting and storing the query response into a .txt file. Which looks like below: X folder Check:Number of Files on X Outbound17Y folder Check:Number of Files on Y Outbound17Z folder Check:Number of Files on Z Outbound18 Now for each of the X,Y and Z. I am basically receiving files(counts) on their respective locations. So I am expecting to get "18" files for each X,Y and Z. Now using shell I want to be able to know/store the folders for which I didn't receive 18 files. Example: here in the above case I should get that I am missing files for X and Y folders. | The changes made to bash between release 4.4 and 5.0 (released 2019-01-07) may be found in the NEWS file in the bash source distribution. Here is a link to it (the changes are too numerous to list here). | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/478592",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/246749/"
]
} |
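For the report format shown in the question (an "X folder Check:" header, a "Number of Files ..." line, then the bare count on its own line), one way to list the folders that did not receive 18 files is an awk sketch like the following; it assumes the count is the only all-digit line in each block:

awk '
    /folder Check:/                     { folder = $1 }   # remember X, Y, Z ...
    /^[[:space:]]*[0-9]+[[:space:]]*$/  { if ($1 + 0 != 18) print folder }
' report.txt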
478,634 | root's default PATH is $ sudo su# echo $PATH/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games After creating /etc/cron.d/myjob 35 * * * * tim ( date && echo $PATH && date ) > /tmp/cron.log 2>&1 /tmp/cron.log shows the default value of PATH is: /usr/bin:/bin Is the default PATH value in a crontab file not the one for the root? Why? Whose PATH value is it? WIll the default PATH value be different if I add the job in /etc/crontab or a file under /etc/cronb.d/ ? Does it matter which user is specified in the cron job? (such as tim in the above example) Thanks. | This depends on the version of cron you’re using. I seem to remember you use Debian; cron there sets a number of variables up as follows: Several environment variables are set up automatically by the cron(8) daemon. SHELL is set to /bin/sh , and LOGNAME and HOME are set from the /etc/passwd line of the crontab ’s owner. PATH is set to "/usr/bin:/bin" . HOME , SHELL , and PATH may be overridden by settings in the crontab ; LOGNAME is the user that the job is running from, and may not be changed. (See the crontab manpage for details.) | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/478634",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/674/"
]
} |
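If a job needs more than the /usr/bin:/bin default, the simplest fix is to set PATH explicitly at the top of the crontab or the /etc/cron.d file, for example:

# /etc/cron.d/myjob: give the job a fuller PATH before the schedule lines
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
35 * * * * tim ( date && echo "$PATH" && date ) > /tmp/cron.log 2>&1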
478,720 | I found some code for reading input from a file a while ago, I believe from Stack Exchange, that I was able to adapt for my needs: while read -r line || [[ -n "$line" ]]; do if [[ $line != "" ]] then ((x++)); echo "$x: $line" <then do something with $line> fidone < "$1" I'm reviewing my script now & trying to understand what it's doing ... I don't understand what this statement is doing: while read -r line || [[ -n "$line" ]]; I understand that the -r option says that we're reading raw text into line, but I'm confused about the || [[ -n "$line" ]] portion of the statement. Can someone please explain what that is doing? | [[ -n "$line" ]] tests if $line (the variable just read by read ) is not empty. It's useful since read returns a success if and only if it sees a newline character before the end-of-file. If the input contains a line fragment without a newline in the end, this test will catch that, and the loop will process that final incomplete line, too. Without the extra test, such an incomplete line would be read into $line , but ignored by the loop. I said "incomplete line", since the POSIX definitions of a text file and a line require a newline at the end of each line. Other tools than read can also care, e.g. wc -l counts the newline characters , and so ignores a final incomplete line. See e.g. What's the point in adding a new line to the end of a file? and Why should text files end with a newline? on SO. The cmd1 || cmd2 construct is of course just like the equivalent in C. The second command runs if the first returns a falsy status, and the result is the exit status of the last command that executed. Compare: $ printf 'foo\nbar' | ( while read line; do echo "in loop: $line"; done; echo "finally: $line" )in loop: foofinally: bar and $ printf 'foo\nbar' | ( while read line || [[ -n $line ]]; do echo "in loop: $line"; done; echo "finally: $line" )in loop: fooin loop: barfinally: | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/478720",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/318513/"
]
} |
478,734 | I am trying to understand what are file level snapshosts. Anyone has idea as to which filesystems supports this so that I can try it out. Ref. http://tracker.ceph.com/issues/24464 | [[ -n "$line" ]] tests if $line (the variable just read by read ) is not empty. It's useful since read returns a success if and only if it sees a newline character before the end-of-file. If the input contains a line fragment without a newline in the end, this test will catch that, and the loop will process that final incomplete line, too. Without the extra test, such an incomplete line would be read into $line , but ignored by the loop. I said "incomplete line", since the POSIX definitions of a text file and a line require a newline at the end of each line. Other tools than read can also care, e.g. wc -l counts the newline characters , and so ignores a final incomplete line. See e.g. What's the point in adding a new line to the end of a file? and Why should text files end with a newline? on SO. The cmd1 || cmd2 construct is of course just like the equivalent in C. The second command runs if the first returns a falsy status, and the result is the exit status of the last command that executed. Compare: $ printf 'foo\nbar' | ( while read line; do echo "in loop: $line"; done; echo "finally: $line" )in loop: foofinally: bar and $ printf 'foo\nbar' | ( while read line || [[ -n $line ]]; do echo "in loop: $line"; done; echo "finally: $line" )in loop: fooin loop: barfinally: | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/478734",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/318530/"
]
} |
478,740 | So basically I've a file that contains the below Data.txt <IP Address1>, 10, 23, <GW IP1>FINAL INPUT.45.324<IP Address2>, 40, 33, <GW IP2> Another file that has values for each TAG as specified below info.txt <IP Address1>10.155.120.20<GW IP1>10.155.120.30<IP address2>10.30.123.30<GW IP2>10.30.123.1 Would like the final to look like this (file.txt) 10.155.120.20, 10, 23, 10.155.120.30FINAL INPUT.45.32410.30.123.30, 40, 33, 10.30.123.1 Trying to find example but I'm unable to figure it out | [[ -n "$line" ]] tests if $line (the variable just read by read ) is not empty. It's useful since read returns a success if and only if it sees a newline character before the end-of-file. If the input contains a line fragment without a newline in the end, this test will catch that, and the loop will process that final incomplete line, too. Without the extra test, such an incomplete line would be read into $line , but ignored by the loop. I said "incomplete line", since the POSIX definitions of a text file and a line require a newline at the end of each line. Other tools than read can also care, e.g. wc -l counts the newline characters , and so ignores a final incomplete line. See e.g. What's the point in adding a new line to the end of a file? and Why should text files end with a newline? on SO. The cmd1 || cmd2 construct is of course just like the equivalent in C. The second command runs if the first returns a falsy status, and the result is the exit status of the last command that executed. Compare: $ printf 'foo\nbar' | ( while read line; do echo "in loop: $line"; done; echo "finally: $line" )in loop: foofinally: bar and $ printf 'foo\nbar' | ( while read line || [[ -n $line ]]; do echo "in loop: $line"; done; echo "finally: $line" )in loop: fooin loop: barfinally: | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/478740",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/318537/"
]
} |
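The substitution asked about here (filling the <...> tags in Data.txt with the values from info.txt) can be done with a two-pass awk, assuming info.txt strictly alternates tag line / value line and that the tag spellings match exactly between the two files (a sketch):

awk '
    NR == FNR {                      # first file: build a tag -> value map
        if (FNR % 2) tag = $0
        else         map[tag] = $0
        next
    }
    {                                # second file: substitute every known tag
        for (t in map) gsub(t, map[t])
        print
    }
' info.txt Data.txt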
478,742 | So when i try to use the Xorg command as a normal user, this is the error that it gives me : /usr/lib/xorg/Xorg.wrap: Only console users are allowed to run the X server but i don't understand, what are the "console users"? and when i switch to root it gives me another error : _XSERVTransSocketUNIXCreateListener: ...SocketCreateListener() failed_XSERVTransMakeAllCOTSServerListeners: server already running(EE) Fatal server error:(EE) Cannot establish any listening sockets - Make sure an X server isn't already running(EE) (EE) Please consult the The X.Org Foundation support at http://wiki.x.org for help. (EE) Please also check the log file at "/var/log/Xorg.0.log" for additional information.(EE) (EE) Server terminated with error (1). Closing log file. So what is going on and what are the reason for each of these errors? UPDATE: and the output of the command netstat -ln | grep -E '[.]X|:6[0-9][0-9][0-9] is : unix 2 [ ACC ] STREAM LISTENING 18044 @/tmp/.X11-unix/X0unix 2 [ ACC ] STREAM LISTENING 47610 @/tmp/.X11-unix/X1unix 2 [ ACC ] STREAM LISTENING 18045 /tmp/.X11-unix/X0unix 2 [ ACC ] STREAM LISTENING 47611 /tmp/.X11-unix/X1 | /usr/lib/xorg/Xorg.wrap: Only console users are allowed to run the X server but i don't understand, what are the "console users"? It means you need to be running from the Linux text console , it actually does not matter what user you are. (Except that root is always allowed). Confusing :). There are two different examples of switching to the Linux text console (and back) here, depending on exactly how your system is configured: Switch to a text console in Fedora The details can vary, as to which numbered consoles (Ctrl+Alt+F1, Ctrl+Alt+F2, etc) allow a text login, and which ones are used for graphical sessions (or not used at all). I keep getting the message: "Cannot establish any listening sockets..." You get an error message like: _XSERVTransSocketINETCreateListener: ...SocketCreateListener() failed_XSERVTransMakeAllCOTSServerListeners: server already runningFatal server error:Cannot establish any listening sockets - Make sure an X server isn't already running This problem is very similar to the previous one. You will get this message possibly because the lock file was removed somehow or some other program which doesn't create a lock file is already listening on this port. You can check this by doing a netstat -ln . Xservers usually listen at tcp port 6000+, therefore if you have started your Xserver with the command line option :1 it will be listening on port 6001. Please check the article above for further information . As this says, there is more information about what :0 , :1 , :2 mean, immediately above the quoted section: https://www.x.org/wiki/FAQErrorMessages/#index5h2 (Note that you are using a more modern X server config, which does not listen on any TCP ports. This is why your error happens in _XSERVTransSocket UNIX CreateListener, instead of _XSERVTransSocket Inet CreateListener. But the principle is exactly the same). When i tried Xorg :2 in my virtual machine with Kali, the screen went black, why did this happen? A-ha, yes :-D. Xorg is a graphics server. If you want to show some graphics on it, you need to run some client programs. Xorg also starts up with an empty cursor nowadays. It's deliberately featureless, to avoid flashes / inconsistencies when starting your graphical stuff. This has changed - when I first used Xorg, the default background and cursor were quite obtrusive. 
If you want to see what that looked like, you can pass the -retro option :-). Traditionally - and I think this is the behaviour with Xwrapper - Xorg would grab an unused console and switch to it. In this case you can switch back to your previous console (see above). Of course you can switch back again to the Xorg server, once you find which number console it grabbed :-). If you are running a virtual machine on Linux, your VM will provide some method to inject the key combination Ctrl+Alt+F1 or whatever, because pressing that key combination probably switches consoles on your real machine. I would tell you to compare startx -- :2 , which (hopefully) launches some clients as well as an X server :-). However, the most popular modern GUIs now explicitly do not support multiple sessions. So you must make sure to logout your existing GUI session, before you run startx . Otherwise, it might look like it works, but then go wrong in weird ways that you don't understand. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/478742",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/302721/"
]
} |
478,765 | I have the a series of markdown files in the working directory: $ ls *.mdcsv_reader.md egrep.md find.md found_pdfs.md osPathSep_help.md readme.md smtplib_help.md today.md I want to remove them except "today.md" #!/usr/local/bin/bashfor i in ./*.md ; do if [[ $i != "today.md" ]]; then echo $i fidone Run it and get $ bash bash/remove_files.sh./csv_reader.md./egrep.md./find.md./found_pdfs.md./osPathSep_help.md./readme.md./smtplib_help.md./today.md Nonetheless, the structured commands are not handy in the command line, how could I accomplish such a task with shorter commands | Use a negative match (requires shopt -s extglob , but possibly already set): rm !(today).md (you can first use ls instead of rm to check the result). Lots of power in extglob , you could also do rm !(yesterday|today).md if you wanted to spare two files. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/478765",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/260114/"
]
} |
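An equivalent that does not rely on extglob, and is easy to preview before deleting anything, is find with a negated -name test:

# Preview first, then delete every .md in the current directory except today.md
find . -maxdepth 1 -name '*.md' ! -name 'today.md'
find . -maxdepth 1 -name '*.md' ! -name 'today.md' -delete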
478,801 | I have a script which contains while true sudo mycmd sleep 10000 [ ... ] ; breakend when I run the script in bash, I have to provide my password once every 10000 seconds, after the previous instance of sudo mycmd has finished. I remember yes | somecommand can repeatedly provide yes as stdin input to somecommand , as answers to repeated questions of "Yes or No". I wonder how I can provide my password repeatedly to the script? Thanks. | The password can be piped into the sudo command but this is not secure in any way at all and should be avoided if possible. echo 'hunter2' | sudo -S mycmd A better method would be to just run the entire script with sudo and only require the password once. The sudo password can also be avoided entirely but is also not recommended for security reasons. This can be done by updating the /etc/sudoers file with the visudo command and including a line such as this for all users in the wheel group. %wheel ALL=(ALL) NOPASSWD: ALL To apply it to a single user only, replace %wheel with that user's username. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/478801",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/674/"
]
} |
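A middle ground that avoids both piping the password and NOPASSWD is to ask for the password once with sudo -v and then keep the cached credentials fresh for the lifetime of the script (sketch; how long a single validation lasts is governed by the sudoers timestamp_timeout setting):

#!/bin/bash
sudo -v                                    # prompt once and cache the credentials

# Refresh the cached timestamp in the background until the script exits.
( while true; do sudo -n true; sleep 60; done ) 2>/dev/null &
keepalive=$!
trap 'kill "$keepalive"' EXIT

while true; do
    sudo mycmd                             # no further password prompts
    sleep 10000
done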
478,804 | I have a function f(){ echo 777} and a variable to which I assign the "return value" of the function. x=$(f) Very concise! However, in my real code, the variable and function names are quite a bit longer and the function also eats positional arguments, so that concise line above, gets very long. Since I like to keep things tidy, I would like to break the code above in two lines. x=\$(f) Still works! But: keeping things tidy also means respecting the indentation, so that gives something like if foo x=\ $(f)fi which does not work anymore due to whitespaces! Is there a good workaround for this? | Why go for complex, hard-to-read constructs? There is a perfectly natural way to present this which doesn't need any intermediate assignments, fancy ways of building an empty string, quoting subtleties or other cognitive burden. if foo; then x=$( a_very_long_command_name --option1='argument 1 is long' \ --option2='argument 2 is long as well' )fi | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/478804",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/58056/"
]
} |
478,823 | I am working on a CentOS server and schedule a task with command at # echo "touch a_long_file_name_file.txt" | at now + 1 minutejob 2 at Wed Oct 31 13:52:00 2018 One minute later, # ls | grep a_long_file_name_file.txa_long_file_name_file.txt the file was successful created. However, if I run it locally on my macOS, $ echo "touch a_long_file_name_file.txt" | at now + 1 minutejob 31 at Wed Oct 31 13:58:00 2018 Minutes later, if it failed to make such a file. I checked the version of at on the CentOS server AUTHOR: At was mostly written by Thomas Koenig, [email protected]. 2009-11-14 In contrast, the macOS version AUTHORS At was mostly written by Thomas Koenig <[email protected]>. The time parsing routines are by David Parsons <[email protected]>, with minor enhancements by Joe Halpin <[email protected]>.BSD January 13, 2002 I found that at , atq , atrm are not of GNU coreutils. $ ls /usr/local/opt/coreutils/libexec/gnubin/ | grep atcatdatepathchkrealpathstattruncate How could I install the latest version of at on macOS and make it work? | Instead of updating at and the associated tools on macOS, lets try to make the default at on macOS work. The at manual on macOS says (my emphasis): IMPLEMENTATION NOTES Note that at is implemented through the launchd(8) daemon periodically invoking atrun(8) , which is disabled by default . See atrun(8) for information about enabling atrun . Checking the atrun manual: DESCRIPTION The atrun utility runs commands queued by at(1) . It is invoked periodically by launchd(8) as specified in the com.apple.atrun.plist property list. By default the property list contains the Disabled key set to true, so atrun is never invoked. Execute the following command as root to enable atrun : launchctl load -w /System/Library/LaunchDaemons/com.apple.atrun.plist What I think may be happening here, and what is prompting your other at -related questions, is that you just haven't enabled atrun on your macOS installation. On macOS Mojave, in addition to running the above launchctl command (with sudo ), you will also have to add /usr/libexec/atrun to the list of commands/applications that have "Full Disk Access" in the "Security & Privacy" preferences on the system. Note that I don't know the security implications of doing this. Personally, I have also added /usr/sbin/cron there to get cron jobs to work (not shown in the screenshot below as this is from another computer). To add a command from the /usr path (which won't show up in the file selection dialog on macOS), press Cmd+Shift+G when the file selection dialog is open (after pressing the plus-icon/button in the bottom of the window). You do not need to reboot the machine after these changes. I have tested this on macOS Mojave 14.10.1. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/478823",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/260114/"
]
} |
478,839 | My Problem tmux key bindings require two separate key hits to enter a command. The first is called a prefix, and set to control + a . The second key performs the actual command, for example : c create window w list windows n next window w previous window f find window , name window & kill window The problem is that two consecutive key combinations are cumbersome and slow. Most other tabbed UIs - from Chrome to Vim - enable tab switching with a single combination. What Have I Tried STFW Played with bind-key and send-keys , for example bind-key C-c send-keys C-a n My Question How can I run specific tmux commands, like "next window" or "create window", with a single key combination? | Solution I have something like this in my ~/.tmux.conf file: bind -n C-h select-pane -L Maps <Ctrl>-h to select pane on the left. Manual reference for the -n argument bind-key [-nr] [-T key-table] key command [arguments] (alias: bind)Bind key key to command. Keys are bound in a key table. By default (without -T), the key isbound in the prefix key table. This table is used for keys pressed after the prefix key (forexample, by default `c' is bound to new-window in the prefix table, so `C-b c' creates a newwindow). The root table is used for keys pressed without the prefix key: binding `c' tonew-window in the root table (not recommended) means a plain `c' will create a new window.-n is an alias for -T root. Keys may also be bound in custom key tables and theswitch-client -T command used to switch to them from a key binding. The -r flag indicatesthis key may repeat, see the repeat-time option.To view the default bindings and possible commands, see the list-keys command. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/478839",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/1079/"
]
} |
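The answer above shows the mechanism with select-pane; to tie it back to the window commands asked about in the question, the same -n (root key table) form works for any tmux command. A small sketch for ~/.tmux.conf (the key choices here are arbitrary, pick ones you do not otherwise need):
bind -n C-t new-window
bind -n C-n next-window
bind -n C-p previous-window
Keep in mind that keys bound in the root table are consumed by tmux before they reach the program running inside the pane, so C-n and C-p would no longer reach the shell's history bindings.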
478,922 | My Problem When I select text from tmux using the mouse, the block selection spans to neighbouring panes. What Have I Tried set -g mouse on Hitting option while selecting (Mac's equivalent for alt ) Hitting shift while selecting following Tmux mouse-mode on does not allow to select text with mouse My Question How can I configure tmux to allow mouse selection in multiple-pane mode? | It depends on the version of tmux. When tmux mouse is on then the mouse selections will not span panes and will be copied into tmux's selection buffer. When tmux mouse is off (as it is in the description) then the mouse selection will be native X (and span panes). I add the following to my ~/.tmux.conf . It will enable CTRL+b M (to turn tmux mouse on) and CTRL+b m (to turn tmux mouse off). For tmux 1.x - 2.0 # Toggle mouse onbind-key M \ set-window-option -g mode-mouse on \;\ set-option -g mouse-resize-pane on \;\ set-option -g mouse-select-pane on \;\ set-option -g mouse-select-window on \;\ display-message 'Mouse: ON'# Toggle mouse offbind-key m \ set-window-option -g mode-mouse off \;\ set-option -g mouse-resize-pane off \;\ set-option -g mouse-select-pane off \;\ set-option -g mouse-select-window off \;\ display-message 'Mouse: OFF' For tmux 2.1+ # Toggle mouse onbind-key M \ set-option -g mouse on \;\ display-message 'Mouse: ON'# Toggle mouse offbind-key m \ set-option -g mouse off \;\ display-message 'Mouse: OFF' Or, to use a single bind-key toggle for tmux 2.1+ # Toggle mouse on/offbind-key m \ set-option -gF mouse "#{?mouse,off,on}" \;\display-message "#{?mouse,Mouse: ON,Mouse: OFF}" When tmux mouse is on, and a selection is made with the mouse, releasing the left mouse button should copy it to the tmux selection buffer and CTRL+b ] will paste it. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/478922",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/1079/"
]
} |
478,999 | On Ubuntu 18.04, I can start or stop some service by sudo service cron start/stop I can list some services by service --status-all The output matches the files under /etc/init.d/ . I heard there are several ways of managing services: system V init, systemd, upstart, .... Which one am I using? man service shows it is system V init. But I heard that Linux replaces init with systemd. Shall I use systemd instead of init on Ubuntu? How can I make an arbitrary executable file (either ELF or shell script) become a service? Do I need to explicitly daemonize the executable by setsid , like https://stackoverflow.com/a/19235243/156458 ? Does any of the post below apply to me? https://stackoverflow.com/a/40401349/156458 https://askubuntu.com/a/523269/1471 Thanks. | On Ubuntu 18.04, [...] I heard there are several ways of managing services: system V init, systemd, upstart, .... Which one am I using? You're using systemd, that's the init that's shipped on Ubuntu 18.04. (Also on Ubuntu 16.04, on Fedora, on openSUSE, on Arch Linux, on RHEL 7, on CentOS 7, on CoreOS, and it's also the default on Debian 9.) One good way to confirm that you're running systemd is to run the command systemctl . If it's available and it produces output when run, then you're running systemd. On Ubuntu 18.04, I can start or stop some service by sudo service cron start/stop I can list some services by service --status-all Please note that the service command shipped in some systemd distros is there mostly for backward compatibility. You should try to manage services using systemctl instead. For example: $ sudo systemctl start cron$ sudo systemctl stop cron$ systemctl status cron And you can find status of all units with a simple $ systemctl The output matches the files under /etc/init.d/ . That's not necessarily the case with systemctl , since systemd native units are stored in /etc/systemd/system/ and /usr/lib/systemd/system/ . systemd does include compatibility with old SysV init scripts (through systemd-sysv-generator , which creates a systemd native service unit calling the commands from the init script), so if you have init scripts under /etc/init.d/ , they'll most likely show up in systemd as well. Shall I use systemd instead of init on Ubuntu? This question is unclear. The term init generally refers to the first process run when the system boots, the process run with PID 1. systemd runs with PID 1, so by definition systemd is an init (and so was upstart before it, and SysV init as well.) If you're asking "should I use systemd instead of SysV init?", well then you're already using systemd instead of SysV init, since you're on Ubuntu 18.04. (And, as pointed out above, most distributions you'd pick these days would most likely include systemd as their init.) Now, you could be asking "should I use systemd units instead of init scripts ?" and that question is more relevant, since arguably you have a choice here where both options will work. My recommendation here is that you should manage services using systemd units, which is the native mode of operation. Creating an init script simply adds a layer of indirection (since the generator will just create a systemd unit for you anyways.) Furthermore, writing systemd units is simpler than writing init scripts, since you don't have to worry about properly daemonizing and scrubbing the environment before execution, since systemd does all that for you. How can I make an arbitrary executable file (either ELF or shell script) become a service? 
Create a systemd service unit for it. See the examples on the man page. The simplest example shows how easy it can be to create a service unit: [Unit]Description=Foo[Service]ExecStart=/usr/sbin/foo-daemon[Install]WantedBy=multi-user.target Store this unit under /etc/systemd/system/foo.service , then reload systemd to read this unit file with: $ sudo systemctl daemon-reload Start the service with: $ sudo systemctl start foo.service And enable it during startup with: $ sudo systemctl enable foo.service You can check the status of the service with: $ systemctl status foo.service Of course, systemd can do a lot more for you to manage services, so a typical systemd unit will be longer than this one (though not necessarily that much more complex.) Browse the units shipped with Ubuntu under /usr/lib/systemd/system/*.service to get a better picture of what's typical, of what to expect. Do I need to explicitly daemonize the executable by setsid , like https://stackoverflow.com/a/19235243/156458 ? No! Don't run in background, don't worry about process groups or sessions, etc. systemd takes care of all that for you. Just write your code to run in foreground and systemd will take care of the rest. (If you have a service that runs in background, systemd can manage it, with Type=forking , but things are much easier when just running in foreground, so just do that if you're starting a new service.) Does any of the post below apply to me? https://stackoverflow.com/a/40401349/156458 This one is about applications using the "Spring Boot" Java framework. Unless you're writing Java code and using that framework, it's not relevant. If you're writing Java code, try instead to just run your service in foreground instead. https://askubuntu.com/a/523269/1471 The question is about upstart, the answer is about SysV init scripts. While SysV init scripts will work with systemd, it's preferable that you write systemd units directly, as mentioned above. So, no, I'd say neither of those are relevant. I'd recommend trying to learn more about systemd service units instead. This site is also a great resource for that, so feel free to post more questions about it as you explore writing your own systemd units for your services. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/478999",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/674/"
]
} |
479,001 | I have 2 files and I want to grep on them List1 ACY1RPL3RPL4 List2 1 ABHD14A-ACY12 ACY13 RPL344 RPL215 RPL36 RPL41 I tried something like this grep -Fwf list1.txt list2.txt and got this 1 ABHD14A-ACY12 ACY15 RPL3 My list1 did not have ABHD14A-ACY1. Is there a way in grep I can do this? Thanks | The -w option only requires that the match is not adjacent to word-constituent characters (letters, digits and underscore). In ABHD14A-ACY1 the ACY1 part is preceded by a hyphen, which is not a word character, so grep -Fw still counts it as a whole-word match; that is why the first line slipped through. If the requirement is that the whole second column must equal an entry from list1, awk is the simplest tool: awk 'NR==FNR{a[$1]; next} $2 in a' list1.txt list2.txt which prints only 2 ACY1 and 5 RPL3. If you prefer to stay with grep, turn each name into a pattern that is anchored to the end of the line and preceded by a blank: grep -f <(sed 's/^/[[:blank:]]/; s/$/$/' list1.txt) list2.txt (this assumes the names in list1 contain no regex special characters). | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/479001",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/63486/"
]
} |
479,055 | When writing a shell script, in which some but not all commands in it need superuser privileges, shall I add sudo to those commands which need superuser privileges, and run the shell script without sudo, or don't add sudo to those commands which need superuser privileges, but run the shell script with sudo? In the second way I will only need to provide my password once, but all the commands in the script will be run with superuser privilleges, including those commands which don't need. In the first way I may need to provide my password multiple times for different sudo commands, while the superuser privileges are granted only to those commands which need them. From security concern, the first way is better. For convenience, the second way is better. I have been thinking of adopting the first way. So I have to dealwith the inconvenience of providing my passwords to multiple sudocommands in the shell script. Stephen Harris wrote : A well written script would detect if it was running with the right permissions and not call sudo at all, but there's a lot of bad scripts So should I use the second way? If so, how can I write "script would detect if it was running with the right permissions and not call sudo at all"? how can I improve its security to avoid the problem of giving superuser privileges to commands which don't need them when runningthe script with sudo? Would this simple approach have the best of of both approach:add sudo to commands which only need it, and run the script with orwithout sudo depending on whether I want convenience or security? Does this approach have some problem? Thanks. | To address your first issue: how can I write "script would detect if it was running with the right permissions and not call sudo at all"? There is a simple and POSIX check for root: #!/bin/shis_user_root (){ [ "$(id -u)" -eq 0 ]} Alternatively, in Bash, more performance-driven coders might want to use: #!/bin/bashis_user_root (){ [ "${EUID:-$(id -u)}" -eq 0 ]} Note that I intentionally wrapped the code in functions for re-use. To address your second issue: how can I improve its security to avoid the problem of giving superuser privileges to commands which don't need them when running the script with sudo? You can't do much about this. At least nothing comes to my mind. If I saw the script, I might have suggestions. But since you did not include it in your question... If you run the whole script with sudo or as root , I see no way to control this. To address the comment: What do you think of "use sudo inside it vs run it with sudo" In my scripts, I usually proceed with the latter approach, but that does not necessarily mean I recommend it to you. Because it depends on who the script is meant for - for root only; for user mostly with the exception of having some users having sudo rights; you would have to literally include your script into the question for me to be able to answer with any value. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/479055",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/674/"
]
} |
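Building on the is_user_root check in the answer above, one common pattern is to let the script re-execute itself under sudo when it lacks privileges, so the user can start it either way; a minimal sketch (assuming the script is invoked by path and sudo is available):
#!/bin/sh
if [ "$(id -u)" -ne 0 ]; then
    exec sudo -- "$0" "$@"    # re-run this same script as root, preserving arguments
fi
# from here on the script runs as root; privileged commands need no sudo prefix
This keeps the single password prompt of the "run the whole script with sudo" approach while still documenting, in one place, that the script needs root.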
479,058 | UPDATE: Today's upgrade to alsa-lib-1.1.7-2 fixed the issue. (2018-11-23) Arch Linux, up-to-date. As so often, a -Syu upgrade broke things again. This time it's audacity. Usually audacity, and any other playback/record app would show up in pauvcontrol - Not audacity anymore. I could use pauvcontrol to choose sink and source from which I want to record or playback audio using audacity - not anymore. In audacity's preferences it only says ALSA in the top dropdown, im not sure if that is supposed to be like that or if it said PulseAudio before. There aren't even all my devices in the playback tab. Same goes for the recording device selection, but the loopback device I need to record from is there, so that works. But now I wanted to playback sound from audacity too and it just doesn't work anymore. It doesn't list my USB playback device and when I set it to sysdefault I just get an error. $ pulseaudio -vpulseaudio 12.2$ audacity --versionlilv_world_add_plugin(): warning: Duplicate plugin <http://lv2plug.in/plugins/eg-params>lilv_world_add_plugin(): warning: ... found in file:///usr/lib/lv2/eg-params.lv2/lilv_world_add_plugin(): warning: ... and file:///usr/lib64/lv2/eg-params.lv2/ (ignored)lilv_world_add_plugin(): warning: Duplicate plugin <http://lv2plug.in/plugins/eg-midigate>lilv_world_add_plugin(): warning: ... found in file:///usr/lib/lv2/eg-midigate.lv2/lilv_world_add_plugin(): warning: ... and file:///usr/lib64/lv2/eg-midigate.lv2/ (ignored)lilv_world_add_plugin(): warning: Duplicate plugin <http://lv2plug.in/plugins/eg-fifths>lilv_world_add_plugin(): warning: ... found in file:///usr/lib/lv2/eg-fifths.lv2/lilv_world_add_plugin(): warning: ... and file:///usr/lib64/lv2/eg-fifths.lv2/ (ignored)lilv_world_add_plugin(): warning: Duplicate plugin <http://lv2plug.in/plugins/eg-metro>lilv_world_add_plugin(): warning: ... found in file:///usr/lib/lv2/eg-metro.lv2/lilv_world_add_plugin(): warning: ... and file:///usr/lib64/lv2/eg-metro.lv2/ (ignored)lilv_world_add_plugin(): warning: Duplicate plugin <http://lv2plug.in/plugins/eg-amp>lilv_world_add_plugin(): warning: ... found in file:///usr/lib/lv2/eg-amp.lv2/lilv_world_add_plugin(): warning: ... and file:///usr/lib64/lv2/eg-amp.lv2/ (ignored)lilv_world_add_plugin(): warning: Duplicate plugin <http://lv2plug.in/plugins/eg-sampler>lilv_world_add_plugin(): warning: ... found in file:///usr/lib/lv2/eg-sampler.lv2/lilv_world_add_plugin(): warning: ... and file:///usr/lib64/lv2/eg-sampler.lv2/ (ignored)lilv_world_add_plugin(): warning: Duplicate plugin <http://lv2plug.in/plugins/eg-scope#Mono>lilv_world_add_plugin(): warning: ... found in file:///usr/lib/lv2/eg-scope.lv2/lilv_world_add_plugin(): warning: ... and file:///usr/lib64/lv2/eg-scope.lv2/ (ignored)lilv_world_add_plugin(): warning: Duplicate plugin <http://lv2plug.in/plugins/eg-scope#Stereo>lilv_world_add_plugin(): warning: ... found in file:///usr/lib/lv2/eg-scope.lv2/lilv_world_add_plugin(): warning: ... 
and file:///usr/lib64/lv2/eg-scope.lv2/ (ignored)lilv_world_add_plugin(): warning: Reloading plugin <http://lv2plug.in/plugins/eg-params>lilv_world_add_plugin(): warning: Reloading plugin <http://lv2plug.in/plugins/eg-midigate>lilv_world_add_plugin(): warning: Reloading plugin <http://lv2plug.in/plugins/eg-fifths>lilv_world_add_plugin(): warning: Reloading plugin <http://lv2plug.in/plugins/eg-metro>lilv_world_add_plugin(): warning: Reloading plugin <http://lv2plug.in/plugins/eg-amp>lilv_world_add_plugin(): warning: Reloading plugin <http://lv2plug.in/plugins/eg-sampler>lilv_world_add_plugin(): warning: Reloading plugin <http://lv2plug.in/plugins/eg-scope#Mono>lilv_world_add_plugin(): warning: Reloading plugin <http://lv2plug.in/plugins/eg-scope#Stereo> Not sure if those warnings mean anything, it won't print the version. The package is audacity-2.3.0-1-x86_64.pkg.tar.xz Full log of the console when starting audacity: ALSA lib pcm_dmix.c:1099:(snd_pcm_dmix_open) unable to open slaveALSA lib pcm.c:2565:(snd_pcm_open_noupdate) Unknown PCM cards.pcm.rearALSA lib pcm.c:2565:(snd_pcm_open_noupdate) Unknown PCM cards.pcm.center_lfeALSA lib pcm.c:2565:(snd_pcm_open_noupdate) Unknown PCM cards.pcm.sideALSA lib confmisc.c:1281:(snd_func_refer) Unable to find definition 'cards.Loopback.pcm.iec958.0:CARD=0,AES0=4,AES1=130,AES2=0,AES3=2'ALSA lib conf.c:4555:(_snd_config_evaluate) function snd_func_refer returned error: No such file or directoryALSA lib conf.c:5034:(snd_config_expand) Evaluate error: No such file or directoryALSA lib pcm.c:2565:(snd_pcm_open_noupdate) Unknown PCM iec958ALSA lib confmisc.c:1281:(snd_func_refer) Unable to find definition 'cards.Loopback.pcm.iec958.0:CARD=0,AES0=4,AES1=130,AES2=0,AES3=2'ALSA lib conf.c:4555:(_snd_config_evaluate) function snd_func_refer returned error: No such file or directoryALSA lib conf.c:5034:(snd_config_expand) Evaluate error: No such file or directoryALSA lib pcm.c:2565:(snd_pcm_open_noupdate) Unknown PCM spdifALSA lib confmisc.c:1281:(snd_func_refer) Unable to find definition 'cards.Loopback.pcm.iec958.0:CARD=0,AES0=4,AES1=130,AES2=0,AES3=2'ALSA lib conf.c:4555:(_snd_config_evaluate) function snd_func_refer returned error: No such file or directoryALSA lib conf.c:5034:(snd_config_expand) Evaluate error: No such file or directoryALSA lib pcm.c:2565:(snd_pcm_open_noupdate) Unknown PCM spdifALSA lib confmisc.c:1281:(snd_func_refer) Unable to find definition 'cards.Loopback.pcm.hdmi.0:CARD=0,AES0=4,AES1=130,AES2=0,AES3=2'ALSA lib conf.c:4555:(_snd_config_evaluate) function snd_func_refer returned error: No such file or directoryALSA lib conf.c:5034:(snd_config_expand) Evaluate error: No such file or directoryALSA lib pcm.c:2565:(snd_pcm_open_noupdate) Unknown PCM hdmiALSA lib confmisc.c:1281:(snd_func_refer) Unable to find definition 'cards.Loopback.pcm.hdmi.0:CARD=0,AES0=4,AES1=130,AES2=0,AES3=2'ALSA lib conf.c:4555:(_snd_config_evaluate) function snd_func_refer returned error: No such file or directoryALSA lib conf.c:5034:(snd_config_expand) Evaluate error: No such file or directoryALSA lib pcm.c:2565:(snd_pcm_open_noupdate) Unknown PCM hdmiALSA lib confmisc.c:1281:(snd_func_refer) Unable to find definition 'cards.Loopback.pcm.modem.0:CARD=0'ALSA lib conf.c:4555:(_snd_config_evaluate) function snd_func_refer returned error: No such file or directoryALSA lib conf.c:5034:(snd_config_expand) Evaluate error: No such file or directoryALSA lib pcm.c:2565:(snd_pcm_open_noupdate) Unknown PCM cards.pcm.phoneline:CARD=0,DEV=0ALSA lib 
confmisc.c:1281:(snd_func_refer) Unable to find definition 'cards.Loopback.pcm.modem.0:CARD=0'ALSA lib conf.c:4555:(_snd_config_evaluate) function snd_func_refer returned error: No such file or directoryALSA lib conf.c:5034:(snd_config_expand) Evaluate error: No such file or directoryALSA lib pcm.c:2565:(snd_pcm_open_noupdate) Unknown PCM cards.pcm.phoneline:CARD=0,DEV=0ALSA lib confmisc.c:1281:(snd_func_refer) Unable to find definition 'cards.Loopback.pcm.modem.0:CARD=0'ALSA lib conf.c:4555:(_snd_config_evaluate) function snd_func_refer returned error: No such file or directoryALSA lib conf.c:5034:(snd_config_expand) Evaluate error: No such file or directoryALSA lib pcm.c:2565:(snd_pcm_open_noupdate) Unknown PCM phonelineALSA lib confmisc.c:1281:(snd_func_refer) Unable to find definition 'cards.Loopback.pcm.modem.0:CARD=0'ALSA lib conf.c:4555:(_snd_config_evaluate) function snd_func_refer returned error: No such file or directoryALSA lib conf.c:5034:(snd_config_expand) Evaluate error: No such file or directoryALSA lib pcm.c:2565:(snd_pcm_open_noupdate) Unknown PCM phonelineconnect(2) call to /dev/shm/jack-1000/default/jack_0 failed (err=No such file or directory)attempt to connect to server failedconnect(2) call to /dev/shm/jack-1000/default/jack_0 failed (err=No such file or directory)attempt to connect to server failedALSA lib pcm_oss.c:377:(_snd_pcm_oss_open) Unknown field portALSA lib pcm_oss.c:377:(_snd_pcm_oss_open) Unknown field portExpression 'alsa_snd_pcm_hw_params_set_period_size_near( pcm, hwParams, &alsaPeriodFrames, &dir )' failed in 'src/hostapi/alsa/pa_linux_alsa.c', line: 924ALSA lib pcm_usb_stream.c:486:(_snd_pcm_usb_stream_open) Invalid type for cardALSA lib pcm_usb_stream.c:486:(_snd_pcm_usb_stream_open) Invalid type for cardALSA lib pcm_dmix.c:1099:(snd_pcm_dmix_open) unable to open slaveExpression 'alsa_snd_pcm_hw_params_set_period_size_near( pcm, hwParams, &alsaPeriodFrames, &dir )' failed in 'src/hostapi/alsa/pa_linux_alsa.c', line: 924connect(2) call to /dev/shm/jack-1000/default/jack_0 failed (err=No such file or directory)attempt to connect to server failed Even though I'm not sure if those errors are because of ALSA and maybe completely unrelated to the PulseAudio issue. | From the forums I found this bug . This is an issue with the latest (1.1.7) version of alsa-lib . Downgrading it to your previous version should work around the issue for now: pacman -U /var/cache/pacman/pkg/alsa-lib-1.1.6-1-x86_64.pkg.tar.xz | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/479058",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/296862/"
]
} |
479,086 | I've created a new user in the Ubuntu 18.04.01 like this: sudo useradd svnsudo passwd svnsudo mkhomedir_helper svnsudo usermod -d /home/svn -m svn Problem is, that if I switch to the user by su svn I don't see a standard prompt command. Instead of standard prompt: svn@svn-server:/srv/svn/$ I see only: $ despite the content of the file /home/svn/.bashrc . And in this "crippled" prompt I also cannot use TAB key to autocomplete paths. If I run echo $PS1 as svn user I get empty result. How can I fix this user? | Check the shell you have assigned. If you used the useradd command in Ubuntu 18, the default login shell will be /bin/sh and you will get output like you mentioned. You can change the login shell by executing the command: sudo usermod -s /bin/bash svn | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/479086",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/318785/"
]
} |
479,094 | What is the naming convention standard for Ethernet and Wi-Fi interfaces on a Linux machine? We are developing a tool that should show only the Ethernet and Wi-Fi interfaces of the Linux machine and its current status. For example, below is the list of network interfaces (both physical and virtual) on my Linux (Ubuntu) machine: docker0 , enp0s25 , lo , wlp3s0 When I run the tool, below is the result I get: enp0s25 , wlp3s0 We have written the code with the logic that all the Ethernet interfaces always start with the letter e and Wi-Fi interfaces always start with the letter w . Is the logic right? If not, how can we address this? | For systemd's predictable interface names , the prefixes can be seen in udev-builtin-net_id.c . They are two-character prefixes based on the type of interface: en → Ethernet ib → InfiniBand sl → Serial line IP ( SLIP ) wl → WLAN ww → WWAN Meaning for both the traditional ethX style of naming and the newer systemd naming, an initial letter e should be an Ethernet interface for any automatically generated interface names. All Wi-Fi interfaces should begin with a w in both schemes, although not all interfaces beginning with w will be Wi-Fi. If this tool has to work in an arbitrary environment (rather than just on internal environments which you control), note that users may rename interfaces on Linux systems with arbitrary names, such as wan0 , lan0 , lan1 , dmz0 which will break any assumptions about initial letters. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/479094",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/314175/"
]
} |
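As a follow-up to the caveat above about renamed interfaces: instead of keying off the first letter of the name, a tool can classify interfaces from sysfs, which survives renaming. A rough sketch (the sysfs paths are standard on Linux; refine the output format as needed):
for dev in /sys/class/net/*; do
    iface=${dev##*/}
    [ "$iface" = lo ] && continue
    if [ -d "$dev/wireless" ] || [ -e "$dev/phy80211" ]; then
        echo "$iface: Wi-Fi"
    elif [ -e "$dev/device" ]; then
        echo "$iface: wired"
    fi
done
Interfaces with no device link (bridges such as docker0, veth pairs, and so on) simply fall through and are not listed.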
479,117 | Objective: Check in /etc/shadow if user password is locked, i.e. if the first character in the 2nd field in /etc/shadow, which contains the user's hashed password, is an exclamation mark ('!') Desired output: a variable named $disabled containing either 'True' or 'False' Username is in the $uname varable and I do something like this: disabled=`cat /etc/shadow |grep $uname |awk -F\: '{print$2}'`# I now have the password and need one more pipe into the check for the character# which is where I'm stuck. I would like to do like (in PHP syntax):| VARIABLE=="!"?"True":"False"` This is a fragment of a script that will be run by Cron with root permissions, so there is access to all desirable information. | Don't parse the shadow file manually Parsing such files is fragile if you fail to account for all eventualities (for example, disabled passwords are often encoded as a single * ; do other solutions deal with that?). Additionally, authentication may not happen through shadow (but instead through NIS or ldap or who knows what).There are standard tools that will deal with all this for you. In this case, passwd : -S, --status Display account status information. The status information consists of 7 fields. The first field is the user's login name. The second field indicates if the user account has a locked password (L), has no password (NP), or has a usable password (P). The third field gives the date of the last password change. The next four fields are the minimum age, maximum age, warning period, and inactivity period for the password. These ages are expressed in days. So passwd -S | cut -d ' ' -f 2 will yield what you need. A simple if/then will translate it to your desired variable: if [ "$(passwd -S "$USER" | cut -d ' ' -f 2)" = "P" ]then disabled="False"else disabled="True"fi The same applies to locking a user's password; this is preferably done through usermod ( --lock option), not editing shadow manually. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/479117",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/313777/"
]
} |
479,177 | I wrote the following script: the load function should set the array disk[a]=1 and disk[b]=2 and so on Then the function out should print the array ${disk[a]} , and ${disk[b]} , and so on But what we get from function out is always the number 4 Instead, I want to get the following: 1234 What is wrong here? How to fix it so function out will print: 1234 the script: #!/bin/bashfunction load{ counter=1 for input in a b c d do disk[$input]=$counter let counter=$counter+1 echo ${disk[$input]} done}function out{ counter=1 for input in a b c d do echo ${disk[$input]} let counter=$counter+1 done}echo "run function load"loadecho "run function out"out the output: ./testrun function load1234run function out4444 | Bash arrays are indexed arrays by default: An indexed array is created automatically if any variable is assigned to using the syntax name[subscript]=value ... but you are using letters as the index, so you probably want an associative array, which means you need a declare -A disk before calling the functions. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/479177",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/237298/"
]
} |
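For completeness, a runnable version of the fix described above, keeping the names from the question (expected output shown in the comment):
#!/bin/bash
declare -A disk      # associative array, so a, b, c, d are valid keys
load() {
    counter=1
    for input in a b c d; do
        disk[$input]=$counter
        counter=$((counter + 1))
    done
}
out() {
    for input in a b c d; do
        echo "${disk[$input]}"
    done
}
load
out                  # prints 1 2 3 4, one value per line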
479,199 | Does the MBR of the disk contain this information and therefore when i call a command like fdisk, a kernel level code eventually runs and reads it from a specific part in MBR? If so, which part of it? What offset? If it's not in the MBR, then how can these types of commands find it? They can't be reading it from the beginning of a partition considering they need to calculate the starting address of that partition and they need the sector size to do so, don't they? How are commands like fdisk implemented to find this information? Where do they read it from? | A device’s sector size isn’t stored in the MBR. User space commands such as fdisk use the BLKBSZGET and BLKSSZGET ioctl s to retrieve the sector sizes from disks. Those ioctl s are handled by drivers in the kernel, which retrieve the relevant information from the drives themselves. (There isn’t much documentation about the relevant ioctl s; you need to check the kernel source code .) You can see the relevant information using other tools which query drives directly, for example hdparm . On a small SSD, hdparm -I tells me [...]Logical Sector size: 512 bytesPhysical Sector size: 512 bytesLogical Sector-0 offset: 0 bytes[...]cache/buffer size = unknownForm Factor: 2.5 inchNominal Media Rotation Rate: Solid State Device[...] On a large spinning disk with 4K sectors, I get instead [...]Logical Sector size: 512 bytesPhysical Sector size: 4096 bytesLogical Sector-0 offset: 0 bytes[...]cache/buffer size = unknownForm Factor: 3.5 inchNominal Media Rotation Rate: 5400[...] | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/479199",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/302721/"
]
} |
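As a practical aside to the answer above, the sector sizes the kernel reports through those ioctls can also be read from userspace with standard tools, without writing any code (replace /dev/sda with the device in question):
blockdev --getss /dev/sda     # logical sector size in bytes, needs root
blockdev --getpbsz /dev/sda   # physical sector size in bytes, needs root
cat /sys/block/sda/queue/logical_block_size
cat /sys/block/sda/queue/physical_block_size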
479,254 | In cron's manpage (cronie) -p Allows Cron to accept any user set crontables. I learned that cron daemon will implicitly search for and run the cron jobs defined in /etc/crontab , /etc/cron.d/* and /var/spool/cron/cronstabs/* . What is -p used for? Is it to explicitly tell cron to search for and run the cron jobs defined in a crontab file which is stored in some place other than those mentioned above? Or is it to copy a crontab file stored in some place other than those mentioned above to one of the places mentioned above? Does the cron on Debian or its derivatives have -p option? I don't find -p on the manpage of cron on Ubuntu. Thanks. | The CAVEATS section of the cronie's cron(8) man page says (emphasis mine): All crontab files have to be regular files or symlinks to regular files, they must not be executable or writable for anyone else but the owner. This requirement can be overridden by using the -p option on the crond command line. So it is in fact documented on the man page, although not in the most obvious location. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/479254",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/674/"
]
} |
479,266 | I have a collection of opus music files that needs tagging and a text file containing the appropriate tags. I'm trying to accomplish the tagging through a Python script but I need a way to edit the metadata of the opus files. So a CLI program that can tag opus files. opusenc , which is part of opus-tools doesn't accept opus files as input. There are a lot of graphical programs that can edit the tags of opus files but that doesn't work in this case. I was thinking ffmpeg could do it but the wiki page doesn't mention opus (or ogg or flac which also uses a "Vorbis Comment" to store metadata as I understand it). I assume my two suggestions would re-encode the files and I'm not sure if that will damage the sound quality. If so it would be preferable to use something that doesn't re-encode. I'm running Manjaro Linux. | I guess I basically had the answer in my question. FFMpeg works just fine when I just decided to try it. It doesn't seem to re-encode because the process is instantaneous. I just did: ffmpeg -i <input-file> -acodec copy -metadata title="<title>" -metadata artist=<artist> <output-file> | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/479266",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/257141/"
]
} |
479,267 | I have this: muh_dir=`cd $(dirname "$BASH_SOURCE") && pwd` and yeah I tested the above (it has backticks) and it doesn't work well with whitespace in the pwd.On the other hand, this is better: muh_dir="$(cd $(dirname "$BASH_SOURCE") && pwd)" My question is - this adds 3 chars to my command the syntax changes in my editor. The first way is much nicer..is there anyway to handle whitespace with the shorter syntax or do I just bite the bullet? | The outer quotes are not what makes the difference here. In an assignment such as var=$(...) or var=`...` the result of the command substitution is not subject to word splitting, so the surrounding double quotes, and the choice of backticks versus $( ), change nothing in that respect. What actually breaks on whitespace is the unquoted inner substitution: in cd $(dirname "$BASH_SOURCE") the output of dirname is split into words before cd sees it. Quote that instead: muh_dir=$(cd "$(dirname "$BASH_SOURCE")" && pwd) The backtick form can be fixed the same way, but the awkward nesting and quoting rules of backticks are precisely why $( ) is generally preferred. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/479267",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/113238/"
]
} |
479,298 | I am just starting learning regex and want to use it instead of others everywhere for practice. I encounter such a situation when tried to find files with extensions sh or md $ find . regex ".*\.(sh|md)$"../bogus.py./cofollow.py./data8.txt./example.sh./longest_word_2.sh./posit_param.sh./cobroadcast2.py Unfortunately it output /bogus.py , I notice the BRE rules and tried escape () $ find . -regex ".*\.\(sh|md\)$"#get nothing return After series of search, I got -regextype solution Regular Expressions - Finding Files $ find . -regextype posix-extended -iregex ".*\.(sh|md)$"./example.sh./longest_word_2.sh./posit_param.sh$ find . -regextype egrep -iregex ".*\.(sh|md)$"./example.sh./longest_word_2.sh./posit_param.sh./table_regex_bat.md Additionally, a nice modular solution $ find -type f | egrep ".*\.(sh|md)$"./example.sh./longest_word_2.sh./posit_param.sh./table_regex_bat.md However, there is a shortcut in BSD to accomplish such a taskwith a -E predicate. $ /usr/bin/find -E . -regex ".*\.(sh|md)$"./example.sh./longest_word_2.sh./posit_param.sh I am determined to exclusively take the GNU tool in order to make my codes and skills portable. So I am starting to alias 'find -regextype egrep`, Unfortunately find obtain the $1 as path. How could I solve them problem in a handy way? | Don't use an alias to pass arguments around. They are not portable and useful only on interactive shells. Use a function instead and pass the arguments as paths needed regexFind() { (( "$#" )) || { printf 'Insufficient arguments provided \n' >&2; return 1; } find "$1" -regextype egrep -iregex ".*\.(sh|md)$"} and call the function as regexFind "/home/foo/bar" Also to add to your findings, note that bash also has an intrinsic way to glob files. You just need to enable a couple of extended shell options to make it work. The -s enables the option and -u disables it. The nullglob allows to ignore un-expanded glob results as valid matches. So assuming you want to match files ending with *.sh and *.md , you just need to navigate to that particular directory and do shopt -s nullglobfileList=(*.sh)fileList+=(*.md)shopt -u nullglob and print the results to see below. Remember to quote the expansion to prevent the filenames from undergoing Word-Splitting. printf '%s\n' "${fileList[@]}" | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/479298",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/260114/"
]
} |
479,349 | I have a college exercise which is "Find all files which name ends in ".xls" of a directory and sub-directories that have the word "SCHEDULE", without using pipes and using only some of the commands GREP, FIND, CUT, PASTE or LS I have reached this command: ls *.xls /users/home/DESKTOP/*SCHEDULE This shows me only the .xls files on the Desktop and opens all directories with SCHEDULE on the name but when it does it it shows me all the files on the directories insted of only the .xls ones. | Assuming that by "file" they mean "regular file", as opposed to directory, symbolic link, socket, named pipe etc. To find all regular files that have a filename suffix .xls and that reside in or below a directory in the current directory that contain the string SCHEDULE in its name: find . -type f -path '*SCHEDULE*/*' -name '*.xls' With -type f we test the file type of the thing that find is currently processing. If it's a regular file (the f type), the next test is considered (otherwise, if it's anything but a file, the next thing is examined). The -path test is a test agains the complete pathname to the file that find is currently examining. If this pathname matches *SCHEDULE*/* , the next test will be considered. The pattern will only match SCHEDULE in directory names (not in the final filename) due to the / later in the pattern. The last test is a test against the filename itself, and it will succeed if the filename ends with .xls . Any pathname that passes all tests will by default be printed. You could also shorten the command into find . -type f -path '*SCHEDULE*/*.xls' | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/479349",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/318998/"
]
} |
479,352 | We know that the backtick character is used for command substitution : chown `id -u` /mydir Which made me wonder: is the tick character ´ used for anything in the Linux shell? Note: incidentally, command substitution can also be written more readably as chown $(id -u) /mydir | The character sets used historically with Unix, including ASCII , don’t have a tick character, so it wasn’t used. As far as I’m aware no common usage for that character has been introduced since it’s become available; nor would it, since it’s not included in POSIX’s portable character set . ` was apparently originally included in ASCII (along with ^ and ~) to serve as a diacritic. When ASCII was defined, the apostrophe was typically represented by a ′-style glyph (“prime”, as used for minutes or feet) rather than a straight apostrophe ', and was used as a diacritic acute accent too. Historically, in Unix shell documentation, ` was referred to as a grave accent , not a backtick. The lack of a forward tick wouldn’t have raised eyebrows, especially since ' was used as the complementary character (see roff syntax). | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/479352",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/34039/"
]
} |
479,355 | I was wondering if there's a shorthand for this kind of stuff. Currently I can do. var_empty=; [ -n "$var" ] || var_empty=1; #intermediary variableecho "REPL_if_var_empty_otherwise_empty=${var_empty:+REPL}" Is this doable without the intermediary? I tried sh -c 'readonly SAME=SAME; var=; echo test0=${var:-SAME} test1=${SAME:+REPL}; echo REPL_if_var_empty_otherwise_empty=${${var:-SAME}:+REPL}' but this results in a bad substitution error in the last echo ( test0=SAME test1=REPL ). Why is that? Is there another way? | In bash, ksh or zsh in ksh emulation, you could do: r=empty;output=${r[${#var}]} In zsh : output=${${var:-empty$var}%$var} Otherwise, you can always do output=;[ "$var" ]||output=empty | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/479355",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/23692/"
]
} |
479,371 | I can' t use the command route : routebash: route: command not found Why is it not found? (I'm using debian 9). I tried to run it as root but it still does not work. However it is supposed to work also without root. Additional diagnostics: whereis routeroute: which route (empty output). export PATH=$PATH:/sbin (no output) and nothing changes. I already have iproute2 installed, to be sure I ran: apt --reinstall install iproute2 | The "command not found" error means you don't have the command installed. Using Debian's "search the contents of packages" page brings up: .../sbin/route net-tools [not powerpc].... So (providing your CPU isn't PowerPC) you should install the net-tools package. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/479371",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/293740/"
]
} |
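A side note to the answer above: since iproute2 is already installed, the information route would print is also available without adding net-tools, for example:
ip route show       # the routing table, roughly what route -n prints
ip -brief address   # addresses per interface
Installing net-tools is only needed if the legacy route/ifconfig commands themselves are required.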
479,421 | In the man page for ld.so(8) , it says that When resolving library dependencies, the dynamic linker first inspects each dependency string to see if it contains a slash (this can occur if a library pathname containing slashes was specified at link time). If a slash is found, then the dependency string is interpreted as a (relative or absolute) pathname, and the library is loaded using that pathname. How can gcc link against a library with a path with a slash? I have tried with -l but that seems to work only with a library name which it uses to search various paths, not with a path argument itself. One follow-on question: when linking to a relative path in this way, what is the path relative to (e.g. the directory containing the binary or the working directory at runtime)? All of the linking guides I find when searching discuss using RPATH , LD_LIBRARY_PATH , and RUNPATH . RPATH is deprecated and most discussions discourage using LD_LIBRARY_PATH . RUNPATH with a path starting with $ORIGIN allows for a link to a relative path, but it is a little fragile because it can be overridden by LD_LIBRARY_PATH . I wanted to know if a relative path would be more robust (since I can't find anything discussing this I am guessing not, likely because the path is relative to the runtime directory). | If we (for the moment) ignore the gcc or linking portion of the question and instead modify a binary with patchelf on a linux system $ ldd hello linux-vdso.so.1 => (0x00007ffd35584000) libhello.so.1 => not found libc.so.6 => /lib64/libc.so.6 (0x00007f02e4f6f000) /lib64/ld-linux-x86-64.so.2 (0x00007f02e533c000)$ patchelf --remove-needed libhello.so.1 hello$ patchelf --add-needed ./libhello.so.1 hello$ ldd hello linux-vdso.so.1 => (0x00007ffdb74fc000) ./libhello.so.1 => not found libc.so.6 => /lib64/libc.so.6 (0x00007f2ad5c28000) /lib64/ld-linux-x86-64.so.2 (0x00007f2ad5ff5000) We now have a binary with a relative path library, which if there exist suitable directories with libhello.so.1 files present in them $ cd english/$ ../hellohello, world$ cd ../lojban/$ ../hellocoi rodo we find that the path is relative to the working directory of the process , which opens up all sorts of problems, especially security problems . There might be some productive use for this, testing different versions of a library, perhaps. It would likely be simpler to compile two different binaries, or patchelf in the necessary library without the complication of a relative working directory. compile steps libhello only has a helloworld call $ cat libhello.c#include <stdio.h>void helloworld(void){ printf("coi rodo\n");} and was compiled via CFLAGS="-fPIC" make libhello.ogcc -shared -fPIC -Wl,-soname,libhello.so.1 -o libhello.so.1.0.0 libhello.o -lcln -s libhello.so.1.0.0 libhello.so.1ln -s libhello.so.1.0.0 libhello.so and the hello that makes the helloworld call was compiled via $ cat hello.cint main(void){ helloworld(); return 0;}$ CFLAGS="-lhello -L`pwd`/english" make hello without patchelf In hindsight, modify the gcc command to use a relative directory path: $ gcc -shared -fPIC -Wl,-soname,./libhello.so.1 -o libhello.so.1.0.0 libhello.o -lc$ cd ..$ rm hello$ CFLAGS="-lhello -L`pwd`/lojban" make hello$ ldd hello | grep hello ./libhello.so.1 => not found$ english$ ../hellohello, world It's probably more sensible to compile the library in a normal fashion and then fiddle around with any binaries as necessary using patchelf . | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/479421",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/55666/"
]
} |
479,424 | I would like to rename a number of files that have the following formatting: ABCD_20181102.jpgDEDE_2_20181030.jpg i.e usually 4 random letters, followed by an underscore and a date (year, month, day). Or 4 random letters followed by an underscore, a random number, an underscore and then a date. I would like to rename these files to date, (random number if needed), and then random letters, like the following: 20181102_ABCD.jpg20181030_2_DEDE.jpg | This is a good fit for the Perl rename utility (packaged as rename on Debian/Ubuntu, and as file-rename or perl-rename on some other systems), which renames files according to a regular expression: rename -n 's/^([A-Za-z]{4})_((?:[0-9]+_)?)([0-9]{8})\.jpg$/${3}_${2}${1}.jpg/' *.jpg The first capture group is the four letters, the optional second group is the number together with its underscore, and the third group is the eight-digit date; the replacement just reorders them, so ABCD_20181102.jpg becomes 20181102_ABCD.jpg and DEDE_2_20181030.jpg becomes 20181030_2_DEDE.jpg. The -n option only prints what would be renamed; remove it once the output looks right. Note that some distributions ship a different rename (from util-linux) with a much simpler fixed-string syntax that cannot do this kind of reordering. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/479424",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/286799/"
]
} |
479,445 | I have two VMs that run a DB2 database under a Linux guest OS. Only one database will be active at a time, though I'd like both VMs to be running at the same time. The database would be stored on a virtual disk image (thinking .img, but open to other formats) backed by a SAN and that disk image would be attached to both VMs Is it safe to have the virtual disk file configured in each VM if the virtual disk is only mounted by the "active" machine? | It can be made to work, but it is fragile. An ordinary filesystem such as ext4 or XFS assumes it is the only writer on the block device, so this layout is only safe while you can guarantee that exactly one guest has the filesystem mounted at any moment, that the standby guest never touches it in the meantime (no read-only mount, no automatic LVM or udev scan, no fsck), and that the standby re-reads the device from scratch before mounting after a switch-over. Nothing in the stack enforces any of that for you, and a single mistake, such as both guests mounting the image at once, will corrupt the filesystem. Hypervisors also generally require such a disk to be marked shareable (and host-side write caching disabled) before they will let two running VMs attach it at the same time. Safer designs are to attach the image only to the currently active VM and hot-detach/hot-attach it on failover, to put proper cluster software in charge of the fencing, or to use the database's own replication/standby features (DB2 HADR) instead of sharing one image. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/479445",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/319070/"
]
} |
479,452 | I understand that in GNU/Linux, file permissions are also called a file's mode and that the term mask can mean at least these different meanings: The umask shell builtin (the usual meaning). The umask shell builtin's corresponding system call . The umask shell builtin's corresponding command umask . A shell process-value also referred to as file creation mask , as well as bitmask or just mask . A user-specific file creation mask effecting processes unique to that user (then it called a user mask → a user's file creation mask). The umask shell built in One can use the umask shell builtin by executing the command umask with a proper argument: By doing so, we set a mask for the current shell process tree ; Either for all users in the current shell process tree, or for our own user only; Yet, in general, any such changes will be inherited to new processes, possibly of another shell ). Mathematical logic basis I understand that Mathematical logic includes the operation of conjunction A.K.A anding ( ∧ ) which is in the basis of the umask shell builtin. Thus: The and of a set of operands is true, if and only if , all of its operands are true I further understand that there's an identically named bitwise operation based on that logic. Anding is different than addition of numbers ( x + y → z ) or concatenation of strings ( x alongside y → xy ). My problem I understand that one can "mask a mode" this way: OCTAL BINARY HUMAN-READABLE 0666 0110110110 -rw-rw-rw-∧ 0555 0101101101 -r-xr-xr-x 0444 0100100100 -r--r--r-- But I am not sure that's correct. My question What is masking a mode (and how come 0666 ∧ 0555 → 0444 )? | umask , the shell command, and umask , the function, both set the file creation mask, which is also known as the umask . You’ve rephrased this as A umask means both a shell builtin command and a shell function that bases that command and contains a variable commonly referred to as file creation mask with a value referred to as bitmask or just mask . which is incorrect on a number of counts: the umask function isn’t a shell function; see the link above; the function doesn’t contain a variable; it sets the current process’s file creation mask; the value acted upon isn’t just commonly referred to as “file creation mask”, it is the file creation mask (and its value isn’t referred to as “bitmask” or “mask”). A mask effects some utilities in the current process tree including other shells where it can be changed (hence shell Y won't necessarily have the mask of shell X). A mask doesn’t affect anything in general. The file creation mask affects the current process and is inherited by all newly created child processes. Child processes are free to change it themselves again. This file creation mask acts on the permissions of newly created files. File permissions, also known as the file mode, are a set of twelve bits encoding the access rights of the file’s owner, group, and other users; see this canonical answer for details. They are typically represented as a four- or three-digit octal value. They aren’t a stream of bits. The permissions of newly created files are either specified by the program which is creating a given file, or specified by default ( i.e. specified by the function they use to create files). Examples of the former include programs which create files (or directories) using open or creat or mkdir , which have to explicitly specify the mode they want. Examples of the latter include programs which use fopen , where files end up with the default 0666 mode. 
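A quick shell illustration of that default path (the file names here are arbitrary): a redirection also requests mode 0666, so creating an empty file under two different umask values shows the mask at work:
umask 022; : > /tmp/umask-demo;  ls -l /tmp/umask-demo    # -rw-r--r--  (0666 masked to 0644)
umask 077; : > /tmp/umask-demo2; ls -l /tmp/umask-demo2   # -rw-------  (0666 masked to 0600)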
The current umask value masks this mode. You rephrased this as Some utilities like mkdir creates a file with a standalone mode (the mask is ignored). Some utilities use the fopen() function, where files are first created with the default 0666 mode but umask changes their mode as to that of the mask right after their creation. which is incorrect on a number of counts: the mask is applied to the requested mode before the file is created, not afterwards; mkdir (which is a function here, not a utility, but the same applies to the utility of the same name) most certainly does not ignore the file creation mask. When umask is taken into account, the resulting mode is the result of applying umask as a bitmask to the requested mode: each bit set in the requested mode is checked against the corresponding bit in umask , and preserved only if the latter isn’t set. In terms of binary operations, the requested mode is bitwise-anded with the complement of the umask . Thus a umask of 0022 with a mode of 0666 results in 0644; not by subtraction, but because 0666 & 0755 (0022’s complement) is 0644. Likewise, a umask of 0011 with a mode of 0666 results in 0666. Let’s look at the calculation in more detail. It’s often represented as a subtraction, including in the answer you linked to, but it’s important to understand that it isn’t; umask is applied as a mask. A value of 0022 is applied thus: Octal BinaryMode 0666 000110110110Mask 0022 000000010010 Bits set here mask bits aboveResult 0644 000110100100 Octal BinaryMode 0644 000110100100Mask 0022 000000010010Result 0644 000110100100 This is usually calculated by bitwise-anding the mode with the mask’s complement: Octal BinaryMask 0022 000000010010Compl. 7755 111111101101Mode 0666 000110110110Result 0644 000110100100 chmod applies the mode specified on its command-line without taking umask into account. Other tools do this too, even when creating files; thus cp and tar , when instructed to preserve permissions, will copy permissions or restore permissions without taking umask into account. This answer goes into more detail. Your final questions are Is my understanding accurate enough and how come 0666 ∧ 0555 → 0444? The answer to the first is apparently not. The answer to the second is because that’s how bitwise and works. Rewrite the operands in binary: Octal Binary0666 0001101101100555 000101101101 Now perform a bitwise and on each bit position. This means taking each vertically-aligned pair of bits, and and them (in the above example, 0 ∧ 0 three times, then 1 ∧ 1, 1 ∧ 0, 0 ∧ 1, 1 ∧ 1 etc.): 000100100100 (0 ∧ 0 is 0, 0 ∧ 1 is 0, 1 ∧ 0 is 0, 1 ∧ 1 is 1). Convert the above back to octal, and you end up with 0444. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/479452",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/-1/"
]
} |
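A quick way to verify the masking arithmetic described in the entry above is to let the shell compute it. This is a minimal illustrative sketch added here, not part of the original answer; bash/ksh-style arithmetic with octal constants is assumed, and the trailing & 0777 simply restricts the result to the nine permission bits:

    mode=0666 mask=0022
    printf 'mode %04o masked by %04o -> %04o\n' "$mode" "$mask" "$(( mode & ~mask & 0777 ))"   # prints 0644
    printf '0666 AND 0555 -> %04o\n' "$(( 0666 & 0555 ))"                                      # prints 0444

The current mask of a running shell can be inspected with umask (octal) or umask -S (symbolic), and its effect checked by creating a scratch file and looking at it with ls -l.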
479,630 | Until recently my server with Postfix has worked well. Then I enforced some restrictions to a) combat spam and b) disable sending emails to me on behalf of my own name -- I have begun receiving emails from my own email address demanding that I send bitcoin to someone. I want to fix both a and b. And now I can't send email via my own postfix server:

Client host rejected: cannot find your reverse hostname, [<my ip here>]

Note that I carry my laptop to different places and countries, and connect to WiFi from those. And I want to be able to send email always. Here's part of my Postfix config. For the database of accounts and domains I use Postgresql.

smtpd_helo_required = yes
smtpd_client_restrictions = permit_mynetworks, permit_sasl_authenticated, reject_unknown_reverse_client_hostname, reject_unknown_client_hostname, reject_unauth_pipelining
smtpd_helo_restrictions = permit_mynetworks, permit_sasl_authenticated, reject_invalid_helo_hostname,### reject_non_fqdn_helo_hostname, reject_unauth_pipelining
smtpd_sender_restrictions = permit_mynetworks, reject_sender_login_mismatch, permit_sasl_authenticated, reject_non_fqdn_sender, reject_unknown_sender_domain, reject_unauth_pipelining
smtpd_relay_restrictions = permit_mynetworks, permit_sasl_authenticated, reject_non_fqdn_recipient, reject_unknown_recipient_domain, reject_unauth_destination
smtpd_recipient_restrictions = permit_mynetworks, permit_sasl_authenticated, reject_non_fqdn_recipient, reject_unknown_recipient_domain, reject_unauth_pipelining
smtpd_data_restrictions = permit_mynetworks, permit_sasl_authenticated, reject_multi_recipient_bounce, reject_unauth_pipelining
# deliver mail for virtual users to Dovecot's LMTP socket
virtual_transport = lmtp:unix:private/dovecot-lmtp
# query to find which domains we accept mail for
virtual_mailbox_domains = pgsql:/etc/postfix/virtual_mailbox_domains.cf
# query to find which email addresses we accept mail for
virtual_mailbox_maps = pgsql:/etc/postfix/virtual_mailbox_maps.cf
# query to find a user's email aliases
virtual_alias_maps = pgsql:/etc/postfix/virtual_alias_maps.cf
virtual_alias_domains =
alias_database =
alias_maps =
mynetworks = 127.0.0.0/8 [::ffff:127.0.0.0]/104 [::1]/128
inet_interfaces = all

| Short Answer

Your postfix configuration is unnecessarily complex. It seems likely that some of the restrictions placed in your configuration either negate one another or are so restrictive that you may need to ssh into your server and manually send each outgoing mail. Rather than go through the posted configuration, this answer will provide an overview of what is generally required to configure a reasonably safe email system for most purposes. It's not intended to be an exhaustive tutorial on how to configure each component. However, there is a list of online resources at the end which I have found to be rather helpful in configuring my own email servers. There are a few extra requirements from your comments which will not be addressed, such as handling multiple domains using a single postfix installation. It is assumed that a reasonably adept administrator will be able to tweak the settings and add the necessary multi-domain configuration elements.

Overview of Elements for Modern Small Email Service Providers

Graphical View of Security and Reputation Related Email Headers

Modern email systems have evolved to include many security and domain related reputation elements.
Perhaps the easiest way to begin is looking at a diagram of some of the more important newer elements contained in an email's header.

Protecting a Domain from Spoof Attempts and Reputation Problems

There are three essential components to configure for ensuring the authenticity of email traffic that seems to originate from a domain. These are:

Sender Policy Framework (SPF)
Domain Keys Identified Mail (DKIM)
Domain-based Message Authentication Reporting & Conformance (DMARC)

Each of these has a daemon running on the server as well as DNS records for connecting servers, in order to automate checking of domain policies and verifying cryptographic signatures.

Simple SPF explanation: Postfix passes outgoing email through the SPF daemon which evaluates whether or not the sender matches the outgoing mail policy. The receiving mail server retrieves the domain's SPF record from DNS and checks the record against the SPF header the sending server placed on the email. postfix compatible SPF implementation

Simple DKIM explanation: Postfix passes outgoing email through the DKIM daemon which automatically signs the message and includes a hash of the message in the email headers. The receiving mail server retrieves the domain's DKIM public key from a DNS record and verifies the body hash of the message. postfix compatible DKIM implementation

Simple DMARC explanation: The receiving mail server retrieves the DMARC policy record from DNS and accepts or rejects the message, or performs a soft fail of the message. postfix compatible DMARC implementation

It is considered best security practice to enter a "reject" DMARC policy record even if your domain is not sending any email.

Example of DNS entries for SPF, DKIM, and DMARC

        MX 10 mail.domain.tld.
        TXT "v=spf1 a:mail.domain.tld -all"
mail._domainkey IN TXT ( "v=DKIM1; h=sha256; k=rsa; "
        "p=MIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA0w7N0fWtTndtlR+zOTbHyZOlvFiM73gyjjbHDN1OhhcPCbhRUqTsA7A8uXHGHao6nZ5qejlVtn6NfZwbn7rdhJ0MTjlgTnTsVa8E9rgS6dFo0bEIzeFecDr/4XOF9wpNjhHlnHm4wllkPheFnAWpZQiElZeYDN5Md47W1onwZ3DwcYJNX/3/GtfVZ0PrjisC4P0qeu+Z8jIgZc"
        "MLvBm8gj2pX3V6ntJY9QY09fWSVskvC6BQhi6ESOrqbM63f8ZJ4N/9ixPAMiD6k/lyGCokqc6sMuP6EC7z5McEOBbAVEuNy3idKi1sjwQH8WZHrvlSBlzx1wwmpFC1gqWcdTiEGwIDAQAB" ) ; ----- DKIM key mail for domain
_dmarc IN TXT v=DMARC1;p=reject;sp=reject;fo=0:d;adkim=s;aspf=s;rua=mailto:[email protected];ruf=mailto:[email protected];
_domainkey IN TXT o=-;

You may notice that the DNS record named mail._domainkey contains a cryptographic public key. This key and its associated record can be generated using the opendkim-genkey program installed when the opendkim package is installed on your server. Key generation is rather simple:

opendkim-genkey -b 2048 -d yourdomain -h sha256 -s mail

This command will generate a private key, a public key, and a correctly formatted DNS record. The private key needs to be placed in the directory listed in your opendkim configuration, while the public key and its associated DNS record are placed in your domain's DNS zone file. Unfortunately, some DNS providers have length restrictions on records. So, make sure your DNS provider can accommodate the public key's length.

Adding SPF and DKIM Milters

SPF

Excerpt from the policyd-spf man page:

POSTFIX INTEGRATION
1. Add the following to /etc/postfix/master.cf:
    policyd-spf unix - n n - 0 spawn
        user=policyd-spf argv=/usr/bin/policyd-spf
2. Configure the Postfix policy service in /etc/postfix/main.cf:
    smtpd_recipient_restrictions =
        ...
        reject_unauth_destination
        check_policy_service unix:private/policyd-spf
        ...
    policyd-spf_time_limit = 3600

DKIM

The opendkim daemon runs on a UNIX socket which is configurable either as a standard UNIX socket or running on an inetd service port. On my Debian installations, this configuration is located at /etc/default/opendkim . Once opendkim is running, the milter needs to be added to the postfix configuration in /etc/postfix/main.cf . Here's an example from a working server:

# DKIM
milter_default_action = accept
milter_protocol = 2
smtpd_milters = inet:localhost:8891

DMARC

For small or personal email servers, DMARC can be simply limited to the DNS record. The DMARC checking daemon allows for rejecting incoming mail per the sending domain's policy, as well as sending any requested reporting back to the sending domain. The reporting is considered being a "well-behaved neighbor". However, I generally don't enable it for small or personal systems since the configuration overhead is quite high. The DMARC DNS record, however, is very important for maintaining domain reputation. The record is used by all modern large email providers to accept or reject mails that seem to originate from your domain. So, without the DMARC record, all incoming mail that looks like it was sent by your domain gets counted toward your domain's reputation score. Thus, a domain that doesn't expect to send any mail at all should publish a "reject" DMARC record to avoid reputation problems from spoofed messages sent by spammers.

TLS Connections for Email Servers and Clients

Your configuration information indicates you are running Dovecot and Postfix. Dovecot connects with Postfix on your server. In many small installations, the server connection is performed on the same physical/logical hardware through Unix sockets. So, the Mail User Agent (MUA) connection is handled by the middleware and not the actual mail server. In your case, that would be Dovecot. TLS should be enabled and set up properly in Dovecot in order to securely transmit your username and password from your MUA (e.g. Evolution, Sylpheed, Mutt, etc.). For reference, see Dovecot's TLS setup documentation. It's possible, but not necessary, for the "server-to-server" or "middleware" to postfix connection to be encrypted by the same TLS certificate. However, in the case of a small email server, the "middleware" to postfix connection doesn't necessarily need to be encrypted since it's on the same hardware.

Obtaining a LetsEncrypt TLS Certificate for your Mail Server and MUA interface (POP3, IMAP, etc)

The LetsEncrypt project has done a very good job of simplifying obtaining Domain Validated TLS certificates. Assuming your domain already has a certificate, you can add the mail server's sub-domain to the certificate using the --expand option. Stop the postfix and dovecot services. Stop the web server, if one is running. Stop any service running that is currently included on the certificate. Then expand the certificate:

certbot certonly --expand -d domain.tld,www.domain.tld,mail.domain.tld

Then add the certificate path to your main.cf configuration:

smtpd_tls_key_file = /etc/letsencrypt/live/domain.tld/privkey.pem
smtpd_tls_cert_file = /etc/letsencrypt/live/domain.tld/fullchain.pem

And also add the certificate path to your Dovecot configuration, per Dovecot's documentation listed above. Restart all services and check that the configuration works. It should be noted that the SMTP TLS connection is the connection your server makes with other servers.
The Dovecot TLS connection, on the other hand, is generally what someone would connect to in order to send email from a non-webmail client.

SMTP Server to Server TLS Compatibility Setting

Some mail servers are still not using TLS encrypted connections for mails received from other servers. In such cases, strict TLS enforcement will result in undeliverable mail to those servers and domains. However, many large email providers will mark an incoming email as suspicious if the connection is not secured with TLS. So, in order to maintain the best compatibility, include the following setting in your /etc/postfix/main.cf :

smtpd_tls_security_level = may

It's also important to note that most email providers do not require this server to server connection to use a CA approved certificate, and validation checks are generally not performed even if the certificate is CA approved. However, the TLS certificate included in Dovecot should be CA approved. A self-signed certificate in Dovecot will result in a warning when using most MUAs such as sylpheed , evolution , or thunderbird .

Reasonable SMTP Client Restrictions

In my experience, 99% of spam can be rejected via SPF and DKIM checking along with RBL checking. Here's a portion of my "standard" client restrictions. It's important to note that the restrictions are processed in order. The order I have below works very well in my experience:

smtpd_client_restrictions =
    permit_mynetworks
    permit_sasl_authenticated
    check_helo_access hash:/etc/postfix/helo_access
    check_client_access hash:/etc/postfix/client_checks
    reject_unauth_destination
    check_policy_service unix:private/policy-spf
    reject_rbl_client cbl.abuseat.org
    reject_rbl_client pbl.spamhaus.org
    reject_rbl_client sbl.spamhaus.org
    reject_rbl_client bl.blocklist.de
    reject_unknown_client

SMTPD Client Restrictions Compatibility Setting

The restriction that will have the most exceptions is the reject_unknown_client setting. Many online services do not configure their reverse domain correctly and/or use a series of sending domains which may or may not be mapped properly. So, for the most compatibility with poorly configured email providers, remove that restriction. However, nearly 100% of spam is sent from email servers without proper reverse domain records.

HELO Checks

It's common for spammers to attempt to spoof a HELO by sending your domain's name or IP address, or localhost. These spoof attempts can be rejected immediately using the check_helo_access option as shown above. The HELO text database consists of a domain name, IP address, or IP address range, followed by the action and a message to send back. A fairly simple HELO check follows:

# helo access
# check_helo_access hash:/etc/postfix/helo_access
localhost        REJECT    Only I am me
127.0.0.1        REJECT    Only I am me
example.com      REJECT    Only I am me
dns.host.ip.addr REJECT    Only I am me

"example.com" is your domain, and "dns.host.ip.addr" is your server's DNS listed IP address. This database example results in something like this from one of my actual server logs:

Oct 30 06:32:49 <domain> postfix/smtpd[22915]: NOQUEUE: reject: RCPT from xxx-161-xxx-132.dynamic-ip.xxxx.net[xxx.161.xxx.132]: 554 5.7.1 <xxx.xxx.xxx.xxx>: Helo command rejected: Only I am me; from=<[email protected]> to=<[email protected]> proto=SMTP helo=<xxx.xxx.xxx.xxx>

The potential spammer/spoofer gets the message "Only I am me". It doesn't matter what the message is, but at least the spammer/spoofer knows you know.
Make sure to generate the postfix database using:

postmap helo_access

Adding Exceptions to the Restrictions via a client_check whitelist

Individual client checking goes something like this:

ip.addr.hack.attmpt REJECT
misconfig.server.but.good OK

Make sure to generate the postfix database using:

postmap client_checks

And that's about it. I get about 3 spam mails a month, with hundreds of spam rejected.

Resources

DMARC/SPF Policy Evaluator
DKIM Public Key Evaluator
MxToolbox Website
Email Security Grader | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/479630",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/319211/"
]
} |
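Once the DNS records and milters from the entry above are in place, they can be spot-checked from any shell. The commands below are an illustrative addition rather than part of the original answer; domain.tld and mail.domain.tld are the same placeholder names used there, and dig and openssl are assumed to be installed:

    dig +short TXT domain.tld                   # should show the v=spf1 policy
    dig +short TXT mail._domainkey.domain.tld   # should show the v=DKIM1 public key (selector "mail")
    dig +short TXT _dmarc.domain.tld            # should show the v=DMARC1 policy
    openssl s_client -starttls smtp -connect mail.domain.tld:25 -servername mail.domain.tld < /dev/null   # confirms STARTTLS and shows the served certificate

Online checkers such as those in the Resources list perform the same lookups, but the dig queries are handy for confirming that DNS changes have actually propagated before testing mail flow.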
479,710 | We can use the following in order to test telnet VIA port; in the following example we test port 6667: [root@kafka03 ~]# telnet kafka02 6667Trying 103.64.35.86...Connected to kafka02.Escape character is '^]'.^CConnection closed by foreign host Since on some machines we can't use telnet (for internal reasons) what are the alternatives to check ports, as telnet? | Netcat ( nc ) is one option. nc -zv kafka02 6667 -z = sets nc to simply scan for listening daemons, without actually sending any data to them -v = enables verbose mode | {
"score": 8,
"source": [
"https://unix.stackexchange.com/questions/479710",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/237298/"
]
} |
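When neither telnet nor nc is available, there are a few other ways to poke a TCP port from the shell. These are hedged examples using the same host and port as the question above; bash, curl, and the timeout utility are assumed to be present:

    timeout 3 bash -c 'exec 3<>/dev/tcp/kafka02/6667' && echo open || echo closed-or-filtered   # bash's built-in /dev/tcp pseudo-device
    curl -v telnet://kafka02:6667    # curl can open a raw TCP connection using the telnet:// scheme
    nmap -p 6667 kafka02             # if running a port scanner is permitted on the network

The /dev/tcp trick needs bash itself (it is not available in plain sh/dash), which makes it useful on minimal hosts where no extra packages can be installed.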
479,859 | I booted Ubuntu in recovery mode to try and fix a disk error. However, when I run: fsck I just get: fsck from util-linux 2.31.1 And nothing else happens. Any fsck command is simply printing 'fsck from util-linux 2.31.1' and exiting, regardless of options and arguments. This is preventing me from fixing the disk error and being able to recover the system. | The command I needed was fsck.ext4 , e.g. fsck.ext4 -F This let me fix the disk issues and recover the system. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/479859",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/93285/"
]
} |
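A slightly fuller recipe for the same situation, added as an illustrative sketch rather than taken from the original answer; the device name /dev/sda2 is only an example, and the filesystem being checked should not be mounted read-write:

    lsblk -f                      # identify the partition and its filesystem type
    fsck.ext4 -f -y /dev/sda2     # force a full check of an ext4 filesystem and repair problems automatically
    fsck -y /dev/sda2             # generic form; dispatches to the matching fsck.* helper for the detected filesystem

Running fsck with an explicit device avoids relying on whatever default behaviour the installed util-linux version has when no filesystem is given on the command line.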