source_id | question | response | metadata
---|---|---|---|
403,728 | I'm doing a small project with two Raspberry Pi's. I want to sync their clocks using Precision Time Protocol (one as master clock and another as slave clock). Now I'm interested in logging the data you see when using the -V command. I'm doing the following atm: For RPi 1: sudo ptpd --interface wlan0 -M -L For RPi 2: sudo ptpd --interface wlan0 -s -L -V > onewaydelay.csv The output is the typical #Timestamp, State, Clock ID, One Way Delay, Offset From Master, Slave to Master, Master to Slave, Observed Drift, Last Packet Received. The thing is I'm only interested in logging the Timestamp and One Way Delay so that I can plot the One Way Delay and see how it changes over time. So my question is: Is it possible to edit the output (using the -V command) so that it only prints Timestamp and One Way Delay? Using the -V command the output looks (one line - comma separated): 2017-10-12 14:41:48.763883, slv, b827ebfffe9adfc7(unknown)/01, 0.045879356, -0.145651366, 0.319974024, -0.108966784, -500000.000000000, S | Have you tried logrotate? man logrotate Here is a guide that could help. How to Use logrotate to Manage Log Files | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/403728",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/259885/"
]
} |
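A sketch of an alternative approach, not part of the original answer: the -V statistics stream can be filtered on the fly so that only the timestamp and one-way delay columns reach the CSV. The field positions are assumed from the sample line quoted in the question.

```bash
# Assumes the -V column order shown in the question:
# field 1 = timestamp, field 4 = one-way delay (comma-separated).
sudo ptpd --interface wlan0 -s -L -V \
  | awk -F', *' '{ print $1 "," $4 }' > onewaydelay.csv
```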
403,761 | I am learning C++, though I have now a general question, i.e. not for programmers, but IMHO best fit for Linux users. I want the: make command to always build a new binary, which could easily be accomplished by creating a Bash alias, something like: alias make='\make clean && \make' I would like not to be defining that alias , can this be achieved on the Makefile 's side? The simplest Makefile , I could come up with, follows: PROGRAM=projectCXXFLAGS=-std=c++17 -Wall -Wextra -Werror -Wpedantic -pedantic-errorsOPTIMIZATION=-O2.PHONY: build.PHONY: clean.PHONY: distribbuild: ${PROGRAM}clean: rm ${PROGRAM}distrib: tar -czf ${PROGRAM}.tar.gz ${PROGRAM}.cpp Makefile${PROGRAM}: g++ ${PROGRAM}.cpp ${CXXFLAGS} ${OPTIMIZATION} -o ${PROGRAM} Now, in this state it says: make: Nothing to be done for 'build'. Essentially, it probably sees recent timestamp and does not build, I think. EDIT1: I will only add, that this issue is not Tab related, it builds normal when binary is not in place. EDIT2: After I have made modifications as per Sato Katsura's answer: The Makefile looks like this: https://www.vlastimilburian.cz/public/linux/Makefile.gz (I uploaded it, because Tabs can't be shown in text) and it still says as the above information: make: Nothing to be done for 'build'. EDIT3: $ make --versionGNU Make 4.1Built for x86_64-pc-linux-gnuCopyright (C) 1988-2014 Free Software Foundation, Inc.License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html>This is free software: you are free to change and redistribute it.There is NO WARRANTY, to the extent permitted by law. | Here's a quick checklist: use TABs to indent action lines ( rm , tar , g++ etc.) use $(FOO) to expand variable FOO instead of ${FOO} or $FOO add dependencies where relevant. Your Makefile should look something like this (again, lines are indented with TABs, not spaces): PROGRAM=projectCXXFLAGS=-std=c++17 -Wall -Wextra -Werror -Wpedantic -pedantic-errorsOPTIMIZATION=-O2.PHONY: build.PHONY: clean.PHONY: distrib build: $(PROGRAM)clean: rm $(PROGRAM)distrib: tar -czf $(PROGRAM).tar.gz $(PROGRAM).cpp Makefile$(PROGRAM): $(PROGRAM).cpp g++ $(PROGRAM).cpp $(CXXFLAGS) $(OPTIMIZATION) -o $(PROGRAM) | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/403761",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/126755/"
]
} |
403,782 | I have this fstab entry : LABEL=cloudimg-rootfs / ext4 defaults,noatime,nobarrier,data=writeback,rw 0 0 I added rw to see if would fix my issue but it wont. After boot I get a read-only file system that I can't fix either using common results found on google. Useful output. There are no errors with dmesg | grep error root@w2:~# dmesg | grep EXT4[ 8.372564] EXT4-fs (sda1): mounted filesystem with ordered data mode. Opts: (null)[ 8.892244] EXT4-fs (sda1): Cannot change data mode on remount | Instead of setting it late in the fstab, why not use tune2fs to make it the default for that filesystem: tune2fs -o journal_data_writeback /dev/sdXY Do this once then reboot. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/403782",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/119404/"
]
} |
403,783 | The echo one; echo two > >(cat); echo three; command gives unexpected output. I read this: How process substitution is implemented in bash? and many other articles about process substitution on the internet, but don't understand why it behaves this way. Expected output: onetwothree Real output: prompt$ echo one; echo two > >(cat); echo three;onethreeprompt$ two Also, this two commands should be equivalent from my point of view, but they don't: ##### first command - the pipe is used.prompt$ seq 1 5 | cat12345##### second command - the process substitution and redirection are used.prompt$ seq 1 5 > >(cat)prompt$ 12345 Why I think, they should be the same? Because, both connects the seq output to the cat input through the anonymous pipe - Wikipedia, Process substitution . Question: Why it behaves this way? Where is my error? The comprehensive answer is desired (with explanation of how the bash does it under the hood). | Yes, in bash like in ksh (where the feature comes from), the processes inside the process substitution are not waited for (before running the next command in the script). for a <(...) one, that's usually fine as in: cmd1 <(cmd2) the shell will be waiting for cmd1 and cmd1 will be typically waiting for cmd2 by virtue of it reading until end-of-file on the pipe that is substituted, and that end-of-file typically happens when cmd2 dies. That's the same reason several shells (not bash ) don't bother waiting for cmd2 in cmd2 | cmd1 . For cmd1 >(cmd2) , however, that's generally not the case, as it's more cmd2 that typically waits for cmd1 there so will generally exit after. That's fixed in zsh that waits for cmd2 there (but not if you write it as cmd1 > >(cmd2) and cmd1 is not builtin, use {cmd1} > >(cmd2) instead as documented ). ksh doesn't wait by default, but lets you wait for it with the wait builtin (it also makes the pid available in $! , though that doesn't help if you do cmd1 >(cmd2) >(cmd3) ) rc (with the cmd1 >{cmd2} syntax), same as ksh except you can get the pids of all the background processes with $apids . es (also with cmd1 >{cmd2} ) waits for cmd2 like in zsh , and also waits for cmd2 in <{cmd2} process redirections. bash does make the pid of cmd2 (or more exactly of the subshell as it does run cmd2 in a child process of that subshell even though it's the last command there) available in $! , but doesn't let you wait for it. If you do have to use bash , you can work around the problem by using a command that will wait for both commands with: { { cmd1 >(cmd2); } 3>&1 >&4 4>&- | cat; } 4>&1 That makes both cmd1 and cmd2 have their fd 3 open to a pipe. cat will wait for end-of-file at the other end, so will typically only exit when both cmd1 and cmd2 are dead. And the shell will wait for that cat command. You could see that as a net to catch the termination of all background processes (you can use it for other things started in background like with & , coprocs or even commands that background themselves provided they don't close all their file descriptors like daemons typically do). Note that thanks to that wasted subshell process mentioned above, it works even if cmd2 closes its fd 3 (commands usually don't do that, but some like sudo or ssh do). Future versions of bash may eventually do the optimisation there like in other shells. Then you'd need something like: { { cmd1 >(sudo cmd2; exit); } 3>&1 >&4 4>&- | cat; } 4>&1 To make sure there's still an extra shell process with that fd 3 open waiting for that sudo command. 
Note that cat won't read anything (since the processes don't write on their fd 3). It's just there for synchronisation. It will do just one read() system call that will return with nothing at the end. You can actually avoid running cat by using a command substitution to do the pipe synchronisation: { unused=$( { cmd1 >(cmd2); } 3>&1 >&4 4>&-); } 4>&1 This time, it's the shell instead of cat that is reading from the pipe whose other end is open on fd 3 of cmd1 and cmd2 . We're using a variable assignment so the exit status of cmd1 is available in $? . Or you could do the process substitution by hand, and then you could even use your system's sh as that would become standard shell syntax: { cmd1 /dev/fd/3 3>&1 >&4 4>&- | cmd2 4>&-; } 4>&1 though note as noted earlier that not all sh implementations would wait for cmd1 after cmd2 has finished (though that's better than the other way round). That time, $? contains the exit status of cmd2 ; though bash and zsh make cmd1 's exit status available in ${PIPESTATUS[0]} and $pipestatus[1] respectively (see also the pipefail option in a few shells so $? can report the failure of pipe components other than the last) Note that yash has similar issues with its process redirection feature. cmd1 >(cmd2) would be written cmd1 /dev/fd/3 3>(cmd2) there. But cmd2 is not waited for and you can't use wait to wait for it either and its pid is not made available in the $! variable either. You'd use the same work arounds as for bash . | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/403783",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/109397/"
]
} |
403,870 | I need to hide some sensitive arguments to a program I am running, but I don't have access to the source code. I am also running this on a shared server so I can't use something like hidepid because I don't have sudo privileges. Here are some things I have tried: export SECRET=[my arguments] , followed by a call to ./program $SECRET , but this doesn't seem to help. ./program `cat secret.txt` where secret.txt contains my arguments, but the almighty ps is able to sniff out my secrets. Is there any other way to hide my arguments that doesn't involve admin intervention? | As explained here , Linux puts a program's arguments in the program's data space, and keeps a pointer to the start of this area. This is what is used by ps and so on to find and show the program arguments. Since the data is in the program's space, it can manipulate it. Doing this without changing the program itself involves loading a shim with a main() function that will be called before the real main of the program. This shim can copy the real arguments to a new space, then overwrite the original arguments so that ps will just see nuls. The following C code does this. /* https://unix.stackexchange.com/a/403918/119298 * capture calls to a routine and replace with your code * gcc -Wall -O2 -fpic -shared -ldl -o shim_main.so shim_main.c * LD_PRELOAD=/.../shim_main.so theprogram theargs... */#define _GNU_SOURCE /* needed to get RTLD_NEXT defined in dlfcn.h */#include <stdlib.h>#include <stdio.h>#include <string.h>#include <signal.h>#include <unistd.h>#include <dlfcn.h>typedef int (*pfi)(int, char **, char **);static pfi real_main;/* copy argv to new location */char **copyargs(int argc, char** argv){ char **newargv = malloc((argc+1)*sizeof(*argv)); char *from,*to; int i,len; for(i = 0; i<argc; i++){ from = argv[i]; len = strlen(from)+1; to = malloc(len); memcpy(to,from,len); memset(from,'\0',len); /* zap old argv space */ newargv[i] = to; argv[i] = 0; } newargv[argc] = 0; return newargv;}static int mymain(int argc, char** argv, char** env) { fprintf(stderr, "main argc %d\n", argc); return real_main(argc, copyargs(argc,argv), env);}int __libc_start_main(pfi main, int argc, char **ubp_av, void (*init) (void), void (*fini)(void), void (*rtld_fini)(void), void (*stack_end)){ static int (*real___libc_start_main)() = NULL; if (!real___libc_start_main) { char *error; real___libc_start_main = dlsym(RTLD_NEXT, "__libc_start_main"); if ((error = dlerror()) != NULL) { fprintf(stderr, "%s\n", error); exit(1); } } real_main = main; return real___libc_start_main(mymain, argc, ubp_av, init, fini, rtld_fini, stack_end);} It is not possible to intervene on main() , but you can intervene on the standard C library function __libc_start_main , which goes on to call main. Compile this file shim_main.c as noted in the comment at the start, and run it as shown. I've left a printf in the code so you check that it is actually being called. For example, run LD_PRELOAD=/tmp/shim_main.so /bin/sleep 100 then do a ps and you will see a blank command and args being shown. There is still a small amount of time that the command args may be visible. To avoid this, you could, for example, change the shim to read your secret from a file and add it to the args passed to the program. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/403870",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/260020/"
]
} |
403,890 | I have the following (MWE) shell script foo : #!/bin/bashARGS=("$@") # all arguments## => if it exists, we need to drop the argument "-D" herels -l ${ARGS[@]} | sort -fk8 If foo is called with argument -D (the position in the list of arguments is unknown), how can I remove -D from the list of arguments? I found out that unset ARGS[${#ARGS[@]}-1] can drop the last argument, for example, but I'm not sure in which order the arguments are passed (so I first need to know at which place the argument is and then remove it in case it is provided). | The no-frills approach is to simply loop over the positional parameters, collecting all but -D into an array, and then use set -- to update the params: for param; do [[ ! $param == '-D' ]] && newparams+=("$param")doneset -- "${newparams[@]}" # overwrites the original positional params | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/403890",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/37937/"
]
} |
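To show how that filter might slot into the question's foo wrapper (an illustration, not part of the original answer):

```bash
#!/bin/bash
# Drop any "-D" from the arguments, then pass the rest on unchanged.
for param; do
    [[ $param == '-D' ]] || newparams+=("$param")
done
set -- "${newparams[@]}"

ls -l "$@" | sort -fk8
```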
403,906 | Some command line interface tools return a broken console when canceled by CTRL+C . Sometimes the text is invisible, or there are graphic problems until I run the command reset . (I use bash, but expect it is independent of the shell.) Does this effect have a name?What causes this, and how can programmers prevent this in the tools?Is there a strategy how this problem is addressed in the major programming languages? | A console sometimes needs a reset(1) (or some stty(1) command) because the state of a pseudo-terminal does not change when some process (e.g. a program started by your shell) terminates. Read the tty demystified . (I find the handling of pseudo-terminals and pseudottys the most difficult part of Linux) Is there a strategy how this problem is addressed in the major programming languages? A well-behaved program dealing with the terminal and changing its mode or line discipline should try hard to avoid crashing and issue the appropriate calls (see termios(3) ) to put the terminal in the right state. BTW, libraries like ncurses or readline are helpful (but you need to call their cleanup routines appropriately). See signal(7) and signal-safety(7) . Avoiding crashing in your code is difficult. Read about undefined behavior . An imperfect workaround could be to define a shell function which runs your program then does a reset (that could sometimes be inappropriate). | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/403906",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/26440/"
]
} |
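A concrete form of the shell-function workaround mentioned at the end of that answer could look like the sketch below; mytool stands in for whichever program leaves the terminal in a bad state.

```bash
# Wrapper that restores sane terminal settings after the program exits;
# "mytool" is a hypothetical program name.
mytool() {
    command mytool "$@"   # bypass the function to run the real binary
    local status=$?
    stty sane             # or the heavier-weight: reset
    return "$status"
}
```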
403,972 | I am using Mac OS X Version 10.13.1 and I have just installed anaconda. I have created a virtual environment using the command conda create -n py3 python=3 Then, I have started the python interpreter using the command python To my surprise, the preinstalled python 2.7 from /usr/bin showed up instead of python 3.6. In order to check what is going wrong I issued the command which python The result was even more surprising, I got the following: /Users/karlstroetmann/anaconda2/envs/py3/bin/python When I then invoked the command /Users/karldrstroetmann/anaconda2/envs/py3/bin/python I did get python 3.6.3. But I don't understand why I cannot invoke this version by just typping python . What am I missing here? Any hints would be very much appreciated. | It's very likely that the python command has been hashed and that you need to clear the cache. In order to see what executable is actually being run you can use the type command, e.g.: type -a python Unlike the which command, the type command is aware of hashed programs, as well as aliases and shell functions. For further discussion of which (no pun intended) commands to use to determine which programs are executed by the shell, see the following post: Why not use "which"? What to use then? Alternatively, you can also use the hash command itself to determine if a given command has been hashed, e.g: hash -t python You can also list all hashed commands by running hash without any arguments, i.e.: hash Similarly, you can use the alias command to check if a given command is an alias, e.g.: alias python And you can list all active aliases as well: alias To clear the cached Python program you can use the following command: hash -d python Alternatively, you can clear everything all at once: hash -r To clear a single alias you could use the unalias command, e.g.: unalias python Or you could clear all aliases at once: unalias -a | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/403972",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/260095/"
]
} |
403,979 | Let's say that in my .zshrc I have: alias ls='ls --color=auto'alias ll='ls -halF' As expected, whence ls returns ls --color=auto , and whence ll returns ls -halF . Is there any option (nothing in help whence helped) or one-liner such that <rwhence> ll will produce ls --color=auto -halF , or similar? | It's very likely that the python command has been hashed and that you need to clear the cache. In order to see what executable is actually being run you can use the type command, e.g.: type -a python Unlike the which command, the type command is aware of hashed programs, as well as aliases and shell functions. For further discussion of which (no pun intended) commands to use to determine which programs are executed by the shell, see the following post: Why not use "which"? What to use then? Alternatively, you can also use the hash command itself to determine if a given command has been hashed, e.g: hash -t python You can also list all hashed commands by running hash without any arguments, i.e.: hash Similarly, you can use the alias command to check if a given command is an alias, e.g.: alias python And you can list all active aliases as well: alias To clear the cached Python program you can use the following command: hash -d python Alternatively, you can clear everything all at once: hash -r To clear a single alias you could use the unalias command, e.g.: unalias python Or you could clear all aliases at once: unalias -a | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/403979",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/240807/"
]
} |
403,983 | In a bash script (Arch Linux) I have the following rsync command: rsync –nvaAHX --inplace --delete-delay --exclude-from="/etc/$path1/exclude-list-$configName.txt" "$new_snap/" "$BACKUPDIR" The rsync command fails with the following error: rsync: --delete does not work without --recursive (-r) or --dirs (-d). Of course, that message is misleading as "a" implies "r". If I remove the option "--delete-delay" from the rsync command, I get this different error: rsync: link_stat "/some/path/–aAHX" failed: No such file or directory (2) The value shown at "/some/path" is the current working directory. If I change the current directory, that value in the error message changes as well. However, why the options "-aAHX" would be appended to any part of the path is confusing. The computer is a fully updated Arch Linux system. I just rebooted it as well. 4.13.11-1-ARCH #1 SMP PREEMPT Thu Nov 2 10:25:56 CET 2017 x86_64 GNU/Linux rsync program location: # which rsync/usr/bin/rsync Here is the test script: #!/bin/bashpath1=xyzconfigName=rootnew_snap=/.snapshots/1/snapshotBACKUPDIR=/backup/$configNameecho "showing exclude file contents:"cat "/etc/$path1/exclude-list-$configName.txt"echoecho rsync –nvaAHX --inplace --delete-delay --exclude-from="/etc/$path1/exclude-list-$configName.txt" "$new_snap/" "$BACKUPDIR"rsync –nvaAHX --inplace --delete-delay --exclude-from="/etc/$path1/exclude-list-$configName.txt" "$new_snap/" "$BACKUPDIR" Here are the contents of the file "/etc/$path/exclude-list-$configName.txt": "dev/*""proc/*""sys/*""tmp/*""run/*""mnt/*""media/*""lost+found"".trash*/*"".Trash*/*" Here is some testing without any script at all. I find it baffling. # mkdir adir# mkdir bdir# touch adir/afile1# touch adir/afile2# ls -la adir/total 0drwxr-x--x 1 root root 24 Nov 12 02:21 .drwxr-xr-x 1 user user 2080 Nov 12 02:28 ..-rw-r----- 1 root root 0 Nov 12 02:21 afile1-rw-r----- 1 root root 0 Nov 12 02:21 afile2# ls -la bdir/total 0drwxr-x--x 1 root root 0 Nov 12 02:21 .drwxr-xr-x 1 user user 2080 Nov 12 02:28 ..# rsync -nva adir/ bdirsending incremental file list./afile1afile2sent 93 bytes received 25 bytes 236.00 bytes/sectotal size is 0 speedup is 0.00 (DRY RUN)# rsync -nva /home/user/adir/ /home/user/bdirsending incremental file list./afile1afile2sent 93 bytes received 25 bytes 236.00 bytes/sectotal size is 0 speedup is 0.00 (DRY RUN)# rsync –nvaAHX --inplace --delete-delay --exclude-from=/root/exclude-list-root.txt /home/user/adir/ /home/user/bdir/rsync: --delete does not work without --recursive (-r) or --dirs (-d).rsync error: syntax or usage error (code 1) at main.c(1567) [client=3.1.2]# rsync –nvaAHX --inplace --delete-delay /home/user/adir/ /home/user/bdir/rsync: --delete does not work without --recursive (-r) or --dirs (-d).rsync error: syntax or usage error (code 1) at main.c(1567) [client=3.1.2]# rsync –nvaAHX --inplace /home/user/adir/ /home/user/bdir/rsync: link_stat "/home/user/–nvaAHX" failed: No such file or directory (2)skipping directory .rsync error: some files/attrs were not transferred (see previous errors) (code 23) at main.c(1178) [sender=3.1.2]# rsync –nvaAHX /home/user/adir/ /home/user/bdir/rsync: link_stat "/home/user/–nvaAHX" failed: No such file or directory (2)skipping directory .rsync error: some files/attrs were not transferred (see previous errors) (code 23) at main.c(1178) [sender=3.1.2]# rsync –nva /home/user/adir/ /home/user/bdir/rsync: link_stat "/home/user/–nva" failed: No such file or directory (2)skipping directory .rsync error: some files/attrs were not 
transferred (see previous errors) (code 23) at main.c(1178) [sender=3.1.2] | That dash in front of n in –nvaHAX is not an ordinary dash but a slightly longer em-dash (or hyphen). This may have happened if you're copy and pasting from a "smart" editor or word processor that replaces certain characters with the corresponding typographical character. On my system, copying and pasting the first part of your command results in: $ rsync –nva adir/ bdir/ rsync: link_stat "/tmp_mfs/shell-ksh.D1Mq1Xht/\#342\#200\#223nva" failed: No such file or directory (2)skipping directory .rsync error: some files/attrs were not transferred (see previous errors) (code 23) at main.c(1178) [sender=3.1.2] As you can see, my terminal displays the error message slightly differently from yours and shows that the dash is in fact a Unicode character (or something similar, I don't know much about character encodings). | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/403983",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/15010/"
]
} |
403,988 | These are the individual threads of Packet Receiver process. Is there any way to kill any individual thread? Does Linux provide any specific command which can kill or send stop signal to any particular thread under a process? | You might use tgkill(2) or tkill in your C program (you'll need to use syscall(2) ) but you don't want to . From inside your program you can use pthread_kill(3) - which is rarely useful. (I don't exactly know what effect would have tgkill or tkill - e.g. with SIGKILL or SIGTERM - on a thread) The pthreads(7) library uses low-level stuff (including some signal(7) -s and futex(7) -s etc...; see also nptl(7) ) and if you raw-killed (with tkill or tgkill ) some individual thread, your process would be in some wrong state (so undefined behavior ) because some internal invariant would be broken. So study the documentation of your packet receiver program and find some other way. If it is free software , study its source code and improve it. Read more carefully signal(7) and signal-safety(7) . Signals are meant to be sent to processes (by kill(2) ) and handled in threads. And in practice, signals and threads don't marry well. Read some pthread tutorial . A common trick, when coding a multi-threaded program (and wanting to handle external signals like SIGTERM ) is to use a pipe(7) to your own process and poll(2) that pipe in some other thread (you might also consider the Linux specific signalfd(2) ), with a signal hander write(2) -ing a byte or a few of them into that pipe. That well known trick is well explained in Qt documentation (and you could use it in your own program, even without Qt). | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/403988",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/197355/"
]
} |
404,006 | When we perform this (on linux redhat 7.x) umount /grop/sdcumount: /grop/sdc: target is busy. (In some cases useful info about processes that use the device is found by lsof(8) or fuser(1)) We can see that mount failed on busy. But when we do remount then ...remount is success as the following: mount -o rw,remount /grop/sdcecho $?0 So very interesting. Does remount use the option like ( umount -l ) ? what the different between remount to umount / mount ? | man mount : remount Attempt to remount an already-mounted filesystem. This is commonly used to change the mount flags for a filesystem, especially to make a readonly filesystem writeable. It does not change device or mount point. The remount functionality follows the standard way how the mount command works with options from fstab. It means the mount command doesn't read fstab (or mtab) only when a device and dir are fully specified. The remount option is used when the file system isn't currently in use to modify the mount option from ro to rw . target is busy. If the file system is already in use you can't umount it properly , you need to find the process which accessed your files ( fuser -mu /path/ ) , killing the running process then unmounting the file. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/404006",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/237298/"
]
} |
404,036 | In my Debian machine, the current version of apache2 is 2.4.10: root@9dd0fd95a309:/# apachectl -VServer version: Apache/2.4.10 (Debian) I would like to upgrade apache to a latest version (at least 2.4.26):I tried: root@9dd0fd95a309:/# apt-get install apache2Reading package lists... DoneBuilding dependency treeReading state information... Doneapache2 is already the newest version.0 upgraded, 0 newly installed, 0 to remove and 48 not upgraded. But it doesn't find any update. What can i do to upgrade to a latest version ? | Do not manually upgrade Apache. Manual upgrading for security is unnecessary and probably harmful. How Debian releases software To see why this is, you must understand how Debian deals with packaging, versions, and security issues. Because Debian values stability over changes, the policy is to freeze the software versions in the packages of a stable release. This means that for a stable release very little changes, and once things work they should continue working for a long time. But, what if a serious bug or security issue is discovered after release of a Debian stable version? These are fixed, in the software version provided with Debian stable . So if Debian stable ships with Apache 2.4.10 , a security issue is found and fixed in 2.4.26 , Debian will take this security fix, and apply it to 2.4.10 , and distribute the fixed 2.4.10 to its users. This minimizes disruptions from version upgrades, but it makes version sniffing such as Tenable does meaningless. Serious bugs are collected and fixed in point releases (the .9 in Debian 8.9 ) every few months. Security fixes are fixed immediately and provided through an update channel. In general, as long as you run a supported Debian version, stick to stock Debian packages, and stay up to date on their security updates, you should be good. Your Tenable report To check if Debian stable is vulnerable for your issues, Tenable's "2.4.x < 2.4.27 multiple issues" is useless. We need to know exactly which security issues they are talking about. Luckily, every significant vulnerability is assigned a Common Vulnerability and Exposures (CVE) identifier, so we can talk easily about specific vulnerabilities. For example, on this page for Tenable issue 101788 we can see that that issue is about vulnerabilities CVE-2017-9788 and CVE-2017-9789. We can search for these vulnerabilities on the Debian security tracker . If we do that, we can see that CVE-2017-9788 has the status "fixed" in or before version 2.4.10-10+deb8u11 . Likewise, CVE-2017-9789 is fixed . Tenable issue 10095 is about CVE-2017-3167 , CVE-2017-3169 , CVE-2017-7659 , CVE-2017-7668 , and CVE-2017-7679 , all fixed. So if you're on version 2.4.10-10+deb8u11 , you should be safe from all these vulnerabilities! You can check this with dpkg -l apache2 (ensure your terminal is wide enough to show the full version number). Staying up to date So, how do you ensure you're up to date with these security updates? First, you need to have the security repository in your /etc/apt/sources.list or /etc/apt/sources.list.d/* , something like this: deb http://security.debian.org/ jessie/updates main This is a normal part of any installation, you should not have to do anything special. Next, you must ensure that you install updated packages. This is your responsibility; it is not done automatically. A simple but tedious way is to log in regularly and run # apt-get update# apt-get upgrade Judging from the fact that you report your Debian version as 8.8 (we're at 8.9) and the ... 
and 48 not upgraded. from your post, you might want to do this soon. To be notified of security updates, I higly recommend subscribing to the Debian security announcements mailinglist . Another option is ensuring your server can send you emails, and installing a package like apticron , which emails you when packages on your system need updating. Basically, it regularly runs the apt-get update part, and pesters you to do the apt-get upgrade part. Finally, you could install something like unattended-upgrades , which not only checks for updates, but automatically installs the updates without human intervention. Upgrading the packages automatically without human supervision carries some risk, so you need to decide for yourself if that is a good solution for you. I use it and I'm happy with it, but caveat updator. Why upgrading yourself is harmful In my second sentence, I said upgrading to the latest Apache version is probably harmful . The reason for this is simple: if you follow Debian's version of Apache, and make a habit of installing the security updates, then you are in a good position, security-wise. Debians security team identifies and fixes security issues, and you can enjoy that work with minimal effort. If, however, you install Apache 2.4.27+, say by downloading it from the Apache website and compiling it yourself, then the work of keeping up with security issues is fully yours. You need to track security issues, and go through the work of downloading/compiling/etc every time a problem is found. It turns out this is a fair amount of work, and most people slack off. So they end up running their self-compiled version of Apache that becomes more and more vulnerable as issues are found. And so they end up a lot worse than if they simply had followed Debian's security updates. So yes, probably harmful. That's not to say there's no place for compiling software yourself (or selectively taking packages from Debian testing or unstable), but in general, I recommend against it. Duration of security updates Debian doesn't maintain its releases forever. As a general rule, a Debian release recieves full security support for one year after it has been obsoleted by a newer release. The release you're running, Debian 8 / jessie , is an obsoleted stable release ( oldstable in Debian terms). It will receive full security support until May 2018 , and long-term support until April 2020. I'm not entirely sure what the extent of this LTS support is. The current Debian stable release is Debian 9 / stretch . Consider upgrading to Debian 9 , which comes with newer versions of all software, and full security support for several years (likely until mid-2020). I recommend upgrading at a time that is convenient for you, but well before May 2018. Closing remarks Earlier, I wrote that Debian backports security fixes. This ended up being untenable for some software due to the high pace of development and high rate of security issues. These packages are the exception, and actually updated to a recent upstream version. Packages I know of this applies to are chromium (the browser), firefox , and nodejs . Finally, this entire way of dealing with security updates is not unique to Debian; many distributions work like this, especially the ones that favour stability over new software. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/404036",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/236919/"
]
} |
404,043 | I want to know if there is a way to remove application launchers in GNOME's activities menu: I also want to know If I can make folders (or groups) like the existing utilities folder in the picture: After I install applications they always install other dependencies which I don't want to browse through every time I am searching for an application. In Openbox this was exeptionally well done using ~/.config/openbox/menu.xml where I specified exact file / folder structure which benefited my productivity. | App launchers shown in GNOME Activities are located either in /usr/share/applications/ or ~/.local/share/applications/ as .desktop files. You can hide an individual app launcher from Activities by adding an extra NoDisplay=true line to the corresponding .desktop file. It is generally not advisable to edit the .desktop file located in /usr/share/applications/ . Instead copy the file to ~/.local/share/applications/ first and make the change to the copied file. If you can't find the right .desktop file in any of the two locations mentioned above, try /usr/local/share/applications too. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/404043",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/9135/"
]
} |
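As a worked illustration of the steps above (someapp.desktop is a made-up launcher name, and GNU sed is assumed):

```bash
# Copy the system launcher to the per-user location, then mark it hidden
# by adding NoDisplay=true inside the [Desktop Entry] group.
cp /usr/share/applications/someapp.desktop ~/.local/share/applications/
sed -i '/^\[Desktop Entry\]/a NoDisplay=true' \
    ~/.local/share/applications/someapp.desktop
```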
404,046 | I have 70 machine with CentOS 7.2 and chrony version 2.1.1 syncing perfect with my NTP server protocol v3. Recently I added 30 machines with CentOS 7.4 and chrony version 3.1 , but these 30 machines refuse to sync, I followed all the troubleshooting procedures and I am totally stuck figuring out how to fix that. commands output: chronyc trackingReference ID : 00000000 ()Stratum : 0Ref time (UTC) : Thu Jan 01 00:00:00 1970System time : 0.000000013 seconds fast of NTP timeLast offset : +0.000000000 secondsRMS offset : 0.000000000 secondsFrequency : 11.390 ppm fastResidual freq : +0.000 ppmSkew : 0.000 ppmRoot delay : 1.000000000 secondsRoot dispersion : 1.000000000 secondsUpdate interval : 0.0 secondsLeap status : Not synchronisedchronyc sources210 Number of sources = 1MS Name/IP address Stratum Poll Reach LastRx Last sample===============================================================================^? 172.17.172.220 4 7 377 644 -11.6s[ -11.6s] +/- 8147mstcpdump -n -i lo port 323 [Note: I applied "chronyc sources" in other terminal but nothing captured, in the working machines it capture some packets!]tcpdump: verbose output suppressed, use -v or -vv for full protocol decodelistening on lo, link-type EN10MB (Ethernet), capture size 262144 bytes^C0 packets captured0 packets received by filter0 packets dropped by kernel tcpdump -n -i eno2 port 123tcpdump: verbose output suppressed, use -v or -vv for full protocol decodelistening on eno2, link-type EN10MB (Ethernet), capture size 262144 bytes15:03:09.870958 IP 192.168.0.100.44841 > 172.17.172.220.ntp: NTPv4, Client, length 4815:03:10.112707 IP 172.17.172.220.ntp > 192.168.0.100.44841: NTPv3, Server, length 4815:11:45.678320 IP 192.168.0.100.46832 > 172.17.172.220.ntp: NTPv4, Client, length 4815:11:45.892482 IP 172.17.172.220.ntp > 192.168.0.100.46832: NTPv3, Server, length 4815:20:22.634981 IP 192.168.0.100.41310 > 172.17.172.220.ntp: NTPv4, Client, length 4815:20:22.871226 IP 172.17.172.220.ntp > 192.168.0.100.41310: NTPv3, Server, length 4815:28:55.820943 IP 192.168.0.100.39143 > 172.17.172.220.ntp: NTPv4, Client, length 4815:28:55.873988 IP 172.17.172.220.ntp > 192.168.0.100.39143: NTPv3, Server, length 4815:37:35.840998 IP 192.168.0.100.57333 > 172.17.172.220.ntp: NTPv4, Client, length 4815:37:35.913139 IP 172.17.172.220.ntp > 192.168.0.100.57333: NTPv3, Server, length 4815:46:15.814980 IP 192.168.0.100.56932 > 172.17.172.220.ntp: NTPv4, Client, length 4815:46:15.882518 IP 172.17.172.220.ntp > 192.168.0.100.56932: NTPv3, Server, length 4815:54:48.587705 IP 192.168.0.100.33711 > 172.17.172.220.ntp: NTPv4, Client, length 4815:54:48.632963 IP 172.17.172.220.ntp > 192.168.0.100.33711: NTPv3, Server, length 48^C14 packets captured14 packets received by filter0 packets dropped by kernelchronyc activity200 OK1 sources online0 sources offline0 sources doing burst (return to online)0 sources doing burst (return to offline)0 sources with unknown addresschronyc ntpdata 172.17.172.220Remote address : 172.17.172.220 (AC11ACDC)Remote port : 123Local address : 192.168.0.100 (C0A80064)Leap status : NormalVersion : 3Mode : ServerStratum : 4Poll interval : 8 (256 seconds)Precision : -6 (0.015625000 seconds)Root delay : 0.031219 secondsRoot dispersion : 8.063156 secondsReference ID : AC11AC88 ()Reference time : Sun Nov 12 09:21:36 2017Offset : +11.719727516 secondsPeer delay : 0.215471357 secondsPeer dispersion : 0.015626255 secondsResponse time : 0.000000000 secondsJitter asymmetry: -0.47NTP tests : 111 111 1101Interleaved : NoAuthenticated : 
NoTX timestamping : KernelRX timestamping : KernelTotal TX : 35Total RX : 35Total valid RX : 35chronyc serverstatsNTP packets received : 0NTP packets dropped : 0Command packets received : 6Command packets dropped : 0Client log records dropped : 0 What should I do to fix : Reference ID : 00000000 () Stratum : 0 NTP packets received : 0 I already rebooted whole OS, tried all chronyc commands like makestep and waitsync. but nothing working. I also tried to find reported bugs but couldn't find any related. note that firewalld disabled. and /etc/chrony.conf is exact copy from the working 70 machines. Update: By activating tcpdump's verbose mode, it seems chrony 3.1 timestamps corrupted, even by trying chronyc makestep 1 -1 it didn't sync, also I ran debug mode "see below": tcpdump -n -i eno2 port 123 -vvvvvtcpdump: listening on eno2, link-type EN10MB (Ethernet), capture size 262144 bytes20:25:15.708374 IP (tos 0x0, ttl 64, id 399, offset 0, flags [DF], proto UDP (17), length 76) 192.168.0.100.49105 > 172.17.172.220.ntp: [bad udp cksum 0x1a45 -> 0xf15f!] NTPv4, length 48 Client, Leap indicator: (0), Stratum 0 (unspecified), poll 6 (64s), precision 32 Root Delay: 0.000000, Root dispersion: 0.000000, Reference-ID: (unspec) Reference Timestamp: 0.000000000 Originator Timestamp: 3719492661.028820399 (2017/11/12 20:24:21) Receive Timestamp: 1089474065.361510029 (2070/08/17 02:09:21) Transmit Timestamp: 2540453432.493019109 (1980/07/03 13:30:32) Originator - Receive Timestamp: +1664948700.332689629 Originator - Transmit Timestamp: -1179039228.53580129020:25:15.964038 IP (tos 0x0, ttl 122, id 18400, offset 0, flags [none], proto UDP (17), length 76) 172.17.172.220.ntp > 192.168.0.100.49105: [udp sum ok] NTPv3, length 48 Server, Leap indicator: (0), Stratum 4 (secondary reference), poll 6 (64s), precision -6 Root Delay: 0.031219, Root dispersion: 8.154785, Reference-ID: 172.17.172.136 Reference Timestamp: 3719467375.940868199 (2017/11/12 13:22:55) Originator Timestamp: 2540453432.493019109 (1980/07/03 13:30:32) Receive Timestamp: 3719492726.471868199 (2017/11/12 20:25:26) Transmit Timestamp: 3719492726.471868199 (2017/11/12 20:25:26) Originator - Receive Timestamp: +1179039293.978849090 Originator - Transmit Timestamp: +1179039293.978849090 Debug mode output: /usr/sbin/chronyd -d -d2017-11-12T17:32:37Z main.c:473:(main) chronyd version 3.1 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER +SECHASH +SIGND +ASYNCDNS +IPV6 +DEBUG)2017-11-12T17:32:37Z conf.c:406:(CNF_ReadFile) Reading /etc/chrony.conf2017-11-12T17:32:37Z conf.c:572:(CNF_ParseLine) commandkey directive is no longer supported2017-11-12T17:32:37Z conf.c:572:(CNF_ParseLine) generatecommandkey directive is no longer supported2017-11-12T17:32:37Z local.c:149:(calculate_sys_precision) Clock precision 0.000000016 (-26)2017-11-12T17:32:37Z sys_linux.c:317:(get_version_specific_details) Linux kernel major=3 minor=10 patch=02017-11-12T17:32:37Z sys_linux.c:338:(get_version_specific_details) hz=100 nominal_tick=10000 max_tick_bias=10002017-11-12T17:32:37Z local.c:663:(lcl_RegisterSystemDrivers) Local freq=11.390ppm2017-11-12T17:32:37Z util.c:1172:(UTI_DropRoot) Dropped root privileges: UID 998 GID 9962017-11-12T17:32:37Z reference.c:209:(REF_Initialise) Frequency 11.390 +/- 0.031 ppm read from /var/lib/chrony/drift2017-11-12T17:32:37Z sys_generic.c:251:(update_slew) slew offset=0.000000e+00 corr_rate=0.000000e+00 base_freq=11.389873 total_freq=11.389862 slew_freq=-1.093958e-11 duration=10000.000000 slew_error=1.203354e-132017-11-12T17:32:37Z 
ntp_core.c:1089:(transmit_timeout) Transmit timeout for [172.17.172.220:123]2017-11-12T17:32:37Z ntp_io.c:831:(NIO_SendPacket) Sent 48 bytes to 172.17.172.220:123 from [UNSPEC] fd 82017-11-12T17:32:37Z ntp_io_linux.c:652:(NIO_Linux_ProcessMessage) Received 90 (48) bytes from error queue for 172.17.172.220:123 fd=8 if=3 tss=12017-11-12T17:32:37Z ntp_core.c:1994:(update_tx_timestamp) Updated TX timestamp delay=0.0000100862017-11-12T17:32:38Z ntp_io.c:669:(process_message) Received 48 bytes from 172.17.172.220:123 to 192.168.0.100 fd=8 if=3 tss=1 delay=0.0000143982017-11-12T17:32:38Z ntp_core.c:1563:(receive_packet) NTP packet lvm=34 stratum=4 poll=6 prec=-6 root_delay=0.031219 root_disp=8.201569 refid=ac11ac88 []2017-11-12T17:32:38Z ntp_core.c:1568:(receive_packet) reference=1510478575.936134800 origin=3724568162.405584875 receive=1510507968.499134800 transmit=1510507968.4991348002017-11-12T17:32:38Z ntp_core.c:1570:(receive_packet) offset=10.547374307 delay=0.099570973 dispersion=0.015824 root_delay=0.130790 root_dispersion=8.2173932017-11-12T17:32:38Z ntp_core.c:1573:(receive_packet) remote_interval=0.000000000 local_interval=0.099570973 server_interval=0.000000000 txs=K rxs=K2017-11-12T17:32:38Z ntp_core.c:1577:(receive_packet) test123=111 test567=111 testABCD=1111 kod_rate=0 interleaved=0 presend=0 valid=1 good=1 updated=12017-11-12T17:32:38Z sources.c:353:(SRC_AccumulateSample) ip=[172.17.172.220] t=1510507957.951760493 ofs=-10.547374 del=0.130790 disp=8.217393 str=42017-11-12T17:32:38Z sourcestats.c:658:(SST_GetSelectionData) n=1 off=-10.547374 dist=8.282888 sd=4.000000 first_ago=0.049800 last_ago=0.049800 selok=02017-11-12T17:32:38Z sources.c:770:(SRC_SelectSource) badstat=1 sel=0 badstat_reach=1 sel_reach=0 max_reach_ago=0.000000 Confirming that the issue within ver 3.1: By removing 3.1 yum remove chrony and reverting back to chronyd version 2.1.1 yum localinstall /home/chrony-2.1.1-1.el7.centos.x86_64.rpm , Sync worked perfect! | There is a similar bug in RH Bugzilla that was closed as notabug. The issue is a combination of poor time server and a change in defaults for newer chrony to not use them. https://bugzilla.redhat.com/show_bug.cgi?id=1525833 "The server is ignored for synchronization of the clock because it's too inaccurate. In the "chronyc sources" output there is "+/- 4695ms", which is larger than the default maxdistance of 3 seconds. The maxdistance option was added in chrony-2.2, so that's why it worked with chrony-2.1. Older versions only have a hardcoded limit for the root dispersion to be smaller than 16 seconds. The tcpdump output shows that the NTP server has a root dispersion of about 3.6 seconds. Is it a Windows NTP server? You can also check the root dispersion with "chronyc ntpdata". A larger maxdistance needs to be set in chrony.conf to allow chronyd to use the server for synchronization." | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/404046",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/224118/"
]
} |
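Following that explanation, the fix on the chrony 3.x clients would be a larger maxdistance in /etc/chrony.conf; the value below is only illustrative and should exceed the server's root distance (roughly 8 s in the question's chronyc ntpdata output).

```bash
# Raise the root-distance limit (default 3 s since chrony 2.2), then restart.
echo 'maxdistance 16' | sudo tee -a /etc/chrony.conf
sudo systemctl restart chronyd
```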
404,060 | Trying to find files that contains specific strings in name, but don't know how to sort output, in a way, that I will get file names only. I've tried OLDDATA=`find . -regex ".*/[0-9.]+" | ls -t` But ls -t is not working on find result but on whole directory edit: Result of this statement should be sorted by modification day directories. This regex suppose to match directories that contains only numbers and dots in name. | get file names only ... sorted by modification day find + sort + cut approach: find . -regex ".*/[0-9.]+" -printf "%T@ %f\n" | sort | cut -d' ' -f2 %T@ - File's last modification time, where @ is seconds since Jan. 1, 1970, 00:00 GMT, with fractional part %f - File's name with any leading directories removed (only the last element) To sort in descending order: find . -regex ".*/[0-9.]+" -printf "%T@ %f\n" | sort -k1,1r | cut -d' ' -f2 | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/404060",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/259599/"
]
} |
404,063 | We can determine the owner of a process by using the ps command. Does this mean that other users cannot run / kill / resume that process? | Read credentials(7) , fork(2) , execve(2) . The fork system call is the way processes are created (today, fork is often implemented with clone(2) but you can see that as an implementation detail). The exec system call is the way executable programs are started. Remember that everything is done from some process with some system calls (listed in syscalls(2) ). The very first process ( init or systemd ) has been started magically by the kernel at boot time. Other processes have been started by fork(2) . Modern Linux kernels sometimes - but rarely - start magically a few special processes (e.g. /sbin/hotplug ) or kernel threads (e.g. kworker , kswapd ....). So yes, every process (and every file) has some owner (technically the uid , a small non-negative number) and group (the gid). The 0 uid is for root and has extra permissions. Read also about setuid (and setreuid(2) ...) It is tricky. does it mean the other owner cannot run that process? A process is already running (but it could be idle or waiting), so no one can run it again. Don't confuse a process (something dynamic) with the program (an executable file , often in ELF format) running inside it. A given program (e.g. /bin/bash ) can be executed in several processes. Many executables stay on your disk without having (at a given instant) any processes running them. On Linux, proc(5) is very useful to query the kernel about the state of processes. Try for examples cat /proc/$$/status and cat /proc/self/maps . See also pgrep(1) , ps(1) , top(1) . Each process has its own virtual address space , its own file descriptor table, its own working directory , (and often several threads , see pthreads(7) ) etc etc... does it mean that other owners cannot run/kill/resume that process? Running a process don't make any sense (it is already running). However, the executable of process of pid 1234 is available as the /proc/1234/exe symlink, and you might use that for execve(2) - but you probably should not -. The permission rules for execve applies. To kill(2) a process, you generally should have the same uid. However, the documentation tells: For a process to have permission to send a signal, it must either be privileged (under Linux: have the CAP_KILL capability in the user namespace of the target process), or the real or effective user ID of the sending process must equal the real or saved set-user-ID of the target process. In the case of SIGCONT, it suffices when the sending and receiving processes belong to the same session. To stop a process, use the SIGSTOP (or SIGTSTP ) signal used with kill(2) . See signal(7) . To resume a stopped process, use the SIGCONT signal. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/404063",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/257882/"
]
} |
404,116 | On an Ubuntu AWS instance I want to ssh in as "thufir" with sudo privileges. Create user thufir with sudo adduser thufir and then adduser thufir sudo from the "ubuntu" user? Simply add my public key to ~/.ssh/authorized_keys and then I'll be able to ssh as "thufir" to the remote instance? Can I use my regular public key to login as "ubuntu" on AWS, or would that require the AWS generated key? I'd like to be able to ssh as "ubuntu" using my own key -- is that possible? There's no password with the "ubuntu" user, strictly key login. | The ssh keys are not personalized, so you can create the key under your user and then just paste your public key to the target user's authorized_keys on the remote server. Thus, if you have key generated on your local workstation under "thufir", and want to logon to remote server as "ubuntu", you need to copy contents of your .ssh/id_rsa.pub to .ssh/authorized_keys of user ubuntu on remote server and use command like ssh ubuntu@remotehost If you want to connect as thufir to remote server, then, yes, on the remote server you need to create user thufir, add it to sudoers, then put your public key to the .ssh/authorized_keys of the new user and then you will be able to connect through ssh thufir@remotehost or, suggesting you are logged on as thufir to your local box, through ssh remotehost | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/404116",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/17056/"
]
} |
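Putting the answer's steps together for the AWS case in the question — run from the default ubuntu account; the ssh-rsa line is a placeholder for the contents of your local public key:

```bash
# Create the new sudo-capable user and install your public key for it.
sudo adduser thufir
sudo adduser thufir sudo
sudo install -d -m 700 -o thufir -g thufir /home/thufir/.ssh
echo 'ssh-rsa AAAA...your-key... thufir@workstation' \
    | sudo tee /home/thufir/.ssh/authorized_keys
sudo chown thufir:thufir /home/thufir/.ssh/authorized_keys
sudo chmod 600 /home/thufir/.ssh/authorized_keys
```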
404,120 | I can't mount my e-reader. Here's what I tried: I connected my e-reader to the computer via usb. dmesg tells me the OS (debian 9) recognized the device and assigned /dev/sdb and /dev/sdc to it: usb 1-6: new high-speed USB device number 6 using ehci-pci[ 2023.922301] usb 1-6: New USB device found, idVendor=15a2, idProduct=0c01[ 2023.922306] usb 1-6: New USB device strings: Mfr=1, Product=2, SerialNumber=3[ 2023.922309] usb 1-6: Product: 623[ 2023.922312] usb 1-6: Manufacturer: Papyre[ 2023.922315] usb 1-6: SerialNumber: 0123456789ABCDEF[ 2023.930149] usb-storage 1-6:1.0: USB Mass Storage device detected[ 2023.930323] scsi host4: usb-storage 1-6:1.0[ 2024.961442] scsi 4:0:0:0: Direct-Access Papyre 623 0100 PQ: 0 ANSI: 2[ 2024.963410] scsi 4:0:0:1: Direct-Access Papyre 623 0100 PQ: 0 ANSI: 2[ 2024.964818] sd 4:0:0:0: Attached scsi generic sg2 type 0[ 2024.966505] sd 4:0:0:1: Attached scsi generic sg3 type 0[ 2025.001429] sd 4:0:0:0: [sdb] Attached SCSI removable disk[ 2025.035684] sd 4:0:0:1: [sdc] Attached SCSI removable disk I tried to mount /dev/sdb with mount /dev/sdb /media/ereader getting this error: mount: no se ha encontrado ningún medio en /dev/sdb Which roughly translates to: mount: no medium found in /dev/sdb I also tried with the -t vfat option, and repeated the process with /dev/sdc , with the same result. In case you ask, here's the output of sg_map : /dev/sg2 /dev/sdb/dev/sg3 /dev/sdc And fdisk -l /dev/sdb (my own translation): fdisk: can't open /dev/sdb: Medium not found Output from lsblk -f : NAME FSTYPE LABEL UUID MOUNTPOINTfd0 sda ├─sda1 ext4 8110f71a-b0eb-4968-bdf2-2c398a4e056c /├─sda2 ext4 09be5f99-740b-4892-8607-a87d27953110 ├─sda3 ext4 linux_archivos 16a84f16-bca0-42e6-810e-34851fbcb0a1 /media/linux_archivos└─sda4 swap ea2997b9-6401-424b-a5ea-487f6996c56f [SWAP]sr0 Output from file /dev/sdb : /dev/sdb: block special (8/16) Output from file /dev/sdc : /dev/sdc: block special (8/32) Output from file -s /dev/sdb : /dev/sdb: writable, no read permission Output from file -s /dev/sdc : /dev/sdc: writable, no read permission | The ssh keys are not personalized, so you can create the key under your user and then just paste your public key to the target user's authorized_keys on the remote server. Thus, if you have key generated on your local workstation under "thufir", and want to logon to remote server as "ubuntu", you need to copy contents of your .ssh/id_rsa.pub to .ssh/authorized_keys of user ubuntu on remote server and use command like ssh ubuntu@remotehost If you want to connect as thufir to remote server, then, yes, on the remote server you need to create user thufir, add it to sudoers, then put your public key to the .ssh/authorized_keys of the new user and then you will be able to connect through ssh thufir@remotehost or, suggesting you are logged on as thufir to your local box, through ssh remotehost | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/404120",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/260194/"
]
} |
404,131 | Getting to know AWS -- it's interesting. Running 16.04 xenial on AWS and looking to install dolibarr . On 17.10, where I have a GUI, dolibarr installs fine. Few questions about running web servers/apps on AWS. (AFAIK apache web server is used to serve pages.) The install process, at least on Ubuntu 17.10, involves navigating to localhost/dolibarr to complete the configuration. How would this be accomplished? Navigate to the Elastic IP (EIP) as xxx.xxx.xxx/dolibarr ? That seems unlikely and not a very secure way to configure. Perhaps use lynx from the CLI to navigate to localhost/dolibarr ? Once configured, just browse to the EIP from any outside client? | The ssh keys are not personalized, so you can create the key under your user and then just paste your public key to the target user's authorized_keys on the remote server. Thus, if you have key generated on your local workstation under "thufir", and want to logon to remote server as "ubuntu", you need to copy contents of your .ssh/id_rsa.pub to .ssh/authorized_keys of user ubuntu on remote server and use command like ssh ubuntu@remotehost If you want to connect as thufir to remote server, then, yes, on the remote server you need to create user thufir, add it to sudoers, then put your public key to the .ssh/authorized_keys of the new user and then you will be able to connect through ssh thufir@remotehost or, suggesting you are logged on as thufir to your local box, through ssh remotehost | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/404131",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/17056/"
]
} |
404,159 | I'm dual-booting Linux Mint 18.2 and Windows 10. I've synchronized OneDrive from Windows, but I can't seem to access the OneDrive folder from Linux. Terminal shows that I have a OneDrive folder, but ls -all gives me the following error on the OneDrive folder: unsupported reparse point I've done a bit of Googling and the problem might have something to do with the fact that it's on an NTFS partition and Microsoft possibly compressing the OneDrive contents, but I haven't been able to verify conclusively. Anyone else have this problem? For context, I'm not needing to sync OneDrive from Linux- I'm just trying to access the OneDrive contents saved on my Windows partition from Linux. | I found it! Michael's WSL link provided the answer. I just need to delete the reparsepoint for OneDrive before I shutdown Windows. Here's my code: fsutil reparsepoint delete "C:\Path\To\OneDrive\Folder" | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/404159",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/231713/"
]
} |
404,189 | This problem only happens in a docker container. find on its own is fine: find ${BASIN_SPIDER_CONFIG_PATH} -type f -name "*.json" find with sed find ${BASIN_SPIDER_CONFIG_PATH} -type f -name "*.json"|xargs sed -i "s/10.142.55.199/host02/g" gives an error: /xxx/config/sed8Ey5tD: Device or resource busy I don't understand what sed8Ey5tD is; ls can't see it. I think it is created by docker, but I can't figure it out. How can I make sed succeed? OK, I found that the file is volume-mounted by docker : there is volumes: /xxx/config.json : /xxx/config/config.json in docker-compose.yml . After docker-compose down , the file can be edited. But how do I edit the file without docker-compose down ? | Yes, as you found, the file is mounted by docker, which means you are not allowed to change its inode from within the docker container. But what if you only change the content of the file without touching its inode? Does that work? Sure, it does. So all you need to do is find a way to change the content of the original file, rather than create a new file and then replace the original one. The sed command with option -i does create a new file and then replaces the old file with the new one, which definitely changes the file inode. That is why it gives you the error. So, which ways can change the content of a file? Many: shell redirect, e.g., echo abc > file ; command cp , e.g., cp new old ; vim ; ed . Here are several examples of how to fix your issue. The cp way: find ${BASIN_SPIDER_CONFIG_PATH} -type f -name "*.json" | xargs -L1 bash -c 'sed "s/10.142.55.199/host02/g" $1 > /tmp/.intermediate-file-2431; cp /tmp/.intermediate-file-2431 $1;' -- The vim way:
cat > /tmp/vim-temp-script <<EOF
:set nobackup backupcopy=yes
:let i = 0
:while 1
: let i += 1
: %s/10.142.55.199/host02/g
: if i >= argc()
: break
: endif
: wn
:endwhile
:wq
EOF
find ${BASIN_SPIDER_CONFIG_PATH} -type f -name "*.json" | xargs vim -s /tmp/vim-temp-script The ed way:
find ${BASIN_SPIDER_CONFIG_PATH} -type f -name "*.json"|xargs -L1 bash -c 'ed $1 <<EOF
,s/10.142.55.199/host02/g
w
q
EOF' -- | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/404189",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/106419/"
]
} |
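A simpler sketch of the same content-preserving idea for a single file, using only a shell redirect; the path config.json and the temporary file name below are placeholders. Writing through the existing file replaces its contents without touching its inode, so the bind mount stays valid:
sed 's/10.142.55.199/host02/g' config.json > /tmp/config.json.new
cat /tmp/config.json.new > config.json   # truncates and rewrites the file in place, inode unchanged
rm /tmp/config.json.new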
404,199 | On my Archlinux system, the /usr/lib/systemd/system/mdmonitor.service file contains these lines: [Service]Environment= MDADM_MONITOR_ARGS=--scanEnvironmentFile=-/run/sysconfig/mdadmExecStartPre=-/usr/lib/systemd/scripts/mdadm_env.shExecStart=/sbin/mdadm --monitor $MDADM_MONITOR_ARGS I suspect (confirmed by some googling) that the =- means that the service should not fail if the specified files are absent. However I failed to find that behaviour in the manpage of systemd unit files. Where is the official documentation for the =- assignment? | This is documented in systemd.exec : EnvironmentFile= [...] The argument passed should be an absolute filename or wildcard expression, optionally prefixed with " - ", which indicates that if the file does not exist, it will not be read and no error or warning message is logged. And in systemd.service : ExecStart= … For each of the specified commands, the first argument must be an absolute path to an executable. Optionally, this filename may be prefixed with a number of special characters: Table 1. Special executable prefixes … ExecStartPre= , ExecStartPost= … If any of those commands (not prefixed with - ) fail, the rest are not executed and the unit is considered failed. (To find the most complete documentation for a systemd directive, look it up in systemd.directives .) | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/404199",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/81661/"
]
} |
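A minimal unit fragment illustrating both prefixes together; the daemon name and paths are made up for illustration (systemd only allows comments on their own lines):
[Service]
# the "-" prefix: a missing file is silently skipped
EnvironmentFile=-/run/sysconfig/mydaemon
# the "-" prefix: a non-zero exit status here does not fail the unit
ExecStartPre=-/usr/local/bin/mydaemon-prep
ExecStart=/usr/bin/mydaemon $MYDAEMON_ARGS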
404,207 | I'm using UNIX language to extract my files. I used this command to extract it: value_1=$(cat tmp.csv | head -1001 | cut -f 3-6 -d','> tmp1.csv)value_2=$(cat tmp.csv | head -2002 | tail -1001 | cut -f 4-6 -d','> tmp2.csv)paste -d ',' tmp1.csv tmp2.csv > final.csv My "tmp.csv" file is : 0 0 0 17.92204 -3.017933 35.142291 0 1 18.27151 -3.179997 35.200442 0 2 18.22776 -3.566021 34.87167 . .0 1 0 20.89817 -2.37854 66.510031 1 1 21.48396 -2.461451 66.489882 1 2 21.78348 -2.575202 66.51389 But the result is like this : 0 17.92204 -3.017933 35.14229 20.89817 -2.37854 66.510031 18.27151 -3.179997 35.20044 21.48396 -2.461451 66.489882 18.22776 -3.566021 34.87167 21.78348 -2.575202 66.51389 I want to make the result like this : 0 17.92204 -3.017933 35.14229 20.89817 -2.37854 66.510031 18.27151 -3.179997 35.20044 21.48396 -2.461451 66.489882 18.22776 -3.566021 34.87167 21.78348 -2.575202 66.51389 I was wondering if it would be possible to achieve that without manually handling? | This is documented in systemd.exec : EnvironmentFile= [...] The argument passed should be an absolute filename or wildcard expression, optionally prefixed with " - ", which indicates that if the file does not exist, it will not be read and no error or warning message is logged. And in systemd.service : ExecStart= … For each of the specified commands, the first argument must be an absolute path to an executable. Optionally, this filename may be prefixed with a number of special characters: Table 1. Special executable prefixes … ExecStartPre= , ExecStartPost= … If any of those commands (not prefixed with - ) fail, the rest are not executed and the unit is considered failed. (To find the most complete documentation for a systemd directive, look it up in systemd.directives .) | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/404207",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/260260/"
]
} |
404,219 | This should be a common enough task. I have just installed Archlinux. Next, I installed openbox , surprisingly finding that X is not among it's dependencies. So I installed xorg-server as per the opening lines in the wiki . However, # startxbash: startx: command not found On my Debian box, $ dpkg -S /usr/bin/startxxinit: /usr/bin/startx Yet on the Arch box pacman -S xiniterror: target not found: xinit Later the wiki again refers to /usr/bin/startx which doesn't exist for me. What am I missing? | This is documented in systemd.exec : EnvironmentFile= [...] The argument passed should be an absolute filename or wildcard expression, optionally prefixed with " - ", which indicates that if the file does not exist, it will not be read and no error or warning message is logged. And in systemd.service : ExecStart= … For each of the specified commands, the first argument must be an absolute path to an executable. Optionally, this filename may be prefixed with a number of special characters: Table 1. Special executable prefixes … ExecStartPre= , ExecStartPost= … If any of those commands (not prefixed with - ) fail, the rest are not executed and the unit is considered failed. (To find the most complete documentation for a systemd directive, look it up in systemd.directives .) | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/404219",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/20506/"
]
} |
404,250 | Did I understand correct that the Gnome shell is the a shell written to allow GUI based operation, and that the Unity GUI is actually one of the interfaces that are based on the Gnome shell? | GNOME Shell and Unity are both shells on top of the GNOME desktop environment. Neither is based on the other; you either use GNOME Shell, or Unity, and underneath that you’ll have GNOME. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/404250",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/258010/"
]
} |
404,258 | My understanding is that Ubuntu is based on Debian. For example, on the Wikipedia page for Ubuntu it states " It is a Linux distribution based on the Debian architecture. " How can I find out what version of Debian a particular version of Ubuntu is based on (if any)? For example, the current stable release of Ubuntu is " Artful Aardvark " (17.10) which announces that it is based on the Linux 4.13 kernel, but does not seem to say anything about the Debian version. The current stable release of Debian is code named " Stretch " (9.2) which advertises a 4.9 kernel (on the afore-linked Stretch page). How can I find out the details of the relationship between them? Is there a particular command that will reveal this information? | Ubuntu releases aren’t based on Debian releases. During the development of an Ubuntu release, packages are imported from Debian unstable, until the Debian import freeze (in the past, LTS releases imported from testing, and this is what the linked wiki page still suggests; however looking at my packages shows that 18.04 is importing packages from unstable). This means that a given Ubuntu release will have non-Ubuntu-maintained packages in whatever version was in Debian at the time of the import freeze (barring explicit sync requests ); but that doesn’t match what the next release of Debian will contain. So trying to tie a release of Ubuntu to a release of Debian would just end up being misleading. You can look at the contents of /etc/debian_version to see the Debian codename of the version (under construction) from which packages were pulled; you can also match Debian import freeze dates from the release schedules (for example, Artful’s , Bionic’s , Cosmic’s , or Disco’s ). You’ll see from this that the same Debian release feeds multiple Ubuntu releases ( e.g. Stretch, which ended up being Debian 9, fed Xenial, Yakkety, Zesty and Artful; Buster, which will end up being Debian 10, fed Bionic and Cosmic, and is feeding Disco), with quite different package versions each time. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/404258",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/47542/"
]
} |
404,286 | I have a simple pipeline: node foo.js | node bar.js bar.js will read from stdin to get data from foo.js . But what I want to do is ensure that bar.js gets one of the last messages from foo.js before foo.js decides it's OK to exit. Essentially I want to create a simple request/response pattern. foo writes to stdout --> bar reads from stdin --> how can bar send a message back to foo? Is there a way to communicate backwards in a pipeline or should there never be a need to do that? | No. A pipeline is a one-way communication channel. That's why it's called a "pipeline"; you couldn't send oil back up a pipeline if you tried, either. However, if bar.js has to talk to foo.js too, you do have a few options: Create a unix domain socket instead of a pipeline, and start both foo.js and bar.js separately (i.e., don't pipe the output of foo.js into bar.js anymore). I don't know how you do that from node, but essentially a unix domain socket is a network socket that uses filenames rather than IP addresses and works inside the kernel. Sockets are meant for bidirectional communication, but require more setup than a simple pipe (e.g., a listening socket can talk to multiple instances of bar.js). You may find unix domain sockets in the filesystem, but it's not strictly necessary (and indeed Linux allows creating a unix domain socket without a trace on the filesystem). Use mkfifo to create a named pipe (or use some node API to create one, if that exists; again, I do not know node). Then, in foo.js , open that named pipe and read from it. Your bar.js script can open the same named pipe and write to it. The latter will be easiest to transition to, since you still use file I/O (opening a named pipe requires opening a file on the filesystem), but would still be unidirectional (although you'd have two channels, one in each direction). The former is slightly cleaner and also allows you to more easily migrate one of the two scripts to a different host if that ever should become necessary. At any rate, if your scripts are now communicating bidirectionally, then for clarity I would suggest you start them as separate processes, rather than having one process pipe into the other. IMHO, they're equal partners now, and your command line should show that. That's just a detail though, and certainly not technically required. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/404286",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/113238/"
]
} |
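A rough sketch of the named-pipe variant, with the two scripts started as separate processes as suggested above; the fifo paths are arbitrary, and opening a fifo blocks until the other end has been opened too:
mkfifo /tmp/foo_to_bar /tmp/bar_to_foo
node foo.js > /tmp/foo_to_bar < /tmp/bar_to_foo &
node bar.js < /tmp/foo_to_bar > /tmp/bar_to_foo
rm /tmp/foo_to_bar /tmp/bar_to_foo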
404,287 | I have a laptop with dual boot elementaryOS Loki and Windows 10. Until recently everything was fine, but now suddenly the wifi in elementaryOS is extremely slow (~0.5Mbit download, most speed tests don't even start the upload test). With Ethernet, I get the normal 80 MBit download. I also tried it with Windows where it's still 25 MBit via Wifi. Edit: lspci -knn | grep Net -A2
01:00.0 Network controller [0280]: Intel Corporation Centrino Advanced-N 6235 [8086:088e] (rev 24) Subsystem: Intel Corporation Centrino Advanced-N 6235 AGN [8086:4060] Kernel driver in use: iwlwifi
uname -a
Linux tobias-530U3BI-530U4BI-530U4BH 4.10.0-38-generic #42~16.04.1-Ubuntu SMP Tue Oct 10 16:32:20 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux | To ameliorate the connection through the intel wifi card you can: Disable 802.11n Enable software encryption Enable the transmission antenna aggregation Disable bluetooth coexistence Create a /etc/modprobe.d/iwlwifi.conf with the following content :
options iwlwifi 11n_disable=1
options iwlwifi swcrypto=1
options iwlwifi 11n_disable=8
options iwlwifi bt_coex_active=0
iwlwifi troubleshooting on arch-linux | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/404287",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/251547/"
]
} |
404,298 | I need to take the multi-line output of a program, match a string, and return the previous line of all matches. An example of the output of the program: $ jack_lsp -pfirewire_pcm:analog-1_out properties: input,physical,terminal,firewire_pcm:analog-2_out properties: input,physical,terminal,firewire_pcm:analog-1_in properties: output,physical,terminal,firewire_pcm:analog-2_in properties: output,physical,terminal,$ I need to match, for example, 'input', and return the previous line of all matches. So in the example, the expected output would be: firewire_pcm:analog-1_outfirewire_pcm:analog-2_out Here is what I have, but it only returns the first match: $ jack_lsp -p | grep -B1 input | head -1firewire_pcm:analog-1_out$ What am I doing wrong? | This is the command you're trying: jack_lsp -p | grep -B1 input | head -1 The problem with this is that head -1 will return the first line of the entire stream of data that's piped to it. Try this awk command instead: jack_lsp -p | awk '/input/{print previous_line}{previous_line=$0}' It will print out the line before each line that contains the string "input". Here is the result for your example data: user@host:~$ cat <<HEREDOC | awk '/input/{print previous_line}{previous_line=$0}'firewire_pcm:analog-1_out properties: input,physical,terminal,firewire_pcm:analog-2_out properties: input,physical,terminal,firewire_pcm:analog-1_in properties: output,physical,terminal,firewire_pcm:analog-2_in properties: output,physical,terminal,HEREDOCfirewire_pcm:analog-1_outfirewire_pcm:analog-2_out For more information about this awk approach, see the following post: grep - print line before, don't print match You can accomplish the same thing using sed : jack_lsp -p |sed -n '/input/{x;p;d;}; x' For more information about this sed approach, see the following post: Print previous line after a pattern match using sed? In your particular case it looks like the string that you're matching against (i.e. "input") doesn't occur in the preceding line, so you can filter those lines out using grep as well, i.e.: jack_lsp -p | grep -B1 'input' | grep -v 'input' You can also get the same result as the above awk approach by supplementing grep with some shell scripting, although the result isn't quite as compact: jack_lsp -p | ( unset previous_line; while read line; do if grep -q input <<< "${line}" && [[ -n "${previous_line}" ]]; then echo "${previous_line}"; fi; previous_line="${line}"; done) | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/404298",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/132776/"
]
} |
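One caveat about the grep -B1 'input' | grep -v 'input' pipeline above: when the matching blocks are not adjacent, GNU grep separates the context groups with -- lines, so those may need to be filtered out as well:
jack_lsp -p | grep -B1 input | grep -v -e input -e '^--$'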
404,312 | I saw a lot of posts about it all over the internet but i couldn't make it run as i wanted. I have an input from the user of a path. If the user will type ~/Desktop/ for example i can't use $ans in grep and other commands because of the tilde. here is an example of my code. the second line is my main problem right now. read -p "Enter new path or type 'exit': " ansfile=`grep -x $ans ~/.Log` Appreciate your help. | This is the command you're trying: jack_lsp -p | grep -B1 input | head -1 The problem with this is that head -1 will return the first line of the entire stream of data that's piped to it. Try this awk command instead: jack_lsp -p | awk '/input/{print previous_line}{previous_line=$0}' It will print out the line before each line that contains the string "input". Here is the result for your example data: user@host:~$ cat <<HEREDOC | awk '/input/{print previous_line}{previous_line=$0}'firewire_pcm:analog-1_out properties: input,physical,terminal,firewire_pcm:analog-2_out properties: input,physical,terminal,firewire_pcm:analog-1_in properties: output,physical,terminal,firewire_pcm:analog-2_in properties: output,physical,terminal,HEREDOCfirewire_pcm:analog-1_outfirewire_pcm:analog-2_out For more information about this awk approach, see the following post: grep - print line before, don't print match You can accomplish the same thing using sed : <!-- language: bash -->jack_lsp -p |sed -n '/input/{x;p;d;}; x' For more information about this sed approach, see the following post: Print previous line after a pattern match using sed? In your particular case it looks like the string that you're matching against (i.e. "input") doesn't occur in the preceding line, so you can filter those lines out using grep as well, i.e.: jack_lsp -p | grep -B1 'input' | grep -v 'input You can also get the same result as the above awk approach by supplementing grep with some shell scripting, although the result isn't quite as compact: jack_lsp -p | ( unset previous_line; while read line; do if grep -q input <<< "${line}" && [[ -n "${previous_line}" ]]; then echo "${previous_line}"; fi; previous_line="${line}"; done) | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/404312",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/259356/"
]
} |
404,394 | Suppose I have a (potentially very large) text filethat contains a word list with whitespace interjected. For example, it might look like this: Cat DogSoup RatCass Audrey I want each word on a separate line (with no whitespace), like this: CatDogSoupRatCassAudrey I can do a simple tr -d " " to make that into: CatDogSoupRatCassAudrey (but that is not what I want). I do not know what type of blank space separates those words,so assume that it's some combination of ordinary ASCII spaces and tabs. (We can assume that there are no invisible Unicode characterslike em spaces and zero-width thingies.) Naturally, the words do not contain whitespace, so "à la","alma mater", "apple pie", "at large" and "ice cream"are not valid words. Assume that words may contain (non-blank) non-alphabetic characters,such as "AC/DC", "add-on", "AT&T", "audio-visual","can't", "carbon-14", "jack-o'-lantern", "mother-in-law","o'clock", "O'Reilly", "RS-232" and "3-D". Ideally the solution should tolerate non-ASCII characters,as in "Ångström", "Gödel", "naïve", "résumé" and "smörgåsbord". How do I get rid of all those spaces while preserving (and isolating)the indented words using common Unix/Linux tools like tr , sed or awk ? It would be great if the solution would also workfor more general cases of the stated problem;i.e., not just two-column text, but also random arrangements like: Once upon a midnight drearywhile I pondered weak and weary Over manya quaint and curious volume of forgotten lore | etopylight was almost right: tr -s ' \t' '\n' because the question asks to replace tabs, too. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/404394",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/260407/"
]
} |
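An equivalent form using a POSIX character class, which covers spaces and tabs without listing them explicitly; this works at least with GNU tr (the input file name is a placeholder):
tr -s '[:blank:]' '\n' < wordlist.txt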
404,405 | I'm having a little issue. I've a live system which run on RHEL 6.7 (VM) and have VMware 6.5 (which is not managed by our group) . The issue is, the other group tried to extend the capacity of an existing disk on a VM. After that, I ran a scan command to detect new disk as usual with echo "- - -" > /sys/class/scsi_host/host0/scan , but nothing happened. They added 40G on sdb disk which should be 100G and I saw that is changed on VM but not in Linux. So where is the problem ? As I said, this is a live system, so I don't want to reboot it. Here is the system : # df -h /dev/mapper/itsmvg-bmclv 59G 47G 9.1G 84% /opt/bmc# lsblk sdb 8:16 0 60G 0 disk └─itsmvg-bmclv (dm-2) 253:2 0 60G 0 lvm /opt/bmc# vgs VG #PV #LV #SN Attr VSize VFree itsmvg 1 1 0 wz--n- 59.94g 0 # pwd /sys/class/scsi_host# ll lrwxrwxrwx 1 root root 0 Nov 13 16:18 host0 -> ../../devices/pci0000:00/0000:00:07.1/host0/scsi_host/host0 lrwxrwxrwx 1 root root 0 Nov 13 16:19 host1 -> ../../devices/pci0000:00/0000:00:07.1/host1/scsi_host/host1 lrwxrwxrwx 1 root root 0 Nov 13 16:19 host2 -> ../../devices/pci0000:00/0000:00:15.0/0000:03:00.0/host2/scsi_host/host2 | Below is the command that you need to run to scan the host devices so it will show the new hard disk connected. echo "- - -" >> /sys/class/scsi_host/host_$i/scan $i is the host number | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/404405",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/260411/"
]
} |
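When an existing disk was grown rather than a new one attached, rescanning that specific device is the usual complement to the host scan above; sdb here is the disk from the question:
echo 1 > /sys/class/block/sdb/device/rescan
lsblk /dev/sdb   # verify the new size before growing the partition or LV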
404,411 | In a CentOS 7 installation I installed OrangeScrum (which is a standard PHP application copied to /var/www/html ). When I type the server IP I get the Apache test page and if I have an index.html page it will be displayed. And when I type the server-ip/orangescrum for example I get the web app. All this is fine. Moving on to a server with Scientific Linux 7 I did the same, but when I install the app to the Apache and typing the server ip alone I get the app itself not the Apache status nor the index.html if any. Nothing has been done to httpd.conf except adding a virtual host definition like here What am I missing to do in order to get the root index or the Apache test pages to work? | Below is the command that you need to run to scan the host devices so it will show the new hard disk connected. echo "- - -" >> /sys/class/scsi_host/host_$i/scan $i is the host number | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/404411",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/91118/"
]
} |
404,414 | How can I test that my terminal / tmux is correctly setup to display truecolor / 24-bit color / 16.8 million colours? | The following script will produce a test pattern like: You can optionally call it as: width=1000 truecolor-test and it will print a pattern of width columns. #!/bin/bash# Based on: https://gist.github.com/XVilka/8346728awk -v term_cols="${width:-$(tput cols || echo 80)}" 'BEGIN{ s="/\\"; for (colnum = 0; colnum<term_cols; colnum++) { r = 255-(colnum*255/term_cols); g = (colnum*510/term_cols); b = (colnum*255/term_cols); if (g>255) g = 510-g; printf "\033[48;2;%d;%d;%dm", r,g,b; printf "\033[38;2;%d;%d;%dm", 255-r,255-g,255-b; printf "%s\033[0m", substr(s,colnum%2+1,1); } printf "\n";}' | {
"score": 7,
"source": [
"https://unix.stackexchange.com/questions/404414",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/143394/"
]
} |
404,444 | I installed Opera 12.16 from a .deb for reasons. Just assume that I need this specific browser of this specific version and that there’s no alternative. However, that deb depends on packages (such as the gstreamer0.10 series) which are not in my distribution anymore (Debian testing). This makes apt fail on every operation except apt remove opera with dependency errors: # apt install cli-commonReading package lists... DoneBuilding dependency tree Reading state information... DoneYou might want to run 'apt --fix-broken install' to correct these.The following packages have unmet dependencies: opera : Depends: gstreamer0.10-plugins-good but it is not installable Recommends: flashplugin-nonfree but it is not going to be installedE: Unmet dependencies. Try 'apt --fix-broken install' with no packages (or specify a solution). apt --fix-broken install will just propose to remove opera: # apt --fix-broken installReading package lists... DoneBuilding dependency tree Reading state information... DoneCorrecting dependencies... DoneThe following packages will be REMOVED: opera0 upgraded, 0 newly installed, 1 to remove and 92 not upgraded.1 not fully installed or removed.After this operation, 46.6 MB disk space will be freed.Do you want to continue? [Y/n] Currently, my workaround is to install Opera when I need it, and remove it as soon as anything else needs to be done with apt. This is annoying. Any suggestions? Ideally, I’d like to make apt ignore the dependencies of opera forever, since it works well-enough for my purposes. | You can’t make apt ignore dependencies, but you can create a fake gstreamer0.10-plugins-good package which will satisfy the missing dependency. The simplest way to do this is using equivs : install equivs sudo apt install equivs generate a template control file equivs-control gstreamer0.10-plugins-good.control fix the package name sed -i 's/<package name; defaults to equivs-dummy>/gstreamer0.10-plugins-good/g' gstreamer0.10-plugins-good.control build the package equivs-build gstreamer0.10-plugins-good.control install it sudo dpkg -i gstreamer0.10-plugins-good_1.0_all.deb That should satisfy the opera package’s dependency. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/404444",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/20834/"
]
} |
404,448 | As for now, I use my CLI (Command Line Interface) with either rbash , bash , dash , or sh . Given this fact, one can assume that the CLI is not shell dependent, and that even if we will delete all of these shells, we could use some primal/basic/ultralimited CLI. My question If I delete all the aforementioned shells in my GUIless operating system, will I still have a primal CLI of some sort? Notes I assume that that CLI won't be part of the kernel, because as I understand, the kernel is usually accessible only via proxy, like a shell). I was thinking about tmux and screen too but removed them from the headline and the question. | No. Your premise that these different shells are all running on top of some more basic CLUI, because they are all fairly similar, is incorrect. Each shell is separately implementing a CLI interface to the kernel, which all look somewhat similar (because they are all 'Unix' shells, which conform more or less rigidly to an accepted standard, and they all run on the same sort of terminal device). The CLUI is coded into each shell program separately - they are all independent and are not sharing some underlying CLUI. If you delete all the shells, then you will have no CLUI. That makes Tux cry :( | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/404448",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/258010/"
]
} |
404,551 | I have a question about overwriting a running executable, or overwriting a shared library (.so) file that's in use by one or more running programs. Back in the day, for the obvious reasons, overwriting a running executable didn't work. There's even a specific errno value, ETXTBSY, that covers this case. But for quite a while now, I've noticed that when I accidentally try to overwrite a running executable (for example, by firing off a build whose last step is cc -o exefile on an exefile that happens to be running), it works! So my questions are, how does this work, is it documented anywhere, and is it safe to depend on it? It looks like someone may have tweaked ld to unlink its output file and create a new one, just to eliminate errors in this case. I can't quite tell if it's doing this all the time, or only if it needs to (that is, perhaps after it tries to overwrite the existing file, and encounters ETXTBSY). And I don't see any mention of this on ld 's man page. (And I wonder why people aren't complaining that ld may now be breaking their hard links, or changing file ownership, and like that.) Addendum: The question wasn't specifically about cc / ld (although that does end up being a big part of the answer); the question was really just "How come I never see ETXTBSY any more? Is it still an error?" And the answer is, yes, it is still an error, just a rare one in practice. (See also the clarifying answer I just posted to my own question.) | It depends on the kernel, and on some kernels it might depend on the type of executable, but I think all modern systems return ETXTBSY (” text file busy“) if you try to open a running executable for writing or to execute a file that's open for writing. Documentation suggests that it's always been the case on BSD , but it wasn't the case on early Solaris ( later versions did implement this protection ), which matches my memory. It's been the case on Linux since forever, or at least 1.0 . What goes for executables may or may not go as well for dynamic libraries. Overwriting a dynamic library causes exactly the same problem that overwriting an executable does: instructions will suddenly be loaded from the same old address in the new file, which probably has something completely different. But this is in fact not the case everywhere. In particular, on Linux, programs call the open system call to open a dynamic library under the hood, with the same flags as any data file, and Linux happily allows you to rewrite the library file even though a running process might load code from it at any time. Most kernels allow removing and renaming files while they're being executed, just like they allow removing and renaming files while they're open for reading or writing. Just like an open file, a file that's removed while it's being executed will not be actually removed from the storage medium as long as it is in use, i.e. until the last instance of the executable exits. Linux and *BSD allow it, but Solaris and HP-UX don't. Removing a file and writing a new file by the same name is perfectly safe: the association between the code to load and the open (or being-executed) file that contains the code goes by the file descriptor, not the file name. It has the additional benefit that it can be done atomically, by writing to a temporary file then moving that file into place (the rename system call atomically replaces an existing destination file by the source file). 
It's much better than remove-then-open-write since it doesn't temporarily put an invalid, partially-written executable in place. Whether cc and ld overwrite their output file, or remove it and create a new one, depends on the implementation. GCC (at least modern versions) and Clang take the latter approach, in both cases by calling unlink on the target if it exists and then open to create a new file. (I wonder why they don't do write-to-temp-then-rename.) I don't recommend depending on this behavior except as a safeguard, since it doesn't work on every system (it may work on all modern systems for executables, but not for shared libraries), and common toolchains don't do things in the best way. In your build scripts, always generate files under a temporary name, then move them into place, unless you know the underlying tool does this.
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/404551",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/173796/"
]
} |
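A sketch of the write-to-temporary-then-rename pattern recommended above, with a hypothetical program name:
cc -O2 -o myprog.tmp main.c &&
mv -f myprog.tmp myprog   # rename() is atomic, so a half-written binary is never visible under the final name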
404,580 | From my understanding, the debootstrap program with the minbase option only installs packages with "essential priority" (and possibly apt ). Is there a web resource which lists all packages with different priority levels? All I really need is a list for priority levels of essential, required, important, and standard and I do not need optional and extra. | I’m not aware of anything giving the information you’re after, on the web. (Arguably it should be added to the package pages.) You can however get the information you’re after by querying the UDD , for example using the unofficial, publicly-accessible UDD mirror : $ psql --host=udd-mirror.debian.net --user=udd-mirror udd --password will connect to the server, then udd=> select distinct package, version, section, priority from packages where essential = 'yes' and release = 'stretch'; will list all essential packages from Stretch (the distinct is useful because binary packages are listed per architecture), and udd=> select distinct package, version, section, priority, essential from packages where priority in ('required', 'important', 'standard') and release = 'stretch' order by priority, essential, package; will list all required, important, and standard packages, with the priority information. There is also a detailed list of all the current contents of minbase , on the Buster priority requalification page . | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/404580",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/22724/"
]
} |
404,586 | I have this function to get MAC address from IP:
ip2arp() {
 local ip="$1"
 ping -c1 -w1 "$ip" >/dev/null
 arp -n "$ip" | awk '$1==ip {print $3}' ip="$ip"
}
What is the right way for using it later? Save to /usr/bin as sh and make it executable, or save it in a home directory and make an alias in bash? Is there a right and wrong way? | If it's only for your personal use then you could add it to your shell's initialization file as a function, e.g. ~/.bashrc . For a summary of the different initialization files in Bash you can consult the Bash Guide for Beginners : Bash Guide for Beginners - Section 3.1: Shell Initialization Files Also see the Bash Reference Manual : Bash Reference Manual - Section 6.2: Bash Startup Files A typical pattern would be to put your function definition in your ~/.bashrc file and source that file from your ~/.bash_profile . But it's probably worth noting that which profile file to use can depend on your OS, your terminal application, and your own preferences. See for example the following posts on Ask Different: Why doesn't .bashrc run automatically? Why doesn't Mac OS X source ~/.bashrc? Also see this post on StackOverflow: What's the difference between .bashrc, .bash_profile, and .environment? Alternatively, you can create a personal directory for your own scripts (e.g. I use ~/local/bin ) and then add that directory to your PATH in your profile file (i.e. export PATH="${HOME}/local/bin:${PATH}" ). If you want to make it available to other users then you might put it in /usr/local/bin as a script (rather than /usr/bin ). For further discussion regarding where to put executable files, see the following posts: /usr/bin vs /usr/local/bin on Linux Where should a local executable be placed? | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/404586",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/135657/"
]
} |
404,591 | $ a='"apple","ball","cat"'$ a='['$a$ echo $a["apple","ball","cat"$ a=$a']'$ echo $ab I'm stumped hard by the result b while I expect to see ["apple,"ball","cat"] . What am I missing here? This is from bash shell on Mac. Also see it on CentOS 7, while not on Fedora. Can someone please explain? | There is a file with the name b in the current directory. [...] is a pattern matching expression. It matches every file of which the name consists of a single letter between [ and ] . This is similar to having * in a variable value and using the variable without quotes. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/404591",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/260544/"
]
} |
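Quoting the expansion prevents the pathname expansion, so the bracket expression is never treated as a glob:
a='"apple","ball","cat"'
a="[$a]"
echo "$a"   # prints ["apple","ball","cat"] even if a file named b exists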
404,667 | I have come across a .service that contains the following: [Install]
WantedBy=multi-user.target
The original .service file can be found HERE . I am on Ubuntu 16.04 LTS. | This is the dependency-handling mechanism in systemd. multi-user.target is the equivalent of runlevel 3 in the SysV world. That said, reaching multi-user.target includes starting the "Confluent ZooKeeper" service. That is probably exactly what you need. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/404667",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/260600/"
]
} |
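Concretely, WantedBy=multi-user.target tells systemctl enable where to hook the unit in: enabling it creates a symlink under that target's .wants directory, which is what pulls the service in at boot. With an illustrative unit name (the real name depends on the installed unit file):
systemctl enable confluent-zookeeper.service
# creates /etc/systemd/system/multi-user.target.wants/confluent-zookeeper.service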
404,704 | When I use the command to find all the files in a directory and its subdirectories and redirect the output into a file allfiles.txt, the allfiles.txt file itself shows up in the results, which I don't want. I am doing find . -type f > allfiles.txt and in the result I am getting:
./test1/test3/file4
./test1/file3
./test2/test4/file6
./test2/file5
./allfiles.txt
How do I avoid that? | Simply, don't put your allfiles.txt file into the same directory you're running find on. Put it somewhere else, like: find . -type f > /tmp/allfiles.txt Or if it has to be in the same directory, filter it out with grep: find . -type f | grep -vxF ./allfiles.txt > allfiles.txt (assuming there are no other files called ./foo\n./allfiles.txt for instance) | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/404704",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/260624/"
]
} |
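With a find that supports -path (GNU find, and POSIX 2008), the exclusion can also be done inside find itself, using the same output file name as above:
find . -type f ! -path ./allfiles.txt > allfiles.txt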
404,709 | I just managed to install a dual boot for Kali Linux and Windows 7. So I can choose to start from my secondary disk, where GRUB finds place, can chooce Kali and everything works fine. The problem: It's only working fine, when a USB-Stick is inserted. I know it sounds weird, but its true... I installed Kali with a image burned to USB-stick with Rufus. Install progress went fine, no problems at all, but now, I can only start Kali when "A" usb stick is inserted, I don't say when "THE" usb stick is inserted, it can be ANY usb stick, I completely formatted the usb stick, I tried another one, it doesn't mind, it just has to be one and exactly one usb stick inserted (if I plug in a second one, it's again not working), the error I get says something like sdb3 not found . And I already got behind the problem. Kali always mounts the USB-Stick to sda , the partition with Kali partitions on is sdb and another partition on another disk is on sdc (doesn't mind for Kali). Now I guess, that Kali tries to find the partition under sdb3 , but if my usb-stick is not plugged in, it would be sda3 . I hope you understand what I mean. This is a picture of the sdb disk. So again, it seems, that Kali always tries to boot from sdb , but as I plug in the usb-stick, sdb is something different then without the usb-stick. How can I change this? I'm really wondering why it's referencing on the identifier sdb instead of really referencing to the disk itself. So what can I do against this, so I don't have to start with a usb-stick inserted? Edit : The output of lsblk is the following: And honestly I'm a bit confused about the thing behind sda1 , because I completely formatted that device... Edit2: running grub install gives me the following error: Output of parted-l Model: SanDisk Extreme (scsi)Disk /dev/sda: 62.7GBSector size (logical/physical): 512B/512BPartition Table: msdosDisk Flags: Number Start End Size Type File system Flags 1 32.8kB 3020MB 3020MB primary boot, hidden 2 3020MB 3021MB 721kB primaryModel: ATA WDC WD30EZRX-00M (scsi)Disk /dev/sdb: 3001GBSector size (logical/physical): 512B/512BPartition Table: gptDisk Flags: Number Start End Size File system Name Flags 1 20.5kB 134MB 134MB msftres 2 135MB 2162GB 2162GB ntfs Basic data partition msftdata 3 2162GB 2980GB 818GB ext4 Basic data partition msftdata 4 2980GB 2992GB 12.6GB linux-swap(v1) Basic data partition msftdata 5 2992GB 3001GB 8389MB ntfs Basic data partition msftdataModel: ATA WDC WD5000AAKX-0 (scsi)Disk /dev/sdc: 500GBSector size (logical/physical): 512B/512BPartition Table: msdosDisk Flags: Number Start End Size Type File system Flags 1 1049kB 500GB 500GB primary ntfs boot [ -d /sys/firmware/efi ] && echo UEFI || echo BIOS simply outputs BIOS | Simply, don't put your allfiles.txt file into the same directory you're runing find for. Put it somewhere else, like: find . -type f > /tmp/allfiles.txt Or if it has to be in the same directory, filter it our with grep: find . -type f | grep -vxF ./allfiles.txt > allfiles.txt (assuming there are no other files called ./foo\n./allfiles.txt for instance) | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/404709",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/260627/"
]
} |
404,713 | I'm writing a Bash script to test USB flash memory for errors using f3 tool.I have text like this (created by grepping logs from f3read program): 2017-10-25_09:30:22/sdf.log:Data LOST: 4.00 KB (8 sectors)2017-10-25_09:30:22/sdi.log:Data LOST: 5.00 KB (10 sectors)2017-10-25_09:30:22/sdj.log:Data LOST: 2.35 MB (4822 sectors)2017-10-25_09:30:22/sdn.log:Data LOST: 5.00 KB (10 sectors)2017-10-25_09:30:22/sdo.log:Data LOST: 4.00 KB (8 sectors)2017-10-25_09:30:22/sdp.log:Data LOST: 4.00 KB (8 sectors)2017-10-25_09:30:22/sdq.log:Data LOST: 2.00 KB (4 sectors)2017-10-25_14:37:03/sdb.log:Data LOST: 5.00 KB (10 sectors)2017-10-25_14:37:03/sdc.log:Data LOST: 3.00 KB (6 sectors)2017-10-26_09:17:59/sdd.log:Data LOST: 3.00 KB (6 sectors)2017-10-26_09:17:59/sde.log:Data LOST: 2.00 KB (4 sectors)2017-10-26_09:17:59/sdf.log:Data LOST: 3.00 KB (6 sectors)2017-10-26_09:17:59/sdg.log:Data LOST: 6.00 KB (12 sectors)2017-10-26_09:17:59/sdh.log:Data LOST: 611.29 MB (1251918 sectors)2017-10-26_09:17:59/sdi.log:Data LOST: 6.00 KB (12 sectors)2017-10-26_09:17:59/sdl.log:Data LOST: 6.00 KB (12 sectors)2017-10-26_09:17:59/sdo.log:Data LOST: 3.00 KB (6 sectors)2017-10-26_09:17:59/sdp.log:Data LOST: 2.00 KB (4 sectors)2017-10-26_09:17:59/sdq.log:Data LOST: 414.60 MB (849106 sectors)2017-10-26_09:17:59/sdr.log:Data LOST: 65.29 MB (133712 sectors)2017-10-26_09:17:59/sds.log:Data LOST: 5.00 KB (10 sectors) I would like to sort the lines by the number of bad sectors noted at the end of the line. I tried using sort , but I don't know how to use it's --key option to make it do what I want. I cannot cut the lines first, becasue I need the drive name (sda, sdb, etc) to be then extracted for a report. | Use sort -V if that option is available -V, --version-sort natural sort of (version) numbers within text $ <cmd> | sort -k5,5V2017-10-25_09:30:22/sdq.log:Data LOST: 2.00 KB (4 sectors)2017-10-26_09:17:59/sde.log:Data LOST: 2.00 KB (4 sectors)2017-10-26_09:17:59/sdp.log:Data LOST: 2.00 KB (4 sectors)2017-10-25_14:37:03/sdc.log:Data LOST: 3.00 KB (6 sectors)2017-10-26_09:17:59/sdd.log:Data LOST: 3.00 KB (6 sectors)2017-10-26_09:17:59/sdf.log:Data LOST: 3.00 KB (6 sectors)2017-10-26_09:17:59/sdo.log:Data LOST: 3.00 KB (6 sectors)2017-10-25_09:30:22/sdf.log:Data LOST: 4.00 KB (8 sectors)2017-10-25_09:30:22/sdo.log:Data LOST: 4.00 KB (8 sectors)2017-10-25_09:30:22/sdp.log:Data LOST: 4.00 KB (8 sectors)2017-10-25_09:30:22/sdi.log:Data LOST: 5.00 KB (10 sectors)2017-10-25_09:30:22/sdn.log:Data LOST: 5.00 KB (10 sectors)2017-10-25_14:37:03/sdb.log:Data LOST: 5.00 KB (10 sectors)2017-10-26_09:17:59/sds.log:Data LOST: 5.00 KB (10 sectors)2017-10-26_09:17:59/sdg.log:Data LOST: 6.00 KB (12 sectors)2017-10-26_09:17:59/sdi.log:Data LOST: 6.00 KB (12 sectors)2017-10-26_09:17:59/sdl.log:Data LOST: 6.00 KB (12 sectors)2017-10-25_09:30:22/sdj.log:Data LOST: 2.35 MB (4822 sectors)2017-10-26_09:17:59/sdr.log:Data LOST: 65.29 MB (133712 sectors)2017-10-26_09:17:59/sdq.log:Data LOST: 414.60 MB (849106 sectors)2017-10-26_09:17:59/sdh.log:Data LOST: 611.29 MB (1251918 sectors) | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/404713",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/67203/"
]
} |
404,762 | Is there anyway to write a bash that can notify me via email when my phone extensions are unreachable? Output from /var/log/asterisk/full [Nov 15 13:25:16] NOTICE[7884] chan_sip.c: Peer '7778' is now UNREACHABLE! Last qualify: 17[Nov 15 13:25:17] NOTICE[7884] chan_sip.c: Peer '7169' is now UNREACHABLE! Last qualify: 17[Nov 15 13:25:17] NOTICE[7884] chan_sip.c: Peer '7176' is now UNREACHABLE! Last qualify: 18[Nov 15 13:25:18] NOTICE[7884] chan_sip.c: Peer '7771' is now UNREACHABLE! Last qualify: 14[Nov 15 13:25:18] NOTICE[7884] chan_sip.c: Peer '7606' is now UNREACHABLE! Last qualify: 17[Nov 15 13:25:18] NOTICE[7884] chan_sip.c: Peer '7773' is now UNREACHABLE! Last qualify: 14[Nov 15 13:25:19] NOTICE[7884] chan_sip.c: Peer '7125' is now UNREACHABLE! Last qualify: 15[Nov 15 13:25:20] NOTICE[7884] chan_sip.c: Peer '7772' is now UNREACHABLE! Last qualify: 15[Nov 15 13:25:22] NOTICE[7884] chan_sip.c: Peer '7605' is now UNREACHABLE! Last qualify: 16[Nov 15 13:25:22] NOTICE[7884] chan_sip.c: Peer '7183' is now UNREACHABLE! Last qualify: 18[Nov 15 13:25:29] NOTICE[7884] chan_sip.c: Peer '7601' is now UNREACHABLE! Last qualify: 24[Nov 15 13:25:30] NOTICE[7884] chan_sip.c: Peer '7776' is now UNREACHABLE! Last qualify: 47[Nov 15 13:25:32] NOTICE[7884] chan_sip.c: Peer '7604' is now UNREACHABLE! Last qualify: 25[Nov 15 13:25:34] NOTICE[7884] chan_sip.c: Peer '7774' is now UNREACHABLE! Last qualify: 46[Nov 15 13:25:38] NOTICE[7884] chan_sip.c: Peer '7770' is now UNREACHABLE! Last qualify: 41[Nov 15 13:25:41] NOTICE[7884] chan_sip.c: Peer '7775' is now UNREACHABLE! Last qualify: 42 As you can see, I don't know the phones are down until people complain they can't make a phone call. What I have done so far: #!/bin/bashemail="[email protected]"offlineExtensions=$(cat /var/log/asterisk/full | grep -i unreachable)if [ "$offlineExtensions" ]thenprintf 'Extensions that are currently offline...\n''\n'"$offlineExtensions" | mail -s 'Extensions OFFLINE' "$email"fi I would like to use sed and awk , but I'm brand new to bash scripting. It would be nice if this script is constantly checking the Asterisk log file to find out if an extension is unreachable. | Instead of looking at the logs, ask asterisk directly which extensions are not OK asterisk -rx 'sip show peers like ^[0-9]{4}$' | awk 'NR>1 && !/ OK /' Would report the 4 digit extensions that are not "OK". | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/404762",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/240340/"
]
} |
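Tying this back to the mail notification in the question, a rough cron-able wrapper could look like the following; the recipient address is a placeholder, and depending on the Asterisk version the trailing summary line of 'sip show peers' may need to be filtered out as well:
#!/bin/bash
email="admin@example.com"
down=$(asterisk -rx 'sip show peers like ^[0-9]{4}$' | awk 'NR>1 && !/ OK /')
if [ -n "$down" ]; then
    printf 'Extensions that are currently offline...\n%s\n' "$down" | mail -s 'Extensions OFFLINE' "$email"
fi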
404,784 | Given a random .DEB file, how do we check if installation will be successfully completed without actually installation on device? Please see the following snippet: root@VirtualBox:/Folder# dpkg -i mysql-workbench_6.2.3+dfsg-7_armhf.deb Selecting previously unselected package mysql-workbench.(Reading database ... 48937 files and directories currently installed.)Preparing to unpack mysql-workbench_6.2.3+dfsg-7_armhf.deb ...Unpacking mysql-workbench (6.2.3+dfsg-7) ...dpkg: dependency problems prevent configuration of mysql-workbench: mysql-workbench depends on libatkmm-1.6-1 (>= 2.22.1); however: Package libatkmm-1.6-1 is not installed. mysql-workbench depends on libcairo2 (>= 1.14.0); however: Package libcairo2 is not installed. mysql-workbench depends on libcairomm-1.0-1 (>= 1.6.4); however: Package libcairomm-1.0-1 is not installed. mysql-workbench depends on libctemplate2; however: Package libctemplate2 is not installed. mysql-workbench depends on libgdal1h (>= 1.8.0); however: Package libgdal1h is not installed. mysql-workbench depends on libgdk-pixbuf2.0-0 (>= 2.22.0); however: Package libgdk-pixbuf2.0-0 is not installed. mysql-workbench depends on libgl1-mesa-glx | libgl1; however: Package libgl1-mesa-glx is not installed. Package libgl1 is not installed. mysql-workbench depends on libglibmm-2.4-1c2a (>= 2.42.0); however: Package libglibmm-2.4-1c2a is not installed. mysql-workbench depends on libgnome-keyring0 (>= 2.22.2); however: Package ldpkg: error processing package mysql-workbench (--install): dependency problems - leaving unconfiguredProcessing triggers for mime-support (3.58) ...Processing triggers for shared-mime-info (1.3-1) ...Errors were encountered while processing: mysql-workbenchroot@VirtualBox:/Folder# echo $?1root@VirtualBox:/Folder# dpkg --dry-run -i mysql-workbench_6.2.3+dfsg-7_armhf.deb (Reading database ... 49115 files and directories currently installed.)Preparing to unpack mysql-workbench_6.2.3+dfsg-7_armhf.deb ...root@VirtualBox:/Folder# echo $?0root@VirtualBox:/Folder# dpkg --dry-run --simulate -i mysql-workbench_6.2.3+dfsg-7_armhf.deb (Reading database ... 49115 files and directories currently installed.)Preparing to unpack mysql-workbench_6.2.3+dfsg-7_armhf.deb ...root@VirtualBox:/Folder# echo $?0root@VirtualBox:/Folder# When I use the dpkg -i option, the command fails with a return value of 1, but the same command as a --dry-run returns zero. Adding the --simulate option also doesn't seem to change behaviour. Any pointers on how to consistently check if installation of a .DEB file will go through properly, without actually installing the package? I am running this on a Raspberry Pi emulator. root@VirtualBox:/Folder# cat /etc/os-releasePRETTY_NAME="Raspbian GNU/Linux 8 (jessie)"NAME="Raspbian GNU/Linux"VERSION_ID="8"VERSION="8 (jessie)"ID=raspbianID_LIKE=debianHOME_URL="http://www.raspbian.org/"SUPPORT_URL="http://www.raspbian.org/RaspbianForums"BUG_REPORT_URL="http://www.raspbian.org/RaspbianBugs" | To determine whether a package can be installed without needing other dependencies to be installed too, your best bet is to use the “simulate” mode with apt : apt -s install ./mysql-workbench_6.2.3+dfsg-7_armhf.deb (note the ./ which is significant). This will output the dpkg operations which would be performed by the real installation. Package installations are marked with Inst ; if there’s more than one of these, the package can’t be installed on its own. Now, on to the meaty part... 
You can’t use dpkg for this, not because dpkg doesn’t know about dependencies (it most definitely does), but because dependencies aren’t strong enough. When a package depends on another, the dependency doesn’t prevent the package from being installed if it’s not satisfied, it prevents it from being configured . See section 7.2 of Debian Policy : A Depends field takes effect only when a package is to be configured. It does not prevent a package being on the system in an unconfigured state while its dependencies are unsatisfied, and it is possible to replace a package whose dependencies are satisfied and which is properly installed with a different version whose dependencies are not and cannot be satisfied; when this is done the depending package will be left unconfigured (since attempts to configure it will give errors) and will not function properly. You can see this in your own test: the process fails with dpkg: dependency problems prevent configuration of mysql-workbench Note “configuration”, not “installation”. If you look at the output of dpkg -l mysql-workbench , you should see iU , which means the package is installed but not configured. When you enable “simulate” mode in dpkg , it basically runs in read-only mode. It does this by setting a f_noact flag; you can look for this in the source code. When installing packages, simulation goes through the installation motions (without writing anything), and proceeds to the configuration phase; but that just fakes success , which is the only thing that a simulation can do — configuration involves running maintainer scripts in the package, and it would be difficult to ensure that those scripts didn’t make changes, or ensure that their success could be determined without allowing them to make changes. So in your case, the simulation installs the package, which succeeds (as in your non-simulated test), and fakes the configuration. Thus no error is detected... | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/404784",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/21728/"
]
} |
404,792 | Sometimes I restart a device and need to ssh back in when it's ready. I want to run the ssh command every 5 seconds until the command succeeds. My first attempt: watch -n5 ssh [email protected] && exit 1 How can I do this? | Another option would be to use until . until ssh [email protected]; do sleep 5done If you do this repeatedly for a number of hosts, put it in a function in your ~/.bashrc . repeat(){read -p "Enter the hostname or IP of your server :" servernameuntil ssh $servername; do sleep 5done} | {
"score": 7,
"source": [
"https://unix.stackexchange.com/questions/404792",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/16792/"
]
} |
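A small variation that also caps how long each attempt can hang, using ssh's ConnectTimeout option (user@host is a placeholder):
until ssh -o ConnectTimeout=5 user@host; do
    sleep 5
done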
404,804 | I have a file with the following format INSERT INTO table1(field1,field2,field3) VALUES('values1','value2','value3');INSERT INTO table1(field1,field2,field3) VALUES('other_values1','other_value2','other_value3');INSERT INTO table1(field1,field2,field3) VALUES('another_values1','another_value2','another_value3');INSERT INTO table2(table2_field1,table2_field2,table2_field3,field4) VALUES('table2_values1','table2_value2','table2_value3');INSERT INTO table2(table2_field1,table2_field2,table2_field3,table2_field4) VALUES('other_table2_values1','other_table2_value2','other_table2_value3');INSERT INTO table2(table2_field1,table2_field2,table2_field3,table2_field4) VALUES('another_table2_values1','another_table2_value2','another_table2_value3','another_table2_value4'); I want this output SELECT * FROM table1 WHERE field1='values1' AND field2='values2' AND field3=='values3';SELECT * FROM table1 WHERE field1='other_values1' AND field2='other_values2' AND field3=='other_values3';SELECT * FROM table1 WHERE field1='another_values1' AND field2='another_values2' AND field3=='another_values3';SELECT * FROM table2 WHERE table2_field1='table2_values1' AND table2_field2='table2_values2' AND table2_field3=='table2_values3' AND table2_field4=='table2_values4';SELECT * FROM table2 WHERE table2_field1='table2_values1' AND table2_field2='table2_values2' AND table2_field3=='table2_values3' AND table2_field4=='table2_values4';SELECT * FROM table2 WHERE table2_field1='table2_values1' AND table2_field2='table2_values2' AND table2_field3=='table2_values3' AND table2_field4=='table2_values4'; What I've done so far is cat test_inserts |awk -F '[()]' '{print $1 " WHERE "$2 $4}' |sed 's/INSERT INTO /SELECT * FROM /g' and it gives me the following output SELECT * FROM table1 WHERE field1,field2,field3'values1','value2','value3'SELECT * FROM table1 WHERE field1,field2,field3'other_values1','other_value2','other_value3'SELECT * FROM table1 WHERE field1,field2,field3'another_values1','another_value2','another_value3'SELECT * FROM table2 WHERE table2_field1,table2_field2,table2_field3,field4'table2_values1','table2_value2','table2_value3'SELECT * FROM table2 WHERE table2_field1,table2_field2,table2_field3,table2_field4'other_table2_values1','other_table2_value2','other_table2_value3'SELECT * FROM table2 WHERE table2_field1,table2_field2,table2_field3,table2_field4'another_table2_values1','another_table2_value2','another_table2_value3','another_table2_value4' | Complex AWK solution: awk -F'[()]' '{ sub(/INSERT INTO */,"",$1); printf "SELECT * FROM %s WHERE ",$1; len=split($2, f, ","); split($4, v, ","); for (i=1; i<=len; i++) printf "%s=%s%s", f[i], v[i], (i==len? 
";":" AND "); print "" }' test_inserts -F'[()]' - complex field separator sub(/INSERT INTO */,"",$1) - remove INSERT INTO phrase from the 1st field (to extract a table name) printf "SELECT * FROM %s WHERE ",$1 - print the start of SQL statement containing a table name split($2, f, ",") - split the 2nd field by separator , to obtain field names ( f becomes an array of field names) split($4, v, ",") - split the 4th field by separator , to obtain field values ( v becomes an array of field values) The output: SELECT * FROM table1 WHERE field1='values1' AND field2='value2' AND field3='value3';SELECT * FROM table1 WHERE field1='other_values1' AND field2='other_value2' AND field3='other_value3';SELECT * FROM table1 WHERE field1='another_values1' AND field2='another_value2' AND field3='another_value3';SELECT * FROM table2 WHERE table2_field1='table2_values1' AND table2_field2='table2_value2' AND table2_field3='table2_value3' AND field4=;SELECT * FROM table2 WHERE table2_field1='other_table2_values1' AND table2_field2='other_table2_value2' AND table2_field3='other_table2_value3' AND table2_field4=;SELECT * FROM table2 WHERE table2_field1='another_table2_values1' AND table2_field2='another_table2_value2' AND table2_field3='another_table2_value3' AND table2_field4='another_table2_value4'; | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/404804",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/216688/"
]
} |
404,822 | I need to create a shell script that checks for the presence of a file and if it doesn't exist, creates it and moves on to the next command, or just moves on to the next command. What I have doesn't do that.
#!/bin/bash
# Check for the file that gets created when the script successfully finishes.
if [! -f /Scripts/file.txt]
then
: # Do nothing. Go to the next step?
else
mkdir /Scripts # file.txt will come at the end of the script
fi
# Next command (macOS preference setting)
defaults write ...
Return is:
line 5: [!: command not found
mkdir: /Scripts: File exists
No idea what to do. Every place a Google search brings me indicates something different. | Possibly simpler solution, no need to do explicit tests, just use:
mkdir -p /Scripts
touch /Scripts/file.txt
If you don't want the "modification" time of an existing file.txt to be changed by touch , you can use touch -a /Scripts/file.txt to make touch only change the "access" and "change" times. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/404822",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/256085/"
]
} |
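A note on the error in the question of record 404,822 above: "[!: command not found" comes from the missing spaces around [ and ] — [ is an ordinary command and needs its arguments separated by whitespace. A minimal sketch of the corrected test, reusing the path from the question:

    #!/bin/bash
    # [ needs a space after it and before the closing ]
    if [ ! -f /Scripts/file.txt ]; then
        mkdir -p /Scripts    # -p: no error if the directory already exists
    fi
    # next command (macOS preference setting) goes here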
404,858 | I understand the difference between .bashrc and .bash_profile (or .bash_login and ~/.profile for that matter), but is there any particular reason to choose .bashrc over .bash_profile for bash shell configuration? From my understanding, configuration such as terminal colors, environmental variables, etc. in .bashrc will be re-loaded every time a new bash window is opened. .bash_profile will only get loaded once at login, and I think that should be enough. Why put anything in .bashrc then? The only reason I can think of is to avoid having to log out of the system for a configuration to be loaded. I couldn't find an answer besides purely conventional reasons. | Shell options (from shopt or set ) are not inherited through the environment. Nor are aliases . If you want to, for example, enable failglob for all your shells, that needs to be in the RC file. An alias could be replaced with an exported function, but there's no workaround for the options. It is also conventional & encouraged by the manual to have .bash_profile source .bashrc (a minimal example follows after this record), so these configurations that you put in there will be loaded into both login and non-login shells. If they're only in .bash_profile , they might never be loaded into a shell you actually use at all. Another situation is where you have more complex configuration with actual executable code (for example, some advanced PROMPT_COMMAND ) and want freshly-initialised variables for it to use in each shell. You probably wouldn't want those variables exported at all, or perhaps they are arrays and they can't be. A final case would be for side-effecting command execution: displaying fortune or a to-do list in every new shell. That is not so much "configuration", but it is setting up your shell behaviour. There is also the more general case where bash is not invoked as the login shell (because your session manager does something else, or it isn't your login shell, or ...) and your .bash_profile would never be processed at all. That may be out of scope for your concern, though. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/404858",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/260731/"
]
} |
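A minimal sketch of the convention the answer in record 404,858 above refers to — .bash_profile sourcing .bashrc so that login and non-login interactive shells end up with the same settings; the existence test is just defensive:

    # ~/.bash_profile
    if [ -f ~/.bashrc ]; then
        . ~/.bashrc
    fi
    # login-only items (PATH exports, ssh-agent startup, ...) can stay below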
404,941 | Context I am runnning a Debian Stretch distribution with Cinnamon graphical interface. I use this command to turn off the display xset dpms force off It is useful to me when I want to sleep and just launch a video without being perturbed by the light of my screen.Note that if the mouse pointer is active (moves), then the display is turn on. Problem If the video is launched by VLC or Totem Movie Player, all is working fine. If the video is launched by mplayer, display is turn off for 12s and then the video appears which is not what I expect... I don't know why does the command "xset dpms force off" stop with the mplayer's app. | Run mplayer like this mplayer -nostop-xscreensaver [other options] video-file or add the option into config file ~/.mplayer/config : [default]stop-xscreensaver=0 | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/404941",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/186283/"
]
} |
404,955 | There are a few home directory locations that complement /usr locations. Files in these locations override their /usr counterparts. For example: ~/.local/bin complements /usr/bin ~/.local/share/applications complements /usr/share/applications If I want to override an application, I can create a custom .desktop file and store it in ~/.local/share/applications . This is helpful if I want to tweak how the application is invoked, but overkill if all I want to do is change the icon. Additionally: if the original .desktop file is non-trivial I either lose the original functionality or need to keep my local copy in-sync I can't modify non-application icons (status icons, etc) I could modify or maintain icons in /usr/share/icons/hicolor/48x48 , but I would prefer to maintain them in my home folder, and these would be fallback icons not overriding icons. Is there a home folder location that complements /usr/share/icons , where I could store application icons and other icons, and they would override existing theme icons? For example, I'm using the Papirus theme but I want to use my own icon notepad.svg for the Text Editor application. This icon is defined in /usr/share/applications/org.gnome.gedit.desktop as Icon=gedit . Where should I place notepad.svg ? | The historical equivalent is ~/.icons , the XDG equivalent is ~/.local/share/icons (strictly speaking, icons subdirectories of the paths in $XDG_DATA_DIRS ). When you specify an icon by name only in a .desktop file, that relies on icon themes, so it’s worth reading the icon theme spec . Ideally you should use xdg-icon-resource to install icons locally. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/404955",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/269/"
]
} |
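Two hedged ways to carry out the last suggestion in the answer of record 404,955 above, using the notepad/gedit names from the question. xdg-icon-resource expects PNG (or XPM) input, so a PNG variant of the icon is assumed for the first form; --novendor is needed because "gedit" has no vendor prefix:

    # install a 48x48 PNG into the user's hicolor theme under the icon name "gedit"
    xdg-icon-resource install --novendor --size 48 ./notepad.png gedit

    # or drop an SVG by hand, named after the icon name used in the .desktop file
    mkdir -p ~/.local/share/icons/hicolor/scalable/apps
    cp notepad.svg ~/.local/share/icons/hicolor/scalable/apps/gedit.svg

Since hicolor is only the fallback theme, an icon that the current theme (Papirus here) already provides may still win the lookup; in that case the replacement has to go under the matching theme directory in ~/.local/share/icons instead.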
404,958 | Input: tmp# cat test51234 0123450.000 0123401/02/03 546701234 012305554567 23400990000 000054 Output on Bash Shell: [tmp]# perl -lpe 's#(^|\h)\K0[^./\h]+(?=\h|$)#"$&"#g' test51234 "012345"0.000 "01234"01/02/03 5467"01234" "0123""05554567" 234"0099""0000" "000054" Output on /sbin/sh shell on HP Unix: /tmp # perl -lpe 's#(^|\h)\K0[^./\h]+(?=\h|$)#"$&"#g' test51234 0123450.000 0123401/02/03 546701234 012305554567 23400990000 000054 | The historical equivalent is ~/.icons , the XDG equivalent is ~/.local/share/icons (strictly speaking, icons subdirectories of the paths in $XDG_DATA_DIRS ). When you specify an icon by name only in a .desktop file, that relies on icon themes, so it’s worth reading the icon theme spec . Ideally you should use xdg-icon-resource to install icons locally. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/404958",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/260820/"
]
} |
405,009 | I'm running Debian 9 including non-free firmware, in order to get my wifi to work. I do apparently have the necessary firmware as my wifi works, but I do still get the following output after running dmesg: [ 4.225315] ath10k_pci 0000:03:00.0: firmware: failed to load ath10k/pre-cal-pci-0000:03:00.0.bin (-2)[ 4.225317] ath10k_pci 0000:03:00.0: Direct firmware load for ath10k/pre-cal-pci-0000:03:00.0.bin failed with error -2[ 4.225329] ath10k_pci 0000:03:00.0: firmware: failed to load ath10k/cal-pci-0000:03:00.0.bin (-2)[ 4.225330] ath10k_pci 0000:03:00.0: Direct firmware load for ath10k/cal-pci-0000:03:00.0.bin failed with error -2 I can't seem to find the files pre-cal-pci-0000:03:00.0.bin or cal-pci-0000:03:00.0.bin anywhere. As my wifi seems to work flawlessly without them, what is the purpose of these firmware files? | These are pre-calibration and calibration files; they are optional , and as you’ve noticed, the device can work fine without them. Calibration data can be obtained in a variety of ways (from EEPROM in the device, from files on disk, from device tree information). I get the impression the “firmware” files are intended for very specific configurations (where the PCI location would be fixed); so basically their purpose appears to be to provide a means for systems integrators to provide their own calibration data. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/405009",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/-1/"
]
} |
405,046 | I have an extract from a forward DNS zone file, which I want to sort by ascending IP address. Before you mark this as a duplicate, please read on a short while, because this isn't about sorting IP addresses as such ( sort -k5V would address that). Here is a sample of the data: esx01.example.com. 3600 IN A 10.1.1.212ilo01.example.com. 3600 IN A 10.1.1.211nas01.example.com. 3600 IN A 10.1.1.101pc001.example.com. 1200 IN A 10.1.1.42pc002.example.com. 1200 IN A 10.1.1.52pc003.example.com. 1200 IN A 10.1.1.29 In this specific case I know I can sort by just the last octet, so this should be a straightforward application of sort . The man page confirms that I can use -k with not only a field but also an offset within that field, and with an n numeric modifier KEYDEF is F[.C][OPTS][,F[.C][OPTS]] for start and stop position, where F is a field number and C a character position in the field; both are origin 1, and the stop position defaults to the line's end. If neither -t nor -b is in effect, characters in a field are counted from the beginning of the preceding whitespace. OPTS is one or more single-letter ordering options [ bdfgiMhnRrV ], which override global ordering options for that key. The last octet conveniently starts at character offset eight within the fifth field, so my understanding is that this command should suffice: sort -k5.8n /tmp/axfr.10.1.1 However, this does not sort my data at all. Empirically I find I need to start at field position 15 to sort this data in ascending numeric order as expected: sort -k5.15n /tmp/axfr.10.1.1pc003.example.com. 1200 IN A 10.1.1.29pc001.example.com. 1200 IN A 10.1.1.42pc002.example.com. 1200 IN A 10.1.1.52nas01.example.com. 3600 IN A 10.1.1.101ilo01.example.com. 3600 IN A 10.1.1.211esx01.example.com. 3600 IN A 10.1.1.212 Why? | Use the sort --debug option to get some clues: $ echo 'esx01.example.com. 3600 IN A 10.1.1.212' | sort --debug -k5.8nsort: using simple byte comparisonsort: leading blanks are significant in key 1; consider also specifying 'b'sort: key 1 is numeric and spans multiple fieldsesx01.example.com. 3600 IN A 10.1.1.212 ____ It is underlining the sort field. It isn't what you expected.You need -b , as sort counts columns from the end of the previous field(man page: If neither -t nor -b is in effect, characters in a field are counted from the beginning of the preceding whitespace ): $ ... | sort --debug -b -n -k5.8 sort: using simple byte comparisonsort: key 1 is numeric and spans multiple fieldsesx01.example.com. 3600 IN A 10.1.1.212 ___ The -n needs to be separate: $ ... | sort --debug -b -k5.8nsort: using simple byte comparisonsort: leading blanks are significant in key 1; consider also specifying 'b'sort: key 1 is numeric and spans multiple fieldssort: option '-b' is ignoredesx01.example.com. 3600 IN A 10.1.1.212 ____ or the b given with the n : $ ... | sort --debug -k5.8nbsort: using simple byte comparisonsort: key 1 is numeric and spans multiple fieldsesx01.example.com. 3600 IN A 10.1.1.212 ___ | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/405046",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/100397/"
]
} |
405,085 | I have a load of files(mp3, wav, txt, doc) that have been created in MS Windows and they have spaces in their names. eg The file of whoever.doc I would like to rename them all at once, replacing the space with an underscore or dot. | The shell can do this pretty easily (here assuming ksh93, zsh, bash, mksh, yash or (some builds of) busybox sh for the ${var//pattern/replacement} operator): for file in *.doc *.mp3 *.wav *.txtdo mv -- "$file" "${file// /_}"done Change the *.doc ... glob to match whatever files you're interested in renaming. To rename all of the files in the current directory that currently have spaces in their filenames: for file in *' '*do mv -- "$file" "${file// /_}"done You might also consider adding a "clobber" check: for file in *' '*do if [ -e "${file// /_}" ] then printf >&2 '%s\n' "Warning, skipping $file as the renamed version already exists" continue fi mv -- "$file" "${file// /_}"done Or use mv 's -i option to prompt the user before overriding a file. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/405085",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/106524/"
]
} |
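If a Perl-based rename utility happens to be installed (packaged as rename or prename on Debian-like systems — note this is not the util-linux rename), the loop in the answer of record 405,085 above collapses to a single call — a sketch:

    rename -n 's/ /_/g' -- *' '*   # -n: dry run, only show what would be renamed
    rename 's/ /_/g' -- *' '*      # actually rename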
405,213 | I am trying to extract some data from a file that is constantly updating and I have figured out how to filter two strings with grep. The output is as follows: ! total energy = -9744.24963670 Ry convergence has been achieved in 188 iterations! total energy = -9744.30001681 Ry convergence has been achieved in 140 iterations! total energy = -9744.33953891 Ry convergence has been achieved in 155 iterations! total energy = -9744.36584201 Ry convergence has been achieved in 164 iterations! total energy = -9744.37925372 Ry convergence has been achieved in 154 iterations! total energy = -9744.39185493 Ry convergence has been achieved in 153 iterations! total energy = -9744.39836617 Ry convergence has been achieved in 160 iterations Now what I would like to do is to extract from these lines the numbers as follows:from the line starting with ! I want the number in column no 5 and from the next line in the grep output I want the number in column no 6. Next I would like these numbers written in a separate file as two separated columns as: 188 -9744.24963670140 -9744.30001681155 -9744.33953891164 -9744.36584201 I was thinking that an approach with awk by looping through all these grep results and then looking at odd numbered lines and print column 5 and then for even lines print column 6. But I have no idea how to do this. I have tried extracting individual results into variables separately: var1=$(grep '!' input.file | awk '{print $5}') and var2=$(grep 'convergence has been achieved' input.file | awk '{print $6}') and then I tried to write them to a file as: echo $var1 $var2 > data.dat However the result is not as expected: 188 140155164154153160 -9744.24963670-9744.30001681-9744.33953891-9744.36584201-9744.37925372-9744.39185493-9744.39836617 I don't know how to write them in the form I mentioned above. Also since the file is constantly updated I imagine the piece of code to be combined with a while loop until and end condition (I know how to do this last part) I hope I explained this clearly! | awk solution: awk 'v && NR==n{ print $6,v > "result.txt" }/^!/{ v=$5; n=NR+1 }' file <condition1> { <statement> ... }<condition2>{ <statement> ... } - conditions with respective statements will be evaluated consecutively /^!/{ v=$5; n=NR+1 } - on encountering line starting with ! - capture the 5th field value $5 and plan the next line number NR+1 (assigning to variable n ) v && NR==n - if we have the 1st crucial number v and the current record number NR is the needed "next line number" n - print the values into file result.txt The result.txt file contents: 188 -9744.24963670140 -9744.30001681155 -9744.33953891164 -9744.36584201154 -9744.37925372153 -9744.39185493160 -9744.39836617 | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/405213",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/224652/"
]
} |
405,214 | I have a bash script which has a variable being set from the output of cat and grep: result=`cat $file | grep -A2 "$search"` result contains 3 lines E.G.: This is Line 1This is line 2This is line 3 I need to prefix each line with a space: This is Line 1 This is line 2 This is line 3 I have tried the following: result=`echo $result | awk '{print " "$0}'` and a few different sed commands, all of which result in this: This is Line 1 This is line 2 This is line 3 they add the spaces, but delete the new lines Note: this will be saved into a file which needs the line breaks | awk solution: awk 'v && NR==n{ print $6,v > "result.txt" }/^!/{ v=$5; n=NR+1 }' file <condition1> { <statement> ... }<condition2>{ <statement> ... } - conditions with respective statements will be evaluated consecutively /^!/{ v=$5; n=NR+1 } - on encountering line starting with ! - capture the 5th field value $5 and plan the next line number NR+1 (assigning to variable n ) v && NR==n - if we have the 1st crucial number v and the current record number NR is the needed "next line number" n - print the values into file result.txt The result.txt file contents: 188 -9744.24963670140 -9744.30001681155 -9744.33953891164 -9744.36584201154 -9744.37925372153 -9744.39185493160 -9744.39836617 | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/405214",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/261019/"
]
} |
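For the question in record 405,214 above: the newlines disappear because $result is expanded unquoted in echo, not because of sed or awk. A minimal sketch that keeps them, reusing the variable names from the question (the output file name is a placeholder):

    result=$(grep -A2 -- "$search" "$file")
    result=$(printf '%s\n' "$result" | sed 's/^/ /')   # prefix each line with a space
    printf '%s\n' "$result" > output.txt               # quotes preserve the line breaks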
405,216 | i start my VPN client (softEther) with SystemD a OS startup and i have trouble to assignate a static IP to the local interface of vpn client network interface. There is my SystemD config : [Unit] Description=SoftEther VPN Client After=network.target auditd.service ConditionPathExists=!/usr/local/vpnclient/vpnclient/do_not_run [Service] Type=forking EnvironmentFile=-/usr/local/vpnclient/vpnclient ExecStart=/usr/local/vpnclient/vpnclient start ExecStop=/usr/local/vpnclient/vpnclient stop KillMode=process Restart=on-failure # Hardening PrivateTmp=yes ProtectHome=yes ProtectSystem=full ReadOnlyDirectories=/ ReadWriteDirectories=-/usr/local/vpnclient/vpnclient CapabilityBoundingSet=CAP_NET_ADMIN CAP_NET_BIND_SERVICE CAP_NET_BROADCAST CAP_NET_RAW CAP_SYS_NICE CAP_SYS_ADMIN CAP_SETUID [Install] WantedBy=multi-user.target When i start the service, local interface appear but without the static IP that i configure. vpn_softether: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500 inet6 fe80::2ac:e9ff:fe7e:289e prefixlen 64 scopeid 0x20<link> ether 00:ac:e9:7e:28:9e txqueuelen 1000 (Ethernet) RX packets 12 bytes 864 (864.0 B) RX errors 0 dropped 0 overruns 0 frame 0 TX packets 20 bytes 1632 (1.5 KiB) TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 There is my /etc/sysconfig/network-scripts/ifcfg-vpn_softether : DEVICE="vpn_softether"HWADDR="00:ac:e9:7e:28:9e"ONBOOT="yes"BOOTPROTO=staticNM_CONTROLLED="no"IPADDR="10.38.0.50"NETMASK="255.255.255.0" I need to execute an : ifdown vpn_softether && ifup vpn_softether to be able to have my static IP on the interface : vpn_softether: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500 inet 10.38.0.50 netmask 255.255.255.0 broadcast 10.38.0.255 inet6 fe80::2ac:e9ff:fe7e:289e prefixlen 64 scopeid 0x20<link> ether 00:ac:e9:7e:28:9e txqueuelen 1000 (Ethernet) RX packets 33 bytes 2506 (2.4 KiB) RX errors 0 dropped 0 overruns 0 frame 0 TX packets 69 bytes 12308 (12.0 KiB) TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 I will appreciate some tips :) | awk solution: awk 'v && NR==n{ print $6,v > "result.txt" }/^!/{ v=$5; n=NR+1 }' file <condition1> { <statement> ... }<condition2>{ <statement> ... } - conditions with respective statements will be evaluated consecutively /^!/{ v=$5; n=NR+1 } - on encountering line starting with ! - capture the 5th field value $5 and plan the next line number NR+1 (assigning to variable n ) v && NR==n - if we have the 1st crucial number v and the current record number NR is the needed "next line number" n - print the values into file result.txt The result.txt file contents: 188 -9744.24963670140 -9744.30001681155 -9744.33953891164 -9744.36584201154 -9744.37925372153 -9744.39185493160 -9744.39836617 | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/405216",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/261016/"
]
} |
405,229 | Systems: Linux Mint 18.2 Cinnamon 64-bit (desktop mode, hands-on) GNU/Linux Debian 9.2 Cinnamon 64-bit (headless mode, SSH) Question: How to release and renew IP address from the DHCP server (router) on these Linux systems? | I have found there is the following program listening on the network on both of them: dhclient Quoting the man page : -r Release the current lease and stop the running DHCP client as previously recorded in the PID file. When shutdown via this method dhclient-script will be executed with the specific reason for calling the script set. The client normally doesn't release the current lease as this is not required by the DHCP protocol but some cable ISPs require their clients to notify the server if they wish to release an assigned IP address. So, the solution for all interfaces would be:
sudo dhclient -r
sudo dhclient
Or, conveniently, for a specific interface, say eth0 :
sudo dhclient -r eth0
sudo dhclient eth0
Of course, when SSH'ing into a server, you need to run both commands as a one-liner or write a script as per this answer (a sketch follows after this record). | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/405229",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/126755/"
]
} |
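A sketch of the one-liner hinted at near the end of the answer in record 405,229 above, useful over SSH because the release may drop the connection before the renew runs; the interface name eth0 is an assumption:

    sudo sh -c 'dhclient -r eth0; dhclient eth0'
    # or detach it so a dropped SSH session cannot interrupt the renew:
    nohup sudo sh -c 'dhclient -r eth0; dhclient eth0' >/dev/null 2>&1 &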
405,313 | I've been using $((1 + RANDOM % 1000)) to generate a random number. Is it possible to do something similar but provide a seed? So that given the same seed the same random number will always be output? | Assign a seed value to RANDOM $ bash -c 'RANDOM=640; echo $RANDOM $RANDOM $RANDOM'28612 27230 24923$ bash -c 'RANDOM=640; echo $RANDOM $RANDOM $RANDOM'28612 27230 24923$ bash -c 'RANDOM=640; echo $RANDOM $RANDOM $RANDOM'28612 27230 24923$ Notice that single quotes are used; double quotes run afoul shell interpolation rules: $ bash -c 'RANDOM=42; echo $RANDOM $RANDOM $RANDOM'19081 17033 15269$ RANDOM=42$ bash -c "RANDOM=640; echo $RANDOM"19081$ bash -c "RANDOM=640; echo $RANDOM"17033$ bash -c "RANDOM=640; echo $RANDOM"15269$ because $RANDOM is being interpolated by the parent shell before the child bash -c ... process is run. Either use single quotes to turn off interpolation (as shown above) or otherwise prevent the interpolation: $ RANDOM=42$ SEED_FOR_MY_GAME=640$ bash -c "RANDOM=$SEED_FOR_MY_GAME; echo \$RANDOM"28612$ bash -c "RANDOM=$SEED_FOR_MY_GAME; echo \$RANDOM"28612$ This feature of RANDOM is mentioned in the bash(1) manual RANDOM Each time this parameter is referenced, a random integer between 0 and 32767 is generated. The sequence of random numbers may be initialized by assigning a value to RANDOM. If RANDOM is unset, it loses its special properties, even if it is subsequently reset. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/405313",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/16792/"
]
} |
405,337 | I'm trying to run something like: sudo dhclient $wifi || otherFunction Problem is when dhclient fails it just hangs instead of throwing an error. How can I re-write the above so dhclient is killed and otherFunction gets called if dhclient doesn't finish in 60 seconds? | Your timeout tag gives it all away: sudo timeout 60 dhclient $wifi || otherFunction An example: sudo timeout 3 sleep 5 || echo finished early This uses the timeout utility provided by the GNU coreutils package on Linux. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/405337",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/16792/"
]
} |
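A small follow-up to the answer in record 405,337 above: timeout exits with status 124 when it had to kill the command, so the timed-out case can be told apart from dhclient failing on its own — a sketch:

    sudo timeout 60 dhclient "$wifi"
    status=$?
    if [ "$status" -eq 124 ]; then
        otherFunction    # dhclient hung and was killed after 60 s
    elif [ "$status" -ne 0 ]; then
        otherFunction    # dhclient itself failed quickly
    fi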
405,370 | I'm several child directories deep and I enter cd .. and receive this error: cd: ..: No such file or directory I am confused- of course there is a parent directory, I'm in it! A little digging shows that my coworker renamed a grandparent directory out from under me and when I tried to move to my parent directory, I got the above error. I tried to reproduce this like so: server|/n01/data/adf/temp/TEMPTEST/SUB1/SUB2> pwd/n01/data/adf/temp/TEMPTEST/SUB1/SUB2server|/n01/data/adf/temp/TEMPTEST/SUB1/SUB2> mv /n01/data/adf/temp/TEMPTEST /n01/data/adf/temp/NEWTEMPTESTserver|/n01/data/adf/temp/TEMPTEST/SUB1/SUB2> pwd/n01/data/adf/temp/NEWTEMPTEST/SUB1/SUB2 And now I am lost and adrift, changing to the parent directory will give me the same error as before. server|/n01/data/adf/temp/TEMPTEST/SUB1/SUB2> cd ..server|/n01/data/adf/temp/NEWTEMPTEST/SUB1> No error. I changed directories successfully. What happened? Why didn't this error like the first time? | Renaming the parent directory will not cause such an error. However, deleting will, for instance: # mkdir -p some/deep/path# cd some/deep/path# rm -r some/deep/path# cd ..error: No such file or directory There is no "rename" command per se in Linux. You can "move" things around though. When moving within the same filesystem, this equates to a rename. However, when moving between filsystems, this is effectively a copy / delete operation, which could result in a similar situation as that shown above, for instance: # mkdir -p /fs1/a/b# cd /fs1/a/b# mv /fs1/a /fs2/a# cd ..error: No such file or directory when /fs1 and /fs2 are different filesystems (mount points in this case). | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/405370",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/213344/"
]
} |
405,382 | Does set -e behave differently here set -e;function foo {} vs. function foo { set -e;} does set -e belong inside functions? Does set -e declared outside of functions, affect "nested" functions inside a shell file? What about the inverse? Should we call local set -e lol? | Note: the statements here apply to Bash version 4.0.35 and up. Implementations of set -e vary wildly among different shells/versions . Follow Stéphane's advice and don't use set -e . man bash in the Shell Builtin Commands/ set section explains things pretty well though the text is a little dense and requires a bit of focus. To your specific questions the answers are: Does set -e behave different here...vs.. - Depends on what you mean by "differently" but I suspect you'd consider the answer "no"...there are no tricky scoping rules. It acts quite linearlly. Does set -e belong inside functions? - Perfectly valid. Does set -e declared outside of functions, affect "nested" functions inside a shell file? - Yes What about the inverse? - set -e in a function and then encounter a non-zero status after return? Yes, this will exit. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/405382",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/113238/"
]
} |
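A short sketch of points 2–4 of the answer in record 405,382 above: in Bash, set -e enabled inside a function keeps applying to the whole script after the function returns — there is no function-local scope for it:

    #!/bin/bash
    foo() {
        set -e              # perfectly valid inside a function
    }
    foo
    false                   # non-zero status after foo has returned...
    echo "never printed"    # ...so the script already exited on the line above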
405,472 | I was trying to rescue GRUB in Linux. I was able to login in the OS following this tutorial: https://www.lisenet.com/2014/grub2-rescue-mode-error-unknown-filesystem/ I have to upgrade GRUB to fix the problem. However, when I run grub-install , I get an error:
$ grub-install /dev/sda
grub-install: error: cannot find EFI directory.
My file system contains sda4 , sda5 , and sda6 for the EFI system, Linux swap, and Linux file system respectively. I am not very experienced using mount or other commands. | When you run grub-install , by default it assumes the EFI system partition is mounted as /boot/efi . It depends on your distribution where the EFI system partition is mounted, and on some distributions it isn't mounted after boot. First check if /boot/efi is mounted with:
mount | grep /boot/efi
If that doesn't work, try the following to see if it is mounted elsewhere:
mount | grep /dev/[efi device]
If neither of those work, do:
mount /dev/[efi device] /mnt
Now run:
grub-install --efi-directory=[efi dir]
grub-mkconfig -o /boot/grub/grub.cfg
where [efi dir] is either /boot/efi or /mnt and [efi device] is the device with the EFI system partition. If you don't know, use the command:
lsblk -o NAME,PARTTYPE,MOUNTPOINT | grep -i "C12A7328-F81F-11D2-BA4B-00A0C93EC93B" | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/405472",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/245148/"
]
} |
405,515 | I want to change my default boot OS in GRUB2. But the only way I know of seeing the order of the OS I want in the GRUB menu is doing a reboot and seeing the menu displayed. In grub.cfg there are many more menuentry lines than actual choices in the GRUB menu, so I can't identify in that file the one I want. Is there any place where the actually displayed menu is stored so that I can see it without having to reboot? | I believe grub-emu should work for you:
sudo apt-get install grub-emu
Then in a terminal execute:
grub-emu
See here for details. BE AWARE: You have to set focus to the terminal in which you started the emulator to be able to give it input! The emulator window itself will not react to any inputs at all. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/405515",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/146180/"
]
} |
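Once the wanted entry's title (or index) is known — from grub-emu or from reading grub.cfg — the default can be changed without sitting at the boot menu. A hedged sketch for a Debian-style GRUB 2 setup; the entry title used here is only a placeholder:

    # /etc/default/grub: set GRUB_DEFAULT to an index or an exact menu title
    GRUB_DEFAULT="Windows Boot Manager"
    # then regenerate grub.cfg
    sudo update-grub

    # alternatively, with GRUB_DEFAULT=saved in /etc/default/grub:
    sudo grub-set-default "Windows Boot Manager"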
405,527 | I want to touch each file in a directory: files=$(ls -a "node_modules/suman-types/dts")echo "files $files";for file in "$files"; do echo "touching file $file"; touch "node_modules/suman-types/dts/$file";done but after running that, I get: inject.d.tsinjection.d.tsintegrant-value-container.d.tsit.d.tsreporters.d.tsrunner.d.tssuman-utils.d.tssuman.d.tstable-data.d.tstest-suite-maker.d.tstest-suite.d.ts: File name too long What is that "File name too long" message about? Update #1 I changed my script to this: files=$(find "node_modules/suman-types/dts" -name "*.d.ts")for file in "$files"; do echo "touching file $file"; touch "$file";donetouch "node_modules/suman-types" But then I get this: $ ./types-touch.sh touching file node_modules/suman-types/dts/after-each.d.ts node_modules/suman-types/dts/after.d.ts node_modules/suman-types/dts/before-each.d.ts node_modules/suman-types/dts/before.d.ts node_modules/suman-types/dts/describe.d.ts node_modules/suman-types/dts/global.d.ts node_modules/suman-types/dts/index-init.d.ts node_modules/suman-types/dts/inject.d.ts node_modules/suman-types/dts/injection.d.ts node_modules/suman-types/dts/integrant-value-container.d.ts node_modules/suman-types/dts/it.d.ts node_modules/suman-types/dts/reporters.d.ts node_modules/suman-types/dts/runner.d.ts node_modules/suman-types/dts/suman-utils.d.ts node_modules/suman-types/dts/suman.d.ts node_modules/suman-types/dts/table-data.d.ts node_modules/suman-types/dts/test-suite-maker.d.ts node_modules/suman-types/dts/test-suite.d.ts touch: node_modules/suman-types/dts/after-each.d.ts node_modules/suman-types/dts/after.d.ts node_modules/suman-types/dts/before-each.d.ts node_modules/suman-types/dts/before.d.ts node_modules/suman-types/dts/describe.d.ts node_modules/suman-types/dts/global.d.ts node_modules/suman-types/dts/index-init.d.ts node_modules/suman-types/dts/inject.d.ts node_modules/suman-types/dts/injection.d.ts node_modules/suman-types/dts/integrant-value-container.d.ts node_modules/suman-types/dts/it.d.ts node_modules/suman-types/dts/reporters.d.ts node_modules/suman-types/dts/runner.d.ts node_modules/suman-types/dts/suman-utils.d.ts node_modules/suman-types/dts/suman.d.ts node_modules/suman-types/dts/table-data.d.ts node_modules/suman-types/dts/test-suite-maker.d.ts node_modules/suman-types/dts/test-suite.d.ts: No such file or directory | Your problem stems from capturing all of the ls output into a single (string) variable named files . The variable looks something like: filename1\nfilename2\nfilename3\n... See for yourself with: echo "$files" | od -c What you're really doing is looping once over a really long string that corresponds to a file that doesn't exist. The error you got was slightly informative -- it's telling you that this long string-of-a-filename doesn't exist. To touch every file in a directory, just use shell globbing and run touch (the glob would only take files in that one directory): touch node_modules/suman-types/dts/* or touch them one by one: for file in node_modules/suman-types/dts/*; do touch "$file"; done or use find to find all files recursively within the directory, and have it run touch on them: find node_modules/suman-types/dts -type f -exec touch -- {} \; or in shells that support it (Bash/ksh/zsh, with some variations), use the recursive glob operator ** : shopt -s globstar # in Bashfor file in node_modules/suman-types/dts/**/*; do touch "$file"done | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/405527",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/113238/"
]
} |
405,599 | I am trying to encrypt a file locally and I get an error. [email protected] is a placeholder for my email address, a public key exists for that in my keyring and also on key servers. My private key is located on a hardware key (Yubikey). I can decrypt previously encrypted files with no problem. Here is the error: ➜ ~ gpg -e -r [email protected] somefile.txt gpg: error retrieving '[email protected]' via WKD: General errorgpg: [email protected]: skipped: General errorgpg: somefile.txt encryption failed: General error What does this error mean and how can I solve it? P.S. There is only one more thing that might be related. My public key is expired. | Extending key expiration date fixed the problem. The error message was misleading. However adding -vv as Jens Erat suggested produced some useful error messages such as gpg: Note: signature key ... expired and gpg: ... skipped: Unusable public key that helped finding the actual error. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/405599",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/87940/"
]
} |
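The fix described in the answer of record 405,599 above — extending the key's expiration date — sketched as an interactive gpg session; the address is a stand-in for the question's placeholder and the subkey index is an assumption:

    gpg --edit-key user@example.com
    gpg> expire          # pick a new expiration for the primary key
    gpg> key 1           # select the (first) subkey; repeat for other subkeys
    gpg> expire
    gpg> save
    # optionally push the refreshed public key back to the key servers
    gpg --send-keys YOURKEYID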
405,610 | I have a few ebooks scanned from originals. They're formatted so that a single PDF page contains two actual pages : one on the left, and one on the right. I want to programmatically split each PDF page into two, so the left 50% of PDF page 1 becomes page 1 and its right becomes page 2, and so on for all the pages. Does anyone know of a command line utility or script that could help with this? Output from pdfimages -list -f 1 -l 1 file.pdf : page num type width height color comp bpc enc interp object ID x-ppi y-ppi size ratio-------------------------------------------------------------------------------------------- 1 0 image 1921 1561 rgb 3 8 jpeg no 643 0 200 200 200K 2.3% 1 1 stencil 1 1 - 1 1 image no [inline] 0.692 2 - - 1 2 stencil 1 1 - 1 1 image no [inline] 0.722 0.650 - - 1 3 stencil 1 1 - 1 1 image no [inline] 3 3 - - Second PDF: page num type width height color comp bpc enc interp object ID x-ppi y-ppi size ratio-------------------------------------------------------------------------------------------- 1 0 image 456 625 gray 1 8 jpx yes 251 0 72 72 11.7K 4.2% | This should work it needs pdftk tool ( and ghostscript ). A simple case: Step One: Split into individual pages pdftk clpdf.pdf burst this produces files pg_0001.pdf, pg_0002.pdf, ... pg_NNNN.pdf , one for each page.It also produces doc_data.txt which contains page dimensions. Step Two: Create left and right half pages pw=`cat doc_data.txt | grep PageMediaDimensions | head -1 | awk '{print $2}'` ph=`cat doc_data.txt | grep PageMediaDimensions | head -1 | awk '{print $3}'` w2=$(( pw / 2 )) w2px=$(( w2*10 )) hpx=$(( ph*10 )) for f in pg_[0-9]*.pdf ; do lf=left_$f rf=right_$f gs -o ${lf} -sDEVICE=pdfwrite -g${w2px}x${hpx} -c "<</PageOffset [0 0]>> setpagedevice" -f ${f} gs -o ${rf} -sDEVICE=pdfwrite -g${w2px}x${hpx} -c "<</PageOffset [-${w2} 0]>> setpagedevice" -f ${f} done Step Three: Merge left and right in order to produce newfile.pdf containing single page .pdf. ls -1 [lr]*_[0-9]*pdf | sort -n -k3 -t_ > fl pdftk `cat fl` cat output newfile.pdf A more general case: The example above assumes all pages are same size. The doc_data.txt file contains size for each split page. If the command grep PageMediaDimensions <doc_data.txt | sort | uniq | wc -l does not return 1 then the pages have different dimensions and some extra logic is needed in Step Two . If the split is not exactly 50:50 then a better formula than w2=$(( pw / 2 )) , used in the example above, is needed. This second example shows how to handle this more general case. Step One: split with pdftk as before Step Two: Now create three files that contain the width and height of each pages and a default for the fraction of the split the left page will use. grep PageMediaDimensions <doc_data.txt | awk '{print $2}' > pws.txt grep PageMediaDimensions <doc_data.txt | awk '{print $3}' > phs.txt grep PageMediaDimensions <doc_data.txt | awk '{print "0.5"}' > lfrac.txt the file lfrac.txt can be hand edited if information is availablefor where to split different pages. Step Three: Now create left and right split pages, using the different pages sizes and (if edited) different fractional locations for the split. 
#!/bin/bash
exec 3<pws.txt
exec 4<phs.txt
exec 5<lfrac.txt
for f in pg_[0-9]*.pdf ; do
  read <&3 pwloc
  read <&4 phloc
  read <&5 lfr
  wl=`echo "($lfr)"'*'"$pwloc" | bc -l`; wl=`printf "%0.f" $wl`
  wr=$(( pwloc - wl ))
  lf=left_$f
  rf=right_$f
  hpx=$(( phloc*10 ))
  w2px=$(( wl*10 ))
  gs -o ${lf} -sDEVICE=pdfwrite -g${w2px}x${hpx} -c "<</PageOffset [0 0]>> setpagedevice" -f ${f}
  w2px=$(( wr*10 ))
  gs -o ${rf} -sDEVICE=pdfwrite -g${w2px}x${hpx} -c "<</PageOffset [-${wl} 0]>> setpagedevice" -f ${f}
done
Step Four: This is the same merge step as in the previous, simpler, example.
ls -1 [lr]*_[0-9]*pdf | sort -n -k3 -t_ > fl
pdftk `cat fl` cat output newfile.pdf | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/405610",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/16792/"
]
} |
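If mupdf-tools happens to be available, its poster sub-command does the same left/right split as the scripts in the answer of record 405,610 above in a single call — a sketch, with input/output names as placeholders:

    # cut every page into 2 pieces along the x axis (left half, then right half)
    mutool poster -x 2 scanned-book.pdf split-pages.pdf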
405,640 | I installed Debian Buster on a Dell Inspiron 5567. This is my soundcard: inxi -AAudio: Card Intel Sunrise Point-LP HD Audio driver: snd_hda_intel Sound: ALSA v: k4.13.0-1-amd64 The laptop's speakers work fine, but I don't get sound out of headphones/speakers when plugged in; automute works though and pavucontrol shows sound directed through the headphones output (attachment follows). I read some other posts on similar issues, but they don't apply in my case. The main reason is that I don't have an alsa-base.conf nor a asound.conf files (the first one is supposed to be on /etc/modprobe.d/ and the other one on /etc/). I checked the links posted on the third link above and I confirmed what I knew already: I only have one soundcard; it has a lot of virtual devices though; screencap follows. I downloaded and ran alsa-info.sh and I get the following error: pcilib: sysfs_read_vpd: read failed: Input/output error ; I googled it and nothing useful came up (most of the posts refer to lspci). All my alsa level are correctly set on alsamixer, although I get a default card with just one level on execution of alsamixer: Any ideas? Should I create alsa-base.conf and asound.conf? If so, what should be in them? Thanks in advance! EDIT: I forgot to mention that headphones work well on a Sparky Linux (based on Debian Testing) live USB. | Two users on a Debian Facebook group provided the answer for this one. First I had to install libsamplerate0 and uncomment the lines allow-module-loadingresample-method = src-sinc-best-qualityavoid-resampling on the file /etc/pulse/daemon.conf; actually the first line has other sampling method by default, that's why it is necessary to install libsamplerate0. Then I had to create the file /etc/modprobe.d/alsa.conf with the single line options snd-hda-intel model=headset-mic and then restart alsa and pulseaudio services. Now audio works both on the internal speakers and on headphones/speakers plugged in the audio jack. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/405640",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/170823/"
]
} |
405,649 | My friend had put Linux Mint 17.3 Cinnamon 64 bit on my computer. Well, I forgot the user name, so I did a search on the Net for "forgot username linux" and came here. I got the suggestion to hit 'e' at the 1st item in GRUB which I did. The next part of the suggestion said to look for a line that started with KERNAL. Now here is where it gets interesting. I didn't find a line with KERNAL in it. However, I did find a line that started with LINUX. The full unedited line reads: linux /vmlinuz-3.19.0-32-generic root=UUID=0c031f3a-81ae-4c33-06cc--c82a855736d1 ro quiet splash $vt_handoff The suggestion then said to look and edit splash quiet to single . Now if you notice above it says quiet splash instead of splash quiet . So I figured I would edit the quiet splash to single . Now it's asking for a root password. Can anyone help? I suppose I'll need a Live CD. | Exactly what happens when you replace quiet splash or splash quiet (the order doesn't matter) by single depends on the distribution. Most distributions will ask for a root password. If you don't remember the root password, or you just want to boot in the most minimal way, you can replace quiet splash (and $vt_handoff , for that matter) by init=/bin/bash . The line should look like linux /vmlinuz-… root=… ro init=/bin/bash The amount of whitespace between the parts doesn't matter, just leave at least one space wherever there was one before. The parts that I replaced by … above do matter, you must leave what was there before. Remove everything except for the leading word linux , the word after that, root=… and ro , and add init=/bin/bash . When you boot, you'll get a bash command line, running as root. When you have physical access, the only security that could prevent you from getting in is encryption. (If your system has full-disk encryption, you will need to enter the encryption password.) At this command line, run the following commands: mount -o remount,rw /mount /proc Then you can view and modify the user database. The main user database file is /etc/passwd . It contains user names (for both physical users and system accounts), but passwords are in a different file /etc/shadow . Both files are human-readable up to a point. You cannot recover passwords though; if you've forgotten a password, all you can do is change it. The following command lists accounts that have a password: grep -v ':[*!]:' /etc/shadow (Type it carefully, it's pretty sensitive to the exact punctuation.) The first part of each line, before the first : sign, is the username. If you want to change the password for an account, run passwd rob where rob is the username. Once you've noted the username and changed the password if desired, run mount -o remount,ro /reboot | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/405649",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/261304/"
]
} |
405,684 | Why does for i in {1..5}; do x="${i}" echo "$x"; done not output 12345 ? What is the right way to do this? (Tested for i in {1..5}; do x=$(i) echo "$x"; done -bash: i: command not found and others) | You asked for answers to two questions: You asked for an explanation of why your current code does not produce the expected output. You asked for the correct way to write your code so that it does produce the expected output. Looking at your code, I can see two likely explanations for why you wrote your code the way you did: There might be some slight confusion about the syntax of a for-loop . There might be some slight confusion about the order of evaluation in what's called a simple command . for-loop syntax In the first case, I would say that you're missing a semicolon after your variable assignment. If you want to write your for-loop on a single line then you need to put a semi-colon after each command in the loop body. Try this instead: for i in {1..5}; do x="${i}"; echo "$x"; done Another alternative would be to write the for-loop using the multi-line syntax with newlines in place of the semicolons: for i in {1..5}dox="${i}"echo "${x}"done You can also mix-and-match semicolons and newlines, e.g.: for i in {1..5}; dox="${i}"; echo "${x}"done evaluation of simple commands In the second case, I would say that you probably had assumed that the variable assignment in the prologue of the command (i.e. the x="$i" assignment) occurs before the variable expansion in the body of the command (i.e. the expansion of ${x} in echo "${x}" ). But this is actually not the case. To verify this we can refer to the page on simple command expansion in the Bash Manual or to the subsection on simple commands in the Posix Specification . Both of these references include the following passage: A "simple command" is a sequence of optional variable assignments and redirections, in any sequence, optionally followed by words and redirections, terminated by a control operator. When a given simple command is required to be executed (that is, when any conditional construct such as an AND-OR list or a case statement has not bypassed the simple command), the following expansions, assignments, and redirections shall all be performed from the beginning of the command text to the end: The words that are recognized as variable assignments or redirections according to Shell Grammar Rules are saved for processing in steps 3 and 4. The words that are not variable assignments or redirections shall be expanded. If any fields remain following their expansion, the first field shall be considered the command name and remaining fields are the arguments for the command. Redirections shall be performed as described in Redirection . Each variable assignment shall be expanded for tilde expansion, parameter expansion, command substitution, arithmetic expansion, and quote removal prior to assigning the value. Notice that step 2 is where the variable expansion in the command occurs, but step 1 tells us that the variable assignments are saved until steps 3 and 4. It follows that the expression echo "${x}" is expanded to echo "" before the assignment x="${i}" takes place. This explains why you were getting empty output. For further discussion on this topic see the following posts: Why is setting a variable before a command legal in bash? bash script's temporary value on command | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/405684",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/-1/"
]
} |
405,755 | When we say that a process has a controlling terminal, do we mean that the process itself has a controlling terminal, or is it the session that the process belongs to that has a controlling terminal? I used to think that it is the session that has a controlling terminal, but then I have read the following (from here ) which implies that it is the process that has a controlling terminal: One of the attributes of a process is its controlling terminal. Child processes created with fork inherit the controlling terminal from their parent process. In this way, all the processes in a session inherit the controlling terminal from the session leader. A session leader that has control of a terminal is called the controlling process of that terminal. | It is indeed the session that has a controlling terminal The Single UNIX Specification describes the relationship in terms of the controlling terminal being "associated with a session". As it goes on to specify, a controlling terminal has a 1:1 relationship with a session. There is "at most one controlling terminal" associated with a session, and "a controlling terminal is associated with exactly one session". The FreeBSD Design and Implementation book approaches this slightly differently, but reaches the same place. It is not possible for processes that share the same session to have different controlling terminals, nor is it possible for a single terminal to be the controlling terminal of multiple sessions. Internally in FreeBSD that is how the data structures actually work. The process structure has a pointer to the pgrp structure representing the process group that the process belongs to, which in turn points to the session structure representing the session that the process group belongs to, which in turn points to the tty structure of the controlling terminal for the session. Internally in Linux, things are slightly more complex. Each task_struct has a set of pointers to pid structures for its process group ID and session ID; and has another pointer to a per-process signal_struct structure that in turn directly points to the tty structure of the controlling terminal. Further reading George V. Neville-Neil, Marshall Kirk McKusick, and Robert N.M. Watson (2014-09-25). "Process Management". The Design and Implementation of the FreeBSD Operating System . Addison-Wesley Professional. ISBN 9780133761832. Donald Lewine (1991). "Terminal I/O". POSIX Programmers Guide . O'Reilly Media, Inc. ISBN 9780937175736. Daniel P. Bovet and Marco Cesati (2005). "Processes". Understanding the Linux Kernel: From I/O Ports to Process Management . 3rd edition. O'Reilly Media, Inc. ISBN 9780596554910. "Definitions". The Open Group Base Specifications . Issue 7. 2016. IEEE 1003.1:2008. "General Terminal Interface". The Open Group Base Specifications . Issue 7. 2016. IEEE 1003.1:2008. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/405755",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/257803/"
]
} |
405,775 | There are two SLES 11 servers: SERVER311:~ # cat /sys/devices/system/cpu/cpuidle/current_driveracpi_idleSERVER311:~ # and: SERVER705:~ # cat /sys/devices/system/cpu/cpuidle/current_driverintel_idleSERVER705:~ # Both having the: intel_idle.max_cstate=0 processor.max_cstate=0 in the: "/boot/grub/menu.lst", were rebooted. The question: what is the difference between acpi_idle and intel_idle ? | Short Answer : Both are different implementations of CPU idle drivers. acpi_idle is the default driver, supports all CPU architectures, while intel_idle is Intel CPUs specific. More details :The API for a CPU idle driver is defined in include/linux/cpuidle.h. It defines the "generic framework for CPU idle power management". acpi_idle driver (defined in drivers/acpi/processor_idle.c) implements this behaviour for all CPU architectures. intel_idle (defined in drivers/idle/intel_idle.c) is an idle driver designed specifically for modern Intel CPUs (from the comments in the intel_idle.c header): /* * intel_idle.c - native hardware idle loop for modern Intel processors * ... /* * intel_idle is a cpuidle driver that loads on specific Intel processors * in lieu of the legacy ACPI processor_idle driver. The intent is to * make Linux more efficient on these processors, as intel_idle knows * more than ACPI, as well as make Linux more immune to ACPI BIOS bugs. */ So for modern Intel CPUs you should use the intel_idle driver since it is designed specifically for increasing Intel CPUs' efficiency. So why would some setups load with intel_idle and some with acpi_idle? This is what stated in the commit message introducing the intel_idle driver: commit 2671717265ae6e720a9ba5f13fbec3a718983b65 Author: Len Brown Date: Mon Mar 8 14:07:30 2010 -0500 intel_idle: native hardware cpuidle driver for latest Intel processors This EXPERIMENTAL driver supersedes acpi_idle on Intel Atom Processors, Intel Core i3/i5/i7 Processors and associated Intel Xeon processors. It does not support the Intel Core2 processor or earlier. For kernels configured with ACPI, CONFIG_INTEL_IDLE=y allows intel_idle to probe before the ACPI processor driver. Booting with "intel_idle.max_cstate=0" disables intel_idle and the system will fall back on ACPI's "acpi_idle". Typical Linux distributions load ACPI processor module early, making CONFIG_INTEL_IDLE=m not easily useful on ACPI platforms. intel_idle probes all processors at module_init time. Processors that are hot-added later will be limited to using C1 in idle. Signed-off-by: Len Brown So the reasons are: Non-Intel CPU on the system or older Intel architectures. Not marked CONFIG_INTEL_IDLE=y in .config Booting with intel_idle.max_cstate=0 in cmdline Since you said you set #3 on both setups the question is why one of them loaded with intel_idle. Try 'cat /proc/cmdline' and make sure the option is really set. Also, check the differences between the architectures with 'lscpu' or 'cat /proc/cpuinfo' | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/405775",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/229275/"
]
} |
405,783 | We've noticed that some of our automatic tests fail when they run at 00:30 but work fine the rest of the day. They fail with the message gimme gimme gimme in stderr , which wasn't expected. Why are we getting this output? | Dear @colmmacuait , I think that if you type "man" at 0001 hours it should print "gimme gimme gimme". #abba @marnanel - 3 November 2011 er, that was my fault, I suggested it. Sorry. Pretty much the whole story is in the commit. The maintainer of man is a good friend of mine, and one day six years ago I jokingly said to him that if you invoke man after midnight it should print " gimme gimme gimme ", because of the Abba song called " Gimme gimme gimme a man after midnight ": Well, he did actually put it in . A few people were amused to discover it, and we mostly forgot about it until today. I can't speak for Col , obviously, but I didn't expect this to ever cause any problems: what sort of test would break on parsing the output of man with no page specified? I suppose I shouldn't be surprised that one turned up eventually, but it did take six years. (The commit message calls me Thomas, which is my legal first name though I don't use it online much.) This issue has been fixed with commit 84bde8 : Running man with man -w will no longer trigger this easter egg. | {
"score": 11,
"source": [
"https://unix.stackexchange.com/questions/405783",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/173916/"
]
} |
405,798 | I need to find the largest file in the current directory and its subdirectories. I tried ls -Rlh | awk '{print $3 " " $5 " " $9}' but do not know if it is ok, how to sort and select the largest file. | GNU find + sort + head solution (for any directory depth level), assuming file paths don't contain newline characters:
find . -type f -printf "%s %p\n" | sort -nr | head -1
%s - format specifier for the file size (in bytes)
%p - format specifier for the file name
sort -nr - sort records numerically in reverse order
head -1 - print only the first (top) line/record
To get a human-readable file size value - extend the pipeline with the GNU numfmt command (if supported):
find . -type f -printf "%s %p\n" | sort -nr | head -1 | numfmt --to=si | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/405798",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/261219/"
]
} |
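To drop the "no newlines in file names" assumption stated in the answer of record 405,798 above, the same pipeline can run NUL-terminated on reasonably recent GNU tools (sort -z, head -z) — a sketch:

    find . -type f -printf '%s %p\0' | sort -znr | head -zn1 | tr '\0' '\n'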
405,840 | I would like to change the line "disable = yes" to "disable = no" into the following file : [root@centos2 ~]# cat /etc/xinetd.d/tftpservice tftp{ ... server_args = -s /var/lib/tftpboot disable = yes per_source = 11 ...} I tried this : [root@centos2 ~]# grep 'disable = yes' /etc/xinetd.d/tftp[root@centos2 ~]# by just copying the space with my mouse but it doesn't grep anything... Why and how can I know what are the elements between "disable" and "=" ? Is it several spaces? tabulations? I know I can grep using the following regex : [root@centos2 xinetd.d]# grep -E 'disable.+= yes' /etc/xinetd.d/tftp disable = yes[root@centos2 xinetd.d]# And finaly, is there a better way of replacing "yes" by "no" using sed than the following : [root@centos2 xinetd.d]# sed -r 's/disable.+= yes/disable = no/g' /etc/xinetd.d/tftpservice tftp{ ... server_args = -s /var/lib/tftpboot disable = no per_source = 11 ...} EDIT : Result of the od command thanks @ilkkachu [root@centos2 xinetd.d]# < /etc/xinetd.d/tftp grep disable | od -c0000000 \t d i s a b l e0000020 = y e s \n0000037 | The spaces are more commonly known as "whitespace", and can include not just spaces but tabs (and other "blank" characters). In a regular expression you can often refer to these either with [[:space:]] or \s (depending on the RE engine) which includes both horizontal (space, tab and some unicode spacing characters of various width if available) for which you can also use [[:blank:]] and sometimes \h and vertical spacing characters (like line feed, form feed, vertical tab or carriage return). [[:space:]] is sometimes used in place of [[:blank:]] for its covering of the spurious carriage return character in Microsoft text files. You cannot replace with grep - it's just a searching tool. Instead, to replace the yes with no you can use a command like this: sed '/disable\>/s/\<yes\>/no/' /etc/xinetd.d/tftp This tells sed to substitute (change) the word yes into no on any line that contains the word disable . (The \> (initially a ex / vi regexp operator), in some sed implementations, forces an end-of-word (though beware it's not whitespace-delimited-words , it would also match on disable-option )). Conveniently this sidesteps the issue of whitespace altogether. Be careful: with a line such as eyes yes , an unbounded yes substitution would apply to the first instance of yes and leave you with eno yes . That's why I have used \<yes\> instead of just yes . | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/405840",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/103808/"
]
} |
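To apply the substitution from the answer above directly to the file, a sketch using GNU sed's in-place editing (the .bak backup suffix is an arbitrary choice):

sed -i.bak '/disable\>/s/\<yes\>/no/' /etc/xinetd.d/tftp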
405,916 | I have Windows 10 HOME installed on my system. After I installed Windows 10 HOME, I installed Ubuntu 17.10 on a separate partition so that I could dual boot. I removed Ubuntu 17.10 by deleting the partition it was installed on. Now I am unable to start my system. At boot, my system stops at the GRUB command line. I want to boot to my Windows 10 installation, which I haven't removed from my system. This is displayed at startup: GNU GRUB version 2.02~beta3-4ubuntu7 Minimal BASH-like line editing is supported. For the first word, TAB lists possible command completions. Anywhere else TAB lists possible device or file completions. grub> How can I boot my Windows partition from this grub command? Laptop: Toshiba Satellite C55 - C5241 | GRUB uses the contents of /boot/grub/ located on your Linux partition to boot your system normally. Because of this, the bare GRUB prompt has very minimal functionality. If you are on a Legacy BIOS system you're out of luck and you'll need a Windows disk for boot repair (this is because GRUB can't load its NTFS driver, since you deleted it). If you have a UEFI system, which is most likely, then you can still load Windows pretty easily. First type: chainloader +1 If this says unknown command you're out of luck, because GRUB didn't embed this command, so you must have deleted it. If it reboots back to the grub prompt then you have a legacy BIOS and you're out of luck. If it says invalid EFI path then you should be able to proceed. Type: ls (hd0,gpt1)/ This should return "/efi". Now do: chainloader (hd0,gpt1)/EFI/Microsoft/Boot/bootmgfw.efi followed by: boot | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/405916",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/261540/"
]
} |
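If (hd0,gpt1) in the answer above does not turn out to be the EFI system partition on a particular machine, it can be located from the same prompt before chainloading (a sketch; the partition numbers are placeholders):

grub> ls
grub> ls (hd0,gpt2)/EFI/Microsoft/Boot/
grub> chainloader (hd0,gpt2)/EFI/Microsoft/Boot/bootmgfw.efi
grub> boot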
405,991 | I'm working on custom Bash scripts for mass duplication of USB flash memory and mass testing (using f3 ). I wonder if it's possible to identify what USB port is a pendrive plugged into. I have USB hubs with numbered ports. If the have some static addresses that I could identify and know if something is plugged into them or not, and what is that (essentially: which /dev/sd* file corresponds to that USB port) I could make it much easier for the users to know what is going on and allow them to remove bad drives early in the process, without waiting till the whole batch is processed and then try to sort out the bad drives from the good ones (this is how I do it now). I tried searching around but nothing I found seemed to mesh together with what I want to achieve so I decided to ask directly for help in this context. Rigth now I identify drives by /dev/sd* node names, and the users have no idea what is that. If I could map these to USB ports in a hub, I could present the information based on the USB ports and users could know that port 5 has a bad drive plugged in and they can remove that without interfering with the rest of the process taking place. I could then even stop doing this in batches and make all ports work simultaneously in loop, the user could plug the drives in and out all the time, keeping track on what is what by the HUB port numbers, it could greatly seed up the workflow. So the basic question: how can I identify USB ports and USB flash memory plugged into these ports? | You can use udevadm to get the device path of some device. This is done by examining the symlinks in /sys/ , so you could also do this manually (but it's easier to use udevadm ). For example, an USB stick plugged into an external USB hub on my system produces $ udevadm info -q path -n /dev/sdh/devices/pci0000:00/0000:00:1d.0/usb3/3-1/3-1.1/3-1.1.3/3-1.1.3.2/3-1.1.3.2:1.0/host7/target7:0:0/7:0:0:0/block/sdh As one can see by comparing with the USB tree, $ lsusb -t/: Bus 04.Port 1: Dev 1, Class=root_hub, Driver=xhci_hcd/2p, 5000M/: Bus 03.Port 1: Dev 1, Class=root_hub, Driver=ehci-pci/2p, 480M |__ Port 1: Dev 2, If 0, Class=Hub, Driver=hub/8p, 480M |__ Port 1: Dev 26, If 0, Class=Hub, Driver=hub/4p, 480M |__ Port 3: Dev 29, If 0, Class=Hub, Driver=hub/4p, 480M |__ Port 2: Dev 31, If 0, Class=Mass Storage, Driver=usb-storage, 480M |__ Port 4: Dev 30, If 0, Class=Mass Storage, Driver=usb-storage, 480M... the part 3-1.1.3.2 of the path says that on bus 3, it goes through port 1 (on southbridge), again port 1 (on motherboard), port 3 (still on motherboard), and then on port 2 of the external USB hub. Port 4 of this hub is used for an SD card reader. So depending on how your USB hub is connected, you need to do something similar, and extract the last port you are interested in. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/405991",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/67203/"
]
} |
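A minimal sketch building on the udevadm answer above: print every /dev/sd* block device together with its sysfs path, so the USB hub port segment (e.g. 3-1.1.3.2) can be matched to a device node. It assumes the flash drives show up as /dev/sd? nodes:

for dev in /dev/sd?; do
    [ -b "$dev" ] || continue
    printf '%s -> %s\n' "$dev" "$(udevadm info -q path -n "$dev")"
done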
405,992 | How to list all the .txt files (pipe delimited file) and the number of columns of each file in a directory? | find . -name '*.txt' -type f -size +0 -exec awk -F '|' ' FNR == 1 {print FILENAME ": " NF; nextfile}' {} + Would print something like ./dir/foo.txt: 2 for each regular non-empty file whose name ends in .txt where "2" is the number of | -separated fields in the first line of the file. Note that nextfile is not available in all awk implementations, but in those where it's not, it should be harmless (just less efficient as awk would read the files fully). If you wanted to consider only the files that have the same number of columns in all their non-empty lines, with GNU awk : find . -name '*.txt' -type f -size +0 -exec awk -F '|' ' BEGINFILE {n = 0} NF { if (n && NF != n) { print "skipping "FILENAME" ("NF" != "n")" > "/dev/stderr" n = 0; nextfile } n = NF } ENDFILE {if (n) print FILENAME ": " n}' {} + | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/405992",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/261036/"
]
} |
406,012 | I want to do it as efficient as possible in case there will be a lot of files.What I want is rename all the files I found and remove their suffix. For example: [/tmp] $ ls -la.logb.logc.tmp[/tmp] $ find /tmp -name "*.log" -type f -exec mv {} {%.*} \;[/tmp] $ ls -labc.tmp This doesn't work. If it was a normal bash variable ${var%.*} would have returned var until the last . . | Start a shell to use shell parameter expansion operators: find ~/tmp -name '*.log' -type f -exec sh -c ' for file do mv -i -- "$file" "${file%.*}" done' sh {} + Note that you don't want to do that on /tmp or any directory writable by others as that would allow malicious users to make you rename arbitrary .log files on the file system¹ (or move files into any directory²). With some find and mv implementations, you can use find -execdir and mv -T to make it safer: find /tmp -name '*.log' -type f -execdir sh -c ' for file do mv -Ti -- "$file" "${file%.*}" done' sh {} + Or use rename (the perl variant) that would just do a rename() system call so not attempt to move files to other filesystems or into directories... find /tmp -name '*.log' -type f -execdir rename 's/\.log$//' {} + Or do the whole thing in perl : perl -MFile::Find -le ' find( sub { if (/\.log\z/) { $old = $_; s/\.log\z//; rename($old, $_) or warn "rename $old->$_: $!\n" } }, @ARGV)' ~/tmp But note that perl 's Find::File (contrary to GNU find ) doesn't do a safe directory traversal³, so that's not something you would like to do on /tmp either. Notes. ¹ an attacker can create a /tmp/. /auth.log file, and in between find finding it and mv moving it (and that window can easily be made arbitrarily large) replace the "/tmp/. " directory with a symlink to /var/log resulting in /var/log/auth.log being renamed to /var/log/auth ² A lot worse, an attacker can create a /tmp/foo.log malicious crontab for example and /tmp/foo a symlink to /etc/cron.d and make you move that crontab into /etc/cron.d . That's the ambiguity with mv (also applies to cp and ln at least) that can be both move to and move into . GNU mv fixes it with its -t ( into ) and -T ( to ) options. ³ File::Find traverses the directory by doing chdir("/tmp"); read content; chdir("foo") ...; chdir("bar"); chdir("../..")... . So someone can create a /tmp/foo/bar directory and at the right moment, rename it to /tmp/bar so the chdir("../..") would land you in / . | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/406012",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/42081/"
]
} |
406,026 | I'd like to manage all email incoming to *@example.com with a Python script running on my server, which will do various jobs. I've already done a DNS MX record for example.com , directing it my server: mx.example.com MX mailforwarder.example.commailforwarder.example.com A 1.2.3.4 I've done lots of trials and errors with both postfix and exim, and nothing was working, so I removed all of them: apt-get remove postfix and apt-get remove exim4 , so I'm ready to start with a fresh install of one of them (which one would allow the shortest solution for this specific task?) What are the main steps to direct all incoming email *@example.com to a Python script? (including: telling the MTA to accept emails coming from outside of the server, from whole internet, etc.) | procmail is considered problematical by Philip Guenther (and is quite possibly useless in this case, as .forward files or equivalent can send the mails directly to your program, skipping the thus needless complexity of procmail ). Executive summary: delete the procmail port; the code is not safe and should not be used as a basis for any further work. As people may know, I was the upstream maintainer of procmail back in the late 1990's though 2001. So some other solution may be advisable; this depends on the Mail Transport Agent (MTA). Another option would be to use the MTA to deliver to a local file or IMAP, then have your program parse that file or IMAP. This has the bonus of continuing to accept email and saving it somewhere; what happens when your program is buggy or otherwise fails to run? Less of a problem than during live mail delivery... Exim Probably either copy the Sendmail .forward method or figure out how to do this properly in Eximese. (I aborted as it was taking to much time to dig through the Exim docs.) There is elspy if you want to do at-SMTP-time scanning in a MILTER fashion... Postfix https://serverfault.com/questions/258469/how-to-configure-postfix-to-pipe-all-incoming-email-to-a-script#258491 Gosh that seems long and complicated. Sendmail Set a mailertable entry to forward all mails for the domain (and .domain for subdomains, if necessary) to a local user, here jdoe example.com local:jdoe.example.com local:jdoe and then setup a .forward file for that user to run the necessary code $ cat ~jdoe/.forward"|/etc/smrsh/process"$ which could be as simple as $ cat /etc/smrsh/process#!/bin/shcat >> /home/jdoe/allmails$ because the emails are fed in on standard input (this might be bad if multiple instances of this process run at once; presumably your actual code handles such race conditions or is otherwise idempotent...right?). This method may also work for any other MTA that copies Sendmail's forward syntax, assuming you can get the MTA to redirect all mails to a particular user. This assumes mailertable support is enabled, confFORWARD_PATH is set, that Sendmail is allowed to run the code (see smrsh(8) though note that some vendors may change the directory without updating the documentation (running strings /the/path/to/smrsh | fgrep / may help)) and that something like selinux isn't also breaking things. Another option for Sendmail is to use a MILTER such as MIMEDefang and perform whatever business logic is necessary there. (Various other MTA support MILTER, or have something like it.) Simple Mail Transfer Protocol daemon (OpenBSD) From a look at smtpd.conf(5) (updated for OpenBSD 6.4 changes) action "mymda" mda "/path/to/your/command" user jdoematch from any for domain example.com action "mymda" | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/406026",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/59989/"
]
} |
406,040 | I am using CentOS as my server OS, and on my desktop I have a directory called "john"; I need to delete it via a command. How can I do this? | You can use rm -f -r john It will recursively delete the john directory even if it contains files or subdirectories. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/406040",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/259075/"
]
} |
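A slightly more cautious sketch of the same removal, assuming the directory lives at ~/Desktop/john (the path is a guess) and GNU rm:

ls -ld ~/Desktop/john        # confirm it is the intended directory
rm -rI ~/Desktop/john        # -I prompts once before recursive removal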
406,101 | I often have code where I format by making long AND/OR statements. For example: # Get wifi router gateway gateway=$(cat $leases \ | grep -A 5 -m 1 $wifi \ | grep option\ routers \ | cut -d' ' -f5 \ | tr --delete \;) Sometimes a single step in a command like the above can be complex, so I end up wanting to comment it. For example, say the cut command was more complicated than it really is here. So I want to do something like: # Get wifi router gateway gateway=$(cat $leases \ | grep -A 5 -m 1 $wifi \ | grep option\ routers \ # Here is a note | cut -d' ' -f5 \ | tr --delete \;) I realize this is invalid syntax. But I'm curious to see if anyone else has some strategies for commenting long command chains? | This seems to work in Bash, dash, etc.: #!/bin/sh seq 20 | # make a long list grep '[234]' # but only take part of it Similarly with && or || in place of the pipe, and also inside $( ... ) . | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/406101",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/16792/"
]
} |
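Applying the pattern from the answer above to the pipeline in the question (a sketch; the field number and the variable names are taken verbatim from the question):

# Get wifi router gateway
gateway=$(
    cat "$leases" |
    grep -A 5 -m 1 "$wifi" |     # keep only the first matching lease block
    grep 'option routers' |      # the line naming the gateway
    cut -d' ' -f5 |              # field position assumed from the question
    tr -d ';'                    # strip the trailing semicolon
)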
406,107 | How do I list all connected (including unmounted) devices on OpenBSD? I'm looking for something similar to lsblk for Linux or camcontrol devlist for FreeBSD: # List devices on FreeBSD$ camcontrol devlist<VBOX HARDDISK 1.0> at scbus0 target 0 lun 0 (ada0,pass0)<VBOX CD-ROM 1.0> at scbus1 target 0 lun 0 (pass1,cd0)# List devices on Linux$ lsblkNAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT/dev/sda 8:0 0 465.8G 0 disk ├─/dev/sda1 8:1 0 1007K 0 part ├─/dev/sda2 8:2 0 256M 0 part /boot├─/dev/sda3 8:3 0 9.8G 0 part [SWAP]├─/dev/sda4 8:4 0 29.3G 0 part /├─/dev/sda5 8:5 0 29.3G 0 part /var├─/dev/sda6 8:6 0 297.6G 0 part /home└─/dev/sda9 8:9 0 16.3G 0 part /dev/sr0 11:0 1 1024M 0 rom None of these commands seem to exist or be available in the (default) repos for OpenBSD. Not even pciinfo , kldstat , or geom are available. | The sysctl command can list devices attached to the system. sysctl gets or sets kernel state. To list how many disks you have: sysctl hw.diskcount To list disk names: sysctl hw.disknames Or sysctl -a | grep -i disk | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/406107",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/43029/"
]
} |
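Once a name from hw.disknames is known, the disk can be inspected further even while unmounted (a sketch; sd0 is a placeholder device):

sysctl -n hw.disknames
disklabel sd0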
406,109 | Good morning, this is extremely similar to the question Grep From the Last Occurrence of a Pattern to Another Pattern (several months old), while adding a little more detail. I am trying to write a UNIX script for a file with multiple duplicate patterns, followed by the pattern I am looking for. However I do not have 'tac' or 'tail -r' (using the UNIX emulator, MKS Toolkit), and am looking to return the last occurrence of Pattern1 before Pattern2, followed by the data in between Pattern1 and Pattern2, and then Pattern2 also. The Patterns in this case would be 'Condition 1' and 'Condition 2': output.out: ...Condition 1: Adata1Condition 1: Bdata2Condition 2: Cdata3Condition 1: Ddata4Condition 1: Edata5Condition 2: F... I'd like to write an awk (or sed, but figured awk would be the right tool) script to return: Condition 1: Bdata2Condition 2: CCondition 1: Edata5Condition 2: F I figure it's some form of the line below, but I can't get the syntax right: awk '/Condition 1/ {acc = $0;} /,/Condition 2/ {print ?}' output.out Working the '/,/' is where I seem to be having hangups. Was wondering if anyone had any advice, would be much appreciated. Many thanks for any help and time related to this question. | Try: $ awk 'f{a=a"\n"$0} /Condition 1/{a=$0; f=1} f && /Condition 2/{print a; f=0}' output.out Condition 1: Bdata2Condition 2: CCondition 1: Edata5Condition 2: F How it works f{a=a"\n"$0} If the variable f is true (nonzero), then append the current line onto the end of variable a . /Condition 1/{a=$0; f=1} If the current line contains Condition 1 , then set s to the current line and set variable f to 1. f && /Condition 2/{print a; f=0} If f is true and the current line contains Condition 2 , then print variable a and set f back to zero. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/406109",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/206455/"
]
} |
406,137 | When I run top, it shows CPU 0-7. When I do: cat /proc/cpuinfo | grep "cpu cores" | uniq I get: cpu cores : 4 If I grep "physical id" I have 1. I am thinking my command is wrong and top is right. This is not a VM; it is a physical server running RedHat. What am I doing wrong? I am not sure these answer it: How to know number of cores of a system in Linux? Number of processors in /proc/cpuinfo Edit: Am I correct that if "physical id" only shows 1, then I have one physical chip on the motherboard? Edit: It is an Intel(R) Xeon(R) CPU X5560 @ 2.80GHz, but the physical id is 1; I thought it would be 0, but there is no physical id 0 in cpuinfo. Edit: If it matters, I am trying to figure out licensing where they apply a .5 factor to the core count. | Which CPU are you using, and how many threads does it run per physical core? The "cpu cores" line in /proc/cpuinfo shows the number of physical cores, whereas top shows the total number of hardware threads (logical CPUs). Your CPU most likely has 4 physical cores with 2 threads per core, which is why top shows 8. Moreover, the contents of /proc/cpuinfo are somewhat implementation dependent; in a rooted Android shell, for example, the cpuinfo file doesn't contain a "cpu cores" field at all. However, in cpuinfo each hardware thread is listed as "processor : X", where X is the thread number, so the highest processor number matches the top/htop output. The result of nproc --all should also be consistent with top/htop. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/406137",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/50420/"
]
} |
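If util-linux's lscpu is available on the server, the socket/core/thread breakdown can be read off directly (a sketch; the grep just trims the output):

lscpu | grep -E '^(CPU\(s\)|Thread\(s\) per core|Core\(s\) per socket|Socket\(s\))'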
406,197 | I am running a dual boot of Windows and Debian on my Laptop. I use Linux mostly but from time to time I need to access my files in my Windows partition. My Windows partition is mounted as follows at startup. >cat /etc/fstab |grep Win7LABEL=Windows7_OS /mnt/Win7 auto nosuid,nodev,nofail,x-gvfs-show 0 0 Basically every file in the Windows partition is owned by root:root and has a 777 permission. Then whenever I mv a file to my working (Linux) partition, I have a 777 file under my partition, owned by me (while cp in terminal will give a 755 file but if done via gnome will save the file with a 777 permission). Is this the best practice to mount a partition? Or should I mount it such that instead of root, I am the owner of all files/dirs and somehow be able to set all dirs to 755 and files to 644 when the mount happens at boot? If so, how can it be done? | You can use fmask and dmask mount options * to change the permission mapping on an ntfs filesystem. To make files appear rw-r--r-- (644) and directories rwxr-xr-x (755) use fmask=0133,dmask=0022 . You can combine this with uid= and gid= options to select the file owner and group if you need write access for your user. * fmask and dmask seem to work for the kernel (read-only) driver as well, even that they are not documented in mount man page . They are documented options for ntfs-3g. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/406197",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/260154/"
]
} |
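A sketch of how the fstab line from the question might look with these options; the ntfs-3g filesystem type and uid/gid 1000 (the first regular user) are assumptions:

LABEL=Windows7_OS /mnt/Win7 ntfs-3g nosuid,nodev,nofail,x-gvfs-show,uid=1000,gid=1000,fmask=0133,dmask=0022 0 0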
406,216 | I have a function and would like to use the pipefail option inside. But I don't want to simply set -o pipefail , because I fear that another part of the script may not expect pipefail to be set. Of course I could do set +o pipefail afterwards, but in case pipefail was set outside of my function, this may introduce unexpected behavior as well. Of course I could use the exit code of false | true as a measure, if pipefail is set, but this seems a little bit dirty. Is there a more general (and maybe canonical?) way to check the set bash options? | The $SHELLOPTS variable holds all set options separated by colons. As an example, it may look like: braceexpand:emacs:hashall:histexpand:history:interactive-comments:monitor To check for a given option one may do the following: # check if pipefail is set, if not set it and remember thisif [[ ! "$SHELLOPTS" =~ "pipefail" ]]; then set -o pipefail; PIPEFAIL=0;else PIPEFAIL=1;fi# do something here# reset pipefail, if it was not set beforeif [ "$PIPEFAIL" -eq 0 ]; then set +o pipefail;fi | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/406216",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/260671/"
]
} |
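An alternative sketch for bash that avoids parsing $SHELLOPTS: [[ -o pipefail ]] tests the option directly, and the same save-and-restore shape as the answer still applies:

if [[ -o pipefail ]]; then had_pipefail=1; else had_pipefail=0; fi
set -o pipefail
# ... pipeline work here ...
if [[ $had_pipefail -eq 0 ]]; then set +o pipefail; fi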
406,245 | How do we allow certain set of Private IPs to enter through SSH login(RSA key pair) into Linux Server? | You can limit which hosts can connect by configuring TCP wrappers or filtering network traffic (firewalling) using iptables . If you want to use different authentication methods depending on the client IP address, configure SSH daemon instead (option 3). Option 1: Filtering with IPTABLES Iptables rules are evaluated in order, until first match. For example, to allow traffic from 192.168.0.0/24 network and otherwise drop the traffic (to port 22). The DROP rule is not required if your iptables default policy is configured to DROP . iptables -A INPUT -p tcp --dport 22 --source 192.168.0.0/24 -j ACCEPTiptables -A INPUT -p tcp --dport 22 -j DROP You can add more rules before the drop rule to match more networks/hosts. If you have a lot of networks or host addresses, you should use ipset module. There is also iprange module which allows using any arbitrary range of IP addresses. Iptables are not persistent across reboots. You need to configure some mechanism to restore iptables on boot. iptables apply only to IPv4 traffic. Systems which have ssh listening to IPv6 address the necessary configuration can be done with ip6tables . Option 2: Using TCP wrappers Note: this might not be an option on modern distributions, as support for tcpwrappers was removed from OpenSSH 6.7 You can also configure which hosts can connect using TCP wrappers. With TCP wrappers, in addition to IP addresses you can also use hostnames in rules. By default, deny all hosts. /etc/hosts.deny : sshd : ALL Then list allowed hosts in hosts.allow. For example to allow network 192.168.0.0/24 and localhost . /etc/hosts.allow : sshd : 192.168.0.0/24sshd : 127.0.0.1sshd : [::1] Option 3: SSH daemon configuration You can configure ssh daemon in sshd_config to use different authentication method depending on the client address/hostname. If you only want to block other hosts from connecting, you should use iptables or TCP wrappers instead. First remove default authentication methods: PasswordAuthentication noPubkeyAuthentication no Then add desired authentication methods after a Match Address in the end of the file. Placing Match in the end of the file is important, since all the configuration lines after it are placed inside the conditional block until the next Match line. For example: Match Address 127.0.0.* PubkeyAuthentication yes Other clients are still able to connect, but logins will fail because there is no available authentication methods. Match arguments and allowed conditional configuration options are documented in sshd_config man page . Match patterns are documented in ssh_config man page . | {
"score": 7,
"source": [
"https://unix.stackexchange.com/questions/406245",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/261580/"
]
} |
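If the list of allowed private addresses is long, the ipset route mentioned in option 1 of the answer above might look like this (a sketch; the set name and networks are placeholders):

ipset create ssh_allowed hash:net
ipset add ssh_allowed 192.168.0.0/24
ipset add ssh_allowed 10.0.0.0/8
iptables -A INPUT -p tcp --dport 22 -m set --match-set ssh_allowed src -j ACCEPT
iptables -A INPUT -p tcp --dport 22 -j DROP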
406,247 | I have this situation where there's a lot of files with similar names (but they all follow a pattern) in different subfolders file1file1 (Copy)/folder1/file2.txt/folder1/file2 (Copy).txt/folder1/file3.png/folder1/file3 (Copy).png Each file is in the same folder of its copy and has the same extension, the difference is that it has (Copy) at the end of the name I want to get all these files and delete the oldest one, then eventually rename the file from, for example, file1 (Copy) to file1 (that is, remove the (Copy) suffix) if it needs to be renamed. I was thinking of using find and mv but I'm not sure how to tell it to move the most recent one. | You can limit which hosts can connect by configuring TCP wrappers or filtering network traffic (firewalling) using iptables . If you want to use different authentication methods depending on the client IP address, configure SSH daemon instead (option 3). Option 1: Filtering with IPTABLES Iptables rules are evaluated in order, until first match. For example, to allow traffic from 192.168.0.0/24 network and otherwise drop the traffic (to port 22). The DROP rule is not required if your iptables default policy is configured to DROP . iptables -A INPUT -p tcp --dport 22 --source 192.168.0.0/24 -j ACCEPTiptables -A INPUT -p tcp --dport 22 -j DROP You can add more rules before the drop rule to match more networks/hosts. If you have a lot of networks or host addresses, you should use ipset module. There is also iprange module which allows using any arbitrary range of IP addresses. Iptables are not persistent across reboots. You need to configure some mechanism to restore iptables on boot. iptables apply only to IPv4 traffic. Systems which have ssh listening to IPv6 address the necessary configuration can be done with ip6tables . Option 2: Using TCP wrappers Note: this might not be an option on modern distributions, as support for tcpwrappers was removed from OpenSSH 6.7 You can also configure which hosts can connect using TCP wrappers. With TCP wrappers, in addition to IP addresses you can also use hostnames in rules. By default, deny all hosts. /etc/hosts.deny : sshd : ALL Then list allowed hosts in hosts.allow. For example to allow network 192.168.0.0/24 and localhost . /etc/hosts.allow : sshd : 192.168.0.0/24sshd : 127.0.0.1sshd : [::1] Option 3: SSH daemon configuration You can configure ssh daemon in sshd_config to use different authentication method depending on the client address/hostname. If you only want to block other hosts from connecting, you should use iptables or TCP wrappers instead. First remove default authentication methods: PasswordAuthentication noPubkeyAuthentication no Then add desired authentication methods after a Match Address in the end of the file. Placing Match in the end of the file is important, since all the configuration lines after it are placed inside the conditional block until the next Match line. For example: Match Address 127.0.0.* PubkeyAuthentication yes Other clients are still able to connect, but logins will fail because there is no available authentication methods. Match arguments and allowed conditional configuration options are documented in sshd_config man page . Match patterns are documented in ssh_config man page . | {
"score": 7,
"source": [
"https://unix.stackexchange.com/questions/406247",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/262038/"
]
} |
406,256 | In exploring a hung umount , I bumped into /run/mount/utab in some strace output. What is the purpose of /run/mount/utab ? Where can I read more about /run/mount/utab : purpose format what interacts with it (and how) | You can limit which hosts can connect by configuring TCP wrappers or filtering network traffic (firewalling) using iptables . If you want to use different authentication methods depending on the client IP address, configure SSH daemon instead (option 3). Option 1: Filtering with IPTABLES Iptables rules are evaluated in order, until first match. For example, to allow traffic from 192.168.0.0/24 network and otherwise drop the traffic (to port 22). The DROP rule is not required if your iptables default policy is configured to DROP . iptables -A INPUT -p tcp --dport 22 --source 192.168.0.0/24 -j ACCEPTiptables -A INPUT -p tcp --dport 22 -j DROP You can add more rules before the drop rule to match more networks/hosts. If you have a lot of networks or host addresses, you should use ipset module. There is also iprange module which allows using any arbitrary range of IP addresses. Iptables are not persistent across reboots. You need to configure some mechanism to restore iptables on boot. iptables apply only to IPv4 traffic. Systems which have ssh listening to IPv6 address the necessary configuration can be done with ip6tables . Option 2: Using TCP wrappers Note: this might not be an option on modern distributions, as support for tcpwrappers was removed from OpenSSH 6.7 You can also configure which hosts can connect using TCP wrappers. With TCP wrappers, in addition to IP addresses you can also use hostnames in rules. By default, deny all hosts. /etc/hosts.deny : sshd : ALL Then list allowed hosts in hosts.allow. For example to allow network 192.168.0.0/24 and localhost . /etc/hosts.allow : sshd : 192.168.0.0/24sshd : 127.0.0.1sshd : [::1] Option 3: SSH daemon configuration You can configure ssh daemon in sshd_config to use different authentication method depending on the client address/hostname. If you only want to block other hosts from connecting, you should use iptables or TCP wrappers instead. First remove default authentication methods: PasswordAuthentication noPubkeyAuthentication no Then add desired authentication methods after a Match Address in the end of the file. Placing Match in the end of the file is important, since all the configuration lines after it are placed inside the conditional block until the next Match line. For example: Match Address 127.0.0.* PubkeyAuthentication yes Other clients are still able to connect, but logins will fail because there is no available authentication methods. Match arguments and allowed conditional configuration options are documented in sshd_config man page . Match patterns are documented in ssh_config man page . | {
"score": 7,
"source": [
"https://unix.stackexchange.com/questions/406256",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/143394/"
]
} |
406,276 | input > a='Vikas' > echo $a Vikas my required output is echo $a | <some command>vIKAS | $ echo Vikas | LC_ALL=C tr a-zA-Z A-Za-zvIKAS The utility tr translates characters; it takes two arguments representing sets of characters; it then copies standard input to standard output replacing each character found in the first set with the corresponding character in the second set. In this application, it replaces lowercase letters with uppercase letters and vice-versa. See the manual page of tr(1) for details and for other processing which tr can perform. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/406276",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/261666/"
]
} |
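To store the swapped-case result back in the variable rather than just printing it (a sketch):

a=$(printf '%s\n' "$a" | LC_ALL=C tr a-zA-Z A-Za-z)
echo "$a"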
406,334 | I'd have supposed this code would have printed " oof " first: echo foo | tee >(rev) | ( sleep 1 ; cat ; ) Output: foooof Increasing the sleep time doesn't change the order. Why doesn't that work? Note that other tools do work as supposed, e.g. : echo foo | pee rev 'sleep 1 ; cat' . | In echo foo | tee >(rev) | (sleep 1; cat) In bash like in ksh , but unlike zsh , the stdout of rev also is the pipe to (sleep 1; cat) . echo , tee , rev and the (...) subshell are started at the same time, but tee writes foo\n to stdout before the pipe to rev , so in any case, rev will write oof to the pipe after tee writes foo , so oof can only come last. Delaying cat has no incidence. If you wanted the output of rev not to go through the pipe to (sleep 1; cat) , you'd use zsh or do: { echo foo 3>&- | tee >(rev >&3 3>&-) 3>&- | (sleep 1; cat) 3>&-; } 3>&1 Note that zsh also has a builtin tee in its multios feature, so you can do: echo foo > >(rev) > >(sleep 1; cat) However in: echo foo > >(rev) | (sleep 1; cat) The output of rev would go through cat (confusingly considering it doesn't in the echo foo >(echo bar) | (sleep 1; cat) case). | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/406334",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/165517/"
]
} |
406,382 | I have a text file with empty lines separating blocks of text. I would like to use *NIX command-line tools to shuffle this file while respecting the block structure. In other words, in the output I would like to see the changed order of blocks; the lines and their order inside the block remain the same. Input file example: line 1line 2line 10line 20line 30line 100line 200 The output file (after shuffle): line 10line 20line 30line 1line 2line 100line 200 Of course, running repeatedly should give different order of blocks. The first line of the file is always non-empty. There are no double blank lines. The last line of the file is always empty. I wrote a very simple Python script that reads all lines in a list of lists, shuffles it and outputs. I am curious whether I could do it with standard *NIX tools. | POSIXly, you could do something like: <file awk ' BEGIN{srand(); n=rand()} {print n, NR, $0} !NF {n=rand()} END {if (NF) print n, NR+1, ""}' | sort -nk1 -k2 | cut -d' ' -f3- That is, prefix each line with <a-random-number-that-changes-with-each-paragraph> then the line number, then sort numerically on the first number and then second to keep the line order in the paragraphs and remove those extra numbers. One may want to pipe to sed '$d' to remove the trailing blank line. Beware that with most awk implementations srand() uses the unix epoch time to seed the pseudo-random number generator, so you may get the same result if run twice in the same second (a historical bug now engraved in the POSIX spec, despite my efforts unfortunately ). | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/406382",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/104493/"
]
} |
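Where GNU shuf is available, an alternative sketch is to fold each block onto a single line, shuffle, and unfold again; it assumes the byte \001 never occurs in the input:

SEP=$(printf '\001')
awk -v RS= -v sep="$SEP" '{gsub(/\n/, sep); print}' file |
shuf |
awk -v sep="$SEP" '{gsub(sep, "\n"); print $0 "\n"}'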
406,410 | I found this Q/A with the solution to print all the keys in an object: jq -r 'keys[] as $k | "\($k), \(.[$k] | .ip)"' In my case I want to perform the above but on a sub-object: jq -r '.connections keys[] as $k | "\($k), \(.[$k] | .ip)"' What is the proper syntax to do this? | Simply pipe to keys function: Sample input.json : { "connections": { "host1": { "ip": "10.1.2.3" }, "host2": { "ip": "10.1.2.2" }, "host3": { "ip": "10.1.18.1" } }} jq -r '.connections | keys[] as $k | "\($k), \(.[$k] | .ip)"' input.json The output: host1, 10.1.2.3host2, 10.1.2.2host3, 10.1.18.1 | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/406410",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/16792/"
]
} |
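An equivalent sketch using jq's to_entries, which avoids binding $k explicitly:

jq -r '.connections | to_entries[] | "\(.key), \(.value.ip)"' input.json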
406,442 | I would like to take the output of a bash command (in this case grep) and insert a blank line every other line: notlikethis ...butlikethis! Smart way to do it in bash? | Using pr from coreutils : $ grep '' file | pr -Tdnotlikethis ... (note that this does double-space the final line - unlike, say, sed '$!G' ). From man pr : -d, --double-space double space the output -T, --omit-pagination omit page headers and trailers, eliminate any pagination by form feeds set in input files | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/406442",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/36186/"
]
} |
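The sed alternative alluded to in the answer above (G appends an empty line after each input line; the $! address skips the last line so the output does not end with an extra blank):

grep 'pattern' file | sed '$!G'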
406,454 | If I export an image with lets say 300 DPI and I read out its meta-info with any application that can do it (like file , exiftool , identify , mediainfo etc.), I always get a value showing Image-Width and Image-Height. In this case: 2254 x 288 how do I get the 300 DPI value, or the corresponding value from any other image file? Since in my case the proportional value of Image-Width and Image-Height does not matter I want to be able to check the resolution of any image to be able to compile new images with the same quality independent of their proportion, since this varies on every file. For my workflow I'm especially interested in any command line solution, though any others are of course highly appreciated too. | You could use identify from imagemagick : identify -format '%x,%y\n' image.png Note however that in this case (a PNG image) identify will return the resolution in PPCM (pixels per centimeter) so to get PPI (pixels per inch) you need to add -units PixelsPerInch to your command (e.g. you could also use the fx operator to round value to integer): identify -units PixelsPerInch -format '%[fx:int(resolution.x)]\n' image.png There's also exiftool : exiftool -p '$XResolution,$YResolution' image.png though it assumes the image file has those tags defined . | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/406454",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/240990/"
]
} |
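A small sketch for checking every image in a directory at once (assumes ImageMagick's identify; note that ImageMagick 6 may append the unit name to %x/%y, while version 7 prints bare numbers):

for f in *.png *.jpg; do
    [ -e "$f" ] || continue
    printf '%s: ' "$f"
    identify -units PixelsPerInch -format '%x x %y\n' "$f"
done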
406,520 | I have compiled openvpn from source; running openvpn --version returns: OpenVPN 2.4.4 x86_64-unknown-linux-gnu [SSL (OpenSSL)] [LZO] [LZ4] [EPOLL] [MH/PKTINFO] [AEAD] built on Nov 19 2017 library versions: OpenSSL 1.0.2g 1 Mar 2016, LZO 2.08 And I created a /etc/openvpn/server.conf file with some basic settings. However, when I try to start it with sudo systemctl start openvpn@server it returns Failed to start openvpn@server.service: Unit openvpn@server.service not found. And sudo systemctl status openvpn returns: ● openvpn.service Loaded: masked (/dev/null; bad) Active: inactive (dead) since Sun 2017-11-19 14:21:06 HKT; 4 days ago Main PID: 1502 (code=exited, status=0/SUCCESS) Which makes me think that the openvpn service is not even registered. I have checked /lib/systemd/system/ ; it doesn't have an openvpn.service file, but /etc/systemd/system/ does. As I understand it, this is because I compiled from source instead of running apt-get install openvpn ? Can anyone suggest how I should add self-compiled openvpn as a service? First time compiling from source, so any advice/tips are much appreciated! EDIT 1: I can start the openvpn server and connect clients to it with (only the service doesn't seem to work): sudo openvpn /etc/openvpn/server.conf | Made it work by manually creating two files in /lib/systemd/system . The first one is openvpn.service : [Unit] Description=OpenVPN service After=network.target [Service] Type=oneshot RemainAfterExit=yes ExecStart=/bin/true ExecReload=/bin/true WorkingDirectory=/etc/openvpn [Install] WantedBy=multi-user.target and the second is openvpn@.service : [Unit] Description=OpenVPN connection to %i PartOf=openvpn.service ReloadPropagatedFrom=openvpn.service Before=systemd-user-sessions.service Documentation=man:openvpn(8) Documentation=https://community.openvpn.net/openvpn/wiki/Openvpn23ManPage Documentation=https://community.openvpn.net/openvpn/wiki/HOWTO [Service] PrivateTmp=true KillMode=mixed Type=forking ExecStart=/usr/local/sbin/openvpn --daemon ovpn-%i --status /run/openvpn/%i.status 10 --cd /etc/openvpn --script-security 2 --config /etc/openvpn/%i.conf --writepid /run/openvpn/%i.pid PIDFile=/run/openvpn/%i.pid ExecReload=/bin/kill -HUP $MAINPID WorkingDirectory=/etc/openvpn ProtectSystem=yes CapabilityBoundingSet=CAP_IPC_LOCK CAP_NET_ADMIN CAP_NET_BIND_SERVICE CAP_NET_RAW CAP_SETGID CAP_SETUID CAP_SYS_CHROOT CAP_DAC_READ_SEARCH CAP_AUDIT_WRITE LimitNPROC=10 DeviceAllow=/dev/null rw DeviceAllow=/dev/net/tun rw [Install] WantedBy=multi-user.target After creating them, do sudo systemctl daemon-reload to pick up the new files. Generally, the files are the same as if openvpn had been installed from the official repo; the only difference is that ExecStart=/usr/sbin/openvpn should be ExecStart=/usr/local/sbin/openvpn , pointing to the locally compiled OpenVPN. Edit: If you use OpenVPN 2.4+, remove PIDFile=/run/openvpn/%i.pid and --writepid /run/openvpn/%i.pid from the second file, as it prevents the server from starting on boot. Found it here | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/406520",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/262304/"
]
} |
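With the two unit files from the answer above in place, the instance for /etc/openvpn/server.conf would typically be activated like this (a sketch):

sudo systemctl daemon-reload
sudo systemctl enable --now openvpn@server
sudo systemctl status openvpn@server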