Dataset columns: source_id (int64, values 1 to 74.7M), question (string, lengths 0 to 40.2k), response (string, lengths 0 to 111k), metadata (dict).
337,332
I have a computer with two NICs, one eth, one wlan. wlan is on 10.0.0.0/24, eth is on 192.168.0.0/16. Kernel routing table is:

$ route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         192.168.0.1     0.0.0.0         UG    100    0        0 enp4s0f0
0.0.0.0         10.0.0.1        0.0.0.0         UG    600    0        0 wlp3s0
10.0.0.0        0.0.0.0         255.255.255.0   U     600    0        0 wlp3s0
169.254.0.0     0.0.0.0         255.255.0.0     U     1000   0        0 enp4s0f0
192.168.0.0     0.0.0.0         255.255.0.0     U     100    0        0 enp4s0f0

Questions: Does the kernel choose which default gateway to use, or does it send to both? How does it choose, if it chooses? What is the most efficient way to influence the choice, or to make it make one?
In this case the kernel chooses based on the metric: the lower metric wins. (Route selection is based on route specificity, administrative cost, and metric in that order. Both your default gateways have the same specificity and administrative cost.) To change the selection, the best approach is to change the route metric.
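For example, with iproute2 you could make the Wi-Fi gateway win by giving it a lower metric than the wired one; a sketch, reusing the interface and gateway from the routing table above (pick any metric below 100):

sudo ip route replace default via 10.0.0.1 dev wlp3s0 metric 50

If NetworkManager manages the connections, the persistent way is to set the metric on the connection profile instead (for example nmcli connection modify <wifi-profile> ipv4.route-metric 50, where <wifi-profile> is whatever your connection is called), so the change survives reconnects.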
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/337332", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/125207/" ] }
337,336
I have some files on my laptop which I want to copy to a remote cluster. To this end, I use PuTTY to SSH to the remote cluster. Then, to copy the files, I use the PuTTY terminal and, after logging in to the remote system, I run the instruction below,

scp -r ~/Desktop/AFU/ username@host:~/SVM

aiming to copy all files in the folder C:\Users\name\Desktop\AFU on my laptop to a folder named SVM on the remote cluster. However, it does not work and I get the error: /home/username/Desktop/AFU: No such file or directory. Could you please help me? The operating system on my laptop is Windows 8.1.
The scp command you're trying to run is not only wrong, but won't work anyway because it presumes your laptop is running an SSH server. To do what you want, there's a much simpler way: use WinSCP on your laptop to connect to the remote cluster (it works similarly to PuTTY), then upload the files you want -- in your case, files from C:\Users\name\Desktop\AFU on your laptop to ~/SVM on the remote cluster.
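If you would rather stay on the command line, PuTTY also ships pscp, a command-line copy tool with scp-like syntax. A sketch, run from a Windows command prompt on the laptop (not on the cluster), using the paths from the question:

pscp -r C:\Users\name\Desktop\AFU username@host:SVM

Either way the transfer has to be initiated from the laptop, since that is where the files are and the laptop is not running an SSH server.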
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/337336", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/210415/" ] }
337,358
So I bought a laptop that has a Linux OS and I have ABSOLUTELY no idea about Linux. When I start it, it loads some things and then shows a black screen with the following text:

Acer Linux N1 (1.00.3001)
Kernel 4.2.3-300.fc23.x86_64 on an x86_64 (tty1)
Admin Console: https://127.0.0.1:9090/ or https://[::ffff:127.0.0.1]:9090/
localhost login: root (automatic login)
Last login: Sat Jan 14 16:30:25 on tty1
[root@localhost ~]#

What should I even write for it to boot into the actual OS, so I can access the files and things? I am stuck here, and I don't know what to do next. I would like to enter the boot manager and set the boot option to CD, so I can install Windows 7. How do I do that?
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/337358", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/210429/" ] }
337,424
I tried to write a program that can read input from a file, and got stuck. My program is prog:

#!/bin/bash
num=$(($1 + $2))
echo $num

my input test.in:

1
1

I used ./prog < test1.in but got the error message

./prog: line 2: + : syntax error: operand expected (error token is "+ ")

What's wrong? Thanks!
What you have written is not a program that reads input from a file, but a program that takes its input in the form of positional parameters (aka command-line arguments). The redirection operator < sends your file data to the program's standard input stream (aka stdin) - which your program ignores. At its simplest, to read one line per value from standard input, you could change your program to

#!/bin/bash
read a
read b
num=$((a+b))
echo $num

Now when you redirect stdin from your test file, the result should be

$ ./newprog < test1.in
2

Alternatively you could have used the xargs utility to read your file data and pass its contents to your program as arguments

$ xargs -a test1.in ./prog
2
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/337424", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/209747/" ] }
337,435
I have been trying to insert a file as the first line of another with the following sed command, without much success. Each time the file is inserted after line 1. Is there a switch that will insert it before line 1?

sed -i '1r file1.txt' file2.txt

Thanks in advance.
The sed command r will place the contents of the specified file in the output before reading the next line of input. Unfortunately you can't specify 0 as the address for this command, so there's no way to insert the contents of a file before the first line of input (without poking around with the hold space). You could, however, just use plain old cat . It is, after all, the command for concatenating files: $ cat file1.txt file2.txt >out && mv out file2.txt To be sure you're writing to a temporary file that does not already exist, one may use the mktemp utility: $ tmp="$(mktemp)" && cat file1.txt file2.txt >"$tmp" && mv "$tmp" file2.txt This is slightly awkward on the command line, but a good precaution in any script that needs to write to a temporary file.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/337435", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/68650/" ] }
337,454
I experience that my Ubuntu Linux requires a restart every now and then, and also that it needs a restart more often than in the old days (maybe because it updates more). I understand that updating the kernel forces a reboot, but is that just a convenience, and is it technically feasible (only complicated) to update without rebooting? Or is it something that is technically impossible to update while the system is running? "The system will go down for maintenance now...." I understand that a BIOS update forces a reboot, but I don't understand why it is absolutely necessary, or whether that is for convenience and making the update much easier. Reading a similar question, there were no convincing answers. I believe the issue is that it is running binaries that should be replaced.
The only thing that absolutely requires a reboot is modifying the kernel. Any process can be killed if the program (or some library or other file that it depends on) has been upgraded, but that isn't the case for the kernel. Actually, it's possible to patch a Linux kernel directly in memory sometimes. There are several tools that work at least in some cases: Ksplice , Kpatch , kGraft … Each of them works in some simple cases but not all; they typically work with security updates, as those don't change any internal interface (especially data structure formats), but not to upgrade between kernel versions. Ubuntu LTS supports kernel patching using livepatch since 16.04 with the 4.4 kernel with a proprietary client. Although anything that isn't in the kernel can be upgraded on a running system, it still requires restarting the affected processes. On a server this means restarting the servers that use an executable, library, plugin, data file, configuration or other dependency that has been updated. On a desktop machine, this can mean getting users to log out and back in (e.g. if it's a bug in the graphics driver). Determining exactly what needs to be restarted can be difficult as it depends on the exact nature of the bug fix and how the program is used. Rather than go through the huge amount of work needed to determine this precisely, Ubuntu plays it safe and recommends a reboot in packages that think that restarting the service would be too fiddly. The way it works is that you get a prompt to reboot when a package's post-installation (post-upgrade, in this case) script declares that a reboot is required (see How can I tell what package requires a reboot of my system? ).
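On Debian/Ubuntu you can check whether such a reboot flag has been raised without waiting for the desktop prompt; a small sketch using the files the update-notifier convention creates (if they are absent, no reboot is pending):

[ -f /var/run/reboot-required ] && cat /var/run/reboot-required /var/run/reboot-required.pkgs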
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/337454", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/9115/" ] }
337,457
I want to find a file and then enter the directory containing it. I tried

find /media/storage -name "Fedora" | xargs cd

but of course I get the "is not a directory" error. How do I enter its parent directory with a one-line command?
At least if you have GNU find, you can use -printf '%h' to get the directory:

%h     Leading directories of file's name (all but the last element). If the file name contains no slashes (since it is in the current directory) the %h specifier expands to ".".

So you could probably do

cd "$(find /media/storage -name "Fedora" -printf '%h' -quit)"

The -quit should prevent multiple arguments to cd in the case more than one file matches.
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/337457", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/135657/" ] }
337,465
I'm creating a small backup script using sshfs : sshfs backup_user@target_ip:/home /mnt/backup Is there a way to include the password in this command? Or is there another file transfer solution where the login password can be included other than FTP/SFTP?
-o password_stdin does not seem to work on all systems, for instance FreeBSD. You can also use the expect interpreter; it should work with sshfs and should do the trick. Another solution would be sshpass. For instance, let's say you are backing up the directory /var/www.

Backing up:

name=$(date '+%y-%m-%d')
mkdir /backup/$name && tar -czvf /backup/$name/"$name.tar.gz" /var/www

Uploading the backup file to the backup server:

sshpass -p "your_password" scp -r /backup/$name backup_user@target_ip:/home/

So it will upload the directory with today's backup.

But still, as said above, the best (safe and simple) way would be to use an SSH key pair. The only inconvenience is that you have to go through the key generation process once for every server you need to pair, but it is better than keeping a password in plain text on every server you want to back up. :)

Generating a key pair the proper way:

On the local server: ssh-keygen -t rsa
On the remote server: ssh root@remote_servers_ip "mkdir -p .ssh"
Upload the generated public key to the remote server: cat ~/.ssh/id_rsa.pub | ssh root@remote_servers_ip "cat >> ~/.ssh/authorized_keys"
Set permissions on the remote server: ssh root@remote_servers_ip "chmod 700 ~/.ssh; chmod 640 ~/.ssh/authorized_keys"
Log in: ssh root@remote_servers_ip
Enable SSH protocol v2: uncomment "Protocol 2" in /etc/ssh/sshd_config
Enable public key authentication in sshd: uncomment "PubkeyAuthentication yes" in /etc/ssh/sshd_config
If StrictModes is set to yes in /etc/ssh/sshd_config, then: restorecon -Rv ~/.ssh
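Once a key pair is in place, the sshfs command from the question needs no password at all; if the key is not your default one you can point sshfs at it explicitly, since unrecognized -o options are passed through to ssh (a sketch):

sshfs -o IdentityFile=~/.ssh/id_rsa backup_user@target_ip:/home /mnt/backup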
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/337465", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/209233/" ] }
337,474
I have searched many OpenSUSE forums for an answer to this, but so far I have not found one. Long story short, when installing the RPM for the JDK from Oracle, I receive the following:

> sudo zypper install jdk-8u111-linux-x64.rpm
[sudo] password for root:
Loading repository data...
Reading installed packages...
Resolving package dependencies...

The following NEW package is going to be installed:
  jdk1.8.0_111

1 new package to install.
Overall download size: 158.3 MiB. Already cached: 0 B. After the operation, additional 258.5 MiB will be used.
Continue? [y/n/? shows all options] (y): y
Retrieving package jdk1.8.0_111-2000:1.8.0_111-fcs.x86_64 (1/1), 158.3 MiB (258.5 MiB unpacked)
Checking for file conflicts: ......................................................................[done]
(1/1) Installing: jdk1.8.0_111-2000:1.8.0_111-fcs.x86_64 ..........................................[done]
Additional rpm output:
Unpacking JAR files...
        tools.jar...
        plugin.jar...
        javaws.jar...
        deploy.jar...
        rt.jar...
        jsse.jar...
        charsets.jar...
        localedata.jar...
update-alternatives: using /usr/java/jdk1.8.0_111/jre/bin/java to provide /usr/bin/java (java) in auto mode
update-alternatives: error: alternative ControlPanel can't be slave of javac: it is a slave of java
warning: %post(jdk1.8.0_111-2000:1.8.0_111-fcs.x86_64) scriptlet failed, exit status 2

Forgive me for the high level of verbosity, I just wanted you to see precisely what I see. This is on a fresh install of OpenSUSE Tumbleweed. I have also tried to install it on a fresh install of OpenSUSE Leap 42.2. After my very first attempt, I reloaded with no Java support whatsoever (no OpenJDK) to start from scratch, as I've done with this install. I've followed the guides for installing Java on OpenSUSE specifically: ones that have no Java installed, ones that have OpenJDK installed prior, ones that install both the JDK and JRE, and so on. For the record, Java itself is functioning, but obviously the Control Panel is not. I have attempted to manually use update-alternatives, I've attempted to compile from scratch, I've reloaded, I've switched from Leap to Tumbleweed. Here is some other information that may be of use:

> sudo update-alternatives --list java
/usr/java/jdk1.8.0_111/jre/bin/java
> sudo update-alternatives --config java
There is only one alternative in link group java (providing /usr/bin/java): /usr/java/jdk1.8.0_111/jre/bin/java
Nothing to configure.
> java -version
java version "1.8.0_111"
Java(TM) SE Runtime Environment (build 1.8.0_111-b14)
Java HotSpot(TM) 64-Bit Server VM (build 25.111-b14, mixed mode)
> javac -version
javac 1.8.0_111

Again, I can see Java is working. But I'd still like to understand why this is so easily reproducible and how to fix it.
The process of executing this install is easier than most think, and surprisingly there is not much good or direct info out there on how to do this. The above answer is correct, but has some elements which are a bit outdated.

Download Oracle JDK 1.8.0_151:

# rpm installation of Oracle JDK 1.8.0_151
wget --no-check-certificate --no-cookies --header "Cookie: oraclelicense=accept-securebackup-cookie" http://download.oracle.com/otn-pub/java/jdk/8u151-b12/e758a0de34e24606bca991d704f6dcbf/jdk-8u151-linux-x64.rpm

Run the installation command:

rpm -ivh jdk-8u151-linux-x64.rpm

Verify the version is configured / installed to your preference:

java -version

Set the environment variables using the command line or an editor:

# command line
export JAVA_HOME=/usr/java/jdk1.8.0_151/
export PATH=$PATH:/usr/java/jdk1.8.0_151/bin

# or set the variables at the END of the file /etc/profile
sudo vim /etc/profile

# variables to set within the file
JAVA_HOME=/usr/java/jdk1.8.0_151
PATH=$PATH:$HOME/bin:$JAVA_HOME/bin
export JAVA_HOME
export PATH

# to save / exit vim, execute the following keystrokes: <ESC> then :x
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/337474", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/209702/" ] }
337,476
This gives me an error that says too many arguments:

if [ $( file -b $i ) == "directory" ]

But when I tried this

name=$( file -b $i )
if [ name == "directory" ]

It seems to work just fine. Can someone explain this or point out in the docs an explanation?
A couple of issues: ] indicates the end of arguments for [ (test), and it must be the last argument; presumably you meant to use:

if [ $( file -b $i ) == "directory" ]

If you had used the above, you would get bash: [: too many arguments, because word splitting is done on the output of the variable expansion ($i) and then of the command substitution $() (the file command), so [ sees multiple words before ==, leading to the error message. You need to quote the variable expansion and the command substitution:

[ "$(file -b "$i")" == "directory" ]

As a side note, you should use the bash keyword [[ instead of [, as the former will handle word splitting (and pathname expansion) for you.
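A minimal sketch of the [[ variant, which avoids word splitting on the unquoted command substitution (quoting the filename variable is still good practice):

if [[ $(file -b "$i") == "directory" ]]; then
    echo "$i is a directory"
fi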
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/337476", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/154099/" ] }
337,571
The default behaviour of git diff — syntax-colored, paged — is very nice to work with, but it would be slightly nicer with line numbers for context, particularly for larger diffs, and especially for the final page. git diff | nl | more almost gives me everything I need, but it discards the coloring; any way I can get that back?
Use less -R to display the colour, but you will need to force git to use colours, because when you pipe git diff it defaults to no colour:

git diff --color HEAD~3 HEAD | nl | less -R

If you would like to get the line numbers per line, have a look at the solutions suggested here: https://stackoverflow.com/questions/24455377/git-diff-with-line-numbers-git-log-with-line-numbers
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/337571", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/365/" ] }
337,619
I recently saw https://lintian.debian.org/tags/binary-without-manpage.html and it shows around 14k manpages which are missing. This means it's more than likely that some of the binary packages (not libraries) have missing manpages. How do I get a list of installed binary packages/applications (NOT libraries) which don't have manpages? I might know some and start contributing a bit towards that.
You can list all binaries without a man page with the manpage-alert command:

manpage-alert - check for binaries without corresponding manpages

DESCRIPTION
    manpage-alert searches the given list of paths for binaries without corresponding manpages.
    If no paths are specified on the command line, the path list /bin /sbin /usr/bin /usr/sbin /usr/games will be assumed.
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/337619", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/50490/" ] }
337,638
This will still write to /dev/foo if there is a journal:

mount -oro /dev/foo /mnt/disk

How can I treat /dev/foo as read-only?
mount -oro,noload /dev/foo /mnt/disk
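The noload option tells the ext3/ext4 driver to skip journal recovery at mount time, which is what would otherwise write to the device despite ro. If you additionally want the block device itself to refuse all writes, you can mark it read-only at the block layer first; a sketch using util-linux's blockdev:

blockdev --setro /dev/foo
mount -o ro,noload /dev/foo /mnt/disk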
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/337638", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/2972/" ] }
337,739
I have a binary (that I can't modify) and I can do:

./binary < file

I also can do:

./binary << EOF
> "line 1 of file"
> "line 2 of file"
...
> "last line of file"
> EOF

But

cat file | ./binary

gives me an error. I don't know why it doesn't work with a pipe. In all 3 cases the content of file is given to the standard input of binary (in different ways):

1. bash reads the file and gives it to stdin of binary
2. bash reads lines from stdin (until EOF) and gives them to stdin of binary
3. cat reads and puts the lines of file on its stdout, bash redirects them to stdin of binary

The binary shouldn't notice the difference between those 3, as far as I understood it. Can someone explain why the 3rd case doesn't work?

BTW: The error given by the binary is: 20170116/125624.689 - U3000011 Could not read script file '', error code '14'.

But my main question is: how is there a difference, for any program, between those 3 options? Here are some further details: I tried it again with strace and there were in fact some errors ESPIPE (Illegal seek) from lseek followed by EFAULT (Bad address) from read right before the error message. The binary I tried to control with a ruby script (without using temporary files) is part of the callapi from Automic (UC4).
In

./binary < file

binary's stdin is the file open in read-only mode. Note that bash doesn't read the file at all, it just opens it for reading on the file descriptor 0 (stdin) of the process it executes binary in.

In:

./binary << EOF
test
EOF

Depending on the shell, binary's stdin will be either a deleted temporary file (AT&T ksh, zsh, bash...) that contains test\n as put there by the shell, or the reading end of a pipe (dash, yash; the shell writes test\n in parallel at the other end of the pipe). In your case, if you're using bash, it would be a temp file.

In:

cat file | ./binary

Depending on the shell, binary's stdin will be either the reading end of a pipe, or one end of a socket pair where the writing direction has been shut down (ksh93), and cat is writing the content of file at the other end.

When stdin is a regular file (temporary or not), it is seekable. binary may go to the beginning or end, rewind, etc. It can also mmap it, do some ioctl()s like FIEMAP/FIBMAP (if using <> instead of <, it could truncate/punch holes in it, etc).

Pipes and socket pairs, on the other hand, are an inter-process communication means; there's not much binary can do besides reading the data (though there are also some operations, like some pipe-specific ioctl()s, that it could do on them and not on regular files). Most of the time, it's the missing ability to seek that causes applications to fail/complain when working with pipes, but it could be any of the other system calls that are valid on regular files but not on different types of files (like mmap(), ftruncate(), fallocate()). On Linux, there's also a big difference in behaviour when you open /dev/stdin while fd 0 is on a pipe or on a regular file.

There are many commands out there that can only deal with seekable files, but when that's the case, that's generally not for the files open on their stdin.

$ unzip -l file.zip
Archive:  file.zip
  Length      Date    Time    Name
---------  ---------- -----   ----
       11  2016-12-21 14:43   file
---------                     -------
       11                     1 file
$ unzip -l <(cat file.zip)   # more or less the same as cat file.zip | unzip -l /dev/stdin
Archive:  /proc/self/fd/11
  End-of-central-directory signature not found.  Either this file is not a zipfile, or it
  constitutes one disk of a multi-part archive.  In the latter case the central directory
  and zipfile comment will be found on the last disk(s) of this archive.
unzip:  cannot find zipfile directory in one of /proc/self/fd/11 or /proc/self/fd/11.zip,
        and cannot find /proc/self/fd/11.ZIP, period.

unzip needs to read the index stored at the end of the file, and then seek within the file to read the archive members. But here, the file (regular in the first case, pipe in the second) is given as a path argument to unzip, and unzip opens it itself (typically on a fd other than 0) instead of inheriting a fd already opened by the caller. It doesn't read zip files from its stdin. stdin is mostly used for user interaction.

If you run that binary of yours without redirection at the prompt of an interactive shell running in a terminal emulator, then binary's stdin will be inherited from its caller the shell, which itself will have inherited it from its caller the terminal emulator, and will be a pty device open in read+write mode (something like /dev/pts/n). Those devices are not seekable either. So, if binary works OK when taking input from the terminal, possibly the issue is not about seeking.
If that 14 is meant to be an errno (an error code set by failing system calls), then on most systems, that would be EFAULT ( Bad address ). The read() system call would fail with that error if asked to read into a memory address that is not writable. That would be independent of whether the fd to read the data from points to a pipe or regular file and would generally indicate a bug 1 . binary possibly determines the type of file open on its stdin (with fstat() ) and runs into a bug when it's neither a regular file nor a tty device. Hard to tell without knowing more about the application. Running it under strace (or truss / tusc equivalent on your system) could help us see what is the system call if any that is failing here. 1 The scenario envisaged by Matthew Ife in a comment to your question sounds a lot plausible here. Quoting him: I suspect it is seeking to the end of file to get a buffer size for reading the data, badly handling the fact that seek doesn't work and attempting to allocate a negative size (not handling a bad malloc). Passing the buffer to read which faults given the buffer is not valid.
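To see the difference concretely, one could compare the system calls made in the two invocations; a sketch (the syscall filter is just for readability):

strace -f -e trace=open,read,lseek,mmap ./binary < file
cat file | strace -f -e trace=open,read,lseek,mmap ./binary

In the piped case you would expect lseek() to fail with ESPIPE, matching the errors already observed.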
{ "score": 8, "source": [ "https://unix.stackexchange.com/questions/337739", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/210721/" ] }
337,754
Does the yum list command get its data from the yum repository or from the Red Hat page via the internet? Background: I have updated the yum repository for httpd only (x86_64 with an updated httpd rpm)

createrepo --update /repository/yum/x86_64

Then I have reverted the original repository file

createrepo --update /repository/yum/x86_64_20170116

When I check the httpd version of x86_64_20170116, the httpd version is outdated (original state)

ll x86_64_20170116/Packages/httpd*

However, when I enter the command below, the httpd version is up to date

yum list available | grep httpd

Could someone please shed some light on this?
{ "score": 8, "source": [ "https://unix.stackexchange.com/questions/337754", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/210733/" ] }
337,774
Today, after doing updates in Debian Stretch, it started displaying these warnings when restarting the ssh service with my current config:

/etc/ssh/sshd_config line 17: Deprecated option KeyRegenerationInterval
/etc/ssh/sshd_config line 18: Deprecated option ServerKeyBits
/etc/ssh/sshd_config line 29: Deprecated option RSAAuthentication
/etc/ssh/sshd_config line 36: Deprecated option RhostsRSAAuthentication
[....] Restarting OpenBSD Secure Shell server: sshd
/etc/ssh/sshd_config line 17: Deprecated option KeyRegenerationInterval
/etc/ssh/sshd_config line 18: Deprecated option ServerKeyBits
/etc/ssh/sshd_config line 29: Deprecated option RSAAuthentication
/etc/ssh/sshd_config line 36: Deprecated option RhostsRSAAuthentication

What is happening here? Using Debian 9 with OpenSSH 7.4
In the current Stretch update, the openssh version changed from 7.3 to 7.4, released on 2016-Dec-19. As can be inferred from the release notes, and from @Jakuje's comments, the OpenSSH maintainers have removed the corresponding configuration options for good, as they are obsolete. So the lines can be safely removed. Also, take heed of:

Future deprecation notice

We plan on retiring more legacy cryptography in future releases, specifically:

In approximately August 2017, removing remaining support for the SSH v.1 protocol (client-only and currently compile-time disabled).
In the same release, removing support for Blowfish and RC4 ciphers and the RIPE-MD160 HMAC. (These are currently run-time disabled).
Refusing all RSA keys smaller than 1024 bits (the current minimum is 768 bits).
The next release of OpenSSH will remove support for running sshd(8) with privilege separation disabled.
The next release of portable OpenSSH will remove support for OpenSSL versions prior to 1.0.1.
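After removing those lines it is worth validating the file before restarting the service; sshd has a test mode for exactly this (run as root; it prints nothing and exits 0 when the configuration is valid):

sshd -t -f /etc/ssh/sshd_config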
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/337774", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/138261/" ] }
337,780
Is there any possible situation in which ls -l file.txt shows a different number of bytes than wc -c file.txt? In one script I found a comparison of those two values. What could be the reason for that? Is it even possible to have different byte counts for the same file?
Yes, there are such cases.

In the case of symlinks on a Linux system with GNU ls, ls -l will print the size of the link, while wc -c will resolve the actual file and read the number of bytes there. Below you can see that ls -l reports 29 bytes, while wc reports 172 bytes in the actual file.

$ ls -l /etc/resolv.conf
lrwxrwxrwx 1 root root 29 1月 17  2016 /etc/resolv.conf -> ../run/resolvconf/resolv.conf
$ wc -c /etc/resolv.conf
172 /etc/resolv.conf
$ wc -c /var/run/resolvconf/resolv.conf
172 /var/run/resolvconf/resolv.conf
$ ls -l /var/run/resolvconf/resolv.conf
-rw-r--r-- 1 root root 172 1月 15 15:49 /var/run/resolvconf/resolv.conf

In the case of virtual filesystems, such as /proc or /sys, many files there will show a size of 0 with ls -l. Under the /dev filesystem we have a variety of special files, such as character devices and block devices - wc -c hangs on those and ls -l shows major and minor numbers instead of a size.

Named pipes will be reported as 0 bytes by ls -l, but wc -c will actually read the contents of the pipe, so technically it will tell you how much data is in the named pipe:

$ mkfifo named.pipe
$ echo "This is a test" > named.pipe &
[1] 2129
$ ls -l named.pipe
prw-rw-r-- 1 xieerqi xieerqi 0 1月 16 08:40 named.pipe|
$ wc -c named.pipe
15 named.pipe
[1]+  Done                    echo "This is a test" > named.pipe

For regular files, the sizes should be equal.

The point of ls -l and wc -c, and how they work, also differs. wc -c actually opens the file for reading (you can see that if you run strace wc -c /etc/passwd for example). ls -l only performs a stat() call on it. This also explains why in /proc ls -l shows 0 size - you can't stat those files because they aren't "real" or actually stored on the hard drive/SSD. wc -c instead reads the contents of that file and calculates its size.

Finally, ls -l is only a tool for listing items interactively. It's rarely a good fit for scripting. When you actually need to read the data, use wc -c instead. Please note that for scripting and assessing the size of a file, ls is not the best candidate. In fact, it is common practice to avoid parsing ls output. Please use du -b for finding out the size of a file.
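A quick way to see both views side by side for any path is to compare stat (metadata, which is what ls uses) with an actual read (which is what wc does); a small sketch with GNU stat:

stat -c '%s' /etc/resolv.conf    # size from the symlink/inode metadata (29 in the example above)
wc -c < /etc/resolv.conf         # bytes obtained by actually reading the file (172 above)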
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/337780", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/152015/" ] }
337,819
On Slackware, using sbopkg permits one to build a package from source. The repos are not as big as Debian's, but it's nice. Some software can use environment variables; for example, in the VICE C64 emulator, if the variable FFMPEG is set to yes, it will enable ffmpeg recording in the emulator. I tried to use

$ export FFMPEG=yes; sudo sbopkg -B -i vice

but ffmpeg is disabled. Instead I had to use

$ su -
$ export FFMPEG=yes
$ sbopkg -B -i vice

which works. How do I use environment variables with sudo?
You may use sudo's -E option:

FFMPEG=yes sudo -E sbopkg -B -i vice

From the manual:

-E, --preserve-env
    Indicates to the security policy that the user wishes to preserve their existing environment variables. The security policy may return an error if the user does not have permission to preserve the environment.

Note that this exports all your existing environment variables. It is safer to only pass the environment variables you need, with the following syntax:

sudo FFMPEG=yes sbopkg -B -i vice
{ "score": 7, "source": [ "https://unix.stackexchange.com/questions/337819", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/80389/" ] }
337,833
I have installed httpd on CentOS 7, but the installed version is 2.4.6-45.el7 . This page says that the latest version of httpd is 2.4.25. I want to know if 2.4.6-45.el7 is equivalent to 2.4.25 . What does -45.el7 mean? Is there any documentation about this?
That's version 2.4.6, and the part after the - is the package release version. el (the letter l, not the digit 1) stands for Enterprise Linux, and the number following it is the corresponding version (7). This versioning is consistent across Red Hat and related distributions (including CentOS). The package release version changes when the package has to be rebuilt because of a change to another package, which is why it increases even though the actual source package is still the same.
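To see how rpm itself splits those fields for an installed package, you can query them separately; a sketch whose sample output simply mirrors the version discussed above:

rpm -q --queryformat '%{NAME} %{VERSION} %{RELEASE}\n' httpd
# httpd 2.4.6 45.el7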
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/337833", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/205726/" ] }
337,852
I have a huge file with about 12300 lines that look something similar to this.

001.domain.com=001.somedomain.com:10001
002.domain.com=002.somedomain.com:10002
003.domain.com=003.somedomain.com:10003

I want the file to look like this when it is done

001.domain.com=IP_Address_of_001.somedomain.com:10001
002.domain.com=IP_Address_of_002.somedomain.com:10002
003.domain.com=IP_Address_of_003.somedomain.com:10003

So basically I need to find and replace the hostname after the = signs with the ip address. If someone can point me in the right direction, I would appreciate it.
This uses sed to extract the hostname, then uses dig to get its IP, then uses sed again for the replace. It outputs the replacements to a new file:

$ while read line; do
    hostname=$(echo "$line" | sed "s/.*=\(.*\):.*/\1/g")
    ip=$(dig +short $hostname | head -n1)
    echo "$line" | sed "s/\(.*=\).*\(:.*\)/\1${ip}\2/g"
done < file.txt > new_file.txt
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/337852", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/210789/" ] }
337,853
(On stock Debian testing aka stretch aka 9, so I have a regular systemd+logind+NetworkManager+GNOME stack)

I have a pair of scripts that I want to run on startup/shutdown and resume/suspend. These scripts require networking to be present when they run. I have attempted this with the following unit:

[Unit]
Description=Yamaha Reciever power
Requires=network-online.target
After=network-online.target
Before=sleep.target
Conflicts=sleep.target

[Service]
Type=oneshot
ExecStart=/usr/local/bin/av-up
ExecStop=/usr/local/bin/av-down
RemainAfterExit=yes

[Install]
WantedBy=graphical.target

This works correctly on startup/shutdown, however during suspend it runs after the network has shut down and therefore fails. I have determined that the reason for this is how suspend proceeds under systemd:

1. Suspend is initiated by a program (e.g. systemctl suspend) sending a dbus call to logind.
2. Logind then sends a PrepareToShutdown dbus signal to anyone listening.
3. Logind then sends a StartUnit dbus call to systemd to run the suspend.target unit.

NetworkManager listens to PrepareToShutdown, so it removes the network at (2), while my unit is triggered when systemd actually starts suspending at (3). NetworkManager keeps an "inhibit" lock with logind to ensure it shuts the network down before (3). (Side note: it seems crazy to have something like systemd control the ordering of suspend/resume, only to subvert it with logind letting things circumvent this.)

What is the right way to trigger a program to run on suspend/resume while networking is still running? Should I use NetworkManager pre-down scripts? If so, how do I stop it triggering if the network goes down but I'm not suspending? Is there a way to hook into the suspend process earlier? Is there a way to make NetworkManager keep the network up longer?

NB: this is distinct from How to write a Systemd unit that will fire before networking goes down, as I am talking about suspend/resume.
Inspired by https://unix.stackexchange.com/a/139664/160746 I upgraded my start/stop scripts to a full blown daemon and made it listen for PrepareToShutdown signal. This is a race against NetworkManager every start/stop, but it seems to work reliably on my system. I have uploaded my code and systemd unit at https://github.com/davidn/av .
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/337853", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/160746/" ] }
337,859
I'm wondering if I can do something like: export HOME=$HOME:$HOME/.configs So that I can keep all my custom configs in ~/.configs. I know it's possible to set it but I'm not sure if it will cause problems down the road. Is this safe? Is there a more standard way?
Firstly, albeit possible, and experts might need this in certain rare cases, you shouldn't change the value of the HOME environment variable, it is set by your system. Secondly, the content of HOME is expected to be one, and only one, (existing) directory: your home directory. See what POSIX says about HOME : HOME: The system shall initialize this variable at the time of login to be a pathname of the user's home directory.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/337859", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/103531/" ] }
337,870
I'm sure many of you are familiar with the canonical pathmunge function used in Bourne-shell-compatible dotfiles to prevent duplicate entries in the PATH variable. I have also created similar functions for the LD_LIBRARY_PATH and MANPATH variables, so I have the following three functions in my .bashrc:

# function to avoid adding duplicate entries to the PATH
pathmunge () {
    case ":${PATH}:" in
        *:"$1":*)
            ;;
        *)
            if [ "$2" = "after" ] ; then
                PATH=$PATH:$1
            else
                PATH=$1:$PATH
            fi
    esac
}

# function to avoid adding duplicate entries to the LD_LIBRARY_PATH
ldpathmunge () {
    case ":${LD_LIBRARY_PATH}:" in
        *:"$1":*)
            ;;
        *)
            if [ "$2" = "after" ] ; then
                LD_LIBRARY_PATH=$LD_LIBRARY_PATH:$1
            else
                LD_LIBRARY_PATH=$1:$LD_LIBRARY_PATH
            fi
    esac
}

# function to avoid adding duplicate entries to the MANPATH
manpathmunge () {
    case ":${MANPATH}:" in
        *:"$1":*)
            ;;
        *)
            if [ "$2" = "after" ] ; then
                MANPATH=$MANPATH:$1
            else
                MANPATH=$1:$MANPATH
            fi
    esac
}

Are there any elegant ways that I can combine these three functions into one to keep my .bashrc file smaller? Maybe a way that I can pass in which variable to check/set, similar to passing-by-reference in C?
You can use eval to get and set the value of a variable knowing its name; the following function works both in Bash and in Dash:

varmunge ()
{
    : '
    Invocation: varmunge <varname> <dirpath> [after]
    Function:   Adds <dirpath> to the list of directories in <varname>.
                If <dirpath> is already present in <varname> then <varname> is left unchanged.
                If the third argument is "after" then <dirpath> is added to the end of <varname>,
                otherwise it is added at the beginning.
    Returns:    0 if everything was all right, 1 if something went wrong.
    '
    :
    local pathlist
    eval "pathlist=\"\$$1\"" 2>/dev/null || return 1
    case ":$pathlist:" in
        *:"$2":*)
            ;;
        "::")
            eval "$1=\"$2\"" 2>/dev/null || return 1
            ;;
        *)
            if [ "$3" = "after" ]; then
                eval "$1=\"$pathlist:$2\"" 2>/dev/null || return 1
            else
                eval "$1=\"$2:$pathlist\"" 2>/dev/null || return 1
            fi
            ;;
    esac
    return 0
}
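With that helper in place, the three original functions collapse into single calls; for example (the directory paths here are just placeholders):

varmunge PATH /usr/local/bin
varmunge LD_LIBRARY_PATH /opt/foo/lib after
varmunge MANPATH /usr/local/share/man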
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/337870", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/153578/" ] }
337,877
How do I silently extract files, without displaying status?
man unzip: -q perform operations quietly (-qq = even quieter). Ordinarily unzip prints the names of the files it's extracting or testing, the extraction methods, any file or zipfile comments that may be stored in the archive, and possibly a summary when finished with each archive. The -q[q] options suppress the printing of some or all of these messages.
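For example, to extract an archive with no per-file output at all (the archive name and target directory are just placeholders):

unzip -qq archive.zip -d /path/to/dest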
{ "score": 7, "source": [ "https://unix.stackexchange.com/questions/337877", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/210788/" ] }
337,884
Whenever I am upgrading one of my Linux boxes (i.e. installing the next version of my favorite distribution), upgrading the respective configuration files always has been very time consuming, because in many cases I don't just alter the distribution's default configuration files to reflect my situation, but I have crafted my own configuration files very carefully. Until now, when upgrading, in these cases I either have read the respective man pages completely from scratch and have made new configuration files from scratch (this is clean, but costs much effort), or I have compared (think diff) the old and the new distribution's default configuration files, and when I saw a difference which could be important, I have "ported" (merged) it into my own configuration file (I am not happy with that method for several reasons, one of them being that the maintainer could have ignored a new configuration directive which could be dangerous to ignore in my case; but it hasn't been always avoidable if I was in hurry). I always have asked myself how other people cope with that problem. One idea would be to compare the man page of the old version of a software to that of the new version, thus seeing all differences in configuration directives or methods immediately. So here is the question: Does anybody know about a specific diff viewer for man pages, notably for the text console (main scenario would be working via SSH without X)? Please note that I am aware that there are many diff viewers (I have read dozens of articles and Q & A about this subject). My question is specifically about diff viewers for man pages which offer some comfort (for example, you tell it the base directory of the old man pages and then only have to say "show diff sshd_config" or the like). I am also aware that I eventually could read the change log of the respective upstream, but I have seen many cases where you couldn't rely on it (i.e. not all changes were mentioned there), it's much more inconvenient, and some distributions heavily patch the upstream, so I would say that this is not really an option. Comparing the old version's source code with the new one only for finding out about new configuration options seems too much and perhaps is impossible in case of Apache, Sendmail and the like. In contrast, comparing the man pages seems reasonable (if possible). Any ideas?
Man pages, once changed to a human-readable form, are text files that you can diff with whatever tool suits you. Here are two examples, as two bash functions, for two tools: diff and vimdiff. Adapt them to your favorite tool.

With vimdiff:

vimdiff_man() { vimdiff -R <(man --manpath="/old/path/to/man" "$1") <(man "$1"); }

With diff, side by side, adjusted to your screen width:

diff_man() (
    width="${COLUMNS:-80}"
    export MANWIDTH=$((width / 2 - 2))
    diff -y -W"$width" <(man --manpath="/old/path/to/man" "$1") <(man "$1") | less
)

In each function, I'm diffing between two pseudo-files <(...), each containing the result of the man command between parentheses (this is bash's process substitution). /old/path/to/man is the directory hierarchy containing your old manual pages. It is expected to have the same secondary man levels man1, man2, ... as your main manual directory (probably /usr/share/man). Change it to fit your needs.

Usage:

diff_man sshd_config
vimdiff_man sshd_config
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/337884", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/210810/" ] }
337,899
I have files named

file_1_supply.csv
file_2_supply.csv
file_3_supply.csv
.......
file_30_supply.csv

I want to copy these files from one folder to another in Linux. The problem is there are also many other files in the directory. I want to do it from the command line because the directory has a lot of files.

cp file_1_supply.csv /home/user/destination

Usually I use this to copy, but how do I use this in a loop?
If you only want to copy file_1 - file_30 : cp file_{1..30}_supply.csv /home/user/destination
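If you specifically want the loop form you asked about, a shell for loop does the same job as the brace expansion above; a sketch:

for i in {1..30}; do
    cp "file_${i}_supply.csv" /home/user/destination/
done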
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/337899", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/203943/" ] }
337,910
How can I move a file within a directory to the current working directory without renaming? I can mv a file from current working directory to the parent directory without renaming using the shorthand.. mv file.file ../ To move a file from a directory to the current working directory I've tried.. mv directory/file.file / but I get permission denied. However, mv directory/file.file file.file works but I have to type out the file name and the autocomplete won't work because the file isn't in the current working directory. Isn't there a shorthand to specify the current working directory? Thanks
To move a file to the current directory you (as you correctly surmised) need to indicate which directory to move to. This is because mv will note that the destination is a directory and will not rename the file on the way. So the... Question Is: How do I denote the current directory on the command line Answer: The current working directory is . (a single dot)
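So, using the example from the question, the command is simply:

mv directory/file.file .

Tab completion still works here, because what you are completing is the source path, not the destination.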
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/337910", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/168989/" ] }
337,911
Many a times when manually grepping through a file, there are so many comments that your eyes glaze over and you start to wish there was a way in which you just could show it to display only those lines which have no comments. Is there a way to skip comments with cat or another tool? I am guessing there is a way and it involves a regular-expression. I want it just to display and not actually remove any lines or such. Comments are in the form of # and I'm using zsh as my xterm.
Well, that depends on what you mean by comments. If just lines without a # then a simple: grep -v '#' might suffice (but this will call lines like echo '#' a comment). If comment lines are lines starting with # , then you might need: grep -v '^#' And if comment lines are lines starting with # after some optional whitespace, then you could use: grep -v '^ *#' And if the comment format is something else altogether, this answer will not help you.
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/337911", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/50490/" ] }
337,919
Is there any (simple) way to deny FTP connections based on the general physical location? I plan to use FTP as a simple cloud storage for me and my friends. I use an odroid c2 (similar to raspberry pi but uses arm64 architecture) running Debian 8 with proftpd and ufw as my firewall. Ftp server runs on a non-standard port which I prefer not to mention here. I want to do this to increase the security of my server.
Use PAM and the GeoIP PAM module:

This PAM module provides GeoIP checking for logins. The user can be allowed or denied based on the location of the originating IP address. This is similar to pam_access(8), but uses a GeoIP City or GeoIP Country database instead of host name / IP matching.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/337919", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/210831/" ] }
337,952
I'm trying to install the hhvm-git package from the AUR and getting an error. There is a bug in one of the submodules. This bug is already fixed and I want to specify the revision containing that fix for the submodule. How can I do that? In the PKGBUILD I tried to specify the revision as suggested in the Arch Wiki (line in the source array):

"git+https://github.com/facebook/proxygen#7e37f926d922b55c85537057b57188dea9694c32"

Result:

-> Creating working copy of proxygen git repo...
remote: Counting objects: 6, done.
remote: Compressing objects: 100% (6/6), done.
remote: Total 6 (delta 4), reused 0 (delta 0)
Unpacking objects: 100% (6/6), done.
From /tmp/yaourt-tmp-german/aur-hhvm-git/proxygen
   7e2a49c..3395064  master     -> origin/master
==> ERROR: Unrecognized reference: 7e37f926d922b55c85537057b57188dea9694c32
I specified the revision in the wrong format. The correct format in my case is:

"git+https://github.com/facebook/proxygen#commit=7e37f926d922b55c85537057b57188dea9694c32"

From man PKGBUILD:

USING VCS SOURCES
    Building a developmental version of a package using sources from a version control system (VCS) is enabled by specifying the source in the form source=('directory::url#fragment'). Currently makepkg supports the Bazaar, Git, Subversion, and Mercurial version control systems. For other version control systems, manual cloning of upstream repositories must be done in the prepare() function. The source URL is divided into three components:

    directory (optional)
        Specifies an alternate directory name for makepkg to download the VCS source into.

    url
        The URL to the VCS repository. This must include the VCS in the URL protocol for makepkg to recognize this as a VCS source. If the protocol does not include the VCS name, it can be added by prefixing the URL with vcs+. For example, using a Git repository over HTTPS would have a source URL in the form: git+https://....

    fragment (optional)
        Allows specifying a revision number or branch for makepkg to checkout from the VCS. For example, to checkout a given revision, the source line would have the format source=(url#revision=123). The available fragments depend on the VCS being used:

        bzr: revision (see 'bzr help revisionspec' for details)
        git: branch, commit, tag
        hg: branch, revision, tag
        svn: revision
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/337952", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/78353/" ] }
337,965
I have added an alias command to kill my guake terminal to my .bashrc:

alias killguake="kill -9 $(ps aux | grep guake | head -n -1 | awk '{print $2}')"

But the problem is that the sub-command, i.e.

ps aux | grep guake | head -n -1 | awk '{print $2}'

is executed at the time of terminal startup, and killguake is set to kill -9 result_of_subcommand. Is there any way to set it up so that the sub-command is run/calculated every time I run killguake? So that it can have the latest PID for guake. I have also tried piping to kill using xargs, but that also results in everything being calculated at startup. Here is what I tried with piping:

ps aux | grep guake | head -n -1 | awk '{print $2}' | xargs -I{} kill -9 {}
Use pkill instead:

alias killguake='pkill guake'

This is a whole lot safer than trying to parse the process table outputted by ps. Having said that, I will now explain why your alias doesn't do what you want, but really, use pkill.

Your alias

alias killguake="kill -9 $(ps aux | grep guake | head -n -1 | awk '{print $2}')"

is double quoted. This means that when the shell parses that line in your shell initialization script, it will perform command substitutions ($( ... )). So each time the file runs, instead of giving you an alias to kill guake at a later time, it will give you an alias to kill the guake process running right now. If you list your aliases (with alias), you'll see that this alias is something like killguake='kill -9 91273', or possibly even just killguake='kill -9' if guake wasn't running at the time of shell startup.

To fix this (but really, just use pkill) you need to use single quotes and escape the $ in the Awk script (which is now in double quotes):

alias killguake='kill -9 $(ps aux | grep guake | head -n -1 | awk "{print \$2}")'

One of the issues with this approach in general is that you will match the processes belonging to other users. Another is that you will possibly just find the grep guake command instead of the intended process. Another is that it will throw an error if no process was found. Another is that you're invoking five external utilities to do the job of one.
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/337965", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/149432/" ] }
338,000
I am trying to get the output of a pipe into a variable. I tried the following things:

echo foo | myvar=$(</dev/stdin)
echo foo | myvar=$(cat)
echo foo | myvar=$(tee)

But $myvar is empty. I don't want to do:

myvar=$(echo foo)

Because I don't want to spawn a subshell. Any ideas?

Edit: I don't want to spawn a subshell because the command before the pipe needs to edit global variables, which it can't do in a subshell. Can it? The echo thing is just for simplification. It's more like:

complex_function | myvar=$(</dev/stdin)

And I don't get why that doesn't work. This works, for example:

complex_function | echo $(</dev/stdin)
The correct solution is to use command substitution like this:

variable=$(complex_command)

as in

message=$(echo 'hello')

(or for that matter, message=hello in this case).

Your pipeline:

echo 'hello' | message=$(</dev/stdin)

or

echo 'hello' | read message

actually works. The only problem is that the shell that you're using will run the second part of the pipeline in a subshell. This subshell is destroyed when the pipeline exits, so the value of $message is not retained in the shell.

Here you can see that it works:

$ echo 'hello' | { read message; echo "$message"; }
hello

... but since the subshell's environment is separate (and gone):

$ echo "$message"

(no output)

One solution for you would be to switch to ksh93, which is smarter about this:

$ echo 'hello' | read message
$ echo "$message"
hello

Another solution for bash would be to set the lastpipe shell option. This would make the last part of the pipeline run in the current environment. This however does not work in interactive shells, as lastpipe requires that job control is not active.

#!/bin/bash
shopt -s lastpipe
echo 'hello' | read message
echo "$message"
{ "score": 7, "source": [ "https://unix.stackexchange.com/questions/338000", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/210885/" ] }
338,053
I have a problem with this library: libXtst.so.6. I work with the eclipse IDE and I want to install scenebuilder-8.3.0-1.x86_64 for Drag and Drop UI in eclipse. When I enter this commend to install scenebuilder rpm -ihv scenebuilder-8.3.0-1.x86_64 terminal gives me this error: error: Failed dependencies: libXtst.so.6 is needed by scenebuilder-8.3.0-1.x86_64 I don't know what it needs of me, but I downloaded and installed libXtst-1.2.2-2.1.el7.x86_64.rpm, but it doesn't work!
Instead of downloading the RPM, you can try installing it from the CentOS repositories with the following (as root): yum install libXtst That should pull in any other dependencies and update any packages that require that. If the 64-bit package is already installed then you may need to install the 32-bit library yum install libXtst.i686
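To confirm afterwards that the runtime linker can find the library (regardless of which package provided it), you can check the linker cache:
ldconfig -p | grep libXtst
If it lists libXtst.so.6, the failed-dependency error from the scenebuilder RPM should go away.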
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/338053", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/210924/" ] }
338,104
How can I create multiple nested directories in one command? mkdir -p /just/one/dir But I need to create multiple different nested directories...
mkdir accepts multiple path arguments: mkdir -p -- a/foo b/bar a/baz
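If the directories share a common parent, bash brace expansion can shorten the command; the names below are only placeholders:
mkdir -p project/{src,docs,tests/{unit,integration}}
This creates project/src, project/docs, project/tests/unit and project/tests/integration in one go.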
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/338104", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/210969/" ] }
338,115
Is it possible to delete all mail in which the subject matches a regex pattern? For example, to delete message 1, you do: d 1 But to delete all mail with subject starting with, say, [SPAM] , I can't do: d -s "^\[SPAM\].*$"
I am answering this in case anyone comes by the same question. It does not appear there is any way to do bulk deletion by pattern matching using mail .An alternative is to use the mutt mail client, which does have such feature: D \[SPAM\] Thanks to @thrig and @dirkt for suggesting the alternative.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/338115", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/94339/" ] }
338,116
I have the following data (a list of R packages parsed from a Rmarkdown file), that I want to turn into a list I can pass to R to install: d3heatmapdata.tableggplot2htmltoolshtmlwidgetsmetricsgraphicsnetworkD3plotlyreshape2scalesstringr I want to turn the list into a list of the form: 'd3heatmap', 'data.table', 'ggplot2', 'htmltools', 'htmlwidgets', 'metricsgraphics', 'networkD3', 'plotly', 'reshape2', 'scales', 'stringr' I currently have a bash pipeline that goes from the raw file to the list above: grep 'library(' Presentation.Rmd \| grep -v '#' \| cut -f2 -d\( \| tr -d ')' \| sort | uniq I want to add a step on to turn the new lines into the comma separated list. I've tried adding tr '\n' '","' , which fails. I've also tried a number of the following Stack Overflow answers, which also fail: https://stackoverflow.com/questions/1251999/how-can-i-replace-a-newline-n-using-sed This produces library(stringr)))phics) as the result. https://stackoverflow.com/questions/10748453/replace-comma-with-newline-in-sed This produces ,% as the result. Can sed replace new line characters? This answer (with the -i flag removed), produces output identical to the input.
You can add quotes with sed and then merge lines with paste , like that: sed 's/^\|$/"/g'|paste -sd, - If you are running a GNU coreutils based system (i.e. Linux), you can omit the trailing '-' . If you input data has DOS-style line endings (as @phk suggested), you can modify the command as follows: sed 's/\r//;s/^\|$/"/g'|paste -sd, -
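A quick check with a few of the package names from the question, assuming GNU sed as above:
$ printf 'd3heatmap\ndata.table\nggplot2\n' | sed 's/^\|$/"/g' | paste -sd, -
"d3heatmap","data.table","ggplot2"
You can append this step to the end of your existing grep/cut/tr/sort pipeline.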
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/338116", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/136298/" ] }
338,123
I sometimes face a problem that I only want to search or install a single tiny package and I am on a very limited internet connection (say: mobile data in a remote place abroad). Being more of an emergency case, I wouldn't mind using the cached repository contents; the last update would have been just a week or two ago. Yet dnf insists on downloading several MB of metadata which takes forever and which I don't want, and earlier today it happened that I was just rendered unable to finish a simple search operation due to this. I tried running the command offline but it refuses to operate if the metadata is not current. Is there an option forcing the command to use the old data, or a configuration entry to change the period for how long it is considered valid?
If you look in the various dnf and yum repo config files you should find several explicit metadata expiry times, eg: /etc/yum.repos.d/fedora-updates.repo metadata_expire=6h/etc/dnf/dnf.conf metadata_expire=86400 You can override these on the dnf command line using --setopt= , but you must explicitly do it for every enabled repository, as well as the dnf main configuration. So you end up with something like sudo dnf --setopt=metadata_expire=-1 \ --setopt=fedora.metadata_expire=-1 \ --setopt=fedora-update.metadata_expire=-1 \ --setopt=rpmfusion-free.metadata_expire=-1 \ search abcdef Note the use of sudo to avoid dnf creating a separate cache for the user.
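If you want this behaviour to persist instead of passing --setopt on every run, the same option can go into the configuration files; a sketch (repo file names vary by system):
# /etc/dnf/dnf.conf
[main]
metadata_expire=-1
plus a corresponding metadata_expire=-1 line in each repo section under /etc/yum.repos.d/ that you care about.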
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/338123", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/42413/" ] }
338,126
I have this portable SSD drive that I am trying to format for use with my Raspberry Pi 3: https://www.amazon.com/gp/product/B00N0V4JG2 In the past I have used this exact product, but the 128GB version, formatted as FAT32 on my OSX machine, and the drive worked with no issues on the Pi. I'm using it store the Bitcoin blockchain. Now that the blockchain is too big I'm trying to replace the drive with a 512GB drive, and I am having no luck getting this thing to work! I first tried the OSX FAT32 format, but that didn't work. So I'm trying to format it with the Pi itself. Starting off with fdisk /dev/sda as sudo su with USB drive unmounted: /dev/sda1 2 1000215215 1000215214 477G b W95 FAT32 Then I go through the process of [d]elete, [n]ew, [w]rite: /dev/sda1 2048 1000215215 1000213168 477G 83 Linux but even after a partprobe AND a reboot, fdisk -l still reports no change: /dev/sda1 2 1000215215 1000215214 477G b W95 FAT32 ... am I doing anything wrong up to this point? I also went forward with mfks.ext4 /dev/sda1 and still don't see anything changing (I can post those logs too...) And when I run fsck it is a TOTAL BLOODBATH -- which is even more confusing! How can a freshly formatted, brand new file-system have so many errors? Stuff like this (selected examples out of hundreds): Inode 138789 has a extra size (30700) which is invalid Inode 138825 has a bad extended attribute block 17929510.Inode 138877 has compression flag set on filesystem without compression support.Inode 139153 has a extra size (6956) which is invalid Finally, when I attach the drive my OSX machine I can format it and use it and it works FINE. So I think the drive is not defective.
If you look in the various dnf and yum repo config files you should find several explicit metadata expiry times, eg: /etc/yum.repos.d/fedora-updates.repo metadata_expire=6h/etc/dnf/dnf.conf metadata_expire=86400 You can override these on the dnf command line using --setopt= , but you must explicitly do it for every enabled repository, as well as the dnf main configuration. So you end up with something like sudo dnf --setopt=metadata_expire=-1 \ --setopt=fedora.metadata_expire=-1 \ --setopt=fedora-update.metadata_expire=-1 \ --setopt=rpmfusion-free.metadata_expire=-1 \ search abcdef Note the use of sudo to avoid dnf creating a separate cache for the user.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/338126", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/66147/" ] }
338,146
I've come across a script that uses VAR1=${1:-8} VAR2=${2:-4} I can see from some other questions and playing with some code that VAR1=${VAR2:-8} will create VAR1 with a value of whatever VAR2 is, if it exists. If VAR2 is unset, then VAR1 will default to value 8, and VAR2 will remain unset. That is, after this command, echo VAR2 will not return anything. So then, my question is what the first line of code does. Since variable names cannot begin with numbers, VAR1 is clearly not being set to 1 or any variable named 1. Surely there is reason for this, and it's not just a bit of pointless obfuscation?
The variables used in ${1:-8} and ${2:-4} are the positional parameters $1 and $2 . These hold the values passed to the script (or shell function) on the command line. If they are not set or empty, the variable substitutions you mention will use the default values 8 and 4 (respectively) instead. This could possibly be used in a shell script or in a shell function that takes (at least) two command-line arguments, to which you'd like to provide default values if they are not provided. A script or shell function can take any number of arguments, and these may be referenced by using $1 , $2 , ..., in the script or function. To get the values of the positional parameters above 9, one needs to write ${10} , ${11} etc. Another useful variable substitution in this case is ${parameter:?word} which will display word as an error (and exit the script) if parameter is unset: $ cat script.sh#!/bin/bashvar1="${1:?Must provide command line argument}"printf 'I got "%s"\n' "$var1" $ ./script.shscript.sh: line 3: 1: Must provide command line argument $ ./script.sh "Hello world."I got "Hello world."
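A minimal script showing both defaults from the question in action (demo.sh is just an assumed file name):
#!/bin/bash
VAR1=${1:-8}
VAR2=${2:-4}
printf 'VAR1=%s VAR2=%s\n' "$VAR1" "$VAR2"
Running ./demo.sh prints VAR1=8 VAR2=4, while ./demo.sh 3 prints VAR1=3 VAR2=4, because only the missing positional parameters fall back to their defaults.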
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/338146", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/118923/" ] }
338,150
Devices can be mounted to a path. For example "/dev/sda1" can be mounted to "/home/user". What I don't understand is how and where the " / " is mounted during boot.Any help explaining?
During the boot of a Unix system, the kernel does a few things that it doesn't do during normal operation. One of these things is to mount a filesystem on the directory / ; this is quite different from normal mounting operations since the mounting is not triggered of a mount system call, and the target directory is not an existing directory. Another thing is to execute a program as PID 1 , which is different from normal operation since this creates a process without duplicating an existing process . The way this “magic” mounting of the root directory is very different in different Unix variants. The kernel chooses what device to mount based on configuration parameters that may be specified in a variety of ways: compile-time configuration, runtime configuration in the kernel image, runtime configuration in some predefined memory location, command line parameters, … To find how it works on your machine, you would need to look at the documentation of your Unix variant, and find how your machine is configured. To give an idea of how it works, here's an overview of how a modern Linux kernel operates. This is not the simplest example because Linux has a lot of history and varied use cases. Linux can start with a “special” filesystem attached to the path / , which consists of files stored in RAM. This special filesystem is called initramfs ; it's an instance of the rootfs filesystem type. The initramfs is populated by content either passed by the bootloader through an architecture-dependent protocol, compiled directly into the kernel image that is loaded into memory by the bootloader. Alternatively, Linux can mount a device to / that is part of a restricted (but large) set of volume types that are recognized by the initialization code in the kernel. Such device types include any filesystem on common types of partitions on common types of disks (anything vaguely SCSI-like, including ATA, USB, etc.), as well as RAM disks and NFS mounts. Depending on which path was taken, the initial root filesystem may later be shadowed or replaced by another. Shadowing is what happens with an initramfs, and that's how most desktop and server systems operate (embedded systems, on the other hand, often have a hard-coded root filesystem). Replacement is what happens with an initrd , which is a particular kind of RAM disk. The job of the initramfs or initrd is to load the drivers that provide the “real” root filesystem that will get used in normal operation.
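On a running Linux system you can see what the kernel was told to use for / by looking at its command line; the values below are only an example:
$ cat /proc/cmdline
BOOT_IMAGE=/vmlinuz-4.4.0-59-generic root=/dev/sda1 ro quiet
The root= parameter (possibly interpreted by the initramfs rather than the kernel itself) indicates which filesystem should end up mounted on /.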
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/338150", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/210993/" ] }
338,181
The following removes the surrounding quotes of the email address string: $ echo "[email protected]" | sed 's/"([^"]*)"/\0/g' [email protected] But if: $ cat ~/Desktop/emails.txt "[email protected]" $ sed 's/"([^"]*)"/\0/g' ~/Desktop/emails.txt "[email protected]" $ sed -i '' 's/"([^"]*)"/\0/g' ~/Desktop/emails.txt$ cat ~/Desktop/emails.txt "[email protected]" Trying to apply the exact same sed regex substitution using a file containing the same string does not work. What am I doing wrong?
I am sorry but your echo example does not work. It seems to work because double quote ( " ) are interpreted by bash and never passed to sed . Note the difference between the following two examples: $ echo "[email protected]" [email protected]$ echo "\"[email protected]\"" "[email protected]" Your echo command does not feed the " to sed so it seems to work because there are no " to remove in input string. If you try correctly escaping the " , the echo example will do not work as the file example: $ echo "\"[email protected]\"" | sed 's/"([^"]*)"/\0/g' "[email protected]" You sed command has two errors: You are using the extended regex syntax. You can use it only if you have gnu sed. The difference is in the way they use the parenthesis. You must count back-references starting by 1 . So the correct command is: echo "\"[email protected]\"" | sed 's/"\([^"]*\)"/\1/g' or, if your sed supports extended regex: echo "\"[email protected]\"" | sed -E 's/"([^"]*)"/\1/g'
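To fix the file in place, as you were originally trying to do, the corrected expression can be used with -i; note that GNU sed and BSD/macOS sed spell the in-place option slightly differently:
sed -i 's/"\([^"]*\)"/\1/g' ~/Desktop/emails.txt        # GNU sed
sed -i '' 's/"\([^"]*\)"/\1/g' ~/Desktop/emails.txt     # BSD/macOS sed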
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/338181", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/42132/" ] }
338,228
Recently, I started to use i3wm and fell in love with it. However, one thing bothers me: controlling more than 10 workspaces. In my config $mod+1 to $mod+9 switches between the workspaces 1 to 9 (and $mod+0 for 10), but sometimes 10 workspaces just aren't enough. At the moment I reach out to workspace 11 to 20 with $mod+mod1+1 to $mod+mod1+0 , i.e. hitting mod+alt+number . Of course this works without any problems, but it is quite a hassle to switch workspaces like that, since the keys aren't hit easily. Additionally, moving applications between workspaces 11 to 20 requires to mod+shift+alt+number -> ugly. In my Vim bindings (I have lots of plugins) I started to use double modifier shortcuts, like modkey + r for Plugin 1 and modkey + modkey + r for Plugin 2. This way I can bind every key twice and hitting the mod key twice is easy and fast. Can I do something similar in i3wm ? How do you make use of more than 10 workspaces in i3wm ? Any other solutions?
i3 does not really support key sequences like vim . Any key binding consists of a single key preceded by an optional list of distinct (so no Shift+Shift ) modifiers. And all of the modifiers need to be pressed down at the time the main key is pressed. That being said, there are two main ways to have a lot of workspaces without having to bind them to long lists of modifiers: 1. Dynamically create and access workspaces with external programs You can do not have to define a shortcut for every single workspace, you can just create them on the fly by sending a workspace NEW_WS to i3 , for example with the i3-msg program: i3-msg workspace NEW_WSi3-msg move container to workspace NEW_WS i3 also comes with the i3-input command, which opens a small input field then runs a command with the given input as parameter i3-input -F 'workspace %s' -P 'go to workspace: 'i3-input -F 'move container to workspace %s' -P 'move to workspace: ' Bind these these two commands to shortcuts and you can access an arbitrary number of workspaces by just pressing the shortcut and then entering the name (or number) of the workspace you want. (If you only work with numbered workspaces, you might want to use workspace number %s instead of just workspace %s ) 2. Statically bind workspaces to simple Shortcuts within key binding modes Alternatively, for a more static approach, you could use modes in your i3 configuration. You could have separate modes for focusing and moving to workspaces: set $mode_workspace "goto_ws"mode $mode_workspace { bindsym 1 workspace 1; mode "default" bindsym 2 workspace 2; mode "default" # […] bindsym a workspace a; mode "default" bindsym b workspace b; mode "default" # […] bindsym Escape mode "default"}bindsym $mod+w mode $mode_workspaceset $mode_move_to_workspace "moveto_ws"mode $mode_move_to_workspace { bindsym 1 move container to workspace 1; mode "default" bindsym 2 move container to workspace 2; mode "default" # […] bindsym a move container to workspace a; mode "default" bindsym b move container to workspace b; mode "default" # […] bindsym Escape mode "default"}bindsym $mod+shift+w mode $mode_move_to_workspace Or you could have separate bindings for focusing and moving within a single mode: set $mode_ws "workspaces"mode $mode_ws { bindsym 1 workspace 1; mode "default" bindsym Shift+1 move container to workspace 1; mode "default" bindsym 2 workspace 2; mode "default" bindsym Shift+2 move container to workspace 2; mode "default" # […] bindsym a workspace a; mode "default" bindsym Shift+a move container to workspace a; mode "default" bindsym b workspace b; mode "default" bindsym Shift+b move container to workspace b; mode "default" # […] bindsym Escape mode "default"}bindsym $mod+shift+w mode $mode_move_to_workspace In both examples the workspace or move commands are chained with mode "default" , so that i3 automatically returns back to the default key binding map after each command.
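As an illustration, the two i3-input commands could be bound like this (the keys are only suggestions; pick ones that don't clash with your config):
bindsym $mod+g exec --no-startup-id i3-input -F 'workspace %s' -P 'go to workspace: '
bindsym $mod+Shift+g exec --no-startup-id i3-input -F 'move container to workspace %s' -P 'move to workspace: '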
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/338228", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/95677/" ] }
338,248
I am currently studying for RHCSA and wish to practice on a system (vm) that is as near as possible to RHEL 7. I know about the free development version but as per Appendix 1 (Exhibit 1.C), my intended use does not qualify me for the free copy. I use Ubuntu for personal purposes and am unfamiliar with the subtle differences in the RHEL-like OSs. I am aware of CentOS, Fedora, Scientific Linux, and Oracle Linux, but my question is strictly for the purposes of practicing for the RHCSA exam - which one is the best match for RHEL 7 when considering the following tasks? A Red Hat® Certified System Administrator (RHCSA) is able to perform the following tasks: Understand and use essential tools for handling files, directories, command-line environments, and documentation Operate running systems, including booting into different run levels, identifying processes, starting and stopping virtual machines, and controlling services Configure local storage using partitions and logical volumes Create and configure file systems and file system attributes, such as permissions, encryption, access control lists, and network file systems Deploy, configure, and maintain systems, including software installation, update, and core services Manage users and groups, including use of a centralized directory for authentication Manage security, including basic firewall and SELinux configuration Another user asked a similar question back in 2012, but it was about an earlier RHEL version (and a lot can change in that time).
As mentioned in the previous question , CentOS is your best choice since it is derived from the sources of Red Hat Enterprise Linux (RHEL). Also as mentioned in their Technical and Release notes, almost all your required tasks are met in CentOS.
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/338248", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/162397/" ] }
338,280
I would assume the following would work: mkdir /tmp/chrootTest # Create chroot folder.mkdir /tmp/chrootTest/bin # Create `/bin` in the chroot folder.sudo mount -B /bin /tmp/chrootTest/bin/ # Bind-mount the real system's `/bin` to the chroot's `/bin`.sudo chroot /tmp/chrootTest/ /bin/bash # Execute (open) `/bin/bash` in the chroot. However the last command yields: chroot: failed to run command ‘/bin/bash’: No such file or directory I also tried copying /bin to /tmp/chrootTest/bin and giving it full permissions. However, this doesn't work either. I would not be entirely surprised to see an error message informing me that /bin/bash can't work in its very rudimentary chroot as other files can't be found. However, the error message printed is surprising as the file clearly exists. Why does this happen? What is necessary to successfully open a bash in a chroot?
If /bin/bash is a binary with shared library dependencies, these dependencies needs to be able to be resolved within the chroot. On my system: $ ldd $( command -v bash )/usr/local/bin/bash: Start End Type Open Ref GrpRef Name 0000115f08700000 0000115f08a0c000 exe 1 0 0 /usr/local/bin/bash 00001161f6a2e000 00001161f6c88000 rlib 0 1 0 /usr/lib/libtermcap.so.14.0 00001161bc41e000 00001161bc629000 rlib 0 1 0 /usr/local/lib/libintl.so.6.0 000011614b1de000 000011614b4dd000 rlib 0 2 0 /usr/local/lib/libiconv.so.6.0 00001161bd091000 00001161bd35b000 rlib 0 1 0 /usr/lib/libc.so.89.2 000011612ef00000 000011612ef00000 rtld 0 1 0 /usr/libexec/ld.so In contrast: $ ldd $( command -v sh )/bin/sh: Start End Type Open Ref GrpRef Name 000007ca3c446000 000007ca3c6c6000 dlib 1 0 0 /bin/sh I'm on OpenBSD. The format of the output of ldd will be different on a Linux system, but the same essential information (what libraries are shared, and where they are) ought to be displayed on Linux as well. When I try with a very simplistic chroot that only contains /bin/sh and /bin/bash ( doas is OpenBSD's " sudo replacement"): $ doas chroot -u kk t /bin/sh/bin/sh: No controlling tty (open /dev/tty: No such file or directory)/bin/sh: warning: won't have full job control$ /bin/bashAbort trap Notice that I do get a shell ( /bin/sh ), but that /bin/bash fails. The error is different from yours but it has, I assume, the same cause. Executing /bin/bash directly with the chroot command just gives a one-word "Abort" message, again, presumably due to the same issue with libraries. Conclusion: The chroot needs to contain at least a minimal installation of a system, including device files and libraries that are needed to run the executables within it. Explanation of the "No such file or directory" error on Linux: I was a bit confused as to why the error was "No such file or directory" on Linux, so I ran a test through strace . The execve() call that ought to have executed the shell returns ENOENT: execve("/bin/bash", ["/bin/bash"], [/* 13 vars */]) = -1 ENOENT (No such file or directory) ... so I thought it was something wrong with finding /bin/bash . However, upon reading the execve(2) manual, I saw: ENOENT The file filename or a script or ELF interpreter does not exist, or a shared library needed for file or interpreter cannot be found . So there you go.
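A rough, Linux-specific sketch of populating the chroot so that bash's dependencies resolve (GNU coreutils assumed; library paths differ between distributions):
mkdir -p /tmp/chrootTest/bin
cp /bin/bash /tmp/chrootTest/bin/
for lib in $(ldd /bin/bash | grep -o '/[^ ]*'); do cp --parents "$lib" /tmp/chrootTest/; done
sudo chroot /tmp/chrootTest /bin/bash
The loop copies every shared object that ldd reports, preserving its path inside the chroot, which is usually enough to get an interactive bash prompt.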
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/338280", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/147785/" ] }
338,292
I am using a computer system with the gnome desktop system, with which I am not able to work properly (resizing of windows does not work etc). Therefore I would like to install the unity-desktop. Is that possible on Redhat (Red Hat Enterprise Linux Workstation release 6.8 (Santiago))? And if so, how? A simple yum install unity did not find anything....
If /bin/bash is a binary with shared library dependencies, these dependencies needs to be able to be resolved within the chroot. On my system: $ ldd $( command -v bash )/usr/local/bin/bash: Start End Type Open Ref GrpRef Name 0000115f08700000 0000115f08a0c000 exe 1 0 0 /usr/local/bin/bash 00001161f6a2e000 00001161f6c88000 rlib 0 1 0 /usr/lib/libtermcap.so.14.0 00001161bc41e000 00001161bc629000 rlib 0 1 0 /usr/local/lib/libintl.so.6.0 000011614b1de000 000011614b4dd000 rlib 0 2 0 /usr/local/lib/libiconv.so.6.0 00001161bd091000 00001161bd35b000 rlib 0 1 0 /usr/lib/libc.so.89.2 000011612ef00000 000011612ef00000 rtld 0 1 0 /usr/libexec/ld.so In contrast: $ ldd $( command -v sh )/bin/sh: Start End Type Open Ref GrpRef Name 000007ca3c446000 000007ca3c6c6000 dlib 1 0 0 /bin/sh I'm on OpenBSD. The format of the output of ldd will be different on a Linux system, but the same essential information (what libraries are shared, and where they are) ought to be displayed on Linux as well. When I try with a very simplistic chroot that only contains /bin/sh and /bin/bash ( doas is OpenBSD's " sudo replacement"): $ doas chroot -u kk t /bin/sh/bin/sh: No controlling tty (open /dev/tty: No such file or directory)/bin/sh: warning: won't have full job control$ /bin/bashAbort trap Notice that I do get a shell ( /bin/sh ), but that /bin/bash fails. The error is different from yours but it has, I assume, the same cause. Executing /bin/bash directly with the chroot command just gives a one-word "Abort" message, again, presumably due to the same issue with libraries. Conclusion: The chroot needs to contain at least a minimal installation of a system, including device files and libraries that are needed to run the executables within it. Explanation of the "No such file or directory" error on Linux: I was a bit confused as to why the error was "No such file or directory" on Linux, so I ran a test through strace . The execve() call that ought to have executed the shell returns ENOENT: execve("/bin/bash", ["/bin/bash"], [/* 13 vars */]) = -1 ENOENT (No such file or directory) ... so I thought it was something wrong with finding /bin/bash . However, upon reading the execve(2) manual, I saw: ENOENT The file filename or a script or ELF interpreter does not exist, or a shared library needed for file or interpreter cannot be found . So there you go.
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/338292", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/31853/" ] }
338,306
I'm wondering what the FHS compliant mount points for internal harddrives and networkshares are? Many different tutorials are suggesting to mount them in subdirectories to /mnt or /media According to the FHS 3.0 (File Hierarchy Standard): /media : Mount point for removable media( This directory contains subdirectories which are used as mount points for removable media such as floppy disks, cdroms and zip disks. ) /mnt : Mount point for a temporarily mounted filesystem ( This directory is provided so that the system administrator may temporarily mount a filesystem as needed. The content of this directory is a local issue and should not affect the manner in which any program is run ) I assume that those mount points could go to /home/foo/extdrive /home/foo/nfsshare for a single user system, but where would I mount them accessible for all users? Update: FHS 3.0, Chapter 3.1, second "Rationale" paragraph new directory in / (ie /workspace and /nfsshare ) There are several reasons why creating a new subdirectory of the root filesystem is prohibited:It demands space on a root partition which the system administrator may want kept small and simple for either performance or security reasons.It evades whatever discipline the system administrator may have set up for distributing standard file hierarchies across mountable volumes.Distributions should not create new directories in the root hierarchy without extremely careful consideration of the consequences including for application portability.
You make your own mount point directories. If you want to ask why, I can only point to the great answer by Wouter Verhelst . Internal drives /mnt is a valid place to make your own if you like, and so is / . /mnt may have been used for this purpose by some historical installation systems, as well as for removable media (before /media ). It's still valid for you to do so, but the system itself is no longer supposed to set up anything in /mnt . I think it's reasonable to use /mnt if you might make multiple mount points. It makes it easy to see all of them together, and it's known as one of the locations people like to use. Some other people like to use /Volumes - following the OS X system, or /vol . /data is common for a single mount point. /d/ is also used. /disk/ is almost certainly used by some, but may be distracting for storage which is not disk-based. If you use /mnt, I would also create /mnt/tmp. Then there will still be a convenient directory for temporary mounts, the original use of /mnt which FHS mentions. Preferred mount points for internal HDDs It's possible that manually creating mount points under /media is a bad idea on some common systems. Modern Linux OS's will create mount points for removable media automatically, and it's possible the structure they create would conflict, or simply appear inconsistent with your own. You don't say what your system is, but you may be interested in portable guidelines, especially if you're asking about FHS. Note this reasoning is similar to why the FHS says the OS must not populate /mnt. Mount point for system-wide USB disk Network filesystems It is sometimes recommended to mount network filesystems in a dedicated sub-directory e.g. /n/host , /nfs/host or /net/host etc. For example, if you mount a network filesystem at /host and the network becomes unreachable, ls / may hang when it tries to stat the network filesystem. This could be undesirable and frustrating, at a time when you are already becoming frustrated.
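If you do settle on such mount points, the corresponding /etc/fstab entries might look roughly like this (the UUID and the NFS server name are placeholders):
UUID=0a1b2c3d-0000-0000-0000-000000000000  /mnt/data        ext4  defaults          0 2
fileserver:/export                         /nfs/fileserver  nfs   defaults,_netdev  0 0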
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/338306", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/53278/" ] }
338,325
I am trying to add the line text += num.toString(16); after each line, using sed. My approach is: Replace every new line with a new line, plus text += num.toString(16); . That is: sed 's/\/\text \+= num\.toString\(16\);/g' But I couldn't get this working. I am getting unterminated substitute pattern from sed. What is wrong here? I am using BSD sed.
You can't use
s/\
/
to match a newline in the pattern space with the s/pattern/replacement/ form. It is implementation-dependent how that pattern is interpreted: GNU sed treats it as a literal newline, but BSD sed doesn't accept it and raises the error you saw. Generally, you can't match the newline at the end of an input line, but you can use \n to match a newline that appears in the pattern space as the result of the N command. The right way, POSIXLY:
sed 'a\
text += num.toString(16);'
or:
sed 's/$/\
text += num.toString(16);/'
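A quick demonstration of the appended line (any two input lines will do):
$ printf 'line1\nline2\n' | sed 'a\
text += num.toString(16);'
line1
text += num.toString(16);
line2
text += num.toString(16);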
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/338325", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/106637/" ] }
338,353
I'm trying to search all files within a directory by name and output the files' content onto the shell. Currently I'm only getting a list of files: find -name '.htaccess' -type f./dir1/.htaccess./dir23/folder/.htaccess... But how can I output the content of each file instead? I thought of something like piping the filename to the cat command.
Use cat within the -exec predicate of find : find -name '.htaccess' -type f -exec cat {} + This will output the contents of the files, one after another.
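If you also want to see which file each block of output came from, one variant is to print the path before its contents:
find -name '.htaccess' -type f -print -exec cat {} \;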
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/338353", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/62994/" ] }
338,362
I have a file (called version ) that contains the text: version=31 I want a bash script to check if the file contains: version=31 . If it does, then continue with the script; if not, exit and present a message: Image is not Image 31 . How way I accomplish that?
Use cat within the -exec predicate of find : find -name '.htaccess' -type f -exec cat {} + This will output the contents of the files, one after another.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/338362", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/211149/" ] }
338,410
I have the Ecplise Platform (the programming environment, see https://eclipse.org/ ) on my system. It can be run by typing "eclipse" into the terminal. Now I installed eclipse prolog (see http://www.eclipseclp.org/ ). I followed the instructions from http://eclipseclp.org/Distribution/Current/6.1_224_x86_64_linux/Readme.txt ) and now I want to start it. In these instructions they say that it can be run by typing "eclipse" into the terminal. But if I do that, only the Eclipse programming environment starts, not the eclipse prolog thingy. What do I do now? I am using Linux Mint 17, 64 bit.
Figure out where the new eclipse is installed, and don't just enter eclipse but the full path: /where/the/new/eclipse/is/installed/bin/eclipse If this new eclipse becomes your first choice, you may want to define an alias in your startup files (e.g. .profile for sh ): alias eclipse=/where/the/new/eclipse/is/installed/bin/eclipse Now, if you enter eclipse , the new one will be run. To execute the old one, you will have to specify its full path. You can even define two aliases, one for each eclipse : alias eprolog=/where/the/new/eclipse/is/installed/bin/eclipsealias eplatform=/where/the/old/eclipse/is/installed/bin/eclipse ... and enter either eprolog or eplatform at the shell prompt.
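To check which command a given name currently resolves to in bash, before and after adding the aliases:
type -a eclipse
This lists aliases, functions and every matching file in your PATH, in the order the shell would use them.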
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/338410", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/211177/" ] }
338,444
I have a Dir1 with multiple subdirectories and files inside it. I intend to copy Dir1 to Dir2 so that all the files in it will be just empty files but with the same file names as in Dir1. Then I intend to push Dir2 to github to explain the example data-structure and filenames to users. Is there a command to copy files in such a way that the destination files are empty but keep the same filenames?
Or more complicatedly but with a single filesystem pass (for even more portability ~ should be written as $HOME ) find . \( -type d -exec mkdir -p "~/elsewhere/{}" \; \ -o -type f -exec touch "~/elsewhere/{}" \; \) The complexity here is that of Boolean logic (which may be of some benefit to learn) and precedence (also good to know) and how find implements these concepts with an implicit AND between the -type and subsequent action, and OR making an appearance as -o .
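An equivalent two-pass variant that some people find easier to read (GNU find assumed, because {} is embedded inside a larger argument; the destination path is a placeholder):
cd ~/Dir1 &&
find . -type d -exec mkdir -p ~/elsewhere/{} \; &&
find . -type f -exec touch ~/elsewhere/{} \;
The first find recreates the directory tree, the second creates an empty file for every regular file.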
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/338444", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/95284/" ] }
338,510
I have a CSV file as input.csv"1_1_0_0_76""1_1_0_0_77""1_1_0_0_78""1_1_0_0_79""1_1_0_0_80""1_1_0_0_81""1_1_0_0_82""1_1_0_0_83""1_1_0_0_84""1_1_0_0_85" ............. and so on. I need to convert this CSV file into result.csv 1,1,0,0,761,1,0,0,771,1,0,0,781,1,0,0,791,1,0,0,801,1,0,0,811,1,0,0,821,1,0,0,831,1,0,0,841,1,0,0,85
Far simpler way is to use tr $ tr '_' ',' < input.csv | tr -d '"' 1,1,0,0,761,1,0,0,771,1,0,0,78 The way this works is that tr takes two arguments - set of characters to be replaced, and their replacement. In this case we only have sets of 1 character. We redirect input.csv input tr 's stdin stream via < shell operator, and pipe the resulting output to tr -d '"' to delete double quotes. But awk can do it too. $ cat input.csv"1_1_0_0_76""1_1_0_0_77""1_1_0_0_78"$ awk '{gsub(/_/,",");gsub(/"/,"")};1' input.csv1,1,0,0,761,1,0,0,771,1,0,0,78 The way this works is slightly different: awk reads each file line by line, each in-line script being /Pattern match/{ codeblock}/Another pattern/{code block for this pattern} . Here we don't have a pattern, so it means to execute codeblock for each line. gsub() function is used for global substitution within a line, thus we use it to replace underscores with commas, and double quotes with a null string (effectively deleting the character). The 1 is in place of the pattern match with missing code block, which defaults simply to printing the line; in other words the codeblock with gsub() does the job and 1 prints the result. Use the shell redirection ( > ) to send output to a new file: awk '{gsub(/_/,",");gsub(/"/,"")};1' input.csv > output.csv
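If you prefer a single command, sed can do both substitutions in one pass (works with both GNU and BSD sed):
sed 's/_/,/g; s/"//g' input.csv > output.csv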
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/338510", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/203943/" ] }
338,524
I have a CSV file like a.csv"1,2,3,4,9""1,2,3,6,24""1,2,6,8,28""1,2,4,6,30" I want something like b.csv1,2,3,4,91,2,3,6,241,2,6,8,281,2,4,6,30 I tried awk '{split($0,a,"\""); But did not help.Any help is appreciated.
Use gsub() function for global substitution $ awk '{gsub(/"/,"")};1' input.csv 1,2,3,4,91,2,3,6,241,2,6,8,281,2,4,6,30 To send output to new file use > shell operator: awk '{gsub(/"/,"")};1' input.csv > output.csv Your splitting to array approach also can be used, although it's not necessary, but you can use it as so: $ awk '{split($0,a,/"/); print a[2]}' input.csv 1,2,3,4,91,2,3,6,241,2,6,8,281,2,4,6,30 Note that in this particular question the general pattern is that quotes are in the beginning and end of the line, which means we can also treat that as field separator, where field 1 is null, field 2 is 1,2,3,4 , and field 3 is also null. Thus, we can do: awk -F '"' '{print $2}' input.csv And we can also take out substring of the whole line: awk '{print substr($0,2,length()-2)}' quoted.csv Speaking of stripping first and last characters, there's a whole post on stackoverflow about that with other tools such as sed and POSIX shell.
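Since the only change needed here is removing the quote characters, a plain tr is also enough if you don't need awk for anything else:
tr -d '"' < input.csv > output.csv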
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/338524", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/192624/" ] }
338,535
My Google searches were of little help, so I am asking here. My Ubuntu server now has many duplicated entries in both files ~/.ssh/authorized_keys and ~/.ssh/known_hosts. I wonder if there is a command/utility to remove those duplicated lines and list them once only?
The commandline utilities are called uniq and sort . You can simply pipe the file through them to get only unique entries: sort ~/.ssh/authorized_keys | uniq > ~/.ssh/authorized_keys.uniq and then replace the old file with the new one: mv ~/.ssh/authorized_keys{.uniq,} The ~/.ssh/known_hosts are handled by ssh itself and should not contain any duplicates (if you modified it by hand, it can and then you can use the same approach as above).
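Note that sort | uniq reorders the file. If you would rather keep the keys in their original order, this common awk idiom drops duplicates while preserving order:
awk '!seen[$0]++' ~/.ssh/authorized_keys > ~/.ssh/authorized_keys.uniq
mv ~/.ssh/authorized_keys{.uniq,}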
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/338535", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/17671/" ] }
338,570
Using this answer, I have created a symbolic link in my .bashrc file to make changing to a frequently used directory easier. E.g. ln -s ~/a/b/c/d/development dev I can change directory from my home dir to the development dir by entering cd dev . I can also enter ls dev from my home dir and that works too. However, these commands only work in my home dir. If I enter them from anywhere else I get an error telling me No such file or directory . If I enter cd ~/dev or ls ~/dev it works. Can someone explain why that is and how I can fix it so I don't have to include ~/ in the path when I'm not in my home dir.
Since you’re using Bash as your shell, you can use the CDPATH shellvariable. The Bash manual describes it as a search path: each directory name in CDPATH is searched for directory, with alternative directory names in CDPATH separated by a colon (‘:’) You could add the following line to your .bashrc : CDPATH=".:$HOME" If you later type cd dev , the current working directory would be searched for a sub-directory named dev : If such a directory exists, it changes into that directory (as the cd builtin command usually works). If not, it would then search your home directory ( ~ ), find the symbolic link (realise that it’s a link to a directory) and change to the target directory (pointed to by ~/dev ). If you wanted to give preference to the directories within your home directory, you could list $HOME first in your CDPATH ( "$HOME:." ) but I would strongly advise against that as it breaks the principle of least surprise : the resulting behaviour differs too greatly from the standard.
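A short interactive illustration (the path shown is just an example of what cd prints):
$ CDPATH=".:$HOME"
$ cd dev
/home/you/dev
When cd finds the directory through a CDPATH entry other than the current directory, it prints the directory it changed to, which is a handy confirmation that the search path was used.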
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/338570", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/52183/" ] }
338,628
I'm trying to install awesome 4.0 . To install all the dependencies I ran sudo apt-get build-dep awesome . If I run make in my awesome directory there are some libs still missing: $ makeRunning cmake…-- git not found.-- asciidoc -> /usr/bin/asciidoc-- xmlto -> /usr/bin/xmlto-- gzip -> /bin/gzip-- ldoc -> /usr/bin/ldoc-- convert -> /usr/bin/convert-- Checking for modules 'glib-2.0;gdk-pixbuf-2.0;cairo;x11;xcb-cursor;xcb-randr;xcb-xtest;xcb-xinerama;xcb-shape;xcb-util>=0.3.8;xcb-keysyms>=0.3.4;xcb-icccm>=0.3.8;xcb-xkb;xkbcommon;xkbcommon-x11;cairo-xcb;libstartup-notification-1.0>=0.10;xproto>=7.0.15;libxdg-basedir>=1.0.0;xcb-xrm'-- No package 'xcb-xrm' foundCMake Error at /usr/share/cmake-3.5/Modules/FindPkgConfig.cmake:367 (message): A required package was not foundCall Stack (most recent call first): /usr/share/cmake-3.5/Modules/FindPkgConfig.cmake:532 (_pkg_check_modules_internal) awesomeConfig.cmake:153 (pkg_check_modules) CMakeLists.txt:17 (include) I checked which package I have to install to close this gap apt-cache search xcb-xrm but I got no results. Then I checked the dependencies list from awesome, there is only a entry xcb-util-xrm so I was looking for apt-cache search xcb-util-xrm`. I got also no results. How to install the missing library? $ lsb_release -aNo LSB modules are available.Distributor ID: UbuntuDescription: Ubuntu 16.04.1 LTSRelease: 16.04Codename: xenial
As mentioned by steeldriver, the package is not available until 16.10. One option is to built it manually from source ( github ) A Second option would be to get it from a 3rd party ppa sudo add-apt-repository ppa:aguignard/ppasudo apt-get updatesudo apt-get install xcb-util-xrm
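Either way, you can verify afterwards that the build will find it, since awesome's CMake check goes through pkg-config:
pkg-config --exists xcb-xrm && echo "xcb-xrm found"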
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/338628", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/198795/" ] }
338,635
I have a list of words. For example: a=(ENCFF002CDP ENCFF002COQ ENCFF002DAJ ENCFF002DCM) and I want to run all possible combinations of them and with a tool, like: bedtools intersect -a ENCFF002CDP -b ENCFF002COQ > ENCFF002CDP.ENCFF002COQ.intersected bedtools intersect -a ENCFF002CDP -b ENCFF002DAJ > ENCFF002CDP.ENCFF002DAJ.intersected etc. for all the possible combinations. How can I do this?
declare -a encode_ids=(ENCFF002CDP ENCFF002COQ ENCFF002DAJ ENCFF002DCM) for (( i = 0; i < ${#encode_ids[@]}; ++i )); do for (( j = i + 1; j < ${#encode_ids[@]}; ++j )); do bedtools intersect -a "${encode_ids[i]}" -b "${encode_ids[j]}" \ >"${encode_ids[i]}.${encode_ids[j]}".intersected donedone The double loop above will give you all combinations of the given IDs, but will leave out combinations of the same ID with itself as well as avoiding combining ID A with B if the combination B with A has already been used. The example array will result in the following bedtool runs: bedtools intersect -a ENCFF002CDP -b ENCFF002COQ >ENCFF002CDP.ENCFF002COQ.intersectedbedtools intersect -a ENCFF002CDP -b ENCFF002DAJ >ENCFF002CDP.ENCFF002DAJ.intersectedbedtools intersect -a ENCFF002CDP -b ENCFF002DCM >ENCFF002CDP.ENCFF002DCM.intersectedbedtools intersect -a ENCFF002COQ -b ENCFF002DAJ >ENCFF002COQ.ENCFF002DAJ.intersectedbedtools intersect -a ENCFF002COQ -b ENCFF002DCM >ENCFF002COQ.ENCFF002DCM.intersectedbedtools intersect -a ENCFF002DAJ -b ENCFF002DCM >ENCFF002DAJ.ENCFF002DCM.intersected
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/338635", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/136099/" ] }
338,644
Environment : I am running Fedora 24 with LXC (2.0.6) containers inside and SELinux enabled. Problem : Setting up Linux containers and starting them is all fine, and the same holds for attaching to a container via lxc-attach except for those containers that have been started by the autostart-feature of LXC ( lxc.start.auto = 1 in its config file). For any auto-started container I cannot attach to, I can lxc-stop and lxc-start it again, whereupon I can directly lxc-attach to that container. What I've tried : I already checked the proposed solution from this bug-report , which consists of dnf-installing the container-selinux extensions, and adding the right label ( container_runtime_exec_t ) to the executables /usr/bin/lxc-* which includes lxc-attach . Though also proposed as possible solution, I have not added a label to the context of the base folder for my root-filesystems of all containers ( chcon -Rt container_var_lib_t /var/lib/lxc ). Output : With manually started containers I have no problems to attach ( lxc-attach -n name_of_container ), but when I try to attach to a container which has been started automatically at system-boot, I get a message on the terminal like lxc-attach: attach.c: lxc_attach_run_shell: 1325 Permission denied - failed to exec shell while in the /var/log/audit/audit.log file I get a message like type=AVC msg=audit(1484836169.882:2969): avc: denied { entrypoint } for pid=7867 comm="lxc-attach" path="/bin/dash" dev="sda3" ino=10289 scontext=system_u:system_r:unconfined_service_t:s0 tcontext=unconfined_u:object_r:unlabeled_t:s0 tclass=file permissive=0 If I look up the labels of the processes that run my containers ( ps -eZ | grep lxc ), I get system_u:system_r:unconfined_service_t:s0 2794 ? 00:00:00 lxc-autostart for auto-started containers, as compared with unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023 6399 ? 00:00:00 lxc-start for manually started containers. My question : I'm a bit too new to SELinux, but what I can see from the output above is that the context in which the container runs after it has been started on system-boot is quite different than the context my lxc-attach runs in (the former context starts with scontext=system_u:... while the my current context would is tcontext=unconfined_u:... from the audit.log above). That is why I have to ask someone to explain to me: What kind of mismatch causes this permission-denied? And: Can I fix this?
declare -a encode_ids=(ENCFF002CDP ENCFF002COQ ENCFF002DAJ ENCFF002DCM) for (( i = 0; i < ${#encode_ids[@]}; ++i )); do for (( j = i + 1; j < ${#encode_ids[@]}; ++j )); do bedtools intersect -a "${encode_ids[i]}" -b "${encode_ids[j]}" \ >"${encode_ids[i]}.${encode_ids[j]}".intersected donedone The double loop above will give you all combinations of the given IDs, but will leave out combinations of the same ID with itself as well as avoiding combining ID A with B if the combination B with A has already been used. The example array will result in the following bedtool runs: bedtools intersect -a ENCFF002CDP -b ENCFF002COQ >ENCFF002CDP.ENCFF002COQ.intersectedbedtools intersect -a ENCFF002CDP -b ENCFF002DAJ >ENCFF002CDP.ENCFF002DAJ.intersectedbedtools intersect -a ENCFF002CDP -b ENCFF002DCM >ENCFF002CDP.ENCFF002DCM.intersectedbedtools intersect -a ENCFF002COQ -b ENCFF002DAJ >ENCFF002COQ.ENCFF002DAJ.intersectedbedtools intersect -a ENCFF002COQ -b ENCFF002DCM >ENCFF002COQ.ENCFF002DCM.intersectedbedtools intersect -a ENCFF002DAJ -b ENCFF002DCM >ENCFF002DAJ.ENCFF002DCM.intersected
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/338644", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/202140/" ] }
338,650
I know that the system call interface is implemented on a low level and hence architecture/platform dependent, not "generic" code. Yet, I cannot clearly see the reason why system calls in Linux 32-bit x86 kernels have numbers that are not kept the same in the similar architecture Linux 64-bit x86_64? What is the motivation/reason behind this decision? My first guess has been that a backgrounding reason has been to keep 32-bit applications runnable on a x86_64 system, so that via an reasonable offset to the system call number the system would know that user-space is 32-bit or 64-bit respectively. This is however not the case. At least it seems to me that read() being system call number 0 in x86_64 cannot be aligned with this thought. Another guess has been that changing the system call numbers might have a security/hardening background, something I was not able to confirm myself. Being ignorant to the challenges of implementation the architecture-dependent code parts, I still wonder how changing the system call numbers , when there seems no need (as even a 16-bit register would store largely more then the currently ~346 numbers to represent all calls), would help to achieve anything, other than break compatibility (though using the system calls through a library, libc, mitigates it).
As for the reasoning behind the specific numbering, which does not match any other architecture [except "x32" which is really just part of the x86_64 architecture]: In the very early days of the x86_64 support in the linux kernel, before there were any serious backwards compatibility constraints, all of the system calls were renumbered to optimize it at the cacheline usage level. I don't know enough about kernel development to know the specific basis for these choices, but apparently there is some logic behind the choice to renumber everything with these particular numbers rather than simply copying the list from an existing architecture and remove the unused ones. It looks like the order may be based on how commonly they are called - e.g. read/write/open/close are up front. Exit and fork may seem "fundamental", but they're each called only once per process. There may also be something going on about keeping system calls that are commonly used together within the same cache line (these values are just integers, but there's a table in the kernel with function pointers for each one, so each group of 8 system calls occupies a 64-byte cache line for that table)
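You can see the renumbering for yourself by comparing the syscall tables that the kernel headers install; the header locations vary by distribution, but on many systems something like this works:
grep -wE '__NR_(read|write|open|close)' /usr/include/asm/unistd_64.h
grep -wE '__NR_(read|write|open|close)' /usr/include/asm/unistd_32.h
The 64-bit table starts with read/write/open/close at 0-3, while the old 32-bit table has them at 3-6.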
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/338650", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/24394/" ] }
338,667
An answer I gave to a question , and the comments to it, had me read the POSIX Conformance section of the Base Definitions to figure out whether /dev/stdin , /dev/stdout and /dev/stderr were actually needed for conformance to the POSIX standard. It turns out they are not: The system may provide non-standard extensions. These are features not required by POSIX.1-2008 and may include, but are not limited to: [...] Additional character special files with special properties (for example, /dev/stdin , /dev/stdout , and /dev/stderr ) As far as I can find, this is the only mentioning of these files in the standard. I have access to only one "system" (environment, really) which does not implement them, and that's MinGW on Windows (no /dev at all as far as I can see). As far as I know, all the free Unices have them, and so does Cygwin, Windows' new Linux environment and Darwin/macOS. I'm not well versed with the commercial Unices though. Is there a POSIX system, Unix, or a Unix-like environment of some description, alive today, that does not implement /dev/stdin , /dev/stdout , and /dev/stderr as files in the filesystem?
Is there a POSIX system, Unix, or a Unix-like environment of some description, alive today, that does not implement /dev/stdin, /dev/stdout, and /dev/stderr as files in the filesystem? Yes, at least per my example system below. I'm not an expert in this system by any means; however, AIX 6.1, which wikipedia claims is: one of five commercial operating systems that have versions certified to The Open Group's UNIX 03 standard ( https://en.wikipedia.org/wiki/IBM_AIX ) does not appear to implement those file descriptors in the installation I have access to. As you can see, if using bash, it will behave as if they did exist for the purposes of redirection: $ uname -sAIX$ echo $SHELL/usr/bin/ksh$ ls -al /dev/stdinls: 0653-341 The file /dev/stdin does not exist.$ ls -al /dev/stdoutls: 0653-341 The file /dev/stdout does not exist.$ ls -al /dev/stderrls: 0653-341 The file /dev/stderr does not exist.$ echo foo >/dev/stderrThe file access permissions do not allow the specified action.ksh: /dev/stderr: 0403-005 Cannot create the specified file.$ bashbash-4.2$ ls /dev/stderrls: 0653-341 The file /dev/stderr does not exist.bash-4.2$ echo foo >/dev/stderrfoo As other commenters have mentioned, the following questions provide some interesting information as well: Portability of "> /dev/stdout" Portability of file descriptor links
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/338667", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/116858/" ] }
338,687
I want to run Pulseaudio system wide, on a headless CentOS 7 server. Pulseaudio works great if I startx but in text mode (runlevel 3 equivalent) pulse audio clients fail. So I created a systemd service file: [Unit]Description=PulseAudio Daemon[Install]WantedBy=multi-user.target[Service]Type=simplePrivateTmp=trueExecStart=/usr/bin/pulseaudio --system --realtime --disallow-exit --no-cpu-limit which starts, but I see the following in journalctl: Jan 19 13:31:47 lserver.mydomain pulseaudio[2523]: W: [pulseaudio] main.c: Running in system mode, but --disallow-module-loading not set!Jan 19 13:31:47 lserver.mydomain pulseaudio[2523]: N: [pulseaudio] main.c: Running in system mode, forcibly disabling SHM mode!Jan 19 13:31:47 lserver.mydomain pulseaudio[2523]: N: [pulseaudio] main.c: Running in system mode, forcibly disabling exit idle time!Jan 19 13:31:47 lserver.mydomain pulseaudio[2523]: W: [pulseaudio] main.c: OK, so you are running PA in system mode. Please note that you most likely shouldn't be doing that.Jan 19 13:31:47 lserver.mydomain pulseaudio[2523]: W: [pulseaudio] main.c: If you do it nonetheless then it's your own fault if things don't work as expected.Jan 19 13:31:47 lserver.mydomain pulseaudio[2523]: W: [pulseaudio] main.c: Please read http://www.freedesktop.org/wiki/Software/PulseAudio/Documentation/User/WhatIsWrongWithSystemWide/ for an explanation why system mode is usually a bad idea.Jan 19 13:31:48 lserver.mydomain pulseaudio[2523]: N: [pulseaudio] alsa-util.c: Disabling timer-based scheduling because running inside a VM.Jan 19 13:31:48 lserver.mydomain pulseaudio[2523]: N: [pulseaudio] alsa-util.c: Disabling timer-based scheduling because running inside a VM.Jan 19 13:31:48 lserver.mydomain pulseaudio[2523]: W: [pulseaudio] authkey.c: Failed to open cookie file '/var/run/pulse/.config/pulse/cookie': No such file or directoryJan 19 13:31:48 lserver.mydomain pulseaudio[2523]: W: [pulseaudio] authkey.c: Failed to load authentication key '/var/run/pulse/.config/pulse/cookie': No such file or directoryJan 19 13:31:48 lserver.mydomain pulseaudio[2523]: W: [pulseaudio] authkey.c: Failed to open cookie file '/var/run/pulse/.pulse-cookie': No such file or directoryJan 19 13:31:48 lserver.mydomain pulseaudio[2523]: W: [pulseaudio] authkey.c: Failed to load authentication key '/var/run/pulse/.pulse-cookie': No such file or directoryJan 19 13:31:48 lserver.mydomain systemd[1]: Got message type=signal sender=org.freedesktop.DBus destination=n/a object=/org/freedesktop/DBus interface=org.freedesktop.DBus member=NameOwnerChanged cookie=56 reply_cookie=0 error=n/aJan 19 13:31:48 lserver.mydomain systemd-logind[575]: Got message type=signal sender=org.freedesktop.DBus destination=n/a object=/org/freedesktop/DBus interface=org.freedesktop.DBus member=NameOwnerChanged cookie=56 reply_cookie=0 error=n/aJan 19 13:31:48 lserver.mydomain systemd-logind[575]: Got message type=signal sender=org.freedesktop.DBus destination=n/a object=/org/freedesktop/DBus interface=org.freedesktop.DBus member=NameOwnerChanged cookie=57 reply_cookie=0 error=n/aJan 19 13:31:48 lserver.mydomain systemd[1]: Got message type=signal sender=org.freedesktop.DBus destination=n/a object=/org/freedesktop/DBus interface=org.freedesktop.DBus member=NameOwnerChanged cookie=57 reply_cookie=0 error=n/a and attempts to access audio fail: speaker-test 1.1.1Playback device is defaultStream parameters are 48000Hz, S16_LE, 1 channelsUsing 16 octaves of pink noiseALSA lib pulse.c:243:(pulse_connect) PulseAudio: Unable to connect: Access 
denied Playback open error: -111,Connection refused And after speaker-test the log shows: Jan 19 14:06:39 lserver.ocg.ca pulseaudio[2795]: W: [pulseaudio] protocol-native.c: Denied access to client with invalid authentication data. User root is added to the 'audio' group. And speaker-test is being run as root. Can someone suggest how to fix this?
After some experimentation I found that modifying /etc/pulse/system.pa to permit anonymous access fixes it: load-module module-native-protocol-unix auth-anonymous=1 The audio now plays fine. Hope this helps others needing system-mode PulseAudio.
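For reference, here is a minimal sketch of the relevant part of /etc/pulse/system.pa. The auth-anonymous=1 argument lets any local client connect without a cookie; the auth-group= variant shown as a comment is an assumption on my part (restricting access to one group is how several system-mode guides do it) and should be checked against the module-native-protocol-unix documentation before relying on it:

    ### Native protocol for local clients (system mode)
    load-module module-native-protocol-unix auth-anonymous=1
    # possibly tighter alternative (verify first): allow only members of the audio group
    #load-module module-native-protocol-unix auth-group=audio

After editing the file, restart the systemd service you created so the change is picked up.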
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/338687", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/23091/" ] }
338,699
Take a look at these attempts: $ case `true` in 0) echo success ;; *) echo fail ;; esacfail$ if `true` ; then> echo "success"> else > echo "fail"> fisuccess Now, why is the case statement failing? You might wonder why I don't just use the if statement and I shall explain. My command is complex and might return different return codes which I want to act on. I don't want to run the command multiple times and I can't do: my_commandres = $?case $? in...esac This is because I use set -e in my script and therefore if my_command returns failure the script aborts. But I have a workaround... set +emy_commandres=$?set -ecase $? in ...esac But this is ugly, so returning to my initial question... why can't I just use the case my_command in ... esac version?
You can't use case $(somecommand) in ... to test the exit status of somecommand because the command substitution expands to the output of the command, not its exit status. Using $(true) doesn't work since true doesn't produce any output on standard output. You could do { somecommand; err="$?"; } || truecase $err in 0) echo success ;; *) echo failesac This will stop the script running under errexit ( -e ) from exiting. From the bash manual (from the description of set -e ): The shell does not exit if the command that fails ispart of the command list immediately following a whileor until keyword, part of the test following the if orelif reserved words, part of any command executed in a&& or || list except the command following the final &&or || , any command in a pipeline but the last, or if thecommand's return value is being inverted with !. In this case, with only the possibilities to either succeed or to fail, it would be easier to just do if somecommand; then echo successelse echo failfi
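If the real command can return several different exit codes that all need separate handling (which is what the question describes), here is a minimal sketch of the same capture trick under set -e; my_command and the specific status values are placeholders, not anything defined above:

    set -e
    my_command && err=0 || err=$?   # the || branch keeps errexit from aborting the script
    case $err in
        0) echo success ;;
        2) echo "recoverable failure" ;;
        *) echo "failed with status $err" ;;
    esac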
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/338699", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/26082/" ] }
338,723
Suppose I install Debian and my Internet connection goes down. The install works OK, but the step that sets up the apt mirror from the list gives an error because the Internet is down, so I continue the install without a mirror (apt/sources.list contains only the cdrom entry). Now the Internet works again. How do I set up the Debian mirror after installation? I know how to edit sources.list with vi, but I want the menu with the mirror list selection.
You just want some mirror or the closest/fastest mirror. If it's the latter, then you could just install netselect-apt and run it. I just ran to see which are the fastest form my geographical location and it said - [$] sudo netselect-apt testing................ The fastest 10 servers seem to be: http://mirrors.ispros.com.bd/debian/ http://ftp.sg.debian.org/debian/ http://mirrors.apu.edu.my/debian/ http://ftp.iinet.net.au/debian/debian/ http://debian.mirror.cambrium.nl/debian/ http://mirror.sax.uk.as61049.net/debian/ http://ftp.uk.debian.org/debian/ http://mirror.vorboss.net/debian/ http://mirror.1000mbps.com/debian/ http://ftp.antik.sk/debian/ Of the hosts tested we choose the fastest valid for HTTP: http://mirrors.ispros.com.bd/debian/ Writing sources.list. sources.list exists, moving to sources.list.1484862805 Done.[$] cat sources.list.1484862805 1 # Debian packages for testing 2 deb http://debian.ec.as6453.net/debian/ testing main contrib 3 # Uncomment the deb-src line if you want 'apt-get source' 4 # to work with most packages. 5 # deb-src http://debian.ec.as6453.net/debian/ testing main contrib 6 7 # Security updates for stable 8 # deb http://security.debian.org/ stable/updates main contrib Hope you find it useful.
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/338723", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/80389/" ] }
338,725
I have a directory that contains many subdirectories, each of which has a file called 'name'. In each 'name' file I want to print the 5 lines that follow each occurrence of the string 'str'.
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/338725", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/211415/" ] }
338,781
This is how my docker-compose.yml looks like. nginx: container_name: 'nginx' image: 'nginx:1.11' restart: 'always' ports: - '80:80' - '443:443' volumes: - '/opt/nginx/conf.d:/etc/nginx/conf.d:ro' links: - 'anything' Now I need to add some content via shell script (on an ubuntu server).I am not quite sure if it is possible at all: Add new element to nginx/links , if it is not existing Append newthing block if no newthing-block is existing The new content should look like this: nginx: container_name: 'nginx' image: 'nginx:1.11' restart: 'always' ports: - '80:80' - '443:443' volumes: - '/opt/nginx/conf.d:/etc/nginx/conf.d:ro' - '/etc/letsencrypt:/etc/letsencrypt' links: - 'anything' - 'newthing'newthing: container_name: foo image: 'newthing:1.2.3' restart: always hostname: 'example.com'
There are a number of YAML libraries for Perl, Python, etc., if it is acceptable to do the editing not directly in shell but in another language. Another option is to install a command-line YAML processor and call it from your shell script.
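As a concrete illustration of the library route, a shell script could call a short Python helper like the one below. It assumes PyYAML (python3 plus the yaml module) is installed, and note that a plain load/dump round-trip of this kind will not preserve comments, key order or quoting style in docker-compose.yml:

python3 - <<'EOF'
import yaml

with open('docker-compose.yml') as f:
    data = yaml.safe_load(f)

# add 'newthing' to nginx/links only if it is not already there
links = data['nginx'].setdefault('links', [])
if 'newthing' not in links:
    links.append('newthing')

# append the newthing block only if no such block exists yet
data.setdefault('newthing', {
    'container_name': 'foo',
    'image': 'newthing:1.2.3',
    'restart': 'always',
    'hostname': 'example.com',
})

with open('docker-compose.yml', 'w') as f:
    yaml.safe_dump(data, f, default_flow_style=False)
EOF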
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/338781", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/210969/" ] }
338,823
Why are the files created with a ~ symbol? If I remove the files with the ~ symbol, will it affect the original files or not? My files (obtained with the Linux ls command): DG_Item122 DG_Item147 DG_Item175 DG_Item200 DG_Item226 DG_Item249~ DG_Item271~ DG_Item293~ DG_Item314~ DG_Item49 DG_Item80 DG_Item122~ DG_Item148 DG_Item175~ DG_Item200~ DG_Item226~ DG_Item25 DG_Item272 DG_Item294 DG_Item315 DG_Item5 DG_Item80~ My expected output: the files to be printed without the tilde ones.
When editing files with certain text editors, the editor may save a backup of the file with a ~ suffix. Other editors may use other suffixes, but ~ is by far the most commonly used backup suffix on Unix. You may remove these files if you feel that you don't need the backups any longer: $ rm *~ If you wish to keep the backup files, but don't want to see them in the output of ls , then you may use $ ls -B or $ ls --ignore-backups (which is the same thing). These flags will make ls ignore files specifically matching the shell filename globbing pattern *~ (since it's such a common backup suffix). To hide the listing of any other files, use e.g. --hide='*.bak' instead (this will hide any file with a .bak suffix). The -B and --ignore-backups flags may be seen as shorthands for --hide='*~' . To avoid having to type -B every time, you could add the following to your ~/.bashrc file: function ls { command ls --ignore-backups "$@"} This will effectively "replace" the ls command with a shell function that calls the real ls with the --ignore-backups flag added. Instead of a shell function, you could instead add an alias: alias ls='command ls -B "$@"' ... if you think that looks neater. Note: The -B / --ignore-backups options, as well as --hide , are GNU extensions to ls available in the ls implemented by the GNU coreutils package, but this will most likely already be installed on your Linux machine anyway. As to aliases vs. shell functions, the bash manual contains the phrase For almost every purpose, aliases are superseded by shell functions.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/338823", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/209626/" ] }
338,884
I need to mount a volume, tar the contents of the mounted volume and unmount that mounted volume, in a single shell script. So I wrote:
    $ cat sample.sh
    sudo mount -o loop Sample.iso /tmp/mnt
    cd /tmp/mnt
    tar -cvf /tmp/sample.tar *
    sudo umount /tmp/mnt
I got the error umount: /tmp/mnt: device is busy. So I checked with $ lsof /tmp/mnt and it lists the current "sh" process. So I convinced myself that /tmp/mnt is busy because of the current script (in this case, sample.sh). Is there any way to do the (mount, tar, unmount) sequence in the same script? P.S.: I'm able to unmount the /tmp/mnt volume once the script finishes.
You need to exit the directory to unmount it, like this: #!/bin/bashsudo mount -o loop Sample.iso /tmp/mntcd /tmp/mnttar -cvf /tmp/sample.tar *#Got to the old working directory. **NOTE**: OLDPWD is set automatically.cd $OLDPWD#Now we're able to unmount it. sudo umount /tmp/mnt That is it.
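If you would rather not change directories at all, tar's -C option (supported by GNU tar and most other implementations) lets tar switch into the mount point internally, so the shell never enters it and the busy-mount problem cannot occur. A rough equivalent of the script above; note it stores entries as ./file rather than bare names, a small difference from the * glob:

    #!/bin/bash
    sudo mount -o loop Sample.iso /tmp/mnt
    # tar changes into /tmp/mnt itself; the shell's working directory stays outside
    tar -cvf /tmp/sample.tar -C /tmp/mnt .
    sudo umount /tmp/mnt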
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/338884", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/191874/" ] }
338,915
We have a pair of load balanced managed VMs which install apt-transport-https as part of a startup script. However recently the servers went into an error state because on startup they could no longer download the version of the package required (1.0.9.8.3) because it is no longer present on the mirror: http://httpredir.debian.org/debian/pool/main/a/apt root@validator-dev-group-c2v4:/etc# apt-get install -f apt-transport-httpsReading package lists... DoneBuilding dependency tree Reading state information... DoneThe following NEW packages will be installed: apt-transport-https0 upgraded, 1 newly installed, 0 to remove and 27 not upgraded.Need to get 138 kB of archives.After this operation, 195 kB of additional disk space will be used.Err http://httpredir.debian.org/debian/ jessie/main apt-transport-https amd64 1.0.9.8.3 404 Not FoundE: Failed to fetch http://httpredir.debian.org/debian/pool/main/a/apt/apt-transport-https_1.0.9.8.3_amd64.deb 404 Not FoundE: Unable to fetch some archives, maybe run apt-get update or try with --fix-missing? Trying the suggestion of --fix-missing does not help. root@validator-dev-group-c2v4:/etc# apt-get install --fix-missing apt-transport-httpsReading package lists... DoneBuilding dependency tree Reading state information... DoneThe following NEW packages will be installed: apt-transport-https0 upgraded, 1 newly installed, 0 to remove and 27 not upgraded.Need to get 138 kB of archives.After this operation, 195 kB of additional disk space will be used.Err http://httpredir.debian.org/debian/ jessie/main apt-transport-https amd64 1.0.9.8.3 404 Not FoundE: Failed to fetch http://httpredir.debian.org/debian/pool/main/a/apt/apt-transport-https_1.0.9.8.3_amd64.deb 404 Not FoundE: Internal Error, ordering was unable to handle the media swap Next I manually downloaded the higher version of apt-transport-https (1.0.9.8.4) bug I was unable to install it directly because of a dependency on libapt-pkg4.12. root@validator-dev-group-c2v4:/home/<user># sudo dpkg -i ./apt-transport-https_1.0.9.8.4_amd64.deb Selecting previously unselected package apt-transport-https.(Reading database ... 26719 files and directories currently installed.)Preparing to unpack .../apt-transport-https_1.0.9.8.4_amd64.deb ...Unpacking apt-transport-https (1.0.9.8.4) ...dpkg: dependency problems prevent configuration of apt-transport-https: apt-transport-https depends on libapt-pkg4.12 (>= 1.0.9.8.4); however: Version of libapt-pkg4.12:amd64 on system is 1.0.9.8.3. Can anyone help me resolve this problem? Is it as simple as upgrading libapt-pkg4.12? If so, how do I go about that? EDIT : Also I am unable to run apt-get update ... because I haven't got apt-transport-https installed. Which I think they call Catch-22! root@validator-dev-group-c2v4:/home/<user># apt-get updateE: The method driver /usr/lib/apt/methods/https could not be found.N: Is the package apt-transport-https installed? This is what my /etc/apt/sources.list looks like: deb http://httpredir.debian.org/debian/ jessie maindeb-src http://httpredir.debian.org/debian/ jessie maindeb http://security.debian.org/ jessie/updates maindeb-src http://security.debian.org/ jessie/updates maindeb http://httpredir.debian.org/debian/ jessie-updates maindeb-src http://httpredir.debian.org/debian/ jessie-updates main Thank you in advance
I appear to have fixed the issue by symlinking the https dir in /usr/lib/apt/methods to the http dir. root@validator-dev-group-c2v4:~# cd /usr/lib/apt/methodsroot@validator-dev-group-c2v4:/usr/lib/apt/methods# ln -s http https Since I don't actually have any https:// sources configured it seems harmless and then when apt-get install apt-transport-https runs it actually overwrites the symlink with the correct files.
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/338915", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/99070/" ] }
338,938
In the past I've gotten a bit fed up with out-of-memory conditions on Linux, where the virtual memory starts swapping and hogging disk activity, and the machine slows down. So when I installed Ubuntu on my MacBook Pro, I noticed that it had 8 GB of memory, and I said to myself, "that seems like enough, I think I'll avoid swapping problems and not reserve a partition for virtual memory. I need the disk space anyways." Well, to my surprise, it turns out that the user experience in out-of-memory conditions with Linux without virtual memory is much, much WORSE than I expected. If I accidentally compile too many large C++ files at once (easy to do with "make -j6"), or something else that accidentally consumes the machine memory before I notice, instead of that program crashing and giving an error as I would expect, it turns out that the behaviour instead is that my entire desktop stops responding and I am forced to hard-reboot the computer! Sometimes I lose lots of time or work due to this! I would fix it by going back and re-partitioning to give myself some virtual memory, but damn.. I can't afford to do that right now. Are there any tips for getting Linux to handle out-of-memory conditions more cleanly?
I would fix it by going back and re-partitioning to give myself some virtual memory, but damn.. You don't need to have a full partition dedicated to swap, and you don't need to re-partition. Create swap as a file is pretty easy. Just create a large empty file, run mkswap on it, then add the swap. # create an big empty 1GB file (or whatever size you like)dd if=/dev/zero of=/swapfile bs=1M count=1024# format the file as swapmkswap /swapfile# turn it on.swapon /swapfile If you want to make it permanent add it to your fstab /swapfile swap swap defaults 0 0
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/338938", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/-1/" ] }
338,945
I've read that semicolon is used to separate programs: $ echo 3; ls -la Does it mean that if , then and else are separate programs here? $ if [ $VARIABLE == abcdef ] ; then echo yes ; else echo no ; fi This question is not about semicolons.
The ; separates statements (loosely speaking). It is (almost) always possible to replace a ; by a newline. To say that ; separates two programs, therefore if and then must be "programs" is a bit too simplistic as a statement may be made of reserved words, shell functions, built-in utilities and external utilities, and combinations of these using pipes and boolean operators etc. etc. Both if and then are reserved words in the shell grammar , not "programs". Here they are used to build up what's technically called a compound command . echo is likely a built-in utility in the shell (but doesn't need to be), and ls is probably an external utility (or "program" as you say).
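One way to see this for yourself is to ask the shell what each word actually is; in bash the output looks roughly like the transcript below. It also follows from the above that each ; in the original one-liner can simply be replaced by a newline:

    $ type if
    if is a shell keyword
    $ type ls
    ls is /bin/ls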
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/338945", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/90372/" ] }
338,949
I would like to extract from a package name only the version of the package. Assuming I have a variable var that contains a package name (E.g.: var="nfs-utils-1.2.6-6.fc17.i686.rpm" ). The string extracted would be 1.2.6-6 . The method used to parse can be anything (regex, awk, cut). Edit:In the example above I would actually like to extract 1.2.6
This is not very portable, but for this specific case, this grep works: echo $var | egrep -o '[0-9].*-[0-9]'1.2.6-6
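If only the upstream version mentioned in the edit (1.2.6, without the -6 release part) is wanted, two possible sketches follow. The grep one assumes the version is the first dotted group of digits in the name; the rpm one assumes the actual .rpm file is present on disk so its metadata can be queried:

    # filename-based: first "digits.digits[...]" group
    echo "$var" | grep -Eo '[0-9]+(\.[0-9]+)+' | head -n1      # -> 1.2.6

    # metadata-based: ask rpm itself
    rpm -qp --queryformat '%{VERSION}\n' nfs-utils-1.2.6-6.fc17.i686.rpm   # -> 1.2.6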
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/338949", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/171008/" ] }
338,963
How do I set -atime in milliseconds, seconds, or minutes? The default is days: -atime n File was last accessed n*24 hours ago. When find figures out how many 24-hour periods ago the file was last accessed, any fractional part is ignored, so to match -atime +1, a file has to have been accessed at least two days ago. I'd like to run a cron job, say, hourly to check if files in a particular directory have been accessed within that time frame. Entering time as a decimal doesn't seem to work, i.e. find . -atime 0.042 -print But maybe there is a better solution anyway – another command perhaps? Or perhaps this can't be done.. for finding files modified in last x minutes there is -mmin that allows setting the time in minutes. Perhaps the absence of such option for the access time implies that information is not stored the same way? I'm using Ubuntu 16.04.
Note that when you do -mtime <timespec> , the <timespec> checks the age of the file at the time find was started. Unless you run it in a very small directory tree, find will take several milliseconds (if not seconds or hours) to crawl the directory tree and do a lstat() on every file. So having a precision of shorter than a second doesn't necessarily make a lot of sense. Also note that not all file systems support time stamps with subsecond granularity. Having said that, there are a few options. With the find of many BSDs and the one from schily-tools , you can do: find . -atime -1s To find files that have been last accessed less than one second ago (compared to when find was started). With zsh : ls -ld -- **/*(Dms-1) For subsecond granularity, with GNU tools, you can use a reference file whose atime you set with touch : touch -ad '0.5 seconds ago' ../referencefind . -anewer ../reference Or with recent versions of perl : perl -MTime::HiRes=lstat,clock_gettime -MFile::Find -le ' $start = clock_gettime(CLOCK_REALTIME) - 0.5; find( sub { my @s = lstat $_; print $File::Find::name if @s and $s[8] > $start }, ".")'
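Also worth noting for the hourly cron job in the question: if minute-level granularity is enough, GNU find does provide -amin for access time, analogous to -mmin. Keep in mind that on file systems mounted with relatime (the usual Linux default) atime is not updated on every read, so the results can be coarser than expected:

    # files under the current directory accessed less than 60 minutes ago
    find . -amin -60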
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/338963", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/21912/" ] }
339,011
I was trying to update Java and I used a star to make sure it was removed completely not realizing it would mess up everything I used apt-get remove java* this was on the Ubuntu 16.0 or something server. I now have sysrcd or System Rescue CD on the server and I'm attempting to get my old files back to put them on a new server and reload the sysrcd server back to Ubuntu. However I can't seem to figure out how to use the mount system. I've tried running fdisk -l and I get Disk /dev/sda: 223.6 GiB, 240057409536 bytes, 468862128 sectorsUnits: sectors of 1 * 512 = 512 bytesSector size (logical/physical): 512 bytes / 512 bytesI/O size (minimum/optimal): 512 bytes / 512 bytesDisklabel type: dosDisk identifier: 0x04a5ca62Device Boot Start End Blocks Id System/dev/sda1 * 2048 4095 1024 83 Linux/dev/sda2 6142 468860927 234427393 5 Extended/dev/sda5 6144 2004991 999424 83 Linux/dev/sda6 2007040 468860927 233426944 8e Linux LVMDisk /dev/mapper/vg-root: 221.6 GiB, 237879951360 bytes, 464609280 sectorsUnits: sectors of 1 * 512 = 512 bytesSector size (logical/physical): 512 bytes / 512 bytesI/O size (minimum/optimal): 512 bytes / 512 bytesDisk /dev/mapper/vg-tmp: 976 MiB, 1023410176 bytes, 1998848 sectorsUnits: sectors of 1 * 512 = 512 bytesSector size (logical/physical): 512 bytes / 512 bytesI/O size (minimum/optimal): 512 bytes / 512 bytesDisk /dev/mapper/vg-swap: 88 MiB, 92274688 bytes, 180224 sectorsUnits: sectors of 1 * 512 = 512 bytesSector size (logical/physical): 512 bytes / 512 bytesI/O size (minimum/optimal): 512 bytes / 512 bytes I'm not sure which drive to mount or how to mount it. Can someone help?
A few more steps are needed when mounting an LVM partition vs. a non-LVM partition. sudo apt-get install lvm2 #This step may or may not be required.sudo pvscan #Use this to verify your LVM partition(s) is/are detected.sudo vgscan --mknodes #Scans for LVM Volume Group(s)sudo vgchange -ay #Activates LVM Volume Group(s)sudo lvscan #Scans for available Logical Volumessudo mount /dev/YourVolGroup00/YourLogVol00 /YourMountPoint
{ "score": 7, "source": [ "https://unix.stackexchange.com/questions/339011", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/-1/" ] }
339,039
I have this file that simply prints one line. I'm working on manipulating this one line with different sed commands. apple orange.5678 dog cat 009 you I'm wanting to grab 'orange.5678' and include 'you' and ignore everything else. I want it to look like below orange.5678 you I'm not sure where to start and how to exclude everything except for 'orange.5678' and 'you'. Any help would be great!
$ sed -r 's/.* ([^ ]+\.[^ ]+).* ([^ ]+)$/\1 \2/' orangeorange.5678 you Explanation -r use extended regular expressions s/old/new replace old with new .* any number of any characters (some characters) save some characters to reference later in replacement [^ ]+ some characters that are not a space \. literal dot $ end of line \1 backreference to saved pattern so s/.* ([^ ]+\.[^ ]+).* ([^ ]+)$/\1 \2/ means, match anything on the line up to a space that precedes some non-space characters up to a . and then some non space characters after it (saving those characters either side of the . ), then match any characters and save the last set of non-space characters on the line, and replace the whole match with the two saved patterns separated by a space
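If the input is always whitespace-separated and the wanted pieces are simply the second and the last field, as they are in the example line (this is an assumption about the data, not something stated in the question), an awk one-liner is a simpler alternative:

    $ awk '{print $2, $NF}' orange
    orange.5678 you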
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/339039", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/211625/" ] }
339,077
I have created multiple keys using gpg. Whenever I try to sign any file, gpg automatically uses the first one I have created. How to set default key for signing in gpg. I don't want to delete/revoke the other one yet. Otherwise, how can I change my default keys for signing?
To choose a default key without having to specify --default-key on the command-line every time, create a configuration file (if it doesn't already exist), ~/.gnupg/gpg.conf , and add a line containing default-key <key-fpr> replacing <key-fpr> with the id or fingerprint of the key you want to use by default.
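To find the id or fingerprint to put there, list your secret keys; and for a one-off signature you can also select the key per invocation with -u (--local-user) instead of changing the default. The key id below is a made-up placeholder:

    gpg --list-secret-keys --keyid-format long
    # one-off: sign with a specific key without touching gpg.conf
    gpg -u 0123456789ABCDEF --sign somefile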
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/339077", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/211656/" ] }
339,087
I'm unable to create a hotspot using GNOME's network-manager. When I click on Use as hotspot -> Turn on, nothing happens; I don't get any pop-up saying it was created, failed, etc. No config files for the hotspot are created in /etc/NetworkManager/system-connection either. I have installed almost all optional dependencies of networkmanager except for bluez and ppp. The device I'm using is a TP-Link TL-WW722N.
I used create_ap : pacman -S create_apsudo create_ap -m bridge wifi_interface ethernet_interface test_arch vinod123 Note: You won't be able to browse internet on the host. Maybe we should use NAT rather than bridge. I haven't tried it yet to confirm anything regarading NAT. UPDATE Used NAT and I'm able to browse on the host too. sudo create_ap -m nat wifi_interface ethernet_interface test_arch vinod123
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/339087", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/211663/" ] }
339,103
What does the following mean: basedir=$(dirname "$(echo "$0" | sed -e 's,\\,/,g')") I'm particulalry interested in this part: varible=$(...) I know that parenthesis are used to execute a subprocess, but what if they are used alongside with $ ?
From the Bash manual ( man bash ): Command Substitution Command substitution allows the output of a command to replace the command name. There are two forms: $(command) or `command` Bash performs the expansion by executing command in a subshell environment and replacing the command substitution with the standard output of the command, with any trailing newlines deleted. Embedded newlines are not deleted, but they may be removed during word splitting. The command substitution $(cat file) can be replaced by the equivalent but faster $(< file). (This holds true for all Bourne-like shells, i.e. sh , ksh , zsh , bash etc., and zsh is also able to capture data with embedded NUL characters in this way) The command basedir=$(dirname "$(echo "$0" | sed -e 's,\\,/,g')") will assign the name of the directory where the script is located (while also changing all backslashes into forward slashes) to the variable basedir . Any errors, warnings or other diagnostic messages that are outputted to the standard error stream will still be displayed on the terminal ( $(...) only captures the standard output of the command). The shell will start by executing the innermost command substitution: echo "$0" | sed -e 's,\\,/,g' The output of that will be given as a string to dirname , and the output of that will be assigned to the variable basedir . The double quotes are there to make sure that no word-splitting or filename globbing will be done, otherwise you may find that the script fails or produce strange output when $0 (the name of the script including the path used to execute it) contains a space character or a filename globbing character (such as ? or * ). It is in general a good idea to always quote expansions (variable expansions, command substitutions, and arithmetic expansions). See this question and its answers for an excellent explanation as to why this is a good idea. If the script was executed as $ /usr/local/bin/script.sh then basedir will get the value of /usr/local/bin . Or, on Cygwin: $ bash c:\\Users\\Me\\script.sh then basedir will get the value of c:/Users/Me . The double backslashes on the command line in this case is just to escape the single backslashes from the shell. The actual value of $0 is c:\Users\Me\script.sh . Another way of doing the same thing without using dirname , echo and sed would be basedir="${0//\\//}"basedir="${basedir%/*}"
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/339103", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/90372/" ] }
339,106
Laptop I want to buy comes with preinstalled windows 10. I want to install linux on this laptop but I want to have option to recover windows (reset to factory settings). Because of that I want to make bit by bit image of whole disk (all partitions, MBR, etc). Of course I must have an option to "flash" this image to disk so laptop is reverted to "factory settings" with preinstalled windows. What is the best option to create such image and how do I later recover such image? I will boot linux from usb and attach much bigger hard drive throught usb. I will store backup on this usb hard drive. I was thinking about dd command but it won't take in account empty space so it will produce huge image.
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/339106", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/27960/" ] }
339,126
I want to install Kali and have that as my only operating system but I don't have a USB, how can I do this?
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/339126", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/211676/" ] }
339,150
I'm getting this above syntax error and i can't decipher what is wrong with it. I'm trying to do a float value in BASH. Hence i used this command called the awk to achieve the target. while $empty; do empty=false echo -n "Price : "; read getPrice#float value using awkawk 'BEGIN{if ('$getPrice'>'0') exit 1}' if [ $? -eq 1 ]; then PRICE[$COUNT]=$getPrice;else empty=true echo "Please put in the correct price figure!"fidone However, i got this error awk: line 1: syntax error at or near > This error occured when i did not input any value into the getPrice variable. However, it's working fine when i input some value which is >0 . After much deliberation, i still could not figure what is wrong with the syntax. Regards.
The reason you get a syntax error in your Awk script is because when $getPrice is empty, then the script is actually just BEGIN{if (>0) exit 1} The proper way to import a shell variable into an Awk script as an Awk variable is by using -v : awk -vprice="$getPrice" 'BEGIN { if (price > 0) exit 1 }' I also had a look at the flow of contol in your script, and rather than using the $empty variable, you could just exit the loop when a correct price has been entered: while true; do read -p 'Price : ' getPrice # compare float value using awk if awk -vprice="$getPrice" 'BEGIN { if (price <= 0) exit(1) }' then PRICE[$COUNT]="$getPrice" break fi echo 'Please put in the correct (positive) price figure!' >&2done Additional detail after comments: The user should be alerted about invalid input if the value entered is not a number or if it's negative. We can test the inputted value for characters that are not supposed to be there (anything other than the digits 0 through to 9 and the decimal point): while true; do read -p 'Price : ' getPrice if [[ ! "$getPrice" =~ [^0-9.] ]] && \ awk -vprice="$getPrice" 'BEGIN { if (price <= 0) exit(1) }' then PRICE[$COUNT]="$getPrice" break fi echo 'Please put in the correct (positive) price figure!' >&2done
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/339150", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/210592/" ] }
339,160
I would like to enable Ctrl + C for copy and Ctrl + Shift + C for SIGINT/interrupt in xterm. I have found the following . XTerm*VT100.Translations: #override \ Shift Ctrl<Key>V: insert-selection(CLIPBOARD) \n\ Shift Ctrl<Key>V: insert-selection(PRIMARY) \n\ Shift<Btn1Down>: select-start() \n\ Shift<Btn1Motion>: select-extend() \n\ Shift<Btn1Up>: select-end(CLIPBOARD) \n\ which I believe is partially there but doesn't give a demonstrate how to override Ctrl + C .
That's "partially there", but runs into the problem that there is no predefined (single byte) character corresponding to Ctrl + Shift + C or Ctrl + Shift + V . You need that single byte character for the interrupt ( intr ) setting with stty . Likewise, control+V is the literal next ( lnext ) setting in stty . You could use the translation resource to send a Ctrl + C character using the string feature, e.g., something like these lines in a translations resource: ctrl shift <key>C : string(0x03) \n\ctrl shift <key>V : string(0x16) \n\ and then assign the unshifted keys (putting a tilde ~ before the `shift keyword). From the followup comment, I agree that just specifying the un shifted pattern should be enough: ~Shift Ctrl <KeyPress> v: insert-selection(CLIPBOARD)\n\~Shift Ctrl <KeyPress> c: copy-selection(CLIPBOARD)\n a few notes (which might be documented, but the source code helps): <KeyPress> means the same as <Key> the modifiers (and <KeyPress> ) are matched ignoring case. the parts on the right-side of : are case-sensitive. Normally there is no translation done for Ctrl + C with or without the shift-modifier. xterm simply gets an XKeyEvent which has the modifier information and the character, and decodes that . The translations resource alters the events which might be sent to xterm. You would use modifiers in a translation to limit the matches, e.g., omitting Shift means it matches whether or not the shift-key is pressed. Adding an explicit ~shift ( no shift) modifier has no effect on the match for shift . Further reading: Custom Key Bindings (xterm manual)
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/339160", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/4930/" ] }
339,173
I'm trying to call a self-defined function funk_a in strace but it doesn't seem to find it.I confirmed that funk_a can be called by itself.I appreciate any opinions. $ source ./strace_sample.sh $ funk_aEarth, Wind, Fire and Water$ funk_bGet on upstrace: Can't stat 'funk_a': No such file or directory$ dpkg -p strace|grep VersVersion: 4.8-1ubuntu5$ lsb_release -aNo LSB modules are available.Distributor ID: UbuntuDescription: Ubuntu 14.04.5 LTSRelease: 14.04Codename: trusty strace_sample.sh #!/bin/bashfunction funk_a { echo "Earth, Wind, Fire and Water"}function funk_b { echo "Get on up" strace -o trace_output.txt -c -Ttt funk_a} Thank you.
strace can only strace executable files. funk_a is a function, a programming construct of the shell, not something you can execute. The only thing strace could strace would be a new shell that evalutes the body of that function like: strace -o trace_output.txt -Ttt bash -c "$(typeset -f funk_a); funk_a" (I removed -c as it makes no sense with -Ttt ). But you'll then see all the system called made by bash to load and initialise (and after to clean-up and exit) in addition to that one write system call made by that funk_a function. Or you could tell strace to trace the pid of the shell while it evaluates the funk_a function: strace -o trace_output.txt -Ttt -p "$$" &funk_akill "$!" Though, by the time strace attaches to the PID of the shell, the shell could very well have finished interpreting the function. You could try some synchronisation like strace -o trace_output.txt -Ttt -p "$$" &tail -F trace_output.txt | read # wait for some output in trace_output.txtfunk_akill "$!" But even then depending on timing, trace_output.txt would include some of the system calls used interpret tail|read , or kill could kill strace before it has had the time to write the trace for the echo command to the output file. A better approach could be to wrap the call to funk_a between two recognisable system calls like strace -fo >(sed -n '1,\|open("///dev/null|d \|open("/dev///null|q;p' > trace_output.txt ) -Ttt -p "$$" &sleep 1 # give enough time for strace to startexec 3< ///dev/null # start signalfunk_aexec 3< /dev///null # end signal
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/339173", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/14968/" ] }
339,174
I set the xkb option shift:both_capslock (because I also use caps:escape) but this seems to have disabled the normal behavior of the shift key. How can I get that normal behavior back while still allowing the double press. I used to do this in gnome, but I am trying out sway.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/339174", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/105557/" ] }
339,237
One easy install method for Docker (for example) is this: curl -sSL https://get.docker.com/ | sh However, I have also seen some that look like this (using the Docker example): sh -c "$(curl -sSL https://get.docker.com/)" They appear to be functionally the same, but is there a reason to use one over the other? Or is it just a preference/aesthetic thing? (Of note, be very careful when running script from unknown origins.)
There is a practical difference. curl -sSL https://get.docker.com/ | sh starts curl and sh at the same time, connecting the output of curl with the input of sh . curl will carry out with the download (roughly) as fast as sh can run the script. The server can detect the irregularities in the timing and inject malicious code not visible when simply downloading the resource into a file or buffer or when viewing it in a browser. In sh -c "$(curl -sSL https://get.docker.com/)" , curl is run strictly before the sh is run. The whole contents of the resource are downloaded and passed to your shell before the sh is started. Your shell only starts sh when curl has exited, and passes the text of the resource to it. The server cannot detect the sh call; it is only started after the connection ends. It is similar to downloading the script into a file first. (This may not relevant in the docker case, but it may be a problem in general and highlights a practical difference between the two commands.)
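A related and more cautious pattern, in the spirit of the warning at the end of the question, is to separate the download from the execution entirely so the script can be read before it runs; the local file name here is just an example:

    curl -sSL -o get-docker.sh https://get.docker.com/
    less get-docker.sh      # inspect it first
    sh get-docker.sh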
{ "score": 7, "source": [ "https://unix.stackexchange.com/questions/339237", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/171237/" ] }
339,266
How can I remove only the first space from a line like the one below without removing the other spaces in the same line? Example Input: 2015-04-18 10:21:59 10 05430 -9999 -9999 000000000000 Example Output: 2015-04-1810:21:59 10 05430 -9999 -9999 000000000000
You may use sed for this: sed 's/ //' infile >outfile This applies a substitution to all lines of the file infile that will substitute the first space character with nothing (i.e. remove it). The output is stored in the file outfile . With sed 's/ //N' , where N is an integer between 1 and 9, you can pick which space to remove. If the line is in a shell variable, you could use var="${var/ /}" This uses the ${parameter/pattern/string} parameter expansion in bash to do the same thing as the sed command, but on the value in $var . The resulting string is, in this example, then stored back into $var .
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/339266", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/211253/" ] }
339,363
How can I send characters to a command as though they came from a file? For example I tried: wc < "apple pear orange"-bash: apple pear orange: No such file or directory
In shells that support here strings , including bash , zsh and ksh93 , you can use wc <<< "apple pear orange"
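In a strictly POSIX sh without here-string support, a rough equivalent is to pipe the text in with printf; the explicit \n mirrors the trailing newline that a here string appends, so the counts come out the same:

    printf '%s\n' "apple pear orange" | wc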
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/339363", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/47542/" ] }
339,463
I have seen the following install command used in multiple yocto recipes install -d ${D}${libdir} I am aware of the install command and its purpose, however I am unable to understand the purpose of ${D} variable as it is often nowhere defined in the recipe. Can somebody explain the purpose of this shell variable?
The ${D} variable allows the software being built to be installed in a directory other than its real target. For example, you might configure the software so that libdir is /usr/lib , but that's for the target device; when you run the installation on your build system, you don't want the newly-built files to actually be installed in /usr/lib , you want the placed somewhere isolated so that they can be readily identified and copied across to the target system. So you create a temporary directory and install there: mkdir /tmp/yocto-targetmake install D=/tmp/yocto-target That way the files end up in /tmp/yocto-target/usr/lib and so on. You can then archive all of /tmp/yocto-target using whatever tool you prefer, dropping the /tmp/yocto-target prefix, copy the archive to the target device and install its contents there. In other build systems, the DESTDIR variable is used for the same reason.
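To make the recipe line from the question concrete, here is a small hand-run illustration of how ${D} composes with ${libdir}; libfoo.so is a made-up file name used only for the example:

    D=/tmp/yocto-target
    libdir=/usr/lib
    install -d "${D}${libdir}"                     # creates /tmp/yocto-target/usr/lib
    install -m 0644 libfoo.so "${D}${libdir}/"     # staged into the image, not into the build host's /usr/lib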
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/339463", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/211905/" ] }
339,465
I have for a school project a simple architecture composed of 3 virtual machines which all run Fedora 24: one server, one client, and one router. I decided to use iptables over firewalld for the extensive use of DNAT/SNAT that I only knew how to manage well with iptables; therefore, I disabled firewalld, and enable iptables: # dnf install iptables-services# systemctl stop firewalld# systemctl disable firewalld# systemctl start iptables && systemctl start ip6tables# systemctl enable iptables && systemctl enable ip6tables I had a set of rules I saved through # service iptables save and it worked perfectly on my router. I used the same method on my two other machines, server, and client, but the rules were not saved. After a bit of research, I realized that iptables.service does not start on boot; and I further noticed that firewalld did even though it was disabled as presented above. Is there a particular reason why firewalld would start even though it is disabled? Here's what the status shows right after boot: firewalld.service - firewalld - dynamic firewall daemon Loaded: loaded (/usr/lib/systemd/system/firewalld.service; disabled; vendor preset : enabled) Active: active (running) since Sun 2017-01-22 23:52:34 PST; 15s ago Docs: man:firewalld(1)Main PID: 619 (firewalld) Tasks: 2 (list:512) CGroup: /system.slice/firewalld.service └─619 /usr/bin/python3 -Es /usr/sbin/firewalld --nofork --nopidJan 22 23:52:33 public systemd[1]: Starting firewalld - dynamic firewall daemonJan 22 23:52:34 public systemd[1]: Started firewalld - dynamic firewall daemon. On the other hand, here is iptables's: iptables.service - IPv4 firewall with iptables Loaded: loaded (/usr/lib/systemd/systemd/system/iptables.service; enabled; vendor preset: disabled) Active: inactive (dead)
The safest way to get rid of firewalld is to remove it: dnf remove firewalld It is quite ok to do for virtual machines.
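If removing the package is not an option, masking the unit is an alternative worth considering: a masked unit cannot be started manually, as a dependency of another unit, or through D-Bus activation, so it should stay down across reboots. This is offered as an alternative, not as an explanation of why the disabled service was started in the first place:

    systemctl stop firewalld
    systemctl mask firewalld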
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/339465", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/211904/" ] }
339,468
I'm having trouble updating all packages on Solaris 11.3. I use the system for testing software. I'm not a Solaris admin or a Solaris user. When attempting to update the system I'm encountering the following (this used to work): $ sudo pkg updatePassword:------------------------------------------------------------Package: pkg://solaris/release/[email protected],5.12-5.12.0.0.0.115.0:20170111T175931ZLicense: evaluationThis software has been made available for evaluation purposes only.See http://www.oracle.com/technetwork/server-storage/solaris11/technologies/foss-evaluation-program-2586275.html for further information. Packages to remove: 1 Packages to install: 3 Packages to update: 2 Services to change: 1 Create boot environment: NoCreate backup boot environment: Yespkg: The following packages require their licenses to be accepted before they can be installed or updated:----------------------------------------Package: pkg://solaris/release/[email protected],5.12-5.12.0.0.0.115.0:20170111T175931ZLicense: evaluation License requires acceptance.To indicate that you agree to and accept the terms of the licenses of the packages listed above, use the --accept option. To display all of the related licenses, use the --licenses option. I'm not sure what the message is talking about. I accepted the adminstrivia fodder when I installed the system last year. I did not install a package called pkg://solaris/release/evaluation , and I'm not sure where it came from. However, I gave it its due diligence, which did not work: $ sudo pkg --accept updatepkg: illegal global option -- acceptTry `pkg --help or -?' for more information. I visited the URL cited in the message, but it does not tell me what needs to be done. The page describe an oracle program. Apparently, what needs to be done is top secret or above. What needs to be done to update this system? More humorously, how did Oracle manage to break a simple process that worked for years?
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/339468", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/-1/" ] }
339,487
How do I make Zathura copy selected text to the system clipboard? I'm using Zathura with the poppler PDF plugin.
Add set selection-clipboard clipboard in the config file ~/.config/zathura/zathurarc or /etc/zathurarc .
{ "score": 7, "source": [ "https://unix.stackexchange.com/questions/339487", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/201493/" ] }
339,566
I wanted to move all files, including starting with dot (hidden) and folders (recursively). So I used the following commands shopt -s dotglob nullglobmv ~/public/* ~/public_html/ and it worked. But do I need to reset anything after doing shopt -s dotglob nullglob ? Doesn't it change how commands like mv operate? Because I would like it changed back.
Yes, you would have to unset those options (with shopt -u nullglob dotglob ) afterwards if you wanted the default globbing behaviour back in the current shell. You could just do mv ~/public/* ~/public/.* ~/public_html/ That would still generate an error without nullglob set if one of the patterns didn't match anything, obviously, but would work without having to set either option. It would probably also say something about failing to rename . since it's a directory, but that too isn't stopping it from moving the files. A better option may be to use rsync locally: rsync -av ~/public/ ~/public_html/ and then delete ~/public .
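Another option that avoids having to unset anything is to confine the shell options to a subshell; options set inside the parentheses die with the subshell, so the interactive shell's globbing behaviour is left untouched:

    ( shopt -s dotglob nullglob; mv -- ~/public/* ~/public_html/ )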
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/339566", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/211986/" ] }
339,613
I have a problem with the certificates in Arch linux. It seems that it can't find ca-certificates.crt . I have updated my system and installed the ca-certificates{,-utils,-mozilla} packages and it still doesn't work. git clone http://github.com/sstephenson/bats.gitCloning into 'bats'...fatal: unable to access 'https://github.com/sstephenson/bats.git/': error setting certificate verify locations: CAfile: /etc/ssl/certs/ca-certificates.crt CApath: none
I am posting an answer to my own question because I solved the problem and I did not find a valid solution elsewhere. There is no /etc/ssl/certs/ca-certificate-crt file. So a link needs to be provided to the proper cert. $ ln -s /etc/ca-certificates/extracted/ca-bundle.trust.crt /etc/ssl/certs/ca-certificates.crt Now I can curl and git clone through https.
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/339613", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/212016/" ] }
339,619
Every once in a while there's an answer (or a comment) that suggests using grep 's -v and -l switches together instead of the -L (especially when the latter isn't available). The authors seem to believe they are equivalent. So, are they interchangeable? IOW, does grep -v -l produce the same result as grep -L ?
No, they're not equivalent and as such the results will almost always 1 be different. Let's see what each of those switches does: ‘-L’‘--files-without-match’ Suppress normal output; instead print the name of each input file from which no output would normally have been printed. This should be simple: grep -L 'pattern' ./* prints the name of each file that does not contain any line matching 'pattern' . ‘-v’‘--invert-match’ Invert the sense of matching, to select non-matching lines. This should be very simple too: grep -v 'pattern' ./* prints all lines not matching 'pattern' from each file. ‘-l’‘--files-with-matches’ Suppress normal output; instead print the name of each input file from which output would normally have been printed. Again, simple: grep -l 'pattern' ./* prints the name of each file that contains at least one line matching 'pattern' . Now, what happens when combining -v and -l ? It's not much different from grep -l , except that it selects the non-matching lines hence grep -vl 'pattern' ./* prints the name of each file that contains at least one line not matching 'pattern' . So, note the difference: grep -L prints the file name only if there is no line matching 'pattern' grep -vl prints the file name only if there is at least one line not matching 'pattern' 1: based on the above, it's easy to see when could these two commands produce the same output: -either when the file is not empty and all lines match the pattern - in which case there will be no output -or when the file is not empty and no line matches the pattern - in which case it will be listed Moral of the story: never use grep -vl to emulate grep -L . If your grep does not support -L , this is the proper way to emulate it .
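For completeness, if grep -L really is unavailable, a faithful emulation has to test each file as a whole and report it only when no line matches, for example with a small loop like this sketch (pattern and path are placeholders):

    for f in ./*; do
        [ -f "$f" ] || continue
        grep -q 'pattern' -- "$f" || printf '%s\n' "$f"
    done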
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/339619", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/22142/" ] }
339,629
I want to capture all values from this xml file and print the values in file as out1.txt remark - value from the xml mean the word in the double bracket more input.txt <app name="UAT/ECC/Global/MES/1206/MRP-S23" ear="UAT/ECC/Global/MES/1206/MRP-S23.ear" xml="UAT/ECC/Glal/ME/120/MRP- S23.xml"/> <app name="OQ/ediedbn/adSFSF/adSFSF-CL" ear="OQ/ebn/aSF/adSF- CL.ear" xml="OQ/ediedbn/adSFSF/adSSF-CL.xml"/> <app name="OQ/ediedbn/adaEBS/adOrBS-HR-CL" ear="OQ/ediedbn/adOraS/araEBS- HR-CL.ear" xml="OQ/eddbn/aOraEBS/adOEBS- HR-CL.xml"/> <app name="UAT/CZ/LIMS/T068_01/LIMS-QA-S03" ear="UAT/CZ/LIS/T068_01/LIS-QA- .ear" xml="UAT/CZ/LIMS/T068_01/LIMS-QA-S03.xml"/> . more out1.txtUAT/ECC/Global/MES/1206/MRP-S23UAT/ECC/Glal/ME/120/MRP-S23.xmlOQ/ediedbn/adSFSF/adSFSF-CLOQ/ebn/aSF/adSF- CL.ear... please advice how to capture the values in the out1.txt file with awk / perl one liner , bash
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/339629", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/153544/" ] }
339,650
The directory /data/files/ has thousands of files like:

1test
2test
3test
[...]
60000test
60001test

I'm also sending them to an S3 bucket (AWS), using the AWS CLI. However, sometimes the S3 bucket can be offline and because of that a file is skipped. How can I check whether each file that exists in /data/files/ is also in the S3 bucket, and if not, copy the missing file to S3? I would prefer to do this using bash. If I need to switch from the AWS CLI to another tool, that is fine too.
You can do aws s3 ls on the actual filename. If the file exists, the exit code will be 0 and the filename will be displayed; otherwise, the exit code will not be 0:

aws s3 ls s3://bucket/filename
if [[ $? -ne 0 ]]; then
    echo "File does not exist"
fi
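To turn that check into the copy-if-missing loop the question asks about, a rough sketch (the bucket name is a placeholder, and it assumes the listing exit code behaves as described above) could be:

for f in /data/files/*; do
    name=$(basename "$f")
    if ! aws s3 ls "s3://bucket/$name" > /dev/null 2>&1; then
        aws s3 cp "$f" "s3://bucket/$name"
    fi
done

For this particular use case it may also be worth looking at aws s3 sync /data/files/ s3://bucket/ , which only uploads files that are missing or have changed.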
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/339650", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/212038/" ] }
339,657
How can I find two consecutive (adjacent) repeated lines in a file? For example, this file contains only two such adjacent repeated lines (marked with <--):

OQ-63/ECC/Global/MES/CZ/adWerum-CZ-Adapter
OQ-63/ECC/Global/MES/54/ECC-MRP-S05
OQ-63/ECC/Global/MES/CZ/adWerum-CZ-Adapter
OQ-63/ECC/Global/MES/54/ECC-MRP-S05.ear
OQ-63/ECC/Global/MES/CZ/adWerum-CZ-Adapter   <--
OQ-63/ECC/Global/MES/CZ/adWerum-CZ-Adapter   <--
OQ-63/ECC/Global/MES/54/ECC-MRP-S05.xml
uniq should be enough:

$ cat c.txt
OQ-63/ECC/Global/MES/CZ/adWerum-CZ-Adapter
OQ-63/ECC/Global/MES/54/ECC-MRP-S05
OQ-63/ECC/Global/MES/CZ/adWerum-CZ-Adapter
OQ-63/ECC/Global/MES/54/ECC-MRP-S05.ear
OQ-63/ECC/Global/MES/CZ/adWerum-CZ-Adapter
OQ-63/ECC/Global/MES/CZ/adWerum-CZ-Adapter
OQ-63/ECC/Global/MES/54/ECC-MRP-S05.xml

$ uniq -D c.txt
OQ-63/ECC/Global/MES/CZ/adWerum-CZ-Adapter
OQ-63/ECC/Global/MES/CZ/adWerum-CZ-Adapter

$ uniq c.txt
OQ-63/ECC/Global/MES/CZ/adWerum-CZ-Adapter
OQ-63/ECC/Global/MES/54/ECC-MRP-S05
OQ-63/ECC/Global/MES/CZ/adWerum-CZ-Adapter
OQ-63/ECC/Global/MES/54/ECC-MRP-S05.ear
OQ-63/ECC/Global/MES/CZ/adWerum-CZ-Adapter
OQ-63/ECC/Global/MES/54/ECC-MRP-S05.xml

By default uniq compares adjacent lines of the input file, so for an unsorted file (as in your case) uniq will do exactly the job you want. You might also be interested in the -d and -u options; see the man page for more details (-d prints only one copy of each duplicated line, -u prints only the unique lines, i.e. it removes both entries of every duplicate).
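If you also need to know where the duplicates sit, a small awk sketch (just one possible way to do it) prints the line numbers of each adjacent pair:

$ awk 'prev == $0 { print NR-1, NR } { prev = $0 }' c.txt
5 6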
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/339657", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/153544/" ] }
339,697
I am working on a simple shell script to rename a file one.PDF to one.pdf. I have this file stored in a folder called Task1. My script, called Shell1.sh, is in the same directory as Task1. When I run it, Task1 becomes Task1.pdf, but the file inside, one.PDF, doesn't change. I need it the other way around, but nothing I try seems to work: I keep alternating between a syntax error, mv not being able to move because a resource is busy, or just renaming the directory. Can anyone tell me what I'm doing wrong?

#!/bin/bash
#shebang for bourne shell execution
echo "Hello this is task 1" #Initial prompt used for testing to see if script ran
#/rename .* .pdf *.pdf // doesnt work
#loop to iterate through each file in current directory and rename
for file in Task1;
do
    mv "$file" "${file%.PDF}.pdf"
done
Use globbing to get the filenames:

for file in Task1/*; do mv ...; done

For precision, only match the files ending in .PDF:

for file in Task1/*.PDF; do mv ...; done

To be even more precise, make sure we are dealing with files, not directories:

for file in Task1/*.PDF; do [ -f "$file" ] && mv ...; done

As a side note, your parameter expansion pattern is fine.
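Putting the last variant together with the mv from your own script, the whole thing might look like this (just a sketch, using the Task1 directory and .PDF extension from the question):

#!/bin/bash
for file in Task1/*.PDF; do
    [ -f "$file" ] || continue
    mv "$file" "${file%.PDF}.pdf"
done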
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/339697", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/138496/" ] }
339,765
We use a FreeBSD 10.3 hosting server where we don't have superuser privileges. We use the server to run apache2 for our company's web pages. The previous administrator of our web pages appears to have set an ACL on a directory, and we want to remove it. Let us say the directory is called foobar. The result of ls -al foobar is as follows:

drwxrwxr-x+ 2 myuser another_user 512 Nov 20 2013 foobar

And the ACL is as follows:

[myuser@hosting_server]$ getfacl foobar
# file: foobar/
# owner: myuser
# group: another_user
user::rwx
group::rwx
mask::rwx
other::r-x

We want to remove the ACL and the plus sign at the end of the permission string. Therefore, we ran:

setfacl -b foobar

It eliminated the special permissions governed by the ACL, but didn't erase the plus sign +. Our question is: how can we erase the plus sign + in the permission string shown by ls -al foobar?
Our problem was resolved by using:

setfacl -bn foobar

The point was that we also had to suppress recalculation of the ACL mask entry on the directory, with the option -n. The setfacl man page says:

-n Do not recalculate the permissions associated with the ACL mask entry. This option is not applicable to NFSv4 ACLs.

We're not sure why this option worked, but it did. In case you get a d????????? permission listing after the above solution, try chmod -R a+rX, as two commenters noted below.
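To confirm the change took effect, re-checking the directory afterwards should show a plain mode string without the trailing + (the output below is what we would expect to see, not an actual capture):

$ ls -ld foobar
drwxrwxr-x  2 myuser another_user  512 Nov 20  2013 foobar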
{ "score": 7, "source": [ "https://unix.stackexchange.com/questions/339765", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/188255/" ] }
339,784
I have a tab-delimited table:

  a b c
A 5 2 0
B 0 5 4
C 4 3 4
D 2 0 2

I want to change the non-zero values to "1", without changing the column or row names. Desired output:

  a b c
A 1 1 0
B 0 1 1
C 1 1 1
D 1 0 1

To clarify, this is an example table. The letters are variables representing the column/row names - there may be hundreds of columns and rows. The non-zero values (given here as numbers) may not necessarily be numbers - they might be the names of people, for example.
Assuming strictly tab-delimited input:

$ cat data.in
        a       b       c
A       nancy   bilbo baggins   0
B       0       darcy   bender
C       phantom menace  Unix    !!
D       last row        0       the end

$ cat -t data.in
^Ia^Ib^Ic
A^Inancy^Ibilbo baggins^I0
B^I0^Idarcy^Ibender
C^Iphantom menace^IUnix^I!!
D^Ilast row^I0^Ithe end

An awk script to do the job:

BEGIN { OFS = FS = "\t" }
NR != 1 {
    for (i = 2; i <= NF; ++i) {
        if ($i != "0") {
            $i = "1";
        }
    }
}
{ print }

Running it:

$ awk -f script.awk data.in
        a       b       c
A       1       1       0
B       0       1       1
C       1       1       1
D       1       0       1

The script compares each field (column) with the single character 0 (except for the first field) and replaces everything that isn't exactly 0 with a 1. The output will be tab-delimited.
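If a separate script file is inconvenient, the same logic can be written as a one-liner (this is just the script above collapsed; the trailing 1 makes awk print every line):

$ awk 'BEGIN { OFS = FS = "\t" } NR != 1 { for (i = 2; i <= NF; ++i) if ($i != "0") $i = "1" } 1' data.in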
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/339784", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/205286/" ] }
339,801
When you --gen-key in GPG, you can choose which actions of Sign, Certify, Encrypt, and Authenticate the key will be usable for. Can these be later modified (i.e. obviously a new key can be created if the current one has C, and the old one revoked, but that's not the question) to remove or add actions?
Keys' allowed usages can be modified, but the gpg tool doesn't support it (even in version 2). To change a key's usage, you need to modify gpg . The basic idea is detailed in a thread on the gnupg-users mailing list : usage information is carried by the self-signature, so you need to change the usage parser to force the value you're interested in, then create a new self-signature on your key, for example by changing your key's expiry date.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/339801", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/62835/" ] }