source_id
int64 1
74.7M
| question
stringlengths 0
40.2k
| response
stringlengths 0
111k
| metadata
dict |
---|---|---|---|
409,893 | Say I have the following script: #!/bin/bashfor i in $(seq 1000)do cp /etc/passwd tmp cat tmp | head -1 | head -1 | head -1 > tmp #this is the key line cat tmpdone On the key line, I read and write the same file tmp which sometimes fails. (I read it is because of race conditions because the processes in the pipeline are executed in parallel, which I do not understand why - each head needs to take the data from the previous one, doesn't it? This is NOT my main question, but you can answer it as well.) When I run the script, it outputs about 200 lines. Is there any way I can force this script to output always 0 lines (so the I/O redirection to tmp is always prepared first and so the data is always destroyed)? To be clear, I mean changing the system settings, not this script. Thanks for your ideas. | Why there is a race condition The two sides of a pipe are executed in parallel, not one after the other. There's a very simple way to demonstrate this: run time sleep 1 | sleep 1 This takes one second, not two. The shell starts two child processes and waits for both of them to complete. These two processes execute in parallel: the only reason why one of them would synchronize with the other is when it needs to wait for the other. The most common point of synchronization is when the right-hand side blocks waiting for data to read on its standard input, and becomes unblocked when the left-hand side writes more data. The converse can also happen, when the right-hand side is slow to read data and the left-hand side blocks in its write operation until the right-hand side reads more data (there is a buffer in the pipe itself, managed by the kernel, but it has a small maximum size). To observe a point of synchronization, observe the following commands ( sh -x prints each command as it executes it): time sh -x -c '{ sleep 1; echo a; } | { cat; }'time sh -x -c '{ echo a; sleep 1; } | { cat; }'time sh -x -c '{ echo a; sleep 1; } | { sleep 1; cat; }'time sh -x -c '{ sleep 2; echo a; } | { cat; sleep 1; }' Play with variations until you're comfortable with what you observe. Given the compound command cat tmp | head -1 > tmp the left-hand process does the following (I've only listed steps that are relevant to my explanation): Execute the external program cat with the argument tmp . Open tmp for reading. While it hasn't reached the end of the file, read a chunk from the file and write it to standard output. The right-hand process does the following: Redirect standard output to tmp , truncating the file in the process. Execute the external program head with the argument -1 . Read one line from standard input and write it to standard output. The only point of synchronization is that right-3 waits for left-3 to have processed one full line. There is no synchronization between left-2 and right-1, so they can happen in either order. What order they happen in is not predictable: it depends on the CPU architecture, on the shell, on the kernel, on which cores the processes happen to be scheduled, on what interrupts the CPU receives around that time, etc. How to change the behavior You cannot change the behavior by changing a system setting. The computer does what you tell it to do. You told it to truncate tmp and read from tmp in parallel, so it does the two things in parallel. Ok, there is one “system setting” you could change: you could replace /bin/bash by a different program that is not bash. I hope it would go without saying that this is not a good idea. If you want the truncation to happen before the left-hand side of the pipe, you need to put it outside of the pipeline, for example: { cat tmp | head -1; } >tmp or ( exec >tmp; cat tmp | head -1 ) I have no idea why you'd want this though. What's the point in reading from a file that you know to be empty? Conversely, if you want the output redirection (including the truncation) to happen after cat has finished reading, then you need to either fully buffer the data in memory, e.g. line=$(cat tmp | head -1)printf %s "$line" >tmp or write to a different file and then move it into place. This is usually the robust way to do things in scripts, and has the advantage that the file is written in full before it's visible through the original name. cat tmp | head -1 >new && mv new tmp The moreutils collection includes a program that does just that, called sponge . cat tmp | head -1 | sponge tmp How to detect the issue automatically If your goal was to take badly-written scripts and automatically figure out where they break, then sorry, life isn't that simple. Runtime analysis won't reliably find the problem because sometimes cat finishes reading before the truncation happens. Static analysis can in principle do it; the simplified example in your question is caught by Shellcheck , but it may not catch a similar problem in a more complex script. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/409893",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/253459/"
]
} |
409,930 | I am currently doing this in a POSIX shell script: success=falsepv --wait "$input_filename" | openssl enc -aes-256-cbc -d -salt -out "$output_filename" && success=trueif [ "$success" = "true" ]... The problem is, I am not quite sure, if I do this correctly. For instance, I don't get the difference between that approach and just checking $? . Why I care? Well, because ShellCheck.net warned me: SC2181 Check exit code directly with e.g. 'if mycmd;', not indirectly with $?. | According to the link you posted it should be something like this: if pv --wait "$input_filename" | openssl enc -aes-256-cbc -d -salt -out "$output_filename"; then code for trueelse code for falsefi In my testing this may be an issue with pipelines depending on what is on the other end of yours...I think with openssl you should be fine but if you were piping to something like cat or echo I believe it will always be treated as success because the last command in the pipeline will exit with success. If you just have one command to execute on success or on failure, something like this may also work: command && code for success or command || code for failure | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/409930",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/126755/"
]
} |
410,049 | When I tried to upgrade a Fedora 26 Server earlier today, I got this error message after downloading packages: warning: /var/cache/dnf/forensics-5e8452ee3a114fbe/packages/protobuf-c-1.3.0-1.fc26.x86_64.rpm: Header V4 RSA/SHA1 Signature, key ID 87e360b8: NOKEYImporting GPG key 0x87E360B8:Userid : "CERT Forensics Operations and Investivations Team <[email protected]>" Fingerprint: 26A0 829D 5C01 FC51 C304 9037 E97F 3E0A 87E3 60B8 From : /etc/pki/rpm-gpg/RPM-GPG-KEY-cert-forensics-2018-04-07Is this ok [y/N]: nDidn't install any keysThe downloaded packages were saved in cache until the next successful transaction.You can remove cached packages by executing 'dnf clean packages'.Error: GPG check FAILED So I aborted the upgrade, and I tried to dnf clean packages and redownload, but I still got the same error. It seems that the protobuf packaged does not have a valid signature so dnf cannot continue, is that correct? | But... you are saying "No": Is this ok [y/N]: n ...when asked to install the key! Try with yes ( y ) instead! | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/410049",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/205857/"
]
} |
410,051 | Is there a command or set of commands that I can use to horizontally align lines of text to an arbitrary character? For example, with a list of email addresses the output would produce a text file with all the '@' characters lined up vertically. To be successful I believe that a variable number of empty spaces must be added to the beginning of most lines. I do not want separate columns as they take more effort to read (for example, column -t -s "@" < file.txt ). Before: [email protected]@[email protected] After: [email protected]@example.net [email protected] Put differently: can I specify a character to be an anchor point, around which the surrounding text is horizontally centered? My use-case for this is email addresses, to make them easier to scan visually. | At its simplest, you could just print the first field in a suitably large fieldwidth e.g. awk -F@ 'BEGIN{OFS=FS} {$1 = sprintf("%12s", $1)} 1' file [email protected] [email protected] [email protected] AFAIK any method that does not assume a specific maximum fieldwidth will require either holding the file in memory or making two passes. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/410051",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/265226/"
]
} |
410,056 | Can someone please describe, for a non-programmer, but IT person, what is pledge ? Ex.: there is a program, ex.: "xterm". How can pledge make it more secure? It pledge inside the programs code, or outside in the OS itself? Where is pledge? Is it in the programs code; or does the OS has a list of binaries that can only invoke xy syscalls? | What is Pledge? pledge is a system call. Calling pledge in a program is to promise that the program will only use certain resources. Another way of saying is to limit the operation of a program to its needs, e.g., "I pledge not to use any other ports except port 63 " "I pledge not to use any other system-call except lseek() and fork() " How does it make a program more secure? It limits the operation of a program. Example: You wrote a program named xyz that only needs the read system-call. Then you add pledge to use only read but nothing else. Then a malicious user found out that in your program there is a vulnerability by which one can invoke a root shell. Exploiting your program to open a root shell will result that the kernel will kill the process with SIGABRT (which cannot be caught/ignored) and generate a log (which you can find with dmesg ). It happens because before executing other codes of your program, it first pledge not to use anything other than read system call. But opening root shell will call several other system-calls which is forbidden because its already promised not to use any other but read . Where is Pledge? Its usually in a program. Usage from OpenBSD 6.5 man page : #include <unistd.h>int pledge(const char *promises, const char *execpromises); Example Code: Example code of cat command from cat.c ........#include <unistd.h>........int ch;if (pledge("stdio rpath", NULL) == -1) err(1, "pledge");while ((ch = getopt(argc, argv, "benstuv")) != -1).......... | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/410056",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/229275/"
]
} |
410,132 | I'm a new linux user running Mint 17.3. I want to install a program I downloaded as a .tar file. I extracted the contents of the .tar. Now I see folders: programname/libprogramname/binprogramname/include There are files in those folders but nothing that looks like an install file. I'm not sure where to go from here to install this program. Any help would be great. | Short Answer It looks like your download has a collection of precompiled files. In order to "install" them you'll just need to copy or move each file to an appropriate location. In this case you'll probably just want to copy all of the files from each subdirectory of smartcash-1.0.0 to the corresponding subirectory of /usr/local , e.g.: cp -i smartcash-1.0.0/bin/* /usr/local/bincp -i smartcash-1.0.0/include/* /usr/local/includecp -i smartcash-1.0.0/lib/* /usr/local/lib That's it. Once you do that you should be able to run four new commands: smartcash-clismartcash-qtsmartcash-txsmartcashd Long Answer Here is what I did to try to figure out what you're dealing with. First I downloaded the TAR archive: wget 'https://smartcash.cc/wp-content/uploads/2017/11/smartcash-1.0.0-x86_64-linux-gnu.tar.gz' Then I decompressed the archive: tar xzf smartcash-1.0.0-x86_64-linux-gnu.tar.gz Then I viewed the resulting directory: tree smartcash-1.0.0 Here is the output from tree : smartcash-1.0.0|-- bin| |-- smartcash-cli| |-- smartcash-qt| |-- smartcash-tx| `-- smartcashd|-- include| `-- bitcoinconsensus.h`-- lib |-- libbitcoinconsensus.so -> libbitcoinconsensus.so.0.0.0 |-- libbitcoinconsensus.so.0 -> libbitcoinconsensus.so.0.0.0 `-- libbitcoinconsensus.so.0.0.0 It looks what we have are some precompiled executable programs (in the 'bin/' subdirectory), some shared libraries (in the lib/ subdirectory), and a header file (in the include subdirectory). In general, you probably want to put executables into a directory that's in your path. To see the directories in your PATH you can run the following command: (IFS=:; for path in ${PATH[@]}; do echo "${path}"; done) Here is what the output might look like: /usr/local/sbin/usr/local/bin/usr/sbin/usr/bin/sbin/bin A typical place to put these would be /usr/local/bin . You could that with a command such as the following: cp -i smartcash-1.0.0/bin/* /usr/local/bin The shared library files should go in a directory that's in your shared-library search-path. To see what your shared-library search-path is you should check the /etc/ld.so.conf configuration file. Here's what's in mine: include /etc/ld.so.conf.d/*.conf So it's including configuration files from the /etc/ld.so.conf.d directory. Checking the contents of that directory (i.e. cat /etc/ld.so.conf.d/* ) reveals the following list of directories: /usr/lib/x86_64-linux-gnu/libfakeroot/usr/local/lib/lib/x86_64-linux-gnu/usr/lib/x86_64-linux-gnu So I would put the files in the /usr/local/lib directory, e.g.: cp -i smartcash-1.0.0/lib/* /usr/local/lib For further discussion on the subject of where to put shared libraries you might want to refer to the following posts: Where do executables look for shared objects at runtime? Is /usr/local/lib searched for shared libraries? Finally, you'll probably want to put the header file in /usr/local/include - for the sake of consistency, e.g.: cp -i smartcash-1.0.0/include/* /usr/local/include | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/410132",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/265277/"
]
} |
410,237 | I'm curious. Is it possible to install a 64 bit program on a 32 bit OS with a 64 bit processor? I'm running Linux on a raspberry pi 3 and I try to install a newer version of MongoDB: armv7l GNU/LinuxPRETTY_NAME="Raspbian GNU/Linux 9 (stretch)"NAME="Raspbian GNU/Linux"VERSION_ID="9"VERSION="9 (stretch)"ID=raspbianID_LIKE=debian | Is it possible to install a 64 bit program on a 32 bit OS with a 64 bit processor? In principle yes, but the processor and the OS have to support it. On ARMv8, a 32-bit (Aarch32) kernel cannot run 64-bit (Aarch64) processes. This is a limitation of the processor. There are other processors that don't have this limitation, for example it is possible to run x86_64 processes on top of an x86_32 kernel on an x86_64 processor, but few kernels support it, presumably because it's of limited utility (mostly, you save a bit of RAM in the kernel by making it 32-bit). Linux doesn't support it, but Solaris does. You can keep your existing 32-bit OS if you run a 64-bit kernel . An Aarch64 Linux kernel can run Aarch32 processes. Raspbian doesn't support this out of the box, so you'd need to maintain both a 32-bit OS and a 64-bit OS. You can use either one as the main OS (i.e. the one that runs init and system services) and the other to run a specific program using chroot. See How do I run 32-bit programs on a 64-bit Debian/Ubuntu? for a practical approach. Note that you will need to install all the libraries that the 64-bit program requires. Any given process must be either wholly 32-bit or wholly 64-bit, so you can't use a 32-bit library in a 64-bit executable. Unless you have strong reasons to keep a 32-bit system, if you need to run a 64-bit executable, it would be easier to install a 64-bit system. Note that the only thing that 64-bit programs can do but 32-bit programs can't is address more than about 3GB of virtual memory, which is of limited utility on a system with 1GB of RAM. You may get performance benefits from the extra, larger registers, but you'll also lose performance from the extra memory accesses. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/410237",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/265116/"
]
} |
410,257 | I have hundreds of directories, some nested in other directories, with tens of thousands of files. The files need to have a date/time stamp removed from them. An example filename is Letter to Client 27May2016~20160531-162719.pdf and I would like for it to go back to being Letter to Client 27May2016.pdf Another example filename is ABCDEF~20160531-162719 and I would like for it to go back to being ABCDEF . Note that this file has no extension, unlike the example above. I need a command that I can run at the root of the affected folders that will recursively go through and find/fix the filenames. ( I use Syncthing to sync files, and restored deleted files by copying them from the .stversions directory back to where they were, but found that Syncthing appends that date/time stamp...) | Meet the Perl rename tool: $ rename -n -v 's/~[^.]+//' *~*rename(ABCDEF~20160531-162719, ABCDEF)rename(Letter to Client 27May2016~20160531-162719.pdf, Letter to Client 27May2016.pdf) ( online man page , also see this Q ) That regex says to match a tilde, as many characters that are not dots, but at least one; and to replace whatever matched with an empty string. Remove the -n to actually do the replace. We could change the pattern to ~[-0-9]+ to just replace digits and dashes. Sorry, you said "recursively", so lets use find : $ find -type f -name "*~*" -execdir rename -n -v 's/~[-0-9]+//' {} +rename(./ABCDEF~20160531-162719, ./ABCDEF)rename(./Letter to Client 27May2016~20160531-162719.pdf, ./Letter to Client 27May2016.pdf) Or just with Bash or ksh, though directories with ~ followed by digits will break this: $ shopt -s extglob # not needed in ksh (as far as I can tell)$ shopt -s globstar # 'set -o globstar' in ksh$ for f in **/*~* ; do g=${f//~+([-0-9])/}; echo mv -- "$f" "$g" donemv -- ABCDEF~20160531-162719 ABCDEFmv -- Letter to Client 27May2016~20160531-162719.pdf Letter to Client 27May2016.pdf Again, remove the echo to actually do the rename. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/410257",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/265387/"
]
} |
410,269 | I am wondering if it is theoretically possible to build a Linux distro that can both support rpm and debian packages. Are there any distros live out there that support both? And if not is it even possible? | Bedrock Linux does this. Not saying I've done this, or that it is a good idea, but it is being done. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/410269",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/100193/"
]
} |
410,281 | I'm trying to manually create my own custom usb drive, with a bunch of iso files on it, and a partition for data. I used the instruction I put here to create my key, but to sum-up, I have done a partition /dev/sda1 for data a partition /dev/sda2 that has grub installed a partition /dev/sda3 that contains my iso files in the folder linux-iso/ I put in the file grub2/grub/conf (on /dev/sda2 ) the following file : insmod loopbackinsmod iso9660menuentry 'XUbuntu 16.04 "Xenial Xerus" -- amd64' { set isofile="/linux-iso/xubuntu-16.04.1-desktop-amd64.iso" search --no-floppy --set -f $isofile loopback loop $isofile linux (loop)/casper/vmlinuz.efi locale=fr_FR bootkbd=fr console-setup/layoutcode=fr iso-scan/filename=$isofile boot=casper persistent file=/cdrom/preseed/ubuntu.seed noprompt ro quiet splash noeject -- initrd (loop)/casper/initrd.lz}menuentry 'Debian 9.3.0 amd64 netinst test 3' { set isofile="/linux-iso/debian-9.3.0-amd64-netinst.iso" search --no-floppy --set -f $isofile loopback loop $isofile linux (loop)/install.amd/vmlinuz priority=low config fromiso=/dev/sdb3/$isofile initrd (loop)/install.amd/initrd.gz} This way, when I load ubuntu everything works great... But when I load debian it fails at the step "Configure CD-Rom", with the error: Incorrect CD-ROM detected.The CD-ROM drive contains a CD which cannot be used for installation.Please insert a suitable CD to continue with the installation." I also tried to mount /dev/sdb3 at /cdrom , but in that case I've an error on the next step: Load installer components from CD:There was a problem reading data from the CD-ROM. Please make sure it is in the drive.Failed to copy file from CD-ROM. Retry?" Do you know how to solve this problem? Thank you! | Bedrock Linux does this. Not saying I've done this, or that it is a good idea, but it is being done. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/410281",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/169695/"
]
} |
410,335 | I'd like to get Terminfo for my terminal (rxvt-unicode) working, so that when I ssh from Linux to macOS, the Home/End and other keys work properly. Usually, to accomplish this with a Linux remote host, I use a script like the following: ssh "$1" 'mkdir -p ~/.terminfo/r'for f in /usr/share/terminfo/r/rxvt-unicode{,-256color}do scp "$f" "$1":.terminfo/r/done However, this isn't working with macOS. When I run screen , first I was getting "TERM too long - sorry.". After updating it to the brew version (4.06.02), I'm now getting "Cannot find terminfo entry for 'rxvt-unicode-256color'." TERM is correctly set to rxvt-unicode-256color , and ~/.terminfo/r/rxvt-unicode-256color exists. Running screen with TERMINFO=$HOME/.terminfo/ also has no effect. | ncurses uses 2-characters for filesystems (such as MacOS and OS/2) where filenames are case-preserving / case-insensitive . That is documented in the NEWS file . Apple, by the way, provides an old version of ncurses (5.7) which is still new enough for this feature. Portable applications should not rely upon any particular organization of the terminal database... By the way, current terminfo entries for xterm-256color will not work well with that old ncurses 5.7 base system, since the color pairs value exceeds limits. The effect upon rxvt-unicode depends on how the source was constructed. This is mentioned in the FAQ : ncurses 6.1 introduced support for large number capabilities, e.g., for more than 32767 color pairs. Other implementations generally treated out-of-range values as zero. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/410335",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/4830/"
]
} |
410,367 | The following command will list all of the groups of someUser (the primary group and the supplementary groups): groups someUser But is there a way to only get the primary group? | See the FreeBSD handbook (information also valid for Linux): Group ID (GID) The Group ID (GID) is a number used to uniquely identify the primary group that the user belongs to. Groups are a mechanism for controlling access to resources based on a user's GID rather than their UID. This can significantly reduce the size of some configuration files and allows users to be members of more than one group. It is recommended to use a GID of 65535 or lower as higher GIDs may break some software. If so, running id <username> will show gid=<primary group> : id <username>uid=1000(<username>) gid=1000(<username>) groups=1000(<username>),4(adm),24(cdrom),27(sudo) If you want the command to return just the primary group name, see man id : -g, --group print only the effective group ID -G, --groups print all group IDs -n, --name print a name instead of a number, for -ugG so, id -gn <username> should give you what you want. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/410367",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/227869/"
]
} |
410,371 | I know that chsh is use to change the login shell with the shell listed in this file. Okay, But what is actually Login shell is? Can anyone tell me this as simple as you can? ;) | See the FreeBSD handbook (information also valid for Linux): Group ID (GID) The Group ID (GID) is a number used to uniquely identify the primary group that the user belongs to. Groups are a mechanism for controlling access to resources based on a user's GID rather than their UID. This can significantly reduce the size of some configuration files and allows users to be members of more than one group. It is recommended to use a GID of 65535 or lower as higher GIDs may break some software. If so, running id <username> will show gid=<primary group> : id <username>uid=1000(<username>) gid=1000(<username>) groups=1000(<username>),4(adm),24(cdrom),27(sudo) If you want the command to return just the primary group name, see man id : -g, --group print only the effective group ID -G, --groups print all group IDs -n, --name print a name instead of a number, for -ugG so, id -gn <username> should give you what you want. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/410371",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/261973/"
]
} |
410,471 | I'm trying to watch for any new output of a log file. Another script (not under my control) is deleting the file then creating a new one with the same name. Using tail -f doesn't work because the file is being deleted. | If your tail supports it, use tail -F , it works nicely with disappearing and re-appearing files. Just make sure you start tail from a directory which will stay in place. -F is short-hand for --follow=name --retry : tail will follow files by name rather than file descriptor, and will retry when files are inaccessible ( e.g. because they’ve been deleted). (A number of bugs relating to --follow=name with --retry were fixed in coreutils 8.26, so you may run into issues with earlier versions; e.g. retrying when the directory containing the tailed file is deleted appears to only work in all cases with version 8.26 or later.) | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/410471",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/265540/"
]
} |
410,474 | I am writing a set of bash scripts. The first, wrapper calls two scripts: do_something and do_something_else . In pseudo code: $ wrapperdo_somethingif exitcode of do_something = 0 then do_something_elseelse exit with errorfiexit success This would generate a log file: $ cat /var/logs/wrapper.log | tail -3Deleting file 299Deleting file 300wrapper ran successfully on 01/01/18 00:01:00 GMT I have two goals: create a log of the entire process. In other words, everything that do_something , do_something_else and wrapper send to stdout and stderr I want in one log file that shows the daily run of this script so I can grep for errors. I want to pre-compile do_something , do_something_else and wrapper so I can put them in /usr/bin and scp them to all my systems. This way I have one source in dev and quick running un-editable code in prod. Is this possible? | If your tail supports it, use tail -F , it works nicely with disappearing and re-appearing files. Just make sure you start tail from a directory which will stay in place. -F is short-hand for --follow=name --retry : tail will follow files by name rather than file descriptor, and will retry when files are inaccessible ( e.g. because they’ve been deleted). (A number of bugs relating to --follow=name with --retry were fixed in coreutils 8.26, so you may run into issues with earlier versions; e.g. retrying when the directory containing the tailed file is deleted appears to only work in all cases with version 8.26 or later.) | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/410474",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/231139/"
]
} |
410,477 | I'm running Ubuntu Server 16.04 and my upgrade to linux-image-4.4.0-103-generic fails because my /boot directly is almost full (188MB out of 200MB). gzip: stdout: No space left on deviceE: mkinitramfs failure cpio 141 gzip 1update-initramfs: failed for /boot/initrd.img-4.4.0-103-generic with 1.run-parts: /etc/kernel/postinst.d/initramfs-tools exited with return code 1Failed to process /etc/kernel/postinst.d at /var/lib/dpkg/info/linux-image-4.4.0-103-generic.postinst line 1052.dpkg: error processing package linux-image-4.4.0-103-generic (--configure):subprocess installed post-installation script returned error exit status 2No apport report written because the error message indicates its a followup error from a previous failure.dpkg: dependency problems prevent configuration of linux-image-extra-4.4.0-103-generic:linux-image-extra-4.4.0-103-generic depends on linux-image-4.4.0-103-generic; however:Package linux-image-4.4.0-103-generic is not configured yet. dpkg shows that I only have the 2 most recent kernels installed (4.4.0-96-generic and 4.4.0-97-generic). claude@shannon:~$ sudo dpkg --list 'linux-image*'Desired=Unknown/Install/Remove/Purge/Hold| Status=Not/Inst/Conf-files/Unpacked/halF-conf/Half-inst/trig-aWait/Trigpend|/ Err?=(none)/Reinst-required (Status,Err: uppercase=bad)||/ Name Version Architecture Description+++-==============================================-============================- ============================-=================================================== ===============================================un linux-image <none> <none> (no description available)un linux-image-4.2.0-27-generic <none> <none> (no description available)un linux-image-4.2.0-42-generic <none> <none> (no description available)iF linux-image-4.4.0-103-generic 4.4.0-103.126 amd64 Linux kernel image for version 4.4.0 on 64 bit x86 SMPun linux-image-4.4.0-59-generic <none> <none> (no description available)un linux-image-4.4.0-62-generic <none> <none> (no description available)un linux-image-4.4.0-63-generic <none> <none> (no description available)un linux-image-4.4.0-64-generic <none> <none> (no description available)un linux-image-4.4.0-72-generic <none> <none> (no description available)un linux-image-4.4.0-77-generic <none> <none> (no description available)rc linux-image-4.4.0-81-generic 4.4.0-81.104 amd64 Linux kernel image for version 4.4.0 on 64 bit x86 SMPrc linux-image-4.4.0-83-generic 4.4.0-83.106 amd64 Linux kernel image for version 4.4.0 on 64 bit x86 SMPii linux-image-4.4.0-96-generic 4.4.0-96.119 amd64 Linux kernel image for version 4.4.0 on 64 bit x86 SMPii linux-image-4.4.0-97-generic 4.4.0-97.120 amd64 Linux kernel image for version 4.4.0 on 64 bit x86 SMPrc linux-image-extra-4.2.0-27-generic 4.2.0-27.32~14.04.1 amd64 Linux kernel extra modules for version 4.2.0 on 64 bit x86 SMPrc linux-image-extra-4.2.0-42-generic 4.2.0-42.49~14.04.1 amd64 Linux kernel extra modules for version 4.2.0 on 64 bit x86 SMPiU linux-image-extra-4.4.0-103-generic 4.4.0-103.126 amd64 Linux kernel extra modules for version 4.4.0 on 64 bit x86 SMPrc linux-image-extra-4.4.0-59-generic 4.4.0-59.80 amd64 Linux kernel extra modules for version 4.4.0 on 64 bit x86 SMPrc linux-image-extra-4.4.0-62-generic 4.4.0-62.83 amd64 Linux kernel extra modules for version 4.4.0 on 64 bit x86 SMPrc linux-image-extra-4.4.0-63-generic 4.4.0-63.84 amd64 Linux kernel extra modules for version 4.4.0 on 64 bit x86 SMPrc linux-image-extra-4.4.0-64-generic 4.4.0-64.85 amd64 Linux kernel extra modules for version 4.4.0 on 64 bit x86 SMPrc linux-image-extra-4.4.0-72-generic 4.4.0-72.93 amd64 Linux kernel extra modules for version 4.4.0 on 64 bit x86 SMPrc linux-image-extra-4.4.0-77-generic 4.4.0-77.98 amd64 Linux kernel extra modules for version 4.4.0 on 64 bit x86 SMPrc linux-image-extra-4.4.0-81-generic 4.4.0-81.104 amd64 Linux kernel extra modules for version 4.4.0 on 64 bit x86 SMPrc linux-image-extra-4.4.0-83-generic 4.4.0-83.106 amd64 Linux kernel extra modules for version 4.4.0 on 64 bit x86 SMPii linux-image-extra-4.4.0-96-generic 4.4.0-96.119 amd64 Linux kernel extra modules for version 4.4.0 on 64 bit x86 SMPii linux-image-extra-4.4.0-97-generic 4.4.0-97.120 amd64 Linux kernel extra modules for version 4.4.0 on 64 bit x86 SMPiU linux-image-generic 4.4.0.103.108 amd64 Generic Linux kernel image I thought about uninstalling one of them to make room for the new one, but uname -r shows 4.4.0.96-generic as the current kernel, not 4.4.0-97-generic. I'm not sure why the more recent kernel isn't being used, and I don't want to uninstall either one if I don't have to. claude@shannon:~$ uname -r4.4.0-96-generic sudo apt-get autoremove fails because /boot is too full gzip: stdout: No space left on device(and so on) How do I install the latest kernel and remove the old kernel packages? | If your tail supports it, use tail -F , it works nicely with disappearing and re-appearing files. Just make sure you start tail from a directory which will stay in place. -F is short-hand for --follow=name --retry : tail will follow files by name rather than file descriptor, and will retry when files are inaccessible ( e.g. because they’ve been deleted). (A number of bugs relating to --follow=name with --retry were fixed in coreutils 8.26, so you may run into issues with earlier versions; e.g. retrying when the directory containing the tailed file is deleted appears to only work in all cases with version 8.26 or later.) | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/410477",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/192353/"
]
} |
410,531 | In the sequence of five commands below, all depend on single-quotes to hand off possible variable substitution to the called bash shell rather than the calling shell. The calling user is xx , but the called shell will be run as user yy . The first command substitutes $HOME with the calling shell's value because the called shell is not a login shell. The second command substitutes the value of $HOME loaded by a login shell, so it is the value belonging to user yy . The third command does not rely on a $HOME value and creates a file in the guessed home directory of user yy . Why does the fourth command fail? The intention is that it writes the same file, but relying on the $HOME variable belonging to user yy to ensure it actually does end up in her home directory. I don't understand why a login shell breaks the behaviour of a here-doc command passed in as a static single-quoted string. The failure of the fifth command verifies that this problem is not about variable substitution. xx@host ~ $ sudo -u yy bash -c 'echo HOME=$HOME'HOME=/home/xxxx@host ~ $ sudo -iu yy bash -c 'echo HOME=$HOME'HOME=/home/yyxx@host ~ $ sudo -u yy bash -c 'cat > /home/yy/test.sh << "EOF"> script-content> EOF> 'xx@host ~ $ sudo -iu yy bash -c 'cat > $HOME/test.sh << "EOF"> script-content> EOF> 'bash: warning: here-document at line 0 delimited by end-of-file (wanted `EOFscript-contentEOF')xx@host ~ $ sudo -iu yy bash -c 'cat > /home/yy/test.sh << "EOF"> script-content> EOF> 'bash: warning: here-document at line 0 delimited by end-of-file (wanted `EOFscript-contentEOF') These commands were issued on a Linux Mint 18.3 Cinnamon 64-bit system, which is based on Ubuntu 16.04 (Xenial Xerus). Update: The here-doc aspect is just clouding the issue. Here's a simplification of the problem: $ sudo bash -c 'echo 1> echo 2'12$ sudo -i bash -c 'echo 1> echo 2'1echo 2 Why does the first of those two commands preserve the linebreak and the second does not? sudo is common to both commands, yet seems to be escaping/filtering/interpolating differently depending on nothing but the "-i" option. | The documentation for -i states: The -i (simulate initial login) option runs the shell specified by the password database entry of the target user as a login shell. This means that login-specific resource files such as .profile or .login will be read by the shell. If a command is specified, it is passed to the shell for execution via the shell's -c option. That is, it genuinely runs the user's login shell, and then passes whatever command you gave sudo to it using -c - unlike what sudo cmd arg arg usually does without the -i option. Ordinarily, sudo just uses one of the exec* functions directly to start the process itself, with no intermediate shell and all arguments passed through exactly as-is. With -i , it sets up the environment, runs the user's shell as a login shell, and reconstructs the command you asked to run as an argument to bash -c . In your case, it runs (say, approximately) /bin/bash -c "bash -c ' ... '" (imagine quoting that works). The problem lies in how sudo turns the command you wrote into something that -c can deal with , explained in the next section. The last section has some possible solutions, and in between is some debugging and verification technique, Why does this happen? When passing the command to -c , it needs some preprocessing to make it do the right thing, which sudo does in advance of running the shell. For example, if your command is: sudo -iu yy echo 'three spaces' then those spaces need to be escaped in order for the command to mean the same thing (i.e., for the single argument not to be split into two words). What ends up being run is: /bin/bash -c 'echo three\ \ \ spaces' Let's start with your simplified command: sudo bash -c 'echo 1echo 2' In this case, sudo changes user, then runs execvp("bash", \["bash", "-c", "echo 1\necho 2"\]) (for an invented array literal syntax). With -i : sudo -i bash -c 'echo 1echo 2' instead it changes user, then runs execv("/bin/bash", ["-bash", "-c", "bash -c echo\\ 1\\\necho\\ 2"]) , where \\ equates to a literal \ and \n is a linebreak. It's escaped the spaces and the newline in your main command by preceding them with backslashes. That is, there's an outer login shell, which has two arguments: -c and your entire command, reconstructed into a form the shell is expected to understand correctly. Unfortunately, it doesn't. The inner bash command ultimately tries to run: echo 1\echo 2 where the first physical line ends with a line continuation (backslash followed by newline), which is deleted entirely. The logical line is then just echo 1echo 2 , which doesn't do what you wanted. There's an argument that this is a flaw in sudo 's escaping, given the standard behaviour of backslash-newline pairs. I think it should be safe to leave them unescaped here. The same happens for your command with a here-document. It runs as, roughly: /bin/bash -c 'bash -c cat\ \<\<\ \"EOF\"\\012script-content\\012EOF\\012' where \012 represents an actual newline - sudo has inserted a backslash before each of them, just like the spaces. Note the double-escaping on \\012 : that's ps 's rendition of an actual backslash followed by newline, which I'm using here (see below). What eventually runs is: bash -c 'cat << "EOF"\script-content\EOF\' with line continuations \ + newline everywhere, which are just removed. That makes it one long line, with no actual newlines in it, and an invalid heredoc: bash -c 'cat << "EOF"script-contentEOF' So that's your problem: the inner bash process gets only one line of command, and so the here-document never gets a chance to end (or start). I have some imperfect solutions at the bottom, but this is the underlying cause. How can you check what's happening? To get those commands out correctly quoted and validate what was happening I modified my login shell's profile file ( .profile , .bash_profile , .zprofile , etc) to say just: ps awx|grep $$ That shows me the command line of the running shell at the time and gives me an extra couple of lines of output before the warning. hexdump -C /proc/$$/cmdline will also be helpful on Linux. What can you do about it? I don't see an obvious and reliable way of getting what you want out of this. You don't want sudo to touch your command at all if possible. One option that will largely work for a simple case is to pipe the commands into the shell, rather than specifying them on the command line: printf 'cat > ... << ... \n ...' | sudo -iu yy That requires careful internal escaping still. Probably better is just to put them into a temporary script file and run them that way: f=`mktemp`printf 'command' > "$f"chmod +r "$f"sudo -iu yy "$f"rm "$f" A made-up filename of your own will work too. Depending on your sudoers settings you might be able to keep a file descriptor open and have it read from that as a file ( /dev/fd/3 ) if you really don't want it on disk, but a real file is going to be easier. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/410531",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/86392/"
]
} |
410,534 | I know the Kali is based on Debian, but if I install Debian and download all of the related kali packages, is there even a difference? | Adding to @EightBitTony's answer: Kali is a special distro aimed at pentesting, and as such is quite different from a standard Linux distro. For instance: System is supposed to be used as root and in a single-user environment Network services are disabled by default and will not persist across reboots Custom kernel So, assuming you manage to download and install all Kali packages on your Debian, you'll have something that still is significantly different , in essence and in behavior, than a standard Kali install. My humble advice would be not to mix the two and use the real thing (Kali) if/when you really need it. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/410534",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/260948/"
]
} |
410,550 | ls returns output in several columns, whereas ls|cat returns byte-identical output with ls -1 for directories I've tried. Still I see ls -1 piped in answers, like ls -1|wc -l . Is there ever a reason to prefer ls -1 ? Why does ...|cat change the output of ls ? | ls tests whether output is going to a terminal. If the output isn't going to a terminal, then -1 is the default. (This can be overridden by one of the -C , -m , or -x options.) Thus, when ls is used in a pipeline and you haven't overridden it with another option, ls will use -1 . You can rely on this because this behavior is required by POSIX POSIX Specification POSIX requires -1 as the default whenever output is not going to a terminal: The POSIX spec : The default format shall be to list one entry per line to standard output; the exceptions are to terminals or when one of the -C, -m, or -x options is specified. If the output is to a terminal, the format is implementation-defined. Those three options which override the default single-column format are: -C Write multi-text-column output with entries sorted down the columns, according to the collating sequence. The number of text columns and the column separator characters are unspecified, but should be adapted to the nature of the output device. This option disables long format output. -m Stream output format; list pathnames across the page, separated by a <comma> character followed by a <space> character. Use a <newline> character as the list terminator and after the separator sequence when there is not room on a line for the next list entry. This option disables long format output. -x The same as -C, except that the multi-text-column output is produced with entries sorted across, rather than down, the columns. This option disables long format output. GNU Documentation From GNU ls manual : ‘-1’ ‘--format=single-column’ List one file per line. This is the default for ls when standard output is not a terminal . See also the -b and -q options to suppress direct output of newline characters within a file name. [Emphasis added] Examples Let's create three files: $ touch file{1..3} When output goes to a terminal, GNU ls chooses to use a multi-column format: $ lsfile1 file2 file3 When output goes to a pipeline, the POSIX spec requires that single-column is the default: $ ls | catfile1file2file3 The three exceptions which override the default single-column behavior are -m for comma-separated, -C for columns sorted down, and -x for columns sorted across: $ ls -m | catfile1, file2, file3$ ls -C | catfile1 file2 file3$ ls -x | catfile1 file2 file3 | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/410550",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/263916/"
]
} |
410,579 | I am using Ubuntu 16.04 LTS . I have python3 installed. There are two versions installed, python 3.4.3 and python 3.6 . Whenever I use python3 command, it takes python 3.4.3 by default. I want to use python 3.6 with python3 . python3 --version shows version 3.4.3 I am installing ansible which supports version > 3.5 . So, whenever, I type ansible in the terminal, it throws error because of python 3.4 sudo update-alternatives --config python3update-alternatives: error: no alternatives for python3 | From the comment: sudo update-alternatives --config python Will show you an error: update-alternatives: error: no alternatives for python3 You need to update your update-alternatives , then you will be able to set your default python version. sudo update-alternatives --install /usr/bin/python python /usr/bin/python3.4 1sudo update-alternatives --install /usr/bin/python python /usr/bin/python3.6 2 Then run : sudo update-alternatives --config python Set python3.6 as default. Or use the following command to set python3.6 as default: sudo update-alternatives --set python /usr/bin/python3.6 | {
"score": 9,
"source": [
"https://unix.stackexchange.com/questions/410579",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/265641/"
]
} |
410,608 | I have a remote host with a file I want to override with scp.this is a simple scp 'myfile.ext' '[email protected]:/bar/baz' I would like to also "rename" the original "/bar/baz/myfile.xt" into a file, and not override it with the new version. A simple "/bar/baz/myfile.xt~" is enough, but the best would be a counter or the current date. is there some way to do it with scp? I would like to minimize the "scp" calls, because I need to always put an interactive password (no, I can't change the authentication method) | You want to rename the original /bar/baz/myfile.xt as /bar/baz/myfile.xt~ or better still, with a counter or date suffix. You cannot do this directly with scp , but here are a couple of alternatives to your original command. Using rsync rsync -ab myfile.ext [email protected]:/bar/baz The -b flag tells rsync to make a backup if there is a change to the target file. The default is to append ~ but you can change that default. For example this will use today's date (as defined on the source machine): rsync -ab --suffix ".$(date +'%Y%m%d')" myfile.ext [email protected]:/bar/baz Using ssh with scp . I've assumed that baz is the name of the target file rather than a directory in which the source file is to be copied: ssh [email protected] 'cp -p /bar/baz /bar/baz."$(date +'%Y%m%d')"' &&scp -p myfile.ext [email protected]:/bar/baz You could use mv instead of cp if you preferred, but this would lose any non-standard permissions and hard file links on the true target file. The rsync option is cleaner, but it's not always installed. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/410608",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/20428/"
]
} |
410,636 | I know how to create an arithmetic for loop in bash . How can one do an equivalent loop in a POSIX shell script? As there are various ways of achieving the same goal, feel free to add your own answer and elaborate a little on how it works. An example of one such bash loop follows: #!/bin/bashfor (( i=1; i != 10; i++ ))do echo "$i"done | I have found useful information in Shellcheck.net wiki , I quote: Bash¹: for ((init; test; next)); do foo; done POSIX: : "$((init))" while [ "$((test))" -ne 0 ]; do foo; : "$((next))"; done though beware that i++ is not POSIX so would have to be translated, for instance to i += 1 or i = i + 1 . : is a null command that always has a successful exit code. "$((expression))" is an arithmetic expansion that is being passed as an argument to : . You can assign to variables or do arithmetic/comparisons in the arithmetic expansion. So the above script in the question can be POSIX-wise re-written using those rules like this: #!/bin/sh: "$((i=1))"while [ "$((i != 10))" -ne 0 ]do echo "$i" : "$((i = i + 1))"done Though here, you can make it more legible with: #!/bin/shi=1while [ "$i" -ne 10 ]do echo "$i" i=$((i + 1))done as in init , we're assigning a constant value, so we don't need to evaluate an arithmetic expression. The i != 10 in test can easily be translated to a [ expression, and for next , using a shell variable assignment as opposed to a variable assignment inside an arithmetic expression, lets us get rid of : and the need for quoting. Beside i++ -> i = i + 1 , there are more translations of ksh/bash-specific constructs that are not POSIX that you might have to do: i=1, j=2 . The , arithmetic operator is not really POSIX (and conflicts with the decimal separator in some locales with ksh93). You could replace it with another operator like + as in : "$(((i=1) + (j=2)))" but using i=1 j=2 would be a lot more legible. a[0]=1 : no arrays in POSIX shells i = 2**20 : no power operator in POSIX shell syntax. << is supported though so for powers of two, one can use i = 1 << 20 . For other powers, one can resort to bc : i=$(echo "3 ^ 20" | bc) i = RANDOM % 3 : not POSIX. The closest in the POSIX toolchest is i=$(awk 'BEGIN{srand(); print int(rand() * 3)}') . ¹ technically, that syntax is from the ksh93 shell and is also available in zsh in addition to bash | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/410636",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/126755/"
]
} |
410,668 | Installed Debian Stretch (9.3). Installed Vim and removed Nano. Vim is selected as the default editor. Every time I run crontab -e , I get these warnings: root@franklin:~# crontab -eno crontab for root - using an empty one/usr/bin/sensible-editor: 25: /usr/bin/sensible-editor: /bin/nano: not found/usr/bin/sensible-editor: 28: /usr/bin/sensible-editor: nano: not found/usr/bin/sensible-editor: 31: /usr/bin/sensible-editor: nano-tiny: not foundNo modification made I've tried reconfiguring the sensible-utils package, but it gives no input (indicating success with whatever it's doing), but the warnings still appear. root@franklin:~# dpkg-reconfigure sensible-utilsroot@franklin:~# Although these warnings don't prevent me from doing anything, I find them quite annoying. How can I get rid of them? | I found my own answer and so I'm posting it here, in case it helps someone else. In the root user's home directory, /root , there was a file alled .selected_editor , which still retained this content: # Generated by /usr/bin/select-editorSELECTED_EDITOR="/bin/nano" The content suggests that the command select-editor is used to select a new editor, but at any rate, I removed the file (being in a bad mood and feeling the urge to obliterate something) and was then given the option of selecting the editor again when running crontab -e , at which point I selected vim.basic , and all was fine after that. The new content of the file reflects that selection now: # Generated by /usr/bin/select-editorSELECTED_EDITOR="/usr/bin/vim.basic" | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/410668",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/5838/"
]
} |
410,689 | Yes, I've seen that there's already a similar question , but I came across kill -- -0 and was wondering what -- is doing? | In UNIX/Linux world two dashes one after other mean end of options. For example if you want to search for string -n with grep you should use command like: grep -- -n file If you want to get only the line number in above case you should use grep -l -- -n file So the command kill -- -0 try to send signal to process with ID -0 (minus zero) | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/410689",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/264975/"
]
} |
410,708 | Why bother? Clearing scrollback buffer is handy in many ways, for example, when I wish to run some command with long output, and want to quickly scroll to start of this output. When scrollback buffer is cleared, I can just scroll to top, and will be done. Some considerations: There is clear command, according to man, clear clears your screen if this is possible, including its scrollback buffer (if the extended "E3" capability is defined). In gnome-terminal clear does not clear scrollback buffer. (What is "E3" capability, though?) There is also reset , which clears, but it does a little bit more than that, and it is really slow (on my system it takes more than a second, which is significant delay for humans to be noticed). And there is echo -ne '\ec' or echo -ne '\033c' , which does the job. And indeed it is much faster than reset . The question is, what is \ec sequence, how it differs from what clear and reset does, and why there is no separate command for it? There is also readline's C-l key sequence, which by default bound to clear-screen command (I mean, readline command, not shell command). What is this command? Which escape sequence it emits? How does it actually work? Does it run shell command? Or what? Again, in gnome-terminal, it seems like it works just by spiting out blank lines until prompt appear in top line of terminal. Not sure about other terminal emulators. This is very cumbersome behavior. It pollutes scrollback with chunks of emptiness, so you must scroll up more, and more. It is like a hack, rather than clean solution. Another question is, is there a readline command for mentioned \ec sequence? I want to bound it to C-l instead because I always want to clear scrollback buffer when I clear the screen. And another question is how to just type such escape sequence into terminal, to perform desired action? Then do not have to think about binding C-l to another readline command (if such command exists). I tried typing Esc , then c but this does not work. UPDATE This question answered mostly here: https://unix.stackexchange.com/a/375784/257159 . It is very good answer which explains almost all questions asked here. | From the man bash 's readline section : clear-display (M-C-l) Clear the screen and, if possible, the terminal's scrollback buffer, then redraw the current line, leaving the current line at the top of the screen. clear-screen (C-l) Clear the screen, then redraw the current line, leaving the current line at the top of the screen. With an argu‐ ment, refresh the current line without clearing the screen. so press control + alt + L | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/410708",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/257159/"
]
} |
410,723 | This question appears to be addressed properly in several questions and otherplaces easily found with Google, but I don't find the solution satisfactoryfor the reasons explained below. But just for completion, I've included somerelevant links: https://askubuntu.com/questions/138284/how-to-downgrade-a-package-via-apt-get https://askubuntu.com/questions/428772/how-to-install-specific-version-of-some-package/428778 https://askubuntu.com/questions/26498/choose-gcc-and-g-version ... And others. However, this question is regarding installing a very specific version of GCC in Kali Linux, which does not appear readily available as a specific package. In particular, the question is regarding how to install version 6.3.0, as I need this version to compile a particular program: https://www.reddit.com/r/Monero/comments/6d0ah8/xmrig_miner_new_release/ (as a bonus question, if there is a more sane way to fix this particularissue without using a different version of GCC, feel free to answer, but I believe this question is general and I would like to know how to do it regardless of how to make the aforementioned program link correctly) The versions which are available to install of any package, e.g. gcc, can bedetermined with: apt-cache showpkg gcc Which will list the available versions under "versions:", e.g. 4:7.2.0-1d14:4.9.2-2 Installation is then as simple as issuing apt-get install gcc:4:4.9.2-2 This will install the older version 4:4.9.2-2, by simply (I believe)overwriting the 7.2.0-1d1 install, if present. To get version 4:4.9.2-2 available at all, I had to add debhttp://old.kali.org/kali sana main non-free contrib to my /etc/apt/sources.list file and then run apt-get update . However, what if the version I need is not listed? I've been experimenting with various sources, e.g. those found here: http://snapshot.debian.org/ and at various other questions and websites from Google searches. Most of them give me ignore or errors, e.g. as follows Ign:3 http://snapshot.debian.org/archive/debian/20091004T111800Z lenny InRelease Even if this would work, it seems to be a very bad approach to get aparticular version installed, as adding some arbitrary source might not have the particular version I want. If I search on snapshot.debian.org for gcc, I get only very old versions: http://snapshot.debian.org/package/gcc/ I eventually became frustrated with this approach and compiled GCC 6.3.0 from the source tarball. The compilation was successful, but then I'm faced with how to install it. I'm cautious about running make install as I fear it will tamper with apt and dpkg and possibly break the system. Instead, I attempted to run it from the build directory, directly. I tried to simply add the build directory as the first entry in my PATH, which didn't work. Then, I attempted to rename /usr/bin/gcc and do a symlink from /usr/bin/gcc to where my gcc-6.3.0 executable lives. This presents the following problem: cc: error trying to exec 'cc1': execvp: No such file or directory, which This was fixed with another entry in my PATH. Then, I get this error: /usr/include/stdio.h:34:21: fatal error: stddef.h: No such file or directory Which I assume is because of a missing entry in /usr/lib/gcc/x86_64-linux-gnu . I tried to make a symlink from 6 to 6.3.0, but this wasn't sufficient. I also tried to actually copy everything with cp -R , same result. This should be a 64-bit program, but I also considered the same for /usr/lib/gcc/i686-linux-gnu . I'm sure I could start doing strace to see where it attempts to open the files from, read log files, read the source, and eventually I imagine I'd be able to figure out how to hack together a poorly conceived solution. But it would be nice if someone could tell me how to do this in a sane manner. | How to install a specific version of GCC in Kali Linux? GCC 6 is available on kali linux it can be installed as follow : apt install g++-6 gcc-6 To switch between gcc6 and gcc7 update-alternatives --install /usr/bin/gcc gcc /usr/bin/gcc-7 1 --slave /usr/bin/g++ g++ /usr/bin/g++-7update-alternatives --install /usr/bin/gcc gcc /usr/bin/gcc-6 2 --slave /usr/bin/g++ g++ /usr/bin/g++-6update-alternatives --config gcc Sample output: There are 2 choices for the alternative gcc (providing /usr/bin/gcc). Selection Path Priority Status------------------------------------------------------------* 0 /usr/bin/gcc-6 2 auto mode 1 /usr/bin/gcc-6 2 manual mode 2 /usr/bin/gcc-7 1 manual modePress <enter> to keep the current choice[*], or type selection number: Select your default gcc version. on 2017-08-05 the gcc-6 version is upgraded from 6.3.0 to 6.4.0 . Installing xmrig following the build's instructions. apt-get install git build-essential cmake libuv1-dev libmicrohttpd-devgit clone https://github.com/xmrig/xmrig.gitcd xmrigmkdir buildcd buildcmake ..make Building a specific gcc version 6.3.0 Download the tarball from the closest mirror : GCC Releases wget https://ftp.gnu.org/gnu/gcc/gcc-6.3.0/gcc-6.3.0.tar.bz2tar xvjf gcc-6.3.0.tar.bz2cd gcc-6.3.0apt build-dep gcc./contrib/download_prerequisitescd ..mkdir objdircd objdir$PWD/../gcc-6.3.0/configure --prefix=/usr/bin/gcc-6.3 --enable-languages=c,c++,fortran,go --disable-multilibmake -j 8make install Add gcc-6.3 to update-alternatives Important : The --disable-multilib option is required to configure and build gcc for the current architecture. GCC WIKI : Installing GCC | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/410723",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/263942/"
]
} |
410,737 | I have tried using crontab, but it is limited to minutes. Is there any other option? I have also tried something like: watch -n 1 sh /path-to-script/try.sh But whenever I close the terminal it stops.I want something which continuously works in the background. | Use a script with the while loop and nohup . the_script.sh : while :; do /path-to-script/try.sh sleep 1done Run the_script.sh immune to hangups: nohup /path/to/the_script.sh > /dev/null Replace /dev/null with some file path if you care what's in the stdout . | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/410737",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/265759/"
]
} |
410,750 | I want to suppress errors in my sub-shell after a certain point. I wrote script to demonstrate the situation: worked=false(echo Starting subshell process \ && echo If this executes process is considered success \ && false \ && echo run if possible, but not an error if failed) \ && worked=trueecho $worked I want to report back to the outer shell that the process worked. I also thought about putting the worked variable inside the subshell: && echo This works process worked: \ && worked=true \ && false \ && echo run if possible, but not an error if failed) But this doesn't work either because setting a variable inside the subshell doesn't effect the main script. | How about this worked=false( set -e echo Starting subshell process echo If this executes process is considered success false echo run if possible, but not an error if failed || true)[[ 0 -eq $? ]] && worked=trueecho "$worked" The set -e terminates the subshell as soon as an unprotected error is found. The || true construct protects a statement that might fail, where you don't want the subshell to terminate. If you just want to know if the subshell succeeded you can dispense with the $worked variable entirely ( set -e ...)if [[ 0 -eq $? ]]then echo "Success"fi Note that if you want to use set -e to abort execution in the subshell as soon as a command fails, you cannot use a construct such as ( set -e; ... ) && worked=true or if ( set -e; ...); then ... fi . This is documented in the man page for bash but I missed it first time round: If a compound command or shell function sets -e while executing in a context where -e is ignored, that setting will not have any effect until the compound command or the command containing the function call completes. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/410750",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/16792/"
]
} |
410,768 | I am trying to get Kerberos PAM to pull a ticket and not destroy it after an RStudio login on CentOS 7. My rstudio file in /etc/pam.d/ looks like: #%PAM-1.0 auth required pam_krb5.so retain_after_close debug session requisite pam_krb5.so retain_after_close debug account required pam_krb5.so debug I know that RStudio is communicating fine with the PAM Stack because if I delete the first line, RStudio will not login. I an also do other manipulations that let me know the two are in sync. Per the RStudio documentation, if I run the command: pamtester --verbose rstudio <user> authenticate setcred open_session After entering my password, a ticket is created in /tmp called krb5cc_(uid) which is what I would expect. I can make the above pamtester line fail to pull a ticket by removing the setcred flag which tells me that this the key component. A look in the Keberos PAM documentation says that session performs the same as auth but it runs with the command pam_setcred(PAM_ESTABLISH_CRED) flag, which is what I want. The same documentation says that if I add retain_after_close then the ticket should be retained. However, this is not happening and I'm not even sure it's actually pulling the ticket. Any help is appreciated, I have tried nearly every combination of flags and parameters in the PAM file as possible but to no avail. Kerberos is a nightmare. LMK what else I can add to help. The log files are not useful unfortunately as they do not log an error due to the fact that PAM "silently fails" if a line is not understood. | How about this worked=false( set -e echo Starting subshell process echo If this executes process is considered success false echo run if possible, but not an error if failed || true)[[ 0 -eq $? ]] && worked=trueecho "$worked" The set -e terminates the subshell as soon as an unprotected error is found. The || true construct protects a statement that might fail, where you don't want the subshell to terminate. If you just want to know if the subshell succeeded you can dispense with the $worked variable entirely ( set -e ...)if [[ 0 -eq $? ]]then echo "Success"fi Note that if you want to use set -e to abort execution in the subshell as soon as a command fails, you cannot use a construct such as ( set -e; ... ) && worked=true or if ( set -e; ...); then ... fi . This is documented in the man page for bash but I missed it first time round: If a compound command or shell function sets -e while executing in a context where -e is ignored, that setting will not have any effect until the compound command or the command containing the function call completes. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/410768",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/265774/"
]
} |
410,769 | I'm trying to install tails on an USB drive. Up to now I already verfied my tails .iso and I followed the instructions on this website to install it: https://tails.boum.org/install/expert/usb/index.en.html However at number 3/7, where I have to install the tails-installer I get the following problem:When running sudo apt update I get the following warning: W: Fehlschlag beim Holen von http://ppa.launchpad.net/tails-team/tails-installer/ubuntu/dists/trusty/main/binary-i386/Packages 404 Not Found meaning the address, where the package lies is not accessible any more. Do you have any solution to this problem? I am running Linux Mint on a bootable USB Drive. | How about this worked=false( set -e echo Starting subshell process echo If this executes process is considered success false echo run if possible, but not an error if failed || true)[[ 0 -eq $? ]] && worked=trueecho "$worked" The set -e terminates the subshell as soon as an unprotected error is found. The || true construct protects a statement that might fail, where you don't want the subshell to terminate. If you just want to know if the subshell succeeded you can dispense with the $worked variable entirely ( set -e ...)if [[ 0 -eq $? ]]then echo "Success"fi Note that if you want to use set -e to abort execution in the subshell as soon as a command fails, you cannot use a construct such as ( set -e; ... ) && worked=true or if ( set -e; ...); then ... fi . This is documented in the man page for bash but I missed it first time round: If a compound command or shell function sets -e while executing in a context where -e is ignored, that setting will not have any effect until the compound command or the command containing the function call completes. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/410769",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/265777/"
]
} |
410,784 | Given input: 144.252.36.69afrloop=32235330165603144.252.36.69afrloop=32235330165603144.252.36.69afrloop=32235330165603222.252.36.69afrloop=31135330165603222.252.36.69afrloop=31135330165603222.252.36.69afrloop=31135330165603222.252.36.69afrloop=31135330165603 How can I output: 144.252.36.69afrloop=32235330165603 3 times222.252.36.69afrloop=31135330165603 4 times | How about this worked=false( set -e echo Starting subshell process echo If this executes process is considered success false echo run if possible, but not an error if failed || true)[[ 0 -eq $? ]] && worked=trueecho "$worked" The set -e terminates the subshell as soon as an unprotected error is found. The || true construct protects a statement that might fail, where you don't want the subshell to terminate. If you just want to know if the subshell succeeded you can dispense with the $worked variable entirely ( set -e ...)if [[ 0 -eq $? ]]then echo "Success"fi Note that if you want to use set -e to abort execution in the subshell as soon as a command fails, you cannot use a construct such as ( set -e; ... ) && worked=true or if ( set -e; ...); then ... fi . This is documented in the man page for bash but I missed it first time round: If a compound command or shell function sets -e while executing in a context where -e is ignored, that setting will not have any effect until the compound command or the command containing the function call completes. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/410784",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/265789/"
]
} |
410,898 | What is the syntax error in this file? I can't spot it. set-k8s-azure() { export KUBECONFIG=~/.kube/config.azure-1 }set-k8s-minikube() { export KUBECONFIG=~/.kube/config.minikube }minikube() { if [[ $@ == start* ]]; then set-k8s-minikube fi command minikube "$@"}alias pulr='if output=$(git status --porcelain) && [ -z "$output" ]; then git pull --rebase; else git stash save "pulr WIP saved" && git pull --rebase && git stash pop; fi'alias vi=nvim source ~/.bash_aliases produces: bash: /home/niel/.bash_aliases: line 1: syntax error near unexpected token `('bash: /home/niel/.bash_aliases: line 1: `set-k8s-azure() { ' | I believe that the syntax error is here: set-k8s-minikube() { export KUBECONFIG=~/.kube/config.minikube } The {...} construct needs either a newline or a ; before the final } : set-k8s-minikube() { export KUBECONFIG=~/.kube/config.minikube; } Also, I'd advise that you use $HOME rather than ~ in scripts, partly because it serves as documentation and partly because $HOME behaves like a variable whereas ~ does not (see Why doesn't the tilde (~) expand inside double quotes? ). | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/410898",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/195851/"
]
} |
410,952 | I've heard about the Scrub of Death. However one can disable checksumming in ZFS datasets. If so, will that make the situation safer for a system that's not using ECC RAM? I'm not thinking of a NAS or anything like that - more of a workstation deployment with a single drive just to use the ZFS volume management and snapshots (and no need for fsck ) benefits. I don't want to use redundancy even. Will a bad memory location still completely destroy my storage if I disable ZFS checksums? | I've heard about the Scrub of Death. You should read this: http://jrs-s.net/2015/02/03/will-zfs-and-non-ecc-ram-kill-your-data/ Unless the memory in your system is absolute trash, it will almost certainly have fewer problems than your disks. If your system has an SSD and a "slow" CPU, the performance hit from calculating the checksum data will be negligible. My personal opinion on this is that, unless your CPU is 100% in use the majority of time (and sometimes even then), it's best to just let ZFS use checksums. I feel like there's much confusion in this topic. There is. Unfortunately, I don't have a better answer. If you ask this question on the ZFS on Linux mailing list, you'll get a much more detailed answer. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/410952",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/67203/"
]
} |
410,972 | I am setting up a file server with a shared directory. Inside, there are per-user folders that are readable by any user and a shared directory that is readable and writeable by any user. The per-user folders are simple enough. However, I am having some issues with the shared folder. I performed the standard procedure for making a set GID folder: # chown root shared# chmod -R ug+rwX shared# chgrp -R users shared# find shared -type d -exec chmod g+s "{}" \;# find shared -type d -exec setfacl -m "default:group::rwx" "{}" \; After ensuring all users are in the 'users' group, this works perfectly via direct console login, ssh, rsync, etc. However, there are some issues with samba. With the default samba config, the SGID bit and GID are propagated, but new files and folders do not have the group write bit set. This appears to be because the ACL is being ignored. According to Samba Ignoring POSIX ACLs , the solution is to add vfs objects = acl_xattr to smb.conf. When I set that, the group write permission is correctly set. However, the group is then set to the user's primary group instead of the group of the parent directory, which rather defeats the purpose of the set GID bit. I tried the other smb.conf adjustments noted in the link ( map acl inherit = yes , store dos attributes = yes , and inherit acls = yes ), but these had no effect. What's the proper way to make this work? | I've heard about the Scrub of Death. You should read this: http://jrs-s.net/2015/02/03/will-zfs-and-non-ecc-ram-kill-your-data/ Unless the memory in your system is absolute trash, it will almost certainly have fewer problems than your disks. If your system has an SSD and a "slow" CPU, the performance hit from calculating the checksum data will be negligible. My personal opinion on this is that, unless your CPU is 100% in use the majority of time (and sometimes even then), it's best to just let ZFS use checksums. I feel like there's much confusion in this topic. There is. Unfortunately, I don't have a better answer. If you ask this question on the ZFS on Linux mailing list, you'll get a much more detailed answer. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/410972",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/55879/"
]
} |
410,983 | Is there any consistent logic to it? some-command "${somevariable//some pattern/'how does this get parsed?'}" I've posted some conclusions and raw tests below as an "answer" but they're not a full answer by any means. The Bash man page appears silent on the subject. | As discussed in the comments, this seems to have changed between versions of Bash. I think this is the relevant change in bash-4.3-alpha ( changelog ): zz. When using the pattern substitution word expansion, bash now runs the replacement string through quote removal, since it allows quotes in that string to act as escape characters. This is not backwards compatible, so it can be disabled by setting the bash compatibility mode to 4.2. And the description for shopt -s compat42 ( online manual ): compat42 If set, bash does not process the replacement string in the pattern substitution word expansion using quote removal. The quoting single-quotes example: $ s=abc\'def; echo "'${s//\'/\'\\\'\'}'"'abc'\''def'$ shopt -s compat42$ s=abc\'def; echo "'${s//\'/\'\\\'\'}'"'abc\'\\'\'def'$ bash --version | head -1GNU bash, version 4.4.12(1)-release (x86_64-pc-linux-gnu) Workaround: put the replacement string in a variable, and don't use quotes inside the replacement: $ shopt -s compat42$ qq="'\''"; s=abc\'def; echo "'${s//\'/$qq}'";'abc'\''def'$ qq="'\''"; s=abc\'def; echo "'${s//\'/"$qq"}'";'abc"'\''"def' The funny thing is, that if the expansion is unquoted , then the quotes are removed after the substitution, in all versions. That is s=abc; echo ${s/b/""} prints ac . This of course doesn't happen with other expansions, e.g. s='a""c' ; echo ${s%x} outputs a""c . | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/410983",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/135943/"
]
} |
410,991 | I remember making an image that I never used. I cannot uninstall qemu-img because I have one vm that is being used. | As discussed in the comments, this seems to have changed between versions of Bash. I think this is the relevant change in bash-4.3-alpha ( changelog ): zz. When using the pattern substitution word expansion, bash now runs the replacement string through quote removal, since it allows quotes in that string to act as escape characters. This is not backwards compatible, so it can be disabled by setting the bash compatibility mode to 4.2. And the description for shopt -s compat42 ( online manual ): compat42 If set, bash does not process the replacement string in the pattern substitution word expansion using quote removal. The quoting single-quotes example: $ s=abc\'def; echo "'${s//\'/\'\\\'\'}'"'abc'\''def'$ shopt -s compat42$ s=abc\'def; echo "'${s//\'/\'\\\'\'}'"'abc\'\\'\'def'$ bash --version | head -1GNU bash, version 4.4.12(1)-release (x86_64-pc-linux-gnu) Workaround: put the replacement string in a variable, and don't use quotes inside the replacement: $ shopt -s compat42$ qq="'\''"; s=abc\'def; echo "'${s//\'/$qq}'";'abc'\''def'$ qq="'\''"; s=abc\'def; echo "'${s//\'/"$qq"}'";'abc"'\''"def' The funny thing is, that if the expansion is unquoted , then the quotes are removed after the substitution, in all versions. That is s=abc; echo ${s/b/""} prints ac . This of course doesn't happen with other expansions, e.g. s='a""c' ; echo ${s%x} outputs a""c . | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/410991",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/193482/"
]
} |
411,043 | I have multiple files to be pulled from remote server. For further processing of the files in the local server, I need to merge (concatenate) them into single file, which can't be done in the remote file though. I am not sure how scp work internally, but for the best performance I believe instead of writing those files into local directory and then merge, I feel, I should merge them on the fly and then write into single file. Can you please let me know if merging (appending) the files on fly during scp from remote to local files possible? If not any better idea? | Use SSH directly instead of scp and run cat . Where you would do: scp remote:{file1,file2...} local-dir Instead do: ssh remote cat file1 file2 ... > locale-file | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/411043",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/266019/"
]
} |
411,051 | I was reading about the differences between cron and anacron and I realized that anacron, unlike cron is not a daemon. So I'm wondering how does it work actually if it's not a daemon. | It uses a variety of methods to run: if the system is running systemd, it uses a systemd timer (in the Debian package, you’ll see it in /lib/systemd/system/anacron.timer ); if the system isn’t running systemd, it uses a system cron job (in /etc/cron.d/anacron ); in all cases it runs daily, weekly and monthly cron jobs (in /etc/cron.{daily,weekly,monthly}/0anacron ); it also runs at boot (from /etc/init.d/anacron or its systemd unit). | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/411051",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/265148/"
]
} |
411,091 | We would like to store millions of text files in a Linux filesystem, with the purpose of being able to zip up and serve an arbitrary collection as a service. We've tried other solutions, like a key/value database, but our requirements for concurrency and parallelism make using the native filesystem the best choice. The most straightforward way is to store all files in a folder: $ ls text_files/1.txt2.txt3.txt which should be possible on an EXT4 file system , which has no limit to number of files in a folder. The two FS processes will be: Write text file from web scrape (shouldn't be affected by number of files in folder). Zip selected files, given by list of filenames. My question is, will storing up to ten million files in a folder affect the performance of the above operations, or general system performance, any differently than making a tree of subfolders for the files to live in? | The ls command, or even TAB-completion or wildcard expansion by the shell, will normally present their results in alphanumeric order. This requires reading the entire directory listing and sorting it. With ten million files in a single directory, this sorting operation will take a non-negligible amount of time. If you can resist the urge of TAB-completion and e.g. write the names of files to be zipped in full, there should be no problems. Another problem with wildcards might be wildcard expansion possibly producing more filenames than will fit on a maximum-length command line. The typical maximum command line length will be more than adequate for most situations, but when we're talking about millions of files in a single directory, this is no longer a safe assumption. When a maximum command line length is exceeded in wildcard expansion, most shells will simply fail the entire command line without executing it. This can be solved by doing your wildcard operations using the find command: find <directory> -name '<wildcard expression>' -exec <command> {} \+ or a similar syntax whenever possible. The find ... -exec ... \+ will automatically take into account the maximum command line length, and will execute the command as many times as required while fitting the maximal amount of filenames to each command line. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/411091",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/99989/"
]
} |
411,122 | The request string in the example below interpolates version variable, but keeps the curly braces and I can't figure out why. #!/bin/shversion=2989request="http://example.com/?version={$version}&therest"echo "$request" Result: $ ~/script.shhttp://example.com/?version={2989}&therest Environment: $ echo $0-zsh | The { is before $ . It should be ${version} :) | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/411122",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/18297/"
]
} |
411,132 | I work in Maxima a lot (start it on the terminal with "rlwrap .../maxima" and sometimes I want to save a few (several) screens worth (scrolling) of calculations. I realize I can use xmaxima, a variant that can then save it to a text file - that works. But I also sometimes use scipy/python in the terminal, or even others. In general, is there a way to save several screens of interactive program input/output from the bash terminal to a file (possibly preserving 'word art', or 2D display)? I use terminator, though not sure it matters. Also, sometimes I work on a debian system and other times on Linux Mint. | This is what the script tool is for. It will save an entire terminal session - inputs and outputs: $ script sessionlog.txt[ do stuff ]$ exit$ ls sessionlog.txt | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/411132",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/131044/"
]
} |
411,138 | I use the following command to convert a decimal value into a time value : echo "1.5" | awk -F'.' '{printf $1 ":" "%.0f", $2 / 100 * 60}' outputs: 1:3 how can I make awk to add a trailing zero to the output so I would get: 1:30 ? | I don't think that adding a trailing 0 is a good approach.It may produce the intended output for the 1.5 as input,but if you need a general solution for other inputs,this approach will probably not work well. A better approach is to not split the integer part and the decimal parts,but to work with minutes, using the / and % operators to compute the correct hours and minutes , for example: awk '{printf "%d:%02d", ($1 * 60 / 60), ($1 * 60 % 60)}' <<< 1.5# prints 1:30awk '{printf "%d:%02d", ($1 * 60 / 60), ($1 * 60 % 60)}' <<< 1.50# prints 1:30awk '{printf "%d:%02d", ($1 * 60 / 60), ($1 * 60 % 60)}' <<< 1.7# prints 1:42awk '{printf "%d:%02d", ($1 * 60 / 60), ($1 * 60 % 60)}' <<< 1.05# prints 1:03 To handle negative values,you can introduce an abs function: awk 'function abs(v) {return v < 0 ? -v : v} {printf "%d:%02d", ($1 * 60 / 60), ($1 * 60 % 60)}' <<< -1.7# prints -1:42 | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/411138",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/240990/"
]
} |
411,141 | I have a bash script that continuously outputs some information when run. I need to Automatically run this when my system boots. Monitor this output and control it every once in a while remotely, using ssh. For this purpose, I would like to use tmux. So how do I approach this? For simplicity, let's say my shell script is this: filename: start.bash #!/bin/bash# just an example for simplicity watch date I need another script that runs this in tmux and be able to attach to this when I need, later. I am struggling at the part where I need to create a new tmux session with a name and make it run another shell script. Once I have this working, I can put this in another shell script and take care of the rest of the stuff. That is easy, I think. Can someone give me an example for this specific step please? | You can do this many ways. You can do it after you've created the session either with send-keys: tmux new -s "remote" -dtmux send-keys -t "remote" "start.bash" C-mtmux attach -t "remote" -d Or through the shell: tmux new -s "remote" -d "/bin/bash"tmux run-shell -t "remote:0" "start.bash"tmux attach -t "remote" -d | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/411141",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/180667/"
]
} |
411,159 | I know there are two "levels" of programs: User space and kernel space. My question is: I want to see only kernel programs,or better: programs on kernel space. Is this approach correct? ps -ef|grep "\["root 1 0 0 20:23 ? 00:00:00 init [4]root 2 0 0 20:23 ? 00:00:00 [kthreadd]root 3 2 0 20:23 ? 00:00:00 [ksoftirqd/0]root 5 2 0 20:23 ? 00:00:00 [kworker/0:0H]root 7 2 0 20:23 ? 00:00:06 [rcu_sched]root 8 2 0 20:23 ? 00:00:00 [rcu_bh]root 9 2 0 20:23 ? 00:00:00 [migration/0]root 10 2 0 20:23 ? 00:00:00 [migration/1]root 11 2 0 20:23 ? 00:00:00 [ksoftirqd/1]root 13 2 0 20:23 ? 00:00:00 [kworker/1:0H]root 14 2 0 20:23 ? 00:00:00 [migration/2].... | Kernel processes (or "kernel threads") are children of PID 2 ( kthreadd ), so this might be more accurate: ps --ppid 2 -p 2 -o uname,pid,ppid,cmd,cls Add --deselect to invert the selection and see only user-space processes. (This question was pretty much an exact inverse of this one .) In 2.4.* and older kernels, this PID 2 convention did not exist yet. | {
"score": 7,
"source": [
"https://unix.stackexchange.com/questions/411159",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/80389/"
]
} |
411,164 | I know that locate has to have a database generated and is much faster, and that find does not need a database generated and is not as fast. So in what situations is find and locate more efficient/affective/give a better end result? | Kernel processes (or "kernel threads") are children of PID 2 ( kthreadd ), so this might be more accurate: ps --ppid 2 -p 2 -o uname,pid,ppid,cmd,cls Add --deselect to invert the selection and see only user-space processes. (This question was pretty much an exact inverse of this one .) In 2.4.* and older kernels, this PID 2 convention did not exist yet. | {
"score": 7,
"source": [
"https://unix.stackexchange.com/questions/411164",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/241691/"
]
} |
411,187 | I have a file input: $ cat input1echo 12345 and I have the following program 1st version #include <stdio.h>#include <stdlib.h>int main() { system("/bin/bash -i"); return 0;} Now If I run it, $ gcc -o program program.c$ ./program < inputbash: line 1: 1echo: command not found$ exit Everything works as expected. Now I want to ignore the 1st character of the file input, so I make a call to getchar() before invoking system() . 2nd version: #include <stdio.h>#include <stdlib.h>int main() { getchar(); system("/bin/bash -i"); return 0;} Surprisingly, the bash exits instantly like there is no input. $ gcc -o program program.c$ ./program < input$ exit Question why bash is not receiving the input ? NOTE I tried some stuff and I figured out that forking a new child for the main process solves the problem: 3rd Version #include <stdio.h>#include <stdlib.h>#include <unistd.h>#include <sys/wait.h>int main() { getchar(); if (fork() > 0) { system("/bin/bash -i"); wait(NULL); } return 0;}$ gcc -o program program.c$ ./program < input$ 12345$ exit OS Ubuntu 16.04 64bit, gcc 5.4 | A file stream is defined to be : fully buffered if and only if it can be determined not to refer to an interactive device Since you're redirecting into standard input, stdin is non-interactive and so it's buffered. getchar is a stream function, and will cause the buffer to be filled from the stream, consuming those bytes, and then return a single byte to you. system just runs fork-exec , so the subprocess inherits all your open file descriptors as-is. When bash tries to read from its standard input, it will find that it's already at the end of file because all the content has been read by your parent process already. In your case, you want to consume only that single byte before handing over to the child process, so: The setvbuf() function may be used after the stream pointed to by stream is associated with an open file but before any other operation [...] is performed on the stream. Thus adding a suitable call before the getchar() : #include <stdio.h>#include <stdlib.h>int main() { setvbuf(stdin, NULL, _IONBF, 0 ); getchar(); system("/bin/bash -i"); return 0;} will do what you want, by setting stdin to be unbuffered ( _IONBF ). getchar will cause only a single byte to be read, and the rest of the input will be available to the subprocess. It might be better to use read instead, avoiding the whole streams interface, in this case. POSIX mandates certain behaviours when the handle can be accessed from both processes after a fork , but explicitly notes that If the only action performed by one of the processes is one of the exec functions [...], the handle is never accessed in that process. which means that system() doesn't (have to) do anything special with it, since it's just fork-exec . This is probably what your fork workaround is hitting. If the handle is accessed on both sides, then for the first one : If the stream is open with a mode that allows reading and the underlying open file description refers to a device that is capable of seeking, the application shall either perform an fflush() , or the stream shall be closed. Calling fflush() on a read stream will mean that: the file offset of the underlying open file description shall be set to the file position of the stream so the descriptor position should be reset back to 1 byte, the same as the stream's, and the subsequent subprocess will get its standard input starting from that point. Additionally, for the second (child's) handle : If any previous active handle has been used by a function that explicitly changed the file offset, except as required above for the first handle, the application shall perform an lseek() or fseek() (as appropriate to the type of handle) to an appropriate location. and I suppose "an appropriate location" might be the same (though it isn't further specified). The getchar() call "explicitly changed the file offset", so this case should apply. The intention of the passage is that working in either branch of the fork should have the same effect, so both fork() > 0 and fork() == 0 should work the same. Since nothing actually happens in this branch, though, it's arguable that none of these rules should be used at all for either parent or child. The exact result is probably platform-dependent - at least, exactly what counts as "can ever be accessed" isn't directly specified, nor which handle is first and second. There is also an earlier, overriding case for the parent process: If the only further action to be performed on any handle to this open file descriptor is to close it, no action need be taken. which arguably applies for your program since it just terminates afterwards. If it does then all the remaining cases, including the fflush() , should be skipped, and the behaviour you're seeing would be a deviation from the specification. It's arguable that calling fork() constitutes performing an action on the handle, but not explicit or obvious, so I wouldn't trust that. There's also enough "either" and "or" in the requirements that a lot of variation seems allowable. For multiple reasons I think the behaviour you're seeing may be a bug , or at least a generous interpretation of the specification. My overall reading is that, since in every case one branch of the fork doesn't do anything, none of these rules should have been applied and the descriptor position should have been ignored where it was. I can't be definitive about that, but it seems like the most straightforward reading. I wouldn't rely on the fork technique working. Your third version doesn't work for me here. Use setbuf / setvbuf instead. If possible, I'd even use popen or similar to set up the process with the necessary filtering explicitly, rather than relying on the vagaries of stream and file descriptor interactions. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/411187",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/266117/"
]
} |
411,202 | This question is similar to this one , but more specific. I have a -stable OpenBSD machine and I want to start following -current. I know about the upgrade procedure from one release to another. How can I go from a release to the latest snapshot? I can simply boot from the latest snapshot's bsd.rd and follow the upgrade procedure, but what about the "pre-upgrade steps" and the "configuration steps"? Are there any to apply when going from -stable to the latest snapshot? When I get a -current system and I want to update it again, what is the procedure? Should I build from sources or use the latest snapshot's bsd.rd again? In any case, are there any "configuration steps" involved, as in the link above? | Don't build from source. I've been following current for several years. You can do binary upgrades to new snapshots. And you can do a direct binary upgrade from release/stable to current. Reboot. At the prompt type: boot bsd.rd Go through the motions of upgrading.When it asks for a hostname, I use this one, it's quite fast mirrors.sonic.net When it asks for a path, change it to /pub/OpenBSD/snapshots/amd64/ Substitute amd64 for your architecture. Continue with the upgrade prompts Reboot after it's done. Change PKG_PATH export PKG_PATH=http://mirrors.sonic.net/pub/OpenBSD/snapshots/packages/amd64/ Add this to ~/.profile and /root/.profile PKG_PATH=http://mirrors.sonic.net/pub/OpenBSD/snapshots/packages/amd64/export PKG_PATH Then run doas pkg_add -u In the future, you won't have to change PKG_PATH or the bsd.rd file path. It will remember. Like pepperidge farm. To update to a new snapshot in the future, just boot bsd.rdfollow the promptsrebootdoas pkg_add -u One thing to note. When the upgrade to a new snapshot will take you to a new version number, like from 6.2 to 6.3 which will happen rather soon, booting bsd.rd and following the prompts will only allow you to download the new bsd.rd ramdisk. You must reboot after it's finished and re-enter bsd.rd to continue with the upgrade. But you'll only have to do this once every six months, and it's automatic. Just don't freak out when it only says it's downloading bsd.rd If you want to know if you should upgrade, just bookmark: http://mirrors.sonic.net/pub/OpenBSD/snapshots/amd64/ In your browser and visit it to check the dates on the archives. Don't forget to visit one directory up once in a while: http://mirrors.sonic.net/pub/OpenBSD/snapshots/ To snag ports.tar.gz and update your ports tree | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/411202",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/36146/"
]
} |
411,210 | I am running Fedora 27 with kernel 4.14.5 and have a problem that /etc/sysctl.conf is not being loaded upon boot. If I run: sudo sysctl -p after boot, my settings are loaded and everything works fine. What do I need to do to enable the loading of /etc/sysctl.conf or what alternatives are there to load it? | In systemd operating systems like Fedora, loading these settings is done with the systemd-sysctl commmand, run by the systemd-sysctl service. Your problem is that you have put the settings in the wrong configuration file. systemd-sysctl does not read /etc/sysctl.conf . It reads a whole bunch of *.conf files in (amongst other places) the /etc/sysctl.d directory. You should create such a file and put your settings there. Further reading Lennart Poettering et al. (2016). systemd-sysctl . systemd manual pages. Freedesktop.org. Lennart Poettering et al. (2016). sysctl.d . systemd manual pages. Freedesktop.org. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/411210",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/266129/"
]
} |
411,223 | I want to use BGP failover on three VMs. I installed a BGP daemon (BIRD) on the local VMs to achieve this and created a NIC with the floating IP: eth0:0 . However, I cannot 'up' the network interface on all VMs at the same time, but that is the behavior I need for BGP failover. I get the following error: [root@proxy2 network-scripts]# ifup eth0:0ERROR : [/etc/sysconfig/network-scripts/ifup-eth] Error, some other host (xxx) already uses address xxx. How can I disable this check? | In systemd operating systems like Fedora, loading these settings is done with the systemd-sysctl commmand, run by the systemd-sysctl service. Your problem is that you have put the settings in the wrong configuration file. systemd-sysctl does not read /etc/sysctl.conf . It reads a whole bunch of *.conf files in (amongst other places) the /etc/sysctl.d directory. You should create such a file and put your settings there. Further reading Lennart Poettering et al. (2016). systemd-sysctl . systemd manual pages. Freedesktop.org. Lennart Poettering et al. (2016). sysctl.d . systemd manual pages. Freedesktop.org. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/411223",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/78333/"
]
} |
411,260 | How can I execute each command pre-appended with another one? Example when I run: nmap -p 80 host I want it to run proxychains nmap -p 80 host even when I do not add proxychains intentionally. In other words: can I alias all commands at once with proxychains pre-appended? Bonus if this is something I can switch on/off. | Would it work to just run a full shell under proxychains ? Assuming it can deal with processes started by the shell properly. You could do with just $ proxychains bash and exit the shell at will. But if you really want to, you can abuse the DEBUG trap (with extdebug set) to mangle the commands the shell runs. This would run every command with time : $ shopt -s extdebug$ d() { eval "time $BASH_COMMAND"; return 1; }$ trap d DEBUG$ sleep 2real 0m2.010suser 0m0.000ssys 0m0.000s$ trap - DEBUG # turn it off, this still prints the 'time' output But the tricky part here is that it will also affect builtins, like trap or shopt themselves, so you'd probably want to add some exceptions for those... Also, stuff like cd somedir would turn into proxychains cd somedir , which probably will not work. This would also affect everything started from within functions etc. Maybe it's better to just have the function use proxychains only for those commands known to need it. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/411260",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/205303/"
]
} |
411,301 | I have this file 10.1.1.1 www1 10.1.1.2 www2 10.1.1.3 www3 I want to extract first IP address field and move it following place with http://www.foo.com=10.1.1.1/test.php 10.1.1.1 www1 # http://www.foo.com=10.1.1.1/test.php10.1.1.2 www2 # http://www.foo.com=10.1.1.2/test.php10.1.1.3 www3 # http://www.foo.com=10.1.1.3/test.php I can do this with for loop but i want to do it with sed with single liner trick. | sed 's@\([^ ]*\)\(.*\)@\1\2 #http://www.foo.com=\1/test.php@' I used @ as the delimiter not to have to backslash the slashes in the address. The IP address is matched by [^ ]* , i.e. non-whitespace at least zero times, and captured by \(\) into \1 . The rest of the line is captured into \2 by .* , i.e. anything. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/411301",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/29656/"
]
} |
411,304 | Suppose I have a non-associative array that has been defined like my_array=(foo bar baz) How can I check whether the array contains a given string? I’d prefer a solution that can be used within the conditional of an if block (e.g. if contains $my_array "something"; then ... ). | array=(foo bar baz foo)pattern=f*value=fooif (($array[(I)$pattern])); then echo array contains at least one value that matches the patternfiif (($array[(Ie)$value])); then echo value is amongst the values of the arrayfi $array[(I)foo] returns the index of the last occurrence of foo in $array and 0 if not found. The e flag is for it to be an e xact match instead of a pattern match. To check the $value is among a literal list of values, you could pass that list of values to an anonymous function and look for the $value in $@ in the body of the function: if ()(( $@[(Ie)$value] )) foo bar baz and some more; then echo "It's one of those"fi To know how many times the value is found in the array, you could use the ${A:*B} operator (elements of array A that are also in array B ): array=(foo bar baz foo)value=foosearch=("$value")(){print -r $# occurrence${2+s} of $value in array} "${(@)array:*search}" Or using pattern matching on the array elements: (){print -r $# occurrence${2+s} of $value in array} "${(M@)array:#$value}" | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/411304",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/88560/"
]
} |
411,452 | When I umount a SD flash card in a USB card reader, and then pull the card, filesystem stays "dirty". System: RPi or xubuntu 16.04.3 The card reader is some super cheap Chinese one. I have tried a few different ones I have tried a bunch of different SD cards. How to reproduce: connect card reader insert SD card with vfat on the first partition wait for system to detect SD card wait for system to automount filesystem or mount it manually update a random file, I do: date >> /media/mogul/2E3E-AE54/d un-mount: sudo umount /dev/sdd1 (placeholder, do nothing here, yet) pull sd card from card reader Now repeat from step 2. Keep an eye on your dmesg, it will say: [357207.805594] FAT-fs (sdd1): Volume was not properly unmounted. Some data may be corrupt. Please run fsck. (newer linux's support dmesg -w ) Now, if I add an additional action after the umount read a random byte on the SD card, like: dd if=/dev/sdd1 skip=1000000 ibs=1 count=1 of=/dev/null the filesystem seems to survive. This seems a bit hackish to me, am I missing something fundamental? Do you have more elegant solutions? I prefer not to use eject , but only umount, since eject` powers down the card reader too; the system won’t detect a new SD card before I re-plug the card reader. | As your step 7, try the following: echo 1 | sudo tee /sys/block/sdd/device/delete or if you're running as root, just echo 1 > /sys/block/sdd/device/delete This signals the kernel that device /dev/sdd is about to be removed, and should trigger a controlled flushing of any remaining write buffers to the card, to avoid the filesystem corruption. This may cause the reader to power down similar to the the eject command; if it does, an alternative way would be to just flush the buffers without the implication of an imminent device removal. This can be achieved with the blockdev command: sudo blockdev --flushbufs /dev/sdd If this does not help, then I'm afraid the card reader might not support hot-unplugging the card. This is possible with cheap readers. The only safe way to use such a reader could then be to first unplug the reader from the USB port, and only then remove the card from the reader. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/411452",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/39669/"
]
} |
411,472 | I'm on system running a (fairly recent-)Debian-based distribution. I'd like to generate a plain list of all installed packages matching a certain pattern. I can do that by, running, say, apt list --installed "linux-image-*" | cut -d/ -f1 but I get lines I don't care for, e.g.: WARNING: apt does not have a stable CLI interface. Use with caution in scripts.Listing... So maybe I'd better not use apt . I can run dpkg-query like so: dpkg-query --showformat='${Package}\n' --show "linux-image*" but that's not limited to installed packages. I could use dpkg-query --list "linux-image-*" | grep "ii" but then I'd need to do a bunch of text processing, and who can trust those spaces, right? So, bottom line: What's the right way to get the list of installed packages matching a pattern? Note : Bonus points if it can be a proper regexp rather than just a shell glob. Having to parse the text seems like a less-than-ideal solution; if that's what you suggest, please argue why there isn't a better way. | Here's one good way to do get the list of installed packages on a Debian-based system: dpkg -l | grep ^ii | awk '{print $2}' The output lines of dpkg -l can be trusted to be sane.The pattern ^ii will match the lines of installed packages,and the simple Awk will extract the second column,the package names (the same names used in apt-get install commands).Package names cannot contain whitespace,so this again is a safe operation. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/411472",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/34868/"
]
} |
411,551 | How can the TCP traffic initiated by a specific application be forced to go through a SOCKS proxy, regardless of the remote IP or port? A VPN would direct all outbound traffic on a host through an interface ( tun0 typically), so it's an overkill solution. But in a split tunnel configuration, instead of doing that by default, the VPN client offers a SOCKS proxy for specific applications. While browsers support connecting through a SOCKS proxy, many other applications don't. I've tried dante socksify but it didn't work with common programs like curl and wget . (I've sent a message to their mailing list, but it's not archived anywhere so I can't link to it.) | Wikipedia lists a number of open-source proxifiers . Of those, proxychains-ng seems to be the most actively developed , judging by GitHub activity. To install and configure, Download the latest release Unzip and cd into the directory ./configure && make Optional: sudo make install && sudo make install-config nano /usr/local/etc/proxychains.conf At the end of the config file, set the SOCKS IP port address Usage: proxychains4 -q curl icanhazip.com | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/411551",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/29324/"
]
} |
411,554 | I have installed Kali in VirtualBox and now trying to install Guest Additions to get full screen view. I updated and installed my packages and installed dkms. When I try to install linux-headers I get the following: # apt-get install linux-headers-$(uname -r)Reading package lists... DoneBuilding dependency tree Reading state information... Donelinux-headers-4.14.0-kali1-amd64 is already the newest version (4.14.2-1kali1).0 upgraded, 0 newly installed, 0 to remove and 86 not upgraded. The headers installed are as follows: # dpkg --get-selections | grep linux-headerslinux-headers-4.14.0-kali1-amd64 installlinux-headers-4.14.0-kali1-common installlinux-headers-amd64 install When I try to run the Guest Additions CD I get the following: Verifying archive integrity... All good.Uncompressing VirtualBox 5.0.40 Guest Additions for Linux............VirtualBox Guest Additions installerRemoving installed version 5.0.40 of VirtualBox Guest Additions...Removing existing VirtualBox DKMS kernel modules ...done.Removing existing VirtualBox non-DKMS kernel modules ...done.update-initramfs: Generating /boot/initrd.img-4.13.0-kali1-amd64update-initramfs: Generating /boot/initrd.img-4.14.0-kali1-amd64Copying additional installer modules ...Installing additional modules ...Removing existing VirtualBox DKMS kernel modules ...done.Removing existing VirtualBox non-DKMS kernel modules ...done.Building the VirtualBox Guest Additions kernel modulesThe headers for the current running kernel were not found. If the followingmodule compilation fails then this could be the reason.Building the main Guest Additions module ...fail!(Look at /var/log/vboxadd-install.log to find out what went wrong)Doing non-kernel setup of the Guest Additions ...done.Press Return to close this window... It appears to me that the correct linux headers for the kernal are installed. Why is VBox not able to find them? Tried updating to VBox 5.2.2 but after removing existing version and installing 5.2.2 I was unable to launch Kali-Linux - screenshot attached . | Wikipedia lists a number of open-source proxifiers . Of those, proxychains-ng seems to be the most actively developed , judging by GitHub activity. To install and configure, Download the latest release Unzip and cd into the directory ./configure && make Optional: sudo make install && sudo make install-config nano /usr/local/etc/proxychains.conf At the end of the config file, set the SOCKS IP port address Usage: proxychains4 -q curl icanhazip.com | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/411554",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/266358/"
]
} |
411,602 | I have a script s1 that outputs a list of numbers separated with ',' e.g. 1,2,3,4 . Now I want to give these numbers to script s2 as arguments, so that s2 will be run on each of them and output its result in a separate line. For example, if s2 multiplies numbers by two, this would be the result I'm looking for: 2468 What I'm doing right now is: s1 | xargs -d "," | xargs -n1 s2 But I feel like I'm doing it in such a foolish way! So my question is: What is the proper way of doing it? My problem with my solution is that it's calling xargs twice and iterating over the input twice which is not reasonable to my eyes of course by means of performance! The answer xargs -d "," -n1 seems nice, but I'm not sure if it's only iterating once. If it does, please verify that in your answer, and I'll accept it. By the way, I'd rather not use Perl since it still is iterating twice and also Perl may not exist on many systems. | This should equally work as well: s1 | xargs -d "," -n1 s2 Test case: printf 1,2,3,4 | xargs -d ',' -n1 echo Result: 1234 If s1 outputs that list followed by a newline character, you'd want to remove it as otherwise the last call would be with 4\n instead of 4 : s1 | tr -d '\n' | xargs -d , -n1 s2 | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/411602",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/231067/"
]
} |
411,649 | According to systemd's service documentation , a + may be used as a prefix in systemd service configurations. I am trying to use it like this: [Service]ExecStartPre=+/usr/bin/tomcat1Type=simpleEnvironment="NAME=tomcat1"EnvironmentFile=/etc/sysconfig/tomcat1ExecStart=/usr/libexec/tomcat/server startSuccessExitStatus=143User=tomcatGroup=tomcat I want to run /usr/bin/tomcat1 with elevated privileges, but doing so with the "+" sign gives the following error (note, "-" does not give an error). "systemd[1]: [/usr/lib/systemd/system/tomcat1.service:10] Executable path is not absolute, ignoring: +/usr/bin/tomcat1" I've also tried ExecStartPre="+/usr/bin/tomcat1 , ExecStartPre="+"/usr/bin/tomcat1 , etc. I know I can use PermissionsStartOnly=true as an alternative, which should work, but that seemed less than ideal. | The documentation that you are using does not match the version of systemd that you are using. The "+" modifier was introduced in version 231. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/411649",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/266443/"
]
} |
411,664 | I recently saw a video where someone executed ^foo^bar in Bash. What is that combination for? | Bash calls this a quick substitution . It's in the "History Expansion" section of the Bash man page, under the "Event Designators" section ( online manual ): ^ string1 ^ string2 ^ Quick substitution. Repeat the previous command, replacing string1 with string2. Equivalent to !!:s/ string1 / string2 / So ^foo^bar would run the previously executed command, but replace the first occurence of foo with bar . Note that for s/old/new/ , the bash man page says "The final delimiter is optional if it is the last character of the event line." This is why you can use ^foo^bar and aren't required to use ^foo^bar^ . (See this answer for a bunch of other designators, although I didn't mention this one there). | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/411664",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/264975/"
]
} |
411,691 | I want to walk a file and compare two lines to see if they begin with the same 12 characters. If they do, I want to delete the first line and then compare the remaining line with the next line in the file until all lines have been compared. The file contains the list of files in the directory, already sorted. There can be two or more files (always in sequence) that start with the same 12 characters. I only want the last one. I saw a similar solution, in an early post: sed '$!N; /\(.*\)\n\1:FOO/D; P;D' file but I could not modify it to work for me. | Bash calls this a quick substitution . It's in the "History Expansion" section of the Bash man page, under the "Event Designators" section ( online manual ): ^ string1 ^ string2 ^ Quick substitution. Repeat the previous command, replacing string1 with string2. Equivalent to !!:s/ string1 / string2 / So ^foo^bar would run the previously executed command, but replace the first occurence of foo with bar . Note that for s/old/new/ , the bash man page says "The final delimiter is optional if it is the last character of the event line." This is why you can use ^foo^bar and aren't required to use ^foo^bar^ . (See this answer for a bunch of other designators, although I didn't mention this one there). | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/411691",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/266472/"
]
} |
411,780 | I want to prepend a text contained in the file disclaimer.txt to all the .m files in a folder. I tried the following: text=$(cat ./disclaimer.txt)for f in ./*.mdo sed -i '1i $text' $fdone but it just prepends an empty line. | There are many ways to do this one, but here's a quick first stab: #!/bin/shfor file in *.m; do cat disclaimer.txt $file >> $file.$$ mv $file.$$ $filedone It concatenates the disclaimer along with the original file into a new temporary file then replaces the original file with the contents of the temporary file. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/411780",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/266555/"
]
} |
411,811 | As far as I know, every operating system has a different way to mark the end of line (EOL) character. Commercial operating systems use carriage return for EOL (carriage return and line feed on Windows, carriage return only on Mac). Linux, on the other hand, just uses line feed for EOL. Why doesn't Linux use carriage return for EOL (and solely line feed instead)? | Windows uses CR LF because it inherited it from MS-DOS. MS-DOS uses CR LF because it was inspired by CP/M which was already using CR LF . CP/M and many operating systems from the eighties and earlier used CR LF because it was the way to end a line printed on a teletype (return to the beginning of the line and jump to the next line, just like regular typewriters). This simplified printing a file because there was less or no pre-processing required. There was also mechanical requirements that prevented a single character to be usable. Some time might be required to allow the carriage to return and the platen to rotate. Gnu/Linux uses LF because it is a Unix clone . 1 Unix used a single character, LF , from the beginning to save space and standardize to a canonical end-of-line, using two characters was inefficient and ambiguous. This choice was inherited from Multics which used it as early as 1964. Memory, storage, CPU power and bandwidth were very sparse so saving one byte per line was worth doing. When a file was printed, the driver was converting the line feed (new-line) to the control characters required by the target device. LF was preferred to CR because the latter still had a specific usage. By repositioning the printed character to the beginning of the same line, it allowed to overstrike already typed characters. Apple initially decided to also use a single character but for some reason picked the other one: CR . When it switched to a BSD interface, it moved to LF . These choices have nothing to do with the fact an OS is commercial or not. 1 This is the answer to your question. | {
"score": 9,
"source": [
"https://unix.stackexchange.com/questions/411811",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/255231/"
]
} |
411,822 | So I wrote a systemd service file for Manjaro Linux to run two shell commands to set some kernel parameters at runtime to enable custom power saving actions. BTW: This was a tip by a german computer magazine.Originally I should place the shell commands in the /etc/rc.local file but I want to make it with a systemd service.Because rc.local is considered as deprecated and I want to learn something new. Below you see my service file saved as /etc/systemd/system/power-savings.service .Because there are two ExecStart directives I have chosen Type=oneshot . [Unit]Description=Enable custom power saving actions provided by c't magazine# Quelle(n): c't 25/2016, S. 77# c't 26/2016, S. 12[Service]Type=oneshot# SATA Link Power Management aktivierenExecStart=/usr/bin/sh -c 'for I in /sys/class/scsi_host/host?/link_power_management_policy; do echo min_power > $I; done'# Energieverwaltung für den Audiocodec aktivierenExecStart=/usr/bin/sh -c 'echo 1 > /sys/module/snd_hda_intel/parameters/power_save'[Install]WantedBy=multi-user.target I verified the service file with: $ sudo systemd-analyze verify /etc/systemd/system/power-savings.service Then I reloaded the daemon with: $ sudo systemctl daemon-reload I enabled it with: $ sudo systemctl enable power-savings.service Then I have run the service with: $ sudo systemctl start power-savings.service And it worked!The kernel parameters have been set. But then, after rebooting my system the service seems not to have any effect.Although, the service status said success... $ systemctl status power-savings.serviceProcess: 412 ExecStart=/usr/bin/bash -c echo 1 > /sys/module/snd_hda_intel/parameters/power_save (code=exited, status=0/SUCCESS)Process: 404 ExecStart=/usr/bin/bash -c for I in /sys/class/scsi_host/host?/link_power_management_policy; do echo min_power > $I; done (code=exited, status=0/SUCCESS)Main PID: 412 (code=exited, status=0/SUCCESS) But the kernel parameter were not set.So my service works during a user session, only.Unfortunately not during system boot up as it was intended to work. Is there something I might have missed?Do I need some of those After or Require directives?How am I able to debug what really happens with the ExecStart directives? | Windows uses CR LF because it inherited it from MS-DOS. MS-DOS uses CR LF because it was inspired by CP/M which was already using CR LF . CP/M and many operating systems from the eighties and earlier used CR LF because it was the way to end a line printed on a teletype (return to the beginning of the line and jump to the next line, just like regular typewriters). This simplified printing a file because there was less or no pre-processing required. There was also mechanical requirements that prevented a single character to be usable. Some time might be required to allow the carriage to return and the platen to rotate. Gnu/Linux uses LF because it is a Unix clone . 1 Unix used a single character, LF , from the beginning to save space and standardize to a canonical end-of-line, using two characters was inefficient and ambiguous. This choice was inherited from Multics which used it as early as 1964. Memory, storage, CPU power and bandwidth were very sparse so saving one byte per line was worth doing. When a file was printed, the driver was converting the line feed (new-line) to the control characters required by the target device. LF was preferred to CR because the latter still had a specific usage. By repositioning the printed character to the beginning of the same line, it allowed to overstrike already typed characters. Apple initially decided to also use a single character but for some reason picked the other one: CR . When it switched to a BSD interface, it moved to LF . These choices have nothing to do with the fact an OS is commercial or not. 1 This is the answer to your question. | {
"score": 9,
"source": [
"https://unix.stackexchange.com/questions/411822",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/208465/"
]
} |
411,825 | I found this cool trick from Postgres.app website echo /Users/user1/latest/bin | sudo tee /etc/paths.d/postgresapp I want to know, how can I make this work for any logged in user.I want something like echo {whatever the home directory of the logged in user at runtime}/latest/bin | sudo tee /etc/paths.d/postgresapp My first thought was to try the $HOME variable, but my home variable pointsto my home directory, whereas I want this to work when any user logs in to Mac and uses terminal.app. | Windows uses CR LF because it inherited it from MS-DOS. MS-DOS uses CR LF because it was inspired by CP/M which was already using CR LF . CP/M and many operating systems from the eighties and earlier used CR LF because it was the way to end a line printed on a teletype (return to the beginning of the line and jump to the next line, just like regular typewriters). This simplified printing a file because there was less or no pre-processing required. There was also mechanical requirements that prevented a single character to be usable. Some time might be required to allow the carriage to return and the platen to rotate. Gnu/Linux uses LF because it is a Unix clone . 1 Unix used a single character, LF , from the beginning to save space and standardize to a canonical end-of-line, using two characters was inefficient and ambiguous. This choice was inherited from Multics which used it as early as 1964. Memory, storage, CPU power and bandwidth were very sparse so saving one byte per line was worth doing. When a file was printed, the driver was converting the line feed (new-line) to the control characters required by the target device. LF was preferred to CR because the latter still had a specific usage. By repositioning the printed character to the beginning of the same line, it allowed to overstrike already typed characters. Apple initially decided to also use a single character but for some reason picked the other one: CR . When it switched to a BSD interface, it moved to LF . These choices have nothing to do with the fact an OS is commercial or not. 1 This is the answer to your question. | {
"score": 9,
"source": [
"https://unix.stackexchange.com/questions/411825",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/180441/"
]
} |
411,844 | I need to replace the following text: "name":["abc1234"], age:"24" with "name": "abc1234", age: "24" | If I understand correctly, you want do replace the "name" list by its first element. If this is the case try a Json processor: jq '.name=.name[0]' ex.json (adaptations to the unpost full example may be needed) | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/411844",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/210449/"
]
} |
411,951 | After adding a shared folder in VirtualBox, I am able to access it like admin:///media/sf_MyFolder/ , however, Ubuntu asks me to enter admin's password, even twice. How to mount a VirtualBox's shared folder so that it can be accessed without authentication by a non-admin user? | I had the same problem, and found the solution here : Note: Access to auto-mounted shared folders is only granted to the user group vboxsf, which is created by the VirtualBox Guest Additions installer. Hence guest users have to be member of that group to have read/write access or to have read-only access in case the folder is not mapped writable. From the terminal, you can enter: sudo adduser <username> vboxsf Reboot and you should be good to go. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/411951",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/185188/"
]
} |
412,002 | When inserting a USB stick or device to computer, there is always the risk that the device is malicious, will act as an HID and potentially do some damage on the computer. How can I prevent this problem? Is disabling HID on specific USB port sufficient? How do I do that? | Install USBGuard — it provides a framework for authorising USB devices before activating them. With the help of a tool such as USBGuard Notifier or the USBGuard Qt applet , it can pop up a notification when you connect a new device, asking you what to do; and it can store permanent rules for known devices so you don’t have to confirm over and over. Rules are defined using a comprehensive language with support for any USB attribute (including serial number, insertion port...), so you can write rules that are as specific as you want — whitelist this keyboard if it has this identifier, this serial number, is connected to this port, etc. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/412002",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/141945/"
]
} |
412,003 | How do I print anything after the colon? Input: color:white,name:green so I would like to print anything after : Output: white,green | Simple sed approach (while your input is pretty simple): sed 's/[^,:]*://g' file The output: white,green | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/412003",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/227199/"
]
} |
412,065 | In trying to access a cluster in my lab by ssh and it work. but then I'm not able to do anything : user@users:~> nautilusX11 connection rejected because of wrong authentication.Could not parse arguments: Cannot open display or user@users:~> geditX11 connection rejected because of wrong authentication.(gedit:151222): Gtk-WARNING **: cannot open display: localhost:11.0 It worked until today... and I don't know how to check if something had change. I don't have the root password for this machine, is there anything i can do ? I have read lot of thing about this error such as this but nothing solved... EDIT : The local OS is Ubuntu 16 and the server is OpenSuse. I'm connecting this way : ssh -XY -p22 [email protected] EDIT 2 : user@users:~> envMODULE_VERSION_STACK=3.1.6LESSKEY=/etc/lesskey.binNNTPSERVER=newsINFODIR=/usr/local/info:/usr/share/info:/usr/infoMANPATH=/usr/local/man:/usr/share/manHOSTNAME=usersXKEYSYMDB=/usr/share/X11/XKeysymDBHOST=usersTERM=xterm-256colorSHELL=/bin/bashPROFILEREAD=trueHISTSIZE=1000SSH_CLIENT=10.44.0.1 49729 22MORE=-slSSH_TTY=/dev/pts/2JRE_HOME=/usr/lib64/jvm/jreUSER=userLS_COLORS=no=00:fi=00:di=01;34:ln=00;36:pi=40;33:so=01;35:do=01;35:bd=40;33;01:cd=40;33;01:or=41;33;01:ex=00;32:*.cmd=00;32:*.exe=01;32:*.com=01;32:*.bat=01;32:*.btm=01;32:*.dll=01;32:*.tar=00;31:*.tbz=00;31:*.tgz=00;31:*.rpm=00;31:*.deb=00;31:*.arj=00;31:*.taz=00;31:*.lzh=00;31:*.lzma=00;31:*.zip=00;31:*.zoo=00;31:*.z=00;31:*.Z=00;31:*.gz=00;31:*.bz2=00;31:*.tb2=00;31:*.tz2=00;31:*.tbz2=00;31:*.avi=01;35:*.bmp=01;35:*.fli=01;35:*.gif=01;35:*.jpg=01;35:*.jpeg=01;35:*.mng=01;35:*.mov=01;35:*.mpg=01;35:*.pcx=01;35:*.pbm=01;35:*.pgm=01;35:*.png=01;35:*.ppm=01;35:*.tga=01;35:*.tif=01;35:*.xbm=01;35:*.xpm=01;35:*.dl=01;35:*.gl=01;35:*.wmv=01;35:*.aiff=00;32:*.au=00;32:*.mid=00;32:*.mp3=00;32:*.ogg=00;32:*.voc=00;32:*.wav=00;32:LD_LIBRARY_PATH=/usr/local/cuda-5.5/lib:/usr/local/cuda-5.5/lib64:XNLSPATH=/usr/share/X11/nlsENV=/etc/bash.bashrcHOSTTYPE=x86_64FROM_HEADER=MSM_PRODUCT=MSMPAGER=lessCSHEDIT=emacsXDG_CONFIG_DIRS=/etc/xdgMINICOM=-c onMODULE_VERSION=3.1.6MAIL=/var/mail/userPATH=/usr/local/cuda-5.5/bin:/home/user/bin:/usr/local/bin:/usr/bin:/bin:/usr/bin/X11:/usr/X11R6/bin:/usr/games:/usr/lib64/jvm/jre/bin:/usr/lib/mit/bin:/usr/lib/mit/sbinCPU=x86_64JAVA_BINDIR=/usr/lib64/jvm/jre/binINPUTRC=/home/user/.inputrcPWD=/home/userJAVA_HOME=/usr/lib64/jvm/jreLANG=en_US.UTF-8PYTHONSTARTUP=/etc/pythonstartMODULEPATH=/usr/share/modules:/usr/share/modules/modulefilesLOADEDMODULES=QT_SYSTEM_DIR=/usr/share/desktop-dataSHLVL=1HOME=/home/userLESS_ADVANCED_PREPROCESSOR=noOSTYPE=linuxLS_OPTIONS=-N --color=tty -T 0XCURSOR_THEME=DMZMSM_HOME=/usr/local/MegaRAID Storage ManagerWINDOWMANAGER=/usr/bin/gnomeG_FILENAME_ENCODING=@locale,UTF-8,ISO-8859-15,CP1252LESS=-M -IMACHTYPE=x86_64-suse-linuxLOGNAME=userXDG_DATA_DIRS=/usr/share:/etc/opt/kde3/share:/opt/kde3/shareSSH_CONNECTION=172.17.10.15 22MODULESHOME=/usr/share/modulesLESSOPEN=lessopen.sh %sINFOPATH=/usr/local/info:/usr/share/info:/usr/infoDISPLAY=localhost:12.0XAUTHLOCALHOSTNAME=usersLESSCLOSE=lessclose.sh %s %sG_BROKEN_FILENAMES=1JAVA_ROOT=/usr/lib64/jvm/jreCOLORTERM=1_=/usr/bin/env | Xauthority Mini How To On GNU/Linux systems running an X11 display server, the file ~/.Xauthority stores authentication cookies or cryptographic keys used to authorize connection to the display. In most cases, the authentication mechanism is a symmetric cookie which is referred to as a Magic Cookie . The same cookie is used by the server as well as the client. Each X11 authentication cookie is under the control of the individual system authenticated user. Since the authetication cookie is stored as a plain text security token, the permissions on the ~/.Xauthority file should be rw for the owner only, 600 in octal format. However, the permissions on the authorization file are not enforced. A user can list, export, create, or delete authentication cookies using the xauth program. The following command will create an authoratization cookie for DISPLAY 32 . xauth add localhost:32 - `mcookie` Manual creation and manipulation of cookies is usually not needed when using X11 forwarding with ssh , because ssh starts an X11 proxy on the remote machine and automatically generates authorization cookies on the local display. However, for certain configurations the authorization cookie may need to be manually created and copied to the local machine. This can be done in an ssh session and then use scp to copy the cookie. ssh into remote machine: ssh -XY user@remote Check if an authorization cookie is present for the current X11 display echo $DISPLAYxauth list If there's no environment variable named $DISPLAY then the X11 proxy did not start properly. It's important to note that DISPLAY 0 is typically locally logged in users and is only running if an xserver has been locally started via xinit . There is no requirement for a locally started X11 server in order for X11 forwarding to function through ssh . If there's a $DISPLAY environment variable set but no corresponding authorization cookie for that display number, you can create one: xauth add $DISPLAY - `mcookie` And verify that there is now a cookie: xauth list You can copy that cookie and merge it into the local machine: user@remote> xauth nextract ~/xcookie $DISPLAYuser@remote> exituser@local> scp user@remote:~/xcookie ~/xcookieuser@local> xauth nmerge ~/xcookie And then verify that the cookie has been installed: user@local> xauth list Try out your X11 forwarding ssh connection. Notes on ~/.Xauthority ~/.Xauthority is a binary file which contains all the authorization information for each display the user may access. Each record is delimited by the two bytes 0x0100 . Each field is preceeded by a hexidemical count of the field's number of bytes. All text is encoded in hexidecimal ASCII. The following table is the basic structure of the most common configuration of a MIT MAGIC COOKIE authorization: ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- 0100 0004 61616161 0002 3435 0012 4d49542d4d414749432d434f4f4b49452d31 0010 c0bdd1c539be89a2090f1bbb6b414c2c ----------------- ----------- ------------------ ------------ ---------------------- ------------- -------------------------------------- ------------ --------------------------------------- start-of-record 0xNumBytes 0xASCII Hostname 0xNumBytes 0xASCII Display Num 0xNumBytes 0xASCII Auth Type 0xNumBytes 0xkey------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- The top line is retrievable from the ~/.Xauthority file via the xauth nlist command. Of course, your authorization file will have different information from my example. If the Security Extensions are in use with the X11 server, there are several configuration options for each authorization line including time limited authorization per cookie. | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/412065",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/262164/"
]
} |
412,115 | I usually use ./aaa.sh 2>&1 | tee -a log But found a new command which seems easier: ./aaa.sh &> log So what am I giving up in the second case? | Apples and oranges. But first... As far as 2>&1 vs &> , they both act to direct stderr to the same place that stdout is being directed to. But you are giving up portability with the second one as it is not POSIX-compliant and any script using it will only work with those shells that support it. That being said they are "semantically equivalent" as described on the Bash man page... There are two formats for redirecting standard output and standard error: &>word and >&word Of the two forms, the first is preferred. This is semantically equivalent to >word 2>&1 When using the second form, word may not expand to a number or -. If it does, other redirection operators apply (see Duplicating File Descriptors below) for compatibility reasons. However there's more to it. You also are using the tee command which adds additional functionality to the first version. It will take its stdin input and direct it to two different places: stdout (usually your screen/terminal if you're running this interactively) and the specified file which will have the data appended to it ( -a says to append rather than overwrite). Compare this to the second version where the combined stdout and stderr overwrite the log file and do not get displayed on your screen/terminal. Conclusion: As mentioned at the beginning these are really two different commands altogether but, stretching the notion of equivalence, generally speaking the first version is superior as it is portable and you get the benefit of seeing the output of aaa.sh even while it is being saved to a file. Of course, if you don't want to see that or you want to erase the previous file contents then it's a different story. Apples and oranges. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/412115",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/167182/"
]
} |
412,192 | I've noticed when I SSH into a remote machine over a slow link, SSH seems to "stick" after a relatively large amount of data is transferred. For example, typing text over the link is fine, but running ls /etc will freeze the connection for a few minutes. I would understand if the delay was because I was maxing out the connection speed and it returned to normal once the data had been transferred, but the connection freezes for far longer than you'd expect for the amount of data actually involved. The weird thing is that if I open two SSH connections to and from the same machines, when one of them has frozen, the other one still works fine. So I can't be maxing out the connection speed, otherwise they would both freeze at the same time. There is no traffic shaping active on either box or (as far as I can tell) the routers in between, so it shouldn't be anything dropping packets to keep the average transfer speed within a certain range. Can anyone suggest anything that could cause this kind of behaviour, or anything further to check? The same thing happens with scp and sshfs , with scp reporting a huge transfer rate (many MB/sec, then the speed slowly falls back to stalled for a few minutes, then if I'm lucky it'll repeat until the file finishes transferring.) sshfs works but often (not always) after a file is saved the mount point is non-responsive for a few minutes, temporarily blocking any program that tries to access it. EDIT: I tried using iperf and get some interesting stats: Local: 0.0- 0.9 sec 256 KBytes 2.25 Mbits/secRemote: 0.0- 7.0 sec 256 KBytes 302 Kbits/secLocal: 0.0-15.9 sec 1.00 MBytes 529 Kbits/secRemote: 0.0-16.0 sec 1.00 MBytes 524 Kbits/sec It looks like below a certain amout of data, the local end can send a lot faster than the remote can receive. I guess this isn't an SSH problem after all. I will investigate some TCP settings that might adjust this, but if anyone knows any for sure please advise! | Your SSH connection is exceeding the MTU size somewhere between client and server, and Path Maximum Transmission Unit Discovery is not working. (This is one of several reasons that blanket prevention of ICMP traffic in the name of security is a bad idea.) Further reading https://networkengineering.stackexchange.com/questions/13417/ SCP reproducably breaks SSH pipe | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/412192",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/6662/"
]
} |
412,214 | On Linux my webcam works fine, but when using artificial lighting the white-balance is too reddish. Color look fine with natural illumination.Is there a way to calibrate the colors or have some form of auto-adjustment which works? I used guvcview to tinker with the settings but haven't managed to find a suitable combination of settings to show natural colors. | At least on my webcam, the v4l2-ctl -l command shows two settings related to white balance: # v4l2-ctl -l[...] white_balance_temperature_auto (bool) : default=1 value=1[...] white_balance_temperature (int) : min=2800 max=6500 step=1 default=4000 value=4000 flags=inactive[...] I must set the white_balance_temperature_auto setting to 0 before the white_balance_temperature setting will have any effect. # v4l2-ctl -c white_balance_temperature_auto=0# v4l2-ctl -c white_balance_temperature=3000 # or whatever value you want Note that the white_balance_temperature setting controls what the camera assumes the lighting environment to be, so decreasing the value makes the camera assume the ambient light is more reddish, and so it will make the picture more bluish to compensate. Use the -d option to use a specific video device like -d /dev/video0 . | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/412214",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/30019/"
]
} |
412,234 | I have the following file: ---------- 1 Steve Steve 341 2017-12-21 01:51 myFile.txt I switched the user to root in the terminal, and I have noticed the following behaviors: I can read this file and write to it. I can't execute this file. If I set the x bit in the user permissions ( ---x------ ) or the group permissions ( ------x--- ) or the others permissions ( ---------x ) of the file, then I would be able to execute this file. Can anyone explain to me or point me to a tutorial that explains all of the rules that apply when the root user is dealing with files and directories? | Privileged access to files and directories is actually determined by capabilities, not just by being root or not. In practice, root usually has all possible capabilities, but there are situations where all/many of them could be dropped, or some given to other users (their processes). In brief, you already described how the access control checks work for a privileged process. Here's how the different capabilities actually affect it: The main capability here is CAP_DAC_OVERRIDE , a process that has it can "bypass file read, write, and execute permission checks". That includes reading and writing to any files, as well as reading, writing and accessing directories. It doesn't actually apply to executing files that are not marked as executable. The comment in generic_permission ( fs/namei.c ), before the access checks for files, says that Read/write DACs are always overridable. Executable DACs are overridable when there is at least one exec bit set. And the code checks that there's at least one x bit set if you're trying to execute the file. I suspect that's only a convenience feature, to prevent accidentally running random data files and getting errors or odd results. Anyway, if you can override permissions, you could just make an executable copy and run that. (Though it might make a difference in theory for setuid files of a process was capable of overriding file permissions ( CAP_DAC_OVERRIDE ), but didn't have other related capabilities ( CAP_FSETID / CAP_FOWNER / CAP_SETUID ). But having CAP_DAC_OVERRIDE allows editing /etc/shadow and stuff like that, so it's approximately equal to just having full root access anyway.) There's also the CAP_DAC_READ_SEARCH capability that allows to read any files and access any directories, but not to execute or write to them; and CAP_FOWNER that allows a process to do stuff that's usually reserved only for the file owner, like changing the permission bits and file group. Overriding the sticky bit on directories is mentioned only under CAP_FOWNER , so it seems that CAP_DAC_OVERRIDE would not be enough to ignore that. (It would give you write permission, but usually in sticky directories you have that anyway, and +t limits it.) (I think special devices count as "files" here. At least generic_permission() only has a type check for directories, but I didn't check outside of that.) Of course, there are still situations where even capabilities will not help you modify files: some files in /proc and /sys , since they're not really actual files SELinux and other security modules that might limit root chattr immutable +i and append only +a flags on ext2/ext3/ext4, both of which stop even root, and prevent also file renames etc. network filesystems, where the server can do its own access control, e.g. root_squash in NFS maps root to nobody FUSE, which I assume could do anything read-only mounts read-only devices | {
"score": 7,
"source": [
"https://unix.stackexchange.com/questions/412234",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/228808/"
]
} |
412,259 | Currently, I'm running these two commands to create a quick backup of the directory. Is there a way to combine the two commands into one, so that I am copying and renaming the new directory in one command? #cp -R /tf/Custom_App /tf/Custom_App_backups/#mv /tf/Custom_App_backups/Custom_App /tf/Custom_App_backups/Custom_App_2017-12-21 | You should be able to do just cp -R /tf/Custom_App /tf/Custom_App_backups/Custom_App_2017-12-21 However , if the target directory already exists, this would append the final part of the source path to the destination path, creating /tf/Custom_App_backups/Custom_App_2017-12-21/Custom_App , and then copy the rest of the tree within that. To prevent this, use /tf/Custom_App/. as the source. Of course, in that case you might want to rm -r /tf/Custom_App_backups/Custom_App_2017-12-21 first, if you don't want older files lying around there after the copy. The difference between /some/dir and /some/dir/. was discussed a while back in cp behaves weirdly when . (dot) or .. (dot dot) are the source directory | {
"score": 7,
"source": [
"https://unix.stackexchange.com/questions/412259",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/106228/"
]
} |
412,269 | For my task, I need to block some hostnames, but since some websites may reply with different IP addresses to different DNS queries (for example, Google DNS and any other DNS server), I'd like to resolve same hostname using different DNS servers to get as many possible IP addresses as possible. Can I solve this task using command line utilities on Ubuntu 16+? Are there any alternative solutions? In short: I'd like to resolve "example.com" to IP using DNS #A and resolve "example.com" to IP using DNS #B without making any serious changes to my network configuration. | Yes you can with the tools @pawel7318 mentioned. dig dig @nameserver hostname nslookup nslookup hostname nameserver host host hostname nameserver | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/412269",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/187866/"
]
} |
412,279 | I have a command, which looks like so: $ ssh [long list of parameters, eventually I connect to internalhost] My task is to replace this long list of parameters with just internalhost , so that I could run it like so $ ssh internalhost How can I do that? | Yes you can with the tools @pawel7318 mentioned. dig dig @nameserver hostname nslookup nslookup hostname nameserver host host hostname nameserver | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/412279",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/146271/"
]
} |
412,330 | OS : Ubuntu 16.04.3 Shell : Bash 4.3.48 I know that is possible to temporarily change the content of a variable as in var=value command , being probably IFS= read -r var the most notable case of this. And, thanks to Greg's wiki , I also understand: # Why thisfoo() { echo "$var"; }var=value foo# And this does workvar=value; echo "$var"# But this doesn'tvar=value echo "$var" What escapes my understanding is this: $ foo() { echo "${var[0]}"; }$ var=(bar baz) foo(bar baz) As far as I know (and following the logic of previous examples), it should print bar , not (bar baz) . Does this only happen to me? Is this the intended behavior and I'm missing something? Or is this a bug? | Generally calling: var=value cmd where cmd is a function is not portable. With bash , that only works for scalar variables (and with x=(...) parsed as an array but assigned as a scalar) and there are a number of issues with scoping if you do that, with ksh93 and yash , it works but the variable definition remains afterwards. With mksh , you get a syntax error. In the Bourne shell, it didn't work at all, even for scalar variables. Also note that even with scalar variables, whether the variable ends up being exported within the function (that is, passed to commands being executed) varies from shell to shell (it is in bash, yash, mksh, zsh, but not in ksh, ash). It only works the way you'd expect with zsh . Note that zsh array indices start at 1. bash-4.4$ zsh$ a=(before value)$ f() echo $a[1]$ a=(temp value) ftemp$ echo $a[1]before | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/412330",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/243481/"
]
} |
412,379 | I am stuck on this. i have an input file that looks like 16:20:03 BuyDRIPAMEX500 13,51 USD16:05:10 SellSQNYSE100 36,32 USD15:48:52 SellNXTDNasdaq500 4,99 USD15:48:52 SellNXTDNasdaq500 4,99 USD15:46:07 BuySOXLAMEX50 147,7209 USD15:40:20 BuyTEUMAMEX1 700 1,36 USD15:40:19 BuyTEUMAMEX300 1,36 USD my goal is to get each four-line record onto one line, e.g. 16:20:03 Buy DRIP AMEX 500 13,51 USD16:05:10 Sell SQ NYSE 100 36,32 USD I know that each record is four lines. i also know that each record starts with (is separated by) a time on format hh:mm:ss I have tried various awk commands specifying RS/FS OFS/ORSI have tried different variants of sed like sed 'N;N;s/\n/ /' The awk prints first record only.The sed doesn't manage to get all elements on same line I can post more specific examples of what i have tried. It looks reallly simply. anyone who can give me a hint? If you know an easier solution in another language, feel free to elaborate | Using paste $ paste -d' ' - - - - <file 16:20:03 Buy DRIP AMEX 500 13,51 USD 16:05:10 Sell SQ NYSE 100 36,32 USD 15:48:52 Sell NXTD Nasdaq 500 4,99 USD 15:48:52 Sell NXTD Nasdaq 500 4,99 USD 15:46:07 Buy SOXL AMEX 50 147,7209 USD 15:40:20 Buy TEUM AMEX 1 700 1,36 USD 15:40:19 Buy TEUM AMEX 300 1,36 USD Using sed $ sed 'N;N;N; s/\n/ /g' file 16:20:03 Buy DRIP AMEX 500 13,51 USD 16:05:10 Sell SQ NYSE 100 36,32 USD 15:48:52 Sell NXTD Nasdaq 500 4,99 USD 15:48:52 Sell NXTD Nasdaq 500 4,99 USD 15:46:07 Buy SOXL AMEX 50 147,7209 USD 15:40:20 Buy TEUM AMEX 1 700 1,36 USD 15:40:19 Buy TEUM AMEX 300 1,36 USD Using awk $ awk '{line=line " " $0} NR%4==0{print substr(line,2); line=""}' file 16:20:03 Buy DRIP AMEX 500 13,51 USD 16:05:10 Sell SQ NYSE 100 36,32 USD 15:48:52 Sell NXTD Nasdaq 500 4,99 USD 15:48:52 Sell NXTD Nasdaq 500 4,99 USD 15:46:07 Buy SOXL AMEX 50 147,7209 USD 15:40:20 Buy TEUM AMEX 1 700 1,36 USD 15:40:19 Buy TEUM AMEX 300 1,36 USD | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/412379",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/216922/"
]
} |
412,446 | I want to disable ping response all the time on my Ubuntu operating system, the following commands work but only until the system reboots: Ping off: echo "1" > /proc/sys/net/ipv4/icmp_echo_ignore_all Ping on: echo "0" > /proc/sys/net/ipv4/icmp_echo_ignore_all How would I be able to leave echo off even after having rebooted my laptop? | How would I be able to leave echo off even when I am rebooting my laptop? You can use one of the following three ways (as root): Edit /etc/sysctl.conf Add the following line to your /etc/sysctl.conf : net.ipv4.icmp_echo_ignore_all=1 Then: sysctl -p Using iptables: iptables -I INPUT -p icmp --icmp-type echo-request -j DROP With cron Run crontab -e as root, then add the following line: @reboot echo "1" > /proc/sys/net/ipv4/icmp_echo_ignore_all Start and enable the service: systemctl start cron.servicesystemctl enable cron.service | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/412446",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/267110/"
]
} |
412,480 | # btrfs filesystem defragment -r -v -czstd:15 /ERROR: unknown compression type zstd:15# btrfs filesystem defragment -r -v -czstd_15 /ERROR: unknown compression type zstd_15# btrfs filesystem defragment -r -v -czstd15 /ERROR: unknown compression type zstd15 The btrfs manual page doesn't give the clue on how to select a compression level: -c[algo] compress file contents while defragmenting. Optional argument selects the compression algorithm, zlib (default), lzo or zstd. Currently it’s not possible to select no compression. See also section EXAMPLES. How to select a non-default zstd compression level to re-compress existing btrfs filesystems? Note: btrfs filesystem defragment on snapshots might result in much larger disk space consumption : Warning: Defragmenting with Linux kernel versions < 3.9 or ≥ 3.14-rc2 as well as with Linux stable kernel versions ≥ 3.10.31, ≥ 3.12.12 or ≥ 3.13.4 will break up the ref-links of COW data (for example files copied with cp --reflink, snapshots or de-duplicated data). This may cause considerable increase of space usage depending on the broken up ref-links. | Kernel 5.1 added ZSTD level support. I tested it with rc1 today using a mount option compress=zstd:12 in /etc/fstab. The default level is 3. To be clear: The change affects only files that are written after this mount command. Some benchmark results: https://lkml.org/lkml/2019/1/28/1930 | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/412480",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/17560/"
]
} |
412,483 | I want to capture all disks that do not have a filesystem ( all disks that mkfs not runs on them ) I tried the below, but still gives the OS ( sda ). What is the best approach with lsblk or other command to capture all disks that are without filesystem? lsblk -f | egrep -v "xfs|ext3|ext4" NAME FSTYPE LABEL UUID MOUNTPOINT fd0 sda └─sda2 LVM2_member v0593a-KiKU-9emb-STbx-ByMz-S95k-jChr0m ├─vg00-lv_swap swap 1beb675f-0b4c-4225-8455-e876cafc5756 [SWAP] sdg sdh sdi sdj sdk sr0 | lsblk -o NAME,FSTYPE -dsn This will print a list of block devices that are not themselves holders for partitions (they do not have a partition table). The detected file system type is in the second column. If its blank there is no recognized file system. So to get the output you want in one command lsblk -o NAME,FSTYPE -dsn | awk '$2 == "" {print $1}' | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/412483",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/237298/"
]
} |
412,516 | I want to create an array in Bash which to contain all active network interfaces. I have managed to create a for loop printing them but when I try to create the array, it just contains the last lo interface but not the other. This is my code: #!/bin/bashfor iface in $(ifconfig | cut -d ' ' -f1| tr ':' '\n' | awk NF)do printf "$iface%s\n" declare -a array_test=["$iface"]donefor i in "${array_test[@]}"; do echo "$i"; done And this is my output: eno1eno2eno3lo[lo] Also, how can I exclude the lo localhost interface from the array? | Here is a solution, assign the list and then add item to it: #!/bin/basharray_test=()for iface in $(ifconfig | cut -d ' ' -f1| tr ':' '\n' | awk NF)do printf "$iface\n" array_test+=("$iface")doneecho ${array_test[@]} If you want the output displayed one item per line: for i in "${array_test[@]}"; do echo "$i"; done To remove localhost from output: if [ "$iface" != "lo" ] then array_test+=("$iface")fi | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/412516",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/216159/"
]
} |
412,519 | Every then and now ffmpeg tells me myfile.avi: Protocol not foundDid you mean file:myfile.avi if executing ffmpeg -i myfile.avi -vcodec copy -acodec copy -pix_fmt yuv420p myfile.mp4 . Using ffmpeg -i file:myfile.avi -vcodec copy -acodec copy -pix_fmt yuv420p myfile.mp4 then fails: file:myfile.avi: Protocol not foundDid you mean file:file:myfile.avi Executing the same (out of bash history) after reboot (normally I just go into pm-suspend-hybrid ) works, converting from .avi to .mp4 like expected. Any ideas what might be the reason and how to fix this? complete in-/output: $ ffmpeg -i myfile.avi -vcodec copy -acodec copy -pix_fmt yuv420p myfile.mp4ffmpeg version 3.2.9-1~deb9u1 Copyright (c) 2000-2017 the FFmpeg developers built with gcc 6.3.0 (Debian 6.3.0-18) 20170516 configuration: --prefix=/usr --extra-version='1~deb9u1' --toolchain=hardened --libdir=/usr/lib/x86_64-linux-gnu --incdir=/usr/include/x86_64-linux-gnu --enable-gpl --disable-stripping --enable-avresample --enable-avisynth --enable-gnutls --enable-ladspa --enable-libass --enable-libbluray --enable-libbs2b --enable-libcaca --enable-libcdio --enable-libebur128 --enable-libflite --enable-libfontconfig --enable-libfreetype --enable-libfribidi --enable-libgme --enable-libgsm --enable-libmp3lame --enable-libopenjpeg --enable-libopenmpt --enable-libopus --enable-libpulse --enable-librubberband --enable-libshine --enable-libsnappy --enable-libsoxr --enable-libspeex --enable-libssh --enable-libtheora --enable-libtwolame --enable-libvorbis --enable-libvpx --enable-libwavpack --enable-libwebp --enable-libx265 --enable-libxvid --enable-libzmq --enable-libzvbi --enable-omx --enable-openal --enable-opengl --enable-sdl2 --enable-libdc1394 --enable-libiec61883 --enable-chromaprint --enable-frei0r --enable-libopencv --enable-libx264 --enable-shared WARNING: library configuration mismatch avutil configuration: --disable-everything --disable-all --disable-doc --disable-htmlpages --disable-manpages --disable-podpages --disable-txtpages --disable-static --enable-avcodec --enable-avformat --enable-avutil --enable-fft --enable-rdft --enable-static --enable-libopus --disable-bzlib --disable-error-resilience --disable-iconv --disable-lzo --disable-network --disable-schannel --disable-sdl --disable-symver --disable-xlib --disable-zlib --disable-securetransport --disable-d3d11va --disable-dxva2 --disable-vaapi --disable-vda --disable-vdpau --disable-videotoolbox --disable-nvenc --enable-decoder='vorbis,libopus,flac' --enable-decoder='pcm_u8,pcm_s16le,pcm_s24le,pcm_s32le,pcm_f32le' --enable-decoder='pcm_s16be,pcm_s24be,pcm_mulaw,pcm_alaw' --enable-demuxer='ogg,matroska,wav,flac' --enable-parser='opus,vorbis,flac' --extra-cflags=-I/ssd/trunk_blink_tot/src/third_party/opus/src/include --optflags='"-O2"' --enable-decoder='theora,vp8' --enable-parser='vp3,vp8' --enable-pic --enable-decoder='aac,h264,mp3' --enable-demuxer='aac,mp3,mov' --enable-parser='aac,h264,mpegaudio' --enable-lto avcodec configuration: --disable-everything --disable-all --disable-doc --disable-htmlpages --disable-manpages --disable-podpages --disable-txtpages --disable-static --enable-avcodec --enable-avformat --enable-avutil --enable-fft --enable-rdft --enable-static --enable-libopus --disable-bzlib --disable-error-resilience --disable-iconv --disable-lzo --disable-network --disable-schannel --disable-sdl --disable-symver --disable-xlib --disable-zlib --disable-securetransport --disable-d3d11va --disable-dxva2 --disable-vaapi --disable-vda --disable-vdpau --disable-videotoolbox --disable-nvenc --enable-decoder='vorbis,libopus,flac' --enable-decoder='pcm_u8,pcm_s16le,pcm_s24le,pcm_s32le,pcm_f32le' --enable-decoder='pcm_s16be,pcm_s24be,pcm_mulaw,pcm_alaw' --enable-demuxer='ogg,matroska,wav,flac' --enable-parser='opus,vorbis,flac' --extra-cflags=-I/ssd/trunk_blink_tot/src/third_party/opus/src/include --optflags='"-O2"' --enable-decoder='theora,vp8' --enable-parser='vp3,vp8' --enable-pic --enable-decoder='aac,h264,mp3' --enable-demuxer='aac,mp3,mov' --enable-parser='aac,h264,mpegaudio' --enable-lto avformat configuration: --disable-everything --disable-all --disable-doc --disable-htmlpages --disable-manpages --disable-podpages --disable-txtpages --disable-static --enable-avcodec --enable-avformat --enable-avutil --enable-fft --enable-rdft --enable-static --enable-libopus --disable-bzlib --disable-error-resilience --disable-iconv --disable-lzo --disable-network --disable-schannel --disable-sdl --disable-symver --disable-xlib --disable-zlib --disable-securetransport --disable-d3d11va --disable-dxva2 --disable-vaapi --disable-vda --disable-vdpau --disable-videotoolbox --disable-nvenc --enable-decoder='vorbis,libopus,flac' --enable-decoder='pcm_u8,pcm_s16le,pcm_s24le,pcm_s32le,pcm_f32le' --enable-decoder='pcm_s16be,pcm_s24be,pcm_mulaw,pcm_alaw' --enable-demuxer='ogg,matroska,wav,flac' --enable-parser='opus,vorbis,flac' --extra-cflags=-I/ssd/trunk_blink_tot/src/third_party/opus/src/include --optflags='"-O2"' --enable-decoder='theora,vp8' --enable-parser='vp3,vp8' --enable-pic --enable-decoder='aac,h264,mp3' --enable-demuxer='aac,mp3,mov' --enable-parser='aac,h264,mpegaudio' --enable-lto avdevice configuration: --disable-decoder=amrnb --disable-decoder=libopenjpeg --disable-mips32r2 --disable-mips32r6 --disable-mips64r6 --disable-mipsdsp --disable-mipsdspr2 --disable-mipsfpu --disable-msa --disable-libopencv --disable-podpages --disable-stripping --enable-avfilter --enable-avresample --enable-gcrypt --enable-gnutls --enable-gpl --enable-libass --enable-libbluray --enable-libbs2b --enable-libcaca --enable-libcdio --enable-libfdk-aac --enable-libfontconfig --enable-libfreetype --enable-libfribidi --enable-libgme --enable-libgsm --enable-libilbc --enable-libkvazaar --enable-libmp3lame --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libopenh264 --enable-libopenjpeg --enable-libopenmpt --enable-libopus --enable-libpulse --enable-librubberband --enable-libshine --enable-libsnappy --enable-libsoxr --enable-libspeex --enable-libtesseract --enable-libtheora --enable-libvidstab --enable-libvo-amrwbenc --enable-libvorbis --enable-libvpx --enable-libx265 --enable-libxvid --enable-libzvbi --enable-nonfree --enable-opengl --enable-openssl --enable-postproc --enable-pthreads --enable-shared --enable-version3 --incdir=/usr/include/x86_64-linux-gnu --libdir=/usr/lib/x86_64-linux-gnu --prefix=/usr --toolchain=hardened --enable-frei0r --enable-chromaprint --enable-libx264 --enable-libiec61883 --enable-libdc1394 --enable-vaapi --disable-opencl --enable-libmfx --disable-altivec --shlibdir=/usr/lib/x86_64-linux-gnu avfilter configuration: --disable-decoder=amrnb --disable-decoder=libopenjpeg --disable-mips32r2 --disable-mips32r6 --disable-mips64r6 --disable-mipsdsp --disable-mipsdspr2 --disable-mipsfpu --disable-msa --disable-libopencv --disable-podpages --disable-stripping --enable-avfilter --enable-avresample --enable-gcrypt --enable-gnutls --enable-gpl --enable-libass --enable-libbluray --enable-libbs2b --enable-libcaca --enable-libcdio --enable-libfdk-aac --enable-libfontconfig --enable-libfreetype --enable-libfribidi --enable-libgme --enable-libgsm --enable-libilbc --enable-libkvazaar --enable-libmp3lame --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libopenh264 --enable-libopenjpeg --enable-libopenmpt --enable-libopus --enable-libpulse --enable-librubberband --enable-libshine --enable-libsnappy --enable-libsoxr --enable-libspeex --enable-libtesseract --enable-libtheora --enable-libvidstab --enable-libvo-amrwbenc --enable-libvorbis --enable-libvpx --enable-libx265 --enable-libxvid --enable-libzvbi --enable-nonfree --enable-opengl --enable-openssl --enable-postproc --enable-pthreads --enable-shared --enable-version3 --incdir=/usr/include/x86_64-linux-gnu --libdir=/usr/lib/x86_64-linux-gnu --prefix=/usr --toolchain=hardened --enable-frei0r --enable-chromaprint --enable-libx264 --enable-libiec61883 --enable-libdc1394 --enable-vaapi --disable-opencl --enable-libmfx --disable-altivec --shlibdir=/usr/lib/x86_64-linux-gnu avresample configuration: --disable-decoder=amrnb --disable-decoder=libopenjpeg --disable-mips32r2 --disable-mips32r6 --disable-mips64r6 --disable-mipsdsp --disable-mipsdspr2 --disable-mipsfpu --disable-msa --disable-libopencv --disable-podpages --disable-stripping --enable-avfilter --enable-avresample --enable-gcrypt --enable-gnutls --enable-gpl --enable-libass --enable-libbluray --enable-libbs2b --enable-libcaca --enable-libcdio --enable-libfdk-aac --enable-libfontconfig --enable-libfreetype --enable-libfribidi --enable-libgme --enable-libgsm --enable-libilbc --enable-libkvazaar --enable-libmp3lame --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libopenh264 --enable-libopenjpeg --enable-libopenmpt --enable-libopus --enable-libpulse --enable-librubberband --enable-libshine --enable-libsnappy --enable-libsoxr --enable-libspeex --enable-libtesseract --enable-libtheora --enable-libvidstab --enable-libvo-amrwbenc --enable-libvorbis --enable-libvpx --enable-libx265 --enable-libxvid --enable-libzvbi --enable-nonfree --enable-opengl --enable-openssl --enable-postproc --enable-pthreads --enable-shared --enable-version3 --incdir=/usr/include/x86_64-linux-gnu --libdir=/usr/lib/x86_64-linux-gnu --prefix=/usr --toolchain=hardened --enable-frei0r --enable-chromaprint --enable-libx264 --enable-libiec61883 --enable-libdc1394 --enable-vaapi --disable-opencl --enable-libmfx --disable-altivec --shlibdir=/usr/lib/x86_64-linux-gnu swscale configuration: --disable-decoder=amrnb --disable-decoder=libopenjpeg --disable-mips32r2 --disable-mips32r6 --disable-mips64r6 --disable-mipsdsp --disable-mipsdspr2 --disable-mipsfpu --disable-msa --disable-libopencv --disable-podpages --disable-stripping --enable-avfilter --enable-avresample --enable-gcrypt --enable-gnutls --enable-gpl --enable-libass --enable-libbluray --enable-libbs2b --enable-libcaca --enable-libcdio --enable-libfdk-aac --enable-libfontconfig --enable-libfreetype --enable-libfribidi --enable-libgme --enable-libgsm --enable-libilbc --enable-libkvazaar --enable-libmp3lame --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libopenh264 --enable-libopenjpeg --enable-libopenmpt --enable-libopus --enable-libpulse --enable-librubberband --enable-libshine --enable-libsnappy --enable-libsoxr --enable-libspeex --enable-libtesseract --enable-libtheora --enable-libvidstab --enable-libvo-amrwbenc --enable-libvorbis --enable-libvpx --enable-libx265 --enable-libxvid --enable-libzvbi --enable-nonfree --enable-opengl --enable-openssl --enable-postproc --enable-pthreads --enable-shared --enable-version3 --incdir=/usr/include/x86_64-linux-gnu --libdir=/usr/lib/x86_64-linux-gnu --prefix=/usr --toolchain=hardened --enable-frei0r --enable-chromaprint --enable-libx264 --enable-libiec61883 --enable-libdc1394 --enable-vaapi --disable-opencl --enable-libmfx --disable-altivec --shlibdir=/usr/lib/x86_64-linux-gnu swresample configuration: --disable-decoder=amrnb --disable-decoder=libopenjpeg --disable-mips32r2 --disable-mips32r6 --disable-mips64r6 --disable-mipsdsp --disable-mipsdspr2 --disable-mipsfpu --disable-msa --disable-libopencv --disable-podpages --disable-stripping --enable-avfilter --enable-avresample --enable-gcrypt --enable-gnutls --enable-gpl --enable-libass --enable-libbluray --enable-libbs2b --enable-libcaca --enable-libcdio --enable-libfdk-aac --enable-libfontconfig --enable-libfreetype --enable-libfribidi --enable-libgme --enable-libgsm --enable-libilbc --enable-libkvazaar --enable-libmp3lame --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libopenh264 --enable-libopenjpeg --enable-libopenmpt --enable-libopus --enable-libpulse --enable-librubberband --enable-libshine --enable-libsnappy --enable-libsoxr --enable-libspeex --enable-libtesseract --enable-libtheora --enable-libvidstab --enable-libvo-amrwbenc --enable-libvorbis --enable-libvpx --enable-libx265 --enable-libxvid --enable-libzvbi --enable-nonfree --enable-opengl --enable-openssl --enable-postproc --enable-pthreads --enable-shared --enable-version3 --incdir=/usr/include/x86_64-linux-gnu --libdir=/usr/lib/x86_64-linux-gnu --prefix=/usr --toolchain=hardened --enable-frei0r --enable-chromaprint --enable-libx264 --enable-libiec61883 --enable-libdc1394 --enable-vaapi --disable-opencl --enable-libmfx --disable-altivec --shlibdir=/usr/lib/x86_64-linux-gnu postproc configuration: --disable-decoder=amrnb --disable-decoder=libopenjpeg --disable-mips32r2 --disable-mips32r6 --disable-mips64r6 --disable-mipsdsp --disable-mipsdspr2 --disable-mipsfpu --disable-msa --disable-libopencv --disable-podpages --disable-stripping --enable-avfilter --enable-avresample --enable-gcrypt --enable-gnutls --enable-gpl --enable-libass --enable-libbluray --enable-libbs2b --enable-libcaca --enable-libcdio --enable-libfdk-aac --enable-libfontconfig --enable-libfreetype --enable-libfribidi --enable-libgme --enable-libgsm --enable-libilbc --enable-libkvazaar --enable-libmp3lame --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libopenh264 --enable-libopenjpeg --enable-libopenmpt --enable-libopus --enable-libpulse --enable-librubberband --enable-libshine --enable-libsnappy --enable-libsoxr --enable-libspeex --enable-libtesseract --enable-libtheora --enable-libvidstab --enable-libvo-amrwbenc --enable-libvorbis --enable-libvpx --enable-libx265 --enable-libxvid --enable-libzvbi --enable-nonfree --enable-opengl --enable-openssl --enable-postproc --enable-pthreads --enable-shared --enable-version3 --incdir=/usr/include/x86_64-linux-gnu --libdir=/usr/lib/x86_64-linux-gnu --prefix=/usr --toolchain=hardened --enable-frei0r --enable-chromaprint --enable-libx264 --enable-libiec61883 --enable-libdc1394 --enable-vaapi --disable-opencl --enable-libmfx --disable-altivec --shlibdir=/usr/lib/x86_64-linux-gnu libavutil 55. 34.101 / 55. 33.100 libavcodec 57. 64.101 / 57. 63.103 libavformat 57. 56.101 / 57. 55.100 libavdevice 57. 1.100 / 57. 6.100 libavfilter 6. 65.100 / 6. 82.100 libavresample 3. 1. 0 / 3. 5. 0 libswscale 4. 2.100 / 4. 6.100 libswresample 2. 3.100 / 2. 7.100 libpostproc 54. 1.100 / 54. 5.100myfile.avi: Protocol not foundDid you mean file:myfile.avi? Mulvya asked for following: $ ffmpeg -protocols -v 0Supported file protocols:Input: async bluray cache concat crypto data ffrtmpcrypt ffrtmphttp file ftp gopher hls http httpproxy https mmsh mmst pipe rtmp rtmpe rtmps rtmpt rtmpte rtmpts rtp sctp srtp subfile tcp tls udp udplite unixOutput: crypto ffrtmpcrypt ffrtmphttp file ftp gopher http httpproxy https icecast md5 pipe prompeg rtmp rtmpe rtmps rtmpt rtmpte rtmpts rtp sctp srtp tee tcp tls udp udplite unix Comparing the ldd output (suggested by andcoz) gives a difference of $ diff ldd-not-working-libs-only ldd-working-libs-only2d1< /usr/lib/chromium-browser/libffmpeg.so 16d14< librt.so.1 => /lib/x86_64-linux-gnu/librt.so.1 88a87> librt.so.1 => /lib/x86_64-linux-gnu/librt.so.1 Full output: $ cat ldd-not-working linux-vdso.so.1 (0x00007fff303e5000) /usr/lib/chromium-browser/libffmpeg.so (0x00007f8fb7d89000) libavdevice.so.57 => /usr/lib/x86_64-linux-gnu/libavdevice.so.57 (0x00007f8fb7b5b000) libavfilter.so.6 => /usr/lib/x86_64-linux-gnu/libavfilter.so.6 (0x00007f8fb76da000) libavformat.so.57 => /usr/lib/x86_64-linux-gnu/libavformat.so.57 (0x00007f8fb7295000) libavcodec.so.57 => /usr/lib/x86_64-linux-gnu/libavcodec.so.57 (0x00007f8fb5b6e000) libavresample.so.3 => /usr/lib/x86_64-linux-gnu/libavresample.so.3 (0x00007f8fb594c000) libpostproc.so.54 => /usr/lib/x86_64-linux-gnu/libpostproc.so.54 (0x00007f8fb572e000) libswresample.so.2 => /usr/lib/x86_64-linux-gnu/libswresample.so.2 (0x00007f8fb550f000) libswscale.so.4 => /usr/lib/x86_64-linux-gnu/libswscale.so.4 (0x00007f8fb527e000) libavutil.so.55 => /usr/lib/x86_64-linux-gnu/libavutil.so.55 (0x00007f8fb4ff5000) libva.so.1 => /usr/lib/x86_64-linux-gnu/libva.so.1 (0x00007f8fb4dd5000) libm.so.6 => /lib/x86_64-linux-gnu/libm.so.6 (0x00007f8fb4ad1000) libpthread.so.0 => /lib/x86_64-linux-gnu/libpthread.so.0 (0x00007f8fb48b4000) libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007f8fb4515000) librt.so.1 => /lib/x86_64-linux-gnu/librt.so.1 (0x00007f8fb430d000) libXv.so.1 => /usr/lib/x86_64-linux-gnu/libXv.so.1 (0x00007f8fb4108000) libX11.so.6 => /usr/lib/x86_64-linux-gnu/libX11.so.6 (0x00007f8fb3dc8000) libXext.so.6 => /usr/lib/x86_64-linux-gnu/libXext.so.6 (0x00007f8fb3bb6000) libxcb.so.1 => /usr/lib/x86_64-linux-gnu/libxcb.so.1 (0x00007f8fb398e000) libxcb-shm.so.0 => /usr/lib/x86_64-linux-gnu/libxcb-shm.so.0 (0x00007f8fb378a000) libxcb-xfixes.so.0 => /usr/lib/x86_64-linux-gnu/libxcb-xfixes.so.0 (0x00007f8fb3582000) libxcb-shape.so.0 => /usr/lib/x86_64-linux-gnu/libxcb-shape.so.0 (0x00007f8fb337e000) libcdio_paranoia.so.1 => /usr/lib/x86_64-linux-gnu/libcdio_paranoia.so.1 (0x00007f8fb3176000) libcdio_cdda.so.1 => /usr/lib/x86_64-linux-gnu/libcdio_cdda.so.1 (0x00007f8fb2f6e000) libsndio.so.6.1 => /usr/lib/x86_64-linux-gnu/libsndio.so.6.1 (0x00007f8fb2d5e000) libjack.so.0 => /usr/lib/x86_64-linux-gnu/libjack.so.0 (0x00007f8fb2b17000) libasound.so.2 => /usr/lib/x86_64-linux-gnu/libasound.so.2 (0x00007f8fb280a000) libSDL2-2.0.so.0 => /usr/lib/x86_64-linux-gnu/libSDL2-2.0.so.0 (0x00007f8fb24ee000) libdc1394.so.22 => /usr/lib/x86_64-linux-gnu/libdc1394.so.22 (0x00007f8fb2277000) libGL.so.1 => /usr/lib/x86_64-linux-gnu/libGL.so.1 (0x00007f8fb2005000) libpulse.so.0 => /usr/lib/x86_64-linux-gnu/libpulse.so.0 (0x00007f8fb1db4000) libcaca.so.0 => /usr/lib/x86_64-linux-gnu/libcaca.so.0 (0x00007f8fb1aeb000) libraw1394.so.11 => /usr/lib/x86_64-linux-gnu/libraw1394.so.11 (0x00007f8fb18db000) libavc1394.so.0 => /usr/lib/x86_64-linux-gnu/libavc1394.so.0 (0x00007f8fb16d6000) librom1394.so.0 => /usr/lib/x86_64-linux-gnu/librom1394.so.0 (0x00007f8fb14d1000) libiec61883.so.0 => /usr/lib/x86_64-linux-gnu/libiec61883.so.0 (0x00007f8fb12c4000) libdl.so.2 => /lib/x86_64-linux-gnu/libdl.so.2 (0x00007f8fb10c0000) libvidstab.so.1.0 => /usr/lib/libvidstab.so.1.0 (0x00007f8fb0eab000) libtesseract.so.3 => /usr/lib/x86_64-linux-gnu/libtesseract.so.3 (0x00007f8fb0708000) librubberband.so.2 => /usr/lib/x86_64-linux-gnu/librubberband.so.2 (0x00007f8fb04d2000) libmfx.so.0 => /usr/lib/x86_64-linux-gnu/libmfx.so.0 (0x00007f8fb02be000) libfribidi.so.0 => /usr/lib/x86_64-linux-gnu/libfribidi.so.0 (0x00007f8fb00a7000) libfreetype.so.6 => /usr/lib/x86_64-linux-gnu/libfreetype.so.6 (0x00007f8fafdf8000) libfontconfig.so.1 => /usr/lib/x86_64-linux-gnu/libfontconfig.so.1 (0x00007f8fafbba000) libbs2b.so.0 => /usr/lib/x86_64-linux-gnu/libbs2b.so.0 (0x00007f8faf9b4000) libass.so.9 => /usr/lib/x86_64-linux-gnu/libass.so.9 (0x00007f8faf783000) libgcrypt.so.20 => /lib/x86_64-linux-gnu/libgcrypt.so.20 (0x00007f8faf474000) libopenmpt.so.0 => /usr/lib/x86_64-linux-gnu/libopenmpt.so.0 (0x00007f8faf0e5000) libgme.so.0 => /usr/lib/x86_64-linux-gnu/libgme.so.0 (0x00007f8faee98000) libbluray.so.2 => /usr/lib/x86_64-linux-gnu/libbluray.so.2 (0x00007f8faec4b000) libgnutls.so.30 => /usr/lib/x86_64-linux-gnu/libgnutls.so.30 (0x00007f8fae8b2000) libchromaprint.so.1 => /usr/lib/x86_64-linux-gnu/libchromaprint.so.1 (0x00007f8fae69a000) libbz2.so.1.0 => /lib/x86_64-linux-gnu/libbz2.so.1.0 (0x00007f8fae48a000) libz.so.1 => /lib/x86_64-linux-gnu/libz.so.1 (0x00007f8fae270000) libzvbi.so.0 => /usr/lib/x86_64-linux-gnu/libzvbi.so.0 (0x00007f8fadfe3000) libxvidcore.so.4 => /usr/lib/x86_64-linux-gnu/libxvidcore.so.4 (0x00007f8fadccf000) libx265.so.116 => /usr/lib/x86_64-linux-gnu/libx265.so.116 (0x00007f8fad649000) libx264.so.150 => /usr/lib/x86_64-linux-gnu/libx264.so.150 (0x00007f8fad2ce000) libvpx.so.4 => /usr/lib/x86_64-linux-gnu/libvpx.so.4 (0x00007f8face91000) libvorbisenc.so.2 => /usr/lib/x86_64-linux-gnu/libvorbisenc.so.2 (0x00007f8facbe8000) libvorbis.so.0 => /usr/lib/x86_64-linux-gnu/libvorbis.so.0 (0x00007f8fac9bc000) libvo-amrwbenc.so.0 => /usr/lib/x86_64-linux-gnu/libvo-amrwbenc.so.0 (0x00007f8fac7a2000) libtheoraenc.so.1 => /usr/lib/x86_64-linux-gnu/libtheoraenc.so.1 (0x00007f8fac563000) libtheoradec.so.1 => /usr/lib/x86_64-linux-gnu/libtheoradec.so.1 (0x00007f8fac345000) libspeex.so.1 => /usr/lib/x86_64-linux-gnu/libspeex.so.1 (0x00007f8fac12c000) libsnappy.so.1 => /usr/lib/x86_64-linux-gnu/libsnappy.so.1 (0x00007f8fabf24000) libshine.so.3 => /usr/lib/x86_64-linux-gnu/libshine.so.3 (0x00007f8fabd17000) libopus.so.0 => /usr/lib/x86_64-linux-gnu/libopus.so.0 (0x00007f8fabac8000) libopenjp2.so.7 => /usr/lib/x86_64-linux-gnu/libopenjp2.so.7 (0x00007f8fab88d000) libopenh264.so.2 => /usr/lib/x86_64-linux-gnu/libopenh264.so.2 (0x00007f8fab598000) libopencore-amrwb.so.0 => /usr/lib/x86_64-linux-gnu/libopencore-amrwb.so.0 (0x00007f8fab384000) libopencore-amrnb.so.0 => /usr/lib/x86_64-linux-gnu/libopencore-amrnb.so.0 (0x00007f8fab15a000) libmp3lame.so.0 => /usr/lib/x86_64-linux-gnu/libmp3lame.so.0 (0x00007f8faaec3000) libkvazaar.so.3 => /usr/lib/x86_64-linux-gnu/libkvazaar.so.3 (0x00007f8faac3d000) libilbc.so.2 => /usr/lib/x86_64-linux-gnu/libilbc.so.2 (0x00007f8faaa26000) libgsm.so.1 => /usr/lib/x86_64-linux-gnu/libgsm.so.1 (0x00007f8faa819000) libfdk-aac.so.1 => /usr/lib/x86_64-linux-gnu/libfdk-aac.so.1 (0x00007f8faa561000) libcrystalhd.so.3 => /usr/lib/x86_64-linux-gnu/libcrystalhd.so.3 (0x00007f8faa346000) liblzma.so.5 => /lib/x86_64-linux-gnu/liblzma.so.5 (0x00007f8faa120000) libsoxr.so.0 => /usr/lib/x86_64-linux-gnu/libsoxr.so.0 (0x00007f8fa9ebd000) libvdpau.so.1 => /usr/lib/x86_64-linux-gnu/libvdpau.so.1 (0x00007f8fa9cb9000) libva-x11.so.1 => /usr/lib/x86_64-linux-gnu/libva-x11.so.1 (0x00007f8fa9ab3000) libva-drm.so.1 => /usr/lib/x86_64-linux-gnu/libva-drm.so.1 (0x00007f8fa98b0000) /lib64/ld-linux-x86-64.so.2 (0x00007f8fb861a000) libXau.so.6 => /usr/lib/x86_64-linux-gnu/libXau.so.6 (0x00007f8fa96ac000) libXdmcp.so.6 => /usr/lib/x86_64-linux-gnu/libXdmcp.so.6 (0x00007f8fa94a6000) libcdio.so.13 => /usr/lib/x86_64-linux-gnu/libcdio.so.13 (0x00007f8fa9281000) libbsd.so.0 => /lib/x86_64-linux-gnu/libbsd.so.0 (0x00007f8fa906b000) libstdc++.so.6 => /usr/lib/x86_64-linux-gnu/libstdc++.so.6 (0x00007f8fa8ce9000) libgcc_s.so.1 => /lib/x86_64-linux-gnu/libgcc_s.so.1 (0x00007f8fa8ad2000) libpulse-simple.so.0 => /usr/lib/x86_64-linux-gnu/libpulse-simple.so.0 (0x00007f8fa88cd000) libXcursor.so.1 => /usr/lib/x86_64-linux-gnu/libXcursor.so.1 (0x00007f8fa86c2000) libXinerama.so.1 => /usr/lib/x86_64-linux-gnu/libXinerama.so.1 (0x00007f8fa84bf000) libXi.so.6 => /usr/lib/x86_64-linux-gnu/libXi.so.6 (0x00007f8fa82af000) libXrandr.so.2 => /usr/lib/x86_64-linux-gnu/libXrandr.so.2 (0x00007f8fa80a4000) libXss.so.1 => /usr/lib/x86_64-linux-gnu/libXss.so.1 (0x00007f8fa7ea1000) libXxf86vm.so.1 => /usr/lib/x86_64-linux-gnu/libXxf86vm.so.1 (0x00007f8fa7c9b000) libwayland-egl.so.1 => /usr/lib/x86_64-linux-gnu/libwayland-egl.so.1 (0x00007f8fa7a99000) libwayland-client.so.0 => /usr/lib/x86_64-linux-gnu/libwayland-client.so.0 (0x00007f8fa788a000) libwayland-cursor.so.0 => /usr/lib/x86_64-linux-gnu/libwayland-cursor.so.0 (0x00007f8fa7682000) libxkbcommon.so.0 => /usr/lib/x86_64-linux-gnu/libxkbcommon.so.0 (0x00007f8fa7442000) libusb-1.0.so.0 => /lib/x86_64-linux-gnu/libusb-1.0.so.0 (0x00007f8fa7229000) libexpat.so.1 => /lib/x86_64-linux-gnu/libexpat.so.1 (0x00007f8fa6fff000) libxcb-dri3.so.0 => /usr/lib/x86_64-linux-gnu/libxcb-dri3.so.0 (0x00007f8fa6dfc000) libxcb-present.so.0 => /usr/lib/x86_64-linux-gnu/libxcb-present.so.0 (0x00007f8fa6bf9000) libxcb-sync.so.1 => /usr/lib/x86_64-linux-gnu/libxcb-sync.so.1 (0x00007f8fa69f2000) libxshmfence.so.1 => /usr/lib/x86_64-linux-gnu/libxshmfence.so.1 (0x00007f8fa67f0000) libglapi.so.0 => /usr/lib/x86_64-linux-gnu/libglapi.so.0 (0x00007f8fa65c1000) libXdamage.so.1 => /usr/lib/x86_64-linux-gnu/libXdamage.so.1 (0x00007f8fa63be000) libXfixes.so.3 => /usr/lib/x86_64-linux-gnu/libXfixes.so.3 (0x00007f8fa61b8000) libX11-xcb.so.1 => /usr/lib/x86_64-linux-gnu/libX11-xcb.so.1 (0x00007f8fa5fb6000) libxcb-glx.so.0 => /usr/lib/x86_64-linux-gnu/libxcb-glx.so.0 (0x00007f8fa5d9b000) libxcb-dri2.so.0 => /usr/lib/x86_64-linux-gnu/libxcb-dri2.so.0 (0x00007f8fa5b96000) libdrm.so.2 => /usr/lib/x86_64-linux-gnu/libdrm.so.2 (0x00007f8fa5986000) libpulsecommon-10.0.so => /usr/lib/x86_64-linux-gnu/pulseaudio/libpulsecommon-10.0.so (0x00007f8fa5703000) libdbus-1.so.3 => /lib/x86_64-linux-gnu/libdbus-1.so.3 (0x00007f8fa54b3000) libcap.so.2 => /lib/x86_64-linux-gnu/libcap.so.2 (0x00007f8fa52ad000) libslang.so.2 => /lib/x86_64-linux-gnu/libslang.so.2 (0x00007f8fa4dc5000) libncursesw.so.5 => /lib/x86_64-linux-gnu/libncursesw.so.5 (0x00007f8fa4b95000) libtinfo.so.5 => /lib/x86_64-linux-gnu/libtinfo.so.5 (0x00007f8fa496b000) liblept.so.5 => /usr/lib/x86_64-linux-gnu/liblept.so.5 (0x00007f8fa44fb000) libsamplerate.so.0 => /usr/lib/x86_64-linux-gnu/libsamplerate.so.0 (0x00007f8fa418f000) libfftw3.so.3 => /usr/lib/x86_64-linux-gnu/libfftw3.so.3 (0x00007f8fa3d92000) libpng16.so.16 => /usr/lib/x86_64-linux-gnu/libpng16.so.16 (0x00007f8fa3b5f000) libharfbuzz.so.0 => /usr/lib/x86_64-linux-gnu/libharfbuzz.so.0 (0x00007f8fa38ca000) libgpg-error.so.0 => /lib/x86_64-linux-gnu/libgpg-error.so.0 (0x00007f8fa36b6000) libmpg123.so.0 => /usr/lib/x86_64-linux-gnu/libmpg123.so.0 (0x00007f8fa3457000) libvorbisfile.so.3 => /usr/lib/x86_64-linux-gnu/libvorbisfile.so.3 (0x00007f8fa324e000) libxml2.so.2 => /usr/lib/x86_64-linux-gnu/libxml2.so.2 (0x00007f8fa2ee7000) libp11-kit.so.0 => /usr/lib/x86_64-linux-gnu/libp11-kit.so.0 (0x00007f8fa2c82000) libidn.so.11 => /lib/x86_64-linux-gnu/libidn.so.11 (0x00007f8fa2a4e000) libtasn1.so.6 => /usr/lib/x86_64-linux-gnu/libtasn1.so.6 (0x00007f8fa283b000) libnettle.so.6 => /usr/lib/x86_64-linux-gnu/libnettle.so.6 (0x00007f8fa2604000) libhogweed.so.4 => /usr/lib/x86_64-linux-gnu/libhogweed.so.4 (0x00007f8fa23cf000) libgmp.so.10 => /usr/lib/x86_64-linux-gnu/libgmp.so.10 (0x00007f8fa214c000) libnuma.so.1 => /usr/lib/x86_64-linux-gnu/libnuma.so.1 (0x00007f8fa1f41000) libogg.so.0 => /usr/lib/x86_64-linux-gnu/libogg.so.0 (0x00007f8fa1d38000) libcairo.so.2 => /usr/lib/x86_64-linux-gnu/libcairo.so.2 (0x00007f8fa1a24000) libgomp.so.1 => /usr/lib/x86_64-linux-gnu/libgomp.so.1 (0x00007f8fa17f7000) libXrender.so.1 => /usr/lib/x86_64-linux-gnu/libXrender.so.1 (0x00007f8fa15ed000) libffi.so.6 => /usr/lib/x86_64-linux-gnu/libffi.so.6 (0x00007f8fa13e4000) libudev.so.1 => /lib/x86_64-linux-gnu/libudev.so.1 (0x00007f8fb87a0000) libICE.so.6 => /usr/lib/x86_64-linux-gnu/libICE.so.6 (0x00007f8fa11c7000) libSM.so.6 => /usr/lib/x86_64-linux-gnu/libSM.so.6 (0x00007f8fa0fbf000) libXtst.so.6 => /usr/lib/x86_64-linux-gnu/libXtst.so.6 (0x00007f8fa0db9000) libsystemd.so.0 => /lib/x86_64-linux-gnu/libsystemd.so.0 (0x00007f8fb8714000) libwrap.so.0 => /lib/x86_64-linux-gnu/libwrap.so.0 (0x00007f8fa0baf000) libsndfile.so.1 => /usr/lib/x86_64-linux-gnu/libsndfile.so.1 (0x00007f8fa0937000) libasyncns.so.0 => /usr/lib/x86_64-linux-gnu/libasyncns.so.0 (0x00007f8fa0731000) libjpeg.so.62 => /usr/lib/x86_64-linux-gnu/libjpeg.so.62 (0x00007f8fa04c6000) libgif.so.7 => /usr/lib/x86_64-linux-gnu/libgif.so.7 (0x00007f8fa02bc000) libtiff.so.5 => /usr/lib/x86_64-linux-gnu/libtiff.so.5 (0x00007f8fa0045000) libwebp.so.6 => /usr/lib/x86_64-linux-gnu/libwebp.so.6 (0x00007f8f9fde4000) libglib-2.0.so.0 => /lib/x86_64-linux-gnu/libglib-2.0.so.0 (0x00007f8f9fad0000) libgraphite2.so.3 => /usr/lib/x86_64-linux-gnu/libgraphite2.so.3 (0x00007f8f9f8a3000) libpixman-1.so.0 => /usr/lib/x86_64-linux-gnu/libpixman-1.so.0 (0x00007f8f9f5fc000) libxcb-render.so.0 => /usr/lib/x86_64-linux-gnu/libxcb-render.so.0 (0x00007f8f9f3ee000) libuuid.so.1 => /lib/x86_64-linux-gnu/libuuid.so.1 (0x00007f8f9f1e9000) libselinux.so.1 => /lib/x86_64-linux-gnu/libselinux.so.1 (0x00007f8f9efc1000) liblz4.so.1 => /usr/lib/x86_64-linux-gnu/liblz4.so.1 (0x00007f8f9edaf000) libnsl.so.1 => /lib/x86_64-linux-gnu/libnsl.so.1 (0x00007f8f9eb97000) libFLAC.so.8 => /usr/lib/x86_64-linux-gnu/libFLAC.so.8 (0x00007f8f9e920000) libresolv.so.2 => /lib/x86_64-linux-gnu/libresolv.so.2 (0x00007f8f9e709000) libjbig.so.0 => /usr/lib/x86_64-linux-gnu/libjbig.so.0 (0x00007f8f9e4fb000) libpcre.so.3 => /lib/x86_64-linux-gnu/libpcre.so.3 (0x00007f8f9e288000) | I came to this topic for the message, just to add one tip for users, I found out that if the filename itself has ':' in it would cause the problem, as some programs generate files with the time stamp like "audio 12:34:14.ogg" | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/412519",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/104277/"
]
} |
412,577 | I faced an urge to run different scripts simultaneously on different servers. (+/- 2 seconds not a problem) The condition is to launch the scripts by the invocation from the primary server, and NOT by the Cron schedule. I tried SSH, but this way I must wait till the script finishes running on the remote server and only then drop the session. I'm looking for an alternative way, that allows to start the script on remote server without waiting for its end.P.S. - I'm not using CLI in my use case, but an external script invocation app, that triggers the script on the primary server. Thanks. | GNU screen will allow you to execute remote jobs without staying connected to the server, except on systems where systemd kills all processes upon logout 1 . Specifically, you can use: #!/bin/shssh server1 screen -d -m command1ssh server2 screen -d -m command2# ... Each local ssh session terminates immediately after launching the remote screen process, which in turn exits as soon as the executed command finishes. You can suffix each ssh line with & to make the connections in parallel. Excerpt from the manual, screen(1) : -d -m Start screen in "detached" mode. This creates a new session but doesn't attach to it. This is useful for system startup scripts. 1 If you are on a system that uses systemd configured with killUserProcesses=yes , you will need to replace screen with systemd-run --user --scope screen . | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/412577",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/267199/"
]
} |
412,588 | I have a small collection of old, custom programs for IRIX on SGI MIPS, some of which I need to run for work related reasons because there are no modern alternatives available and we need to access them for legacy stuff in the back room. I'm just the guy that was lucky enough to get tasked with finding a solution. There's no source code available and little to no documentation on any of it. What are my options short of spending $1000+ for a powerful and fully functional SGI workstation on ebay? I'm hesitant because, you know, it's ebay. And it's not like I can buy these things new from SGI anymore, which means I'll have to take my chances by relying purely on re-sellers of used and refurbished products. I spoke with SGI on the phone and they said that they don't support the hardware or software and that they won't even provide me with documentation or part numbers, so I'm out of luck on that end. IRIX simply won't boot in QEMU no matter how closely I try to configure it to match the real hardware, probably due to the custom graphics hardware and all kinds of little undocumented hacks and fixes for optimization done by the engineers in those old machines. I know there are people on Nekochan forums working on this and they've got some kind of headless boot in QEMU, but I need the whole OS and GUI to work. It doesn't have to be stable or all that fast, it just needs to work well enough to run these programs I have. | As far as I am aware, there is no fully working emulator for SGIs, because the graphics hardware consists of custom chips, is not properly documented, and hasn't been reverse engineered yet. Also, the disk images that would be needed to run an emulator are still under licence. There is some code in Mame , but I think this is work in progress, and I haven't tried to run it (because I don't have access to a disk image). | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/412588",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/267204/"
]
} |
412,613 | On Mac, the reset command in terminal almost does the same thing as clear . On Ubuntu Linux, and maybe other flavors of Linux, the reset command actually "resets" the terminal so that you can't scroll up or see previously input commands by scrolling. Is there a way to make the reset command on Mac act/do the same thing as reset does on Linux? If so, how can I do it? | Actually (on MacOS), it's not "the exact same thing" (the manual page description for "clear" is different from "reset" ). MacOS comes with ncurses 5.7 ( 9 years old), with some updates to the terminal database. If you want something newer, installing MacPorts lets you update ncurses to the current (less a few months) version. By the way, that would be newer than Ubuntu, which generally lags development versions by 6 months to 2 or more years. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/412613",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/241691/"
]
} |
412,638 | Suppose I have a graphical program named app . Usage example: app -t 'first tab' -t 'second tab' opens two 'tabs' in that program. The question is: how can I execute the command (i.e. app ) from within a bash script if the number of arguments can vary? Consider this: #!/bin/bashtabs=( 'first tab' 'second tab')# Open the app (starting with some tabs).app # ... How to get `app -t 'first tab' -t 'second tab'`? I would like the above script to have an effect equivalent to app -t 'first tab' -t 'second tab' . How can such a bash script be written? Edit: note that the question is asking about composing command line arguments on the fly using an array of arguments. | Giving the arguments from an array is easy, "${array[@]}" expands to the array entries as distinct words (arguments). We just need to add the -t flags. To do that, we can loop over the first array, and build another array for the full list of arguments, adding the -t flags as we go: #!/bin/bashtabs=("first tab" "second tab")args=()for t in "${tabs[@]}" ; do args+=(-t "$t")doneapp "${args[@]}" Use "$@" instead of "${tabs[@]}" to take the command line arguments of the script instead of a hard coded list. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/412638",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/217242/"
]
} |
412,695 | I know about the chattr +i filename command which makes a file read only for all users. However, the problem is that I can revoke this by using chattr -i filename . Is there a way to make a file readable by everyone on the system, but not writable by anyone, even the root, and with no going back (No option to make it writable again)? | Put it on a CD or a DVD. The once-writable kind, not the erasable ones. Or some other kind of a read-only device. Ok, I suppose you want a software solution, so here are some ideas: You could possibly create an SELinux ruleset that disables the syscall (*) that chattr uses, even for root. Another possibility would be to use capabilities : setting +i requires the CAP_LINUX_IMMUTABLE capability, so if you can arrange the capability bounding set of all processes to not include that, then no-one can change those flags. But you'd need support from init to have that apply to all processes. Systemd can do that , but I think it would need to be done for each service separately. (* maybe it was an ioctl instead.) However, if you do that, remember that a usual root can modify the filesystem from the raw device (that's what debugfs is for), so you'd need to prevent that, too, as well as prevent modifying the kernel (loading modules). Loading modules can be prevented with the kernel.modules_disabled sysctl, but I'm not sure about preventing access to raw devices. And make all the relevant configuration files also immutable. Anyway, after that, you'd also need to prevent changing the way the system boots, otherwise someone could reboot the system with a kernel that allows overriding the above restrictions. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/412695",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/231287/"
]
} |
412,707 | I have read that positional parameters start at $1 (for example: $1 , $2 , $3 and so on are positional parameters). But $0 is not a positional parameter. But why $0 is not a positional parameter? I think this could be a reason, but not sure: Positional parameters only take their values when a script is executed. For example: if we do ./myScript Hello , then $1 will have the value Hello . But $0 can take its value on two occasions: when a script is executed (it would have the value of the script name), and when bash itself is executed without a script (it would have the value bash or -bash ). | There's a clear parallel from the numbered parameters ( $0 , $1 , ...) to argv[] , the array that contains the command line parameters when a process starts. The first element of the array, argv[0] , usually holds the name of the process, and the actual arguments start from argv[1] . (Usually. It doesn't have to. The description of execve(2) states :"The value in argv[0] should point to a filename string that is associated with the process being started") At least post-facto, it's easy to imagine the convention was just copied directly to the shell. The values aren't directly copied, though. At least on my systems the shell process that starts when running ./script.sh with the hashbang #!/bin/bash -x gets the parameters /bin/bash , -x , ./script.sh . That is, the value that goes to $0 as seen by the script, is in argv[2] of the shell process. I assume most people would consider the command name distinct from its parameters, so $0 is functionally different from the others, so it's not unreasonable to call it differently, too. Of course we could have a scripting language that had a different naming convention. Perl puts the program name in a variable called $0 , and the arguments in the array @ARGV , starting from index zero, i.e. $ARGV[0] etc. Anyway, the most obvious answer would be to say that $0 is not a positional parameter, because the standard says so : Under 2.5 Parameters and Variables 2.5.1 Positional Parameters A positional parameter is a parameter denoted by the decimal value represented by one or more digits, other than the single digit 0. 2.5.2 Special Parameters # Expands to the decimal number of positional parameters. The command name (parameter 0) shall not be counted in the number given by ' # ' because it is a special parameter, not a positional parameter. 0 (Zero.) Expands to the name of the shell or shell script. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/412707",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/228808/"
]
} |
412,716 | I have read that bash can do integer arithmetic without using an external command, for example: echo "$((3 * (2 + 1)))" Can bash also do floating-point arithmetic without using an external command? | No. Bash cannot perform floating point arithmetic natively. This is not what you're looking for but may help someone else: Alternatives bc bc allows floating point arithmetic, and can even convert whole numbers to floating point by setting the scale value. (Note the scale value only affects division within bc but a workaround for this is ending any formula with division by 1) $ echo '10.1 / 1.1' | bc -l9.18181818181818181818$ echo '55 * 0.111111' | bc -l6.111105$ echo 'scale=4; 1 + 1' | bc -l2$ echo 'scale=4; 1 + 1 / 1' | bc -l2.0000 awk awk is a programming language in itself, but is easily leveraged to perform floating point arithmetic in your bash scripts, but that's not all it can do! echo | awk '{print 10.1 / 1.1}'9.18182$ awk 'BEGIN{print 55 * 0.111111}'6.11111$ echo | awk '{print log(100)}'4.60517$ awk 'BEGIN{print sqrt(100)}'10 I used both echo piped to awk and a BEGIN to show two ways of doing this. Anything within an awk BEGIN statement will be executed before input is read, however without input or a BEGIN statement awk wouldn't execute so you need to feed it input. Perl Another programming language that can be leveraged within a bash script. $ perl -l -e 'print 10.1 / 1.1'9.18181818181818$ somevar="$(perl -e 'print 55 * 0.111111')"; echo "$somevar"6.111105 Python Another programming language that can be leveraged within a bash script. $ python -c 'print 10.1 / 1.1'9.18181818182$ somevar="$(python -c 'print 55 * 0.111111')"; echo "$somevar"6.111105 Ruby Another programming language that can be leveraged within a bash script. $ ruby -l -e 'print 10.1 / 1.1'9.18181818181818$ somevar="$(ruby -e 'print 55 * 0.111111')"; echo "$somevar"6.111105 | {
"score": 6,
"source": [
"https://unix.stackexchange.com/questions/412716",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/267288/"
]
} |
412,805 | I cannot find anywhere the log level meaning of crond .I know that 0 is pretty much "log everything" while 8 is "show only important info" thanks to the crond help: / # crond --helpBusyBox v1.26.2 (2017-11-23 08:40:54 GMT) multi-call binary.Usage: crond -fbS -l N -d N -L LOGFILE -c DIR -f Foreground -b Background (default) -S Log to syslog (default) -l N Set log level. Most verbose:0, default:8 -d N Set log level, log to stderr -L FILE Log to FILE -c DIR Cron dir. Default:/var/spool/cron/crontabs but where I can find exactly the documentation/meaning about the different levels? I'm on Alpine 3.6. | The particular semantics of the log level values for crond are only defined in the code, it seems. All of the crond logging there goes through a crondlog() function in busybox/miscutils/crond.c function: static void crondlog(unsigned level, const char *msg, va_list va){ if (level >= G.log_level) { /* Do logging... */ So that only those messages with levels higher than the one you specify via the -l command-line option are logged. Then, elsewhere in that crond.c file, we see that crondlog() is only called via the log5() , log7() , and log8() wrapper functions. Which means that those are the only levels at which that crond program logs messages. These log levels are specific to crond , and are not related to any syslog(3) levels or other programs. In short, the meaning of these levels is only found in the source code for this program. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/412805",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/267353/"
]
} |
412,806 | Why did the first Linux developers choose to implement a non-preemptive kernel? Is it to save synchronization? As far as I know, Linux was developed in the early '90s, when PCs had a single processor. What advantage does a non-preemptive kernel give in such PCs? Why, however, the advantage is reduced by multi-core processors? | In the context of the Linux kernel, when people talk about preemption they often refer to the kernel’s ability to interrupt itself — essentially, switch tasks while running kernel code. Allowing this to happen is quite complex, which is probably the main reason it took a long time for the kernel to be made preemptible. At first most kernel code couldn’t be interrupted anyway, since it was protected by the big kernel lock. That lock was progressively eliminated from more and more kernel code, allowing multiple simultaneous calls to the kernel in parallel (which became more important as SMP systems became more common). But that still didn’t make the kernel itself preemptible; that took more development still, culminating in the PREEMPT_RT patch set which was eventually merged in the mainline kernel (and was capable of preempting the BKL anyway). Nowadays the kernel can be configured to be more or less preemptible, depending on the throughput and latency characteristics you’re after; see the related kernel configuration for details. As you can see from the explanations in the kernel configuration, preemption affects throughput and latency, not concurrency. On single-CPU systems, preemption is still useful because it allows events to be processed with shorter reaction times; however, it also results in lower throughput (since the kernel spends time switching tasks). Preemption allows any given CPU, in a single or multiple CPU system, to switch to another task more rapidly. The limiting factor on multi-CPU systems isn’t preemption, it’s locks, big or otherwise: any time code takes a lock, it means that another CPU can’t start performing the same action. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/412806",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/267352/"
]
} |
412,835 | I want to append text to file like echo "abc" >>file.txt . But this add abc after new line How can I add abc in the end of file with echo without new line? | echo "abc" >>file.txt puts a newline after abc , not before. If you end up with abc on its own line, that means that the newline before abc was already present in file.txt . Note that it is perfectly normal for a text file to end in a newline. On unix, a line consists of a sequence of characters other than null⁰ or newline followed by a newline. 1 Therefore any non-empty text file ends with a newline character. If you want to add text to the last line of a file, then you can't do it with >> , because this always appends to the file, so it always writes after the last newline. Instead you need a tool that is capable of modifying an existing file. For example, you can use sed : sed '$ s/$/abc/' file.txt >file.txt.new && mv file.txt.new file.txt In the sed command, the first $ means “do the following command only on the last line”, the command s/REGEX/REPLACEMENT/ replaces REGEX by REPLACEMENT, and the regular expression $ matches at the end of the line. Linux's sed command has a built-in feature to automate this create-new-file-and-replace sequence, so you can shorten that to sed -i '$ s/$/abc/' file.txt ⁰ That's a null byte, which ASCII calls NUL and Unicode calls U+0000. Text processing programs may or may not cope with this character. 1 See the definitions of Text File , Line , and Newline Character in the "Definitions" section of the Base Definitions chapter of IEEE 1003.1-2008:2016. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/412835",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/267375/"
]
} |
412,868 | Is there a simple way to reverse an array? #!/bin/basharray=(1 2 3 4 5 6 7)echo "${array[@]}" so I would get: 7 6 5 4 3 2 1 instead of: 1 2 3 4 5 6 7 | I have answered the question as written, and this code reverses the array. (Printing the elements in reverse order without reversing the array is just a for loop counting down from the last element to zero.) This is a standard "swap first and last" algorithm. array=(1 2 3 4 5 6 7)min=0max=$(( ${#array[@]} -1 ))while [[ min -lt max ]]do # Swap current first and last elements x="${array[$min]}" array[$min]="${array[$max]}" array[$max]="$x" # Move closer (( min++, max-- ))doneecho "${array[@]}" It works for arrays of odd and even length. | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/412868",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/240990/"
]
} |
412,869 | Can I somehow allow a GUI application running inside a flatpak allow to access and execute a binary at /bin respectively /var/bin ? Even if I allow full system access ( --filesystem=host ) it cannot even see/find the file there. My use case would be to execute shellcheck . | There are different ways: If your flatpak has host access, you could e.g. run /usr/local/bin/example in /var/run/host/usr/local/bin/example . I.e. /usr/local is mounted to /var/run/host/usr/local . However, that may still fail due to libraries not being at the correct place etc. Thus, you either need to adjust the env variables so it works there, or follow the way described below. Spawn commands outside of flatpak You may use flatpak-spawn to run commands in a different environment. However, usually you want to spwan the commands on the host system, thus breaking out of the sandbox. To do so, you obviously need to weaken the sandbox of the flatpak. Just add this permission: flatpak override com.packagename.App --talk-name=org.freedesktop.Flatpak Afterwards, you can run flatpak-spawn --host to run commands outside of the flatpak from the host. Now, to really use this in a GUI, you hopefully have some ways to change the path to the binaries you want to run there. This can get complicated, as you need to pass additional params and in the end you may end up having to write small wrapper scripts. In my case, I actually did, and you can find them here . They allow (in my case) Atom (but likely possible with any IDE) to run shellcheck or gpg … | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/412869",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/146739/"
]
} |
412,871 | I am trying to autostart a python script everytime it crashes on my raspberry pi. I am adhering to the shell script solution offered here: https://raspberrypi.stackexchange.com/questions/14735/how-do-i-restart-a-python-program-on-my-pi-when-it-crashes In the root directory, I created a shell script with sudo nano constantrun.sh . The contents of this shell script is currently (after taking into account changes offered in the comments and answers): #!/bin/shCOMMAND='python home/pi/projects/mypythonscript.py'LOGFILE=restart.txtwritelog() { now=`date` echo "$now $*" >> $LOGFILE}writelog "Starting"while true ; do $COMMAND writelog "Exited with status $?" writelog "Restarting"done I then ran sudo chmod +x constantrun.sh to make it executable. Following that, I ran the script with sudo sh constantrun.sh successfully. The script can now initialise. However, as my python script uses a mysqlconnector module, the error I recieve now is : Traceback (most recent call last): File "mypythonscript.py", line 8, in <module> import mysql.connector as mariadb ImportError: No module named mysql.connector Using a file explorer, it appears that the mysql.connnector is found installed in home/pi/.local/lib/python2.7/site-packages/mysql_connector-2.1.6.dist-info How can I resolve this error? | There are different ways: If your flatpak has host access, you could e.g. run /usr/local/bin/example in /var/run/host/usr/local/bin/example . I.e. /usr/local is mounted to /var/run/host/usr/local . However, that may still fail due to libraries not being at the correct place etc. Thus, you either need to adjust the env variables so it works there, or follow the way described below. Spawn commands outside of flatpak You may use flatpak-spawn to run commands in a different environment. However, usually you want to spwan the commands on the host system, thus breaking out of the sandbox. To do so, you obviously need to weaken the sandbox of the flatpak. Just add this permission: flatpak override com.packagename.App --talk-name=org.freedesktop.Flatpak Afterwards, you can run flatpak-spawn --host to run commands outside of the flatpak from the host. Now, to really use this in a GUI, you hopefully have some ways to change the path to the binaries you want to run there. This can get complicated, as you need to pass additional params and in the end you may end up having to write small wrapper scripts. In my case, I actually did, and you can find them here . They allow (in my case) Atom (but likely possible with any IDE) to run shellcheck or gpg … | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/412871",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/267287/"
]
} |
412,877 | I have some .mkv files that contain 6.1 audio in FLAC format. mediainfo reports the audio track in these files as: AudioID : 2Format : FLACFormat/Info : Free Lossless Audio CodecCodec ID : A_FLACDuration : 2mn 29sBit rate mode : VariableChannel(s) : 7 channelsChannel positions : Front: L C R, Side: L R, Back: C, LFE Sampling rate : 48.0 KHzBit depth : 24 bitsDelay relative to video : 14msWriting library : libFLAC 1.3.0 (UTC 2013-05-26)Language : EnglishDefault : YesForced : No I also have a "Home Theater" 6.1 amp (Sony STR-DE895 , if anyone cares) that accepts digital audio natively through an S/PDIF (optical and coax) connection in the following formats: PCM (limited to 2 channels on S/PDIF) DTS (5.1) DTS-ES (6.1) NEO6 (6.1) Dolby Digital (5.1) DIGITAL-EX (6.1) PRO LOGIC II I'd like to have these .mkv files driving all the 6.1 speakers from the amp, but if I convert the .mkv file with a command like this: ffmpeg -i Input.FLAC.6.1.mkv -c:s copy -c:v copy -c:a ac3 Output.AC3.6.1.mkv Then I get 5.1 audio, i.e. I lose the center back channel. Per mediainfo : AudioID : 2Format : AC-3Format/Info : Audio Coding 3Mode extension : CM (complete main)Format settings, Endianness : BigCodec ID : A_AC3Duration : 2mn 29sBit rate mode : ConstantBit rate : 448 KbpsChannel(s) : 6 channelsChannel positions : Front: L C R, Side: L R, LFESampling rate : 48.0 KHzBit depth : 16 bitsCompression mode : LossyDelay relative to video : 9msStream size : 8.00 MiB (9%)Writing library : Lavc57.107.100 ac3Language : EnglishDefault : YesForced : NoDURATION : 00:02:29.768000000NUMBER_OF_FRAMES : 1755NUMBER_OF_BYTES : 56974307_STATISTICS_WRITING_APP : mkvmerge v8.2.0 ('World of Adventure') 64bit_STATISTICS_WRITING_DATE_UTC : 2015-08-01 13:29:10_STATISTICS_TAGS : BPS DURATION NUMBER_OF_FRAMES NUMBER_OF_BYTES Notice how it changed from: Channel(s) : 7 channelsChannel positions : Front: L C R, Side: L R, Back: C, LFE To: Channel(s) : 6 channelsChannel positions : Front: L C R, Side: L R, LFE If I try to force the number of channels with -ac 7 I get: [ac3 @ 0x43f2a40] Specified channel layout '6.1' is not supported Trying to convert to DTS has the exact same result. I.e. replacing: -c:a ac3 With: -strict experimental -c:a dts Results in a mediainfo of: AudioID : 2Format : DTSFormat/Info : Digital Theater SystemsMode : 16Format settings, Endianness : BigCodec ID : A_DTSDuration : 2mn 29sBit rate mode : Constant Bit rate : 1 413 Kbps Channel(s) : 6 channels Channel positions : Front: L C R, Side: L R, LFESampling rate : 48.0 KHz Bit depth : 16 bitsCompression mode : LossyDelay relative to video : 14msStream size : 25.2 MiB (23%)Writing library : Lavc57.107.100 dcaLanguage : EnglishDefault : YesForced : NoDURATION : 00:02:29.774000000NUMBER_OF_FRAMES : 1755NUMBER_OF_BYTES : 56974307 _STATISTICS_WRITING_APP : mkvmerge v8.2.0 ('World of Adventure') 64bit_STATISTICS_WRITING_DATE_UTC : 2015-08-01 13:29:10_STATISTICS_TAGS : BPS DURATION NUMBER_OF_FRAMES NUMBER_OF_BYTES And trying to force 6.1 with -ac 7 causes the same '6.1' is not supported error as above. For what is worth, the ffmpeg used in the tests above was: $ ffmpeg -versionffmpeg version 3.4.1-static https://johnvansickle.com/ffmpeg/ Copyright (c) 2000-2017 the FFmpeg developersbuilt with gcc 6.4.0 (Debian 6.4.0-10) 20171112configuration: --enable-gpl --enable-version3 --enable-static --disable-debug --disable-ffplay --disable-indev=sndio --disable-outdev=sndio --cc=gcc-6 --enable-fontconfig --enable-frei0r --enable-gnutls --enable-gray --enable-libfribidi --enable-libass --enable-libvmaf --enable-libfreetype --enable-libmp3lame --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libopenjpeg --enable-librubberband --enable-librtmp --enable-libsoxr --enable-libspeex --enable-libvorbis --enable-libopus --enable-libtheora --enable-libvidstab --enable-libvo-amrwbenc --enable-libvpx --enable-libwebp --enable-libx264 --enable-libx265 --enable-libxvid --enable-libzimglibavutil 55. 78.100 / 55. 78.100libavcodec 57.107.100 / 57.107.100libavformat 57. 83.100 / 57. 83.100libavdevice 57. 10.100 / 57. 10.100libavfilter 6.107.100 / 6.107.100libswscale 4. 8.100 / 4. 8.100libswresample 2. 9.100 / 2. 9.100libpostproc 54. 7.100 / 54. 7.100 So, how can I convert the audio in the .mkv file to a format supported by my system, while preserving the 6.1 channel format? | There are different ways: If your flatpak has host access, you could e.g. run /usr/local/bin/example in /var/run/host/usr/local/bin/example . I.e. /usr/local is mounted to /var/run/host/usr/local . However, that may still fail due to libraries not being at the correct place etc. Thus, you either need to adjust the env variables so it works there, or follow the way described below. Spawn commands outside of flatpak You may use flatpak-spawn to run commands in a different environment. However, usually you want to spwan the commands on the host system, thus breaking out of the sandbox. To do so, you obviously need to weaken the sandbox of the flatpak. Just add this permission: flatpak override com.packagename.App --talk-name=org.freedesktop.Flatpak Afterwards, you can run flatpak-spawn --host to run commands outside of the flatpak from the host. Now, to really use this in a GUI, you hopefully have some ways to change the path to the binaries you want to run there. This can get complicated, as you need to pass additional params and in the end you may end up having to write small wrapper scripts. In my case, I actually did, and you can find them here . They allow (in my case) Atom (but likely possible with any IDE) to run shellcheck or gpg … | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/412877",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/77874/"
]
} |
412,900 | I want to distinguish my command from the rest of the output of the shell easily through different colors. But I don't have much experience with customizing my bash shell, so I don't know how to reset the color (after) my input. my current PS1 variable looks like this: export PS1="$red\u$green\$(__git_ps1) $turk\w$white$ "# '\$(__git_ps1)' git status prompt (generates a space before it even if empty) So my input is white. But even the output of the commands is white because it is not reset. Furthermore, if the command itself color codes its output, then it itself resets the colors which result in some ugly mixing of white and gray. So how do I reset the color after my input command? | The DEBUG trap runs before each command, so you can abuse it to reset the colors, if you have colored the command line input. ("abuse" since this isn't debugging.) With this: $ promptcol="$(tput sgr0)$(tput setaf 3)"$ cmdcol="$(tput sgr0)$(tput bold)"$ normalcol="$(tput sgr0)"$ trap 'echo -n "$normalcol"' DEBUG$ PS1="\[$promptcol\]\w\$ \[$cmdcol\]" I get this: Note that you need the \[...\] surrounding the color codes, so that the shell knows how to calculate the width of the prompt properly. Also, note that you can't put the \[...\] within the variables , the shell handles the prompt escapes first, and variable expansions only after that. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/412900",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/267420/"
]
} |
412,906 | I have file of following format: 1.0 2.0 3.0 4.0 5.0 Is it possible to copy value in first line 1.0 to the beginning of every line in awk? Like this: 1.0 1.0 1.0 2.0 1.0 3.0 1.0 4.0 1.0 5.0 | awk 'NR==1 {f=$1} {print f,$1}' file Output: 1.0 1.01.0 2.01.0 3.01.0 4.01.0 5.0 If current line number ( NR ) is 1 then save column 1 ( $1 ) to variable f . For every line print content of variable f and content of column 1. | {
"score": 4,
"source": [
"https://unix.stackexchange.com/questions/412906",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/-1/"
]
} |
412,933 | The following array represented the numbers of disks on each linux machines Each single array includes the number of disks on a linux machine . echo ${ARRAY_DISK_Quantity[*]}4 4 4 4 2 4 4 4 what is the simple way to identify that all array's values are equal? Good status: 4 4 4 4 4 4 4 4 Bad status: 4 4 4 4 4 4 2 4 Bad status: 6 6 6 6 6 6 6 6 6 6 2 6 2 | bash + GNU sort + GNU grep solution: if [ "${#array[@]}" -gt 0 ] && [ $(printf "%s\000" "${array[@]}" | LC_ALL=C sort -z -u | grep -z -c .) -eq 1 ] ; then echo okelse echo badfi English explanation: if unique-sorting the elements of the array results in only one element, then print "ok". Otherwise print "bad". The array is printed with NUL bytes separating each element, piped into GNU sort (relying on the -z aka --zero-terminated and -u aka --unique options), and then into grep (using options -z aka --null-data and -c aka --count ) to count the output lines. Unlike my previous version, I can't use wc here because it requires input lines terminated with a newline...and using sed or tr to convert NULs to newlines after the sort would defeat the purpose of using NUL separators. grep -c makes a reasonable substitute. Here's the same thing rewritten as a function: function count_unique() { local LC_ALL=C if [ "$#" -eq 0 ] ; then echo 0 else echo "$(printf "%s\000" "$@" | sort --zero-terminated --unique | grep --null-data --count .)" fi}ARRAY_DISK_Quantity=(4 4 4 4 2 4 4 4)if [ "$(count_unique "${ARRAY_DISK_Quantity[@]}")" -eq 1 ] ; then echo "ok"else echo "bad"fi | {
"score": 5,
"source": [
"https://unix.stackexchange.com/questions/412933",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/237298/"
]
} |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.