Dataset schema: source_id (int64, 1 to 4.64M), question (string, 0 to 28.4k characters), response (string, 0 to 28.8k characters), metadata (dict).
186,166
I opened vim in my iTerm2. First, I typed something in insert mode, like Hello. At this stage, if I don't exit insert mode, the delete key works and can delete the whole word if I want. If I quit insert mode and enter insert mode again, this Hello can't be removed with the delete key, but newly typed-in content can be removed. I've renamed my .vimrc; the problem still exists. On the shell command line, my delete key works fine. I also did some tests on a remote server through iTerm and didn't encounter the same issue. What could be the cause of this problem, and how do I fix it? PS: As I'm using a MacBook, the delete key corresponds to backspace on a PC.
Just put this in your .vimrc : set backspace=indent,eol,start
{ "source": [ "https://unix.stackexchange.com/questions/186166", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/74226/" ] }
186,214
How do I match the hidden files inside the given directories? For example, if I give the command below, it doesn't include the hidden files: du -b maybehere*/* How can I achieve this simply, with a single command, instead of using du -b maybehere*/.* maybehere*/* where I need to type maybehere twice?
Take advantage of brace expansion: du -b maybehere*/{*,.[^.],.??*} or alternatively du -b maybehere*/{,.[^.],..?}* The logic behind this is probably not obvious, so here is an explanation: * matches all non-hidden files .[^.] matches files whose names start with a single dot followed by something other than a dot; these are the only 2-character hidden filenames in the first form. .??* matches hidden files which are at least 3 characters long ..?* like above, but the second character must be a dot The whole point is to exclude the hard links to the current and parent directory ( . and .. ), but to include all normal files in such a way that each of them is counted only once! For example, the simplest thing would be to just write du -b maybehere*/{.,}* It means that the list contains a dot . and "nothing" (nothing is between , and the closing } ), thus all hidden files (which start with a dot) and all non-hidden files (which start with "nothing") would match. The problem is that this would also match . and .. , and this is most probably not what you want, so we have to exclude them somehow. A final word about brace expansion. Brace expansion is a mechanism by which you can include more files/strings/whatever on the command line by writing fewer characters. The syntax is {word1,word2,...} , i.e. it is a list of comma-separated strings which starts with { and ends with } . The bash manual gives a very basic and at the same time very common example of usage: $ echo a{b,c,d}e abe ace ade
{ "source": [ "https://unix.stackexchange.com/questions/186214", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/104071/" ] }
186,422
When I am in a directory in bash and I press cd Space Tab , it shows everything in the directory as a possibility. ( Show all 1000 possibilities? ) This is really cumbersome when I am in a directory with lots of regular files and relatively few directories. So, is it possible to make the autocompletion choices for cd include only directories? I know I can get a directory listing within a directory by doing ls -d */ but I'm not sure how to proceed from there. I am using CentOS 6.6 Final .
Just add complete -d cd in your ~/.bashrc (or other bash configuration file).
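To try this out before editing any file, the same builtin can be run in the current shell and then inspected; a quick check (nothing here is specific to CentOS):
    complete -d cd    # in the current shell: complete cd with directory names only
    complete -p cd    # show the completion spec now attached to cd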
{ "source": [ "https://unix.stackexchange.com/questions/186422", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/66677/" ] }
186,449
Let us say I run a command or shell script, and it gives me output. Without knowing the internals of this command or shell script, how does one determine whether the output was from stderr or stdout ? For example: $ ls -ld / drwxrwxr-t 35 root admin 1258 Dec 11 19:16 / vs ls -ld /test ls: /test: No such file or directory How do I ascertain that the first command printed to stdout and the second to stderr (did it?)?
There's no way to tell once the output has already been printed. In this case, both stdout and stderr are connected to the terminal, so the information about which stream was written to was already lost by the time the text appeared on your terminal; they were combined by the program before ever making it to the terminal. What you can do, in a case like the above, would be to run the command with stdout and stderr redirected to different places and see what happens. Or run it twice, once with stdout redirected to /dev/null and once with stderr redirected to /dev/null , and see which of those cases results in the text showing up. You can redirect stdout to /dev/null by tacking >/dev/null on the end of the command line, and you can redirect stderr to /dev/null by adding 2>/dev/null .
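Applying that suggestion to the examples above, a quick test might look like this (the exact wording of the ls error will vary):
    ls -ld /test >/dev/null     # message still shows up, so it was not on stdout
    ls -ld /test 2>/dev/null    # message gone, confirming it went to stderr
    ls -ld /    2>/dev/null     # listing still shows up, so it went to stdout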
{ "source": [ "https://unix.stackexchange.com/questions/186449", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/4252/" ] }
186,566
My question is: why is some operating system event handling nowadays still written in assembly language instead of a higher-level language such as C, when the kernel itself is mostly written in C?
The C language abstracts away access to the CPU registers, but an OS handling an event has to save the context, so it needs direct access to the registers exactly as they were at the point of the event, and that is something the C specification does not provide.
{ "source": [ "https://unix.stackexchange.com/questions/186566", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/85882/" ] }
186,568
On my server (Synology DS212) some files and folders have nobody nobody as user and group. What are the characteristics of this user and group? Who can read or write such a file? How can I change it, and to which user and group?
The nobody user is a pseudo user in many Unixes and Linux distributions. According to the Linux Standard Base , the nobody user and its group are an optional mnemonic user and group. That user is meant to represent the user with the least permissions on the system. In the best case that user and its group are not assigned to any file or directory as owner. This user belongs to its corresponding group, which is (according to the LSB) also called "nobody", and to no other group. In earlier Unixes and Linux distributions, daemons (for example a web server) were run as the nobody user. If a malicious user gained control over such a daemon, the damage they could do was limited to what the daemon could do. The problem is that when there are multiple daemons running as the nobody user, this protection no longer makes sense. That's why today such daemons have their own users. The nobody user should have no shell assigned to it. Different distributions handle that in different ways: some refer to /sbin/nologin , which prints a message; some refer to /bin/false , which simply exits with 1 (false); and some just disable the user in /etc/shadow . According to the Linux Standard Base, the nobody user is "Used by NFS". In fact the NFS daemon is one of the few that still needs the nobody user. If the owner of a file or directory in a mounted NFS share doesn't exist on the local system, it is replaced by the nobody user and its group. You can change the ownership of a file owned by the nobody user simply as the root user with chown . But on the machine hosting the NFS share, that owner might be a real user, so take care. I also use a Synology system. They run the Apache web server as the nobody user.
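As an illustration, locating and reclaiming such files as root might look like this; the share path /volume1/share and the target owner admin:users are just assumptions, substitute your own:
    find /volume1/share -user nobody -o -group nobody    # list everything owned by nobody
    chown -R admin:users /volume1/share/somedir          # hand a directory over to a real account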
{ "source": [ "https://unix.stackexchange.com/questions/186568", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/60030/" ] }
186,663
I was looking for a command to limit numbers read in from stdin . I wrote a little script for that purpose (critique is welcome), but I was wondering whether there is a standard command for this simple and (I think) common use case. My script, which finds the minimum of two numbers: #!/bin/bash # $1 limit [ -z "$1" ] && { echo "Needs a limit as first argument." >&2; exit 1; } read number if [ "$number" -gt "$1" ]; then echo "$1" else echo "$number" fi
If you know you are dealing with two integers a and b , then these simple shell arithmetic expansions using the ternary operator are sufficient to give the numerical max: $(( a > b ? a : b )) and numerical min: $(( a < b ? a : b )) E.g. $ a=10 $ b=20 $ max=$(( a > b ? a : b )) $ min=$(( a < b ? a : b )) $ echo $max 20 $ echo $min 10 $ a=30 $ max=$(( a > b ? a : b )) $ min=$(( a < b ? a : b )) $ echo $max 30 $ echo $min 20 $ Here is a shell script demonstrating this: #!/usr/bin/env bash [ -z "$1" ] && { echo "Needs a limit as first argument." >&2; exit 1; } read number echo Min: $(( $number < $1 ? $number : $1 )) echo Max: $(( $number > $1 ? $number : $1 ))
{ "source": [ "https://unix.stackexchange.com/questions/186663", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/88357/" ] }
186,776
I need to delete from a folder all files older than a specific file. Running bash on CentOS 7. I have a solution for this, but I think there should be a more elegant way to do it: reference_file=/my/reference/file get_modify_time() { stat $1 | grep -Po "Modify: \K[0-9- :]*" } pit=$(get_modify_time $reference_file) for f in /folder/0000* ; do [[ "$pit" > "$(get_modify_time $f)" ]] && rm $f ; done
I haven't tried it, but find should be able to handle the whole operation just fine: $ find dir/ -type f ! -newer reference -delete ... or... $ find dir/ -type f ! -newer reference ! -name reference -delete Basically: ! -newer reference matches files which have been modified less recently than reference . -delete deletes them. ! -name reference excludes reference , in case it is also located under dir/ and you want to keep it. This should delete all files older than reference , and located under dir/ .
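Because -delete cannot be undone, it is worth running the selection once without it to preview what would go, then repeating it with the action added:
    find dir/ -type f ! -newer reference ! -name reference          # dry run: only list the matches
    find dir/ -type f ! -newer reference ! -name reference -delete  # same selection, now deleting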
{ "source": [ "https://unix.stackexchange.com/questions/186776", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/67047/" ] }
186,821
I'm looking for grep to show all lines which do not start with numbers. I have tried something like this: grep -v '^[1-2]*[a-zA-Z]?' -o but it does not work. Do you have any idea for a suitable regexp?
grep -v '^[0-9]' will output all the lines that do not ( -v ) match the pattern: beginning ( ^ ) with a digit ( [0-9] ). For example $ cat test string string123 123string 1string2 $ grep -v '^[0-9]' test string string123 or, if you want to remove all the words that begin with a digit, sed 's/[[:<:]][[:digit:]][[:alnum:]_]*[[:>:]]//g' or, with shortcuts and assertions, sed 's/\<\d\w*\>//g' For example $ cat test one two2 3three 4four4 five six seven 8eight 9nine ten 11eleven 12twelve a b c d $ sed 's/[[:<:]][[:digit:]][[:alnum:]_]*[[:>:]]//g' test one two2 five six seven ten a b c d
{ "source": [ "https://unix.stackexchange.com/questions/186821", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/102994/" ] }
186,892
I want to give node.js the ability to listen on port 80, and to shut down the computer. Initially I tried these two commands in sequence: setcap cap_net_bind_service=+ep /usr/bin/nodejs setcap cap_sys_boot=+ep /usr/bin/nodejs Then my app was failing to bind to port 80. I checked with getcap: # getcap /usr/bin/nodejs /usr/bin/nodejs = cap_sys_boot+ep If I run setcap again for cap_net_bind_service: # getcap /usr/bin/nodejs /usr/bin/nodejs = cap_net_bind_service+ep I don't see anything in the man page http://linux.die.net/man/8/setcap about setting multiple capabilities, so I tried some things in desperation: # setcap cap_net_bind_service=+ep /usr/bin/nodejs cap_sys_boot=+ep /usr/bin/nodejs # getcap /usr/bin/nodejs /usr/bin/nodejs = cap_sys_boot+ep # setcap cap_net_bind_service=+ep cap_sys_boot=+ep /usr/bin/nodejs Failed to set capabilities on file `cap_sys_boot=+ep' (No such file or directory) How do I set multiple capabilities?
And one last desperate syntax guess pays off: # setcap cap_net_bind_service,cap_sys_boot=+ep /usr/bin/nodejs # getcap /usr/bin/nodejs /usr/bin/nodejs = cap_net_bind_service,cap_sys_boot+ep
{ "source": [ "https://unix.stackexchange.com/questions/186892", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/104532/" ] }
187,145
What is the difference between echo "Hello " ; echo "world" and echo "Hello " && echo "world" ? Both seem to run the two commands one after the other.
echo "Hello " ; echo "world" means run echo "world" no matter what the exit status of the previous command echo "Hello" is i.e. echo "world" will run irrespective of success or failure of the command echo "Hello" . Whereas in case of echo "Hello " && echo "world" , echo "world" will only run if the first command ( echo "Hello" ) is a success (i.e. exit status 0). The following commands give an example of how the shell handles commands chaining using the different operators: $ false ; echo "OK" OK $ true ; echo "OK" OK $ false && echo "OK" $ true && echo "OK" OK $ false || echo "OK" OK $ true || echo "OK" $
{ "source": [ "https://unix.stackexchange.com/questions/187145", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/33928/" ] }
187,221
How do I get information about a .deb package archive, such as package information, version, installed size, architecture, description, licensing information, etc.?
You can use dpkg-deb command to manipulate Debian package archive (.deb). From manpage:- -I, --info archive [control-file-name...] Provides information about a binary package archive. If no control-file-names are specified then it will print a summary of the contents of the package as well as its control file. If any control-file-names are specified then dpkg-deb will print them in the order they were specified; if any of the components weren't present it will print an error message to stderr about each one and exit with status 2. Example Usage:- $ dpkg-deb -I intltool_0.50.2-2_all.deb new debian package, version 2.0. size 52040 bytes: control archive=1242 bytes. 831 bytes, 19 lines control 1189 bytes, 18 lines md5sums Package: intltool Version: 0.50.2-2 Architecture: all Maintainer: Ubuntu Developers <[email protected]> Original-Maintainer: Debian GNOME Maintainers <[email protected]> Installed-Size: 239 Depends: gettext (>= 0.10.36-1), patch, automake | automaken, perl (>= 5.8.1), libxml-parser-perl, file Provides: xml-i18n-tools Section: devel Priority: optional Multi-Arch: foreign Homepage: https://launchpad.net/intltool Description: Utility scripts for internationalizing XML Automatically extracts translatable strings from oaf, glade, bonobo ui, nautilus theme and other XML files into the po files. . Automatically merges translations from po files back into .oaf files (encoding to be 7-bit clean). The merging mechanism can also be extended to support other types of XML files. You can list the content by dpkg-deb -c :- Example Usage: $ dpkg-deb -c libnotify-bin_0.7.6-1ubuntu3_i386.deb drwxr-xr-x root/root 0 2014-02-22 05:24 ./ drwxr-xr-x root/root 0 2014-02-22 05:24 ./usr/ drwxr-xr-x root/root 0 2014-02-22 05:24 ./usr/bin/ -rwxr-xr-x root/root 9764 2014-02-22 05:24 ./usr/bin/notify-send drwxr-xr-x root/root 0 2014-02-22 05:24 ./usr/share/ drwxr-xr-x root/root 0 2014-02-22 05:24 ./usr/share/man/ drwxr-xr-x root/root 0 2014-02-22 05:24 ./usr/share/man/man1/ -rw-r--r-- root/root 773 2014-02-22 05:24 ./usr/share/man/man1/notify-send.1.gz drwxr-xr-x root/root 0 2014-02-22 05:24 ./usr/share/doc/ drwxr-xr-x root/root 0 2014-02-22 05:25 ./usr/share/doc/libnotify-bin/ -rw-r--r-- root/root 1327 2011-07-31 03:11 ./usr/share/doc/libnotify-bin/copyright lrwxrwxrwx root/root 0 2014-02-22 05:25 ./usr/share/doc/libnotify-bin/AUTHORS -> ../libnotify4/AUTHORS lrwxrwxrwx root/root 0 2014-02-22 05:25 ./usr/share/doc/libnotify-bin/NEWS.gz -> ../libnotify4/NEWS.gz lrwxrwxrwx root/root 0 2014-02-22 05:25 ./usr/share/doc/libnotify-bin/changelog.Debian.gz -> ../libnotify4/changelog.Debian.gz Getting licensing information:- Most of archive's copyright information is available from /usr/share/doc/<pkgname>/copyright Example :- $ dpkg-deb -c gparted_0.18.0-1_i386.deb | grep -i copyright -rw-r--r-- root/root 1067 2011-12-08 00:34 ./usr/share/doc/gparted/copyright Which you can extract by -x and look for License under which it is released. Here:- $ cat /usr/share/doc/gparted/copyright | grep -i ^license -A 5 License: This package is free software; you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation; version 2 dated June, 1991. For more, run man dpkg-deb .
{ "source": [ "https://unix.stackexchange.com/questions/187221", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/66803/" ] }
187,339
I am using Mac OS X 10.9.4, following is my script to copy files from local machine to different host #!/bin/bash #!/usr/bin/expect echo "I will fail if you give junk values!!" echo " " echo "Enter file name: " read filePath echo " " echo "Where you want to copy?" echo "Enter" echo "1. if Host1" echo "2. if Host2" echo "3. if Host3" read choice echo " " if [ $choice -eq "1" ] then spawn scp filePath uname@host1:/usr/tmp expect "password" send "MyPassword\r" interact elif [ $choice -eq "2" ] then spawn scp filePath uname@host2:/usr/tmp expect "password" send "MyPassword\r" interact elif [ $choice -eq "3" ] then spawn scp filePath uname@host3:/usr/tmp expect "password" send "MyPassword\r" interact else echo "Wrong input" fi when running this script i am getting following ./rcopy.sh: line 21: spawn: command not found couldn't read file "password": no such file or directory ./rcopy.sh: line 23: send: command not found ./rcopy.sh: line 24: interact: command not found
Your script is attempting to combine two interpreters. You have both #!/bin/bash and #!/usr/bin/expect . That won't work. You can only use one of the two. Since bash was first, your script is being run as a bash script. However, within your script, you have expect commands such as spawn and send . Since the script is being read by bash and not by expect , this fails. You could get around this by writing different expect scripts and calling them from your bash script or by translating the whole thing to expect . The best way though, and one that avoids the horrible practice of having your passwords in plain text in a simple text file, is to set up passwordless ssh instead. That way, the scp won't need a password and you have no need for expect : First, create a public ssh key on your machine: ssh-keygen -t rsa You will be asked for a passphrase which you will be asked to enter the first time you run any ssh command after each login. This means that for multiple ssh or scp commands, you will only have to enter it once. Leave the passphrase empty for completely passwordless access. Once you have generated your public key, copy it over to each computer in your network : while read ip; do ssh-copy-id -i ~/.ssh/id_rsa.pub user1@$ip done < IPlistfile.txt The IPlistfile.txt should be a file containing a server's name or IP on each line. For example: host1 host2 host3 Since this is the first time you do this, you will have to manually enter the password for each IP but once you've done that, you will be able to copy files to any of these machines with a simple: scp file user@host1:/path/to/file Remove the expect from your script. Now that you have passwordless access, you can use your script as: #!/bin/bash echo "I will fail if you give junk values!!" echo " " echo "Enter file name: " read filePath echo " " echo "Where you want to copy?" echo "Enter" echo "1. if Host1" echo "2. if Host2" echo "3. if Host3" read choice echo " " if [ $choice -eq "1" ] then scp filePath uname@host1:/usr/tmp elif [ $choice -eq "2" ] then scp filePath uname@host2:/usr/tmp elif [ $choice -eq "3" ] then scp filePath uname@host3:/usr/tmp else echo "Wrong input" fi
{ "source": [ "https://unix.stackexchange.com/questions/187339", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/104834/" ] }
187,340
I have an host with proxmox with single public ip and some virtual machine installed whit webservers and multiple doimains, the first VM is a proxy with haproxy that forward the request to other VM and in proxmox host i have this iptables script: iptables -F iptables -P INPUT ACCEPT iptables -P FORWARD ACCEPT iptables -P OUTPUT ACCEPT iptables -A INPUT -p icmp --icmp-type echo-request -j DROP iptables -t nat -A POSTROUTING -s 192.168.1.0/24 -o eth0 -j MASQUERADE iptables -t nat -A PREROUTING -i eth0 -p tcp -m tcp --dport 22100 -j DNAT --to-destination 192.168.1.100:22 iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 80 -j DNAT --to-destination 192.168.1.100:80 iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 443 -j DNAT --to-destination 192.168.1.100:443 iptables -t nat -A PREROUTING -i eth0 -p tcp -m tcp --dport 22101 -j DNAT --to-destination 192.168.1.101:22 iptables -t nat -A PREROUTING -i eth0 -p tcp -m tcp --dport 22102 -j DNAT --to-destination 192.168.1.102:22 iptables -t nat -A PREROUTING -i eth0 -p tcp -m tcp --dport 22103 -j DNAT --to-destination 192.168.1.103:22 iptables-save > /etc/iptables.rules Internal lan is 192.168.1.0, the interface eth0 has public ip, the proxy is 192.168.1.100 and the other machine is 101, 102, 103 etc.. In another VM i have installed a website that works if i connect from external, instead if i launch curl www.mydomain.com from the same VM i have curl: (7) Failed connect to www.mydomain.com:80 ; Connection refused, i think it is a problem of iptables
{ "source": [ "https://unix.stackexchange.com/questions/187340", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/64116/" ] }
187,404
I need to password protect my PDF file(s), because I am going to send them through email and I want anyone who would view my PDF file(s) to be prompted for a password. How can I add a password to a PDF in Linux Mint 17.1?
You can use the program pdftk to set both the owner and/or user password pdftk input.pdf output output.pdf owner_pw xyz user_pw abc where owner_pw and user_pw are the commands to add the passwords xyz and abc respectively (you can also specify one or the other but the user_pw is necessary in order to prohibit opening). You might also want to ensure that encryption strength is 128 bits by adding (though currently 128 bits is default ): .... encrypt_128bit If you cannot run pdftk as it is no longer in every distro, you can try qpdf . Using qpdf --help gives information on the syntax. Using the same "values" as for pdftk : qpdf --encrypt abc xyz 256 -- input.pdf output.pdf
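For completeness, a protected file can be checked or unprotected again with the same tool, given the password; a sketch reusing the example passwords above:
    pdftk output.pdf dump_data > /dev/null           # complains about a required password if protection is in place
    pdftk output.pdf input_pw abc output plain.pdf   # strip the protection again using the user password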
{ "source": [ "https://unix.stackexchange.com/questions/187404", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/74809/" ] }
187,415
I am trying to launch Firefox over SSH, using ssh -X user@hostname and then firefox -no-remote , but it's very, very slow. How can I fix this? Is it a connection problem?
The default ssh settings make for a pretty slow connection. Try the following instead: ssh -YC4c arcfour,blowfish-cbc user@hostname firefox -no-remote The options used are: -Y Enables trusted X11 forwarding. Trusted X11 forwardings are not subjected to the X11 SECURITY extension controls. -C Requests compression of all data (including stdin, stdout, stderr, and data for forwarded X11 and TCP connections). The compression algorithm is the same used by gzip(1), and the “level” can be controlled by the CompressionLevel option for pro‐ tocol version 1. Compression is desirable on modem lines and other slow connections, but will only slow down things on fast networks. The default value can be set on a host-by-host basis in the configuration files; see the Compression option. -4 Forces ssh to use IPv4 addresses only. -c cipher_spec Selects the cipher specification for encrypting the session. For protocol version 2, cipher_spec is a comma-separated list of ciphers listed in order of preference. See the Ciphers keyword in ssh_config(5) for more information. The main point here is to use a different encryption cypher, in this case arcfour which is faster than the default, and to compress the data being transferred. NOTE: I am very, very far from an expert on this. The command above is what I use after finding it on a blog post somewhere and I have noticed a huge improvement in speed. I am sure the various commenters below know what they're talking about and that these encryption cyphers might not be the best ones. It is very likely that the only bit of this answer that is truly relevant is using the -C switch to compress the data being transferred.
{ "source": [ "https://unix.stackexchange.com/questions/187415", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/90717/" ] }
187,583
I've seen history | grep blah and history |grep blah ; and history|grep blah also works, though no one ever seems to use it. Is there any significance in the spaces (e.g. piping to/from different commands requires different use of spaces), or is it always arbitrary?
bash defines several metacharacters . From man bash : metacharacter A character that, when unquoted, separates words. One of the following: | & ; ( ) < > space tab Because metacharacters separate words, it does not matter whether they are surrounded by spaces. The pipe symbol, | , is a metacharacter and hence, as you noticed, it does not need spaces around it. Note that [ , ] , { , } , and = are not metacharacters. Their meaning, by contrast, depends strongly on whether they are surrounded by blanks. Examples of when spaces are and are not needed As you noticed, it does not matter whether | is surrounded by spaces. Let us consider some examples that commonly confuse bash users. Consider: $ (date) Sun Mar 1 12:47:07 PST 2015 The parens above force the date command to be run in a subshell. Because ( and ) are metacharacters, no spaces are needed. By contrast: $ {date} bash: {date}: command not found Since { and } are not metacharacters, the shell treats {date} as one word. Instead of looking for the date command, it looks for a command named {date} . Because it doesn't find one, an error results. Another common problem is the test command. The following works successfully: $ [ abc ] && echo Yes Yes Remove the spaces and an error occurs: $ [abc] && echo Yes bash: [abc]: command not found Because [ and ] are not metacharacters, the shell treats [abc] as a single word and the result, just like in the date example, is an error. Assignment statements are also sensitive to spaces. The following assignment is successful: $ v=date $ echo $v date Add a space and the assignment fails: $ v= date Sun Mar 1 12:55:05 PST 2015 In the above, the shell temporarily sets v to empty and then executes the date command. Adding a space before = also causes a failure, but for a different reason: $ v =date bash: v: command not found Here, the shell attempts to execute the command v with the argument =date . The error is because it found no command named v .
{ "source": [ "https://unix.stackexchange.com/questions/187583", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/85900/" ] }
187,651
I'm reading a shell tutorial today from http://www.tutorialspoint.com/unix/unix-quoting-mechanisms.htm in which it says: if a single quote appears within a string to be output, you should not put the whole string within single quotes; instead you should precede it with a backslash ( \ ) as follows: echo 'It\'s Shell Programming' I tried this on my CentOS server and it doesn't work; a > prompt appears, hinting that I should type more. I was wondering: since single quotes turn every special character into a normal character, including the escape symbol \ but excluding the single quote ' itself, how should I represent a single quote ' in a single-quoted phrase?
The tutorial is wrong. POSIX says: A single-quote cannot occur within single-quotes. Here's some alternatives: echo $'It\'s Shell Programming' # ksh, bash, and zsh only, does not expand variables echo "It's Shell Programming" # all shells, expands variables echo 'It'\''s Shell Programming' # all shells, single quote is outside the quotes echo 'It'"'"'s Shell Programming' # all shells, single quote is inside double quotes Further reading: Quotes - Greg's Wiki
{ "source": [ "https://unix.stackexchange.com/questions/187651", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/74226/" ] }
187,679
I was wondering if there is a (simple) way to reverse a command that was executed in the bash shell, i.e. to make it undone. Is there something similar to the Ctrl + Z combination that undoes an action (for example in Word or LibreOffice)?
You should understand that bash is just an execution environment. It executes commands that you call - it's not the business of the shell to even know what the command does, you can call any executable you want. In most cases, it's not even clear what an undo would do - for instance, can you "unplay" a movie? Can you "unsend" an e-mail? What would "undo running firefox" even mean, for instance? You may close it, but bookmarks, downloads and history won't be the same. If you run a command, it is executed, whatever it does. It's up to you to know what you are doing. Note that this doesn't mean individual commands don't have "undo"... they can - you can even write a wrapper function that does something to protect you from foolish mistakes. For instance, mv is easily reversible by just moving the file back where it came from, unless you have overwritten something. That's why -i switch exists, to ask you before overwriting. Technically, inverse of cp is rm , unless something was overwritten (again, -i asks you about it). rm is more permanent, to try to get the files back, you have to actually do some lower-level hacking (there are tools for that). If you considered the filesystem as a black-box, it technically wouldn't be possible at all (only the details of logical and physical layout of data allows you to do some damage control). rm means rm , if you want "trash" functionality, that's actually just mv into some pre-arranged directory (and possibly a scheduled service to maintain or empty it) - nothing special about it. But you can use -i to prompt you before deletion. You may use a function or an alias to always include -i in these commands. Note that most applications are protecting you from data loss in different ways. Most (~all) text editors create backup files with ~ at the end in case you want to bring the old version back. On some distros, ls is aliased by default so that it hides them ( -B ), but they are there. A lot of protection is given by managing permissions properly: don't be root unless you need to be, make files read-only if you don't want them to change. Sometimes it's useful to have a "sandbox" environment - you run things on a copy, see if it's alright, and then merge the changes (or abandon the changes). chroot or lxc can prevent your scripts to escape from a directory and do damage. When you try to execute things in bulk - for instance, if you have a complex find command, while loop, a long pipeline, or anything like that, it's a good idea to first just echo the commands that will get executed. Then, if the commands look reasonable, remove echo and run it for real. And of course, if you really aren't sure about what you are doing, make a copy first. I sometimes just create a tarball of the current directory. Speaking of tarballs - tarbombs and zipbombs are quite common unfortunately (when people make an archive without a proper subdirectory, and unpacking scatters the files around, making a huge mess). I got used to just making a subdirectory myself before unpacking (I could list the contents, but I'm lazy). I'm thinking about making a script that will create a subdirectory only if the contents were archived without a subdirectory. But when it does happen, ls -lrt helps to find the most recent files to put where they belong. I just gave this as an example - a program can have many side effects which the shell has no way of knowing about (How could it? It's a different program being called!) The only sure way of avoiding mistakes is to be careful (think twice, run once). 
Possibly the most dangerous commands are the ones that deal with the filesystem: mkfs, fdisk/gdisk and so on. They can utterly destroy the filesystem (although with proper forensic software, at least partial reverse-engineering is possible). Always double-check the device you are formatting and partitioning is correct, before running the command.
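A concrete version of the advice above to echo commands before running them, using a bulk rename as an example (the *.txt pattern is arbitrary):
    for f in *.txt; do echo mv -- "$f" "${f%.txt}.bak"; done   # preview the commands only
    for f in *.txt; do      mv -- "$f" "${f%.txt}.bak"; done   # run them for real once they look right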
{ "source": [ "https://unix.stackexchange.com/questions/187679", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/100276/" ] }
187,742
I am using a script to regularly download my gmail messages that compresses the raw .eml into .gz files. The script creates a folder for each day, and then compresses every message into its own file. I would like a way to search through this archive for a "string." Grep alone doesn't appear to do it. I also tried SearchMonkey.
If you want to grep recursively in all .eml.gz files in the current directory, you can use: find . -name \*.eml.gz -print0 | xargs -0 zgrep "STRING" You have to escape the first * so that the shell does not interpret it. -print0 tells find to print a null character after each file it finds; xargs -0 reads from standard input and runs the command after it for each file; zgrep works like grep , but uncompresses the file first.
{ "source": [ "https://unix.stackexchange.com/questions/187742", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/62794/" ] }
187,889
How do I replace only the last occurrence of "-" in a string with a space using sed ? For example: echo $MASTER_DISK_RELEASE swp-RedHat-Linux-OS-5.5.0.0-03 but I want to get the following output ( replacing the last hyphen [“-“] with a space ) swp-RedHat-Linux-OS-5.5.0.0 03
You can do it with single sed : sed 's/\(.*\)-/\1 /' or, using extended regular expression: sed -r 's/(.*)-/\1 /' The point is that sed is very greedy, so matches as many characters before - as possible, including others - . $ echo 'swp-RedHat-Linux-OS-5.5.0.0-03' | sed 's/\(.*\)-/\1 /' swp-RedHat-Linux-OS-5.5.0.0 03
{ "source": [ "https://unix.stackexchange.com/questions/187889", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/67059/" ] }
188,033
Is it possible to rename the current working directory from within a shell (Bash in my particular case)? If I attempt to do this the straightforward way, I end up with an error: nathan@nathan-desktop:/tmp/test$ mv . test2 mv: cannot move ‘.’ to ‘test2’: Device or resource busy Is there another way to do this without changing the current directory? I realize that I can easily accomplish this by changing to the parent directory, but I'm curious if this is necessary. After all, if I rename the directory from another shell, I can still create files in the original shell afterwards.
Yes, but you have to refer to the directory by name, not by using the . notation. You can use a relative path, it just has to end with something other than . or .. : /tmp/test$ mv ../test ../test2 /tmp/test$ pwd /tmp/test /tmp/test$ pwd -P /tmp/test2 You can use an absolute path: /tmp/test$ cd -P . /tmp/test2$ mv "$PWD" "${PWD%/*}/test3" /tmp/test2$ Similarly, rmdir . won't ever work, but rmdir "$PWD" does.
{ "source": [ "https://unix.stackexchange.com/questions/188033", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/1049/" ] }
188,042
I am currently trying to understand the difference between init.d and cron @reboot for running a script at startup/boot of the system. The use of @reboot (this method was mentioned in this forum by hs.chandra ) is somewhat simpler: you just go into crontab -e and create an entry like @reboot /some_directory/to_your/script/your_script.txt and then your_script.txt will be executed every time the system is rebooted. An in-depth explanation of @reboot is here. Alternatively, by embedding /etc/init.d/your_script.txt as the second line of your script, i.e.: #!/bin/bash # /etc/init.d/your_script.txt you can run chmod +x /etc/init.d/your_script.txt and that should also result in your_script.txt running every time the system is booted. What are the key differences between the two? Which is more robust? Is one better than the other? Is this the correct way of embedding a script to run during booting? I will be incorporating a bash .sh file to run during startup.
init.d , also known as a SysV script, is meant to start and stop services during system initialization and shutdown. ( /etc/init.d/ scripts are also run on systemd-enabled systems for compatibility.) The script is executed during boot and shutdown (by default). The script should be a proper init.d script, not just any script: it should support start and stop and more (see the Debian policy ). The script can be executed during the system boot (you can define when). crontab (and therefore @reboot ): cron will execute any regular command or script, nothing special here. Any user can add an @reboot script (not just root). On a Debian system with systemd: cron's @reboot is executed during multi-user.target . On a Debian system with SysV (not systemd), crontab(5) mentions: Please note that startup, as far as @reboot is concerned, is the time when the cron(8) daemon startup. In particular, it may be before some system daemons, or other facilities, were startup. This is due to the boot order sequence of the machine. It's easy to schedule the same script at boot and periodically. /etc/rc.local is often considered to be ugly or deprecated (at least by Red Hat ), but it still has some nice features: rc.local will execute any regular command or script, nothing special here. On a Debian system with SysV (not systemd): rc.local was (almost) the last service to start. But on a Debian system with systemd: rc.local is executed after network.target by default (not network-online.target !). Regarding systemd's network.target and network-online.target , read Running Services After the Network is up .
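To make the comparison concrete, here is what each variant might look like; the crontab line is standard, while the init.d fragment is only a bare sketch (a real one should follow your distribution's skeleton, LSB headers and all) and the script path is an assumption:
    # crontab -e  (any user): run once when the cron daemon starts at boot
    @reboot /home/user/your_script.sh

    # /etc/init.d/your_script  (root, chmod +x): minimal start/stop shape
    #!/bin/sh
    case "$1" in
      start) /home/user/your_script.sh & ;;
      stop)  pkill -f your_script.sh ;;
      *)     echo "Usage: $0 {start|stop}"; exit 1 ;;
    esac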
{ "source": [ "https://unix.stackexchange.com/questions/188042", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/102813/" ] }
188,182
I want to have a script that stores the current working directory in a variable. The section that needs the directory is like this: dir = pwd . It just prints pwd . How do I get the current working directory into a variable?
There's no need to do that, it's already in a variable: $ echo "$PWD" /home/terdon The PWD variable is defined by POSIX and will work on all POSIX-compliant shells: PWD Set by the shell and by the cd utility. In the shell the value shall be initialized from the environment as follows. If a value for PWD is passed to the shell in the environment when it is executed, the value is an absolute pathname of the current working directory that is no longer than {PATH_MAX} bytes including the terminating null byte, and the value does not contain any components that are dot or dot-dot, then the shell shall set PWD to the value from the environment. Otherwise, if a value for PWD is passed to the shell in the environment when it is executed, the value is an absolute pathname of the current working directory, and the value does not contain any components that are dot or dot-dot, then it is unspecified whether the shell sets PWD to the value from the environment or sets PWD to the pathname that would be output by pwd -P. Otherwise, the sh utility sets PWD to the pathname that would be output by pwd -P. In cases where PWD is set to the value from the environment, the value can contain components that refer to files of type symbolic link. In cases where PWD is set to the pathname that would be output by pwd -P, if there is insufficient permission on the current working directory, or on any parent of that directory, to determine what that pathname would be, the value of PWD is unspecified. Assignments to this variable may be ignored. If an application sets or unsets the value of PWD, the behaviors of the cd and pwd utilities are unspecified. For the more general answer, the way to save the output of a command in a variable is to enclose the command in $() or ` ` (backticks): var=$(command) or var=`command` Of the two, the $() is preferred since it is easier to build complex commands like: command0 "$(command1 "$(command2 "$(command3)")")" Whose backtick equivalent would look like: command0 "`command1 \"\`command2 \\\"\\\`command3\\\`\\\"\`\"`"
{ "source": [ "https://unix.stackexchange.com/questions/188182", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/-1/" ] }
188,197
I'm using a virtual machine running Windows for development purposes inside a Ubuntu host (I also use the Ubuntu part for my regular activities, but not both at the same time). As I need to compile on Windows regularly, I want to increase the performance of the VM as much as I can. Therefore I want to use a "minimal" version of my desktop environment: if possible, I want only my VM running, in fullscreen. Is it possible to use such a minimal system? If yes, what is it, or how can I achieve this setup myself? An environment chooser on my login screen would be great, but optional.
{ "source": [ "https://unix.stackexchange.com/questions/188197", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/105348/" ] }
188,205
I have a scenario like VAR = `some command that produces the file path name and assigns it to VAR` For example, VAR can have a value like /root/user/samp.txt I want to run a grep command like grep HI $VAR This doesn't work; it gives an error saying cannot open /root/user/samp.txt , and the same error when I tried cat $VAR . How do I handle this? I have tried echo $VAR | grep HI and grep HI "$VAR" . I am using the Korn shell.
{ "source": [ "https://unix.stackexchange.com/questions/188205", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/105356/" ] }
188,264
Original file: claudio antonio claudio michele I want to replace only the first occurrence of "claudio" with "claudia" so that I get the following: claudia antonio claudio michele I have tried the following: sed -e '1,/claudio/s/claudio/claudia/' nomi The above command performs a global substitution (it replaces all occurrences of 'claudio'). Why?
If you are using GNU sed , try: sed -e '0,/claudio/ s/claudio/claudia/' nomi sed does not start checking for the regex that ends a range until after the line that starts that range. From man sed (POSIX manpage, emphasis mine): An editing command with two addresses shall select the inclusive range from the first pattern space that matches the first address through the next pattern space that matches the second. The 0 address is not standard though, that's a GNU sed extension not supported by any other sed implementation. Using awk Ranges in awk work more as you were expecting: $ awk 'NR==1,/claudio/{sub(/claudio/, "claudia")} 1' nomi claudia antonio claudio michele Explanation: NR==1,/claudio/ This is a range that starts with line 1 and ends with the first occurrence of claudio . sub(/claudio/, "claudia") While we are in the range, this substitute command is executed. 1 This awk's cryptic shorthand for print the line.
{ "source": [ "https://unix.stackexchange.com/questions/188264", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/80389/" ] }
188,285
In my terminal shell, I ssh'ed into a remote server, and I cd to the directory I want. Now in this directory, there is a file called table that I want to copy to my local machine /home/me/Desktop . How can I do this? I tried scp table /home/me/Desktop but it gave an error about no such file or directory. Does anyone know how to do this?
The syntax for scp is as follows. If you are on the computer from which you want to send a file to a remote computer: scp /file/to/send username@remote:/where/to/put Here the remote can be an FQDN or an IP address. On the other hand, if you are on the computer wanting to receive a file from a remote computer: scp username@remote:/file/to/send /where/to/put scp can also send files between two remote hosts: scp username@remote_1:/file/to/send username@remote_2:/where/to/put So the basic syntax is: scp username@source:/location/to/file username@destination:/where/to/put You can read man scp to get more ideas on this.
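Applied to the question itself, the command is run from the local machine (not from inside the ssh session); user , host and the remote path are placeholders:
    scp user@host:/remote/path/to/table /home/me/Desktop/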
{ "source": [ "https://unix.stackexchange.com/questions/188285", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/55077/" ] }
188,536
I have a script that will pipe its output to |tee scriptnameYYYYMMDD.txt . After each cycle of the for loop in which the output is generated, I'll be reversing the file contents with tac scriptnameYYYYMMDD.txt > /var/www/html/logs/scriptname.txt so that the log output is visible in a browser window with the newest lines at the top. I'll have several scripts doing this in parallel. I'm trying to minimize the disk activity, so output from |tee scriptnameYYYYMMDD.txt to a RAMdisk would be best. mktemp creates a file in the /tmp folder, but that doesn't appear to be off-disk.
You can mount a tmpfs partition and write the file there: mount -t tmpfs -o size=500m tmpfs /mountpoint This partition is now limited to 500 MB. If your temporary file grows larger than 500 MB an error will occur: no space left on device . It also doesn't matter if you specify a larger amount of space than your system's RAM has: tmpfs uses swap space too, so you cannot force a system crash, as opposed to ramfs . You can now write your file into /mountpoint : command | tee /mountpoint/scriptnameYYYYMMDD.txt
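If the mount should come back after a reboot, an equivalent line can be added to /etc/fstab (same mount point and size as above), followed by mount /mountpoint to activate it without rebooting:
    tmpfs  /mountpoint  tmpfs  size=500m  0  0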
{ "source": [ "https://unix.stackexchange.com/questions/188536", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/38537/" ] }
188,584
On my PC I have the following routing table: Destination Gateway Genmask Flags MSS Window irtt Iface 0.0.0.0 192.168.1.1 0.0.0.0 UG 0 0 0 wlan0 192.168.1.0 0.0.0.0 255.255.255.0 U 0 0 0 wlan0 I don't understand how it is evaluated: top-down or bottom-up? If it is evaluated top-down then everything would always be sent to the router in my home, even when the destination IP is 192.168.1.15; but what I knew (wrongly?) was that if a PC is inside my own local network, then once I have resolved the destination MAC through a broadcast message, my PC can send the message directly to the destination.
The routing table is used in order of most specific to least specific. However on linux it's a bit more complicated than you might expect. Firstly there is more than one routing table, and when which routing table is used is dependent on a number of rules. To get the full picture: $ ip rule show 0: from all lookup local 32766: from all lookup main 32767: from all lookup default $ ip route show table local broadcast 127.0.0.0 dev lo proto kernel scope link src 127.0.0.1 local 127.0.0.0/8 dev lo proto kernel scope host src 127.0.0.1 local 127.0.0.1 dev lo proto kernel scope host src 127.0.0.1 broadcast 127.255.255.255 dev lo proto kernel scope link src 127.0.0.1 broadcast 192.168.0.0 dev eth0 proto kernel scope link src 192.168.1.27 local 192.168.1.27 dev eth0 proto kernel scope host src 192.168.1.27 broadcast 192.168.1.255 dev eth0 proto kernel scope link src 192.168.1.27 $ ip route show table main default via 192.168.1.254 dev eth0 192.168.0.0/23 dev eth0 proto kernel scope link src 192.168.1.27 $ ip route show table default $ The local table is the special routing table containing high priority control routes for local and broadcast addresses. The main table is the normal routing table containing all non-policy routes. This is also the table you get to see if you simply execute ip route show (or ip ro for short). I recommend not using the old route command anymore, as it only shows the main table and its output format is somewhat archaic. The table default is empty and reserved for post-processing if previous default rules did not select the packet. You can add your own tables and add rules to use those in specific cases. One example is if you have two internet connections, but one host or subnet must always be routed via one particular internet connection. The Policy Routing with Linux book explains all this in exquisite detail.
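As a small illustration of adding your own table and rule, the commands look roughly like this; the table number, subnet and gateway are invented values for a hypothetical second uplink:
    ip route add default via 10.0.0.1 dev eth1 table 100   # default route kept in custom table 100
    ip rule add from 192.168.2.0/24 table 100              # traffic from this subnet consults table 100
    ip route show table 100                                # verify the contents of the new table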
{ "source": [ "https://unix.stackexchange.com/questions/188584", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/66485/" ] }
188,597
The following function is called as the first line in every other function in order to handle optional debugging, context sensitive help, etc. Because of this, calling a function that in turn calls another function can (usually will) result in a circular reference. How can the circular reference be avoided without losing functionality? function fnInit () { ### ### on return from fnInit... ### 0 implies "safe to continue" ### 1 implies "do NOT continue" ### # local _fn= local _msg= # ### handle optional debugging, context sensitive help, etc. # [[ "$INSPECT" ]] && { TIMELAPSE= ...; } ### fnInit --inspect # [[ "$1" == --help ]] && { ... ; return 1; } ### fnInit --help # : # : return 0 }
{ "source": [ "https://unix.stackexchange.com/questions/188597", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/27437/" ] }
188,721
I have a linux machine and a windows machine, the linux machine has a samba share with a .exe file on it. I can read and write files from the windows machine to the samba share, but I cannot execute the .exe file. How can I setup samba to allow me to execute it?
This behavior because of a security policy of the modern Samba. Fix by adding this line to your /etc/samba/smb.conf under [global] section: [global] acl allow execute always = True Source: Samba's Wiki .
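If you would rather not change behaviour for every share, the same parameter is also accepted per share; a sketch with a made-up share name and path, followed by a config reload:

    [progs]
        path = /srv/progs
        read only = no
        acl allow execute always = True

    # then reload the running smbd, e.g.:
    smbcontrol all reload-config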
{ "source": [ "https://unix.stackexchange.com/questions/188721", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/105705/" ] }
188,733
How can I make the following command substitution work? $ time real 0m0.000s user 0m0.000s sys 0m0.000s $ oldtime="$(time)" bash: command substitution: line 23: syntax error near unexpected token `)' bash: command substitution: line 23: `time)"' I guess it doesn't work because the output of the command has multiple lines, because one line output works: $ oldtime="$(echo hello)" $ echo $oldtime hello
{ "source": [ "https://unix.stackexchange.com/questions/188733", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/674/" ] }
188,737
In rsync , --compress or -z will compress file data during the transfer. If I understand correctly, it compresses files before transfer and then decompresses them after transfer. Does the time saved during transfer due to compression outweigh the time spent on compression and decompression? Does the answer to the question depend on whether I back up to an external HDD via USB (2.0 or 3.0), or to a server by SSH over the Internet?
It's a general question. Does compression and decompression at endpoints improve the effective bandwidth of a link? The effective (perceived) bandwith of a link doing compression and decompression at endpoints is a function of: how fast you can compress (your CPU speed) your network's actual bandwidth The function is described with this 3D graph, which you might want to consult for your particular situation: The graph originates with the Compression Tools Compared 2005 article by http://www.linuxjournal.com/ .
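If you want a concrete answer for your own hardware rather than reading it off the graph, timing a representative directory both ways is straightforward; the paths and host below are placeholders:

    time rsync -a  sample/ /mnt/usbdisk/sample-plain/     # local USB target, no compression
    time rsync -az sample/ /mnt/usbdisk/sample-z/         # -z mostly just burns CPU here
    time rsync -a  sample/ server:backup/sample-plain/    # over SSH
    time rsync -az sample/ server:backup/sample-z/        # -z tends to win on slow links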
{ "source": [ "https://unix.stackexchange.com/questions/188737", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/674/" ] }
188,836
Why does Unix allow files with a period at the end of the name? Is there any use for this? For example: filename. I am asking because I have a simple function that echoes the extension of a file. ext() { echo ${1##*.} } But knowing it will print nothing if the filename ends in a . , I wondered whether it would be more reliable to write: ext() { extension=${1##*.} if [ -z "$extension" ]; then echo "$1" else echo "$extension" fi } Clearly this depends on what you are trying to accomplish, but if a . at the end of the file name were not allowed, I wouldn't have wondered anything in the first place.
Unix filenames are just sequences of bytes , and can contain any byte except / and NUL in any position. There is no built-in concept of an "extension" as there is in Windows and its filesystems, and so no reason not to allow filenames to end (or start) with any character that can appear in them generally — a . is no more special than an x . Why does Unix allow files with a period at the end of the name? "A sequence of bytes" is a simple and non-exclusionary definition of a name when there's no motivating reason to count something out, which there wasn't. Making and applying a rule to exclude something specifically is more work. Is there a use for it? If you want to make a file with that name, sure. Is there a use for a filename ending with x ? I can't say I would generally make a filename with a . at the end, but both . and x are explicitly part of the portable filename character set that is required to be universally supported, and neither is special in any way, so if I had a use for it (maybe for a mechanically-generated encoding) then I could, and I could rely on it working. As well, the special filenames . (dot) and .. (dot-dot), which refer to the current and parent directories, are mandated by POSIX, and both end with a . . Any code dealing with filenames in general needs to address those anyway.
{ "source": [ "https://unix.stackexchange.com/questions/188836", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/90327/" ] }
188,930
As far as I know, I can use the tee command to split the standard output onto the screen and further files: command -option1 -option2 argument | tee file1 file2 file3 Is it possible to redirect the output to commands instead of files using tee, so that I could theoretically create a chain of commands?
You could use named pipes ( http://linux.die.net/man/1/mkfifo ) on the command line of tee and have the commands reading on the named pipes. mkfifo /tmp/data0 /tmp/data1 /tmp/data2 cmd0 < /tmp/data0 & cmd1 < /tmp/data1 & cmd2 < /tmp/data2 & command -option1 -option2 argument | tee /tmp/data0 /tmp/data1 /tmp/data2 When command finishes, tee will close the named pipes, which will signal an EOF (read of 0 bytes) on each of the /tmp/dataN which would normally terminate the cmdN processes. Real example: $ mkfifo /tmp/data0 /tmp/data1 /tmp/data2 $ wc -l < /tmp/data0 & wc -w < /tmp/data1 & wc -c < /tmp/data2 & $ tee /tmp/data0 /tmp/data1 /tmp/data2 < /etc/passwd >/dev/null $ 61 1974 37 Because of the background processes, the shell returned a prompt before the program output. All three instances of wc terminated normally.
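In shells with process substitution (bash, zsh, ksh93) the same effect needs no hand-made FIFOs, because tee sees each substitution as a file name; a sketch reusing the hypothetical cmd0/cmd1/cmd2 from above, plus the wc example:

    command -option1 -option2 argument | tee >(cmd0) >(cmd1) | cmd2
    < /etc/passwd tee >(wc -l) >(wc -w) | wc -c    # prints the three counts (order may vary)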
{ "source": [ "https://unix.stackexchange.com/questions/188930", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/102788/" ] }
188,943
I'm new to embedded and am reading 'Embedded Linux Primer' at the moment. I tried to build an xscale arm kernel: make ARCH=arm CROSS_COMPILE=xscale_be- ixp4xx_defconfig # # configuration written to .config followed by the make: ~/linux-stable$ make ARCH=arm CROSS_COMPILE=xscale_be- zImage make: xscale_be-gcc: Command not found CHK include/config/kernel.release CHK include/generated/uapi/linux/version.h CHK include/generated/utsrelease.h make[1]: `include/generated/mach-types.h' is up to date. CC kernel/bounds.s /bin/sh: 1: xscale_be-gcc: not found make[1]: *** [kernel/bounds.s] Error 127 make: *** [prepare0] Error 2 I had downloaded and extracted gcc-arm-none-eabi-4_9-2014q4 from https://launchpad.net/gcc-arm-embedded and set the path PATH=/opt/gcc-arm-none-eabi-4_9-2014q4/bin/ Do I need another compiler for the xscale architecture? Any ideas where I can find xscale_be-gcc?
{ "source": [ "https://unix.stackexchange.com/questions/188943", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/105845/" ] }
189,104
Is there a way to back up and restore file ownership and permissions (the things that can be changed with chown and chmod )? You can do this in Windows using icacls . What about access control lists?
You can do this with the commands from the acl package (which should be available on all mainstream distributions, but might not be part of the base installation). They back up and restore ACL when ACL are present, but they also work for basic permissions even on systems that don't support ACL. To back up permissions in the current directory and its subdirectories recursively: getfacl -R . >permissions.facl To restore permissions: setfacl --restore=permissions.facl
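A common pattern is to take the snapshot just before a risky recursive chmod/chown, diff afterwards, and roll back if needed:

    getfacl -R . > permissions.facl          # snapshot
    # ... experiments ...
    getfacl -R . | diff permissions.facl -   # what changed?
    setfacl --restore=permissions.facl       # put it all back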
{ "source": [ "https://unix.stackexchange.com/questions/189104", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/3570/" ] }
189,684
I've got a [csv] file with duplicate datum reprinted ie the same data printed twice. I've tried using sort's uniq by sort myfile.csv | uniq -u however there is no change in the myfile.csv , also I've tried sudo sort myfile.csv | uniq -u but no difference. So currently my csv file looks like this a a a b b c c c c c I would like to look like it a b c
The reason the myfile.csv is not changing is because the -u option for uniq will only print unique lines. In this file, all lines are duplicates so they will not be printed out. However, more importantly, the output will not be saved in myfile.csv because uniq will just print it out to stdout (by default, your console). You would need to do something like this: $ sort -u myfile.csv -o myfile.csv The options mean: -u - keep only unique lines -o - output to this file instead of stdout You should view man sort for more information.
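If the original line order matters (sort -u reorders the file), an awk one-liner that keeps only the first occurrence of each line is a handy alternative; it needs a temporary file because plain awk cannot edit in place:

    awk '!seen[$0]++' myfile.csv > myfile.tmp && mv myfile.tmp myfile.csv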
{ "source": [ "https://unix.stackexchange.com/questions/189684", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/102813/" ] }
189,687
All, are there any tools like Norton Ghost for Linux to help back up and restore SUSE when necessary? Thanks.
{ "source": [ "https://unix.stackexchange.com/questions/189687", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/53618/" ] }
189,787
What is the difference between echo and echo -e ? And which quotes ("" or '') should be used with the echo command? i.e: echo "Print statement" or echo 'Print statement' ? Also, what are the available options that can be used along with echo ?
echo by itself displays a line of text. It will take any thing within the following "..." two quotation marks, literally, and just print out as it is. However with echo -e you're making echo to enable interpret backslash escapes. So with this in mind here are some examples INPUT: echo "abc\n def \nghi" OUTPUT:abc\n def \nghi INPUT: echo -e "abc\n def \nghi" OUTPUT:abc def ghi Note: \n is new line, ie a carriage return. If you want to know what other sequences are recognized by echo -e type in man echo to your terminal.
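If you care about portability, printf is usually the safer choice: some shells' built-in echo (dash's, for example) interprets backslash escapes even without -e, while printf behaves the same everywhere. For instance:

    printf 'abc\n def \nghi\n'
    printf '%s\n' "abc" " def " "ghi"    # one argument per output line, no escape surprises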
{ "source": [ "https://unix.stackexchange.com/questions/189787", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/99113/" ] }
189,878
I have been using an rsync script to synchronize data at one host with the data at another host. The data has numerous small-sized files that contribute to almost 1.2TB. In order to sync those files, I have been using the rsync command as follows: rsync -avzm --stats --human-readable --include-from proj.lst /data/projects REMOTEHOST:/data/ The contents of proj.lst are as follows: + proj1 + proj1/* + proj1/*/* + proj1/*/*/*.tar + proj1/*/*/*.pdf + proj2 + proj2/* + proj2/*/* + proj2/*/*/*.tar + proj2/*/*/*.pdf ... ... ... - * As a test, I picked up two of those projects (8.5GB of data) and I executed the command above. Being a sequential process, it took 14 minutes 58 seconds to complete. So, for 1.2TB of data it would take several hours. If I could run multiple rsync processes in parallel (using & , xargs or parallel ), it would save a lot of time. I tried the below command with parallel (after cd ing to the source directory) and it took 12 minutes 37 seconds to execute: parallel --will-cite -j 5 rsync -avzm --stats --human-readable {} REMOTEHOST:/data/ ::: . This should have taken 5 times less time, but it didn't. I think I'm going wrong somewhere. How can I run multiple rsync processes in order to reduce the execution time?
Following steps did the job for me: Run the rsync --dry-run first in order to get the list of files those would be affected. $ rsync -avzm --stats --safe-links --ignore-existing --dry-run \ --human-readable /data/projects REMOTE-HOST:/data/ > /tmp/transfer.log I fed the output of cat transfer.log to parallel in order to run 5 rsync s in parallel, as follows: $ cat /tmp/transfer.log | \ parallel --will-cite -j 5 rsync -avzm --relative \ --stats --safe-links --ignore-existing \ --human-readable {} REMOTE-HOST:/data/ > result.log Here, --relative option ( link ) ensured that the directory structure for the affected files, at the source and destination, remains the same (inside /data/ directory), so the command must be run in the source folder (in example, /data/projects ).
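A variant that avoids paying rsync's start-up cost once per file is to split the list into a few chunks and hand each chunk to its own rsync via --files-from (which implies --relative). This is only a sketch: it assumes GNU split, and that transfer.log has been trimmed to one path per line, relative to the directory you run it from, with the destination pointing at the matching directory on the remote side:

    split -n l/5 /tmp/transfer.log /tmp/chunk.
    for f in /tmp/chunk.*; do
        rsync -az --files-from="$f" . REMOTE-HOST:/data/ &
    done
    wait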
{ "source": [ "https://unix.stackexchange.com/questions/189878", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/48188/" ] }
189,905
I know linux has 3 built-in tables and each of them has its own chains as follow: FILTER : PREROUTING, FORWARD, POSTROUTING NAT : PREROUTING, INPUT, OUTPUT, POSTROUTING MANGLE : PREROUTING, INPUT, FORWARD, OUTPUT, POSTROUTING But I can't understand how they are traversed, in which order, if there is. For example, how are they traversed when: I send a packet to a pc in my same local network when I send a packet to a pc in a different network when a gateway receives a packet and it has to forward it when I receive a packet destinated to me any other case (if any)
Wikipedia has a great diagram to show the processing order. For more details you can also look at the iptables documentation, specifically the traversing of tables and chains chapter , which also includes a flow diagram . The order changes depending on how netfilter is being used (as a bridge or network filter and whether it has interaction with the application layer). Generally (though the devil is in the details in the chapter linked above) the chains are processed as: See the INPUT chain as "traffic inbound from outside to this host". See the FORWARD chain as "traffic that uses this host as a router" (source and destination are not this host). See the OUTPUT chain as "traffic that this host wants to send out". PREROUTING / POSTROUTING have different uses for each of the table types (for example, in the nat table, PREROUTING is where DNAT rules for inbound (routed/forwarded) traffic go and POSTROUTING is where SNAT/masquerading rules for outbound (routed/forwarded) traffic go). Look at the docs for more specifics. The various tables are: Mangle is to change packets (Type Of Service, Time To Live etc) on traversal. Nat is to put in NAT rules. Raw is used for marking packets and for connection-tracking exemptions (NOTRACK). Filter is for filtering packets. So for your five scenarios: If the sending host is your host with iptables, OUTPUT The same as above The FORWARD chain (provided the gateway is the host with iptables) If "me" is the host with iptables, INPUT Look at the chain rules above (which is the general rule of thumb) and the flow diagram (and this also varies depending on what you are trying to achieve with iptables)
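To see how your own rules are spread over those tables and chains, list each table explicitly (filter is what you get when -t is omitted):

    iptables -t filter -L -n -v --line-numbers
    iptables -t nat    -S
    iptables -t mangle -S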
{ "source": [ "https://unix.stackexchange.com/questions/189905", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/66485/" ] }
190,289
I pressed something around my mouse pad (keys in the altgr region+mousepad - quite possibly multitouch) and suddenly the whole X11 display zoomed around 10%. That means I can see 90% of the 1920x1080 screen in a somewhat blurry version. When I move the cursor, the 90% follows the cursor, so by panning around I can see everything on the screen. Since it applies to everything my guess is that it is caused by xfwm or Xorg. If I suspend the machine, it seems to go away in the lock screen, but when the lock screen is unlocked, the blurriness and zoom re-appears. Taking a screenshot grabs what is displayed on my screen (i.e. the 90% but scaled to 1920x1080). I can see the usefulness of this in certain situations, but I would really like to exit it (other than rebooting). I use xfce on Linux Mint.
Alt + scrollwheel . So in my case, I had pressed Alt + two fingers on the mouse pad.
{ "source": [ "https://unix.stackexchange.com/questions/190289", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/2972/" ] }
190,337
If I have a long text file and I want to display all the lines in which a given pattern occurs, I do: grep -n form innsmouth.txt | cut -d : -f1 Now I have a sequence of numbers (one number per line) and I would like to make a 2D graphical representation with the occurrence on the x-axis and the line number on the y-axis. How can I achieve this?
You could use gnuplot for this: primes 1 100 |gnuplot -p -e 'plot "/dev/stdin"' produces something like You can configure the appearance of the graph to your heart's delight, output in various image formats, etc.
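Applied to the line numbers from the question, with the occurrence index on the x-axis and the line number on the y-axis:

    grep -n form innsmouth.txt | cut -d : -f1 | \
        gnuplot -p -e 'set xlabel "occurrence"; set ylabel "line number"; plot "/dev/stdin" with impulses'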
{ "source": [ "https://unix.stackexchange.com/questions/190337", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/102788/" ] }
190,344
I have two devices: the first one has 20 partitions and the second has one big partition. I would like to clone a specific partition (content + data) from device one to device two. How can I do this? How can I create on the second device the same partition with the same features as the source partition? For example, I want to duplicate the partition type, filesystem type, flags, etc. of the original partition.
{ "source": [ "https://unix.stackexchange.com/questions/190344", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/106702/" ] }
190,398
From what I understand, the purpose of a swap partition in Linux is to free up some "not as frequently accessed" information from RAM and move it to a specific partition on your harddrive (at the cost of making it slower to read from or write to), essentially allowing active applications more of the "high speed memory". This is great for when you are on a machine with a small amount of RAM and don't want to run into problems if you run out. However, if your system has 16 GB or 32 GB of RAM, and assuming you aren't running a MySQL database for StackExchange or editing a 1080p full length movie in Linux, should a swap partition be used?
Yes. You should most definitely always have swap enabled, except if there is a very compelling, forbidding reason (like, no disk at all, or only network disk present). Should you have a swap on the order of the often recommended ridiculous sizes (such as, twice the amount of RAM)? Well, no . The reason is that swap is not only useful when your applications consume more memory than there is physical RAM (actually, in that case, swap is not very useful at all because it seriously impacts performance). The main incentive for swap nowadays is not to magically turn 16GiB of RAM into 32 GiB, but to make more efficient use of the installed, available RAM. On a modern computer, RAM does not go unused. Unused RAM is something that you could just as well not have bought and saved the money instead. Therefore, anything you load or anything that is otherwise memory-mapped, anything that could possibly be reused by anyone any time later (limited by security constraints) is being cached. Very soon after the machine has booted, all physical RAM will have been used for something . Whenever you ask for a new memory page from the operating system, the memory manager has to make an educated decision: Purge a page from the buffer cache Purge a page from a mapping (effectively the same as #1, on most systems) Move a page that has not been accessed for a long time -- preferably never -- to swap (this could in fact even happen proactively, not necessarily at the very last moment) Kill your process, or kill a random process (OOM) Kernel panic Options #4 and #5 are very undesirable and will only happen if the operating system has absolutely no other choice. Options #1 and #2 mean that you throw something away that you will possibly be needing soon again. This negatively impacts performance. Option #3 means you move something that you (probably) don't need any time soon onto slow storage. That's fine because now something that you do need can use the fast RAM. By removing option #3, you have effectively limited the operating system to doing either #1 or #2. Reloading a page from disk is the same as reloading it from swap, except having to reload from swap is usually less likely (due to making proper paging decisions). In other words, by disabling swap you gain nothing, but you limit the operation system's number of useful options in dealing with a memory request. Which might not be , but very possibly may be a disadvantage (and will never be an advantage). [EDIT] The careful reader of the mmap manpage , specifically the description of MAP_NORESERVE , will notice another good reason why swap is somewhat of a necessity even on a system with "enough" physical memory: "When swap space is not reserved one might get SIGSEGV upon a write if no physical memory is available." -- Wait a moment, what does that mean? If you map a file, you can access the file's contents directly as if the file was somehow, by magic, in your program's address space. For read-only access, the operating system needs in principle no more than a single page of physical memory which it can repopulate with different data every time you access a different virtual page (for efficiency reasons, that's of course not what is done, but in principle you could access terabytes worth of data with a single page of physical memory). Now what if you also write to a file mapping? In this case, the operating system must have a physical page -- or swap space -- ready for every page written to. 
There's no other way to keep the data around until the dirty pages writeback process has done its work (which can be several seconds). For this reason, the OS reserves (but doesn't necessarily ever commit) swap space, so in case you are writing to a mapping while there happens to be no physical page unused (that's a quite possible, and normal condition), you're guaranteed that it will still work. Now what if there is no swap? It means that no swap can be reserved (duh!), and this means that as soon as there are no free physical pages left, and you're writing to a page, you are getting a pleasant surprise in the form of your process receiving a segmentation fault, and probably being killed. [/EDIT] However, the traditional recommendation of making swap twice the size of RAM is nonsensical. Although disk space is cheap, it does not make sense to assign that much swap. Wasting something that is cheap is still wasteful, and you absolutely don't want to be continually swapping in and out working sets several hundreds of megabytes (or larger) in size. There is no single "correct" swap size (there are as many "correct" sizes as there are users and opinions). I usually assign a fixed 512MiB, regardless of RAM size, which works very well for me. The reasoning behind that is that 512MiB is something that you can always afford nowadays, even on a small disk. On the other hand, adding several gigabytes of swap is none better. You are not going to use them, except if something is going seriously wrong. Even on a SSD, swap is orders of magnitude slower than RAM (due to bus bandwidth and latency), and while it is very acceptable to move something to swap that probably won't be needed again (i.e. you most likely won't be swapping it in again, so your pool of available pages is effectively enlarged for free), if you really need considerable amounts of swap (that is, you have an application that uses e.g. a 50GiB dataset), you're pretty much lost. Once your computer starts swapping in and out gigabytes worth of pages, everything goes to a crawl. So, for most people (including me) this is not an option, and having that much swap therefore makes no sense.
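If you want to follow the 512MiB suggestion on a box that currently has no swap at all, a swap file is the least invasive way to add some (run as root; add a matching line to /etc/fstab to make it survive reboots):

    dd if=/dev/zero of=/swapfile bs=1M count=512
    chmod 600 /swapfile
    mkswap /swapfile
    swapon /swapfile
    swapon -s          # or free -h, to confirm it is in use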
{ "source": [ "https://unix.stackexchange.com/questions/190398", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/5769/" ] }
190,431
I've tried to figure this out myself, but the myriad of options just baffles me. I want to use ideally either ffmpeg or mencoder (or something else, but those two I know I have working) to convert any incoming video to a fixed screen size. If the video is wider or too short for it, then centre crop the video. If it's then not the right size, the resize up or down to make it exactly the fixed screen size. The exact final thing I need is 720x480 in a XVid AVI with an MP3 audio track. I've found lots of pages showing how to resize to a maximum resolution, but I need the video to be exactly that resolution (with extra parts cropped off, no black bars). Can anyone tell me the command line to run - or at least get me some/most of the way there? If it needs to be multiple command lines (run X to get the resolution, do this calculation and then run Y with the output of that calculation) I can script that.
I'm no ffmpeg guru, but this should do the trick. First of all, you can get the size of input video like this: ffprobe -v error -of flat=s=_ -select_streams v:0 -show_entries stream=height,width in.mp4 With a reasonably recent ffmpeg, you can resize your video with these options: ffmpeg -i in.mp4 -vf scale=720:480 out.mp4 You can set the width or height to -1 in order to let ffmpeg resize the video keeping the aspect ratio. Actually, -2 is a better choice since the computed value should even. So you could type: ffmpeg -i in.mp4 -vf scale=720:-2 out.mp4 Once you get the video, it may be bigger than the expected 720x480 since you let ffmpeg compute the height, so you'll have to crop it. This can be done like this: ffmpeg -i in.mp4 -filter:v "crop=in_w:480" out.mp4 Finally, you could write a script like this (can easily be optimized, but I kept it simple for legibility): #!/bin/bash FILE="/tmp/test.mp4" TMP="/tmp/tmp.mp4" OUT="/tmp/out.mp4" OUT_WIDTH=720 OUT_HEIGHT=480 # Get the size of input video: eval $(ffprobe -v error -of flat=s=_ -select_streams v:0 -show_entries stream=height,width ${FILE}) IN_WIDTH=${streams_stream_0_width} IN_HEIGHT=${streams_stream_0_height} # Get the difference between actual and desired size W_DIFF=$[ ${OUT_WIDTH} - ${IN_WIDTH} ] H_DIFF=$[ ${OUT_HEIGHT} - ${IN_HEIGHT} ] # Let's take the shorter side, so the video will be at least as big # as the desired size: CROP_SIDE="n" if [ ${W_DIFF} -lt ${H_DIFF} ] ; then SCALE="-2:${OUT_HEIGHT}" CROP_SIDE="w" else SCALE="${OUT_WIDTH}:-2" CROP_SIDE="h" fi # Then perform a first resizing ffmpeg -i ${FILE} -vf scale=${SCALE} ${TMP} # Now get the temporary video size eval $(ffprobe -v error -of flat=s=_ -select_streams v:0 -show_entries stream=height,width ${TMP}) IN_WIDTH=${streams_stream_0_width} IN_HEIGHT=${streams_stream_0_height} # Calculate how much we should crop if [ "z${CROP_SIDE}" = "zh" ] ; then DIFF=$[ ${IN_HEIGHT} - ${OUT_HEIGHT} ] CROP="in_w:in_h-${DIFF}" elif [ "z${CROP_SIDE}" = "zw" ] ; then DIFF=$[ ${IN_WIDTH} - ${OUT_WIDTH} ] CROP="in_w-${DIFF}:in_h" fi # Then crop... ffmpeg -i ${TMP} -filter:v "crop=${CROP}" ${OUT}
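On reasonably recent ffmpeg builds the scale filter can do the whole cover-and-crop job in one pass via force_original_aspect_ratio, which replaces the script above; the encoders assume a build with libxvid and libmp3lame, matching the XviD/MP3 AVI target from the question:

    ffmpeg -i in.mp4 \
        -vf "scale=720:480:force_original_aspect_ratio=increase,crop=720:480" \
        -c:v libxvid -c:a libmp3lame out.avi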
{ "source": [ "https://unix.stackexchange.com/questions/190431", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/106755/" ] }
190,490
I have a linux fedora21 client laptop behind a corporate firewall (which lets through http and https ports but not ssh 22) and I have a linux fedora21 server at home behind my own router. Browsing with https works when I specify my home server's public IP address (because I configured my home router) Is it possible to ssh (remote shell) to my home server over the http/s port? I saw a tool called corkscrew . would that help? opensshd and httpd run on the home server. What else would need configuration?
What is possible depends on what the firewall allows. If the firewall allows arbitrary traffic on port 443 Some firewalls take the simple way out and allow anything on port 443. If that's the case, the easiest way to reach your home server is to make it listen to SSH connections on port 443. If your machine is directly connected to the Internet, simply add Port 443 to /etc/ssh/sshd_config or /etc/sshd_config just below the line that says Port 22 . If your machine is behind a router/firewall that redirects incoming connections, make it redirect incoming connections to port 443 to your server's port 22 with something like iptables -t nat -I PREROUTING -p tcp -i wan0 --dport 443 -j DNAT --to-destination 10.1.2.3:22 where wan0 is the WAN interface on your router and 10.1.2.3 is your server's IP address on your home network. If you want to allow your home server to listen both to HTTPS connections and SSH connections on port 443, it's possible — SSH and HTTPS traffic can easily be distinguished (in SSH, the server talks first, whereas in HTTP and HTTPS, the client talks first). See http://blog.stalkr.net/2012/02/sshhttps-multiplexing-with-sshttp.html and http://wrouesnel.github.io/articles/Setting%20up%20sshttp/ for tutorials on how to set this up with sshttp , and also Have SSH on port 80 or 443 while webserver (nginx) is running on these ports If you have a web proxy that allows CONNECT tunnelling Some firewalls block all outgoing connections, but allow browsing the web via a proxy that allows the HTTP CONNECT method to effectively pierce a hole in the firewall. The CONNECT method may be restricted to certain ports, so you may need to combine this with listening on port 443 as above. To make SSH go via the proxy, you can use a tool like corkscrew . In your ~/.ssh/config , add a ProxyCommand line like the one below, if your web proxy is http://web-proxy.work.example.com:3128 : Host home HostName mmm.dyndns.example.net ProxyCommand corkscrew web-proxy.work.example.com 3128 %h %p then you can connect by just running ssh home . Wrapping SSH in HTTP(S) Some firewalls don't allow SSH traffic, even on port 443. To cope with these, you need to disguise or tunnel SSH into something that the firewall lets through. See http://dag.wiee.rs/howto/ssh-http-tunneling/ for a tutorial on doing this with proxytunnel .
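If it turns out you are in the first case (raw traffic on 443 is allowed), the client side needs nothing more than a Port entry; the host name is the same placeholder as above, and ssh -v shows where the connection gets stuck if it still fails:

    Host home443
        HostName mmm.dyndns.example.net
        Port 443

    ssh -v home443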
{ "source": [ "https://unix.stackexchange.com/questions/190490", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/106795/" ] }
190,492
I've been trying to set up VSFTPD on Centos 6.6 to allow virtual users. Below is my vsftpd.conf , which is configured to allow only virtual users in /etc/vsftpd/vsftpd-virtual-user.db . listen=YES local_umask=002 anonymous_enable=NO local_enable=YES virtual_use_local_privs=YES write_enable=YES pam_service_name=vsftpd_virtual guest_enable=YES local_root=/var/sites chroot_local_user=YES hide_ids=YES connect_from_port_20=YES pasv_enable=YES pasv_addr_resolve=YES pasv_address=10.175.9.23 pasv_min_port=1024 pasv_max_port=65535 I have also set up the vsftpd_virtual module in /etc/pam.d/vsftpd_virtual which contains the following: #%PAM-1.0 auth required pam_userdb.so db=/etc/vsftpd/vsftpd-virtual-user account required pam_userdb.so db=/etc/vsftpd/vsftpd-virtual-user session required pam_loginuid.so When trying to log in to FTP on localhost, I'm getting a 530 error from FTP and the following line in /var/log/secure : vsftpd: pam_userdb(vsftpd_virtual:auth): user_lookup: could not open database `/etc/vsftpd/vsftpd-virtual-user': Permission denied The file permissions for the database file seem fine, but I may be wrong: Access: (0777/-rwxrwxrwx) Uid: ( 0/ root) Gid: ( 0/ root)
{ "source": [ "https://unix.stackexchange.com/questions/190492", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/106802/" ] }
190,495
How can I launch a bash command with multiple args (for example " sudo apt update ") from a python script?
@milne's answer works, but subprocess.call() gives you little feedback. I prefer to use subprocess.check_output() so you can analyse what was printed to stdout: import subprocess res = subprocess.check_output(["sudo", "apt", "update"]) for line in res.splitlines(): # process the output line by line check_output throws an error on non-zero exit of the invoked command Please note that this doesn't invoke bash or another shell if you don't specify the shell keyword argument to the function (the same is true for subprocess.call() , and you shouldn't if not necessary as it imposes a security hazard), it directly invokes the command. If you find yourself doing a lot of (different) command invocations from Python, you might want to look at plumbum . With that you can do the (IMO) more readable: from plumbum.cmd import sudo, apt, echo, cut res = sudo[apt["update"]]() chain = echo["hello"] | cut["-c", "2-"] chain()
{ "source": [ "https://unix.stackexchange.com/questions/190495", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/70379/" ] }
190,907
I've just cat /var/log/auth.log log and see, that there are many | grep "Failed password for" records. However, there are two possible record types - for valid / invalid user. It complicates my attempts to | cut them. I would like to see create a list (text file) with IP addresses of possible attackers and number of attempts for each IP address. Is there any easy way to create it? Also, regarding only ssh : What all records of /var/log/auth.log should I consider when making list of possible attackers? Example of my 'auth.log' with hidden numbers: cat /var/log/auth.log | grep "Failed password for" | sed 's/[0-9]/1/g' | sort -u | tail Result: Mar 11 11:11:11 vm11111 sshd[111]: Failed password for invalid user ucpss from 111.11.111.111 port 11111 ssh1 Mar 11 11:11:11 vm11111 sshd[111]: Failed password for invalid user vijay from 111.111.11.111 port 11111 ssh1 Mar 11 11:11:11 vm11111 sshd[111]: Failed password for invalid user webalizer from 111.111.11.111 port 11111 ssh1 Mar 11 11:11:11 vm11111 sshd[111]: Failed password for invalid user xapolicymgr from 111.111.11.111 port 11111 ssh1 Mar 11 11:11:11 vm11111 sshd[111]: Failed password for invalid user yarn from 111.111.11.111 port 11111 ssh1 Mar 11 11:11:11 vm11111 sshd[111]: Failed password for invalid user zookeeper from 111.111.11.111 port 11111 ssh1 Mar 11 11:11:11 vm11111 sshd[111]: Failed password for invalid user zt from 111.11.111.111 port 11111 ssh1 Mar 11 11:11:11 vm11111 sshd[111]: Failed password for mysql from 111.111.11.111 port 11111 ssh1 Mar 11 11:11:11 vm11111 sshd[111]: Failed password for root from 111.11.111.111 port 11111 ssh1 Mar 11 11:11:11 vm11111 sshd[111]: Failed password for root from 111.111.111.1 port 11111 ssh1
You could use something like this: grep "Failed password for" /var/log/auth.log | grep -Po "[0-9]+\.[0-9]+\.[0-9]+\.[0-9]+" \ | sort | uniq -c It greps for the string Failed password for and extracts ( -o ) the ip address. It is sorted, and uniq counts the number of occurences. The output would then look like this (with your example as input file): 1 111.111.111.1 3 111.11.111.111 6 111.111.11.111 The last one in the output has tried 6 times.
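To get the report sorted by number of attempts and written to a file (the file name is arbitrary):

    grep "Failed password for" /var/log/auth.log \
        | grep -Po "[0-9]+\.[0-9]+\.[0-9]+\.[0-9]+" \
        | sort | uniq -c | sort -rn > /root/ssh-attackers.txt

If the goal is to act on the list rather than just read it, fail2ban automates exactly this kind of counting and blocking.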
{ "source": [ "https://unix.stackexchange.com/questions/190907", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/13428/" ] }
191,122
I am having a variable which shows on echo like this $ echo $var 129 148 I have to take only 129 as output. How will I split 129 and 148?
In addition to jasonwryan's suggestion , you can use cut : echo $var | cut -d' ' -f1 The above cut s the echo output with a space delimiter ( -d' ' ) and outputs the first field ( -f1 )
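If you want to avoid the extra cut process, the shell can do the split on its own; bash syntax shown, with a here-string in the second line:

    echo "${var%% *}"                    # strip everything from the first space on
    read -r first rest <<< "$var"; echo "$first"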
{ "source": [ "https://unix.stackexchange.com/questions/191122", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/106940/" ] }
191,138
I have a folder containing approximately 320116 .pdb.gz files. I want to uncompress them all. If I use gunzip *.gz it gives me an error i.e. argument list too long. The folder is about 2GB. Please give me an appropriate suggestion.
find . -name '*.pdb.gz' -exec gunzip {} + -exec gunzip {} + will provide gunzip with many but not too many file names on its command line. This is more efficient than -exec gunzip {} \; which starts a new gunzip process for each and every file.
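If the machine has several cores and GNU xargs is available, the same idea parallelises nicely; -P 4 runs four gunzip processes at a time (adjust to taste):

    find . -name '*.pdb.gz' -print0 | xargs -0 -P 4 gunzip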
{ "source": [ "https://unix.stackexchange.com/questions/191138", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/107208/" ] }
191,205
In Bash, how does one do base conversion from decimal to another base, especially hex. It seems easy to go the other way: $ echo $((16#55)) 85 With a web-search, I found a script that does the maths and character manipulation to do the conversion, and I could use that as a function, but I'd have thought that bash would already have a built-in base conversion -- does it?
With bash (or any shell, provided the printf command is available (a standard POSIX command often built in the shells)): printf '%x\n' 85 ​​​​​​​​​​​​​​​​​ With zsh , you can also do: dec=85 hex=$(([##16]dec)) That works for bases from 2 to 36 (with 0-9a-z case insensitive as the digits). $(([#16]dev)) (with only one # ) expands to 16#55 or 0x55 (as a special case for base 16) if the cbases option is enabled (also applies to base 8 ( 0125 instead of 8#125 ) if the octalzeroes option is also enabled). With ksh93 , you can use: dec=85 base54=${ printf %..54 "$dec"; } Which works for bases from 2 to 64 (with 0-9a-zA-Z@_ as the digits). With ksh and zsh , there's also: $ typeset -i34 x=123; echo "$x" 34#3l Though that's limited to bases up to 36 in ksh88, zsh and pdksh and 64 in ksh93. Note that all those are limited to the size of the long integers on your system ( int 's with some shells). For anything bigger, you can use bc or dc . $ echo 'obase=16; 9999999999999999999999' | bc 21E19E0C9BAB23FFFFF $ echo '16o 9999999999999999999999 p' | dc 21E19E0C9BAB23FFFFF With supported bases ranging from 2 to some number required by POSIX to be at least as high as 99. For bases greater than 16, digits greater than 9 are represented as space-separated 0-padded decimal numbers. $ echo 'obase=30; 123456' | bc 04 17 05 06 Or same with dc ( bc used to be (and still is on some systems) a wrapper around dc ): $ echo 30o123456p | dc 04 17 05 06
{ "source": [ "https://unix.stackexchange.com/questions/191205", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/88196/" ] }
191,206
I wonder whether I can compile application on one Linux distribution and use it on another Linux distribution (same CPU architecture). If not what problems I can run into? Only problems which came to my mind are are concerning dynamically linked libraries: Lack of some library or version of library e.g. lack of /usr/lib/qt5.so Can compiler flags be an issue here? Are there some other possible difficulties?
{ "source": [ "https://unix.stackexchange.com/questions/191206", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/27960/" ] }
191,254
Why do many commands provide the option -q or --quiet to suppress output when you can easily achieve the same thing by redirecting standard output to the null file?
While you can easily redirect in a shell, there are other contexts where it's not as easy, like when executing the command in another language without using a shell command-line. Even in a shell: find . -type f -exec grep -q foo {} \; -printf '%s\n' to print the size of all the files that contain foo . If you redirect to /dev/null , you lose both find and grep output. You'd need to resort to -exec sh -c 'exec grep foo "$1" > /dev/null' sh {} \; (that is, spawn an extra shell). grep -q foo is shorter to type than grep foo > /dev/null Redirecting to /dev/null means the output is still written and then discarded, that's less efficient than not writing it (and not allocate, prepare that output to be written) that allows further optimisations. In the case of grep for instance, since with -q , grep knows the output is not required, it exits as soon as it finds the first match. With grep > /dev/null , it would still try to find all the matches. quiet doesn't necessarily mean silent . For some commands, it means reduce verbosity (the opposite of -v|--verbose ). For instance, mplayer has a --quiet and a --really-quiet . With some commands, you can use -qqq to decrease verbosity 3 times.
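Point 4 is easy to see for yourself with a throwaway file; the first grep exits after the very first match, the second reads the whole file:

    seq 10000000 > big.txt
    time grep -q 1 big.txt
    time grep 1 big.txt > /dev/null
    rm big.txt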
{ "source": [ "https://unix.stackexchange.com/questions/191254", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/36718/" ] }
191,662
I am trying to set up a staging environment in a VM, in order to test updates before applying them to my main system. In order to do so, I have done a basic installation of Debian Wheezy (same as on the main system) in the VM, then ran as root from within the VM: # dpkg --clear-selections # dpkg --add-architecture i386 # apt-get update # ssh me@main-system 'dpkg --get-selections | grep -v deinstall' | \ dpkg --set-selections The i386 architecture is unfortunately needed in my case; the system is amd64 native. The problem is with dpkg --set-selections run in the VM. I do have some packages that require special handling (those are actually the main reason why I want a staging environment in the first place) but when I run the last command above, I get about a gazillion lines of output like: dpkg: warning: package not in database at line NNN: package-name for packages that really should be available in the base system. Examples include xterm , yelp and zip . Now for my question: What is the specific process for transferring the package selection list from one Debian system to another (assuming same Debian release level, in Wheezy) and then subsequently applying those changes? The goal is that both have the same list of installed packages, ideally such that doing a diff between the outputs of dpkg --get-selections or dpkg --list on the two comes back showing no differences. The grep -v deinstall part is borrowed from Prevent packages from being removed after doing dpkg --set-selections over on Ask Ubuntu. I have changed the source in the VM to be the same as on the main system, also installing apt-transport-https : deb https://ftp-stud.hs-esslingen.de/debian/ wheezy main non-free deb-src https://ftp-stud.hs-esslingen.de/debian/ wheezy main non-free deb https://ftp-stud.hs-esslingen.de/debian/ wheezy-updates main non-free deb-src https://ftp-stud.hs-esslingen.de/debian/ wheezy-updates main non-free deb [arch=amd64] http://archive.zfsonlinux.org/debian wheezy main Looking at the --set-selections output, I'm seeing: dpkg: warning: package not in database at line 1: a2ps dpkg: warning: package not in database at line 1: abiword dpkg: warning: package not in database at line 1: abiword-common dpkg: warning: package not in database at line 1: abiword-plugin-grammar dpkg: warning: package not in database at line 1: abiword-plugin-mathview dpkg: warning: package not in database at line 1: accountsservice dpkg: warning: package not in database at line 1: acl dpkg: warning: package not in database at line 4: aglfn dpkg: warning: package not in database at line 4: aisleriot dpkg: warning: package not in database at line 4: alacarte dpkg: warning: package not in database at line 4: alien ... The line numbers looked odd, and the corresponding portion of the output of --get-selections is: a2ps install abiword install abiword-common install abiword-plugin-grammar install abiword-plugin-mathview install accountsservice install acl install acpi-support-base install acpid install adduser install aglfn install aisleriot install alacarte install alien install Notice that in between acl and aglfn are acpi-support-base , acpid and adduser for which no errors are being reported . It seems that the packages for which errors are being reported are either un according to dpkg -l , or dpkg -l doesn't have any idea at all about them ( dpkg-query: no packages found matching ... ). I know there are some locally installed packages, but not many. 
i386 doesn't figure until gcc-4.7-base:i386 install much farther down the list (line 342 in the --get-selections output).
To clone a Debian installation, use the apt-clone utility. It's available (as a separate package, not part of the default installation) in Debian since wheezy and in Ubuntu since 12.04. On the existing machine, run apt-clone clone foo This creates a file foo.apt-clone.tar.gz . Copy it to the destination machine, and run apt-get install apt-clone apt-clone restore foo.apt-clone.tar.gz If you're working with an old system where apt-clone isn't available, or if you just want to replicate the list of installed packages but not any configuration file, here are the manual steps. On the source machine: cat /etc/apt/sources.list /etc/apt/sources.list.d >sources.list dpkg --get-selections >selections.list apt-mark showauto >auto.list On the target machine: cp sources.list /etc/apt/ apt-get update /usr/lib/dpkg/methods/apt/update /var/lib/dpkg/ dpkg --set-selections <selections.list apt-get dselect-upgrade xargs apt-mark auto <auto.list I believe that you're affected by an incompatible change in dpkg that first made it into wheezy. See bug #703092 for background. The short story is that dpkg --set-selections now only accepts package names that are present in the file /var/lib/dpkg/status or /var/lib/dpkg/available . If you only use APT to manage packages, like most people, then /var/lib/dpkg/available is not kept up-to-date. After running apt-get update and before running dpkg --set-selections and apt-get -u dselect-upgrade , run the following command: apt-cache dumpavail >/tmp/apt.avail dpkg --merge-avail /tmp/apt.avail From jessie onwards, you can simplify this to apt-cache dumpavail | dpkg --merge-avail Alternatively, run /usr/lib/dpkg/methods/apt/update /var/lib/dpkg/ or even simpler apt-get install dctrl-tools sync-available Another simple method that doesn't require installing an additional package but will download the package lists again is dselect update See the dpkg FAQ for more information. (This is mentioned in the dpkg man page, but more in a way that would remind you of the issue if you were already aware, not in a way that explains how to solve the problem!) Note that cloning a package installation with dpkg --set-selections doesn't restore the automatic/manual mark in APT. See Restoring all data and dependencies from dpkg --set-selections '*' for more details. You can save the marks on the source system with apt-mark showauto >auto.list and restore them on the target system with xargs apt-mark auto <auto.list
{ "source": [ "https://unix.stackexchange.com/questions/191662", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/2465/" ] }
191,694
I would like to create a file by using the echo command and the redirection operator, the file should be made of a few lines. I tried to include a newline by "\n" inside the string: echo "first line\nsecond line\nthirdline\n" > foo but this way no file with three lines is created but a file with only one line and the verbatim content of the string. How can I create using only this command a file with several lines ?
You asked for using some syntax with the echo command: echo $'first line\nsecond line\nthirdline' > foo (But consider also the other answer you got.) The $'...' construct expands embedded ANSI escape sequences.
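printf is the usual portable alternative; either of these writes the same three-line file:

    printf '%s\n' "first line" "second line" "thirdline" > foo
    printf 'first line\nsecond line\nthirdline\n' > foo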
{ "source": [ "https://unix.stackexchange.com/questions/191694", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/102788/" ] }
191,719
I have a file open in Vim inside a Linux virtual machine guest and I then try to open the file on the Windows host, and I do not get that warning that goes "Swap file blah.swp already exists!" (The file is shared to the guest.) I want that warning because that is the only way I can find out I am already editing the file somewhere else, like in this case, in the VM! It doesn't matter whether I edit the file on Windows first and then use Vim on Linux in the VM, or I edit the file in the Linux VM and then open the file in Vim on Windows: it's the same result, no warning. You could say the behavior is uniform then from Linux to Windows. In both cases Vim creates a .swo file silently, without complaining as it (I believe) should. However, if the file is opened a second time on the VM while being already open on the VM, I do get the warning, and same thing on Windows (for those who want to ask about my Vim settings). Reading :help recovery does not give anything informative. Version is Vim 7.4 in both cases.
{ "source": [ "https://unix.stackexchange.com/questions/191719", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/107551/" ] }
191,924
I have 4 files which are like file A >TCONS_00000867 >TCONS_00001442 >TCONS_00001447 >TCONS_00001528 >TCONS_00001529 >TCONS_00001668 >TCONS_00001921 file b >TCONS_00001528 >TCONS_00001529 >TCONS_00001668 >TCONS_00001921 >TCONS_00001922 >TCONS_00001924 file c >TCONS_00001529 >TCONS_00001668 >TCONS_00001921 >TCONS_00001922 >TCONS_00001924 >TCONS_00001956 >TCONS_00002048 file d >TCONS_00001922 >TCONS_00001924 >TCONS_00001956 >TCONS_00002048 All files contain more than 2000 lines and are sorted by first column. I want to find common lines in all files. I tried awk and grep and comm but not working.
Since the files are already sorted: comm -12 a b | comm -12 - c | comm -12 - d comm finds comm on lines between files. By default comm prints 3 TAB-separated columns: The lines unique to the first file, The lines unique to the second file, The lines common to both files. With the -1 , -2 , -3 options, we suppress the corresponding column. So comm -12 a b reports the lines common to a and b . - can be used in place of a file name to mean stdin.
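If the files were not sorted (or you would rather not depend on that), a grep chain gives the same intersection: each step keeps only the whole lines of the next file that appeared in the result so far:

    grep -Fxf a b | grep -Fxf - c | grep -Fxf - d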
{ "source": [ "https://unix.stackexchange.com/questions/191924", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/106326/" ] }
191,977
I am using a minimal Debian system which does not have the top program installed. I tried to install top with sudo apt-get install top , but top is not a package name. It seems that top is a part of some other package. How can I find out which package I should install to get it? More generally, how can I find the package that contains a program?
The direct answer is procps . Here is how you can find this out for yourself: # Install apt-file, which allows you to search # for the package containing a file sudo apt-get install apt-file # Update the package/file mapping database sudo apt-file update # Search for "top" at the end of a path apt-file search --regexp '/top$' The output of the final command should look something like this: crossfire-maps: /usr/share/games/crossfire/maps/santo_dominion/magara/well/top crossfire-maps-small: /usr/share/games/crossfire/maps/santo_dominion/magara/well/top liece: /usr/share/emacs/site-lisp/liece/styles/top lxpanel: /usr/share/lxpanel/profile/two_panels/panels/top procps: /usr/bin/top quilt: /usr/share/quilt/top You can see that only procps provides an executable in your standard PATH, which gives a clue that it might be the right one. You can also find out more about procps to make sure like it seems like the right one: $ apt-cache show procps Package: procps Version: 1:3.3.3-3 [...] Description-en: /proc file system utilities This package provides command line and full screen utilities for browsing procfs, a "pseudo" file system dynamically generated by the kernel to provide information about the status of entries in its process table (such as whether the process is running, stopped, or a "zombie"). . It contains free, kill, pkill, pgrep, pmap, ps, pwdx, skill, slabtop, snice, sysctl, tload, top, uptime, vmstat, w, and watch.
{ "source": [ "https://unix.stackexchange.com/questions/191977", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/31995/" ] }
192,008
Is there a tool that debugs routing tables on a Linux machine? I mean one that I can use by inputting an ip address into it, it'll take the existing routing table into account and output the matches from the table, so I can get an idea where the packets will go?
Use ip route get . From Configuring Network Routing : The ip route get command is a useful feature that allows you to query the route on which the system will send packets to reach a specified IP address, for example: # ip route get 23.6.118.140 23.6.118.140 via 10.0.2.2 dev eth0 src 10.0.2.15 cache mtu 1500 advmss 1460 hoplimit 64 In this example, packets to 23.6.118.140 are sent out of the eth0 interface via the gateway 10.0.2.2.
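A few more variations that are useful when policy routing or multiple interfaces are involved (addresses and interface name are placeholders):

    ip route get 8.8.8.8
    ip route get 8.8.8.8 from 192.168.1.50 iif eth1   # simulate a forwarded packet
    ip -6 route get 2001:db8::1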
{ "source": [ "https://unix.stackexchange.com/questions/192008", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/3570/" ] }
192,012
Why does sed -i executed on a symlink destroy that link and replace it with the destination file? How can I avoid this? e.g. $ ls -l pet* -rw-rw-r-- 1 madneon madneon 4 mar 23 16:46 pet lrwxrwxrwx 1 madneon madneon 6 mar 23 16:48 pet_link -> pet $ sed -i 's/cat/dog/' pet_link $ ls -l pet* -rw-rw-r-- 1 madneon madneon 4 mar 23 16:48 pet -rw-rw-r-- 1 madneon madneon 4 mar 23 16:49 pet_link And why isn't it considered a bug?
The -i / --in-place flag edits a file in place. By default, sed reads the given file, processes it outputting into a temporary file, then copies the temporary file over the original, without checking whether the original was a symlink. GNU sed has a --follow-symlinks flag, which makes it behave as you want: $ echo "cat" > pet $ ln --symbolic pet pet_link $ sed --in-place --follow-symlinks 's/cat/dog/' pet_link $ cat pet dog
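If your sed lacks --follow-symlinks , a workaround (a sketch assuming GNU coreutils' readlink is available) is to resolve the link yourself and edit the target directly: sed -i 's/cat/dog/' "$(readlink -f pet_link)"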
{ "source": [ "https://unix.stackexchange.com/questions/192012", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/107312/" ] }
192,042
I am currently looking for ways to suppress the error messages of a command in Linux, in particular of the command cp . I do: root@ubuntu:~$ cp /srv/ftp/201*/wha*/*.jj ~/. cp: cannot stat `/srv/ftp/201*/wha*/*.jj': No such file or directory How do I suppress the error message that gets printed on the screen? I.e., I don't want to see this error message on my monitor.
To suppress error output in bash , append 2>/dev/null to the end of your command. This redirects filehandle 2 (STDERR) to /dev/null . There are similar constructs in other shells, though the specific construct may vary slightly.
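For example, with the cp command from the question: cp /srv/ftp/201*/wha*/*.jj ~/. 2>/dev/null If you want to silence all output, both STDOUT and STDERR, use >/dev/null 2>&1 instead.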
{ "source": [ "https://unix.stackexchange.com/questions/192042", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/107013/" ] }
192,066
I am trying to debug DHCP on my laptop (I am using dhcping and dhcdump to see what the DHCP server sends back). Following is my /etc/dhcp/dhclient.conf . option rfc3442-classless-static-routes code 121 = array of unsigned integer 8; send host-name = gethostname(); request subnet-mask, broadcast-address, time-offset, routers, domain-name-servers, interface-mtu, rfc3442-classless-static-routes; I think, I have an idea what all these options mean, except for rfc3442-classless-static-routes . Also, I don't see anything pertaining to rfc3442-classless-static-routes in the DHCP replies. What is the meaning of rfc3442-classless-static-routes and in what situation would I make use of it? (the documentation makes no sense whatsoever)
The original DHCP specification (RFC 2131 and 2132 ) defines an option (33) that allows the administrator of the DHCP service to issue static routes to the client if needed. Unfortunately, that original design is flawed these days as it assumes classful network addresses , which are rarely used. The rfc3442-classless-static-routes option allows you to use classless network addresses (or CIDR) instead. CIDR requires a subnet mask to be explicitly stated, but the original DHCP option 33 doesn't have space for this. Therefore, this option (as defined in RFC 3442) simply enables a newer replacement DHCP option (option 121) which defines static routes using CIDR notation. Basically, if you need to issue static routes to your devices using DHCP and these static routes use CIDR then you need to enable this option. Static routes can be used if you have split a network into multiple smaller networks and need to inform each router about how traffic gets from one to another without using one of the many dynamic routing protocols available. You basically set up each router with a statement to the effect of "to get to network a.b.c.d, send traffic through f.g.h.i" . If the routes you set up in the router are classful, then you do not need to enable this option. However, if the routes are CIDR then you will need to enable this option. Fortunately, many home/cafe networks use the 192.168.0.0 network with a subnet of 255.255.255.0 (or /24 ), which is a true Class-C network, therefore you can avoid this option. On the other hand, some home/cafe networks run on the 10.0.0.0 network. This is a Class-A network by default. If you are breaking this into many 10.0.x.0 sub-nets for example, then these will all be CIDR networks which means you will need to enable this option. The above is only true if you also need to issue this routing information to your hosts via DHCP. Whether you need to issue this static routing information to your hosts is defined by the design of your network. I'd hazard a guess that a basic home/cafe network doesn't need it as static routes are usually defined at the routers. The configuration you have above simply defines a new option (there are many predefined options that dhclient already understands) as option 121 which consists of an array of 8-bit unsigned integers. It then configures the client to request this option if it is set on the DHCP server. If the DHCP server returns a value for this option a dhclient exit hook script ( /etc/dhclient/dhclient-exit-hooks.d/rfc3442-classless-routes ) reads the value and configures the routing table accordingly.
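For illustration only, a hypothetical server-side counterpart in ISC dhcpd syntax (the route itself is made up): to push 10.0.1.0/24 via 10.0.0.254, RFC 3442 encodes each route as the prefix length, the significant octets of the destination, then the gateway, so the configuration would look roughly like option rfc3442-classless-static-routes code 121 = array of unsigned integer 8; option rfc3442-classless-static-routes 24, 10, 0, 1, 10, 0, 0, 254; Treat this as a sketch and check RFC 3442 and the dhcpd documentation before relying on the exact encoding.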
{ "source": [ "https://unix.stackexchange.com/questions/192066", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/105621/" ] }
192,206
I am not sure if it is the only possible way, but I read that in order to put a single pixel onto the screen at a location of your choice one has to write something into a place called framebuffer. So I became curious, if it is possible to enter into this place and write something into it in order to display a single pixel somewhere on the screen.
yes, outside X-server, in tty, try command: cat /dev/urandom >/dev/fb0 if colourfull pixels fills the screen, then your setup is ok, and you can try playing with this small script: #!/usr/bin/env bash fbdev=/dev/fb0 ; width=1280 ; bpp=4 color="\x00\x00\xFF\x00" #red colored function pixel() { xx=$1 ; yy=$2 printf "$color" | dd bs=$bpp seek=$(($yy * $width + $xx)) \ of=$fbdev &>/dev/null } x=0 ; y=0 ; clear for i in {1..500}; do pixel $((x++)) $((y++)) done where function 'pixel' should be an answer... write a pixel to screen by changing byte values (blue-green-red-alpha) on x-y offset of device /dev/fbX which is frame buffer for the video-card. or try one liner pixel draw (yellow on x:y=200:100, if width is 1024): printf "\x00\xFF\xFF\x00" | dd bs=4 seek=$((100 * 1024 + 200)) >/dev/fb0 UPDATE: this code works even inside X-server, if we just configure X to use frame buffer . by specifying fb0 inside /usr/share/X11/xorg.conf.d/99-fbdev.conf
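Note that the script above hard-codes width=1280 ; on most Linux systems you can check the real framebuffer geometry first, e.g. cat /sys/class/graphics/fb0/virtual_size (prints width,height), or with the fbset tool if it is installed, and adjust the variable accordingly.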
{ "source": [ "https://unix.stackexchange.com/questions/192206", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/102788/" ] }
192,263
I occasionally search through files in vim or less using / or ? but as far as I can tell, the search patterns are case sensitive. So for example, /foo won't find the same things that /FOO will. Is there an way way to make it less strict? How can I search in vim or less for a pattern that is NOT case sensitive?
In vi or vim you can ignore case by :set ic , and all subsequent searches will consider the setting until you reset it by :set noic . In less there are options -i and -I to ignore case.
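A couple of related tricks: in vim you can make a single search case-insensitive by adding \c anywhere in the pattern (e.g. /foo\c ), and :set ignorecase / :set noignorecase are the long forms of ic / noic . In less , note that -i only ignores case while the pattern is all lowercase, whereas -I ignores case even if the pattern contains uppercase letters.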
{ "source": [ "https://unix.stackexchange.com/questions/192263", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/1822/" ] }
192,621
I have 2 questions. The first one is for the -sf options and the second one is for the more specific usage of -f options. By googling, I figured out the description of command ln , option -s and -f . (copy from http://linux.about.com/od/commands/l/blcmdl1_ln.htm ) -s, --symbolic : make symbolic links instead of hard links -f, --force : remove existing destination files I understand these options individually. But, how could one use this -s and -f options simultaneously? -s is used for creating a link file and -f is used for removing a link file. Why use this merged option? To know more about ln command, I made some examples. $ touch foo # create sample file $ ln -s foo bar # make link to file $ vim bar # check how link file works: foo file opened $ ln -f bar # remove link file Everything works fine before next command $ ln -s foo foobar $ ln -f foo # remove original file By the description of -f option, this last command should not work, but it does! foo is removed. Why is this happening?
First of all, to find what a command's options do, you can use man command . So, if you run man ln , you will see: -f, --force remove existing destination files -s, --symbolic make symbolic links instead of hard links Now, the -s , as you said, is to make the link symbolic as opposed to hard. The -f , however, is not to remove the link. It is to overwrite the destination file if one exists. To illustrate: $ ls -l total 0 -rw-r--r-- 1 terdon terdon 0 Mar 26 13:18 bar -rw-r--r-- 1 terdon terdon 0 Mar 26 13:18 foo $ ln -s foo bar ## fails because the target exists ln: failed to create symbolic link ‘bar’: File exists $ ln -sf foo bar ## Works because bar is removed and replaced with the link $ ls -l total 0 lrwxrwxrwx 1 terdon terdon 3 Mar 26 13:19 bar -> foo -rw-r--r-- 1 terdon terdon 0 Mar 26 13:18 foo
{ "source": [ "https://unix.stackexchange.com/questions/192621", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/108094/" ] }
192,642
How to run wkhtmltopdf headless?! Installation on Debian Wheezy: apt-get install wkhtmltopdf Command: wkhtmltopdf --title "$SUBJECT" -q $SOURCEFILE $OUTPUTFILE Error: QXcbConnection: Could not connect to display
This is a bug , and the fix hasn't been brought to the Debian repositories. Quoting ashkulz (who closed the bug report) : You're using the version of wkhtmltopdf in the debian repositories, which does not support running headless. So you can either... Download wkhtmltopdf from source and compile it (see the instructions in the INSTALL.md file ; you may remove the --recursive option from their git clone line, if you already have Qt 4.8 installed). Run it inside xvfb , as suggested by masterkorp in the bug report .
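The xvfb route can be as simple as wrapping the call with xvfb-run (from the xvfb package); a sketch, reusing the variables from the question: sudo apt-get install xvfb xvfb-run wkhtmltopdf --title "$SUBJECT" -q "$SOURCEFILE" "$OUTPUTFILE"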
{ "source": [ "https://unix.stackexchange.com/questions/192642", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/83275/" ] }
192,671
According to Debian Network setup document allow-hotplug <interface_name> stanza in /etc/network/interfaces file starts an interface when the kernel detects a hotplug event from the interface. What is this hotplug event?
allow-hotplug <interface> is used the same way auto is by most people. However, the hotplug event is something that involves kernel/udev detection against the hardware: that could be a cable being connected to the port, a USB-to-Ethernet dongle that will be up and running whenever you plug it in, or a PCMCIA wireless card being connected to the slot. My personal opinion: I also think that allow-hotplug could have more documented examples to make this thing easier to understand. As pointed out by other U&L members and Debian lists, those two options create the "chicken and egg problem" when there are no cables connected or when an event is created: Re: network reference v2: questions about allow-hotplug Re: Netcfg and allow-hotplug vs auto References: Good detailed explanation of /etc/network/interfaces syntax? ; Re: Netcfg and allow-hotplug vs auto ; Howto Set Up Multiple Network Schemes on a Linux Laptop PCMCIA, Cardbus, USB ; Debian networking. Basic sintax of /etc/networ/interfaces ;
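A minimal example of how it typically appears in /etc/network/interfaces (the interface name is just an example): allow-hotplug eth0 iface eth0 inet dhcp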
{ "source": [ "https://unix.stackexchange.com/questions/192671", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/33060/" ] }
192,673
I'm looking for any information regarding how secure an encrypted Linux file system is when contained in a VirtualBox virtual drive on a Windows host? Specifically I'm looking for answers to the following questions: Does the fact it is hosted as a guest system expose the encrypted data to any new attack vectors? Aside from the threat of key loggers on the Host OS, malware etc., when the virtual machine is turned on is there the threat of a rogue host process accessing the virtual machine's file system on the fly ? When both the Host and Guest OSes are turned off and the data is at rest on a storage device, is it any easier/harder to retrieve the encrypted file system?
{ "source": [ "https://unix.stackexchange.com/questions/192673", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/106977/" ] }
192,698
When I type "grep doc" in the terminal, it just don't do anything, stopping the terminal from doing anything else before I escape using Ctrl + C or Z . I know this isn't how I'm supposed to use grep, but just curious why this is happening.
grep by default searches standard input if no files are given: grep searches the named input FILEs (or standard input if no files are named, or if a single hyphen-minus (-) is given as file name) for lines containing a match to the given PATTERN. By default, grep prints the matching lines. If you just do grep doc grep expects standard input to come and search inside it (don't enter parts between < and > into the terminal, these are comments): $ grep doc a b c <PRESS ENTER HERE> doc <NO MATCH WAS FOUND IN PREVIOUS LINE, TYPE doc AND PRESS ENTER AGAIN> doc <MATCH WAS FOUND>
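If you actually meant to search the contents of files rather than standard input, give grep something to read, e.g. grep doc notes.txt or grep -r doc . to search the current directory recursively; when feeding it interactively as above, Ctrl + D ends the input.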
{ "source": [ "https://unix.stackexchange.com/questions/192698", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/108136/" ] }
192,706
With sysvinit , a sudoers entry like this would suffice: %webteam cms051=/sbin/service httpd * This would allow for commands such as: sudo service httpd status sudo service httpd restart Now, with systemd , the service name is the final argument. I.e., the service restart would be done with: systemctl restart httpd.service Naturally, I thought defining the command as systemctl * httpd.service would work but that would allow something like systemctl restart puppet.service httpd.service which is not the desired effect. With that being considered, what would be the best way allow non-root users to control a systemd service then? This doesn't need to be sudoers ; perhaps a file permission change may be sufficient?
Just add all needed commands to sudoers separately: %webteam cms051=/usr/bin/systemctl restart httpd.service %webteam cms051=/usr/bin/systemctl stop httpd.service %webteam cms051=/usr/bin/systemctl start httpd.service %webteam cms051=/usr/bin/systemctl status httpd.service
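If the list grows, sudoers also lets you group the commands under an alias (the alias name here is arbitrary):
Cmnd_Alias HTTPD_CTL = /usr/bin/systemctl start httpd.service, /usr/bin/systemctl stop httpd.service, /usr/bin/systemctl restart httpd.service, /usr/bin/systemctl status httpd.service
%webteam cms051=HTTPD_CTL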
{ "source": [ "https://unix.stackexchange.com/questions/192706", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/2372/" ] }
192,716
I am on CentOS 6, trying to enable core dumps for an application I am developing. I have put: ulimit -H -c unlimited >/dev/null ulimit -S -c unlimited >/dev/null into my bash profile, but a core dump was still not generated (in a new terminal). I have also changed my /etc/security/limits.conf so that the soft limit is zero for all users. How do I set the location the core files are output to? I want to specify the location and append the time the dump was generated as part of the file name.
To set the location of core dumps in CentOS 6 you can edit /etc/sysctl.conf . For example if you want core dumps in /var/crash : kernel.core_pattern=/var/crash/core-%e-%s-%u-%g-%p-%t #corrected spaces before and after = Where the variables are: %e is the filename %g is the gid the process was running under %p is the pid of the process %s is the signal that caused the dump %t is the time the dump occurred %u is the uid the process was running under Also you have to add to /etc/sysconfig/init : DAEMON_COREFILE_LIMIT='unlimited' Now apply the new changes: $ sysctl -p But there is a caveat with this approach. The kernel parameter kernel.core_pattern is reset and overwritten at reboot to the following configuration, even when a value is manually specified in /etc/sysctl.conf : |/usr/libexec/abrt-hook-ccpp %s %c %p %u %g %t e In short, when abrtd.service starts, kernel.core_pattern is overwritten automatically by the system-installed abrt-addon-ccpp . There are two ways to resolve this: Setting the DumpLocation option in the /etc/abrt/abrt.conf configuration file. The destination directory can be specified by setting DumpLocation = /var/crash in the /etc/abrt/abrt.conf configuration file; sysctl will still display the same kernel.core_pattern value, but core files will actually be created in the directory under /var/crash . Also if you have SELinux enabled you have to run: $ semanage fcontext -a -t public_content_rw_t "/var/crash(/.*)?" $ setsebool -P abrt_anon_write 1 And finally restart abrtd.service : $ service abrtd.service restart Stopping the abrtd service, so that kernel.core_pattern will not be overwritten (I've never tested this).
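A rough way to test the setup once it is in place: raise the limit in the current shell, crash a throwaway process, and then look in /var/crash (or wherever your pattern points): ulimit -c unlimited; sleep 30 & kill -s SIGSEGV $! Whether the dump lands there directly or goes through abrt first depends on which of the two approaches above you chose.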
{ "source": [ "https://unix.stackexchange.com/questions/192716", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/50597/" ] }
192,786
In order to understand another answer (by glenn jackman): find / -type d -print0 | while read -r -d '' dir; do ls -ltr "$dir" | sed '$!d'; done the first step is to understand the usage of the option -r of the read command. First, I thought, it would be sufficient to simply execute man read to look up the meaning of the -r option, but I realized the man page does not contain any explanation for options at all, so I Googled for it.  I got some read -t , read -p examples, but no read -r .
There is no stand-alone read command: instead, it is a shell built-in, and as such is documented in the man page for bash : read [ -ers ] [ -a aname ] [ -d delim ] [ -i text ] [ -n nchars ] [ -N nchars ] [ -p prompt ] [ -t timeout ] [ -u fd ] [ name ...] ︙ -r Backslash does not act as an escape character.  The backslash is considered to be part of the line.  In particular, a backslash-newline pair may not be used as a line continuation. So, to summarize, read normally allows long lines to be broken using a trailing backslash character, and normally reconstructs such lines. This slightly surprising behavior can be deactivated using -r .
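A quick demonstration of the difference (the backslash only survives with -r ):
$ printf '%s\n' 'back\slash' | { read v; echo "$v"; }
backslash
$ printf '%s\n' 'back\slash' | { read -r v; echo "$v"; }
back\slash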
{ "source": [ "https://unix.stackexchange.com/questions/192786", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/102788/" ] }
192,944
I'm trying to curl HTTPS website in the following way: $ curl -v https://thepiratebay.se/ However it fails with the error: * About to connect() to thepiratebay.se port 443 (#0) * Trying 173.245.61.146... * connected * Connected to thepiratebay.se (173.245.61.146) port 443 (#0) * SSLv3, TLS handshake, Client hello (1): * SSLv3, TLS alert, Server hello (2): * error:14077410:SSL routines:SSL23_GET_SERVER_HELLO:sslv3 alert handshake failure * Closing connection #0 curl: (35) error:14077410:SSL routines:SSL23_GET_SERVER_HELLO:sslv3 alert handshake failure Using -k / --insecure or adding insecure to my ~/.curlrc doesn't make any difference. How do I ignore or force the certificate using curl command line? When using wget seems to work fine. Also works when testing with openssl as below: $ openssl s_client -connect thepiratebay.se:443 CONNECTED(00000003) SSL handshake has read 2651 bytes and written 456 bytes New, TLSv1/SSLv3, Cipher is AES128-SHA Server public key is 2048 bit Secure Renegotiation IS supported Compression: NONE Expansion: NONE SSL-Session: Protocol : TLSv1 Cipher : AES128-SHA I've: $ curl --version curl 7.28.1 (x86_64-apple-darwin10.8.0) libcurl/7.28.1 OpenSSL/0.9.8| zlib/1.2.5 libidn/1.17 Protocols: dict file ftp ftps gopher http https imap imaps ldap ldaps pop3 pop3s rtsp smtp smtps telnet tftp Features: IDN IPv6 Largefile NTLM NTLM_WB SSL libz
Some sites disable support for SSL 3.0 (possibly because of its many exploits/vulnerabilities), so it's possible to force a specific SSL version by either -2 / --sslv2 or -3 / --sslv3 . Also -L is worth a try if the requested page has moved to a different location. In my case it was a curl bug ( found in OpenSSL ), so curl needed to be upgraded to the latest version (>7.40) and it worked fine. See also: 3 Common Causes of Unknown SSL Protocol Errors with cURL Error when Installing Meteor at SO [Bug 861137] Re: Openssl TLS errors while connecting to SSLv3 sites
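For example, to see whether forcing a particular protocol version makes a difference (these flags come straight from curl's documentation, though whether they help depends on what the server accepts):
curl -v --tlsv1 https://thepiratebay.se/
curl -v -3 https://thepiratebay.se/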
{ "source": [ "https://unix.stackexchange.com/questions/192944", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/21471/" ] }
192,945
I have just started to use Scientific Linux (7.0) (although I assume this question might be distribution neutral..). The kernel version is 3.10.0-123.20.1.el7.x86_64. Coming back to my question. I switched to root account and from there created an new user account test-account using the command adduser test-account . It didn't prompt me for a password neither did I use the option to provide password. So I guess it's a "without password" account. I can login into this account from root account - which I suppose I'd be able to without providing password even if the test account had a password. However when I try to login into this(test-account) from a third account - it prompts me for password. And just pressing Enter doesn't work. Is it possible to login into this account from a non-root account. Is there a way (without switching to root or using sudo ) ?
By default on enterprise GNU/Linux and its derivatives, the adduser command creates a user which is disabled until you explicitly specify a password for that user. Here is an example on CentOS 6.5, which should be the same as Scientific Linux. $ sudo adduser test $ sudo grep test /etc/shadow test:!!:123456:0:99999:7::: that's because in the /etc/shadow file, the password field is !! , as you can see in the example. Once you run passwd for this account, it will change the user's password and allow the user to be able to login. So what you should be able to do is the following to have a user without a password, simply create an account then delete the password. $ sudo adduser test $ sudo passwd -d test Removing password for user test. passwd: Success $ su test $ whoami test now any user should be able to use su and login as the user test in my example. You will not have to use sudo to login as the account. Although this is possible and you can have an account without a password, it is not advised. If you simply set the password for the user, you should be allowed to login. $ sudo passwd test [sudo] password for <YOURACCOUNT>: Changing password for user test. New password: Retype new password: passwd: all authentication tokens updated successfully.
{ "source": [ "https://unix.stackexchange.com/questions/192945", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/39843/" ] }
193,039
I'm new to bash and can't find a good tutorial to answer my question. array=( item1 item2 item3 ) for name in ${array[@]}; do echo current/total ... some other codes done I want to calculate the current and total value, as the expected output of this being: 1/3 2/3 3/3 Thanks for any kind of tips.
You can access the array indices using ${!array[@]} and the length of the array using ${#array[@]} , e.g. : #!/bin/bash array=( item1 item2 item3 ) for index in ${!array[@]}; do echo $index/${#array[@]} done Note that since bash arrays are zero indexed , you will actually get : 0/3 1/3 2/3 If you want the count to run from 1 you can replace $index by $((index+1)) . If you want the values as well as the indices you can use "${array[index]}" i.e. #!/bin/bash array=( item1 item2 item3 ) for index in ${!array[@]}; do echo $((index+1))/${#array[@]} = "${array[index]}" done giving 1/3 = item1 2/3 = item2 3/3 = item3
{ "source": [ "https://unix.stackexchange.com/questions/193039", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/45317/" ] }
193,066
The ssh won't let me login, because account is locked. I want to unlock the user on my server for public key authorization over ssh, but do not enable password-ed login. I've tried: # passwd -u username passwd: unlocking the password would result in a passwordless account. You should set a password with usermod -p to unlock the password of this account. Auth log entries: Mar 28 00:00:00 vm11111 sshd[11111]: User username not allowed because account is locked Mar 28 00:00:00 vm11111 sshd[11111]: input_userauth_request: invalid user username [preauth]
Unlock the account and give the user a complex password as @Skaperen suggests. Edit /etc/ssh/sshd_config and ensure you have: PasswordAuthentication no Check that the line isn't commented ( # at the start) and save the file. Finally, restart the sshd service. Before you do this, ensure that your public key authentication is working first. If you need to do this for only one (or a small number) of users, leave PasswordAuthentication enabled and instead use Match User : Match User miro, alice, bob PasswordAuthentication no Place at the bottom of the file as it is valid until the next Match command or EOF. You can also use Match Group <group name> or a negation Match User !bloggs As you mention in the comments, you can also reverse it so that Password Authentication is disabled in the main part of the config and use Match statements to enable it for a few users: PasswordAuthentication no . . . Match <lame user> PasswordAuthentication yes
{ "source": [ "https://unix.stackexchange.com/questions/193066", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/13428/" ] }
193,095
I have picked up -- probably on Usenet in the mid-1990s (!) -- that the construct export var=value is a Bashism, and that the portable expression is var=value export var I have been advocating this for years, but recently, somebody challenged me about it, and I really cannot find any documentation to back up what used to be a solid belief of mine. Googling for "export: command not found" does not seem to bring up any cases where somebody actually had this problem, so even if it's genuine, I guess it's not very common. (The hits I get seem to be newbies who copy/pasted punctuation, and ended up with 'export: command not found or some such, or trying to use export with sudo ; and newbie csh users trying to use Bourne shell syntax.) I can certainly tell that it works on OS X, and on various Linux distros, including the ones where sh is dash . sh$ export var=value sh$ echo "$var" value sh$ sh -c 'echo "$var"' # see that it really is exported value In today's world, is it safe to say that export var=value is safe to use? I'd like to understand what the consequences are. If it's not portable to v7 "Bourne classic", that's hardly more than trivia. If there are production systems where the shell really cannot cope with this syntax, that would be useful to know.
export foo=bar was not supported by the Bourne shell (an old shell from the 70s from which modern sh implementations like ash/bash/ksh/yash/zsh derive). That was introduced by ksh . In the Bourne shell, you'd do: foo=bar export foo or: foo=bar; export foo or with set -k : export foo foo=bar Now, the behaviour of: export foo=bar varies from shell to shell. The problem is that assignments and simple command arguments are parsed and interpreted differently. The foo=bar above is interpreted by some shells as a command argument and by others as an assignment (sometimes). For instance, a='b c' export d=$a is interpreted as: 'export' 'd=b' 'c' with some shells ( ash , older versions of zsh (in sh emulation), yash ) and: 'export' 'd=b c' in the others ( bash , ksh ). While export \d=$a or var=d export $var=$a would be interpreted the same in all shells (as 'export' 'd=b' 'c' ) because that backslash or dollar sign stops those shells that support it to consider those arguments as assignments. If export itself is quoted or the result of some expansion (even in part), depending on the shell, it would also stop receiving the special treatment. See Are quotes needed for local variable assignment? for more details on that. The Bourne syntax though: d=$a; export d is interpreted the same by all shells without ambiguity ( d=$a export d would also work in the Bourne shell and POSIX compliant shells but not in recent versions of zsh unless in sh emulation). It can get a lot worse than that. See for instance that recent discussion about bash when arrays are involved. (IMO, it was a mistake to introduce that feature ).
{ "source": [ "https://unix.stackexchange.com/questions/193095", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/19240/" ] }
193,101
I have an old Win XP NEC laptop and I tried to boot a live usb with Lubuntu 14.10 to install Lubuntu, but when I tried to boot the live USB, after about a minute the boot process hangs at a line that says: [Firmware Bug] ACPI: No _BQC method, cannot determine initial brightness. I left it there for ~ 15 minutes and it was still stuck there. I tried rebooting, unplugging everything and booting, but nothing worked. I can only boot to windows XP. I cannot even boot to a Linux terminal. I've looked at many different StackExchange articles and I've tried Google. Please help! -Keith
{ "source": [ "https://unix.stackexchange.com/questions/193101", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/108372/" ] }
193,223
I would like to grep certain parts of some shell command output in a shell script: $ uname -r >> 3.14.37-1-lts Where I just need the 3.14.37 . And also for the shell script variable VERSION that has the value "-jwl35", I would like to take only the value "jwl35". How can I use regular expressions to do this in a shell script? Thanks in advance!
Many, many ways. Here are a few: GNU Grep $ echo 3.14.37-1-lts | grep -oP '^[^-]*' 3.14.37 sed $ echo 3.14.37-1-lts | sed 's/^\([^-]*\).*/\1/' 3.14.37 Perl $ echo 3.14.37-1-lts | perl -lne '/^(.*?)-/ && print $1 3.14.37 or $ echo 3.14.37-1-lts | perl -lpe 's/^(.*?)-.*/$1/' 3.14.37 or $ echo 3.14.37-1-lts | perl -F- -lane 'print $F[0]' 3.14.37 awk $ echo 3.14.37-1-lts | awk -F- '{print $1}' 3.14.37 cut $ echo 3.14.37-1-lts | cut -d- -f1 3.14.37 Shell, even! $ echo 3.14.37-1-lts | while IFS=- read a b; do echo "$a"; done 3.14.37
{ "source": [ "https://unix.stackexchange.com/questions/193223", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/108427/" ] }
193,345
I have a directory: /var/lib/mysql/test_db/ which contains numerous files that make up the test_db database I have now created a new directory: /var/lib/mysql/data/ I am trying to move the test_db directory and it's contents into the data directory. I've tried various commands revolving around sudo mv /var/lib/mysql/test_db/ /var/lib/mysql/data/test_db/ But I keep getting the error: mv: cannot move /var/lib/mysql/test_db/ to /var/lib/msyql/data/test_db/: No such file or directory But if I run: ls -lah I get drwxrwxrwx 2 root root 32K Mar 27 15:58 test_db drwxrwxrwx 3 mysql mysql 4.0K Mar 30 10:51 data which from what I can tell means they are both directories, and therefore both exist. As you can see I have changed permissions on them both ( chmod 777 test_db ), but that didn't work. What am I missing?
Remove the target database directory and move the test_db directory itself. (This will implicitly move its contents, too.) sudo rmdir /var/lib/mysql/data/test_db sudo mv /var/lib/mysql/test_db /var/lib/mysql/data Generally you don't need to provide a trailing slash on directory names. Reading your comments, if you find that you're still getting a "no such file or directory" error, it may be that your source directory, test_db has already been moved into the target test_db directory (giving you /var/lib/mysql/data/test_db/test_db/... ). If this is the case then the rmdir above will also fail with a "no such file or directory" error. Fix it with this command, and then re-run the two at the top of this answer: sudo mv /var/lib/mysql/data/test_db/test_db /var/lib/mysql
{ "source": [ "https://unix.stackexchange.com/questions/193345", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/102428/" ] }
193,352
I'm just jumping into unix from a different world, and wanted to know if while true do /someperlscript.pl done The perl script itself internally has a folder/file watcher that executes when files are changed in the target location. Is this ( while true ) a good idea? If not, what is a preferred robust approach? TIA EDIT : Since this seems to have generated a fair bit of interest, here is the complete scenario. The perl script itself watches a directory using a file watcher. Upon receiving new files (they arrive via rsync), it picks up the new one and processes it. Now the incoming files may be corrupt (don't ask.. coming from a raspberry pi), and sometimes the process may not be able to deal with it. I don't know exactly why, because we aren't aware of all the scenarios yet. BUT - if the process does fail for some reason, we want it to be up and running and deal with the next file, because the next file is completely unrelated to the previous one that might have caused the error. Usually I would have used some sort of catch all and wrapped the entire code around it so that it NEVER crashes. But was not sure for perl. From what I've understood, using something like supervisord is a good approach for this.
That depends on how fast the perl script returns. If it returns quickly, you might want to insert a small pause between executions to avoid CPU load, eg: while true do /someperlscript.pl sleep 1 done This will also prevent a CPU hog if the script is not found or crashes immediately. The loop might also better be implemented in the perl script itself to avoid these issues. Edit: As you wrote that the loop's only purpose is to restart the perl script should it crash, a better approach would be to implement it as a monitored service, but the precise way to do it is OS dependent. Eg: Solaris smf, Linux systemd or a cron based restarter.
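On a systemd-based distribution, a minimal unit file handles the restart-on-crash part for you (a sketch; the path and names are placeholders):
[Unit]
Description=Watch incoming files

[Service]
ExecStart=/usr/bin/perl /path/to/someperlscript.pl
Restart=on-failure
RestartSec=1

[Install]
WantedBy=multi-user.target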
{ "source": [ "https://unix.stackexchange.com/questions/193352", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/108541/" ] }
193,368
I want to use scp to upload files but sometimes the target directory may not exist. Is it possible to create the folder automatically? If so, how? If not, what alternative way can I try?
This is one of the many things that rsync can do. If you're using a version of rsync released in the past several years,¹ its basic command syntax is similar to scp :² $ rsync -r local-dir remote-machine:path That will copy local-dir and its contents to $HOME/path/local-dir on the remote machine, creating whatever directories are required.³ rsync does have some restrictions here that can affect whether this will work in your particular situation. It won't create multiple levels of missing remote directories, for example; it will only create up to one missing level on the remote. You can easily get around this by preceding the rsync command with something like this: $ ssh remote-host 'mkdir -p foo/bar/qux' That will create the $HOME/foo/bar/qux tree if it doesn't exist. It won't complain or do anything else bad if it does already exist. rsync sometimes has other surprising behaviors. Basically, you're asking it to figure out what you meant to copy, and its guesses may not match your assumptions. Try it and see. If it doesn't behave as you expect and you can't see why, post more details about your local and remote directory setups, and give the command you tried. Footnotes : Before rsync 2.6.0 (1 Jan 2004), it required the -e ssh flag to make it behave like scp because it defaulted to the obsolete RSH protocol . scp and rsync share some flags, but there is only a bit of overlap. When using SSH as the transfer protocol, rsync uses the same defaults. So, just like scp , it will assume there is a user with the same name as your local user on the remote machine by default.
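If you would rather stick with scp itself, you can create the directory in the same command line and then copy into it (the paths here are placeholders): ssh remote-machine 'mkdir -p path/to/dir' && scp file remote-machine:path/to/dir/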
{ "source": [ "https://unix.stackexchange.com/questions/193368", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/45317/" ] }
193,482
Is there a simple command line to extract the last part of a string separated by hyphens? E.g., I want to extract 123 from foo-bar-123 .
You can use Bash's parameter expansion : string="foo-bar-123" && printf "%s\n" "${string##*-}" 123 If you want to use another process, with Awk: echo "foo-bar-123" | awk -F- '{print $NF}' Or, if you prefer Sed: echo "foo-bar-123" | sed 's/.*-//' A lighter external process, as Glenn Jackman suggests is cut : cut -d- -f3 <<< "$string"
{ "source": [ "https://unix.stackexchange.com/questions/193482", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/3120/" ] }
193,714
I am aware of following thread and supposedly an answer to it . Except an answer is not an answer in generic sense. It tells what the problem was in one particular case, but not in general. My question is: is there a way to debug ordering cycles in a generic way? E.g.: is there a command which will describe the cycle and what links one unit to another? For example, I have following in journalctl -b (please disregard date, my system has no RTC to sync time with): Jan 01 00:00:07 host0 systemd[1]: Found ordering cycle on sysinit.target/start Jan 01 00:00:07 host0 systemd[1]: Found dependency on local-fs.target/start Jan 01 00:00:07 host0 systemd[1]: Found dependency on cvol.service/start Jan 01 00:00:07 host0 systemd[1]: Found dependency on basic.target/start Jan 01 00:00:07 host0 systemd[1]: Found dependency on sockets.target/start Jan 01 00:00:07 host0 systemd[1]: Found dependency on dbus.socket/start Jan 01 00:00:07 host0 systemd[1]: Found dependency on sysinit.target/start Jan 01 00:00:07 host0 systemd[1]: Breaking ordering cycle by deleting job local-fs.target/start Jan 01 00:00:07 host0 systemd[1]: Job local-fs.target/start deleted to break ordering cycle starting with sysinit.target/start where cvol.service (the one that got introduced, and which breaks the cycle) is: [Unit] Description=Mount Crypto Volume After=boot.mount Before=local-fs.target [Service] Type=oneshot RemainAfterExit=no ExecStart=/usr/bin/cryptsetup open /dev/*** cvol --key-file /boot/*** [Install] WantedBy=home.mount WantedBy=root.mount WantedBy=usr-local.mount According to journalctl, cvol.service wants basic.service, except that it doesn't, at least not obviously. Is there a command which would demonstrate where this link is derived from? And in general, is there a command, which would find the cycles and show where each link in the cycle originates?
You can visualise the cycle with the commands systemd-analyze verify , systemd-analyze dot and the GraphViz dot tool: systemd-analyze verify default.target |& perl -lne 'print $1 if m{Found.*?on\s+([^/]+)}' | xargs --no-run-if-empty systemd-analyze dot | dot -Tsvg >cycle.svg You should see something like this: Here you can see the cycle: c.service->b.service->a.service->c.service Color legend: black = Requires dark blue = Requisite dark grey = Wants red = Conflicts green = After Links: systemd-analyze(1) dot(1)
{ "source": [ "https://unix.stackexchange.com/questions/193714", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/14554/" ] }
193,815
While researching another problem, I came across a command , locate something | xargs -I {} bash -c "if [ -d "{}" ]; then echo {}; fi" that I wanted to learn more about. So I ran man xargs and get the following output: XARGS(1) General Commands Manual XARGS(1) NAME xargs - build and execute command lines from standard input SYNOPSIS xargs [-0prtx] [-E eof-str] [-e[eof-str]] [--eof[=eof-str]] [--null] [-d delimiter] [--delimiter delimiter] [-I replace-str] [-i[replace- str]] [--replace[=replace-str]] [-l[max-lines]] [-L max-lines] [--max-lines[=max-lines]] [-n max-args] [--max-args=max-args] [-s max- chars] [--max-chars=max-chars] [-P max-procs] [--max-procs=max-procs] [--interactive] [--verbose] [--exit] [--no-run-if-empty] [--arg-file=file] [--show-limits] [--version] [--help] [command [initial-arguments]] DESCRIPTION This manual page documents the GNU version of xargs... I am trying to get better at using documentation to learn about Linux programs, but that "Synopsis" section is intimidating to new users. It literally looks like gibberish compared to man locate or man free . So far, I understand that square brackets mean optional and nested brackets mean options in optional. But how am I supposed to induce a valid command with that? I am not asking for help with xargs here. I am looking for help interpreting a man page to understand complicated commands. I want to stop making Google-indexed web blogs and personal help from others my first approach to learning Linux commands.
Well, this is my very personal way to read manpages: The manpager When you open a manpage using the man command, the output will be displayed/rendered by the less or more commands, or any other command that will be set as your pager(manpager). If you are using Linux you are probably served with your man infrastructure already configured to use /usr/bin/less -is (unless you installed some minimal distro) as man(1) , explain on it's Options section: -P pager Specify which pager to use. This option overrides the MANPAGER environment variable, which in turn overrides the PAGER variable. By default, man uses /usr/bin/less -is. On FreeBSD and OpenBSD is just a matter of editing the MANPAGER environment variable since they will mostly use more , and some features like search and text highlight could be missing. There is a good answer to the question of what differences more , less and most have here (never used most ). The ability to scroll backwards and scroll forward by page with Space or both ways by line with ↓ or ↑ (also, using vi bindings j and k ) is essential while browsing manpages. Press h while using less to see the summary of commands available. And that's why I suggest you to use less as your man pager. less have some essential features that will be used during this answer. How is a command formatted? Utility Conventions : The Open Group Base Specifications Issue 7 - IEEE Std 1003.1, 2013 Edition. You should visit that link before trying to understand a manpage. This online reference describes the argument syntax of the standard utilities and introduces terminology used throughout POSIX.1-2017 for describing the arguments processed by the utilities. This will also indirectly get you updated about the real meaning of words like parameters, arguments, argument option... The head of any manpage will look less cryptic to you after understanding the notation of the utility conventions: utility_name[-a][-b][-c option_argument] [-d|-e][-f[option_argument]][operand...] Have in mind what you want to do. When doing your research about xargs you did it for a purpouse, right? You had a specific need that was reading standard output and executing commands based on that output. But, when I don't know which command I want? Use man -k or apropos (they are equivalent). If I don't know how to find a file: man -k file | grep search . Read the descriptions and find one that will better fit your needs. Example: apropos -r '^report' bashbug (1) - report a bug in bash df (1) - report file system disk space usage e2freefrag (8) - report free space fragmentation information filefrag (8) - report on file fragmentation iwgetid (8) - Report ESSID, NWID or AP/Cell Address of wireless network kbd_mode (1) - report or set the keyboard mode lastlog (8) - reports the most recent login of all users or of a given user pmap (1) - report memory map of a process ps (1) - report a snapshot of the current processes. pwdx (1) - report current working directory of a process uniq (1) - report or omit repeated lines vmstat (8) - Report virtual memory statistics Apropos works with regular expressions by default, ( man apropos , read the description and find out what -r does), and on this example I'm looking for every manpage where the description starts with "report". 
To look for information related with reading standard input/output processing and reaching xargs as a possible option: man -k command| grep input xargs (1) - build and execute command lines from standard input Always read the DESCRIPTION before starting Take a time and read the description. By just reading the description of the xargs command we will learn that: xargs reads from STDIN and executes the command needed. This also means that you will need to have some knowledge of how standard input works, and how to manipulate it through pipes to chain commands The default behavior is to act like /bin/echo . This gives you a little tip that if you need to chain more than one xargs , you don't need to use echo to print. We have also learned that unix filenames can contain blank and newlines, that this could be a problem and the argument -0 is a way to prevent things explode by using null character separators. The description warns you that the command being used as input needs to support this feature too, and that GNU find support it. Great. We use a lot of find with xargs . xargs will stop if exit status 255 is reached. Some descriptions are very short and that is generally because the software works on a very simple way. Don't even think of skipping this part of the manpage ;) Other things to pay attention... You know that you can search for files using find . There is a ton of options and if you only look at the SYNOPSIS , you will get overwhelmed by those. It's just the tip of the iceberg. Excluding NAME , SYNOPSIS , and DESCRIPTION , you will have the following sections: AUTHORS : the people who created or assisted in the creation of the command. BUGS : lists any known defects. Could be only implementation limitations. ENVIRONMENT : Aspects of your shell that could be affected by the command, or variables that will be used. EXAMPLES or NOTES : Self explanatory. REPORTING BUGS : Who you will have to contact if you find bugs on this tool or in its documentation. COPYRIGHT : Person who created and disclaimers about the software. All related with the license of the software itself. SEE ALSO : Other commands, tools or working aspects that are related to this command, and could not fit on any of the other sections. You will most probably find interesting info about the aspects you want of a tool on the examples/notes section. Example On the following steps I'll take find as an example, since it's concepts are "more simple" than xargs to explain(one command find files and the other deals with stdin and pipelined execution of other command output). Let's just pretend that we know nothing (or very little) about this command. I have a specific problem that is: I have to look for every file with the .jpg extension, and with 500KiB (KiB = 1024 byte, commonly called kibibyte), or more in size inside a ftp server folder. First, open the manual: man find . The SYNOPSIS is slim. Let's search for things inside the manual: Type / plus the word you want ( size ). It will index a lot of entries -size that will count specific sizes. Got stuck. Don't know how to search with "more than" or "less than" a given size, and the man does not show that to me. Let's give it a try, and search for the next entry found by hitting n . OK. Found something interesting: find \( -size +100M -fprintf /root/big.txt %-10s %p\n \) . Maybe this example is showing us that with -size +100M it will find files with 100MB or more. How could I confirm? Going to the head of the manpage and searching for other words. 
Again, let's try the word greater . Pressing g will lead us to the head of the manpage. / greater , and the first entry is: Numeric arguments can be specified as +n for **greater** than n, -n for less than n, n for exactly n. Sounds great. It seems that this block of the manual confirmed what we suspected. However, this will not only apply to file sizes. It will apply to any n that can be found on this manpage (as the phrase said: "Numeric arguments can be specified as"). Good. Let us find a way to filter by name: g / insensitive . Why? Insensitive? Wtf? We have a hypothetical ftp server, where "that other OS" people could give a file name with extensions as .jpg , .JPG , .JpG . This will lead us to: -ilname pattern Like -lname, but the match is case insensitive. If the -L option or the -follow option is in effect, this test returns false unless the symbolic link is broken. However, after you search for lname you will see that this will only search for symbolic links. We want real files. The next entry: -iname pattern Like -name, but the match is case insensitive. For example, the patterns `fo*' and `F??' match the file names `Foo', `FOO', `foo', `fOo', etc. In these patterns, unlike filename expan‐ sion by the shell, an initial '.' can be matched by `*'. That is, find -name *bar will match the file `.foobar'. Please note that you should quote patterns as a matter of course, otherwise the shell will expand any wildcard characters in them. Great. I don't even need to read about -name to see that -iname is the case insensitive version of this argument. Lets assemble the command: Command: find /ftp/dir/ -size +500k -iname "*.jpg" What is implicit here: The knowledge that the wildcard ? represents "any character at a single position" and * represents "zero or more of any character". The -name parameter will give you a summary of this knowledge. Tips that apply to all commands Some options, mnemonics and "syntax style" travel through all commands making you buy some time by not having to open the manpage at all. Those are learned by practice and the most common are: Generally, -v means verbose. -vvv is a variation "very very verbose" on some software. Following the POSIX standard, generally one dash arguments can be stacked. Example: tar -xzvf , cp -Rv . Generally -R and/or -r means recursive. Almost all commands have a brief help with the --help option. --version shows the version of a software. -p , on copy or move utilities means "preserve permissions". -y means YES, or "proceed without confirmation" in most cases. Note that the above are not always true though. For example, the -r switch can mean very different things for different software. It is always a good idea to check and make sure when a command could be dangerous, but these are common defaults. Default values of commands. At the pager chunk of this answer, we saw that less -is is the pager of man . The default behavior of commands are not always shown at a separated section on manpages, or at the section that is most top placed. You will have to read the options to find out defaults, or if you are lucky, typing / pager will lead you to that info. This also requires you to know the concept of the pager(software that scrolls the manpage), and this is a thing you will only acquire after reading lots of manpages. Why is that important? This will open up your perception if you find differences on scroll and color behavior while reading man(1) on Linux( less -is pager) or FreeBSD man(1) for example. And what about the SYNOPSIS syntax? 
After getting all the information needed to execute the command, you can combine options, option-arguments and operands inline to make your job done. Overview of concepts: Options are the switches that dictates a command behavior. " Do this " " don't do this " or " act this way ". Often called switches. Option-arguments are used on most cases when an option isn´t binary(on/off) like -t on mount, that specifies the type of a filesystem( -t iso9660 , -t ext2 ). " Do this with closed eyes " or " feed the animals, but only the lions ". Also called arguments. Operands are things you want that command to act upon. If you use cat file.txt , the operand is a file inside your current directory, and it´s contents will be shown on STDOUT . ls is a command where an operand is optional. The three dots after the operand implicitly tells you that cat can act on multiple operands(files) at the same time. You may notice that some commands have set what type of operand it will use. Example: cat [OPTION] [FILE]... Related synopsis stuff: Understand synopsis in manpage When will this method not work? Manpages that have no examples Manpages where options have a short explanation When you use generic keywords like and , to , for inside the manpages Manpages that are not installed. It seems to be obvious but, if you don't have lftp (and its manpages) installed you can't know that is a suitable option as a more sophisticated ftp client by running man -k ftp In some cases the examples will be pretty simple, and you will have to make some executions of your command to test, or in a worst case scenario, Google it. Other: Programming languages and it's modules: If you are programming or just creating scripts, keep in mind that some languages have it's own manpages systems, like perl ( perldocs ), python( pydocs ), etc, holding specific information about methods/funcions, variables, behavior, and other important information about the module you are trying to use and learn. This was useful to me when i was creating a script to download unread IMAP emails using the perl Mail::IMAPClient module. You will have to figure out those specific manpages by using man -k or searching online. Examples: [root@host ~]# man -k doc | grep perl perldoc (1) - Look up Perl documentation in Pod format [root@host ~]# perldoc Mail::IMAPClient IMAPCLIENT(1) User Contributed Perl Documentation IMAPCLIENT(1) NAME Mail::IMAPClient - An IMAP Client API SYNOPSIS use Mail::IMAPClient; my $imap = Mail::IMAPClient->new( Server => ’localhost’, User => ’username’, Password => ’password’, Ssl => 1, Uid => 1, ); ...tons of other stuff here, with sections like a regular manpage... With python: [root@host ~]# pydoc sys Help on built-in module sys: NAME sys FILE (built-in) MODULE DOCS http://www.python.org/doc/current/lib/module-sys.html DESCRIPTION This module provides access to some objects used or maintained by the interpreter and to functions that interact strongly with the interpreter. ...again, another full-featured manpage with interesting info... Or, the help() funcion inside python shell if you want to read more details of some object: nwildner@host:~$ python3.6 Python 3.6.7 (default, Oct 21 2018, 08:08:16) [GCC 8.2.0] on linux Type "help", "copyright", "credits" or "license" for more information. >>> help(round) Help on built-in function round in module builtins: round(...) round(number[, ndigits]) -> number Round a number to a given precision in decimal digits (default 0 digits). 
This returns an int when called with one argument, otherwise the same type as the number. ndigits may be negative. Bonus: The wtf command can help you with acronyms, and it works like whatis if no acronym is found in its database but what you are searching for is part of the man database. On Debian this command is part of the bsdgames package. Examples: nwildner@host:~$ wtf rtfm RTFM: read the fine/fucking manual nwildner@host:~$ wtf afaik AFAIK: as far as I know nwildner@host:~$ wtf afak Gee... I don't know what afak means... nwildner@host:~$ wtf tcp tcp: tcp (7) - TCP protocol. nwildner@host:~$ wtf systemd systemd: systemd (1) - systemd system and service manager
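In the same spirit, those language-specific documentation tools have their own keyword search and lookup shortcuts. The calls below are only a sketch: the keyword and function names are illustrative, and they assume the Perl documentation tools and a Python interpreter are installed.
# search pydoc one-line summaries by keyword, roughly what man -k does for manpages
pydoc3 -k imap
# look up a single Perl builtin function instead of a whole module
perldoc -f sprintf
# search the Perl FAQ headings by keyword
perldoc -q duplicate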
{ "source": [ "https://unix.stackexchange.com/questions/193815", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/99989/" ] }
193,827
What is DISPLAY=:0 and what does it mean? It isn't a command, is it? ( gnome-panel is a command.) DISPLAY=:0 gnome-panel
DISPLAY=:0 gnome-panel is a shell command that runs the external command gnome-panel with the environment variable DISPLAY set to :0 . The shell syntax VARIABLE = VALUE COMMAND sets the environment variable VARIABLE for the duration of the specified command only. It is roughly equivalent to (export VARIABLE = VALUE ; exec COMMAND ) . The environment variable DISPLAY tells GUI programs how to communicate with the GUI. A Unix system can run multiple X servers , i.e. multiple displays. These displays can be physical displays (one or more monitors), or remote displays (forwarded over the network, e.g. over SSH), or virtual displays such as Xvfb , etc. The basic syntax to specify displays is HOST : NUMBER ; if you omit the HOST part, the display is a local one. Displays are numbered from 0, so :0 is the first local display that was started. On typical setups, this is what is displayed on the computer's monitor(s). Like all environment variables, DISPLAY is inherited from parent process to child process. For example, when you log into a GUI session, the login manager or session starter sets DISPLAY appropriately, and the variable is inherited by all the programs in the session. When you open an SSH connection with X forwarding, SSH sets the DISPLAY environment variable to the forwarded connection, so that the programs that you run on the remote machine are displayed on the local machine. If there is no forwarded X connection (either because SSH is configured not to do it, or because there is no local X server), SSH doesn't set DISPLAY . Setting DISPLAY explicitly causes the program to be displayed in a place where it normally wouldn't be. For example, running DISPLAY=:0 gnome-panel in an SSH connection starts a Gnome panel on the remote machine's local display (assuming that there is one and that the user is authorized to access it). Explicitly setting DISPLAY=:0 is usually a way to access a machine's local display from outside the local session, such as over remote access or from a cron job.
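As a rough illustration of the difference, here is a sketch of what this looks like from inside an SSH session with X forwarding. The exact DISPLAY value will differ per system, xclock is only a stand-in for any GUI program, and the second command works only if display :0 exists and you are authorized to use it.
$ echo "$DISPLAY"
localhost:10.0
# run a program on the machine's own first display instead of the forwarded one
$ DISPLAY=:0 xclock &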
{ "source": [ "https://unix.stackexchange.com/questions/193827", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/674/" ] }
194,050
I have a text file with following data and each row ends with |END| . T|somthing|something|END|T|something2|something2|END| I am tryig to replace |END| with \n new line with sed. sed 's/\|END\|/\n/g' test.txt But it's producing wrong output like below: T | s o m e ... But what I want is this: T|somthing|something T|something2|something2 I also tried with tr . It didn't work either.
Use this: sed 's/|END|/\n/g' test.txt What you attempted doesn't work because sed uses basic regular expressions , and your sed implementation has a \| operator meaning “or” (a common extension to BRE), so what you wrote replaces (empty string or END or empty string) by a newline.
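A quick way to check this without touching test.txt is to feed the sample line in on a pipe. Note that \n in the replacement is interpreted by GNU sed; some other sed implementations may need a literal newline there instead.
$ printf 'T|somthing|something|END|T|something2|something2|END|\n' | sed 's/|END|/\n/g'
T|somthing|something
T|something2|something2

The trailing blank line appears because the input itself ends with |END| followed by a newline.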
{ "source": [ "https://unix.stackexchange.com/questions/194050", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/72369/" ] }
194,088
My question originates from my problem in getting ffmpeg started. I have installed ffmpeg and it is displayed as installed: whereis ffmpeg ffmpeg: /usr/bin/ffmpeg /usr/bin/X11/ffmpeg /usr/share/ffmpeg /usr/share/man/man1/ffmpeg.1.gz Later, I figured out, that some programs depend on libraries that do not come with the installation itself, so I checked with ldd command what is missing: # ldd /usr/bin/ffmpeg linux-vdso.so.1 => (0x00007fff71fe9000) libavfilter.so.0 => not found libpostproc.so.51 => not found libswscale.so.0 => not found libavdevice.so.52 => not found libavformat.so.52 => not found libavcodec.so.52 => not found libavutil.so.49 => not found libm.so.6 => /lib/x86_64-linux-gnu/libm.so.6 (0x00007f5f20bdf000) libpthread.so.0 => /lib/x86_64-linux-gnu/libpthread.so.0 (0x00007f5f209c0000) libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007f5f205fb000) /lib64/ld-linux-x86-64.so.2 (0x00007f5f20f09000) As it turns out my ffmpeg is cut off from 7 libraries too work. I first thought that each of those libraries have to be installed, but than I figured out, that some or all might be installed, but their location unknown to ffmpeg. I read that /etc/ld.so.conf and /etc/ld.so.cache contain the paths to the libraries, but I was confused, because, there was only one line in /etc/ld.so.conf cat /etc/ld.so.conf include /etc/ld.so.conf.d/*.conf but a very long /etc/ld.so.cache . I am now at a point where I feel lost how to investigate further, It might be a helpful next step to figure out, how I can determine if a given library is indeed installed even if its location unknown to ffmpeg. ---------Output---of----apt-cache-policy-----request--------- apt-cache policy Package files: 100 /var/lib/dpkg/status release a=now 500 http://archive.canonical.com/ubuntu/ trusty/partner Translation-en 500 http://archive.canonical.com/ubuntu/ trusty/partner i386 Packages release v=14.04,o=Canonical,a=trusty,n=trusty,l=Partner archive,c=partner origin archive.canonical.com 500 http://archive.canonical.com/ubuntu/ trusty/partner amd64 Packages release v=14.04,o=Canonical,a=trusty,n=trusty,l=Partner archive,c=partner origin archive.canonical.com 500 http://security.ubuntu.com/ubuntu/ trusty-security/universe Translation-en 500 http://security.ubuntu.com/ubuntu/ trusty-security/restricted Translation-en 500 http://security.ubuntu.com/ubuntu/ trusty-security/multiverse Translation-en 500 http://security.ubuntu.com/ubuntu/ trusty-security/main Translation-en 500 http://security.ubuntu.com/ubuntu/ trusty-security/multiverse i386 Packages release v=14.04,o=Ubuntu,a=trusty-security,n=trusty,l=Ubuntu,c=multiverse origin security.ubuntu.com 500 http://security.ubuntu.com/ubuntu/ trusty-security/universe i386 Packages release v=14.04,o=Ubuntu,a=trusty-security,n=trusty,l=Ubuntu,c=universe origin security.ubuntu.com 500 http://security.ubuntu.com/ubuntu/ trusty-security/restricted i386 Packages release v=14.04,o=Ubuntu,a=trusty-security,n=trusty,l=Ubuntu,c=restricted origin security.ubuntu.com 500 http://security.ubuntu.com/ubuntu/ trusty-security/main i386 Packages release v=14.04,o=Ubuntu,a=trusty-security,n=trusty,l=Ubuntu,c=main origin security.ubuntu.com 500 http://security.ubuntu.com/ubuntu/ trusty-security/multiverse amd64 Packages release v=14.04,o=Ubuntu,a=trusty-security,n=trusty,l=Ubuntu,c=multiverse origin security.ubuntu.com 500 http://security.ubuntu.com/ubuntu/ trusty-security/universe amd64 Packages release v=14.04,o=Ubuntu,a=trusty-security,n=trusty,l=Ubuntu,c=universe origin security.ubuntu.com 500 
http://security.ubuntu.com/ubuntu/ trusty-security/restricted amd64 Packages release v=14.04,o=Ubuntu,a=trusty-security,n=trusty,l=Ubuntu,c=restricted origin security.ubuntu.com 500 http://security.ubuntu.com/ubuntu/ trusty-security/main amd64 Packages release v=14.04,o=Ubuntu,a=trusty-security,n=trusty,l=Ubuntu,c=main origin security.ubuntu.com 500 http://archive.ubuntu.com/ubuntu/ trusty-updates/universe Translation-en 500 http://archive.ubuntu.com/ubuntu/ trusty-updates/restricted Translation-en 500 http://archive.ubuntu.com/ubuntu/ trusty-updates/multiverse Translation-en 500 http://archive.ubuntu.com/ubuntu/ trusty-updates/main Translation-en 500 http://archive.ubuntu.com/ubuntu/ trusty-updates/multiverse i386 Packages release v=14.04,o=Ubuntu,a=trusty-updates,n=trusty,l=Ubuntu,c=multiverse origin archive.ubuntu.com 500 http://archive.ubuntu.com/ubuntu/ trusty-updates/universe i386 Packages release v=14.04,o=Ubuntu,a=trusty-updates,n=trusty,l=Ubuntu,c=universe origin archive.ubuntu.com 500 http://archive.ubuntu.com/ubuntu/ trusty-updates/restricted i386 Packages release v=14.04,o=Ubuntu,a=trusty-updates,n=trusty,l=Ubuntu,c=restricted origin archive.ubuntu.com 500 http://archive.ubuntu.com/ubuntu/ trusty-updates/main i386 Packages release v=14.04,o=Ubuntu,a=trusty-updates,n=trusty,l=Ubuntu,c=main origin archive.ubuntu.com 500 http://archive.ubuntu.com/ubuntu/ trusty-updates/multiverse amd64 Packages release v=14.04,o=Ubuntu,a=trusty-updates,n=trusty,l=Ubuntu,c=multiverse origin archive.ubuntu.com 500 http://archive.ubuntu.com/ubuntu/ trusty-updates/universe amd64 Packages release v=14.04,o=Ubuntu,a=trusty-updates,n=trusty,l=Ubuntu,c=universe origin archive.ubuntu.com 500 http://archive.ubuntu.com/ubuntu/ trusty-updates/restricted amd64 Packages release v=14.04,o=Ubuntu,a=trusty-updates,n=trusty,l=Ubuntu,c=restricted origin archive.ubuntu.com 500 http://archive.ubuntu.com/ubuntu/ trusty-updates/main amd64 Packages release v=14.04,o=Ubuntu,a=trusty-updates,n=trusty,l=Ubuntu,c=main origin archive.ubuntu.com 500 http://archive.ubuntu.com/ubuntu/ trusty/universe Translation-en 500 http://archive.ubuntu.com/ubuntu/ trusty/restricted Translation-en 500 http://archive.ubuntu.com/ubuntu/ trusty/multiverse Translation-en 500 http://archive.ubuntu.com/ubuntu/ trusty/main Translation-en 500 http://archive.ubuntu.com/ubuntu/ trusty/multiverse i386 Packages release v=14.04,o=Ubuntu,a=trusty,n=trusty,l=Ubuntu,c=multiverse origin archive.ubuntu.com 500 http://archive.ubuntu.com/ubuntu/ trusty/universe i386 Packages release v=14.04,o=Ubuntu,a=trusty,n=trusty,l=Ubuntu,c=universe origin archive.ubuntu.com 500 http://archive.ubuntu.com/ubuntu/ trusty/restricted i386 Packages release v=14.04,o=Ubuntu,a=trusty,n=trusty,l=Ubuntu,c=restricted origin archive.ubuntu.com 500 http://archive.ubuntu.com/ubuntu/ trusty/main i386 Packages release v=14.04,o=Ubuntu,a=trusty,n=trusty,l=Ubuntu,c=main origin archive.ubuntu.com 500 http://archive.ubuntu.com/ubuntu/ trusty/multiverse amd64 Packages release v=14.04,o=Ubuntu,a=trusty,n=trusty,l=Ubuntu,c=multiverse origin archive.ubuntu.com 500 http://archive.ubuntu.com/ubuntu/ trusty/universe amd64 Packages release v=14.04,o=Ubuntu,a=trusty,n=trusty,l=Ubuntu,c=universe origin archive.ubuntu.com 500 http://archive.ubuntu.com/ubuntu/ trusty/restricted amd64 Packages release v=14.04,o=Ubuntu,a=trusty,n=trusty,l=Ubuntu,c=restricted origin archive.ubuntu.com 500 http://archive.ubuntu.com/ubuntu/ trusty/main amd64 Packages release v=14.04,o=Ubuntu,a=trusty,n=trusty,l=Ubuntu,c=main 
origin archive.ubuntu.com 700 http://extra.linuxmint.com/ rebecca/main i386 Packages release v=17.1,o=linuxmint,a=rebecca,n=rebecca,l=linuxmint,c=main origin extra.linuxmint.com 700 http://extra.linuxmint.com/ rebecca/main amd64 Packages release v=17.1,o=linuxmint,a=rebecca,n=rebecca,l=linuxmint,c=main origin extra.linuxmint.com 700 http://packages.linuxmint.com/ rebecca/import i386 Packages release v=17.1,o=linuxmint,a=rebecca,n=rebecca,l=linuxmint,c=import origin packages.linuxmint.com 700 http://packages.linuxmint.com/ rebecca/upstream i386 Packages release v=17.1,o=linuxmint,a=rebecca,n=rebecca,l=linuxmint,c=upstream origin packages.linuxmint.com 700 http://packages.linuxmint.com/ rebecca/main i386 Packages release v=17.1,o=linuxmint,a=rebecca,n=rebecca,l=linuxmint,c=main origin packages.linuxmint.com 700 http://packages.linuxmint.com/ rebecca/import amd64 Packages release v=17.1,o=linuxmint,a=rebecca,n=rebecca,l=linuxmint,c=import origin packages.linuxmint.com 700 http://packages.linuxmint.com/ rebecca/upstream amd64 Packages release v=17.1,o=linuxmint,a=rebecca,n=rebecca,l=linuxmint,c=upstream origin packages.linuxmint.com 700 http://packages.linuxmint.com/ rebecca/main amd64 Packages release v=17.1,o=linuxmint,a=rebecca,n=rebecca,l=linuxmint,c=main origin packages.linuxmint.com Pinned packages:
You can use: ldconfig -p | grep libavfilter If there is no output, the library is not installed. I am not sure if this is 100% reliable. At least, the man page of ldconfig says for option -p: Print the lists of directories and candidate libraries stored in the current cache.
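To check all of the libraries that ldd flagged in one go, a small loop like this should work; the names below are copied from the ldd output in the question, and you can add the remaining ones as needed.
for lib in libavfilter.so.0 libpostproc.so.51 libswscale.so.0 libavdevice.so.52 \
           libavformat.so.52 libavcodec.so.52 libavutil.so.49; do
    if ldconfig -p | grep -qF "$lib"; then
        echo "$lib: present in the linker cache"
    else
        echo "$lib: NOT in the linker cache"
    fi
done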
{ "source": [ "https://unix.stackexchange.com/questions/194088", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/102788/" ] }
194,157
I'm using bash on Linux. I am getting a success code from the following if statement, but shouldn't this return a fail code? if [[ ■ = [⅕⅖⅗] ]] ; then echo yes ; fi The square does NOT equal any of the characters, so I don't see why I get a success code. It is important for me to keep the double brackets in my case. Is there any other way to do a range in this scenario, or are there any other suggestions?
That's a consequence of those characters having the same sorting order. You'll also notice that sort -u << EOF ■ ⅕ ⅖ ⅗ EOF returns only one line. Or that: expr ■ = ⅕ returns true (as required by POSIX). Most locales shipped with GNU systems have a number of characters (and even sequences of characters (collating sequences)) that have the same sorting order. In the case of those ■⅕⅖⅗ ones, it's because the order is not defined, and those characters whose order is not defined end up having the same sorting order in GNU systems. There are characters that are explicitly defined as having the same sorting order like Ș and Ş (though there's no apparent (to me anyway) real logic or consistency on how it is done). That is the source of quite surprising and bogus behaviours. I have raised the issue very recently on the Austin group (the body behind POSIX and the Single UNIX Specification) mailing list and the discussion is still ongoing as of 2015-04-03. In this case, whether [y] should match x where x and y sort the same is unclear to me, but since a bracket expression is meant to match a collating element, that suggests that the bash behaviour is expected. In any case, I suppose [⅕-⅕] or at least [⅕-⅖] should match ■ . You'll notice that different tools behave differently. ksh93 behaves like bash , GNU grep or sed don't. Some other shells have different behaviours some like yash even more buggy. To have a consistent behaviour, you need a locale where all characters sort differently. The C locale is the typical one. However the character set in the C locale on most systems is ASCII. On GNU systems, you generally have access to a C.UTF-8 locale that can be used instead to work on UTF-8 character. So: (export LC_ALL=C.UTF-8; [[ ■ = [⅕⅖⅗] ]]) or the standard equivalent: (export LC_ALL=C.UTF-8 case ■ in ([⅕⅖⅗]) true;; (*) false; esac) should return false. Another alternative would be to set only LC_COLLATE to C which would work on GNU systems, but not necessarily on others where it could fail to specify the sorting order of multi-byte character. One lesson of that is that equality is not as clear a notion as one would expect when it comes to comparing strings. Equality might mean, from strictest to least strict. Same number of bytes and all byte constituents have the same value. Same number of characters and all characters are the same (for instance, refer to the same codepoint in the current charset). The two strings have the same sorting order as per the locale's collation algorithm (that is, neither a < b nor b > a is true). Now, for 2 or 3, that assumes both strings contain valid characters. In UTF-8 and some other encodings, some sequence of bytes don't form valid characters. 1 and 2 are not necessarily equivalent because of that, or because some characters may have more than one possible encoding. That's typically the case of stateful encodings like ISO-2022-JP where A can be expressed as 41 or 1b 28 42 41 ( 1b 28 42 being the sequence to switch to ASCII and you can insert as many of those as you want, that won't make a difference), though I wouldn't expect those types of encoding still being in use, and GNU tools at least generally don't work properly with them. Also beware that most non-GNU utilities can't deal with the 0 byte value (the NUL character in ASCII). Which of those definitions is used depends on the utility and utility implementation or version. POSIX is not 100% clear on that. In the C locale, all 3 are equivalent. Outside of that YMMV.
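A quick way to see the locale dependence for yourself, assuming a GNU system whose default locale exhibits the surprising match from the question and where a C.UTF-8 locale is available (locale names may differ on your system):
$ [[ ■ = [⅕⅖⅗] ]] && echo match || echo no match
match
$ (export LC_ALL=C.UTF-8; [[ ■ = [⅕⅖⅗] ]] && echo match || echo no match)
no match
$ (export LC_COLLATE=C; [[ ■ = [⅕⅖⅗] ]] && echo match || echo no match)
no match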
{ "source": [ "https://unix.stackexchange.com/questions/194157", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/109055/" ] }
194,357
While solving some CTF challenges online, I came across a situation where I needed to bruteforce a server. This is the code I wrote: #!/bin/bash for i in {0..9}{0..9}{0..9}{0..9} do echo "Now trying code.." echo $i echo "a fixed string" $i | nc localhost *port here* >> /tmp/me/dump.txt done This was incredibly, painfully slow . I needed to try combinations from 1000 to 9999 and this took around 5 seconds for every 10 tries. Then, following some advice, I put a '&' at the end of this line: echo "a fixed string" $i | nc localhost *port here* >> /tmp/me/dump.txt & And it tried 100s of combinations within seconds. I was very surprised. Could someone explain the logic to me? What did the '&' do?
Adding & spawns a background process. If you write a; b , it will run command a , wait for it to finish, then run command b , in sequence. If you write a & b , it will spawn a as a background process. It will not wait for it to finish, and it will start running b immediately. It will run both at once. You can see what it does by experimenting in the shell. If you have X installed, xterm is a good way to see what happens: typing $ xterm will cause another terminal window to open, and the first one will wait until you close it. Only when you close it will you get your shell back. If you type $ xterm & then it will run it in the background, and you will get your shell back immediately, while the xterm window will also remain open. So if you write echo "a fixed string" $i | nc localhost *port here* >> /tmp/me/dump.txt it makes the connection, sends the string, stores what comes out in the file, and only then moves on to the next one. Adding the & makes it not wait. It will end up running all ten thousand of them more or less simultaneously. Your script seems to "end" more quickly, because it probably did not actually finish in that time. It just made ten thousand background jobs, and then ended the foreground one. This also means that, in your case, it will try to open ten thousand connections more or less at once. Depending on what the other end can handle, some of them might well fail. Not only that, but there is no guarantee that they will run in order; in fact, they almost certainly won't, so what will actually end up in /tmp/me/dump.txt is anyone's guess. Did you check if the output was correct?
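If you want the speed of & without launching all ten thousand jobs at once, one rough way to throttle it is to wait after every batch. This is only a sketch: the port number is a placeholder, the batch size of 50 is arbitrary, and concurrent appends to the same file can still interleave.
#!/bin/bash
port=1234        # placeholder, as in the original script
n=0
for i in {0..9}{0..9}{0..9}{0..9}
do
    echo "a fixed string" "$i" | nc localhost "$port" >> /tmp/me/dump.txt &
    n=$((n + 1))
    if (( n % 50 == 0 )); then
        wait     # let the current batch of 50 finish before starting more
    fi
done
wait             # wait for the final partial batch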
{ "source": [ "https://unix.stackexchange.com/questions/194357", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/79027/" ] }
194,365
Last night I SSH'ed to different systems.... one system/SSH reported that the " authenticity of HOSTNAME couldn't be established....... " and it asked if I wanted to continue or something. I didn't, and found this peculiar, so I tried to SSH to the system from one of the systems I already had SSH access/open, which didn't report that message (which means no change to the system since last login). Then I looked at my ~/.ssh/known_hosts and the system was in there, so it should know the host I was connecting from. Then I tried again, using the up/down arrows to browse bash history so I didn't make any mistakes in the commands, and I didn't... And this time it worked without any notice about failed authenticity and asked for the password as usual. Should I be worried? Was this, as Debian says, "someone doing something nasty"? The point is... why the message, then not the message (without me doing or changing anything)..... weird.
{ "source": [ "https://unix.stackexchange.com/questions/194365", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/36440/" ] }
194,406
I am starting to learn some Regex, therefore I use this command repeatedly: grep pattern /usr/share/dict/american-english Only the part with pattern changes, so I have to write the long expression " /usr/share/dict/american-english " again and again. Someone made the remark that it is possible to expand an argument of a command from the command history by typing cryptic character combinations instead of the full expression. Could you tell me those cryptic character combinations ?
You can use <M-.> (or <Esc>. if your Meta key is being used for something else), that is, Meta-dot (or <esc> dot), where Meta is usually the Alt key, to recall the last argument of the previous command. So, first you would type $ grep foo /usr/share/dict/american-english And then if you wanted to grep for something else, you would type $ grep bar After typing a space and then Esc . (that is, first pressing the escape key, and then the period key): $ grep bar /usr/share/dict/american-english You can also use either of the following: $ grep bar !:2 $ grep bar !$ Where !:2 and !$ mean "second argument" and "last argument" respectively.
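A related bash trick that may be worth knowing: the special parameter $_ also expands to the last argument of the previous command, so it can be used in place of the history designators shown above.
$ grep foo /usr/share/dict/american-english
$ grep bar "$_"     # $_ holds the last argument of the previous command
$ grep baz !$       # history expansion form of the same idea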
{ "source": [ "https://unix.stackexchange.com/questions/194406", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/102788/" ] }
194,691
Right now, any time I use vagrant, it tries to use libvirt as the provider. I want to use VirtualBox by default. vagrant-libvirt is not installed. It's bothersome because some commands don't work, like vagrant status : [florian@localhost local]$ vagrant status The provider 'libvirt' could not be found, but was requested to back the machine 'foobar'. Please use a provider that exists. [florian@localhost local]$ vagrant status --provider=virtualbox An invalid option was specified. The help for this command is available below. Usage: vagrant status [name] -h, --help Print this help
According to vagrant's documentation , the default provider should be virtualbox , and the VAGRANT_DEFAULT_PROVIDER variable lets you override it. However, VAGRANT_DEFAULT_PROVIDER is empty, so it should be virtualbox , right? Well, if I set the variable to virtualbox , it works again. So I guess Fedora sets the default somewhere else. Solution: $ echo "export VAGRANT_DEFAULT_PROVIDER=virtualbox" >> ~/.bashrc $ source ~/.bashrc
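If you'd rather not edit ~/.bashrc, two alternatives should behave the same way; the second relies on vagrant up accepting a --provider flag, which vagrant status does not, as the error above shows.
# one-off override for a single invocation
$ VAGRANT_DEFAULT_PROVIDER=virtualbox vagrant status
# or name the provider explicitly when bringing the machine up
$ vagrant up --provider=virtualbox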
{ "source": [ "https://unix.stackexchange.com/questions/194691", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/26143/" ] }
194,700
patterns.txt: "BananaOpinion" "ExitWarning" "SomeMessage" "Help" "Introduction" "MessageToUser" Strings.xml <string name="Introduction">One day there was an apple that went to the market.</string> <string name="BananaOpinion">Bananas are great!</string> <string name="MessageToUser">We would like to give you apples, bananas and tomatoes.</string> Expected output: "ExitWarning" "SomeMessage" "Help" How do I print the terms in patterns.txt that are not found in Strings.xml ? I can print the matched/unmatched lines in Strings.xml , but how do I print the unmatched patterns ? I'm using ggrep (GNU grep) version 2.21, but am open to other tools. Apologies if this is a duplicate of another question that I couldn't find.
You could use grep -o to print only the matching part and use the result as patterns for a second grep -v on the original patterns.txt file: grep -oFf patterns.txt Strings.xml | grep -vFf - patterns.txt Though in this particular case you could also use join + sort : join -t\" -v1 -j2 -o 1.1 1.2 1.3 <(sort -t\" -k2 patterns.txt) <(sort -t\" -k2 strings.xml)
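Running the first variant against the sample files in the question should give something like this as a sanity check:
$ grep -oFf patterns.txt Strings.xml
"Introduction"
"BananaOpinion"
"MessageToUser"
$ grep -oFf patterns.txt Strings.xml | grep -vFf - patterns.txt
"ExitWarning"
"SomeMessage"
"Help"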
{ "source": [ "https://unix.stackexchange.com/questions/194700", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/109407/" ] }
194,780
[root@server]# awk '!seen[$0]++' out.txt > cleaned awk: (FILENAME=out.txt FNR=8547098) fatal error: internal error Aborted [root@server]# The "server" has: 8 GByte RAM + 16 GByte SWAP, x>300 GByte free space, amd64, desktop CPU. Scientific Linux 6.6. Nothing else runs on it to make LOAD. Awk aborts after a few seconds.. out.txt is ~1.6 GByte. GNU Awk 3.1.7. Question : How can I remove the duplicate lines while keeping the order of the lines? Case is important too, ex: "A" and "a" are two different lines and both have to be kept. But "a" and "a" is a duplicate; only the first one is needed. The answer could be in anything.. if awk is not good for this.. then perl/sed.. what could the problem be? [root@server]# ulimit -a core file size (blocks, -c) 0 data seg size (kbytes, -d) unlimited scheduling priority (-e) 0 file size (blocks, -f) unlimited pending signals (-i) 61945 max locked memory (kbytes, -l) 99999999 max memory size (kbytes, -m) unlimited open files (-n) 999999 pipe size (512 bytes, -p) 8 POSIX message queues (bytes, -q) 819200 real-time priority (-r) 0 stack size (kbytes, -s) 99999999 cpu time (seconds, -t) unlimited max user processes (-u) 61945 virtual memory (kbytes, -v) unlimited file locks (-x) unlimited [root@server]# Update: I tried this on a RHEL machine, it doesn't abort, but I didn't have time to wait for it to finish.. why does SL Linux differ from RHEL? Update: I'm trying on an Ubuntu 14 virtual guest.. so far it works! It's not an ulimit problem: mawk 1.3.3 root@asdf-VirtualBox:~# ulimit -a core file size (blocks, -c) 0 data seg size (kbytes, -d) unlimited scheduling priority (-e) 0 file size (blocks, -f) unlimited pending signals (-i) 51331 max locked memory (kbytes, -l) 64 max memory size (kbytes, -m) unlimited open files (-n) 1024 pipe size (512 bytes, -p) 8 POSIX message queues (bytes, -q) 819200 real-time priority (-r) 0 stack size (kbytes, -s) 8192 cpu time (seconds, -t) unlimited max user processes (-u) 51331 virtual memory (kbytes, -v) unlimited file locks (-x) unlimited root@asdf-VirtualBox:~#
I doubt it will make a difference but, just in case, here's how to do the same thing in Perl: perl -ne 'print if ++$k{$_}==1' out.txt If the problem is keeping the unique lines in memory, that will have the same issue as the awk you tried. So, another approach could be: cat -n out.txt | sort -k2 -k1n | uniq -f1 | sort -nk1,1 | cut -f2- How it works: On a GNU system, cat -n will prepend the line number to each line following some amount of spaces and followed by a <tab> character. cat pipes this input representation to sort . sort 's -k2 option instructs it only to consider the characters from the second field until the end of the line when sorting, and sort splits fields by default on white-space (or cat 's inserted spaces and <tab> ) . When followed by -k1n , sort considers the 2nd field first, and then secondly—in the case of identical -k2 fields—it considers the 1st field but as sorted numerically. So repeated lines will be sorted together but in the order they appeared. The results are piped to uniq —which is told to ignore the first field ( -f1 - and also as separated by whitespace) —and which results in a list of unique lines in the original file and is piped back to sort . This time sort sorts on the first field ( cat 's inserted line number) numerically, getting the sort order back to what it was in the original file and pipes these results to cut . Lastly, cut removes the line numbers that were inserted by cat . This is effected by cut printing only from the 2nd field through the end of the line (and cut 's default delimiter is a <tab> character) . To illustrate: $ cat file bb aa bb dd cc dd aa bb cc $ cat -n file | sort -k2 | uniq -f1 | sort -k1 | cut -f2- bb aa dd cc
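Either approach is easy to sanity-check on a small sample before pointing it at the 1.6 GByte file. For example, the perl one-liner run on the data from the illustration above:
$ printf 'bb\naa\nbb\ndd\ncc\ndd\naa\nbb\ncc\n' | perl -ne 'print if ++$k{$_}==1'
bb
aa
dd
cc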
{ "source": [ "https://unix.stackexchange.com/questions/194780", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/81044/" ] }
194,863
I have found the command to delete files older than 5 days in a folder find /path/to/files* -mtime +5 -exec rm {} \; But how do I also do this for subdirectories in that folder?
Be careful with special file names (spaces, quotes) when piping to rm. There is a safe alternative - the -delete option: find /path/to/directory/ -mindepth 1 -mtime +5 -delete That's it, no separate rm call and you don't need to worry about file names. Replace -delete with -depth -print to test this command before you run it ( -delete implies -depth ). Explanation: -mindepth 1 : without this, . (the directory itself) might also match and therefore get deleted. -mtime +5 : process files whose data was last modified 5*24 hours ago.
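If you only want to remove files and keep the (possibly now-empty) subdirectories themselves, add -type f ; and it is worth previewing the matches before deleting anything:
# dry run: list what would be deleted
find /path/to/files -mindepth 1 -type f -mtime +5 -print
# then delete for real
find /path/to/files -mindepth 1 -type f -mtime +5 -delete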
{ "source": [ "https://unix.stackexchange.com/questions/194863", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/102085/" ] }