Columns: source_id (int64, 1 to 74.7M), question (string, 0 to 40.2k chars), response (string, 0 to 111k chars), metadata (dict)
215,604
xxxxxx15 |xxxxxx02|RM99999 |xxxxx |Ankur |xxxxx |xxxxxxxx|M|xxxxxxxx| | | |xxxxxxx|xxx|xxxxxxxx| |10 |New York| 23.00|F|P| | |NA

I want to replace 10 with 65. The closest I got is

sed -i '/^.\{20\}RM99999/ s/^\(?:[^|]*\|\)\{16\}\([^|]*\)/\165/' test.txt

but it replaces the first character with 65. (RM99999 can occur in more locations, but I need to change only the line which has RM99999 at the 20th character.)
I may have misunderstood, but you can recursively use chmod and chown, e.g.

chown -R username:username /path/directory

To recursively apply permission 700 you can use:

chmod -R 700 /path/directory

(Note the capital -R; lowercase -r is not the recursive flag.) Of course the above is for Linux, so I'm not sure if Mac OS X is the same.

EDIT: Sorry, I forgot to mention you need to be root to chown something; I just assumed you knew this. My bad.
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/215604", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/123156/" ] }
215,606
I am trying to create an array based upon filenames, and get in trouble with whitespaces. This seems common. But, as far as I can see, the quotes are set correctly; I guess it must be the way the array is built.

to_dump="$(find . -maxdepth 1 -print0 )"
to_dump_array=($to_dump)
read -p " -> " final
case "$final" in
    a) for drop in "${to_dump_array[@]}" ; do
           cp "$drop" --recursive --force Destination_Folder && \
           echo "dropped \"$drop\" ;
       done ;;
    b) echo "Won't drop anything" ;;
esac

I guess there should be a nicer way to build an array from a find query. Also, where else am I wrong?
-print0 should not be used in a $(...) substitution, because strings in bash variables are NUL-terminated. I asked a question whose answer was similar to what this question requires: https://stackoverflow.com/a/30469553/1091693

Adapting that answer to your question:

to_dump=()
while IFS= read -r -d ''; do
    to_dump+=( "$REPLY" )
done < <(find . -maxdepth 1 -print0)

This creates an array called to_dump and uses the read command to read NUL-delimited elements from find. The reason < <(...) is being used here rather than a pipe is to avoid an implicit subshell, which would prevent the array from being modified.

It's worth noting that your original find command probably wants a -mindepth 1, or it will pick . (the current directory) and you'll end up doing a recursive copy on that.

I've noticed you use -maxdepth 1 as an argument to find, so perhaps this will be more useful:

shopt -s nullglob
to_dump=( * .[!.]* ..?* )

Avoiding find, this uses bash builtins only, doesn't fork, and is for the most part quite clean. The first line, shopt -s nullglob, is a bash(-only) command which turns on the nullglob option. This option is described in man 1 bash:

    If set, bash allows patterns which match no files (see Pathname Expansion above) to expand to a null string, rather than themselves.

In simpler terms, if you type * but it doesn't match files, it will remove the *. The default behaviour is to put the * in there anyway.

The second line adds 3 globs to the array:

* : all files not beginning with .
.[!.]* : all files beginning with one . and one non-. character. This is to avoid matching the . and .. directories.
..?* : all files beginning with .. and at least one more character. Added for the same reason as the previous glob, covering the cases it missed.

Bash expands the globs into the definition of the array, and it expands them correctly -- no splitting on whitespace or anything like that.

A caveat on the usage of nullglob: if you have nullglob turned on, curl google.com/search?q=test will result in curl complaining at you for not passing it arguments, and ls /var/fasdfasafs* will give you a listing of the current directory. This is one of the reasons it's not turned on by default.
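For reference, a minimal end-to-end sketch combining the read loop above with the copy logic from the question (Destination_Folder is the questioner's placeholder target, and -mindepth 1 is added per the note above):

    to_dump=()
    while IFS= read -r -d ''; do
        to_dump+=( "$REPLY" )
    done < <(find . -mindepth 1 -maxdepth 1 -print0)

    for drop in "${to_dump[@]}"; do
        cp --recursive --force "$drop" Destination_Folder && echo "dropped \"$drop\""
    done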
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/215606", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/36242/" ] }
215,623
I tried to reinstall my mysql server on ubuntu 14. After the reinstall all seems to run fine until the service mysql status/start/stop command. When the server is started the mysql status is stop/waiting, but it runs (I did add sudo update-rc.d mysql defaults ).

Output of ps -e | grep mysql :

 1897 ?        00:00:00 mysqld_safe
 2650 ?        00:00:05 mysqld

after service mysql start :

 1897 ?        00:00:00 mysqld_safe
 2650 ?        00:00:05 mysqld
14392 ?        00:00:00 mysqld

First question: How can I do a clean reinstall of my mysql server, or fix the 2 mysql instances?

Second question: Why are there mysqld_safe and mysqld processes; which should run and is needed?

The syslog of the start, as asked here:

Jul 14 09:01:07 myserv kernel: [30516.307340] type=1400 audit(1436857267.353:26): apparmor="STATUS" operation="profile_replace" profile="unconfined" name="/usr/sbin/mysqld" pid=12368 comm="apparmor_parser"
Jul 14 09:01:09 myserv /etc/mysql/debian-start[16837]: Upgrading MySQL tables if necessary.
Jul 14 09:01:09 myserv /etc/mysql/debian-start[16842]: /usr/bin/mysql_upgrade: the '--basedir' option is always ignored
Jul 14 09:01:09 myserv /etc/mysql/debian-start[16842]: Looking for 'mysql' as: /usr/bin/mysql
Jul 14 09:01:09 myserv /etc/mysql/debian-start[16842]: Looking for 'mysqlcheck' as: /usr/bin/mysqlcheck
Jul 14 09:01:09 myserv /etc/mysql/debian-start[16842]: This installation of MySQL is already upgraded to 5.5.43, use --force if you still need to run mysql_upgrade
Jul 14 09:01:09 myserv /etc/mysql/debian-start[16894]: Checking for insecure root accounts.
Jul 14 09:01:09 myserv /etc/mysql/debian-start[16920]: Triggering myisam-recover for all MyISAM tables
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/215623", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/123160/" ] }
215,659
I want to validate the below date format in a shell script: 2015-Jul-13

I'm using

date -d "2015-Jul-13" +"%Y-%b-%d"

but it's giving an error: date: invalid date '2015-Jul-13'
GNU date does not support YYYY-MMM-DD . However, it does understand DD-MMM-YYYY . So if you really have to handle dates of this format, you can do it with something like this, which simply swaps the arguments around to a format that date expects:

ymd='2015-Jul-13'
dmy=$(echo "$ymd" | awk -F- '{ OFS=FS; print $3,$2,$1 }')
if date --date "$dmy" >/dev/null 2>&1
then
    echo OK
fi

Here's an all-shell solution. Breaking it down: the IFS=- instruction tells the shell to split a forthcoming command line by hyphen - instead of whitespace. The set $ymd parses the $ymd variable as for a command line, but now splits by hyphen, assigning values to the parameters $1 , $2 et seq. The echo "$3-$2-$1" trivially outputs the three captured values in reversed order.

ymd='2015-Jul-13'
dmy=$(IFS=-; set $ymd; echo "$3-$2-$1")
if date --date "$dmy" >/dev/null 2>&1
then
    echo OK
fi
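Wrapped as a reusable helper, the second approach might look like this (a minimal sketch; the function name is made up for illustration, and GNU date is assumed as in the answer):

    valid_ymd() {
        # swap 2015-Jul-13 into 13-Jul-2015, then let date judge it
        dmy=$(IFS=-; set -- $1; echo "$3-$2-$1")
        date --date "$dmy" >/dev/null 2>&1
    }

    valid_ymd '2015-Jul-13' && echo OK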
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/215659", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/15340/" ] }
215,671
I want to list all the processes that are running, which I'm doing using either ps aux or ps auxf , but I also want to get the elapsed time for all of them. I've seen the command ps -o etime,cmd , which displays the elapsed time and the command, but it doesn't seem to list all of them. Can I combine the aux(f) and -o etime at all?
My apologies, I figured out that the columns are overwritten by the -o . Here is what you were looking for:

ps -e -o user,pid,%cpu,%mem,vsz,rss,tty,stat,start,time,command,etime,euid
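As a side note, if you only need the elapsed time of a single known process, ps can also be queried per PID (a sketch; 1234 is a placeholder PID, and -o etime= suppresses the header):

    ps -o etime= -p 1234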
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/215671", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/63703/" ] }
215,688
Is it possible to disable the behavior which selects a pane on the opposite side of a tmux window when there are no more panes left in the direction the select-pane command was triggered in? If not, is there a way to determine whether any other panes exist in a specific direction?

If a tmux window doesn't have a (v)split active window and a select-pane command is triggered, an error message is thrown; this is expected behavior.

Thank you for any response.
Add this to your ~/.tmux.conf :

set-option -g default-shell /bin/bash
unbind Up
unbind Down
unbind Right
unbind Left
bind Up run-shell "if [ $(tmux display-message -p '#{pane_at_top}') -ne 1 ]; then tmux select-pane -U; fi"
bind Down run-shell "if [ $(tmux display-message -p '#{pane_at_bottom}') -ne 1 ] ; then tmux select-pane -D; fi"
bind Right run-shell "if [ $(tmux display-message -p '#{pane_at_right}') -ne 1 ]; then tmux select-pane -R; fi"
bind Left run-shell "if [ $(tmux display-message -p '#{pane_at_left}') -ne 1 ]; then tmux select-pane -L; fi"

Basically, this should run with tmux versions 2.6+ (after which the pane_at_top, pane_at_bottom, pane_at_left and pane_at_right format variables were added). For tmux < v2.6, I'm not entirely sure how you could implement this.

Furthermore, if you want to launch a custom shell, do it through set-option -g default-command fish (or zsh or csh or whatever). As an alternative, if you want to use a non-bash shell as your tmux default shell, set it as such ( set-option -g default-shell ) and then you can code out the logic above in the shell script of your choice. However (as was in my case), using certain shells doesn't give you the convenience of one-liner if commands (or it might just be that I don't know enough about certain shells, or maybe multiple lines do work in run-shell).

Source: a GitHub issues thread that I started
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/215688", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/123196/" ] }
215,694
Host - Windows 7
Guest - CentOS

I am trying to install kernel-headers using yum, since during the installation of vmware-tools I get a message asking for the path to the kernel header files for 3.10.0-229.7.2.el7.x86_64 .

Running yum install kernel-headers returns Package kernel-headers-3.10.0-229.7.2.el7.x86_64 already installed and latest version . But the directory /usr/src/kernels is empty. Are the kernel headers installed somewhere else? Or should I be asking yum to install something else?

Path provided to vmware-tools for kernel headers:

Searching for a valid kernel header path...
The path "" is not a valid path to the 3.10.0-229.7.2.el7.x86_64 kernel headers.
Would you like to change it? [yes]

Providing the path /usr/include/linux gives the same response again, but with "" replaced with the path provided.
The correct package to install all of the required dependencies for building kernel modules is kernel-devel (see the CentOS documentation for more information). The headers are not installed in /usr/src/kernels ; rather, they're installed in a number of directories below /usr/include (the default location for C header files). You can list the contents of the kernel-headers package you installed using:

rpm -ql kernel-headers
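A hedged sketch of the resulting workflow (kernel-devel is the standard CentOS package name; the exact version depends on the running kernel):

    # install the build tree needed for compiling modules such as vmware-tools
    sudo yum install "kernel-devel-$(uname -r)"
    # the module build tree then appears here
    ls /usr/src/kernels/$(uname -r)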
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/215694", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/123030/" ] }
215,699
After installing FreeBSD we would like to use the following commands to install additional packages:

pkg
pkg install nano
pkg install xorg
pkg install mate-desktop mate
pkg install slim

The above commands require an Internet connection to install packages. But how can we install those packages without an Internet connection, using a CD/DVD/USB?
Based upon an open issue in FreeBSD 10.1-RELEASE Errata :

Create a /dist directory, then mount the DVD.

# mkdir -p /dist
# mount -t cd9660 /dev/cd0 /dist

Make sure REPOS_DIR is correctly pointing to your local repository. For sh(1):

# export REPOS_DIR=/dist/packages/repos

or, for csh(1):

# setenv REPOS_DIR /dist/packages/repos

Use pkg(7) to bootstrap pkg(8), then install packages.

# pkg bootstrap --yes
# pkg install xorg
[...]

Limitations of -dvd1.iso files: FreeBSD-13.1-RELEASE-amd64-dvd1.iso does not provide packages for mate , mate-desktop , nano , or slim .
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/215699", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/105902/" ] }
215,707
I often see a checksum given next to a file available for download. The purpose of this practice eludes me. It is obviously to detect corrupt files, but what could be the cause of this corruption, and is it at all likely? Surely the file will not be damaged by transmission errors, since those are detected by the network protocol, and surely any attacker who could alter the file for malicious purposes could likewise alter the given checksum. Are we checking for hard drive errors? Are those more likely to happen when writing than when reading? Am I missing something important?
"To detect corruption" is not entirely correct; "to ascertain the integrity of the software" would be a more correct description. Normally software is not distributed from a single server: the same software may be distributed from many servers. So when you download a particular piece of software, the server closest to your destination is chosen as the download source to increase the download speed. However, these 'non-official' (third-party) servers cannot always be trusted. They might include trojans/viruses/adware/backdoors in the program, which is not good.

So to ensure that the software downloaded is exactly the same as the 'official' software released by the concerned organisation, the checksum is used. The algorithms used for generating checksums are such that even a slight change in the program results in an entirely different checksum. Example taken from Practical Unix and Internet Security:

MD5(There is $1500 in the blue box.) = 05f8cfc03f4e58cbee731aa4a14b3f03
MD5(There is $1100 in the blue box.) = d6dee11aae89661a45eb9d21e30d34cb

The messages, which differ by only a single character (and, within that character, by only a single binary bit), have completely different message digests.

If the downloaded file has the same checksum as the checksum given on the 'official' website, then the software can be assumed to be unmodified.

Side note: in theory, two different files CAN have the same hash value. For the hash/checksum algorithm to be considered secure, it should be computationally very expensive to find another file which produces the same checksum.
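In practice the check is a couple of commands; a minimal sketch with GNU coreutils (filenames are placeholders, and sha256sum stands in for whatever digest the site publishes):

    # compute the digest and compare it by eye against the published value
    sha256sum downloaded.iso
    # or verify automatically against a published checksum file
    sha256sum -c downloaded.iso.sha256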
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/215707", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/32694/" ] }
215,726
So I've got a strange issue going on with SSH. I've set up my SSH server to allow passwordless logins via SSH keys. However, when I try to login, the first time I login after a long time (~day), it requires a password. If I immediately close the connection and try to ssh-connect again, it accepts the SSH key. Does anyone know how to get this so it always accepts the SSH key?

Here's my /etc/ssh/sshd_config :

# Package generated configuration file
# See the sshd_config(5) manpage for details

# What ports, IPs and protocols we listen for
Port 22
# Use these options to restrict which interfaces/protocols sshd will bind to
#ListenAddress ::
#ListenAddress 0.0.0.0
Protocol 2
# HostKeys for protocol version 2
HostKey /etc/ssh/ssh_host_rsa_key
HostKey /etc/ssh/ssh_host_dsa_key
HostKey /etc/ssh/ssh_host_ecdsa_key
HostKey /etc/ssh/ssh_host_ed25519_key
#Privilege Separation is turned on for security
UsePrivilegeSeparation yes

# Lifetime and size of ephemeral version 1 server key
KeyRegenerationInterval 3600
ServerKeyBits 1024

# Logging
SyslogFacility AUTH
LogLevel INFO

# Authentication:
LoginGraceTime 120
PermitRootLogin without-password
StrictModes yes

RSAAuthentication yes
PubkeyAuthentication yes
#AuthorizedKeysFile %h/.ssh/authorized_keys

# Don't read the user's ~/.rhosts and ~/.shosts files
IgnoreRhosts yes
# For this to work you will also need host keys in /etc/ssh_known_hosts
RhostsRSAAuthentication no
# similar for protocol version 2
HostbasedAuthentication no
# Uncomment if you don't trust ~/.ssh/known_hosts for RhostsRSAAuthentication
#IgnoreUserKnownHosts yes

# To enable empty passwords, change to yes (NOT RECOMMENDED)
PermitEmptyPasswords no

# Change to yes to enable challenge-response passwords (beware issues with
# some PAM modules and threads)
ChallengeResponseAuthentication no

# Change to no to disable tunnelled clear text passwords
#PasswordAuthentication yes

# Kerberos options
#KerberosAuthentication no
#KerberosGetAFSToken no
#KerberosOrLocalPasswd yes
#KerberosTicketCleanup yes

# GSSAPI options
#GSSAPIAuthentication no
#GSSAPICleanupCredentials yes

X11Forwarding yes
X11DisplayOffset 10
PrintMotd no
PrintLastLog yes
TCPKeepAlive yes
#UseLogin no

#MaxStartups 10:30:60
#Banner /etc/issue.net

# Allow client to pass locale environment variables
AcceptEnv LANG LC_*

Subsystem sftp /usr/lib/openssh/sftp-server

# Set this to 'yes' to enable PAM authentication, account processing,
# and session processing. If this is enabled, PAM authentication will
# be allowed through the ChallengeResponseAuthentication and
# PasswordAuthentication. Depending on your PAM configuration,
# PAM authentication via ChallengeResponseAuthentication may bypass
# the setting of "PermitRootLogin without-password".
# If you just want the PAM account and session checks to run without
# PAM authentication, then enable this but set PasswordAuthentication
# and ChallengeResponseAuthentication to 'no'.
UsePAM yes
This can happen on systems with auto-mounted home directories, as in a system with Active Directory or LDAP. Type mount after you login to see if your home directory was auto-mounted. Unfortunately, there isn't a way (that I know of) to fix this. Usually, an entire /home directory is mounted, so that all users' home directories are available during SSH authentication.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/215726", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/118264/" ] }
215,776
I'd like to install software from source (e.g., third-party GitHub repos) to my machine. Generally /usr/local/bin and /usr/local/src are for non-distribution-specific software, right?

Taking ownership of /usr/local seems risky: anything running with my privileges could make nefarious changes to executables in /usr/local/bin, or to sources in /usr/local/src. But the alternative, building and installing as root ( sudo ), doesn't make sense to me. GitHub warns against running git as root. Even if I copied the sources from a local repo elsewhere, I'd have to run make and make install as sudo , meaning the software I'm installing could hijack the rest of my machine.

I could just put everything in /home, but that seems like a cop-out; isn't this what /usr/local is for?
Don't take ownership of /usr/local . Use sudo to install software, but use your own account to build it.

git clone …   # or tar -x or …
cd …
./configure
make
sudo make install

Why not take ownership of /usr/local ? You nailed it: that would allow any program running on your account to write there. Against a malicious program, you've lost anyway; infecting a local account is the big step, and escalating to root isn't difficult (e.g. by piggybacking on the next time you run sudo ). But against a badly configured program, it's better not to have writable bits in the system-wide directories.

As for the choice between /usr/local and your home directory: your home directory is for things you only want for your account, /usr/local is for things that are installed system-wide.
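If you'd rather avoid sudo entirely for a personal build, most autotools projects also let you install under your home directory instead (a sketch; --prefix is a standard autoconf option, and ~/.local is just a common choice of location):

    ./configure --prefix="$HOME/.local"
    make
    make install   # no sudo needed: everything lands under ~/.local
    export PATH="$HOME/.local/bin:$PATH"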
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/215776", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/123247/" ] }
215,782
I'm trying to match the exit codes of a process that is documented to return hexadecimal exit codes (e.g. 0x00 for success, 0x40 - 0x4F on user error, 0x50 - 0x5F on internal error, etc.). I'd like to handle the exit code via a case statement, but the "obvious" solution doesn't match:

$ val=10
$ case $val in
> 0xA) echo match;;
> *) echo no match;;
> esac
no match

Is there a readable way to match hexadecimal values in a case statement?
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/215782", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/19157/" ] }
215,821
I have many files of different lengths but the same extension, and I have tried many commands to rename all of them at once. Is it possible to change only the last 10 characters of the base of all filenames? The 10 last characters are always the same. For example:

img(12345678).txt
test(12345678).txt
download(12345678).txt
upload(12345678).txt

I want to replace (12345678) with abcdefghij
There are two Linux commands called rename that are commonly available in distributions. I prefer the perl-based rename, as it's more powerful. You can check which one you have using rename --version .

If you have the perl-based rename:

$ rename --version
perl-rename 1.9
$ rename 's/\(12345678\)/abcdefghij/' *.txt

If you want to check it first with a dry run, use the -n flag.

If you have the other rename:

$ rename --version
rename from util-linux 2.26.2
$ rename '(12345678)' abcdefghij *.txt

To remove the last 10 characters before .txt generally, i.e. if the characters are not always the same, you can use this with the perl-based rename:

rename 's/.{10}\.txt\Z/abcdefghij.txt/' *.txt -n

For the other rename, I'm not sure if it's possible.
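If neither rename variant is available, plain bash parameter expansion does the same job (a sketch using the filenames from the question; the pattern is quoted so no glob options are needed):

    for f in *'(12345678).txt'; do
        mv -- "$f" "${f/(12345678)/abcdefghij}"
    done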
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/215821", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/123287/" ] }
215,848
I just switched over from Windows to Linux Mint. However, as I do web design, there are many fonts on my Windows hard disk that I would want to carry over to my Linux installation. It is possible to install fonts by double-clicking them from within a file manager and then pressing the 'install' button, but this only works for one font at a time. As I have about five hundred of them, I would like to install all of them at the same time.

What I've tried to do is copy over all fonts from the Windows Fonts folder (C:\Windows\Fonts) to /usr/share/fonts/opentype/windows_fonts and /usr/share/fonts/truetype/windows_fonts. However, none of the fonts show up correctly. Instead, programs that use those fonts render all glyphs as white boxes (i.e. 'unknown characters').

Is there another way to install them all at once (or to automate installing them)?
If you need to install a lot of fonts, then copy the files to ~/.fonts or /usr/share/fonts for system-wide installation and issue the command fc-cache -fv .
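Put together for the scenario in the question, that might look like this (a sketch; the mount point of the Windows partition is a placeholder):

    mkdir -p ~/.fonts
    cp /mnt/windows/Windows/Fonts/*.ttf ~/.fonts/
    fc-cache -fv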
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/215848", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/123305/" ] }
215,870
Using Linux Mint (Cinnamon desktop) I have not been able to connect to the eduroam wireless university network. The reason is that I can't find a proper way to enter the respective network settings, i.e. changing security settings as well as filling in my account name and password. This is what I get (apologies for the Dutch language): as you can see, some other networks (in this case: UU visitor) let me change settings if I want to.

Edit: it does not prompt me for the settings when I try to connect to the network.

How can I access the settings of any network which does not provide the settings icon? Or, if you happen to be an expert on eduroam, how can I access that network specifically?
I'm using Linux Mint (since 17.1) [1] and followed this guide, which has been working for me so far. (Please leave a comment if this is working for other LM releases as well, so that I can update this answer.)

What I did, in a nutshell:

Started the Network Connections [2] app.
Added a new Wi-Fi network connection.
Entered the appropriate eduroam credentials (the guide linked above shows the exact settings as screenshots).

I think that was it.

[1] As comments indicate/confirm, this should also work with the following OS versions: 17.2, 18.0, 18.1 and 18.3.
[2] Note: make sure you open "Network Connections" instead of "Network".
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/215870", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/123314/" ] }
215,931
I'm trying to use apropos to look for all man pages starting with system. I try:

apropos ^system
apropos "^system"

but these seem to return lines that don't start with system, but where system occurs somewhere in the line. Any ideas?

Edit: as per a comment below, the above actually works, but it matches against several components: the command name, the command description, and the command one-liner. So when I searched for system, I got a line like this:

tapset::task_time (3stap) - systemtap task_time tapset

which makes sense, because the description starts with system. One way to get really just the lines starting with "system" would be:

apropos "" | grep "^system"
Running apropos '^system' works for me, returning the list of man pages where either the page name itself starts with system or the one-line description starts with system. For example, the output on Debian (jessie) includes:

system-config-printer (1) - configure a CUPS server
sigset (3) - System V signal API

I know of no clean way to tell apropos to search only in page names or in the one-line description, but there's always grep :

apropos system | grep -- '^system'   # page names
apropos system | grep -- '- system'  # descriptions

Either of these can be encapsulated in a shell function such as this:

apro() { apropos "$1" | grep -- "^$1"; }
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/215931", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/77271/" ] }
215,934
I started working at my current position on November 17th, 2014. I would like to know how many days have passed up to now. Any ideas on how to use Linux to come up with a simple and nice solution?
echo $(( (`date +%s` - `date +%s -d '2014/11/17'`) / 86400 )) days ago
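A more readable equivalent of the same arithmetic, assuming GNU date (86400 is the number of seconds per day):

    start=$(date +%s -d '2014/11/17')   # start date as a Unix timestamp
    now=$(date +%s)                     # current time as a Unix timestamp
    echo "$(( (now - start) / 86400 )) days ago"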
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/215934", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/22046/" ] }
215,937
I have a list of words that should be presented in their order of appearance in the completion for a certain key, but bash seems to be internally sorting what is assigned to the COMPREPLY array. How do I avoid that? Example:

_comm() {
    _init_completion -s -n : || return
    case $prev in
        -a) COMPREPLY=(zxy abcdef tyuu fgsfds) ;;
    esac
}
complete -F _comm comm

If you run this code, bash will complete

$ comm -a

with abcdef fgsfds tyuu zxy , i.e. sorted alphabetically.
Since Bash 4.4 you can use the nosort option. In your example, change the last line to:

complete -o nosort -F _comm comm

and you should get completions without alphabetical sorting.

Important note: options (specified with -o ) must precede functions ( -F ). That's why coderofsalvation's code didn't work.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/215937", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/10075/" ] }
215,948
I have created an alias:

alias shh='sqlplus hfdora/hfdora@hfd1'

After creating this alias I was able to enter my database just by typing shh . But after closing my shell, I wasn't able to find the alias the next time. Even after typing just alias , shh was not showing in the list.

Is there any file to make an alias permanent so that it will not be erased?
For ksh :

printf "%s\n" "alias shh='sqlplus hfdora/hfdora@hfd1'" >> ~/.kshrc
source ~/.kshrc

For bash :

printf "%s\n" "alias shh='sqlplus hfdora/hfdora@hfd1'" >> ~/.bashrc
source ~/.bashrc

For zsh :

printf "%s\n" "alias shh='sqlplus hfdora/hfdora@hfd1'" >> ~/.zshrc
source ~/.zshrc

Use source for the instant effect.

And as @glennjackman said, a note to readers: ~/.kshrc is for ksh93 . For ksh88 , either put your aliases in ~/.profile , or use ~/.kshrc but add this to your ~/.profile :

export ENV=$HOME/.kshrc
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/215948", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/118311/" ] }
215,958
I want to delete the 5th word of each line in a file. The current content of the file:

File is not updated or and will be removed
System will shut down f within 10 seconds
Please save your work 55 or copy to other location
Kindly cooperate with us D

Expected output:

File is not updated and will be removed
System will shut down within 10 seconds
Please save your work or copy to other location
Kindly cooperate with us
How about cut :

$ cut -d' ' -f1-4,6- file.txt
File is not updated and will be removed
System will shut down within 10 seconds
Please save your work or copy to other location
Kindly cooperate with us

-d' ' sets the delimiter as space. -f1-4,6- selects the first to 4th fields (words), skips the 5th one, and then continues printing from the 6th to the rest.
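For comparison, a hedged awk equivalent (assumes GNU awk, where decrementing NF rewrites the record; note that awk collapses runs of whitespace, unlike cut):

    # shift fields 6..NF one position left, then drop the now-duplicate last field
    awk '{for (i = 5; i < NF; i++) $i = $(i + 1); NF--} 1' file.txt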
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/215958", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/118311/" ] }
215,986
Is there a way, in a single SSH command, to login via SSH to a remote server, passing through an intermediate server? In essence, I need to create a tunnel to my "bridge server" and, via the tunnel, login to the remote server. For example, I'm trying to compress the following into a single ssh command:

ssh -N -L 2222:remoteserver.com:22 [email protected]
ssh -p 2222 remote_userid@localhost

This currently works, but I would rather be able to squeeze everything into a single command, such that if I exit my ssh shell, my tunnel closes at the same time. I have tried the following in my config but to no avail:

Host axp
    User remote_userid
    HostName remoteserver.com
    IdentityFile ~/.ssh/id_rsa.eric
    ProxyCommand ssh -W %h:%p [email protected]

As per @jasonwryan's comments and the transparent-multihop link, I'm able to get the following command working:

ssh -A -t [email protected] ssh -A [email protected]

but now I would like to package that up neatly into my .ssh/config file, and I'm not quite sure what I need to use as my ProxyCommand. I've seen a couple of links online, as well as @boomshadow's answer, that require nc ; unfortunately the AIX server I'm using as my bridge machine does not have netcat installed on it.
The ProxyCommand is what you need. At my company, all the DevOps techs have to use a "jumpstation" in order to access the rest of the VPCs. The jumpstation is VPN access-controlled. We've got our SSH config set up to go through the jumpstation automatically.

Here is an edited version of my .ssh/config file:

Host *.internal.company.com
    User jacob
    IdentityFile ~/.ssh/id_rsa
    ProxyCommand ssh -q -A jacob@company-internal-jumphost nc -q0 %h %p

Every time I do an 'ssh' to a server on that 'internal' subdomain, it will automatically jump through the jumpstation first.

Edit: here is the entire section of the .ssh/config for the 'internal' VPC for us to log into it:

# Internal VPC
Host company-internal-jumphost
    Hostname 10.210.x.x    #(edited out IP for security)
    IdentityFile ~/.ssh/id_rsa
Host 10.210.*
    User ubuntu
    IdentityFile ~/.ssh/company-id_rsa
    ProxyCommand ssh -q -A jacob@company-internal-jumphost nc -q0 %h %p
Host *.internal.company.com
    User jacob
    IdentityFile ~/.ssh/id_rsa
    ProxyCommand ssh -q -A jacob@company-internal-jumphost nc -q0 %h %p
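Since the questioner's AIX bridge host lacks netcat: nc is not strictly required. ssh -W (as in the question's own config attempt) does the forwarding without any helper on the bridge, and OpenSSH 7.3+ offers ProxyJump, which wraps the same mechanism. A hedged sketch using the question's placeholder names:

    Host axp
        User remote_userid
        HostName remoteserver.com
        ProxyJump [email protected]
        # equivalent on older clients, also without nc on the bridge:
        #ProxyCommand ssh -W %h:%p [email protected]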
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/215986", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/28391/" ] }
215,989
Here's the output of a grep command I ran:

[user@localhost] : ~/Documents/challenge $ grep -i -e ".\{32\}" fileA fileB
fileA:W0mMhUcRRnG8dcghE4qvk3JA9lGt8nDl
fileB:observacion = new Observacion();
fileB:observacion.setCodigoOf(ordenBO.getCodigo());
fileB:observacion.setDetalle(of.getObservacion().getSolicitante());
fileB:observacion.setTipoObservacion(TipoObservacionOrdenFleteMaestro.SOLICITANTE);
fileB:observacion.setProceso(TipoProcesoObservacionMaestro.MODIFICACION);
fileB:observacion.setFecha(Utiles.getFechaSistema());
fileB:java.util.Date fechaHora = Calendar.getInstance().getTime();
fileB:observacion.setUsuarioCrecion(usuarioSesionado.getUsuario().getUsuario());
fileB:daoObservacion.agregaObservacion(observacion);

I'm looking for a 32-character-long string in two files: fileA and fileB. Importantly, fileA contains exactly 32 characters only, with no line breaks:

[user@localhost] : ~/Documents/challenge $ hexdump -C fileA
00000000  57 30 6d 4d 68 55 63 52 52 6e 47 38 64 63 67 68  |W0mMhUcRRnG8dcgh|
00000010  45 34 71 76 6b 33 4a 41 39 6c 47 74 38 6e 44 6c  |E4qvk3JA9lGt8nDl|
00000020

The problem with my grep command is that it returns any line that has more than 32 chars. How can I make it return only lines with exactly 32 chars? The issue for me is that I can't modify my regex to match on a line break, because there is no line break. My expected output would be simply:

fileA:W0mMhUcRRnG8dcghE4qvk3JA9lGt8nDl

(Note: this is for a challenge that I've already solved with my ugly solution, but in this scenario we can only use grep, and piping or redirecting output is not allowed.)
This works: ^ denotes start of line, plus the {32} you already had, then a $ for end of line.

$ cat fileA fileB
12345678901234567890123456789012
123456789012345678901234567890123
12345678901234567890123456789012
123456789012345678901234567890124
$ grep -E "^.{32}$" fileA fileB
fileA:12345678901234567890123456789012
fileB:12345678901234567890123456789012
$

And as pointed out by @steeldriver, POSIX grep includes -x, so the following approach also works:

grep -xE ".{32}" fileA fileB
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/215989", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/89986/" ] }
215,994
I have this kind of behavior and I don't know how to fix it, or even how to search for a fix, since I don't know what I would call it. Here's what's happening: the first time I boot, when I type the username and press Return, the password prompt appears about a half or a quarter second later. The problem is that after I type the username I usually press Return and immediately start typing the password; however, since the Password: prompt hasn't appeared yet, the tty starts printing the characters I type directly onto the screen.

For example, let's say my username and password are Username and Password respectively. If I were to login on tty1, it would look something like this (the "Pa" at the beginning is there because I had started typing "Password" before Password: actually appeared):

Debian GNU/Linux stretch/sid hostname tty1

hostname login: Username
PaPassword:

A simple solution would of course be to type the username and wait a little before typing the password; however, I wish to get to the bottom of this and find the cause of the problem. I have a fear that some day the prompt could lag a lot longer than a quarter of a second (e.g. a few seconds) and I would have accidentally typed my whole password onto the screen before the Password: prompt finally appeared. Is there a way to know what's happening here?
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/215994", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/104950/" ] }
216,042
I'm looking for a way to show all of the URLs in a redirect chain, preferably from the shell. I've found a way to almost do it with curl, but it only shows the first and last URL. I'd like to see all of them. There must be a way to do this simply, but I can't for the life of me find what it is.

Edit: since submitting this I've found out how to do it with Chrome (Ctrl+Shift+I -> Network tab). But I'd still like to know how it can be done from the Linux command line.
How about simply using wget ?

$ wget http://picasaweb.google.com 2>&1 | grep Location:
Location: /home [following]
Location: https://www.google.com/accounts/ServiceLogin?hl=en_US&continue=https%3A%2F%2Fpicasaweb.google.com%2Flh%2Flogin%3Fcontinue%3Dhttps%253A%252F%252Fpicasaweb.google.com%252Fhome&service=lh2&ltmpl=gp&passive=true [following]
Location: https://accounts.google.com/ServiceLogin?hl=en_US&continue=https%3A%2F%2Fpicasaweb.google.com%2Flh%2Flogin%3Fcontinue%3Dhttps%3A%2F%2Fpicasaweb.google.com%2Fhome&service=lh2&ltmpl=gp&passive=true [following]

curl -v also shows some info, but looks not as useful as wget :

$ curl -v -L http://picasaweb.google.com 2>&1 | egrep "^> (Host:|GET)"
> GET / HTTP/1.1
> Host: picasaweb.google.com
> GET /home HTTP/1.1
> Host: picasaweb.google.com
> GET /accounts/ServiceLogin?hl=en_US&continue=https%3A%2F%2Fpicasaweb.google.com%2Flh%2Flogin%3Fcontinue%3Dhttps%253A%252F%252Fpicasaweb.google.com%252Fhome&service=lh2&ltmpl=gp&passive=true HTTP/1.1
> Host: www.google.com
> GET /ServiceLogin?hl=en_US&continue=https%3A%2F%2Fpicasaweb.google.com%2Flh%2Flogin%3Fcontinue%3Dhttps%253A%252F%252Fpicasaweb.google.com%252Fhome&service=lh2&ltmpl=gp&passive=true HTTP/1.1
> Host: accounts.google.com
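curl can also list every hop on its own if you follow redirects while printing only the response headers; a hedged sketch:

    # -s silent, -I HEAD requests, -L follow redirects; print each hop's Location header
    curl -sIL http://picasaweb.google.com | grep -i '^location:'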
{ "score": 7, "source": [ "https://unix.stackexchange.com/questions/216042", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/70203/" ] }
216,066
git log -G<regex> -p is a wonderful tool to search a codebase's history for changes that match the specified pattern. However, it can be overwhelming to locate the relevant hunk in the diff/patch output, in a sea of mostly irrelevant hunks. It's of course possible to search the output of git log for the original string/regex, but that does little to reduce the visual noise and distraction of many unrelated changes.

Reading up on git log, I see there's --pickaxe-all, which is the exact opposite of what I want: it broadens the output (to the entire changeset), whereas I want to limit it (to the specific hunk). Essentially, I'm looking for a way to "intelligently" parse the diff/patch into individual hunks and then execute a search against each hunk (targeting just the changed lines), discard the hunks that don't match, and output the ones that do. Does a tool such as I describe exist? Is there a better approach to get the matched/affected hunks?

Some initial research I've done: if it were possible to grep the diff/patch output and make the context option values dynamic, say via regexps instead of line counts, that might suffice. But grep isn't exactly built that way (nor am I necessarily requesting that feature). I found the patchutils suite, which initially sounded like it might suit my needs, but after reading its man pages, the tools don't appear to handle matching hunks based on regexps. (They can accept a list of hunks, though...) I finally came across splitpatch.rb, which seems to handle the parsing of the patch well, but it would need to be significantly augmented to handle reading patches via stdin, matching desired hunks, and then outputting the hunks.
Here ( https://stackoverflow.com/a/35434714/5305907 ) is a described way to do what you are looking for. Effectively:

git diff -U1 | grepdiff 'console' --output-matching=hunk

It shows only the hunks that match the given string "console".
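grepdiff comes from the patchutils suite already mentioned in the question. Presumably the same filter can be applied to the pickaxe output, since git log -p emits ordinary unified diffs; an untested, hedged sketch:

    git log -G'console' -p | grepdiff 'console' --output-matching=hunk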
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/216066", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/68744/" ] }
216,071
My company PC is behind a firewall, and I want to connect to my remote server. The port is open, however I cannot connect to it; does anyone know the root cause?

From my company PC, connecting to my remote server:

# telnet my-server 2221
Trying x.x.x.x...
Connected to my-server.
Escape character is '^]'.
^C^C^C

# nc -vzw5 my-server 2221
Connection to my-server 2221 port [tcp/rockwell-csp1] succeeded!

# ssh -vvv my-server -p 2221
OpenSSH_5.3p1, OpenSSL 1.0.1e-fips 11 Feb 2013
debug1: Reading configuration data /root/.ssh/config
debug1: Applying options for *
debug1: Reading configuration data /etc/ssh/ssh_config
debug1: Applying options for *
debug2: ssh_connect: needpriv 0
debug1: Connecting to my-server [x.x.x.x] port 2221.
debug1: Connection established.
debug1: permanently_set_uid: 0/0
debug1: identity file /root/.ssh/identity type -1
debug1: identity file /root/.ssh/identity-cert type -1
debug3: Not a RSA1 key file /root/.ssh/id_rsa.
debug2: key_type_from_name: unknown key type '-----BEGIN'
debug3: key_read: missing keytype
debug3: key_read: missing whitespace
debug3: key_read: missing whitespace
debug3: key_read: missing whitespace
debug3: key_read: missing whitespace
debug3: key_read: missing whitespace
debug2: key_type_from_name: unknown key type '-----END'
debug3: key_read: missing keytype
debug1: identity file /root/.ssh/id_rsa type 1
debug1: identity file /root/.ssh/id_rsa-cert type -1
debug1: identity file /root/.ssh/id_dsa type -1
debug1: identity file /root/.ssh/id_dsa-cert type -1
debug1: identity file /root/.ssh/id_ecdsa type -1
debug1: identity file /root/.ssh/id_ecdsa-cert type -1
^C

The process will be stuck forever. However, at the same time, when I check my remote server's status, I can clearly see the connection has been established:

# netstat -at
tcp        0    402 myserver:ssh   x.x.x.x:11307   ESTABLISHED

After a while, the connection status will change to FIN_WAIT1, then closed:

# netstat -at
tcp        0    402 myserver:ssh   x.x.x.x:11307   FIN_WAIT1

Tcpdump on the server side while the client initiates a connection request:

# tcpdump -i ppp0 port 2221 -vv
tcpdump: listening on ppp0, link-type LINUX_SLL (Linux cooked), capture size 65535 bytes
12:09:01.408239 IP (tos 0x10, ttl 64, id 0, offset 0, flags [DF], proto TCP (6), length 60) server_ip.2221 > client_ip.20999: Flags [S.], cksum 0x21e6 (correct), seq 2805531925, ack 581774329, win 14400, options [mss 1452,sackOK,TS val 9959078 ecr 74287789,nop,wscale 4], length 0
12:09:01.424747 IP (tos 0x0, ttl 50, id 41302, offset 0, flags [DF], proto TCP (6), length 52) client_ip.20999 > server_ip.2221: Flags [.], cksum 0x8711 (correct), seq 1, ack 1, win 457, options [nop,nop,TS val 74287802 ecr 9959078], length 0
12:09:01.448272 IP (tos 0x10, ttl 64, id 62398, offset 0, flags [DF], proto TCP (6), length 454) server_ip.2221 > client_ip.20999: Flags [P.], cksum 0x5dba (correct), seq 1:403, ack 1, win 900, options [nop,nop,TS val 9959082 ecr 74287802], length 402
12:09:01.674641 IP (tos 0x10, ttl 64, id 62399, offset 0, flags [DF], proto TCP (6), length 454) server_ip.2221 > client_ip.20999: Flags [P.], cksum 0x5da3 (correct), seq 1:403, ack 1, win 900, options [nop,nop,TS val 9959105 ecr 74287802], length 402
12:09:01.904523 IP (tos 0x10, ttl 64, id 62400, offset 0, flags [DF], proto TCP (6), length 454) server_ip.2221 > client_ip.20999: Flags [P.], cksum 0x5d8c (correct), seq 1:403, ack 1, win 900, options [nop,nop,TS val 9959128 ecr 74287802], length 402
12:09:02.364225 IP (tos 0x10, ttl 64, id 62401, offset 0, flags [DF], proto TCP (6), length 454) server_ip.2221 > client_ip.20999: Flags [P.], cksum 0x5d5e (correct), seq 1:403, ack 1, win 900, options [nop,nop,TS val 9959174 ecr 74287802], length 402
12:09:03.283694 IP (tos 0x10, ttl 64, id 62402, offset 0, flags [DF], proto TCP (6), length 454) server_ip.2221 > client_ip.20999: Flags [P.], cksum 0x5d02 (correct), seq 1:403, ack 1, win 900, options [nop,nop,TS val 9959266 ecr 74287802], length 402
12:09:05.122593 IP (tos 0x10, ttl 64, id 62403, offset 0, flags [DF], proto TCP (6), length 454) server_ip.2221 > client_ip.20999: Flags [P.], cksum 0x5c4a (correct), seq 1:403, ack 1, win 900, options [nop,nop,TS val 9959450 ecr 74287802], length 402
12:09:08.810407 IP (tos 0x10, ttl 64, id 62404, offset 0, flags [DF], proto TCP (6), length 454) server_ip.2221 > client_ip.20999: Flags [P.], cksum 0x5ad9 (correct), seq 1:403, ack 1, win 900, options [nop,nop,TS val 9959819 ecr 74287802], length 402
12:09:15.006311 IP (tos 0x10, ttl 64, id 17769, offset 0, flags [DF], proto TCP (6), length 52) server_ip.2221 > client_ip.4708: Flags [F.], cksum 0x0499 (correct), seq 1497941342, ack 2936162453, win 900, options [nop,nop,TS val 9960438 ecr 74001029], length 0
12:09:16.176090 IP (tos 0x10, ttl 64, id 62405, offset 0, flags [DF], proto TCP (6), length 454) server_ip.2221 > client_ip.20999: Flags [P.], cksum 0x57f8 (correct), seq 1:403, ack 1, win 900, options [nop,nop,TS val 9960556 ecr 74287802], length 402
12:09:30.927316 IP (tos 0x10, ttl 64, id 62406, offset 0, flags [DF], proto TCP (6), length 454) server_ip.2221 > client_ip.20999: Flags [P.], cksum 0x5234 (correct), seq 1:403, ack 1, win 900, options [nop,nop,TS val 9962032 ecr 74287802], length 402
12:10:00.429743 IP (tos 0x10, ttl 64, id 62407, offset 0, flags [DF], proto TCP (6), length 454) server_ip.2221 > client_ip.20999: Flags [P.], cksum 0x46ac (correct), seq 1:403, ack 1, win 900, options [nop,nop,TS val 9964984 ecr 74287802], length 402
12:10:59.354673 IP (tos 0x10, ttl 64, id 62408, offset 0, flags [DF], proto TCP (6), length 454) server_ip.2221 > client_ip.20999: Flags [P.], cksum 0x2fa4 (correct), seq 1:403, ack 1, win 900, options [nop,nop,TS val 9970880 ecr 74287802], length 402
12:12:57.364324 IP (tos 0x10, ttl 64, id 62409, offset 0, flags [DF], proto TCP (6), length 454) server_ip.2221 > client_ip.20999: Flags [P.], cksum 0x0184 (correct), seq 1:403, ack 1, win 900, options [nop,nop,TS val 9982688 ecr 74287802], length 402
12:14:01.653934 IP (tos 0x10, ttl 64, id 62410, offset 0, flags [DF], proto TCP (6), length 52) server_ip.2221 > client_ip.20999: Flags [F.], cksum 0x0e69 (correct), seq 403, ack 1, win 900, options [nop,nop,TS val 9989120 ecr 74287802], length 0
debug1: Connection established.
[...]
debug1: identity file /root/.ssh/id_ecdsa-cert type -1
^C

When a client connects to an SSH server, the first data exchange is that the server and client send their version strings to each other. The OpenSSH client normally logs this immediately after the list of identity files, for example from my system:

[...]
debug1: identity file /home/devuser/.ssh/id_ed25519-cert type -1
debug1: Enabling compatibility mode for protocol 2.0
debug1: Local version string SSH-2.0-OpenSSH_6.6.1p1 Ubuntu-2ubuntu2
debug1: Remote protocol version 2.0, remote software version OpenSSH_6.6.1p1 Ubuntu-2ubuntu2

Your client never logged receiving the SSH version string from the server. One of three things is probably happening:

A firewall or something similar is blocking or dropping TCP data packets from the server to the client.
The client is connecting to an SSH server, but it's malfunctioning.
The client is connecting to something other than an SSH server.

You'll need to troubleshoot this on the server. The OpenSSH server logs through syslog. You should start by checking the syslog logs to see what, if anything, sshd logged about the connection attempt.
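Where those logs live depends on the distribution; typical hedged starting points on the server:

    # Debian/Ubuntu
    grep sshd /var/log/auth.log
    # RHEL/CentOS
    grep sshd /var/log/secure
    # systemd-based systems (the unit name may be ssh or sshd)
    journalctl -u sshd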
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/216071", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/72384/" ] }
216,086
I would like to send a chat message (like a mail chat) between two systems; the systems are inter-connected via a proxy IP.
You can use talk or ytalk . More info: talk , ytalk .

Alternatively, you can use netcat :

On box1: nc -l 3333
On box2: nc $IP 3333 , where $IP equals the local IP address of the first system.

Once you do this, in the same box (box2), type something and press enter, then take a look at your other box. You can also choose a different port and get it opened on the firewall.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/216086", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/123451/" ] }
216,101
I would like to print specific information about the network configuration for the different interfaces over all the servers I manage:

the interface name
the interface ipv4 address
the interface hardware mac address
…

Unfortunately, a simple ip -o addr show doesn't allow one to parse its output easily with awk because of the line breaks. Is it possible to have ip addr show print exactly one line per interface? Otherwise, is it possible to achieve the same result using awk and/or sed ? This goes beyond my knowledge of those two commands, since the lines have to be concatenated three by three…
Just use the --brief flag:

ip --brief address show
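On older iproute2 releases without --brief, the -o flag already emits one line per address or link record, which awk can pick apart; a hedged sketch (field positions follow the usual ip -o output, but verify on your own systems):

    # interface name + ipv4 address, one line per address
    ip -o -4 addr show | awk '{print $2, $4}'
    # interface name (with trailing colon) + mac address
    ip -o link show | awk '{for (i = 1; i <= NF; i++) if ($i == "link/ether") print $2, $(i+1)}'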
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/216101", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/40025/" ] }
216,202
I have a script containing:

#!/bin/bash
printenv

When I run it from the command line:

env testscript.sh
bash testscript.sh
sh testscript.sh

every time, it outputs SHELL=/bin/bash . However, when it is run from cron, it always outputs SHELL=/bin/sh . Why is this? How can I make cron apply the shebang? I already checked the cron PATH; it does include /bin.
The shebang is working, and cron has nothing to do with it. When a file is executed, if that file's content begins with #! , the kernel executes the file specified on the #! line and passes it the original file as an argument.

Your problem is that you seem to believe that SHELL in a shell script reflects the shell that is executing the script. This is not the case. In fact, in most contexts, SHELL means the user's preferred interactive shell; it is meant for applications such as terminal emulators to decide which shell to execute. In cron, SHELL is the variable that tells cron what program to use to run the crontab entries (the part of the lines after the time indications). Shells do not set the SHELL variable unless it is not set when they start.

The fact that SHELL is /bin/sh is very probably irrelevant. Your script has a #!/bin/bash line, so it's executed by bash. If you want to convince yourself, add ps $$ in the script to make ps show information about the shell executing the script.
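A quick self-contained demonstration of that last suggestion (a sketch; -o comm= is standard POSIX ps syntax for printing just the command name):

    #!/bin/bash
    printenv SHELL        # inherited from the caller's environment (cron sets /bin/sh)
    ps -p $$ -o comm=     # the interpreter actually running this script: bash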
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/216202", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/5032/" ] }
216,203
There is a python program I am making, and I am planning to have it launch via a bash script. However, the program acts badly when the computer tries to launch it twice. Since I really only need this program to launch once, how do I tell whether the program is already running?
One way would be to log the PID of the python process in, say, /var/run ; then the bash script could see if the file with that PID exists, and if it does, whether that PID is still running. Another possibility would be to use pgrep to see if the process is running, if there is a unique enough part of the name (python is likely too common to use, but the py script itself would probably work). For example:

if pgrep -f "python yourScript.py" &>/dev/null; then
    echo "it is already running"
    exit
else
    python yourScript.py
fi

assuming yourScript.py will daemonize itself or something like that.
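A minimal pidfile sketch of the first approach (paths and the script name are placeholders; kill -0 merely tests whether the PID still exists):

    pidfile=/var/run/yourScript.pid
    if [ -f "$pidfile" ] && kill -0 "$(cat "$pidfile")" 2>/dev/null; then
        echo "already running"
        exit 1
    fi
    python yourScript.py &
    echo $! > "$pidfile"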
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/216203", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/121406/" ] }
216,241
I'm setting up a network with OpenWrt. I have a router that I have set up for wifi, which I will use as an access point. I want to add another router to my network so that the two can exchange traffic wirelessly. Right now, my computer is connected to the access point over a wifi connection. If I connect the access point to the second router with an ethernet cord, everything can ping back and forth. But, like I said, I want the two routers to be able to communicate wirelessly.

My question is: if I want the two routers to send traffic back and forth wirelessly, do I need to configure one node as a repeater or as a bridge? From what I've been reading, either can work. The only difference seems to be that, if I have a repeater, I have to have one router dedicated just to that. I'm fairly new to wireless communication. I've been wading through forums and OpenWrt docs for a while. I haven't been able to implement either a bridge or a repeater successfully, so I thought maybe my understanding of the fundamentals was wrong. Thanks for any help!
The two options are similar.

Bridge: this sets up your device so that it bridges traffic between the ethernet interface and the wireless interface. Nothing more, nothing less. Your ethernet interface needs to be connected back to the rest of your network so that wireless devices connecting to the Access Point can see your network. If you have multiple Access Points configured as bridges, they all need to have an ethernet connection back to the same point. To allow roaming transparently between them they must all use the same SSID, but should be on different channels.

Repeater: this sets up your Access Point so that it listens to another AP and re-broadcasts what it hears. It also acts as an Access Point for local wireless devices and then rebroadcasts the traffic back to the other AP. There is no wired connection to your network, so a Repeater can be installed anywhere within wireless range of another connected Access Point. The disadvantage is that the presence of a single-radio Repeater on your network will halve the wireless throughput. Typically such a Repeater will have to use the same channel as the Access Point to which it's paired. Newer Repeaters can listen and transmit simultaneously, so throughput is not significantly impaired.

If you have a single Access Point connected to a router in a home scenario, then you want Bridged mode. If you need multiple Access Points, the primary one will always be Bridged. The additional devices will either be Bridged APs or Repeaters. Of choice, if you can run an ethernet cable (or powerline) to the secondary device(s), I'd go for the bridged option every time.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/216241", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/120436/" ] }
216,267
Is there any difference between a directory name such as mydirectory and mydirectory/ I noticed this happens when I execute ls in some directories - some of the directory names have a slash and some don't. This is problematic because if I want to access a file contained in a directory, I may need to add a slash at the end: vi $mydirectory"/"$myfile or simply do vi $mydirectory$myfile
Without / it might also be a file. In some situations it can be deadly. For example when using mv :

mv file1 mydirectory
mv file2 mydirectory
mv file3 mydirectory

All right? But if mydirectory did not exist or wasn't a directory, the final result is that file1 and file2 are gone and file3 is now named mydirectory .

mv file1 mydirectory/
mv file2 mydirectory/
mv file3 mydirectory/

If mydirectory did not exist, all you get is three error messages and file1 , file2 and file3 are still there. So the / removes some ambiguity. Apart from that there aren't really any rules. Some programs may behave differently depending whether you supplied the / at the end or not. It's up to them what to make of it. In some cases you also get problems if you use too many / . For example find keeps surplus / in its output, which might trip you up if you try to find file/pathnames using simple string comparisons instead of, say, realpath or something.
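A quick demonstration of the failure mode (the exact error text may differ between mv implementations):

$ ls
file1  file2
$ mv file1 mydirectory      # no such directory exists: file1 is silently renamed
$ mv file2 mydirectory/     # the trailing slash makes mv insist on a directory
mv: cannot move 'file2' to 'mydirectory/': Not a directory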
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/216267", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/117308/" ] }
216,280
The Midnight Commander is a very helpful tool when we're using only the text mode. But sometimes it bothers me that I have to see all the hidden files inside a folder (files that begin with "."). I've tried to find how to do it changing some configurations by myself and then looking on the man page. But I didn't succeed. Does anyone know how can I do it?
Choose Options from the menu bar, then Panel options. You have it right there, 5th option on the left column: "Show hidden files".
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/216280", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/103357/" ] }
216,312
I have a text file which is an ASCII file itself, but contains octal escape sequences representing codes in utf-8: \350\207\252\345\212\250\346\216 Is there some program or command that can convert such an ASCII file to a text file actually encoded in utf-8? By the way, this site is "Online ASCII(Unicode Escaped) to Unicode(UTF-8) converter tool", and this site is "Online Unicode(UTF-8) to ASCII(Unicode Escaped) converter tool". Do they make the conversion in my question? If not, what kinds of conversion do they make?
If you have these escape sequences in a shell variable, in dash, mksh or bash: printf %b "$string_with_backslash_escapes" This isn't POSIX: the %b specifier is POSIX but it requires a 0 after each backslash. This also interprets other backslash escapes: \n as a newline, \t as a tab, etc. Here's a perl one-liner that converts octal escape sequences only. perl -pe 's[\\(?:([0-7]{1,3})|(.))] [defined($1) ? chr(oct($1)) : $2]eg' http://www.rapidmonkey.com/unicodeconverter/reverse.jsp interprets octal values as Latin-1 characters, I don't know why Unicode and UTF-8 are mentioned in the page. I have no idea what http://www.rapidmonkey.com/unicodeconverter/advanced.jsp does.
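As a concrete example with the bytes from the question (assuming a UTF-8 terminal; this sequence decodes to the two characters 自动, and escaped.txt / decoded.txt are placeholder names):

$ s='\350\207\252\345\212\250'
$ printf '%b\n' "$s"
自动
$ printf %b "$(cat escaped.txt)" >decoded.txt   # whole-file variant; note $(...) drops trailing newlines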
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/216312", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/674/" ] }
216,399
I have the following structure of text lines:

3923 001 L05 LV,L05 RM
3923 002 L12 RA,L12 LA
3923 003 I06 ALL
3923 004 G04 RV,Z09 ALL

but I would need this:

3923 001 L05 LV
3923 001 L05 RM
3923 002 L12 RA
3923 002 L12 LA
3923 003 I06 ALL
3923 004 G04 RV
3923 004 Z09 ALL

Is this possible with a regex? Basically I need every line copied as many times as it contains a ",", and then made unique starting from the 10th character; if I could get the first part done, so just a copy of every line times the number of commas, I could clean the rest manually.
Given the format of your example this should work for any number of comma separated strings after the initial large space (if it's a tab, just change the spaces in the second s/// to \t ):

sed ':;h;s/,.*//;p;x;s/ [^,]*,/ /;t;d' file
3923 001 L05 LV
3923 001 L05 RM
3923 002 L12 RA
3923 002 L12 LA
3923 003 I06 ALL
3923 004 G04 RV
3923 004 Z09 ALL

If you want to write the tab as \t , you can give it to Bash using $'' quotes: sed $':;h;s/,.*//;p;x;s/\t[^,]*,/ /;t;d' . Or just insert a literal tab (for bash, you need to type Ctrl-V first to enter it literally). - Toby Speight
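For comparison, a sketch of the same transformation in awk . It locates the comma list with index() , so it assumes the text of the third field does not also occur earlier in the line, and it re-joins the output fields with single spaces:

awk '{
    pre = $1 " " $2
    rest = substr($0, index($0, $3))   # from the comma list to end of line
    n = split(rest, a, ",")
    for (i = 1; i <= n; i++) print pre, a[i]
}' file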
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/216399", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/123654/" ] }
216,414
I wonder if it is possible to set a "global" alias in bash, like zsh's -g alias option - not "global" from the user's point of view but from the shell's point of view. What I want to know is: Can an alias (or something else?) be substituted anywhere on a line in bash? E.g.:

alias ...='../..'
From the bash(1) man page:

ALIASES
       Aliases allow a string to be substituted for a word when it is used as the first word of a simple command. [...]

So bash aliases do not have this capability, nor does bash have a trivial pre-exec capability (but see here for a hack though). As a partial workaround you may be able to use a completion function, here's a minimal starting point:

function _comp_cd() {
    local cur=${COMP_WORDS[COMP_CWORD]}   # the current token
    [[ $cur =~ \.\.\. ]] && {
        cur=${cur/.../..\/..}
        COMPREPLY=( $cur )
        return
    }
    COMPREPLY=()   # let default kick in
}
complete -o bashdefault -o default -F _comp_cd cd

Now when you hit tab on a cd command and the word under the cursor contains "...", each will be replaced with "../..". Completion suffers from a slight problem too though (excluding its complexity), which you can probably guess from the above: you need to specify it on a command by command basis. The bash-completion package uses a default completion handler, with on-the-fly loading of completion functions, to deal with this. If you're feeling adventurous you should be able to modify its internal _filedir() function, which is used for general file/directory expansion, so as to include a similar substitution of "...". (All of which reminds me of the NetWare shell, which made "..." Just Work.)
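If the only thing you actually need is the classic "go up two directories" shortcut, note that the command position is exactly where bash aliases do work, so a plain alias (or a function; bash, unlike POSIX sh, accepts these names) covers that case:

alias ...='cd ../..'
# or, as functions:
..()  { cd ..; }
...() { cd ../..; }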
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/216414", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/37008/" ] }
216,464
In my bash script I have a variable that I am trying to pass to a pattern to search for using awk. However what I expected to happen is not working. I have the following text file (text.txt):

-----------
Task: a
(some info)
....
------------
Task: b
(some info)
....
------------
Task: c
(some info)
....
------------

My script has the following:

letter=a
awk -v var="$letter" '/Task .* \var/' RS='-+' text.txt

When I do this however I get nothing, but if I do the following:

awk '/Task .* a/' RS='-+' text.txt

I get what I expect:

Task: a
(some info)
....

NOTE: I need to pass it as a variable because I have a loop that is constantly changing the variable and that's what I am trying to look for. I'd rather use awk since that's what I am most familiar with, but I am not opposed to hearing other suggestions such as sed or grep.
You could pass the whole pattern to awk

letter=a
awk -v pattern="Task .* $letter" -v RS='-+' '$0 ~ pattern' text.txt

or construct the pattern as a string in awk

letter=a
awk -v ltr="$letter" -v RS='-+' 'BEGIN {pattern = "Task .* " ltr} $0 ~ pattern' text.txt

Since awk variables are not prefixed with $ , you can't embed them inside a /regex constant/ -- it's just text in there. (It's my preference to put all awk variables at the front with -v )
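Hooked into the loop you describe, that becomes (the letters here are placeholders for whatever your loop actually produces):

for letter in a b c; do
    awk -v pattern="Task .* $letter" -v RS='-+' '$0 ~ pattern' text.txt
done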
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/216464", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/122269/" ] }
216,482
I'm trying to download a file from ftp server using curl :

curl --user kshitiz:pAssword ftp://@11.111.11.11/myfile.txt -o /tmp/myfile.txt -v

curl connects to the server and freezes:

* Hostname was NOT found in DNS cache
*   Trying 11.111.11.11...
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
* Connected to 11.111.11.11 (11.111.11.11) port 21 (#0)
< 220-You Are Attempting To Access a Private
< 220-Network. Unauthorized Access is Strictly
< 220-Forbidden. Violators Will be Prosecuted!
< 220-- Management
< 220 This is a private system - No anonymous login
> USER kshitiz
< 331 User kshitiz OK. Password required
> PASS pAssword
< 230-OK. Current directory is /
< 230 4432718 Kbytes used (54%) - authorized: 8192000 Kb
> PWD
< 257 "/" is your current location
* Entry path is '/'
  0     0    0     0    0     0      0      0 --:--:--  0:00:01 --:--:--     0
> EPSV
* Connect data stream passively
* ftp_perform ends with SECONDARY: 0
  0     0    0     0    0     0      0      0 --:--:--  0:00:02 --:--:--     0
< 229 Extended Passive mode OK (|||10653|)
* Hostname was NOT found in DNS cache
*   Trying 11.111.11.11...
* Connecting to 11.111.11.11 (11.111.11.11) port 10653
  0     0    0     0    0     0      0      0 --:--:--  0:00:03 --:--:--     0
* Connected to 11.111.11.11 (11.111.11.11) port 21 (#0)
> TYPE A
  0     0    0     0    0     0      0      0 --:--:--  0:04:02 --:--:--     0
^C

Connecting with ftp and fetching a file works however:

Status:   Connecting to 11.1.1.11:21...
Status:   Connection established, waiting for welcome message...
Response: 220-You Are Attempting To Access a Private
Response: 220-Network. Unauthorized Access is Strictly
Response: 220-Forbidden. Violators Will be Prosecuted!
Response: 220-- Management
Response: 220 This is a private system - No anonymous login
Command:  USER kshitiz
Response: 331 User kshitiz OK. Password required
Command:  PASS ******
Response: 230-OK. Current directory is /
Response: 230 4432718 Kbytes used (54%) - authorized: 8192000 Kb
Status:   Server does not support non-ASCII characters.
Status:   Connected
Status:   Starting download of /myfile.txt
Command:  CWD /
Response: 250 OK. Current directory is /
Command:  PWD
Response: 257 "/" is your current location
Command:  TYPE I
Response: 200 TYPE is now 8-bit binary
Command:  PASV
Response: 227 Entering Passive Mode (10,9,4,66,39,139)
Command:  RETR myfile.txt
Response: 150 Accepted data connection
Response: 226-File successfully transferred
Response: 226 0.000 seconds (measured here), 3.39 Kbytes per second
Status:   File transfer successful, transferred 1 B in 1 second

What's the deal with the TYPE A command? Why doesn't curl work when ftp does?
Adding the --disable-epsv switch fixed the problem. A little explanation: I just went through many hours of trying to figure out weird FTP problems. The way that the problem presented was that after login, when the FTP client attempted a directory listing (or any other command), it would just hang. EPSV is "extended passive mode", and is a newer extension to FTP's historical passive mode (PASV) ... most recent FTP clients attempt EPSV first, and then only use the traditional PASV if it fails. ... if the firewall is blocking EPSV, the client will think that the command is successful [ and keep waiting for response ]. Read more here .
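Applied to the command from the question, that is (with the stray @ after ftp:// dropped):

curl --disable-epsv --user kshitiz:pAssword 'ftp://11.111.11.11/myfile.txt' -o /tmp/myfile.txt -v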
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/216482", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/19506/" ] }
216,483
I have a shell script that calls on a Perl script to do some file processing. The Perl script exits with either a zero or a one value. I have the Unix set -e command at the beginning of my script to abort the script if the Perl script exits with a value of one. I was just wondering if there is any command in Unix that I can use that would execute one command before the script is aborted if the Perl script exits with a one value? Essentially, I want the script to send me an email stating whether the Perl script ran successfully. My code looks like this right now:

#!/bin/bash
set -e

function email_success {
    # Some code for the email
}

function email_fail {
    # Some code for the email
}

/usr/bin/perl perlscript.pl

if [ $? -eq 0 ]; then
    email_success
else
    email_fail
fi

# More commands to be executed if it's successful
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/216483", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/120923/" ] }
216,491
Is there a way in Gnome3 on CentOS7 to list the actual keyboard shortcuts for things like the activities view? I can find lots of web pages that tell me what they should be, but I'd like to know for sure. For instance, a "Gnome Help" site says that the shortcut for the activities view is "Alt-F1", but that just brings up the Application menu. I want a shorter sequence to bring this up. That same page also refers to a "Super" key, but I don't have that key on this HP "Z Book". After I get a list of these shortcuts, how can I change them?
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/216491", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/123728/" ] }
216,512
I am wondering how the login actually works. It certainly is not part of the kernel, because I can set the login to use ldap for example, or keep using /etc/passwd ; but the kernel certainly is able to use information from it to perform authentication and authorization activities. There is also a systemd daemon, called logind which seems to start up the whole login mechanism. Is there any design document I can look at, or can someone describe it here?
The login binary is pretty straightforward (in principle). It's just a program that runs as root user (started, indirectly through getty or an X display manager, from init , the first user-space process). It performs authentication of the logging-in user, and if that is successful, changes user (using one of the setuid() family of system calls), sets appropriate environment variables, umask, etc, and exec() s a login shell. It may be instructive to read the source code, but if you do so, you'll find it easiest (assuming the standard shadow-utils login that Debian installs) to read it assuming USE_PAM is not set, at least until you are comfortable with its operation, or you'll find too much distraction.
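One visible trace of the final exec() step: login conventionally puts a leading dash in the shell's argv[0], which is how the shell knows to behave as a login shell. On a console session you can see both the marker and bash's own flag:

$ echo $0
-bash
$ shopt login_shell
login_shell     on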
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/216512", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/7950/" ] }
216,544
Input.txt:

8B0C
remove
8B0D
remove
8B0E
remove
8B0F
8B10
remove
8B14
remove
8B15
remove
8B16
remove
8B17
remove
8AC0
8AC1
remove
8AC2
remove
8AC3
remove
8AE4
8AE5
8AE6
remove

Desired output:

8B0F
8AC0
8AE4
8AE5

I want to print a line only if neither that line nor the next line contains 'remove'. I am using Solaris 5.10, ksh.
With sed :

sed '$!N;/remove/!P;D' infile

This pulls the N ext line into pattern space (if not ! on la $ t line) and checks if pattern space matches remove . If it doesn't (means none of the two lines in the pattern space contains the string remove ) it P rints up to the first \n ewline character (i.e. it prints the first line). Then it D eletes up to the first \n ewline character and restarts the cycle. This way, there are never more than two lines in the pattern space. It's probably easier to understand the N , P , D cycle if you add l before and after the N to look at the pattern space:

sed 'l;$!N;l;/remove/!P;D' infile

so, using only the last six lines from your example:

8AC3
remove
8AE4
8AE5
8AE6
remove

the last command outputs:

8AC3$
8AC3\n remove$
remove$
remove\n 8AE4$
8AE4$
8AE4\n 8AE5$
8AE4
8AE5$
8AE5\n 8AE6$
8AE5
8AE6$
8AE6\n remove$
remove$
remove$

Here is a short explanation:

cmd  output            cmd
l    8AC3$             N   # read in the next line
l    8AC3\n remove$    D   # delete up to \n (pattern space matches so no P)
l    remove$           N   # read in the next line
l    remove\n 8AE4$    D   # delete up to \n (pattern space matches so no P)
l    8AE4$             N   # read in the next line
l    8AE4\n 8AE5$          # pattern space doesn't match so print up to \n
P    8AE4              D   # delete up to \n
l    8AE5$             N   # read in the next line
l    8AE5\n 8AE6$          # pattern space doesn't match so print up to \n
P    8AE5              D   # delete up to \n
l    8AE6$             N   # read in the next line
l    8AE6\n remove$    D   # delete up to \n (pattern space matches so no P)
l    remove$               # last line so no N
l    remove$           D   # delete (pattern space matches so no P)
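An alternative sketch in awk, holding one line of look-behind; on Solaris 10 prefer nawk or /usr/xpg4/bin/awk over the old /usr/bin/awk. One caveat: an empty input line would be indistinguishable from "no previous line" here, which is fine for data like the above:

nawk '
    /remove/   { prev = ""; next }     # a "remove" line disqualifies its predecessor
    prev != "" { print prev }          # previous line survived both checks
               { prev = $0 }
    END        { if (prev != "") print prev }   # last line qualifies if not "remove"
' Input.txt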
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/216544", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/115560/" ] }
216,570
I am building a disk image for an embedded system (to be placed on a 4GB SD card). I want the system to have two partitions: a 'Root' (200MB) and a 'Data' partition (800MB). I create an empty 1GB file with dd. Then I use parted to set up the partitions. I mount them each in a loop device, then format them: ext2 for 'Root', ext4 for 'Data'. I add my root file system to the 'Root' partition and leave 'Data' empty. Here's where the problem is. I am now stuck with a 1GB image, with only 200MB of data on it. Shouldn't I, in theory, be able to truncate the image down to say.. 201MB and still have the file system mountable? Unfortunately I have not found this to be the case. I recall in the past having used a build environment from Freescale that used to create 30MB images that would have partitions for utilizing an entire 4GB sdcard. Unfortunately, at this time, I can not find how they were doing that. I have read the on-disk format for the ext file system, and if there is no data in anything past the first super block (except for backup super blocks and unused block tables) I thought I could truncate there. Unfortunately, when I do this, the mounting system freaks out. I can then run fsck, restore the super blocks and block tables, and can mount it then no problem. I just don't think that should be necessary. Perhaps a different file system could work? Any ideas? Thanks.

Edit: changed 'partition' to read 'file system'. The partition is still there and doesn't change, but the file system is getting destroyed after truncating the image.

Edit: I have found it to be the case that when I truncate the file to a size just larger than the first set of 'Data' partition superblock and inode/block tables (somewhere in the data-block range), the file system becomes unmountable without doing a fsck to restore the rest of the super blocks and block/inode tables.
Firstly, writing a sparse image to a disk will not result in anything but the whole of the size of that image file - holes and all - covering the disk. This is because handling of sparse files is a quality of the filesystem - and a raw device (such as the one to which you write the image) has no such thing yet. A sparse file can be stored safely and securely on a medium controlled by a filesystem which understands sparse files (such as an ext4 device) but as soon as you write it out it will envelop all that you intend it to. And so what you should do is either:

1. Simply store it on an fs which understands sparse files until you are prepared to write it.
2. Make it two layers deep... Which is to say, write out your main image to a file, create another parent image with an fs which understands sparse files, then copy your image to the parent image, and... When it comes time to write the image, first write your parent image, then write your main image.

Here's how to do 2. Create a 1GB sparse file...

dd bs=1kx1k seek=1k of=img </dev/null

Write two ext4 partitions to its partition table: 1 200MB, 2 800MB...

printf '%b\n\n\n\n' n '+200M\nn\n' 'w\n\c' | fdisk img

Create two ext4 filesystems on a -P artitioned loop device and put a copy of the second on the first...

sudo sh -c '
    for p in "$(losetup --show -Pf img)p"*        ### the for loop will iterate
    do  mkfs.ext4 "$p"                            ### over fdisks two partitions
        mkdir -p ./mnt/"${p##*/}"                 ### and mkfs, then mount each
        mount "$p" ./mnt/"${p##*/}"               ### on dirs created for them
    done; sync; cd ./mnt/*/                       ### next we cp a sparse image
    cp --sparse=always "$p" ./part2               ### of part2 onto part1
    dd bs=1kx1k count=175 </dev/zero >./zero_fill ### fill out part1 w/ zeroes
    sync; cd ..; ls -Rhls .                       ### sync, and list contents
    umount */; losetup -d "${p%p*}"               ### last umount, destroy
    rm -rf loop*p[12]/ '                          ### loop devs and mount dirs

mke2fs 1.42.12 (29-Aug-2014)
Discarding device blocks: done
Creating filesystem with 204800 1k blocks and 51200 inodes
Filesystem UUID: 2f8ae02f-4422-4456-9a8b-8056a40fab32
Superblock backups stored on blocks:
        8193, 24577, 40961, 57345, 73729
Allocating group tables: done
Writing inode tables: done
Creating journal (4096 blocks): done
Writing superblocks and filesystem accounting information: done

mke2fs 1.42.12 (29-Aug-2014)
Discarding device blocks: done
Creating filesystem with 210688 4k blocks and 52752 inodes
Filesystem UUID: fa14171c-f591-4067-a39a-e5d0dac1b806
Superblock backups stored on blocks:
        32768, 98304, 163840
Allocating group tables: done
Writing inode tables: done
Creating journal (4096 blocks): done
Writing superblocks and filesystem accounting information: done

175+0 records in
175+0 records out
183500800 bytes (184 MB) copied, 0.365576 s, 502 MB/s
./:
total 1.0K
1.0K drwxr-xr-x 3 root root 1.0K Jul 16 20:49 loop0p1
   0 drwxr-xr-x 2 root root   40 Jul 16 20:42 loop0p2

./loop0p1:
total 176M
 12K drwx------ 2 root root  12K Jul 16 20:49 lost+found
 79K -rw-r----- 1 root root 823M Jul 16 20:49 part2
176M -rw-r--r-- 1 root root 175M Jul 16 20:49 zero_fill

./loop0p1/lost+found:
total 0

./loop0p2:
total 0

Now that's a lot of output - mostly from mkfs.ext4 - but notice especially the ls bits at the bottom. ls -s will show the actual -s ize of a file on disk - and it is always displayed in the first column. Now we can basically reduce our image to only the first partition...

fdisk -l img

Disk img: 1 GiB, 1073741824 bytes, 2097152 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0xc455ed35

Device Boot  Start      End      Sectors  Size Id Type
img1          2048   411647      409600   200M 83 Linux
img2        411648  2097151     1685504   823M 83 Linux

There fdisk tells us there are 411647 +1 512 byte sectors in the first partition of img ...

dd seek=411648 of=img </dev/null

That truncates the img file to only its first partition. See?

ls -hls img

181M -rw-r--r-- 1 mikeserv mikeserv 201M Jul 16 21:37 img

...but we can still mount that partition...

sudo mount "$(sudo losetup -Pf --show img)p"*1 ./mnt

...and here are its contents...

ls -hls ./mnt

total 176M
 12K drwx------ 2 root root  12K Jul 16 21:34 lost+found
 79K -rw-r----- 1 root root 823M Jul 16 21:34 part2
176M -rw-r--r-- 1 root root 175M Jul 16 21:34 zero_fill

And we can append the stored image of the second partition to the first...

sudo sh -c '
    dd seek=411648 if=./mnt/part2 of=img
    umount ./mnt; losetup -D
    mount "$(losetup -Pf --show img)p"*2 ./mnt
    ls ./mnt; umount ./mnt; losetup -D'

1685504+0 records in
1685504+0 records out
862978048 bytes (863 MB) copied, 1.96805 s, 438 MB/s
lost+found

Now that has grown our img file: it's no longer sparse...

ls -hls img

1004M -rw-r--r-- 1 mikeserv mikeserv 1.0G Jul 16 21:58 img

...but removing that is as simple the second time as it was the first, of course...

dd seek=411648 of=img </dev/null
ls -hls img

181M -rw-r--r-- 1 mikeserv mikeserv 201M Jul 16 22:01 img
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/216570", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/64215/" ] }
216,582
I am using an internet connection with a data cap. I want to record my daily internet usage in a file. Is there any tool for this, or perhaps you can suggest a script that would run as a daemon? (I am not a pro in bash scripting or in linux administration software, so a simple script is preferred.)
I use vnstat , which keeps track of daily stats for the last 30 days, and is available in the Ubuntu/Debian (and probably many more) repos. Just install it and use it like vnstat -i wlan0 -h :

 wlan0                                                                14:47
[hourly rx/tx bar graph: idle from 15:00 through 07:00, traffic from 08:00 to 14:00]

 h  rx (KiB)  tx (KiB)     h  rx (KiB)  tx (KiB)     h  rx (KiB)  tx (KiB)
15        0         0     23        0         0     07        0         0
16        0         0     00        0         0     08   19,287     7,859
17        0         0     01        0         0     09    6,550     3,231
18        0         0     02        0         0     10   65,500     9,216
19        0         0     03        0         0     11   17,491     7,502
20        0         0     04        0         0     12    5,158     2,503
21        0         0     05        0         0     13   15,034     3,493
22        0         0     06        0         0     14    4,284     2,503
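If you would rather roll your own, the kernel already exposes per-interface byte counters; a minimal sketch of a daily logger (run it from cron; the interface name and log path are placeholders, and note the counters reset on reboot):

#!/bin/bash
iface=wlan0
rx=$(cat /sys/class/net/"$iface"/statistics/rx_bytes)
tx=$(cat /sys/class/net/"$iface"/statistics/tx_bytes)
echo "$(date +%F) rx=$rx tx=$tx" >> "$HOME/net-usage.log"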
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/216582", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/52733/" ] }
216,618
I've heard confusion come up several times recently around what a Docker container is, and more specifically what's going on inside, with respect to commands & processes that I invoke while inside a Docker container. Can someone please provide a high level overview of what's going on?
Docker gets thrown into the virtualization bucket, because people assume that it's somehow virtualizing the hardware underneath. This is a misnomer that permeates from the terminology that Docker makes use of, mainly the term container. However Docker is not doing anything magical with respect to virtualizing a system's hardware. Rather it's making use of the Linux Kernel's ability to construct "fences" around key facilities, which allows for a process to interact with resources such as network, the filesystem, and permissions (among other things) to give the illusion that you're interacting with a fully functional system. Here's an example that illustrates what's going on when we start up a Docker container and then enter it through the invocation of /bin/bash .

$ docker run -it ubuntu:latest /bin/bash
root@c0c5c54062df:/#

Now from inside this container, if we run ps -eaf :

[screenshot: ps -eaf inside the container shows only the container's own processes]

Switching to another terminal tab where we're logged into the host system that's hosting the Docker container, we can see the process space that the container is "actually" taking up:

[screenshot: ps -eaf on the host]

Now if we go back to the Docker tab and launch several processes within it and background them all, we can see that we now have several child processes running under the primary Bash process which we originally started as part of the Docker container launch. NOTE: The processes are 4 sleep 1000 commands which are being backgrounded. Notice how inside the Docker container the processes are assigned process IDs (PIDs) of 48-51. See them in the ps -eaf output in there as well:

[screenshot: ps -eaf inside the container showing PIDs 48-51]

However, with this next image, much of the "magic" that Docker is performing is revealed. See how the 4 sleep 1000 processes are actually just child processes to our original Bash process? Also take note that our original Docker container /bin/bash is in fact a child process to the Docker daemon too.

[screenshot: ps -eaf on the host showing the sleep processes under bash, under the Docker daemon]

Now if we were to wait 1000+ seconds for the original sleep 1000 commands to finish, and then run 4 more new ones, and start another Docker container up like this:

$ docker run -it ubuntu:latest /bin/bash
root@450a3ce77d32:/#

The host computer's output from ps -eaf would look like this:

[screenshot: ps -eaf on the host showing both containers' process trees under the Docker daemon]

And other Docker containers will all just show up as processes under the Docker daemon. So you see, Docker is really not virtualizing ( in the traditional sense ), it's constructing "fences" around the various Kernel resources and limiting the visibility to them for a given process + children.
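You can reproduce the PID-fence observation at a shell prompt instead of from screenshots (the container ID and PIDs below are illustrative, not real output):

root@c0c5c54062df:/# sleep 1000 &       # inside the container: gets a low PID (e.g. 48)
root@c0c5c54062df:/# ps -ef | grep sleep

$ ps -ef | grep 'sleep 1000'            # on the host: the same process appears with a
                                        # host PID, parented to the container's bash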
{ "score": 7, "source": [ "https://unix.stackexchange.com/questions/216618", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/7453/" ] }
216,644
In the past, I learned that in Linux/UNIX file systems, directories are just files, which contain the filenames and inode numbers of the files inside the directory. Is there a simple way to see the content of a directory? I mean the way the files names and inodes are stored/organized. I'm not looking for ls , find or something similiar. I also don't want to see the content of the files inside a directory. I want to see the implementation of the directories. If every directory is just a text file with some content, maybe a simple way exists to see the content of this text file. In the bash in Linux it is not possible to do a cat folder . The output is just Is a directory . Update The question How does one inspect the directory structure information of a unix/linux file? addresses the same issue but it has no helpful solution like the one from mjturner .
The tool to display inode detail for a filesystem will be filesystem specific. For the ext2 , ext3 , ext4 filesystems (the most common Linux filesystems), you can use debugfs , for XFS xfs_db , for ZFS zdb . For btrfs some information is available using the btrfs command. For example, to explore a directory on an ext4 filesystem (in this case / is /dev/sda1 ):

# ls src
Animation.js    Map.js         MarkerCluster.js    ScriptsUtil.js
Directions.js   MapTypeId.js   markerclusterer.js  TravelMode.js
library.js      MapUtils.js    Polygon.js          UnitSystem.js
loadScripts.js  Marker.js      Polyline.js         Waypoint.js
# ls -lid src
664488 drwxrwxrwx 2 vagrant vagrant 4096 Jul 15 13:24 src
# debugfs /dev/sda1
debugfs:  imap <664488>
Inode 664488 is part of block group 81
        located at block 2622042, offset 0x0700
debugfs:  dump src src.out
debugfs:  quit
# od -c src.out
0000000 250 # \n \0 \f \0 001 002 . \0 \0 \0 204 030 \n \0
0000020 \f \0 002 002 . . \0 \0 251 # \n \0 024 \0 \f 001
0000040 A n i m a t i o n . j s 252 # \n \0
0000060 030 \0 \r 001 D i r e c t i o n s . j
0000100 s \0 \0 \0 253 # \n \0 024 \0 \n 001 l i b r
0000120 a r y . j s \0 \0 254 # \n \0 030 \0 016 001
0000140 l o a d S c r i p t s . j s \0 \0
0000160 255 # \n \0 020 \0 006 001 M a p . j s \0 \0
0000200 256 # \n \0 024 \0 \f 001 M a p T y p e I
0000220 d . j s 257 # \n \0 024 \0 \v 001 M a p U
0000240 t i l s . j s \0 260 # \n \0 024 \0 \t 001
0000260 M a r k e r . j s \0 \0 \0 261 # \n \0
0000300 030 \0 020 001 M a r k e r C l u s t e
0000320 r . j s 262 # \n \0 034 \0 022 001 m a r k
0000340 e r c l u s t e r e r . j s \0 \0
0000360 263 # \n \0 024 \0 \n 001 P o l y g o n .
0000400 j s \0 \0 264 # \n \0 024 \0 \v 001 P o l y
0000420 l i n e . j s \0 265 # \n \0 030 \0 016 001
0000440 S c r i p t s U t i l . j s \0 \0
0000460 266 # \n \0 030 \0 \r 001 T r a v e l M o
0000500 d e . j s \0 \0 \0 267 # \n \0 030 \0 \r 001
0000520 U n i t S y s t e m . j s \0 \0 \0
0000540 270 # \n \0 240 016 \v 001 W a y p o i n t
0000560 . j s \0 305 031 \n \0 214 016 022 001 . U n i
0000600 t S y s t e m . j s . s w p \0 \0
0000620 312 031 \n \0 p 016 022 001 . U n i t S y s
0000640 t e m . j s . s w x \0 \0 \0 \0 \0 \0
0000660 \0 \0 \0 \0 \0 \0 \0 \0 \0 \0 \0 \0 \0 \0 \0 \0

In the above, we start by finding the inode of directory src ( 664488 ) and then dump its contents into file src.out and then display that using od . As you can see, the names of all of the files in that directory ( Animation.js , etc.) are visible in the dump. This is just a start - see the debugfs manual page or type help within debugfs for more information. If you're using ext4 , you can find more information about the structure and layout of directory entries in the kernel documentation .
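As a shortcut, debugfs can also run a single command non-interactively with -R , so you can list the raw directory entries (names plus inode numbers) without dumping to a file. The path is relative to the root of that filesystem, so adjust it to wherever src actually lives:

# debugfs -R 'ls -l src' /dev/sda1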
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/216644", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/120962/" ] }
216,647
I have a file with text as:

Afghanistan=+93
Albania=+355
Algeria=+213
American Samoa=+1
Andorra=+376
Angola=+244

It has the full country list and each country's dialing code. I want to replace:

Afghanistan=+93

with

Afghanistan(+93)=+93

I can get the selection pattern as =\+[0-9]* , but what will the replacement pattern string be? I know of \1 , which is the captured match for the selection, but it doesn't seem to work for sed . So the regex needs to have a selection. How can I do that using sed or any other unix tools?
sed 's/=\(+[0-9]\{1,3\}\)/(\1)=\1/'

To address your problem (as I understood it): patterns that need to be memorized in sed are to be enclosed in parentheses - their appearance defines their index number. E.g.:

sed 's/\(<memorized_pattern_1>\)<not_memorized>\(<memorized_pattern_2>\)/\2\1/'

would swap patterns 1 and 2 and delete the middle one.
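Run against the sample data from the question ( countries.txt is a placeholder name for your file):

$ sed 's/=\(+[0-9]\{1,3\}\)/(\1)=\1/' countries.txt
Afghanistan(+93)=+93
Albania(+355)=+355
Algeria(+213)=+213
American Samoa(+1)=+1
Andorra(+376)=+376
Angola(+244)=+244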
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/216647", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/25747/" ] }
216,659
How can I rename all the files in a specific directory where the files contain blank spaces and special characters ($ and @) in their names? I tried the rename command as follows to replace all the spaces and special characters with a _:

$ ls -lrt
total 464
-rwxr-xr-x. 1 pmautoamtion pmautoamtion 471106 Jul 17 13:14 Bharti Blocked TRX Report [email protected]
$ rename -n 's/ |\$|@/_/g' *
$ ls -lrt
total 464
-rwxr-xr-x. 1 pmautoamtion pmautoamtion 471106 Jul 17 13:14 Bharti Blocked TRX Report [email protected]
$

The command works but won't make any changes to the file names and won't return any error either. How can I fix this, and are there other ways as well?
The -n flag is for --no-act :

No Action: show what files would have been renamed.

So it's normal if you don't have any changes. Regarding your command, it's working for me:

$ touch "a @ test"
$ ls
a @ test
$ rename -n 's/ |\$|@/_/g' *
a @ test renamed as a___test

Maybe depending on your shell, you have to escape the | :

$ rename -n 's/ \|\$\|@/_/g' *

Or you can use the […] notation to group characters:

$ rename -n 's/[ @\$]/_/g' *
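Once the -n preview shows the renames you expect, drop -n and optionally add -v to log what was done. Note this is the Perl rename ; on systems where rename comes from util-linux, the s/// syntax will not work at all:

$ rename -v 's/[ @\$]/_/g' *
a @ test renamed as a___test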
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/216659", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/29327/" ] }
216,665
I need to do log analysis and I am facing the problem that cat access.log* displays the latest log file first. I tried sorting the logs, which takes a lot of time because I have logs from a couple of years back. And they are already sorted, just displayed in a different order than I need. So I need to display the files' contents in the following order:

access.log.4
access.log.3
...
access.log

How do I achieve that?
Try this:

ls -rt access.log* | xargs cat

First list the files from oldest to newest, and then cat each one of them.
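One caveat: ls -rt depends on the modification times still being in rotation order. If the timestamps have been disturbed, sorting by the numeric suffix is safer; with GNU ls, -v does a version sort and tac reverses it (piping through xargs is fine here since the names contain no whitespace):

ls -v access.log* | tac | xargs cat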
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/216665", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/104762/" ] }
216,697
Following an unclean shutdown and a colourful fsck, a whole bunch of files have gone missing. The output of 'rpm -Va' is several hundred lines long (mostly missing files but also some checksum and other mismatches). Is there an easy way to reinstall packages which have missing and/or corrupt files? The OS in question is Fedora 22.
For reference and completeness sake, one command that would be able to achieve what was initially asked for would be something like this (quickly fiddled together, but it's working):

rpm -qf $(rpm -Va 2>&1 | grep -vE '^$|prelink:' | sed 's|.* /|/|') | sort -u

Here's a short explanation of the various parts:

rpm -Va 2>&1 : will run a complete verification on all packages currently installed / listed in the rpm database. It will also redirect stderr to stdout, as here on my box some errors which are caused by prelink being enabled are reported as errors but we want them on stdout. Attention: needs to run as root to be able to check all files, permissions and owner/group.

grep -vE '^$|prelink:' : suppresses display of empty lines and of the prelink errors (example of such an error: prelink: /tmp/#prelink#.B14JBi: Recorded 10 dependencies, now seeing -1 )

sed 's|.* /|/|' : will filter the rpm -Va output to only show filenames

rpm -qf $() : will query, for all the obtained filenames, which package they are contained in, and output the package name and version

| sort -u : will suppress duplicate package name/version combinations.

Altogether you will receive a list of packages which failed verification. rpm -Va might still show some unrelated issues, as it also checks dependencies between packages, which might need to be suppressed by adding --nodeps .
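To then actually reinstall the affected packages on Fedora 22 (which ships dnf ), the list can be fed straight back in. Treat this as a sketch and review the package list before running it; the whole pipeline runs as root so that rpm -Va can verify everything:

sudo sh -c 'dnf reinstall $(rpm -qf $(rpm -Va 2>&1 | grep -vE "^$|prelink:" | sed "s|.* /|/|") | sort -u)'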
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/216697", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/36553/" ] }
216,723
After starting to use Arch I switched to the xterm and urxvt terminals and enjoyed the flexibility of them (esp. the scaleHeight resource). However, I don't understand why sometimes configs work with Xterm and sometimes with xterm or XTerm (lower vs upper case x and t ). For example, I have this odd config that is working on Ubuntu:

XTerm*faceName: terminus
XTerm*faceSize: 11
XTerm*saveLines: 16384
XTerm*loginShell: true
XTerm*charClass: 33:48,35:48,37:48,43:48,45-47:48,64:48,95:48,126:48
XTerm*termName: xterm-color
XTerm*eightBitInput: false
xterm*VT100.geometry: 100x80   ! <------ this line would not work with "Xterm" or "XTerm"
XTerm*scaleHeight: 1.3         ! <----- but all others work with "XTerm"

However, my Arch box runs on "xterm" fine. The same is true for the URxvt terminal: I can't simply port my Arch .Xresources to my Ubuntu 14 work box because parts of it stop working, and I get different setups after running:

xrdb -merge .Xresources

X.org's page on xterm did not have any examples (searching for "xterm*" did not return anything on that page). I see examples with Xterm , xterm and XTerm online... It baffles me that the config above works since it is syntactically off. Why is this the case? Does it maybe have something to do with new or old resources in X? Thanks!

xterm on Ubuntu is XTerm(297). I don't have access to my Arch box at the moment, but it would be pretty up-to-date. I don't know how to tell the Xterm version from Arch's repos, but maybe this: https://www.archlinux.org/packages/extra/i686/xterm/ So, if that link is right, then yes, I am running different Xterm versions. I tried upgrading xterm, but it is still 297.

apt-get update && apt-get install --only-upgrade xterm

I can't do it now, but I might try to recompile the latest version to see if the issue is there. Following the suggestions by ILMostro_7 below I tried XTerm.vt100.geometry , which still did not work. This is XTerm(297) on Ubuntu 14. So basically, with . or * it seems to only work with lowercase xt . Result of appres XTerm xterm | grep geometry (thanks to Gilles; I did not think to look up what exactly xrdb -merge does, which resulted in this mess). So my guess is that one of these takes precedence over everything else?

xterm.VT100.geometry: 100x100
xterm*VT100.geometry: 100x80
xterm*VT100*geometry: 50x50
xterm.vt100.geometry: 160x40
xterm*vt100.geometry: 100x20
xterm.geometry: 5x5
xterm*geometry: 100x20
XTerm.VT100.geometry: 100x100
XTerm*VT100.geometry: 50x50
XTerm*VT100*geometry: 20x10
XTerm.vt100.geometry: 100x5
XTerm*vt100.geometry: 40x40
XTerm*geometry: 50x50

In fact it looks like xterm.vt100.geometry: 160x40 takes precedence over the other ones, since that's the instance I keep getting. Also, I somehow managed to completely screw up the Xterm menus (Ctrl+mouse click) - they show up as a small yellow line. Hehe
X11 resources have a name which consists of a series of components separated by a dot, such as xterm.vt100.geometry . The first component is the name of the application, the second component is a widget in that application, and the last component is a property of the widget. Widgets can be nested, so there can be more than three components, or just two for a property of the application. Specifications of X resources can apply to a single resource or to a set of resources matching a pattern. There are two ways to make a specification apply to multiple resources. You can use a class name instead of an instance name for any component. Conventionally, instance names start with a lowercase letter while class names start with a capital letter. At the application level, the class name is usually fixed for a given application, typically to the capitalized application name, and sometimes other letters are also in uppercase, e.g. XTerm , XCalc , GV , NetHack , ... Applications using the X toolkit support an option -class to set the class name, as well -name to set the instance name (which defaults to the base name of the executable). For example XTerm.vt100.geometry sets a value of the geometry property for the vt100 widget of any instance of the XTerm class; it applies to xterm -name foo but not to xterm -class Foo . At the widget level, there can be multiple widgets with the same class, for example multiple buttons in the same window. Xterm has a single widget of class VT100 , called vt100 , which is the terminal emulator part that covers the whole window. Other widgets include the menus mainMenu , fontMenu and vtMenu of class SimpleMenu . There are wildcards: ? means “any widget”, and * means “any widget sequence”. For example xterm*background defines a background for absolutely everything inside the Xterm window. You can explore the resource tree of an application that supports the editres protocol with editres . Few applications support this protocol, but Xterm is one of them. It's possible for a given resource to be matched by multiple patterns. In this case, precedence rules apply. See the manual for the full rules. In your case, it's likely that there's another entry somewhere that is a closer match for xterm.vt100.geometry than xterm*VT100.geometry , and that match is overriding your setting. The others have no other setting so whatever you do wins.
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/216723", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/123852/" ] }
216,775
For the last 5 years, I have used linux as my everyday OS for performing scientific computing. My work recently gave me a Mac that I will be the primary user for the next few months. I keep running into conflicts between the Free-BSD bash environment on the Mac and the GNU environment I am used to, both with bash scripts I have setup and as I try to run bash commands ( coreutils , findutils , etc). I do not want to switch completely to the Free-BSD utilies as all my other computers as well as our HPC's use linux with the GNU utilities. I want to avoid having to maintain two sets of bash scripts and also having to remember the nuances of the differing flags and functionalities between the two systems. I also do not want to break any of Mac's gui utilites etc that other users will use (either in the next few months or when it is given to someone else). Additionally, responses to this related question warn against completely replacing the Mac Free-BSD utilities with GNU ones. Is it possible to install/setup a separate bash environment to use only the GNU utilities while leaving the system Free-BSD ones in place? My expect the most promising option is setting up my $PATH variable to point to a directory containing the GNU executables (with their standard names) while ignoring the Free-BSD ones. How could I apply this to my cross-platform bash scripts? Are there alternative options worth considering?
First, this is about a lot more than just coreutils . The BSD equivalent to GNU findutils is also quite different , pretty much every command related to dynamic linkage is different, etc. And then on top of that, you have to deal with versioning differences: OS X still ships a lot of older software to remain on GPL2 instead of GPL3, such as Bash 3.x instead of Bash 4.x. The autotools are also often outdated with respect to bleeding-edge Linux distros. The answer to the core part of your question is, "Sure, why not?" You can use Homebrew to install all of these alternative GNU tools, then put $(brew --prefix coreutils)/libexec/gnubin and /usr/local/bin first in your PATH to make sure they're found first:

export PATH="$(brew --prefix coreutils)/libexec/gnubin:/usr/local/bin:$PATH"

If for some reason brew installs a package to another location, also include that in the PATH variable. If you would rather only replace a few packages, the tricky bit is dealing with all the name changes. Whenever brew installs a program that already has an implementation in the core OS, such as when installing GNU coreutils , it names its version differently so that you can run either, depending on your need at the time. Instead of renaming all of these symlinks¹, I recommend that you fix all of this up with a layer of indirection:

$ mkdir ~/linux
$ cd ~/linux
$ ln -s /usr/local/bin/gmv mv
...etc for all the other tools you want to rename to cover OS versions
$ export PATH=$HOME/linux:$PATH
...try it out...

Once you're happy with your new environment, you can put export PATH=$HOME/linux:$PATH into your ~/.bash_profile . That takes care of interactive use, either with bulk replacement or single application replacement. Unfortunately, it does not completely solve the shell script problem, because sometimes shell scripts get their own environment, such as when launched from cron . In that case, you could modify the PATH at the top of each cross-platform shell script:

#!/bin/bash
export PATH="$HOME/linux:$(brew --prefix coreutils)/libexec/gnubin:/usr/local/bin:$PATH"

You do not need to make it conditional, since it is just offering the shell another place to look for programs.

Footnotes

1. e.g. /usr/local/bin/gmv → ../Cellar/coreutils/$version/bin/gmv

Related Posts

How to replace Mac OS X utilities with GNU core utilities?
Install and Use GNU Command Line Tools on Mac OS X
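As a starting point for the Homebrew route described above, the usual set of formulas (names as they exist in Homebrew; extend the list as needed):

brew install coreutils findutils gnu-sed gawk grep bash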
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/216775", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/83205/" ] }
216,776
I was wondering if there is a way to display the currently connected WiFi and/or ethernet network name in xmobar.
You probably want the Wireless plugin which comes with xmobar http://projects.haskell.org/xmobar/#wireless-interface-args-refreshrate In your config file, you'd have something like this in the commands list: Run Wireless "wlan0" [ "-t", "<essid>" ] 10
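Wired into a full xmobarrc, that might look like the sketch below; the monitor's alias is the interface name plus a "wi" suffix (so %wlan0wi% here), and the interface name itself is whatever yours is called:

Config { ...
       , commands = [ Run Wireless "wlan0" [ "-t", "<essid>" ] 10 ]
       , template = " %wlan0wi% | ..."
       }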
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/216776", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/17263/" ] }
216,784
There's a great answer on StackOverflow about providing a better lock for daemons (synthesized from Eduardo Fleury ) that doesn't depend on the common PID file lock mechanism for daemons. There are lots of good comments there about why PID lock files can sometimes cause problems, so I won't rehash them here. In short, the solution relies on Linux abstract namespace domain sockets, which keeps track of the sockets by name for you, rather than relying on files, which can stick around after the daemon is SIGKILL'd. The example shows that Linux seems to free the socket once the process is dead. But I can't find definitive documentation in Linux that says what exactly Linux does with the abstract socket when the bound process is SIGKILL'd. Does anyone know? Put another way, when precisely is the abstract socket freed to be used again? I don't want to replace the PID file mechanism with abstract sockets unless it definitively solves the problem.
Yes, linux automatically "cleans up" abstract sockets to the extent that cleaning up even makes sense. Here's a minimal working example with which you can verify this:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <sys/socket.h>
#include <sys/un.h>

int
main(int argc, char **argv)
{
    int s;
    struct sockaddr_un sun;

    if (argc != 2 || strlen(argv[1]) + 1 > sizeof(sun.sun_path)) {
        fprintf(stderr, "usage: %s abstract-path\n", argv[0]);
        exit(1);
    }
    s = socket(AF_UNIX, SOCK_STREAM, 0);
    if (s < 0) {
        perror("socket");
        exit(1);
    }
    memset(&sun, 0, sizeof(sun));
    sun.sun_family = AF_UNIX;
    strcpy(sun.sun_path + 1, argv[1]);
    if (bind(s, (struct sockaddr *) &sun, sizeof(sun))) {
        perror("bind");
        exit(1);
    }
    pause();
}

Run this program as ./a.out /test-socket & , then run ss -ax | grep test-socket , and you will see the socket in use. Then kill %./a.out , and ss -ax will show the socket is gone. However, the reason you can't find this clean-up in any documentation is that it isn't really cleaning up in the same sense that non-abstract unix-domain sockets need cleaning up. A non-abstract socket actually allocates an inode and creates an entry in a directory, which needs to be cleaned up in the underlying file system. By contrast, think of an abstract socket more like a TCP or UDP port number. Sure, if you bind a TCP port and then exit, that TCP port will be free again. But whatever 16-bit number you used still exists abstractly and always did. The namespace of port numbers is 1-65535 and never changes or needs cleaning. So just think of the abstract socket name like a TCP or UDP port number, just picked from a much larger set of possible port numbers that happen to look like pathnames but are not. You can't bind the same port number twice (barring SO_REUSEADDR or SO_REUSEPORT ). But closing the socket (explicitly or implicitly by terminating) frees the port, with nothing left to clean up.
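A sample session (the compiler invocation and the ss output are approximate; abstract sockets show up in the NAME column with a leading @):

$ cc -o abstest abstest.c      # the example above, saved under an arbitrary name
$ ./abstest /test-socket &
[1] 12345
$ ss -ax | grep test-socket
u_str  UNCONN  0  0  @/test-socket 123456  * 0
$ kill %1
$ ss -ax | grep test-socket
$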
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/216784", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/84008/" ] }
216,819
I'm trying to add another line at the end of the last line in the VI editor and I need to go to the very last of the last line (command $) to insert a new line by hitting the enter key. But the cursor stays on the last character: if I have to hit enter on this last character, it makes the last character of the last line go to the next line. That is not what I need. I just need to insert a line by hitting the enter key. Operating system: Solaris X11
e is used to go to end of word. You should use $ to go to end of line. You can insert another line from the current position by using o (for open). You can also use A to append something to the end of line from anywhere on the line.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/216819", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/77103/" ] }
216,829
I have recently installed Arch Linux x64 and I wanted to install the LAMP stack. Everything worked fine until I arrived at the MySQL part, which I installed but can't launch. The output of sudo systemctl start mysqld gives:

Job for mysqld.service failed because a timeout was exceeded. See "systemctl status mysqld.service" and "journalctl -xe" for details.

and here is the systemctl status mysqld.service output:

* mysqld.service - MariaDB database server
   Loaded: loaded (/usr/lib/systemd/system/mysqld.service; disabled; vendor preset: disabled)
   Active: activating (start-post) (Result: exit-code) since Fri 2015-07-17 22:31:04 CET; 20s ago
  Process: 9548 ExecStart=/usr/bin/mysqld --pid-file=/run/mysqld/mysqld.pid (code=exited, status=1/FAILURE)
 Main PID: 9548 (code=exited, status=1/FAILURE);         : 9549 (mysqld-post)
   CGroup: /system.slice/mysqld.service
           `-control
             |-9549 /bin/sh /usr/bin/mysqld-post
             `-9743 sleep 1

Jul 17 22:31:04 sn4k3 systemd[1]: Starting MariaDB database server...
Jul 17 22:31:04 sn4k3 mysqld[9548]: 150717 22:31:04 [Note] /usr/bin/mysqld (mysqld 10.0.20-MariaDB-log) starting as process 9548 ...
Jul 17 22:31:04 sn4k3 mysqld[9548]: 150717 22:31:04 [Warning] Can't create test file /var/lib/mysql/sn4k3.lower-test
Jul 17 22:31:04 sn4k3 mysqld[9548]: [96B blob data]
Jul 17 22:31:04 sn4k3 mysqld[9548]: 150717 22:31:04 [ERROR] Aborting
Jul 17 22:31:04 sn4k3 mysqld[9548]: 150717 22:31:04 [Note] /usr/bin/mysqld: Shutdown complete
Jul 17 22:31:04 sn4k3 systemd[1]: mysqld.service: Main process exited, code=exited, status=1/FAILURE
Found the solution; you just have to run this command:

sudo mysql_install_db --user=mysql --basedir=/usr/ --ldata=/var/lib/mysql/

Source: Arch Linux wiki
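If it still fails with the "Can't create test file /var/lib/mysql/..." warning shown in the status output, the ownership of the data directory is worth checking too; a common fix (assuming the default datadir) is:

sudo chown -R mysql:mysql /var/lib/mysql
sudo systemctl start mysqld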
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/216829", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/52413/" ] }
216,856
I'm trying to install wine on Linux Mint 17.1. I ./configure'd as root to install it. Input: cd Downloads/wine-1.6.2./configure Output: checking build system type... x86_64-unknown-linux-gnuchecking host system type... x86_64-unknown-linux-gnuchecking whether make sets $(MAKE)... yeschecking for gcc... gccchecking whether the C compiler works... noconfigure: error: in `/home/(my username)/Downloads/wine-1.6.2':configure: error: C compiler cannot create executablesSee `config.log' for more details config.log: This file contains any messages produced by compilers while running configure, to aid debugging if configure makes a mistake. It was created by Wine configure 1.6.2, which was generated by GNU Autoconf 2.69. Invocation command line was $ ./configure ## --------- ## ## Platform. ## ## --------- ## hostname = Math2 uname -m = x86_64 uname -r = 3.13.0-37-generic uname -s = Linux uname -v = #64-Ubuntu SMP Mon Sep 22 21:28:38 UTC 2014 /usr/bin/uname -p = unknown /bin/uname -X = unknown /bin/arch = unknown /usr/bin/arch -k = unknown /usr/convex/getsysinfo = unknown /usr/bin/hostinfo = unknown /bin/machine = unknown /usr/bin/oslevel = unknown /bin/universe = unknown PATH: /usr/local/sbin PATH: /usr/local/bin PATH: /usr/sbin PATH: /usr/bin PATH: /sbin PATH: /bin PATH: /usr/games PATH: /usr/local/games ## ----------- ## ## Core tests. ## ## ----------- ## configure:2879: checking build system type configure:2893: result: x86_64-unknown-linux-gnu configure:2913: checking host system type configure:2926: result: x86_64-unknown-linux-gnu configure:2956: checking whether make sets $(MAKE) configure:2978: result: yes configure:3035: checking for gcc configure:3051: found /usr/bin/gcc configure:3062: result: gcc configure:3291: checking for C compiler version configure:3300: gcc --version >&5 gcc (Ubuntu 4.8.2-19ubuntu1) 4.8.2 Copyright (C) 2013 Free Software Foundation, Inc. This is free software; see the source for copying conditions. There is NO warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. configure:3311: $? = 0 configure:3300: gcc -v >&5 Using built-in specs. COLLECT_GCC=gcc COLLECT_LTO_WRAPPER=/usr/lib/gcc/x86_64-linux-gnu/4.8/lto-wrapper Target: x86_64-linux-gnu Configured with: ../src/configure -v --with-pkgversion='Ubuntu 4.8.2-19ubuntu1' --with-bugurl=file:///usr/share/doc/gcc-4.8/README.Bugs --enable-languages=c,c++,java,go,d,fortran,objc,obj-c++ --prefix=/usr --program-suffix=-4.8 --enable-shared --enable-linker-build-id --libexecdir=/usr/lib --without-included-gettext --enable-threads=posix --with-gxx-include-dir=/usr/include/c++/4.8 --libdir=/usr/lib --enable-nls --with-sysroot=/ --enable-clocale=gnu --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-gnu-unique-object --disable-libmudflap --enable-plugin --with-system-zlib --disable-browser-plugin --enable-java-awt=gtk --enable-gtk-cairo --with-java-home=/usr/lib/jvm/java-1.5.0-gcj-4.8-amd64/jre --enable-java-home --with-jvm-root-dir=/usr/lib/jvm/java-1.5.0-gcj-4.8-amd64 --with-jvm-jar-dir=/usr/lib/jvm-exports/java-1.5.0-gcj-4.8-amd64 --with-arch-directory=amd64 --with-ecj-jar=/usr/share/java/eclipse-ecj.jar --enable-objc-gc --enable-multiarch --disable-werror --with-arch-32=i686 --with-abi=m64 --with-multilib-list=m32,m64,mx32 --with-tune=generic --enable-checking=release --build=x86_64-linux-gnu --host=x86_64-linux-gnu --target=x86_64-linux-gnu Thread model: posix gcc version 4.8.2 (Ubuntu 4.8.2-19ubuntu1) configure:3311: $? 
= 0 configure:3300: gcc -V >&5 gcc: error: unrecognized command line option '-V' gcc: fatal error: no input files compilation terminated. configure:3311: $? = 4 configure:3300: gcc -qversion >&5 gcc: error: unrecognized command line option '-qversion' gcc: fatal error: no input files compilation terminated. configure:3311: $? = 4 configure:3331: checking whether the C compiler works configure:3353: gcc conftest.c >&5 /usr/bin/ld: cannot find crt1.o: No such file or directory /usr/bin/ld: cannot find crti.o: No such file or directory /usr/bin/ld: cannot find -lc /usr/bin/ld: cannot find crtn.o: No such file or directory collect2: error: ld returned 1 exit status configure:3357: $? = 1 configure:3395: result: no configure: failed program was: | /* confdefs.h */ | #define PACKAGE_NAME "Wine" | #define PACKAGE_TARNAME "wine" | #define PACKAGE_VERSION "1.6.2" | #define PACKAGE_STRING "Wine 1.6.2" | #define PACKAGE_BUGREPORT "[email protected]" | #define PACKAGE_URL "http://www.winehq.org" | /* end confdefs.h. */ | | int | main () | { | | ; | return 0; | } configure:3400: error: in `/home/(my username)/Desktop/Other_Games/wine-1.6.2': configure:3402: error: C compiler cannot create executables See `config.log' for more details ## ---------------- ## ## Cache variables. ## ## ---------------- ## ac_cv_build=x86_64-unknown-linux-gnu ac_cv_env_CCC_set= ac_cv_env_CCC_value= ac_cv_env_CC_set= ac_cv_env_CC_value= ac_cv_env_CFLAGS_set= ac_cv_env_CFLAGS_value= ac_cv_env_CPPFLAGS_set= ac_cv_env_CPPFLAGS_value= ac_cv_env_CPP_set= ac_cv_env_CPP_value= ac_cv_env_CXXFLAGS_set= ac_cv_env_CXXFLAGS_value= ac_cv_env_CXX_set= ac_cv_env_CXX_value= ac_cv_env_DBUS_CFLAGS_set= ac_cv_env_DBUS_CFLAGS_value= ac_cv_env_DBUS_LIBS_set= ac_cv_env_DBUS_LIBS_value= ac_cv_env_FREETYPE_CFLAGS_set= ac_cv_env_FREETYPE_CFLAGS_value= ac_cv_env_FREETYPE_LIBS_set= ac_cv_env_FREETYPE_LIBS_value= ac_cv_env_GNUTLS_CFLAGS_set= ac_cv_env_GNUTLS_LIBS_value= ac_cv_env_GPHOTO2_CFLAGS_set= ac_cv_env_GPHOTO2_CFLAGS_value= ac_cv_env_GPHOTO2_LIBS_set= ac_cv_env_GPHOTO2_LIBS_value= ac_cv_env_GPHOTO2_PORT_CFLAGS_set= ac_cv_env_GPHOTO2_PORT_CFLAGS_value= ac_cv_env_GPHOTO2_PORT_LIBS_set= ac_cv_env_GPHOTO2_PORT_LIBS_value= ac_cv_env_GSTREAMER_CFLAGS_set= ac_cv_env_GSTREAMER_CFLAGS_value= ac_cv_env_GSTREAMER_LIBS_set= ac_cv_env_GSTREAMER_LIBS_value= ac_cv_env_HAL_CFLAGS_set= ac_cv_env_HAL_CFLAGS_value= ac_cv_env_HAL_LIBS_set= ac_cv_env_HAL_LIBS_value= ac_cv_env_LCMS2_CFLAGS_set= ac_cv_env_LCMS2_CFLAGS_value= ac_cv_env_LCMS2_LIBS_set= ac_cv_env_LCMS2_LIBS_value= ac_cv_env_LDFLAGS_set= ac_cv_env_LDFLAGS_value= ac_cv_env_LIBS_set= ac_cv_env_LIBS_value= ac_cv_env_PNG_CFLAGS_set= ac_cv_env_PNG_CFLAGS_value= ac_cv_env_PNG_LIBS_set= ac_cv_env_PNG_LIBS_value= ac_cv_env_SANE_CFLAGS_set= ac_cv_env_SANE_CFLAGS_value= ac_cv_env_SANE_LIBS_set= ac_cv_env_SANE_LIBS_value= ac_cv_env_XMKMF_set= ac_cv_env_XMKMF_value= ac_cv_env_XML2_CFLAGS_set= ac_cv_env_XML2_CFLAGS_value= ac_cv_env_XML2_LIBS_set= ac_cv_env_XML2_LIBS_value= ac_cv_env_XSLT_CFLAGS_set= ac_cv_env_XSLT_CFLAGS_value= ac_cv_env_XSLT_LIBS_set= ac_cv_env_XSLT_LIBS_value= ac_cv_env_build_alias_set= ac_cv_env_build_alias_value= ac_cv_env_host_alias_set= ac_cv_env_host_alias_value= ac_cv_env_target_alias_set= ac_cv_env_target_alias_value= ac_cv_host=x86_64-unknown-linux-gnu ac_cv_prog_ac_ct_CC=gcc ac_cv_prog_make_make_set=yes ## ----------------- ## ## Output variables. 
## ## ----------------- ## ALL_TEST_RESOURCES='' ALSALIBS='' APPKITLIB='' APPLICATIONSERVICESLIB='' AR='' ARFLAGS='' BISON='' BUILTINFLAG='' CARBONLIB='' CC='gcc' CFLAGS='' CONVERT='' COREAUDIO='' COREFOUNDATIONLIB='' CORESERVICESLIB='' CPP='' CPPBIN='' CPPFLAGS='' CROSSCC='' CROSSTARGET='' CROSSTEST_DISABLE='' CUPSINCL='' CXX='' CXXFLAGS='' DBUS_CFLAGS='' DBUS_LIBS='' DEFS='' DISKARBITRATIONLIB='' DLLEXT='' DLLFLAGS='' DLLTOOL='' ECHO_C='' ECHO_N='-n' ECHO_T='' EGREP='' EXEEXT='' EXTRACFLAGS='' EXTRA_BINARIES='' FLEX='' FONTCONFIGINCL='' FONTFORGE='' FORCEFEEDBACKLIB='' FRAMEWORK_OPENAL='' FREETYPE_CFLAGS='' FREETYPE_LIBS='' GNUTLS_CFLAGS='' GNUTLS_LIBS='' GPHOTO2_CFLAGS='' GPHOTO2_LIBS='' GPHOTO2_PORT_CFLAGS='' GPHOTO2_PORT_LIBS='' GREP='' GSTREAMER_CFLAGS='' GSTREAMER_LIBS='' HAL_CFLAGS='' HAL_LIBS='' ICOTOOL='' IMPLIBEXT='' INSTALL_DATA='' INSTALL_PROGRAM='' INSTALL_SCRIPT='' IOKITLIB='' LCMS2_CFLAGS='' LCMS2_LIBS='' LDAPLIBS='' LDCONFIG='' LDD='' LDDLLFLAGS='' LDEXECFLAGS='' LDFLAGS='' LDPATH='' LDRPATH_INSTALL='' LDRPATH_LOCAL='' LIBDL='' LIBGETTEXTPO='' LIBKSTAT='' LIBMPG123='' LIBOBJS='' LIBOPENAL='' LIBOPENCL='' LIBPOLL='' LIBPTHREAD='' LIBRT='' LIBS='' LIBWINE_RULES='' LINGUAS='' LINT='' LINTFLAGS='' LN_S='' LTLIBOBJS='' MAINTAINER_MODE='' MAIN_BINARY='' MSGFMT='' OBJEXT='' OPENGL_LIBS='' OSS4INCL='' PACKAGE_BUGREPORT='[email protected]' PACKAGE_NAME='Wine' PACKAGE_STRING='Wine 1.6.2' PACKAGE_TARNAME='wine' PACKAGE_URL='http://www.winehq.org' PACKAGE_VERSION='1.6.2' PATH_SEPARATOR=':' PKG_CONFIG='' PNG_CFLAGS='' PNG_LIBS='' PORCFLAGS='' PRELINK='' QUICKTIMELIB='' RANLIB='' READELF='' RESOLVLIBS='' RSVG='' SANE_CFLAGS='' SANE_LIBS='' SECURITYLIB='' SET_MAKE='' SHELL='/bin/bash' SOCKETLIBS='' TARGETFLAGS='' TOOLSDIR='' TOOLSEXT='' UNWINDFLAGS='' WOW64_DISABLE='' XLIB='' XMKMF='' XML2_CFLAGS='' XML2_LIBS='' XSLT_CFLAGS='' XSLT_LIBS='' X_CFLAGS='' X_EXTRA_LIBS='' X_LIBS='' X_PRE_LIBS='' ZLIB='' ac_ct_AR='' ac_ct_CC='gcc' ac_ct_CXX='' bindir='${exec_prefix}/bin' build='x86_64-unknown-linux-gnu' build_alias='' build_cpu='x86_64' build_os='linux-gnu' build_vendor='unknown' datadir='${datarootdir}' datarootdir='${prefix}/share' docdir='${datarootdir}/doc/${PACKAGE_TARNAME}' dvidir='${docdir}' exec_prefix='NONE' host='x86_64-unknown-linux-gnu' host_alias='' host_cpu='x86_64' host_os='linux-gnu' host_vendor='unknown' htmldir='${docdir}' includedir='${prefix}/include' infodir='${datarootdir}/info' libdir='${exec_prefix}/lib' libexecdir='${exec_prefix}/libexec' localedir='${datarootdir}/locale' localstatedir='${prefix}/var' mandir='${datarootdir}/man' oldincludedir='/usr/include' pdfdir='${docdir}' prefix='NONE' program_transform_name='s,x,x,' psdir='${docdir}' sbindir='${exec_prefix}/sbin' sharedstatedir='${prefix}/com' sysconfdir='${prefix}/etc' target_alias='' ## ------------------- ## ## File substitutions. ## ## ------------------- ## MAKE_DLL_RULES='' MAKE_IMPLIB_RULES='' MAKE_PROG_RULES='' MAKE_RULES='' MAKE_TEST_RULES='' ## ----------- ## ## confdefs.h. ## ## ----------- ## /* confdefs.h */ #define PACKAGE_NAME "Wine" #define PACKAGE_TARNAME "wine" #define PACKAGE_VERSION "1.6.2" #define PACKAGE_STRING "Wine 1.6.2" #define PACKAGE_BUGREPORT "[email protected]" #define PACKAGE_URL "http://www.winehq.org" configure: exit 77 (END) I've lurked through here and other places around the internet to find an answer with no luck, so here I am.
Don't worry about these errors:

gcc: error: unrecognized command line option '-V'

and

gcc: error: unrecognized command line option '-qversion'

Those are unsuccessful probes, but the configure script perseveres after them. Do worry about these:

/usr/bin/ld: cannot find crt1.o: No such file or directory
/usr/bin/ld: cannot find crti.o: No such file or directory
/usr/bin/ld: cannot find -lc
/usr/bin/ld: cannot find crtn.o: No such file or directory

Those files are part of the libc6-dev package and are required in order to build any type of normal executable. You are probably missing that package. Try installing it (or reinstalling it if it is already installed, in case it is broken). Better yet, install the build-essential package. That's a meta-package that will pull in all of the bare essentials for compiling things.
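For instance, on a Debian-based system such as Mint, that would look roughly like this (a sketch of the package installation described above):

# restore the C runtime start files (crt1.o, crti.o, crtn.o) and libc
sudo apt-get update
sudo apt-get install --reinstall libc6-dev
# or pull in the whole basic toolchain in one go
sudo apt-get install build-essential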
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/216856", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/123939/" ] }
216,863
I have found a long list of free email providers that I want to remove from my email lists - https://gist.github.com/tbrianjones/5992856 Below are two commands I currently use that do the same job for a handful of domains or a single domain entry. However, how can I convert these to import the words from another file ( remove.txt for example) rather than adding all of them manually?

ruby -rcsv -i -ne 'row = CSV::parse_line($_); puts $_ unless row[2] =~ /gmail|hotmail|qq.com|yahoo|live.com|comcast.com|icloud.com|aol.co/i' All.txt
sed -i '/^[^,]*,[^,]*hotmail/d' All.txt

Below is a line of the data we will be using this on:

"fox*******","scott@sc***h.com","821 Ke****on Rd","Neenah","Wisconsin","54***6","UNITED STATES"
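One possible approach, sketched here under the assumption that remove.txt holds one domain per line, is to let grep read its patterns straight from that file:

# -v invert the match, -i ignore case, -F fixed strings, -f read patterns from a file
grep -viFf remove.txt All.txt > kept.txt

Note that this matches the listed strings anywhere on the line, not only in the email column, and a blank line in remove.txt would match (and so drop) every record, so make sure the file has none. The output name kept.txt is arbitrary.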
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/216863", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/102085/" ] }
216,891
How do I make sudo remember my password for longer so that I don't have to keep typing it? I do not want to sudo su and execute commands as root all the time. I am on Arch Linux and have tried to google this but I get examples to change my password, which is not what I'm after.
There is a timestamp_timeout option in your /etc/sudoers . You can set this option to a number of minutes; after that time it will ask for the password again. More info in man sudoers . And make sure you edit your sudoers file using visudo, which checks your syntax and will not leave you with a broken configuration and an inaccessible sudo .
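For example, to have sudo cache your credentials for 30 minutes, a sketch of the line to add via visudo:

# value is in minutes; 0 always prompts, a negative value never times out
Defaults timestamp_timeout=30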
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/216891", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/110915/" ] }
216,910
I have a collection of bash scripts and I want to put some common shell options and variable declarations into a "setup.sh" script which would get sourced at the beginning of each script. My directory structure is like:

├── includes
│   └── setup.sh
└── server_config
    ├── build_server_core.sh
    ├── install_fail2ban.sh

Because the scripts may be run from different computers/environments I can't simply use a hardcoded path to the setup.sh. Is there a one-line command to source a script in a directory relative to the running script?
First get the directory of the script itself and then use relative paths like that:

DIR=$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )
source "$DIR/../includes/setup.sh"

For more info about finding the correct directory have a look at https://stackoverflow.com/questions/59895
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/216910", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/106525/" ] }
216,916
How can I compare all files in two directories and copy the ones that are different to another directory? For example, say we have dir1 and dir2 :

dir1:
    build.gradle
    gradle.properties
    somejar.jar
    javacode.java
    anotherjar.jar

dir2:
    build.gradle        <-- different from build.gradle in dir1
    gradle.properties
    somejar.jar
    javacode.java       <-- different from javacode.java in dir1
    yetanotherjar.jar

How may I create a new directory dir3 that contains the different files from dir2 , the common files in dir1 and dir2 and all uncommon files in both dir1 and dir2 ? dir3 should contain:

dir3:
    build.gradle        <-- from dir2
    gradle.properties   <-- these are common files both in dir1 and dir2
    somejar.jar         <--
    javacode.java       <-- from dir2
    anotherjar.jar      <-- from dir1
    yetanotherjar.jar   <-- from dir2
All you need is cp -n dir2/* dir1/* dir3/
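Spelled out, with the ordering that makes it work (a sketch):

mkdir dir3                 # cp will not create the target directory itself
cp -n dir2/* dir1/* dir3/  # dir2's files are copied first; -n ("no clobber")
                           # then stops dir1's duplicates from overwriting them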
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/216916", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/82521/" ] }
216,925
I am using Debian GNU/Linux 7.8 (wheezy). While running my MATLAB program today, I got this message in the terminal:

Message from syslogd@sas21 at Jul 18 16:40:49 ...
 kernel:[1747708.091929] Uhhuh. NMI received for unknown reason 20 on CPU 4.
Message from syslogd@sas21 at Jul 18 16:40:49 ...
 kernel:[1747708.091932] Do you have a strange power saving mode enabled?
Message from syslogd@sas21 at Jul 18 16:40:49 ...
 kernel:[1747708.091932] Dazed and confused, but trying to continue

I also remember hearing some beep sound in between. What does this mean? And what should I do about it?
The problem seems to be that the End of Interrupt isn't communicated properly. For libvirt, make sure eoi is enabled:

<domain>
  …
  <features>
    <apic eoi='on'/>
    …

On the command line for KVM that translates to:

-cpu …,+kvm_pv_eoi

This seems to work for us with -M q35 , host cpu passthrough and default config otherwise (RTC interrupts queued, PIT interrupts dropped, HPET unavailable).
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/216925", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/102841/" ] }
216,927
I am running Linux Mint 17.1, Cinnamon Desktop, Compiz, and XScreenSaver. I want to be able to use the Desktop Lock Screen and turn off the XScreenSaver Lock Screen. Turning off the Lock Screen in the XScreenSaver GUI deactivates all Screen Locking.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/216927", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/123999/" ] }
216,950
Whenever I send the command to turn off or restart my Debian servers, my shell is left hanging and unresponsive (can't type any commands). Performing the same action in Ubuntu results in the session gracefully closing so I don't have a tied-up terminal left hanging there. Is there a package I need to install or a configuration change to be made so that I can get this same behaviour on Debian?
This worked for me:

apt-get install libpam-systemd dbus

Also make sure that you have UsePAM yes in your ssh config:

grep -i UsePAM /etc/ssh/sshd_config

Unfortunately, you need to reboot for the solution to take effect... Detailed explanations on serverfault .
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/216950", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/64440/" ] }
216,953
I'm new to bash and would like my prompt to show something that was trivial in tcsh, yet after a good Google search I still cannot reproduce. I would like my prompt to include only the current and parent directories, like this:

/parent/currentdir $

In tcsh this is achieved by:

set prompt = "%C2 %"

However, in bash so far I have only found that I have to parse pwd to obtain the same output. Isn't there a simpler way, like doing:

export PS1="$(some_command) $"
Bash's prompt control features are rather static. If you want more control, you can include variables in your prompt; make sure you haven't turned off the promptvars option .

PS1='${PWD#"${PWD%/*/*}/"} \$ '

Note the single quotes: the variable expansions must happen at the time the prompt is displayed, not at the time the PS1 variable is defined. If you want more control over what is displayed, you can use command substitutions. For example, the snippet above loses the ~ abbreviation for the home directory.

PS1='$(case $PWD in
         $HOME)     HPWD="~";;
         $HOME/*/*) HPWD="${PWD#"${PWD%/*/*}/"}";;
         $HOME/*)   HPWD="~/${PWD##*/}";;
         /*/*/*)    HPWD="${PWD#"${PWD%/*/*}/"}";;
         *)         HPWD="$PWD";;
       esac; printf %s "$HPWD") \$ '

This code is rather cumbersome, so instead of sticking it into the PS1 variable, you can use the PROMPT_COMMAND variable to run code to set HPWD and then use that in your prompt.

PROMPT_COMMAND='case $PWD in
                  $HOME)     HPWD="~";;
                  $HOME/*/*) HPWD="${PWD#"${PWD%/*/*}/"}";;
                  $HOME/*)   HPWD="~/${PWD##*/}";;
                  /*/*/*)    HPWD="${PWD#"${PWD%/*/*}/"}";;
                  *)         HPWD="$PWD";;
                esac'
PS1='$HPWD \$'

Since the shortened prompt only changes on a directory change, you don't need to recalculate it each time a prompt is displayed. Bash doesn't provide a hook that runs on a current directory change, but you can simulate it by overriding cd and its cousins.

cd ()    { builtin cd "$@" && chpwd; }
pushd () { builtin pushd "$@" && chpwd; }
popd ()  { builtin popd "$@" && chpwd; }
chpwd () {
  case $PWD in
    $HOME)     HPWD="~";;
    $HOME/*/*) HPWD="${PWD#"${PWD%/*/*}/"}";;
    $HOME/*)   HPWD="~/${PWD##*/}";;
    /*/*/*)    HPWD="${PWD#"${PWD%/*/*}/"}";;
    *)         HPWD="$PWD";;
  esac
}
PS1='$HPWD \$'

Note that you don't need to, and should not, export PS1 , since it's a shell setting, not an environment variable. A bash PS1 setting wouldn't be understood by other shells.

P.S. If you want a nice interactive shell experience, switch to zsh , where all of these (prompt % expansions largely encompassing tcsh's, chpwd , etc.) are native features.

PS1='%2~ %# '
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/216953", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/104149/" ] }
216,993
Ok it seems an easy thing but I just couldn't make it work after lots and lots of searching. I have Ubuntu 14.04. This is the config.json file including the configuration of my friend's server running shadowsocks:

{
    "server": "ip address",
    "server_port": 23,
    "local_port": 1234,
    "password": "password",
    "timeout": 600,
    "method": "aes-256-cfb"
}

I do sslocal -c config.json and successfully connect to the server. Now for instance I can make it work with google-chrome using the following command:

google-chrome --proxy-server="socks5://127.0.0.1:1234" --host-resolver-rules="MAP * 0.0.0.0 , EXCLUDE localhost"

The question is how can I make the whole internet connection go through that server first, so that every single application can use it by default; something you can activate and deactivate simply. Things I tried that failed:

Using tsocks -> https://askubuntu.com/questions/532375/launch-program-through-shadowsocks
Using iptables as sudo iptables -t nat -A OUTPUT -p tcp --dport 80 -j DNAT --to-destination 127.0.0.1:1234 -> http://adminsgoodies.com/configuring-ubuntu-for-global-socks5-proxy/
Using System Settings -> Network -> Network Proxy along with dconf-tools to exclude hosts -> https://askubuntu.com/questions/70245/how-do-i-make-the-system-wide-proxy-settings-bypass-the-proxy-for-local-addresse
There is no general way to tunnel all Internet traffic through a SOCKS proxy. However, there are specific ways for quite a few protocols, but not all of them. SOCKS5 supports TCP and UDP traffic, but not ICMP as far as I know. So, you cannot use traditional ping through such a proxy, for example. Here's a list of proxying clients (most of them support SOCKS5): Link I have personally tried the client portion of Dante and proxychains (the original, not the -ng successor) on Linux, and both worked for me (proxychains proved to be a little bit more stable). Both work by redirecting the socket API requests from the application (so they have to set the LD_PRELOAD environment variable for the application) to their own library. This may pose a problem when the application uses a setuid binary, as LD_PRELOAD and the setuid feature are incompatible security-wise. Also, not every network-related API function is redirected, so some strange applications could face problems (e.g. when the application wants to get a list of network interfaces and IP addresses). These LD_PRELOAD-type proxy clients are not generally designed to work on a system level; you are supposed to change the way of launching the application instead (by prefixing the application with the proxy client). In theory, you could set the LD_PRELOAD environment variable for your whole system or login session, and it might even work for some cases; however, you could run into subtle problems. Also, you cannot easily switch the redirection on or off without restarting the applications.
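As an illustration of the LD_PRELOAD approach with proxychains, a sketch that assumes the shadowsocks client from the question is listening on 127.0.0.1:1234:

# append to the [ProxyList] section of /etc/proxychains.conf:
#   socks5 127.0.0.1 1234
# then launch individual programs through the proxy, e.g.:
proxychains wget http://example.com/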
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/216993", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/124064/" ] }
217,010
I copied this line from /proc/some_proc_id/cmdline on my Ubuntu machine:

java^@-jar^@/usr/lib/selenium/selenium-server-standalone.jar^@-port^@4444^@-trustAllSSLCertificates^@

Somehow, the space characters are represented by ^@ in vi. I tried to replace them with space characters using the command:

:%s#^@# #g

But it says pattern not found: ^@ . How can one replace special characters, particularly those that start with the caret symbol?
Somehow, the space characters are represented by ^@ in vi. It's not vi that did that. Although you type command lines in shells with spaces between the arguments, command lines are actually discrete sequences of strings internally, not one long space-separated string. The shell separated the command line into individual argument strings before the command was launched. In C, strings are terminated with NUL characters and those are shown as ^@ . How can one replace special characters particularly those that start with carat symbol? In order to type those characters, you must prefix them with Control - v for literal next character. For example in this case: Control - v followed by Control - @ . The special character that introduces literal next characters is normally Control - v but it is actually configurable. Type stty -a to find out what it is set to. Look for lnext in the output.
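If the goal is just to read the file rather than edit it in vi, translating the NULs directly is simpler (a sketch):

tr '\0' ' ' < /proc/some_proc_id/cmdline; echo   # turn each NUL into a space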
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/217010", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/-1/" ] }
217,036
Here is what I did:

less -N file1 > file2

What I want is to write file1 into file2 with the line-numbers option, but that failed. Any suggestion on how to do it? Why did it fail? Thanks.
less is the wrong tool for the job. You can use cat for that:

cat -n file1 >file2

Or nl :

nl -ba file1 >file2

Or pr :

pr -n -t -T file1 >file2

Or sed :

sed '/./=' file1 | sed '/./N; s/\n/\t/' >file2

Or grep :

grep -n . file1 | sed 's/:/\t/' >file2

Or awk :

awk '{ $0 = NR "\t" $0 } 1' file1 >file2

Or again awk :

awk '{ sub(/^/, NR "\t") } 1' file1 >file2

Or perl :

perl -pe '$_=$.."\t".$_' file1 >file2

Or again perl :

perl -pe 's/^/$.\t/' file1 >file2

Or seq and paste :

seq $(wc -l file1 | cut -d' ' -f1) | paste - file1 >file2

Or even a plain shell script:

count=0
while IFS= read -r line; do
    let count++
    printf '%d\t%s\n' $count "$line"
done <file1 >file2

But less is the wrong tool. :)
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/217036", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/84394/" ] }
217,052
How can I properly escape arbitrary commands? For example:

sudo -u chris sh -c 'echo "\"leftright\""'

The above echoes:

"leftright"

How would I echo out:

"left'right"

I've tried the following, which I would expect to work but does not:

sudo -u chris sh -c 'echo "\"left\'right\""'

I can't quite get my head round how it is parsed.
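One common idiom, sketched here rather than being the only way, is to end the single-quoted string, insert an escaped quote, and reopen it with '\'' :

# '\''  =  close quote, literal ', reopen quote
sudo -u chris sh -c 'echo "\"left'\''right\""'
# prints: "left'right"

The key point is that a backslash cannot escape anything inside single quotes, which is why the attempt in the question fails.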
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/217052", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/124109/" ] }
217,057
I ended up with a cur folder with over 350,000 files. So many that I can't find a mail program to manage them. Tried horde on the same server and it gives an error. Can I delete or move some of those files manually (shell)? Or would that create other problems (indexing)? My first goal would be to end up with archive folders per year. Otherwise I'd have to just delete older files until the size becomes manageable again.
Yes, you may delete files from that folder manually. Dovecot is designed to assume that other software besides itself might manipulate the Maildir folder, including adding, removing, and renaming (the portion of the filename after the colon). It will update the indices accordingly as soon as it notices. In order to avoid deleting those mails outright, you could also use regular shell utilities ( mv , mkdir , etc...) to:

separate them into multiple smaller folders
move them out to a temporary location and move them in again in smaller bunches of manageable size
use a good IMAP client that synchronizes the folder contents without having to download everything

(unfortunately, good email clients are in short supply. They all suck. Some just suck less.) In contrast, you shouldn't try this if you are using Dovecot with dbox (either sdbox or mdbox). In that case, use doveadm commands to manipulate the mailbox contents without using an email client.
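For the move-out-and-back-in approach, a sketch, assuming a standard Maildir layout where each filename begins with the delivery's Unix timestamp:

mkdir -p /tmp/mail-parking
mv cur/13* /tmp/mail-parking/   # park an older batch (names starting 13... are 2011-2014 deliveries)
# let the client sync, then bring a smaller batch back:
mv /tmp/mail-parking/139* cur/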
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/217057", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/124110/" ] }
217,076
I'm working with a Raspberry Pi B+ and Raspbian 5/5/2015 and some guides that are a few years old. I've got an external NTFS HDD hooked up to the Pi. Due to the articles age(s), common practices have changed, and it turns out that certain packages and features and functions are now built-in and/or automatic. For example, I found out that Raspbian will handle the automounting of an NTFS drive/volume if you just install ntfs-3g. All instructions and guidance I could find (IRC, raspberry pi forum, and a Linux dude I know) all went extremely technical doing who-knows-what to my system to try to assist me, when in reality we were all tripping over automatic features that no one knew about or thought to check. I've since installed Raspbian fresh to a new SD card and so far just turned on SSH, updated apt-get, and installed ntfs-3g. This is the article I am using right now: http://www.howtogeek.com/139433/how-to-turn-a-raspberry-pi-into-a-low-power-network-storage-device/?PageSpeed=noscript My next step is to get Samba up and running. Yes, I know Linux people hate Windows, but I'm a Windows guy, so until I can know enough about Linux for a Linux system to be my fallback, Windows machines are my fallbacks. Eventually this Raspberry Pi will get a multi-TB drive, though for right now it's just a few GB; which is why it needs to be NTFS. I would like to check if Samba is installed, since the next step in my instructions tell me how to install and configure it. I've tried a few commands and I'm not sure what to do with the results or how to use them. I searched here and the technical details are above my capabilities and I don't think they address my seemingly simple need to find out if a package is installed or not. I tried: apt-cache dump this gave me way too much data. It scrolled down for a few minutes. 
I got this from The Raspberry Pi Handbook 3rd Edition (Link - Amazon) I tried apt-cache showpkg sambaPackage: sambaVersions:2:3.6.6-6+deb7u5 (/var/lib/apt/lists/mirrordirector.raspbian.org_raspbian_dists_ wheezy_main_binary-armhf_Packages) Description Language: File: /var/lib/apt/lists/mirrordirector.raspbian.org_raspbian_d ists_wheezy_main_binary-armhf_Packages MD5: 0122ac62ef5f4ae21eb2e195eb45ad1dReverse Depends: winbind,samba 2.2.3-2 task-file-server,samba swat,samba 2:3.6.6-6+deb7u5 smbclient,samba 2.999+3.0.alpha21-4 samba4-common-bin,samba 2:3.3.0~rc2-5 samba4,samba 2:3.3.0~rc2-5 samba-dbg,samba 2:3.6.6-6+deb7u5 samba-common-bin,samba 3.0.20b-1 samba-common,samba 3.0.20b-1 qtsmbstatus-server,samba qemu-system,samba nautilus-share,samba 3.0.27a libwbclient0,samba 2:3.4.1 libpam-winbind,samba 2.2.3-2 libpam-smbpass,samba libnss-winbind,samba 2.2.3-2 gadmin-samba,samba education-main-server,samba dpsyco-samba,sambaDependencies:2:3.6.6-6+deb7u5 - samba-common (5 2:3.6.6-6+deb7u5) libwbclient0 (5 2:3.6.6-6+d eb7u5) libacl1 (2 2.2.51-8) libattr1 (2 1:2.4.46-8) libc6 (2 2.13-28) libcap2 (2 2.10) libcomerr2 (2 1.01) libcups2 (2 1.4.0) libgcc1 (2 1:4.4.0) libgssapi-krb5 -2 (2 1.10+dfsg~) libk5crypto3 (2 1.6.dfsg.2) libkrb5-3 (2 1.10+dfsg~) libldap-2 .4-2 (2 2.4.7) libpam0g (2 0.99.7.1) libpopt0 (2 1.14) libtalloc2 (2 2.0.4~git20 101213) libtdb1 (2 1.2.7+git20101214) zlib1g (2 1:1.1.4) debconf (18 0.5) debcon f-2.0 (0 (null)) libpam-runtime (2 1.0.1-11) libpam-modules (0 (null)) lsb-base (2 3.2-13) procps (0 (null)) update-inetd (0 (null)) adduser (0 (null)) dpkg (2 1.15.7.2) openbsd-inetd (16 (null)) inet-superserver (0 (null)) smbldap-tools (0 (null)) ldb-tools (0 (null)) ctdb (0 (null)) logrotate (0 (null)) tdb-tools (0 (null)) samba4 (3 4.0.0~alpha6-2) samba-common (1 2.0.5a-2)Provides:2:3.6.6-6+deb7u5 -Reverse Provides:pi@raspberrypi ~ $ apt-cache showpkg ntfs-3gPackage: ntfs-3gVersions:1:2012.1.15AR.5-2.1+deb7u2 (/var/lib/apt/lists/mirrordirector.raspbian.org_raspbian_dists_wheezy_main_binary-armhf_Packages) (/var/lib/dpkg/status) Description Language: File: /var/lib/apt/lists/mirrordirector.raspbian.org_raspbian_dists_wheezy_main_binary-armhf_Packages MD5: b2df024e8627b5d253b85f35263376efReverse Depends: udisks,ntfs-3g ntfsprogs,ntfs-3g ntfs-config,ntfs-3g ntfs-3g-dev,ntfs-3g 1:2012.1.15AR.5-2.1+deb7u2 ntfs-3g-dbg,ntfs-3g 1:2012.1.15AR.5-2.1+deb7u2 kvpm,ntfs-3g fsarchiver,ntfs-3gDependencies:1:2012.1.15AR.5-2.1+deb7u2 - debconf (18 0.5) debconf-2.0 (0 (null)) libc6 (2 2.13-28) libfuse2 (2 2.8.1) libgcc1 (2 1:4.4.0) libgcrypt11 (2 1.4.5) libgnutls26 (2 2.12.17-0) multiarch-support (0 (null)) fuse (0 (null)) libntfs-3g75 (0 (null)) ntfsprogs (3 1:2011.10.9AR.1-3~) libntfs-3g75 (0 (null)) ntfsprogs (0 (null))Provides:1:2012.1.15AR.5-2.1+deb7u2 -Reverse Provides:pi@raspberrypi ~ $ apt-cache showpkg ntfsprogsPackage: ntfsprogsVersions:1:2012.1.15AR.5-2.1+deb7u2 (/var/lib/apt/lists/mirrordirector.raspbian.org_raspbian_dists_wheezy_main_binary-armhf_Packages) Description Language: File: /var/lib/apt/lists/mirrordirector.raspbian.org_raspbian_dists_wheezy_main_binary-armhf_Packages MD5: 95f41af9cf1d0b5b66afb7d2a9e7c75dReverse Depends: partitionmanager,ntfsprogs ntfs-3g,ntfsprogs ntfs-3g,ntfsprogs 1:2011.10.9AR.1-3~ gparted,ntfsprogs fsarchiver,ntfsprogs fai-setup-storage,ntfsprogsDependencies:1:2012.1.15AR.5-2.1+deb7u2 - ntfs-3g (0 (null))Provides:1:2012.1.15AR.5-2.1+deb7u2 -Reverse Provides: but I'm not sure what to make of the results. 
I can't tell if it's going to the apt servers to get information, or pulling it from my system. I tried: dpkg --get-selections which I got from here: http://www.howtogeek.com/howto/linux/show-the-list-of-installed-packages-on-ubuntu-or-debian/?PageSpeed=noscript but I think I'm running into the same problem. It seems the syntax has changed since 2007. The man page / help file seems to lead me to believe that the command should work:

Usage: dpkg [<option> ...] <command>
--get-selections [<pattern> ...]    Get list of selections to stdout.

but I get an error:

dpkg –get-selections samba
dpkg: error: need an action option

I found a few questions here that are related, but they don't give me what I am looking for. I am interested in just knowing what's installed, but I guess that's a topic for another question.

What packages are installed by default in Debian? Is there a term for that set? Why some of those packages are `automatically installed` and some not?
How do we know what applications are installed in Linux?
Loop to check whether a Debian package is installed or not
Determine if a package is provided by an installed package in Arch Linux
apt-cache showpkg shows detailed information about potentially installable packages. It does indicate whether the package is installed, kind of, but not in a very readable way:

Versions:
2:3.6.6-6+deb7u5 (/var/lib/apt/lists/mirrordirector.raspbian.org_raspbian_dists_wheezy_main_binary-armhf_Packages)

If the package was installed, you'd see (/var/lib/dpkg/status) at the end of the line. However, this isn't fully reliable, because you'd also see this indication if the package was known to your system but not fully installed, e.g. if it was in the “package uninstalled but configuration files left over” state. A more useful apt-cache subcommand is apt-cache policy . It clearly shows the installed version (if any) and the available version(s). For example, here's output from a machine which has samba installed but not samba-dev :

samba:
  Installed: 2:4.1.17+dfsg-2
  Candidate: 2:4.1.17+dfsg-2
  Version table:
 *** 2:4.1.17+dfsg-2 0
        500 http://ftp.fr.debian.org/debian/ jessie/main amd64 Packages
        100 /var/lib/dpkg/status
samba-dev:
  Installed: (none)
  Candidate: 2:4.1.17+dfsg-2
  Version table:
     2:4.1.17+dfsg-2 0
        500 http://ftp.fr.debian.org/debian/ jessie/main amd64 Packages

Alternatively, you can use the dpkg command to get information about your current system. APT is the software that manages the download of packages, dependency analysis, etc. Dpkg is the low-level software that carries out the actual installation of a package file.

dpkg -l samba

This shows a line beginning with i if the package is installed, and a line beginning with u or p or nothing at all if the package is not installed.

$ dpkg -l samba samba-dev
Desired=Unknown/Install/Remove/Purge/Hold
| Status=Not/Inst/Conf-files/Unpacked/halF-conf/Half-inst/trig-aWait/Trig-pend
|/ Err?=(none)/Reinst-required (Status,Err: uppercase=bad)
||/ Name           Version      Architecture Description
+++-==============-============-============-=================================
ii  samba          2:4.1.17+dfs amd64        SMB/CIFS file, print, and login s
dpkg-query: no packages found matching samba-dev

( dpkg-query is the dpkg subcommand that returns information about the package database.) Note that if you just want to ensure that a package is installed, you can simply run

apt-get install samba

This won't do anything if the latest version of the package that's available in your distribution is already installed. It will install the package if it isn't installed yet, and it will upgrade it if you have an older version.
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/217076", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/124120/" ] }
217,112
Let's say a program exists which takes two arguments: an input file and an output file. What if I don't wish to save this output file to disk, but rather pass it straight to the stdin of another program? Is there a way to achieve this? A lot of commands I come across on Linux provide an option to pass '-' as the output file argument, which does what I've specified above. Is this because passing the stdin of a program as an argument is not possible? If it is, how do we do it? An example of how I would imagine using this is:

pdftotext "C BY BRIAN W KERNIGHAN & DENNIS M RITCHIE.pdf" stdin(echo)

The shell I'm using is bash.
If the program supports writing to any file descriptor even if it can't seek, you can use /dev/stdout as the output file. This is a symlink to /proc/self/fd/1 on my system. File descriptor 1 is stdout.
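Applied to the pdftotext example from the question (a sketch; pdftotext treats its second argument as the output file, and also accepts - for stdout):

pdftotext "C BY BRIAN W KERNIGHAN & DENNIS M RITCHIE.pdf" /dev/stdout | less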
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/217112", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/124130/" ] }
217,165
I want to change my root password every day based on the date. The password will be a combination of a string and the date. The below code is working fine:

echo -e "pass"$(date +"%d%m%Y")"\n""pass"$(date +"%d%m%Y") | passwd root

But how do I call it each time the system starts, and at midnight when the date changes (if the system is on)?
I'm not sure why you would want to do that. If you're concerned about security, if someone discovers your password on 1 July, they'll know it on 31 July or 15 September... To answer your question, if you want to ensure that the password update is done either at a scheduled time or when the system restarts, you want to install anacron . It can do periodic scheduling without assuming the system is on all the time. I'm not sure what distribution you're using, but it should be in your package archives. Alternatively, you can use a mixture of traditional cron (changing the password at midnight) and an init script (to handle the case of rebooting) to ensure that the password is always up-to-date. In either case, put the commands to change the password into a script (say, /usr/local/sbin/rootpass.sh ) and then call that script using cron or anacron and from your init script.
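A sketch of what /usr/local/sbin/rootpass.sh might contain, using chpasswd (which reads user:password pairs on stdin) as a more script-friendly alternative to piping into passwd:

#!/bin/sh
# rotate the root password to pass<ddmmyyyy>, as in the question
echo "root:pass$(date +%d%m%Y)" | chpasswd

The midnight case would then be a crontab entry such as 0 0 * * * /usr/local/sbin/rootpass.sh , with anacron or an init script covering reboots as described above.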
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/217165", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/55839/" ] }
217,183
I try to pack a .csv file with tar.gz while being in the root directory. The file myfile.csv is located at /mnt/sdb1/ , so the full filename is /mnt/sdb1/myfile.csv . I try to save the tar.gz under /mnt/sdb1/old_files . I tried it like this:

tar -czf /mnt/sdb1/old_files/new.tar.gz mnt/sdb1/myfile.csv

But when I extract the file, a folder with the name "mnt" is extracted, which contains another folder called "sdb1", which contains the file. Is it possible to compress the file only, instead of copying all the directories?
Use the --directory option. From man tar :

-C, --directory DIR
    change to directory DIR

i.e.:

tar -C /mnt/sdb1/ -czf /mnt/sdb1/old_files/new.tar.gz myfile.csv
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/217183", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/124191/" ] }
217,200
I am wary of systemd for various reasons which are irrelevant to this question. Now, I'm about to upgrade my Debian Wheezy to Debian Jessie. Will systemd be used by default after an apt-get dist-upgrade? If so, what do I need to do to stick with sysvinit?
Yes, it will run by default. A dist-upgrade from wheezy to jessie will switch to using systemd as the init system. The Jessie release notes devote a whole section to this issue, also giving a recommendation about how to stay with your current init system: to prevent systemd-sysv from being installed during the upgrade, you can create a file called /etc/apt/preferences.d/local-pin-init with the following contents:

Package: systemd-sysv
Pin: release o=Debian
Pin-Priority: -1

It also mentions that "some packages may have degraded behavior or may be lacking features under a non-default init system."
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/217200", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/34868/" ] }
217,243
I've seen things like echo , mv , if , for in makefile rules. Some kinds of shell commands seem to be recognized. Is this bash? How can I figure out which shell is being used and the full list of keywords available to me?
It's sh :

    The execution line shall then be executed by a shell as if it were passed as the argument to the system() interface

system() uses sh . You can definitely use the keywords of the POSIX Shell Command Language , and any non-keyword commands that you expect to be available on your host platform. sh on your system may actually be another name for a different shell (like bash ), in which case you'd have more options available. That sort of makefile won't be portable, though. As you ask about GNU make specifically, I'll also note that it permits you to specify a different shell to use in the makefile, but that makefile again won't be portable to other implementations of make . GNU make uses sh by default, as POSIX specifies.
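If you do want bash features, a sketch of overriding the shell per makefile (GNU make only, so not portable; recipe lines must start with a tab):

SHELL := /bin/bash   # default would be /bin/sh

demo:
	@echo "running under bash $$BASH_VERSION"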
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/217243", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/32951/" ] }
217,249
An oceanographer friend at work needs to back up many months' worth of data. She is overwhelmed, so I volunteered to do it. There are hundreds of directories to be backed up, and we want to tar/bzip them into files with the same name as the directory. I can do this easily enough serially, but I wanted to take advantage of the several hundred cores on my workstation. Question: using find with the -n -P args or GNU parallel, how do I tar/bzip the directories, using as many cores as possible, while naming the end product originalDirName.tar.bz2 ? I have used find to bunzip 100 files simultaneously and it was VERY fast, so this is the way to approach the problem, though I do not know how to get each filename to be that of each directory.
Just tar to stdout and pipe it to pigz . (You most likely don't want to parallelize disk access, just the compression part.): $ tar -c myDirectory/ | pigz > myDirectory.tar.gz A plain tar invocation like the one above basically only concatenates directory trees in a reversible way. The compression part can be separate as it is in this example. pigz does multithreaded compression. The number of threads it uses can be adjusted with -p and it'll default to the number of cores available. More detailed info can be found at the pigz github repo
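To get one archive per directory, named after it, the same pipe can be wrapped in a loop. A sketch that keeps the pigz approach (so the output is .tar.gz; a parallel bzip2 such as pbzip2 could be substituted for real .tar.bz2 output):

for d in */; do
    tar -c "$d" | pigz > "${d%/}.tar.gz"   # ${d%/} strips the trailing slash
done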
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/217249", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/124243/" ] }
217,270
I'm trying to change PS1 look based on what host I'm connected in using SSH. My current PS1: PS1='\[\e[1;32m\]\u@\h\[\e[1;34m\] \w\[\e[1;31m\]$(__git_ps1)\[\e[1;0;37m\] \$\[\e[0m\] ' For host host1 I'd like to replace the first color with yellow which is 1;33 and for host2 take 1;35 as an example. How can I figure out that I'm connected to the given host using SSH and alter PS1 accordingly?
Construct your prompt specification in pieces, or use intermediate variables, or a combination of both. SSH sets the SSH_CLIENT variable, which indicates where you're logged in from. You can then use the host name to determine where you're logged into.

if [[ -n $SSH_CLIENT ]]; then
  case $HOSTNAME in
    *.example.com) prompt_user_host_color='1;35';;  # magenta on example.com
    *)             prompt_user_host_color='1;33';;  # yellow elsewhere
  esac
else
  unset prompt_user_host_color   # omitted on the local machine
fi
if [[ -n $prompt_user_host_color ]]; then
  PS1='\[\e['$prompt_user_host_color'm\]\u@\h'
else
  PS1=
fi
PS1+='\[\e[1;34m\] \w\[\e[1;31m\]$(__git_ps1)\[\e[1;0;37m\] \$\[\e[0m\] '
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/217270", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/38632/" ] }
217,284
I have a little VPS I run apache and a Minecraft server on. I don't ever turn it off, but should I restart it for some reason, IPTables blocks most of my ports, including port 80. I've tried so many different suggestions on fixing this, but with no luck. Also, since the provider is OVH, the support is... lacking. So, I've created a workaround, which I'm happy with. I created a simple shell script file to open certain ports I need opened on restart (80 and 25565 for now). The important ones such as 21 and 22 are not affected on restart. The script looks like this:

iptables -I INPUT -p tcp --dport 80 -j ACCEPT
iptables -I INPUT -p udp --dport 80 -j ACCEPT
iptables -I INPUT -p tcp --dport 25565 -j ACCEPT
iptables -I INPUT -p udp --dport 25565 -j ACCEPT
/sbin/service iptables save

When I manually run it by typing /iptdef.sh , it runs fine, the ports become open and it's all good. Of course, it's not practical having to remember to run it every time I restart the server, so I added a crontab. The problem is, it doesn't work/run. This is my crontab file:

*/5 * * * * /backup2.sh
*/55 * * * * /backup3.sh
@reboot /iptdef.sh
* * * * * /iptdef.sh

The first two lines work. They are just simple scripts that make a backup of a folder for me. The second two lines are what's not working. Is there a chance that perhaps it's not possible to run iptables commands from a cron? It sounds silly, but I can't see any other reason for it not to work. The scripts have the correct permissions.
It's because cron forcibly sets PATH to /usr/bin:/bin . You need to invoke iptables as /sbin/iptables or add PATH=/usr/sbin:/sbin:/usr/bin:/bin in your script or crontab. See crontab(5) for details.
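Concretely, the crontab from the question could gain a PATH line at the top, a sketch:

PATH=/usr/sbin:/sbin:/usr/bin:/bin
*/5 * * * * /backup2.sh
@reboot /iptdef.sh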
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/217284", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/124270/" ] }
217,295
Am I wrong in my interpretation that I should basically just put first before all make rules:

.PHONY: all of my rules

all:
	echo "Executing all ..."
of:
	echo "Executing of ..."
my:
	echo "Executing my ..."
rules:
	echo "Executing rules ..."

Is there ever a case where you would not want to follow this 'formula'? http://www.gnu.org/software/make/manual/make.html#Phony-Targets
Clark Grubb's Makefile style guide recommends that:

All phony targets should be declared by making them prerequisites of .PHONY.
Add each phony target as a prerequisite of .PHONY immediately before the target declaration, rather than listing all the phony targets in a single place.
No file targets should be prerequisites of .PHONY.
Phony targets should not be prerequisites of file targets.

For your example, this would mean:

.PHONY: all
all:
	echo "Executing all ..."

.PHONY: of
of:
	echo "Executing of ..."

.PHONY: my
my:
	echo "Executing my ..."

.PHONY: rules
rules:
	echo "Executing rules ..."

Multiple PHONY targets are allowed; see also this Stack Overflow question: "Is it possible to have multiple .PHONY targets in a gnu makefile?" Also, while this isn't mentioned directly in your question, care must be taken not to have a PHONY target with the same name as an actual input or intermediate file in your project. E.g., if your project hypothetically had a source code file named rules (with no suffix), the inclusion of that string in a PHONY target could break expected make behavior.
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/217295", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/32951/" ] }
217,320
I have an alias in my server's /etc/profile which generates a random directory on command, like this:

alias rdir="mkdir -p ./`cat /dev/random | tr -cd 'a-z0-9' | head -c 8`/"

But it turns out this always generates the same string (in this case: directory). I figured out already that this seems to be related to source 'ing the profile file; it only generates a new random string after I call source /etc/profile . Now, I wonder, how do I generate a random string in an alias which always changes when I call the alias, like in this example: rdir ? (Without re- source -ing?)
Use single quotes instead of double quotes: alias rdir='mkdir -p ./$(cat /dev/urandom | tr -cd 'a-z0-9' | head -c 8)/' Now, the statement is evaluated every time the alias is called. With double quotes the statement is evaluated, when defining the alias, therefore static. Also a simpler solution to create a random directory inside the current working directory would be to use mktemp : alias rdir='mktemp -d --tmpdir=./'
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/217320", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/19062/" ] }
217,345
On my Ubuntu 14.04 machine, the rsyslog service is running healthily...

service rsyslog status

returns:

rsyslog start/running, process 794

cat /proc/794/cmdline

shows:

rsyslog    # meaning rsyslog is running with default params

Now, I am trying to check if rsyslog has a TCP/UDP listening connection on port 514 using:

netstat -lnup | grep 514    # for udp
netstat -lntp | grep 514    # for tcp

Both of the netstat commands return empty. Still, how can it run a server without a listening port?
rsyslog doesn't listen on INET sockets by default. Instead, it binds to /dev/log , which is a Unix domain socket .

# ls -la /proc/$(pidof rsyslogd)/fd
total 0
dr-x------ 2 root root  0 Jul 20 11:28 .
dr-xr-xr-x 7 root root  0 Jul 20 11:05 ..
lrwx------ 1 root root 64 Jul 20 11:28 0 -> socket:[3559]
l-wx------ 1 root root 64 Jul 20 11:28 1 -> /var/log/syslog
...
# netstat -x | grep 3559
unix 19 [ ] DGRAM 3559 /dev/log
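If you actually want rsyslog to listen on 514, the network input modules have to be enabled explicitly. A sketch using the legacy configuration directives:

# /etc/rsyslog.conf
$ModLoad imudp
$UDPServerRun 514
$ModLoad imtcp
$InputTCPServerRun 514

After restarting rsyslog, the netstat checks from the question should then show the sockets.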
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/217345", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/-1/" ] }
217,354
I'm trying to move a file with rsync to a remote location. The file has to be named:

DRRS_(H264).mp4

but when I try:

rsync -azR output.mp4 [email protected]:encoded/somepath/DRRS_(H264).mp4

it says:

bash: syntax error near unexpected token `('

But I don't get how I'm supposed to encapsulate this. I tried:

rsync -azR output.mp4 [email protected]:"encoded/somepath/DRRS_(H264).mp4"

and

rsync -azR output.mp4 "[email protected]:encoded/somepath/DRRS_(H264).mp4"

without success.
You want to use the -s|--protect-args option to rsync . Without it, the part after the : is passed as is to the remote shell so you can use constructs of that shell to build the list to transfer. That way, if you know the remote shell is zsh for instance, you can do:

rsync host:'*(.)' there

to transfer only regular files. Or with csh / bash / zsh / ksh :

rsync host:'{foo,bar}.txt' there

Or:

rsync file 'host:"$HOME/$(uname)-file"'

Now, that means you can't easily transfer files with arbitrary names. With -s , rsync doesn't pass the string to the remote shell. Instead, it passes it in-band to the rsync server on the remote host, so there's no interpretation by the remote shell. ( and ) being special for most shells, you'd have to escape them using the syntax of the remote shell, and that varies from shell to shell. Best is to use -s instead.

rsync -sazR output.mp4 [email protected]:'encoded/somepath/DRRS_(H264).mp4'

However, rsync still performs (its own) globbing on the string you pass (even for the destination!). So you still can't pass arbitrary file names with rsync . If you want to process a file called * for instance, you need to escape it with a backslash (at least, this time, that's irrespective of the remote shell).

rsync -sazR output.mp4 [email protected]:'\*'

So, to transfer a file with an arbitrary name contained in $1 , you'll want to use:

file=$1
rsync_escaped_file=$(
  printf '%s.\n' "$file" |
    sed 's/[[*?]/\\&/g'
)
rsync_escaped_file=${rsync_escaped_file%.}
rsync -s ... "user@host:$rsync_escaped_file"

If your local shell is bash and you know the login shell of the remote user is also bash of the same version, alternatively, you can use printf %q to escape the characters that are special to the remote shell and not use -s :

LC_ALL=C printf -v shell_escaped_file %q "$1"
rsync ... "user@host:$shell_escaped_file"

If you know the login shell of the remote host is Bourne-like (Bourne, ksh, yash, zsh, bash, ash...) and your ssh client and server allow passing LC_* environment variables, you can also do (again, without -s ):

LC_FILE=$1 rsync ... 'user@host:"$LC_FILE"'

Note 1: the -s | --protect-args option is only available in version 3.0.0 (2008) or above.
Note 2: the rsync documentation warns that -s | --protect-args may become the default in future versions of rsync (so to be future-proof, you may want to start using --no-protect-args if you don't want its effect).
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/217354", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/93708/" ] }
217,365
There is no fonts directory in /usr/share at all on my system. How can I go about installing Microsoft True Type fonts in Centos 7? I only need Arial and Georgia.
This is documented at: http://mscorefonts2.sourceforge.net/

yum install curl cabextract xorg-x11-font-utils fontconfig
yum install https://downloads.sourceforge.net/project/mscorefonts2/rpms/msttcore-fonts-installer-2.6-1.noarch.rpm
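Afterwards you can check that fontconfig sees the new fonts (a quick sanity check):

sudo fc-cache -f                          # refresh the font cache
fc-list | grep -i -e arial -e georgia     # confirm the two requested fonts are visible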
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/217365", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/40129/" ] }
217,369
I am building an image for an embedded Linux based on Debian. I did use apt-get update before on the device that I want to use as a base for that image, so the lists under /var/lib/apt/lists are quite large (almost 100 MB in size). I want to keep apt-get functionality (so I don't want to remove apt repositories) but I want to free the space used up in these lists (the lists almost double the size of the image). Does anyone know how to do that? Can I just delete the files under /var/lib/apt/lists ?
You can just use: rm /var/lib/apt/lists/* This will remove the package lists. No repositories will be deleted, they are configured in the config file in /etc/apt/sources.list . All that can happen is that tools like apt-cache cannot get package information unless you updated the package lists. Also apt-get install will fail with E: Unable to locate package <package> , because no information is available about the package. Then just run: apt-get update to rewrite those lists and the command will work again. Anyway, it's recommended to run apt-get update before installing anything.
{ "score": 7, "source": [ "https://unix.stackexchange.com/questions/217369", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/116266/" ] }
217,383
Note: I asked a similar question concerning version 6 of these bundles here . Note that 7 uses systemd and may have a different implementation. In the rare cases that a RHEL or CentOS 7 system is prevented from booting by (for instance) an improper shutdown, or a forced fsck-check failure on boot, the console will prompt the user for a root password. How do I disable the password check and drop directly to a root shell? Unacceptable answers:

overriding init on the kernel command line (i.e., grub)
linking / replacing /sbin/sulogin with /sbin/sushell (this would work, but it would raise red flags with the security framework)
booting from some other device
Systemd is working with services and targets . Targets are the equivalent of runlevels , services are the equivalent of init scripts . Most of systemd's configuration is located in /usr/lib/systemd , while standard init files are in /etc/{init.d,rc*.d,inittab} . When an issue kicks in during the boot process (defaults are getty.target or graphical.target ; you can get them with systemctl get-default ), systemd switches to emergency.target . This "emergency" target will, in turn, load the file emergency.service . This service contains multiple lines, and among them:

...
[Service]
Environment=HOME=/root
WorkingDirectory=/root
ExecStartPre=-/bin/plymouth quit
ExecStartPre=-/bin/echo -e 'Welcome to emergency mode! After logging in, type "journalctl -xb" to view\\nsystem logs, "systemctl reboot" to reboot, "systemctl default" to try again\\nto boot into default mode.'
ExecStart=-/bin/sh -c "/sbin/sulogin; /usr/bin/systemctl --fail --no-block default"
...

We just need to replace the call to /sbin/sulogin :

ExecStart=-/bin/sh -c "/sbin/sushell; /usr/bin/systemctl --fail --no-block default"

And we will be dropped directly to a shell, instead of getting prompted for the password via sulogin. (We can use /bin/sh , but /sbin/sushell falls in line with the answers for CentOS6/RHEL6. In fact, sushell simply exec's $SUSHELL , which defaults to /bin/bash .) To make this change "permanent", i.e., immune to yum updates, make the change to a copy of this file and place it in /etc/systemd/system/ . Also, to make the "rescue mode" work the same way, replace the same line in rescue.service . Here's a shell/sed script to simplify the process:

for SERVICE in rescue emergency ; do
    sed '/^ExecStart=/ s%"/sbin/sulogin;%"/sbin/sushell;%' \
        /usr/lib/systemd/system/$SERVICE.service > /etc/systemd/system/$SERVICE.service
done

To test this, make sure the system is otherwise not in use, and tell systemd to switch to the rescue target:

systemctl rescue

This will close network connections and open a shell at the console. You can test with the emergency target, but that doesn't work quite as cleanly (for some reason) and may require a full reboot to come out of. You can also test these from the boot menu (grub). For testing the emergency mode, it's easy. Boot, and when you get the menu, hit "e" to edit, use the arrow keys to navigate to the line beginning with linux16 and append (hit CTRL-E to get to the end of the line) emergency :

linux16 ... emergency

For testing rescue mode, it's the same steps as above but you must be more explicit:

linux16 ... systemd.unit=rescue.target
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/217383", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/105631/" ] }
217,401
I need to have all deleted files for all users go into a folder on another drive. Is this possible? If so, what are the commands to permanently move the trash folder(s)? The operating system is Debian Jessie. Thanks.
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/217401", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/124357/" ] }
217,507
Bash doesn't seem to create zombies.It looks like the processes get immediately reaped when they get killed. Can I make bash make zombies? Why I'm asking: I'd like to be able to safely kill a child process or safely kill -9 it if it doesn't die within a certain period of time but I don't want to accidentally zap a process that isn't my child process. Zombie processes usually make it very easy and race-condition safe.
To make a zombie process:

$ (sleep 1 & exec /bin/sleep 10)

This replaces the shell that ran sleep 1 with /bin/sleep 10 , which won't know that the sleep 1 process terminated, creating a zombie for 10 seconds. I'm not sure what you expect from killing a zombie process. A zombie process is already dead; you cannot kill it. Actually, you can make zombie processes disappear, but by killing their parent, not the zombie processes themselves.
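To watch the zombie while it exists, you can inspect the process table from the same shell; a small illustrative sketch (the ps options shown are standard procps ones, and the Z in the STAT column marks the zombie):

(sleep 1 & exec /bin/sleep 10) &   # run the example in the background
sleep 2                            # give the child time to exit
ps -o pid,ppid,stat,comm           # the defunct "sleep" shows Z in STAT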
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/217507", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/23692/" ] }
217,514
I have several directories each containing a file containing this line: ../a_a_q1.out ../a_a_q2.out ../a_a_q3.out I would like to loop through all the subdirectories, and change the a_a part to the current subdirectory name. For example, if a subdirectory is called awesome_directory , I would like this line to read: ../awesome_directory_q1.out ../awesome_directory_q2.out ../awesome_directory_q3.out How would I achieve this? I would just like to use the subdirectory name. For example, I would just need the ( awesome_directory ) part instead of the full deal ( //something/server/user/other_stuff/more/awesome_directory )
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/217514", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/124453/" ] }
217,577
I have an embedded system based on the Intel Atom with PCH which we are busy developing. In the embedded environment I have:

- A serial console through the PCH, which means this doesn't work with the standard kernel (CONFIG_SERIAL_PCH_UART_CONSOLE is required).
- The SATA drive is only available in the embedded environment and can't be taken out for the install.
- I can boot via USB drive.
- The system does have Ethernet via the PCH, which I have not yet confirmed to work.

I have managed to build a custom Linux 3.16.7 kernel that can be booted with console=uartPCH0,115200 and then displays a console on the serial line. However, moving from here to an actual installation seems to be problematic. I am unable to convince debian-installer to be built using my custom kernel. My current theory is a double bootstrap process where I first bootstrap an installation onto a USB drive, then boot that and bootstrap an installation onto the SATA drive of the system. Any better suggestions? I'm not sure if there is some way to install via a network console? The system requires the e1000e driver, which I assume will be built into the standard Debian installer ISOs; however, so far I have been unable to find clear documentation on how to convince the install system to boot and then open up ssh/telnet. Any hint?
I managed to solve my problem with debootstrap; here is a quick run-down of the process I followed.

Unmount the USB drive, then partition the USB (4 GB). Zap out the GPT with gdisk, as my board didn't want to boot GPT. I created just one Linux partition, nothing else. (I had lots of problems getting a USB drive bootable on my embedded system.) Then:

mkfs.ext4 /dev/sdb1
mount /dev/sdb1 /media/usb
debootstrap jessie /media/usb http://my.mirror/debian

I highly recommend setting up something like apt-cacher.

chroot /media/usb

Mount all these:

mount -t devtmpfs dev /dev
mount -t devpts devpts /dev/pts
mount -t proc proc /proc
mount -t sysfs sysfs /sys

Edit /etc/fstab (I use nano for editing normally):

proc /proc proc defaults 0 0
sysfs /sys sysfs defaults 0 0
UUID=xxxx / ext4 errors=remount-ro 0 1

To write the UUID into the file use:

blkid -o value -s UUID /dev/sdb1 >> /etc/fstab

House-keeping:

apt-get install locales
dpkg-reconfigure locales
apt-get install console-setup
dpkg-reconfigure keyboard-configuration   # (optional?)
apt-get install console-data
passwd root
adduser linuxuser

Install grub and a kernel:

apt-get install grub-pc

I installed grub into both /dev/sdb and /dev/sdb1, but you can use install-mbr for /dev/sdb I think.

apt-get install linux-image-686-pae

Now edit /etc/default/grub:

- uncomment GRUB_TERMINAL=console
- add GRUB_GFXPAYLOAD_LINUX=text
- to GRUB_CMDLINE_LINUX_DEFAULT add: console=tty0 console=ttyPCH0,115200

then run update-grub2.

Edit /etc/default/console-setup :

CODESET="guess"
FONTFACE=
FONTSIZE=
VIDEOMODE=

Create /etc/kernel-img.conf with this inside:

image_dest = /
do_symlinks = yes
do_bootloader = yes
do_bootfloppy = no
do_initrd = yes
link_in_boot = no

Now install the custom kernel with dpkg -i . For me two options were important:

CONFIG_SERIAL_PCH_UART=y
CONFIG_SERIAL_PCH_UART_CONSOLE=y

although I did highly customize the kernel after that. Currently I am compiling 3.14 with the rt-patch from linux-source-3.14, which I downloaded out of wheezy-backports.

Other things to do before restarting (optional):

- edit /etc/modules to force drivers to load
- edit /etc/network/interfaces
- echo myHostName > /etc/hostname
- apt-get install telnetd
- apt-get install openssh-server

At this stage I could boot the USB on my target embedded system and repeat the whole process again to install Debian on the SATA drive. Obviously I needed to install things like debootstrap on the USB drive first to facilitate this, but that was minor.
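One step the run-down leaves implicit: before pulling the drive, leave the chroot and unmount everything in reverse order. A minimal sketch, using the paths from above:

exit                        # leave the chroot
umount /media/usb/dev/pts
umount /media/usb/dev
umount /media/usb/proc
umount /media/usb/sys
umount /media/usb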
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/217577", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/124494/" ] }
217,605
I want my shell scripts to fail whenever a command executed within them fails. Typically I do that with:

set -e
set -o pipefail

(typically I add set -u also). The thing is that none of the above works with process substitution. This code prints "ok" and exits with return code 0, while I would like it to fail:

#!/bin/bash -e
set -o pipefail
cat <(false) <(echo ok)

Is there anything equivalent to "pipefail" but for process substitution? Any other way of passing a command the output of other commands as if they were files, while raising an error whenever any of those programs fails? A poor man's solution would be detecting whether those commands write to stderr (but some commands write to stderr in successful scenarios). Another, more POSIX-compliant, solution would be using named pipes, but I need to launch these commands-that-use-process-substitution as one-liners built on the fly from compiled code, and creating named pipes would complicate things (extra commands, trapping errors for deleting them, etc.)
You could work around that issue with this, for example:

cat <(false || kill $$) <(echo ok)
other_command

The subshell of the script is SIGTERMed before the second command ( other_command ) can be executed. The echo ok command is executed "sometimes": the problem is that process substitutions are asynchronous. There's no guarantee that the kill $$ command is executed before or after the echo ok command. It's a matter of the operating system's scheduling. Consider a bash script like this:

#!/bin/bash
set -e
set -o pipefail
cat <(echo pre) <(false || kill $$) <(echo post)
echo "you will never see this"

The output of that script can be:

$ ./script
Terminated
$ echo $?
143   # it's 128 + 15 (signal number of SIGTERM)

Or:

$ ./script
Terminated
$ pre
post
$ echo $?
143

You can try it, and after a few tries you will see the two different orders in the output. In the first one the script was terminated before the other two echo commands could write to the file descriptor. In the second one the false or kill commands were probably scheduled after the echo commands. Or, to be more precise: the kill(2) system call made by the kill utility, which delivers the SIGTERM signal to the shell's process, was scheduled (or the signal was delivered) later or earlier than the echo commands' write() syscalls. Either way, the script stops and the exit code is not 0. It should therefore solve your issue. Another solution is, of course, to use named pipes for this. But it depends on your script how complex it would be to implement named pipes or the workaround above.

References:
http://mywiki.wooledge.org/BashFAQ/106
http://mywiki.wooledge.org/RaceCondition
http://lists.gnu.org/archive/html/bug-bash/2010-09/msg00017.html
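For comparison, here is a minimal sketch of the named-pipe alternative the question mentions: more plumbing, but the producer's exit status can be checked explicitly. The temp-dir handling and variable names are illustrative:

set -e
tmpd=$(mktemp -d)
trap 'rm -rf "$tmpd"' EXIT
mkfifo "$tmpd/p1"
false > "$tmpd/p1" &        # producer (stand-in for the substituted command)
producer=$!
cat "$tmpd/p1"              # consumer reads the pipe like a file
wait "$producer"            # with set -e, a failing producer aborts here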
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/217605", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/42409/" ] }
217,613
I am trying to send characters directly to the network stack as explained in this thread . Even though the process works OK under CygWin (Windows), the same lines fail on Ubuntu v14.04:

luis@Zarzamoro:~$ sudo echo -e "\xff">/dev/udp/255.255.255.255/4000
-bash: connect: Permission denied
-bash: /dev/udp/255.255.255.255/4000: Permission denied

Tested on Ubuntu for PC:

luis@Lamborghini:~$ bash --version
GNU bash, versión 4.3.11(1)-release (i686-pc-linux-gnu)

and Ubuntu for Raspberry Pi 2:

luis@Zarzamoro:~$ bash --version
GNU bash, version 4.3.11(1)-release (arm-unknown-linux-gnueabihf)

Sending characters directly to the NIC instead of using netcat or socat could be useful for some routers or embedded devices (like NAS boxes) that have a rather modern Bash version but don't allow (or make it awkward to achieve) installation of extra tools. Why is this happening and how could I solve it? Tested too:

luis@Zarzamoro:~$ sudo bash -c 'echo -e "\xff" >/dev/udp/255.255.255.255/4000'
bash: connect: Permission denied
bash: /dev/udp/255.255.255.255/4000: Permission denied

And:

luis@Zarzamoro:~$ echo -e "\xff" | sudo tee /dev/udp/255.255.255.255/4000
tee: /dev/udp/255.255.255.255/4000: No such file or directory
▒
luis@Zarzamoro:~$

New info from @Emeric: the problem seems to affect only the broadcast address(es):

luis@Zarzamoro:~$ sudo bash -c 'echo -e "\xff" >/dev/udp/192.168.11.255/4000'
bash: connect: Permission denied
bash: /dev/udp/192.168.11.255/4000: Permission denied
luis@Zarzamoro:~$ sudo bash -c 'echo -e "\xff" >/dev/udp/192.168.11.1/4000'
luis@Zarzamoro:~$

Tested failing too on Kali Linux v1.1.0 with Bash v4.2.37:

luis@Lamborghini:~$ sudo lsb_release -a
No LSB modules are available.
Distributor ID: Kali
Description: Kali GNU/Linux 1.1.0
Release: 1.1.0
Codename: moto
luis@Lamborghini:~$ bash --version
GNU bash, versión 4.2.37(1)-release (i486-pc-linux-gnu)
Copyright (C) 2011 Free Software Foundation, Inc.
luis@Lamborghini:~$ sudo bash -c 'echo -e "\xff" >/dev/udp/192.168.11.255/4000'
bash: connect: Permiso denegado
bash: /dev/udp/192.168.11.255/4000: Permiso denegado

Tested failing too on Bash from a Conceptronic CH3SNAS (a NAS with two HDDs), installed via Fun_Plug :

sh-4.1# bash --version
GNU bash, version 4.1.11(2)-release (arm-ffp-linux-uclibc)
sh-4.1# su
sh-4.1# echo -e "\xff" > /dev/udp/255.255.255.255/4000
sh: connect: Permission denied
sh: /dev/udp/255.255.255.255/4000: Permission denied
Edited following OP's clarification on the use case: You cannot do that using the latest official release of bash (currently 4.3.30 according to this page ). lib/sh/netopen.c shows that bash opens a UDP socket ( SOCK_DGRAM ) and then directly tries to connect, without looking at the IP address to determine whether it would make sense to set specific socket options (in your case SO_BROADCAST ). Your best bet would be to send a patch to the current bash maintainers or the appropriate mailing list, have it included in the official releases, then wait until the rather modern bash version in your NAS gets updated to an even more modern version including your feature. Short answer: bash currently cannot do that.

Previous answer: You have to resort to socat :

$ echo -e "\xff" | socat - UDP-DATAGRAM:255.255.255.255:4000,broadcast

Writing to /dev/udp uses bash 's built-in socket implementation. To the best of my knowledge, this implementation does not allow sending UDP datagrams to the broadcast address, as that requires setting the SO_BROADCAST flag on the socket before sending. Using netcat is also not an option, as it rejects the UDP broadcast:

$ echo -e "\xff" | nc -u 255.255.255.255 4000
nc: netcat.c:573: main: Assertion `connect_sock.proto != NETCAT_PROTO_UDP' failed.
Aborted (core dumped)

Edit: Sometimes netcat has a -b flag to enable the broadcast address. See this answer about UDP broadcasting on Ubuntu.
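If socat is unavailable but some netcat build is present, it may be worth checking whether that particular build advertises broadcast support. A hedged sketch; the -b flag exists in some netcat variants (such as nc.traditional) but not all:

nc -h 2>&1 | grep -- '-b'       # look for a "-b  allow broadcasts" line
echo -e "\xff" | nc -b -u -w1 255.255.255.255 4000   # only if -b is listed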
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/217613", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/57439/" ] }
217,622
I need to add a path in a bash script, but it may be executed several times: export PATH=${OPENSHIFT_HOMEDIR}/app-root/runtime/bin/:${PATH} I don't want that path to be added over and over. How can I add it if it is not in $PATH yet?
First check if the path to add is already part of the variable:

[[ ":$PATH:" != *":/path/to/add:"* ]] && PATH="/path/to/add:${PATH}"

If /path/to/add is already in the $PATH , then nothing happens; otherwise it is added at the beginning. If you need it at the end, use PATH=${PATH}:/path/to/add instead.

Edit: In your case it would look like this:

[[ ":$PATH:" != *":${OPENSHIFT_HOMEDIR}/app-root/runtime/bin:"* ]] && PATH="${OPENSHIFT_HOMEDIR}/app-root/runtime/bin:${PATH}"
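If the script may add several paths, the same test can be wrapped in a small helper; a sketch (the function name is arbitrary, echoing the pathmunge helper found in some distributions' /etc/profile):

pathmunge () {
    case ":$PATH:" in
        *":$1:"*) ;;              # already present: do nothing
        *) PATH="$1:$PATH" ;;     # otherwise prepend
    esac
}
pathmunge "${OPENSHIFT_HOMEDIR}/app-root/runtime/bin"
export PATH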
{ "score": 7, "source": [ "https://unix.stackexchange.com/questions/217622", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/15657/" ] }
217,628
I have a filename like a.b.c.txt , I want this string to be split as string1=a.b.cstring2=txt Basically I want to split filename and its extension. I used cut but it splits as a,b,c and txt . I want to cut the string on the last delimiter. Can somebody help?
# For the filename:
echo "a.b.c.txt" | rev | cut -d"." -f2- | rev

# For the extension:
echo "a.b.c.txt" | rev | cut -d"." -f1 | rev
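An alternative sketch using plain shell parameter expansion, which avoids spawning extra processes (these operators work in POSIX sh as well as bash):

f=a.b.c.txt
string1=${f%.*}     # strips the shortest ".*" suffix -> a.b.c
string2=${f##*.}    # strips the longest "*." prefix  -> txt
echo "$string1 $string2"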
{ "score": 7, "source": [ "https://unix.stackexchange.com/questions/217628", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/15340/" ] }
217,650
first.sh :

#! /bin/ksh
echo "prova"
. ./second.sh
echo "ho lanciato il secondo"
. ./third.sh
echo "ho lanciato il terzo"

second.sh :

echo "sono nel secondo script"
dosomething1
exit $?

If second.sh detects an error and exits with status -9, first.sh always exits too. How can I avoid exiting from the first shell if the child shell exits? I can't edit second.sh .
What you're doing here is including second.sh and third.sh as sub-scripts running in the same process, which is called "sourcing" in shell programming. . ./second.sh is basically equivalent to including the text of second.sh at that point. The exit command exits the process; it doesn't matter whether you call it in the original script or in a sourced script. If all you want to do is run the commands in second.sh and third.sh , and they don't need to access or modify variables and functions from the original script, call these scripts as child processes.

#! /bin/ksh
echo "prova"
./second.sh
echo "ho lanciato il secondo"
./third.sh
echo "ho lanciato il terzo"

If you need the other scripts to access variables and functions from the original script, but not modify them, then call these scripts in subshells. Subshells are separate processes, so exit exits only them.

#! /bin/ksh
echo "prova"
(. ./second.sh)
echo "ho lanciato il secondo"
(. ./third.sh)
echo "ho lanciato il terzo"

If you need to use variables or functions defined in second.sh and third.sh in the parent script, then you'll need to keep sourcing them. The return builtin exits only the sourced script and not the whole process; that's one of the few differences between including another script with the . command and including its text in the parent script. If the sourced scripts only call exit at the top level, as opposed to inside functions, then you can change exit into return . You can do that without modifying the script by using an alias.

#! /bin/ksh
echo "prova"
alias exit=return
. ./second.sh
echo "ho lanciato il secondo"
. ./third.sh
unalias exit
echo "ho lanciato il terzo"

If exit is also called inside functions, I don't think there's a non-cumbersome way. A cumbersome way is to set an exit trap and put your code there.

#!/bin/ksh
do_first () {
  echo "prova"
  trap "after_second" EXIT
  . ./second.sh
  after_second
}
after_second () {
  echo "ho lanciato il secondo"
  trap "after_third" EXIT
  . ./third.sh
  after_third
}
after_third () {
  trap - EXIT
  echo "ho lanciato il terzo"
}
do_first
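A quick way to convince yourself of the subshell behaviour; a small sketch that should behave the same in ksh and bash:

$ ksh -c '( exit 3 ); echo "parent still alive, subshell exited with $?"'
parent still alive, subshell exited with 3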
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/217650", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/124529/" ] }
217,686
I am trying to find a hex difftool that lets me compare two documents side by side, but also shows the internal differences, as in bless : two bless windows side by side, but with diff capability between the windows, at least for a selection. I find that bless (a full featured hexadecimal editor) could be the best choice here for the integration. Is there any difftool with a hex/ASCII view in any Linux distro?
If you just want to view changes, not edit them, you can convert the files to hex with one program and then diff the output with any graphical diff program you want. It is probably only practical if there are only changed (not inserted) bytes between the files. As a one-liner:

meld <(hexdump -C file1.bin) <(hexdump -C file2.bin)

[Screenshot: meld comparing two different copies of libssl.so]
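A terminal-only variant of the same idea, for machines without a graphical diff tool; a sketch assuming vim (whose distribution includes xxd) is installed:

vimdiff <(xxd file1.bin) <(xxd file2.bin)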
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/217686", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/16920/" ] }
217,712
Sometimes I have a folder full of jpg's and I need to randomly choose 8 or so of them. How could I automate this so the script randomly chooses 8 jpg's from the folder and copies them to another destination? My question is simple really: instead of using cp and giving it a file name and then a destination file name, I want to build a script that randomly chooses 8 of the .jpgs in the folder and copies those to another folder.
You could use shuf :

shuf -zn8 -e *.jpg | xargs -0 cp -vt target/

- shuf shuffles the list of *.jpg files in the current directory.
- -z is to zero-terminate each line, so that files with special characters are treated correctly.
- -n8 exits shuf after 8 files.
- xargs -0 reads the input delimited by a null character (from shuf -z ) and runs cp .
- -v is to print every copy verbosely.
- -t is to specify the target directory.
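If shuf is not available (it's part of GNU coreutils), sort -R can serve as a rough substitute; a hedged sketch that assumes reasonably recent GNU sort and head (for the -z null-termination flags):

printf '%s\0' *.jpg | sort -zR | head -zn8 | xargs -0 cp -vt target/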
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/217712", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/124570/" ] }
217,718
I am looking to create a simple script that does the following:

cd /specified/directory
cd into child directory only if it has a 4 digit name, e.g. 1234
rm -r all files that begin with letter P
rm -r all files that begin with letter E
exit child directory
check for next directory with 4 digit number
repeat task
End
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/217718", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/106673/" ] }
217,737
I'm running Fedora 22. I'm trying to set up GnuPG to have my SSH connections authenticated using my PGP authentication subkey, which is located on my YubiKey NEO. I have a systemd unit starting the gpg-agent as follows:

/usr/bin/gpg-agent --homedir=%h/.gnupg --daemon --use-standard-socket

And I have enabled SSH support in the configuration:

enable-ssh-support
pinentry-program /usr/bin/pinentry-gtk

Other parts of the setup include adding the keygrip of my key to the ~/.gnupg/sshcontrol file, adding my public key to the remote host, and declaring the environment variables. Globally, looking at the various logs, the setup seems to work: I can see that SSH finds the key but then fails to sign with it. If I look at the logs from gpg-agent , I can see that it is failing to launch the pinentry program and therefore not requesting the PIN code:

2015-07-22 23:23:28 gpg-agent[6758] DBG: error calling pinentry: Ioctl() inappropriate for a device <Pinentry>
2015-07-22 23:23:28 gpg-agent[6758] DBG: chan_8 -> BYE
2015-07-22 23:23:28 gpg-agent[6758] DBG: chan_7 -> CAN
2015-07-22 23:23:28 gpg-agent[6758] DBG: chan_7 <- ERR 100663573 The IPC call was canceled <SCD>
2015-07-22 23:23:28 gpg-agent[6758] smartcard signing failed: Ioctl() inappropriate for a device
2015-07-22 23:23:28 gpg-agent[6758] ssh sign request failed: Ioctl() inappropriate for a device <Pinentry>

What we see here is that, when used in combination with SSH, some ioctl call fails when calling pinentry. However, if I run the following:

$ echo "Test" | gpg2 -s

the PIN window pops up and it all works fine. Can you help me understand what's going on with this setup and SSH?
I found the answer on the GnuPG website itself. The agent was failing to determine on which screen to display the pinentry window. I just had to put the following in my .*shrc file:

echo "UPDATESTARTUPTTY" | gpg-connect-agent > /dev/null 2>&1
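For completeness, the "declaring the environment variables" step from the question usually amounts to pointing SSH at the agent's socket; a hedged sketch (the gpgconf query works on GnuPG 2.1+, while older setups tend to hard-code the path):

export SSH_AUTH_SOCK="$(gpgconf --list-dirs agent-ssh-socket)"
# older GnuPG versions instead use something like:
# export SSH_AUTH_SOCK="$HOME/.gnupg/S.gpg-agent.ssh"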
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/217737", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/37879/" ] }
217,827
I am trying to make a script like this:

#!/bin/bash
sudo -s
something...

When I execute it, I get a new shell, but something is executed only when I exit the shell created by sudo -s , not inside it. Any help?
The problem is that sudo -s without any argument opens an interactive shell for root. If you just want to run a single command using sudo -s , you can simply do:

sudo -s command

For example:

$ sudo -s whoami
root

Or you can use here strings :

$ sudo -s <<<'whoami'
root

If you have multiple commands you can use a here doc :

$ sudo -s <<'EOF'
> whoami
> hostname
> EOF
root
something--
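An equivalent pattern that avoids sudo -s altogether, in case quoting gets awkward (a sketch; the commands are placeholders):

sudo bash -c 'whoami; hostname'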
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/217827", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/124512/" ] }