source_id (int64, 1 to 4.64M) | question (string, 0 to 28.4k chars) | response (string, 0 to 28.8k chars) | metadata (dict)
---|---|---|---|
82,944 | Below is the text in the file: Pseudo name=Apple
Code=42B
state=fault
Pseudo name=Prance
Code=43B
state=good I need to grep for "42B" and get the output from the above text like: Pseudo name=Apple
Code=42B
state=fault Does anyone have any idea on how to achieve this using grep / awk / sed ? | With awk awk -v RS='' '/42B/' file RS= changes the input record separator from a newline to blank lines. If any field in a record contains /42B/, print the record. '' (the null string) is a magic value used to represent blank lines according to POSIX : If RS is null, then records are separated by sequences consisting of a <newline> plus one or more blank lines, leading or trailing blank lines shall not result in empty records at the beginning or end of the input, and a <newline> shall always be a field separator, no matter what the value of FS is. The output paragraphs will not be separated since the output separator remains a single newline. To ensure that there is a blank line between output paragraphs, set the output record separator to two newlines: awk -v RS='' -v ORS='\n\n' '/42B/' file | {
"source": [
"https://unix.stackexchange.com/questions/82944",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/42725/"
]
} |
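For illustration, here is roughly what the first awk command in the answer above does with the sample records from that question, assuming the two records in the real file are separated by a blank line (which is what RS='' keys on); the output shown is a sketch, not captured from a live run:

$ awk -v RS='' '/42B/' file
Pseudo name=Apple
Code=42B
state=fault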
82,952 | Using Ubuntu 12.04 LTS, my question is that if I started an application in a terminal window, then is there anything bad about just closing the terminal window without exiting the application properly first. For example, I use MATLAB. I open up a terminal and type matlab -nodisplay -nodesktop -nosplash and then run a bunch of scripts. Then I can either exit to end MATLAB and then close the terminal window or just close the terminal window. What really is the difference between these two methods? Does the second method somehow "harm" anything? Is the first method preferred? Why? | In general this should be fine to do it this way. When you click the "X" to close the terminal window, that is sending a "signal" from your desktop (GNOME, KDE, etc.) to the terminal application, telling it to shut itself down. Since you're running MATLAB in this shell it's considered a child process to the terminal application. So part of the responsibilities of being a parent process, is that you in turn send this same close "signal" to your children. Now if you understand conceptually what I just explained then let's substitute in a bit more of the real terminology. signals First with the "signal", there are actually a whole family of different signals that you can send to Unix processes. To keep it simple there are 4 that you'll often see, SIGHUP , SIGTERM , SIGINT , and SIGKILL . SIGHUP The SIGHUP signal is sent to a process when its controlling terminal
is closed. It was originally designed to notify the process of a
serial line drop. In modern systems, this signal usually means that
controlling pseudo or virtual terminal has been closed. SIGTERM The SIGTERM signal is a generic signal used to cause program
termination. Unlike SIGKILL, this signal can be blocked, handled, and
ignored. It is the normal way to politely ask a program to terminate. SIGINT The SIGINT (“program interrupt”) signal is sent when the user types
the INTR character (normally C-c). SIGKILL The SIGKILL signal is used to cause immediate program termination. It
cannot be handled or ignored, and is therefore always fatal. It is
also not possible to block this signal. NOTE: SIGINT is what gets sent when you use Ctrl + C to "break" a program from the command line while it's in the middle of running. which one is getting used? Most likely the SIGTERM is being called by your windowing environment and being passed down to your terminal. Your terminal is then most likely sending SIGHUP down to MATLAB. This signal gives all the processes the opportunity to do any local clean-up (closing files, ending processes, etc.) themselves. kill command You can send signals yourself using the poorly named command, kill . So to send the SIGTERM signal to your terminal or the SIGHUP to MATLAB, you could determine their PID using ps and then run this command to send them the signal: $ kill -SIGTERM <PID> or this: $ kill -SIGHUP <PID> You can get a complete list of the signals using this command: $ kill -l
1) SIGHUP 2) SIGINT 3) SIGQUIT 4) SIGILL 5) SIGTRAP
6) SIGABRT 7) SIGBUS 8) SIGFPE 9) SIGKILL 10) SIGUSR1
11) SIGSEGV 12) SIGUSR2 13) SIGPIPE 14) SIGALRM 15) SIGTERM
...
... Notice that the signals have numbers? You'll often times see them used like that instead of by their names: $ kill -15 <PID> Or the infamous -9 , which can kill pretty much any process. | {
"source": [
"https://unix.stackexchange.com/questions/82952",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/29972/"
]
} |
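To see the "local clean-up" opportunity described in the answer above in action, a small bash script can catch SIGHUP and SIGTERM with trap. This is a generic sketch rather than anything from the original answer:

#!/bin/bash
cleanup() {
    # close files, flush buffers, stop child processes, etc.
    echo "caught a termination signal, cleaning up..." >&2
    exit 0
}
trap cleanup SIGHUP SIGTERM    # run cleanup when either signal arrives
while true; do sleep 1; done   # idle until signalled

Closing the terminal running this script, or sending it kill -SIGTERM <PID>, triggers cleanup instead of ending the process outright; kill -9 (SIGKILL) would bypass it, as noted above.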
82,985 | I go to use the updatedb command to update the index and I get updatedb: can not open a temporary file for `/var/lib/mlocate/mlocate.db' fyi The locate command is working, e.g. $ locate Index.xml
/usr/share/mysql/charsets/Index.xml
durrantm.../durrantm$ How can I overcome this issue when trying to run updatedb? | You have to run the updatedb command as the super user. For example, sudo updatedb | {
"source": [
"https://unix.stackexchange.com/questions/82985",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/10043/"
]
} |
82,990 | I have a file that has "then"'s and "there"'s. I can $ grep "then " x.x
x and then some
x and then some
x and then some
x and then some and I can $ grep "there " x.x
If there is no blob none some will be created How can I search for both in one operation?
I tried $ grep (then|there) x.x -bash: syntax error near unexpected token `(' and grep "(then|there)" x.x
durrantm.../code
# (Nothing) | You need to put the expression in quotation marks. The error you are receiving is a result of bash interpreting the ( as a special character. Also, you need to tell grep to use extended regular expressions. $ grep -E '(then|there)' x.x Without extended regular expressions, you have to escape the | , ( , and ) . Note that we use single quotation marks here. Bash treats backslashes within double quotation marks specially. $ grep '\(then\|there\)' x.x The grouping isn't necessary in this case. $ grep 'then\|there' x.x It would be necessary for something like this: $ grep 'the\(n\|re\)' x.x | {
"source": [
"https://unix.stackexchange.com/questions/82990",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/10043/"
]
} |
82,991 | I have a program on my path. The program runs when executed with a full path specified. But the program cannot be found when I run it with just its name. Essentially, I want to understand how the below output is possible, and how to fix it so that my program can actually be found without a full path specified: root:/usr/local/bin# ./siege
****************************************************
siege: could not open /usr/local/bin/etc/siegerc
run 'siege.config' to generate a new .siegerc file
****************************************************
root:/usr/local/bin# echo $PATH
/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games
root:/usr/local/bin# siege
bash: /usr/bin/siege: No such file or directory
root:/usr/local/bin# wtf!?!? I'm on Ubuntu 12.04 using bash. Also please note the warning output from siege is not relevant for the purposes of this question, as I am only interested in whether or not the program can be found and invoked. | Note the output here: root:/usr/local/bin# siege
bash: /usr/bin/siege: No such file or directory Bash maintains an internal hash of previously found executables in your path. In this case, it has details that at one time there was an executable at /usr/bin/siege, and reuses that path to avoid having to search again. You need to tell bash to manually rehash the path for siege like so: hash siege You can also clear all hashed locations: hash -r | {
"source": [
"https://unix.stackexchange.com/questions/82991",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/38051/"
]
} |
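As a quick illustration of the command hashing described in the answer above, bash can also show and edit its hash table directly. The session below is a sketch (the paths are the ones from the question; the exact output format may vary):

$ hash              # list remembered command locations
hits    command
   1    /usr/bin/siege
$ hash -d siege     # forget just this entry, instead of rehashing it
$ type siege        # bash searches $PATH again
siege is /usr/local/bin/siege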
82,997 | I have been wondering how all these small nas boxes that run linux can share over network and usb. The network part is totally under control, but I am completely at a loss on how I could hook up a computer to my server through USB cable, and get a share. Is this done with some specific hardware or is this done through software ? | Note the output here: root:/usr/local/bin# siege
bash: /usr/bin/siege: No such file or directory Bash maintains an internal hash of previously found executables in your path. In this case, it has details that at one time there was an executable at /usr/bin/siege, and reuses that path to avoid having to search again. You need to tell bash to manually rehash the path for siege like so: hash siege You can also clear all hashed locations: hash -r | {
"source": [
"https://unix.stackexchange.com/questions/82997",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/43002/"
]
} |
83,038 | I want to remove the last character from a line: [root@ozzesh ~]#df -h | awk '{ print $5 }'
Use%
22%
1%
1%
59%
51%
63%
5% Expected result: Use
22
1
1
59
51
63
5 | sed 's/.$//' To remove the last character. But in this specific case, you could also do: df -P | awk 'NR > 1 {print $5+0}' With the arithmetic expression ( $5+0 ) we force awk to interpret the 5th field as a number, and anything after the number will be ignored. Note that GNU df (your -h is already a GNU extension, though not needed here) can also be told to only output the disk usage percentage: df --output=pcent | tail -n +2 | tr -cd '0-9\n' (tail skips the headers and tr removes everything but the digits and the line delimiters). On Linux, see also: findmnt -no USE% | {
"source": [
"https://unix.stackexchange.com/questions/83038",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/38659/"
]
} |
83,047 | Linux mdraid supports device RAID (as opposed to partition RAID). The new superblock versions are also smart enough to not put the meta data right at the start of a disk. Does that mean it's possible to install grub2 on the MBR and boot whole-device RAID6 using GRUB2? And if it is possible, what distro installers allow you to do this? When you install Debian or Ubuntu, you are not offered this option. I know you can do it by hand, but an out-of-the-box solution would be better. | sed 's/.$//' To remove the last character. But in this specific case, you could also do: df -P | awk 'NR > 1 {print $5+0}' With the arithmetic expression ( $5+0 ) we force awk to interpret the 5th field as a number, and anything after the number will be ignored. Note that GNU df (your -h is already a GNU extension, though not needed here) can also be told to only output the disk usage percentage: df --output=pcent | tail -n +2 | tr -cd '0-9\n' (tail skips the headers and tr removes everything but the digits and the line delimiters). On Linux, see also: findmnt -no USE% | {
"source": [
"https://unix.stackexchange.com/questions/83047",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/43055/"
]
} |
83,191 | I have a program that is installed in a custom directory under /opt. To make it easier to run it, I edited my bashrc to add said directory to my path: export PATH=$PATH:/opt/godi/bin:/opt/godi/sbin This works fine if I want to run the program without sudo. However, if I try to run it with sudo it fails with a "command not found" error. $ sudo godi_console
sudo: godi_console: command not found Inspecting the PATH variable after using sudo reveals that its not including the same PATH I have as a normal user: $ sudo sh
# echo $PATH
/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin Why is the PATH not the same? Am I doing something wrong? I'm on Debian Jessie, if it makes a difference. One thing I tried was to invoke /opt/godi/sbin/godi_console directly, passing the absolute path to the executable. Unfortunately, that didn't help in this particular case because godi_console itself depends on the PATH being correctly set. | You can always do: sudo env "PATH=$PATH" godi_console As a security measure on Debian, /etc/sudoers has the secure_path option set to a safe value. Note that: sudo "PATH=$PATH" godi_console Where sudo treats leading arguments containing = characters as environment variable assignments by itself, would also work at running godi_console with your $PATH (as opposed to the secure_path ) in its environment, but would not affect sudo 's search path for executables, so wouldn't help sudo in finding that godi_console . | {
"source": [
"https://unix.stackexchange.com/questions/83191",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/23960/"
]
} |
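If you would rather make this permanent than pass the PATH on every invocation, the secure_path mentioned above can be extended through visudo. This is only a sketch; the exact Defaults line depends on your existing sudoers, and the /opt/godi directories are taken from the question:

$ sudo visudo
# then extend the secure_path Defaults entry, e.g.:
Defaults    secure_path="/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/opt/godi/bin:/opt/godi/sbin"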
83,221 | So I'm on a VPS - CentOS Linux installation. I have vsFTPd on the server.
I currently have SFTP access to the server via my root user, but am now trying to create a new user with FTP access to a specific directory only on the server, I've done the following: 1. mkdir /var/www/mydomain.com
2. mkdir /var/www/mydomain.com/html
3. useradd <-username>
4. passwd <-username>
5. chown –R <-username> /var/www/mydomain.com
5. groupadd <-groupname>
6. gpasswd -a <-username> <-groupname>
7. chgrp -R <-groupname> /var/www/mydomain.com
8. chmod -R g+rw /var/www/mydomain.com What I'm struggling to do is to create the user to ONLY have access to /var/www/mydomain.com - I observed that the user correctly logs into the right folder, however the user can then browse "back" to other directories. I want the user to stick in the specific folder and not be able to "browse" back . Any ideas? I've found different articles on chrooting, but simply haven't figured out how to use it in the steps included above. | It's quite simple. You have to add the following option in the vsftpd.conf file chroot_local_user=YES The documentation inside the configuration file is self-explanatory: # You may specify an explicit list of local users to chroot() to their home
# directory. If chroot_local_user is YES, then this list becomes a list of
# users to NOT chroot(). This means that the user will just have access to the folder you configured as the HOME of the user. Below, I have an example of a user passwd entry: upload_ftp:x:1001:1001::/var/www/sites/:/bin/bash Set the home directory of the user with the following command: usermod -d /var/www/my.domain.example/ exampleuser Note: In my example, this user is also a valid user for some scheduled tasks inside Linux. If you don't have this need, please change the shell of the user to /sbin/nologin instead of bash . | {
"source": [
"https://unix.stackexchange.com/questions/83221",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/43151/"
]
} |
83,222 | I have non-root access to a grid of computers. The installed OS is the following: $ uname -mrs
Linux 2.6.18-274.el5xen x86_64
$ cat /etc/*-release
Scientific Linux SL release 5.1 (Boron) I built the latest version of core utils locally with ./configure --prefix=<some_path>
make
make install but before adding this new install to my PATH & LD_LIBRARY_PATH , I'm reluctant to start using a version of core utils that may not be compatible or safe to use with my OS. I know that one answer is " test and see if it works ", but I would prefer to make sure that I will not run into problems later when doing real work with core utils (e.g. moving/deleting files, using chmod , etc.) Is this a legitimate concern? Are core utils fully backwards compatible with versions of GNU/Linux this old? How do I find out? | It's quite simple. You have to add the following option on the vsftpd.conf file chroot_local_user=YES The documentation inside the configuration file is self-explanatory: # You may specify an explicit list of local users to chroot() to their home
# directory. If chroot_local_user is YES, then this list becomes a list of
# users to NOT chroot(). This means that the user will just have access to the folder you configured as the HOME of the user. Below, I have an example of a user passwd entry: upload_ftp:x:1001:1001::/var/www/sites/:/bin/bash Set the home directory of the user with the following command: usermod -d /var/www/my.domain.example/ exampleuser Note: In my example, this user is also a valid user for some scheduled tasks inside Linux. If you don't have this need, please change the shell of the user to /sbin/nologin instead of bash . | {
"source": [
"https://unix.stackexchange.com/questions/83222",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/4531/"
]
} |
83,260 | I have a couple of big text files and in the file UNIQS.txt I have a list of strings to grep from another file. The code I use is grep -f UNIQS.txt EEP_VSL.uniqs.sam > UNIQ_templates.sam which does nothing - the file generated is empty. But when I do grep -F -f UNIQS.txt EEP_VSL.uniqs.sam > UNIQ_templates.sam it works correctly. This confuses me because I didn't think grep would interpret the entries in UNIQS.txt as regexp patterns without quotes and slashes and so on being in the file (which there aren't). Is it the case in general that if you are getting the patterns from a file then it will automatically think that they are regexp patterns? Edit: In the UNIQS.txt file, there are newline separated strings of the form HWI-ST365:215:D0GH0ACXX:2:1101:10034:186783 (called template names) and the file EEP_VSL... tab separated columns, with about 14 columns and the first column is the template name, so basically I want to extract the line corresponding to each template in the file. | The -f option specifies a file where grep reads patterns. That's just like passing patterns on the command line (with the -e option if there's more than one), except that when you're calling from a shell you may need to quote the pattern to protect special characters in it from being expanded by the shell. The argument -E or -F or -P , if any, tells grep which syntax the patterns are written in. With no argument, grep expects basic regular expressions ; with -E , grep expects extended regular expressions ; with -P (if supported), grep expects Perl regular expressions ; and with -F , grep expects literal strings. Whether the patterns come from the command line or from a file doesn't matter. Note that the strings are substrings: if you pass a+b as a pattern then a line containing a+b+c is matched. If you want to search for lines containing exactly one of the supplied strings and no more, then pass the -x option. | {
"source": [
"https://unix.stackexchange.com/questions/83260",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/43168/"
]
} |
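Since the answer above hinges on what grep treats as a regular expression, a tiny sketch makes the difference between the default (BRE) mode and -F concrete. The file names and contents here are made up purely for illustration:

$ printf 'a.c\n' > patterns
$ printf 'abc\na.c\n' > data
$ grep -f patterns data      # '.' is a regex metacharacter, so both lines match
abc
a.c
$ grep -F -f patterns data   # -F matches the pattern literally
a.c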
83,322 | I'm looking for the process started in Linux which has process ID 0. I know init has PID 1 , which is the first process in Linux, is there any process with PID 0? | From the wikipedia page titled: Process identifier : There are two tasks with specially distinguished process IDs: swapper or sched has process ID 0 and is responsible for paging , and is
actually part of the kernel rather than a normal user-mode process.
Process ID 1 is usually the init process primarily responsible for
starting and shutting down the system. Originally, process ID 1 was
not specifically reserved for init by any technical measures: it
simply had this ID as a natural consequence of being the first process
invoked by the kernel. More recent Unix systems typically have
additional kernel components visible as 'processes', in which case PID
1 is actively reserved for the init process to maintain consistency
with older systems. You can see the evidence of this if you look at the parent PIDs (PPID) of init and kthreadd : $ ps -eaf
UID PID PPID C STIME TTY TIME CMD
root 1 0 0 Jun24 ? 00:00:02 /sbin/init
root 2 0 0 Jun24 ? 00:00:00 [kthreadd] kthreadd is the kernel thread daemon. All kthreads are forked from this thread. You can see evidence of this if you look at other processes using ps and seeing who their PPID is: $ ps -eaf
root 3 2 0 Jun24 ? 00:00:57 [ksoftirqd/0]
root 4 2 0 Jun24 ? 00:01:19 [migration/0]
root 5 2 0 Jun24 ? 00:00:00 [watchdog/0]
root 15 2 0 Jun24 ? 00:01:28 [events/0]
root 19 2 0 Jun24 ? 00:00:00 [cpuset]
root 20 2 0 Jun24 ? 00:00:00 [khelper] Notice they're all 2 . | {
"source": [
"https://unix.stackexchange.com/questions/83322",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/42985/"
]
} |
83,342 | Due to work I have recently started using OS X and have set it up using homebrew in order to get a similar experience as with Linux. However, there are quite a few differences in their settings. Some only need to be in place on one system. As my dotfiles live in a git repository, I was wondering what kind of switch I could set in place, so that some configs are only read for Linux system and other for OS X. As to dotfiles, I am referring, among other, to .bash_profiles or .bash_alias . | Keep the dotfiles as portable as possible and avoid OS dependent settings or switches that require a particular version of a tool, e.g. avoid GNU syntax if you don't use GNU software on all systems. You'll probably run into situations where it's desirable to use system specific settings. In that case use a switch statement with the individual settings: case $(uname) in
'Linux') LS_OPTIONS='--color=auto --group-directories-first' ;;
'FreeBSD') LS_OPTIONS='-Gh -D "%F %H:%M"' ;;
'Darwin') LS_OPTIONS='-h' ;;
esac In case the configuration files of arbitrary applications require different options, you can check if the application provides compatibility switches or other mechanisms. For vim , for instance, you can check the version and patchlevel to support features older versions, or versions compiled with a different feature set, don't have. Example snippet from .vimrc : if v:version >= 703
if has("patch769")
set matchpairs+=“:”
endif
endif | {
"source": [
"https://unix.stackexchange.com/questions/83342",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/12471/"
]
} |
83,385 | I would like to know how I can get the value of a node with the following paths: config/global/resources/default_setup/connection/host
config/global/resources/default_setup/connection/username
config/global/resources/default_setup/connection/password
config/global/resources/default_setup/connection/dbname from the following XML: <?xml version="1.0"?>
<config>
<global>
<install>
<date><![CDATA[Tue, 11 Dec 2012 12:31:25 +0000]]></date>
</install>
<crypt>
<key><![CDATA[70e75d7969b900b696785f2f81ecb430]]></key>
</crypt>
<disable_local_modules>false</disable_local_modules>
<resources>
<db>
<table_prefix><![CDATA[]]></table_prefix>
</db>
<default_setup>
<connection>
<host><![CDATA[localhost]]></host>
<username><![CDATA[root]]></username>
<password><![CDATA[pass123]]></password>
<dbname><![CDATA[testdb]]></dbname>
<initStatements><![CDATA[SET NAMES utf8]]></initStatements>
<model><![CDATA[mysql4]]></model>
<type><![CDATA[pdo_mysql]]></type>
<pdoType><![CDATA[]]></pdoType>
<active>1</active>
</connection>
</default_setup>
</resources>
<session_save><![CDATA[files]]></session_save>
</global>
<admin>
<routers>
<adminhtml>
<args>
<frontName><![CDATA[admin]]></frontName>
</args>
</adminhtml>
</routers>
</admin>
</config> Also I want to assign that value to the variable for further use. Let me know your idea. | Using bash and xmllint (as given by the tags): xmllint --version # xmllint: using libxml version 20703
# Note: Newer versions of libxml / xmllint have a --xpath option which
# makes it possible to use xpath expressions directly as arguments.
# --xpath also enables precise output in contrast to the --shell & sed approaches below.
#xmllint --help 2>&1 | grep -i 'xpath' {
# the given XML is in file.xml
host="$(echo "cat /config/global/resources/default_setup/connection/host/text()" | xmllint --nocdata --shell file.xml | sed '1d;$d')"
username="$(echo "cat /config/global/resources/default_setup/connection/username/text()" | xmllint --nocdata --shell file.xml | sed '1d;$d')"
password="$(echo "cat /config/global/resources/default_setup/connection/password/text()" | xmllint --nocdata --shell file.xml | sed '1d;$d')"
dbname="$(echo "cat /config/global/resources/default_setup/connection/dbname/text()" | xmllint --nocdata --shell file.xml | sed '1d;$d')"
printf '%s\n' "host: $host" "username: $username" "password: $password" "dbname: $dbname"
}
# output
# host: localhost
# username: root
# password: pass123
# dbname: testdb In case there is just an XML string and the use of a temporary file is to be avoided, file descriptors are the way to go with xmllint (which is given /dev/fd/3 as a file argument here): set +H
{
xmlstr='<?xml version="1.0"?>
<config>
<global>
<install>
<date><![CDATA[Tue, 11 Dec 2012 12:31:25 +0000]]></date>
</install>
<crypt>
<key><![CDATA[70e75d7969b900b696785f2f81ecb430]]></key>
</crypt>
<disable_local_modules>false</disable_local_modules>
<resources>
<db>
<table_prefix><![CDATA[]]></table_prefix>
</db>
<default_setup>
<connection>
<host><![CDATA[localhost]]></host>
<username><![CDATA[root]]></username>
<password><![CDATA[pass123]]></password>
<dbname><![CDATA[testdb]]></dbname>
<initStatements><![CDATA[SET NAMES utf8]]></initStatements>
<model><![CDATA[mysql4]]></model>
<type><![CDATA[pdo_mysql]]></type>
<pdoType><![CDATA[]]></pdoType>
<active>1</active>
</connection>
</default_setup>
</resources>
<session_save><![CDATA[files]]></session_save>
</global>
<admin>
<routers>
<adminhtml>
<args>
<frontName><![CDATA[admin]]></frontName>
</args>
</adminhtml>
</routers>
</admin>
</config>
'
# exec issue
#exec 3<&- 3<<<"$xmlstr"
#exec 3<&- 3< <(printf '%s' "$xmlstr")
exec 3<&- 3<<EOF
$(printf '%s' "$xmlstr")
EOF
{ read -r host; read -r username; read -r password; read -r dbname; } < <(
echo "cat /config/global/resources/default_setup/connection/*[self::host or self::username or self::password or self::dbname]/text()" |
xmllint --nocdata --shell /dev/fd/3 |
sed -e '1d;$d' -e '/^ *--* *$/d'
)
printf '%s\n' "host: $host" "username: $username" "password: $password" "dbname: $dbname"
exec 3<&-
}
set -H
# output
# host: localhost
# username: root
# password: pass123
# dbname: testdb | {
"source": [
"https://unix.stackexchange.com/questions/83385",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/43226/"
]
} |
83,394 | I am running Fedora 17 64-bit and the rsync --exclude=/home/ben/<dir> is not working as expected. I am trying to rsync my home directory to a thumb drive, but I want to exclude certainly directories that hold cache files and build files. This is the command I'm using: rsync --exclude=/home/ben/build/ --exclude=/home/ben/.ccache -arv /home/ben home-ben/ However, content from the ~/build and ~/.ccache is being copied by rsync . What am I doing wrong? | Global rsync filter rules beginning with a leading / are anchored to the root of transfer. Quoting from the "INCLUDE/EXCLUDE PATTERN RULES" section of the man page: if the pattern starts with a / then it is anchored to a particular spot in the hierarchy of files, otherwise it is matched against the end of the pathname. This is similar to a leading ^ in regular expressions. Thus "/foo" would match a name of "foo" at either the "root of the transfer" (for a global rule) or in the merge-file's directory (for a per-directory rule). In your command ( rsync ... -arv /home/ben home-ben/ ), the file /home/ben/foo would be transferred to home-ben/ben/foo . The root of transfer is home-ben and the correct filter path is /ben/foo . Thus, to match /home/ben/.ccache you need a filter path of /ben/.ccache to match /home/ben/build/ you need a filter path of /ben/build/ A more detailed explanation can be found in the "ANCHORING INCLUDE/EXCLUDE PATTERNS" section of the rsync(1) man page . Note that simply leaving out the leading / is not necessarily what you want. Quoting again from the same man page section: An unqualified "foo" would match a name of "foo" anywhere in the tree because the algorithm is applied recursively from the top down; it behaves as if each path component gets a turn at being the end of the filename. Even the unanchored "sub/foo" would match at any point in the hierarchy where a "foo" was found within a directory named "sub". See the section on ANCHORING INCLUDE/EXCLUDE PATTERNS for a full discussion of how to specify a pattern that matches at the root of the transfer. Thus a filter pattern of build/ would match a build directory anywhere in /home/ben , even /home/ben/many/sub/directories/build/ . | {
"source": [
"https://unix.stackexchange.com/questions/83394",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/34855/"
]
} |
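Putting the two corrected filter paths worked out in the answer above back into the original command gives something like the following sketch (based purely on the paths discussed there):

rsync -arv --exclude='/ben/build/' --exclude='/ben/.ccache' /home/ben home-ben/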
83,540 | The problem is simple - I have a .deb package and I want to install it on my Arch Linux. Is this possible? If yes, how? | Is it possible? Yes. Is it a good idea? That depends. You would only really need to do this if the application only exists as a .deb package. It is much more likely that you can just grab the upstream source and write a simple PKGBUILD to install it with pacman. You should also search the AUR to ensure that someone hasn't done this already. | {
"source": [
"https://unix.stackexchange.com/questions/83540",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/43345/"
]
} |
83,593 | I have a folder structure with a bunch of *.csv files scattered across the folders. Now I want to copy all *.csv files to another destination keeping the folder structure. It works by doing: cp --parents *.csv /target
cp --parents */*.csv /target
cp --parents */*/*.csv /target
cp --parents */*/*/*.csv /target
... and so on, but I would like to do it using one command. | find has a very handy -exec option: find . -name '*.csv' -exec cp --parents \{\} /target \; | {
"source": [
"https://unix.stackexchange.com/questions/83593",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/-1/"
]
} |
83,748 | I'm running Ubuntu where I have the directories /etc/rc0.d , /etc/rc1.d , /etc/rc2.d , ..., /etc/rc6.d . Example files from my machine: directory example symlinks in the dir
------------------------------------------
/etc/rc1.d: K76dovecot, K77ntp
/etc/rc2.d: S23ntp, S24dovecot
/etc/rc3.d: S23ntp, S24dovecot
/etc/rc4.d: S23ntp, S24dovecot
/etc/rc5.d: S23ntp, S24dovecot Questions: What's the purpose of the multiple "rc" directories? Why did Ubuntu install duplicates of dovecot and ntp into all the directories except rc0.d and rc6.d ? If they are specified multiple times like above, are they actually executed multiple times? Can you tell from the above in what order dovecot and ntp will execute at startup? What is the proper way to tell Ubuntu to always execute ntp before dovecot at startup? | These are runlevel s, a System V-style init mechanism used by most *NIX systems (with the notable exception of systemd -based systems). When booting, the kernel/user decides which runlevel to run, and only that runlevel is executed. This means that, depending on the runlevel, you can boot up with a different set of programs. There are runlevels for halt and reboot too, but since you are focusing on the startup part, let's ignore them for now. Since only one runlevel is executed at boot, some programs should/want to start/stop at different runlevel s with different or the same parameters, in the same or a different order (not all runlevels are the same in all OS's). But Ubuntu copies runlevels 3-5 from 2; that's why they are the same. No. runlevel s are executed just once, at startup or when you change runlevel . ntp scripts should execute first, then dovecot, in runlevels 2-5; that is not the case for runlevel 1. The ordinal number in the script names ( S 23 ntp ) states the order of execution. So, it all depends on the runlevel you are using. It depends on the distro, but in the particular case of Ubuntu you can add your script to runlevels 1 and 2. More info in the Wikipedia article about Ubuntu runlevels | {
"source": [
"https://unix.stackexchange.com/questions/83748",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/43438/"
]
} |
83,778 | I have used hdparm -n and smartctl -A but it always seems to be a "per drive" technique as a drive may answer for only one of these tools. So, is there a standard way to get the drive temperature on Linux (HDD or SSD)? If not, what (other) tools can I use to get this information? | I like hddtemp , which provides a pretty standard way of getting the temperature for supported devices. It requires SMART support though. Example Usage: sudo hddtemp /dev/sd[abcdefghi] Example Response: /dev/sda: WDC WD6401AALS-00J7B0: 31°C /dev/sdb: WDC WD7501AALS-00J7B0: 30°C | {
"source": [
"https://unix.stackexchange.com/questions/83778",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/30196/"
]
} |
83,806 | I'm pretty lost on this. From the man page: -f Requests ssh to go to background just before command execution. After starting SSH with the -f option, I have a working tunnel. But after I finish using it I don't know how to interact with it further. For example, I cannot close the SSH tunnel when I finish with it. The usual methods I know don't work. For example, jobs returns nothing. The ~ command is not recognized (and I don't know exactly how to use it, anyway). However, pgrep tells me that the SSH tunnel is still running (after I have closed the terminal, etc.). How do I interact with it? How do I close it? | I found the solution here: http://www.g-loaded.eu/2006/11/24/auto-closing-ssh-tunnels/ The best way – Tunnels that auto-close As it has been mentioned previously, instead of using the -f -N switch combination, we can just use -f alone, but also execute a command on the remote machine. But, which command should be executed, since we only need to initialize a tunnel? This is when sleep can be the most useful command of all! In this particular situation, sleep has two advantages: it does nothing, so no resources are consumed; the user can specify for how long it will be executed. How these help in auto-closing the ssh tunnel is explained below. We start the ssh session in the background, while executing the sleep command for 10 seconds on the remote machine. The number of seconds is not crucial. At the same time, we execute vncviewer exactly as before: [me@local]$ ssh -f -L 25901:127.0.0.1:5901 [email protected] sleep 10; \
vncviewer 127.0.0.1:25901:1 In this case, the ssh client is instructed to fork the ssh session to the background (-f), create the tunnel (-L 25901:127.0.0.1:5901) and execute the sleep command on the remote server for 10 seconds (sleep 10). The difference between this method and the previous one (-N switch), basically, is that in this case the ssh client’s primary goal is not to create the tunnel, but rather to execute the sleep command for 10 seconds. The creation of the tunnel is some kind of side-effect, a secondary goal. If vncviewer was not used, the ssh client would exit after the 10 sec period, as it would have no more jobs to do, destroying the tunnel at the same time. During the execution of the sleep command, if another process, vncviewer in this case, starts using that tunnel and keeps it occupied beyond the 10 sec period, then, even if the ssh client finishes its remote job (execution of sleep), it cannot exit because another process occupies the tunnel. In other words, the ssh client cannot destroy the tunnel because it would have to kill vncviewer as well. When vncviewer stops using the tunnel, then the ssh client exits too, as it has already accomplished its goal. This way, no ssh processes are left running in the background. | {
"source": [
"https://unix.stackexchange.com/questions/83806",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/15010/"
]
} |
83,862 | Today I was told a tale by a Unix trainer where the root password got leaked to the students, and one of the fellas removed the execute permission from /usr/bin/chmod itself. How do you recover chmod in this case and make it executable again? Let's say getting it from an external source or recompiling it is not a preferable option, is there some smart hack that can recover this chmod itself? Note that this happened a long time ago and I'm not looking for a solution for some current problem, just curious about what ways Unix provides us around such an issue. | You can run the loader directly, and pass it the command you want to run: /lib/ld-linux.so /bin/chmod +x /bin/chmod Your path to the loader might vary. On a 64-bit system you need to choose the right one based on how chmod was compiled; the 64-bit version is named something like /lib64/ld-linux-x86-64.so.2 | {
"source": [
"https://unix.stackexchange.com/questions/83862",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/43499/"
]
} |
83,926 | I have a minimal headless *nix which does not have any command line utilities for downloading files (e.g. no curl, wget, etc). I only have bash. How can I download a file? Ideally, I would like a solution that would work across a wide range of *nix. | If you have bash 2.04 or above with the /dev/tcp pseudo-device enabled, you can download a file from bash itself. Paste the following code directly into a bash shell (you don't need to save the code into a file for executing): function __wget() {
: ${DEBUG:=0}
local URL=$1
local tag="Connection: close"
local mark=0
if [ -z "${URL}" ]; then
printf "Usage: %s \"URL\" [e.g.: %s http://www.google.com/]" \
"${FUNCNAME[0]}" "${FUNCNAME[0]}"
return 1;
fi
read proto server path <<<$(echo ${URL//// })
DOC=/${path// //}
HOST=${server//:*}
PORT=${server//*:}
[[ x"${HOST}" == x"${PORT}" ]] && PORT=80
[[ $DEBUG -eq 1 ]] && echo "HOST=$HOST"
[[ $DEBUG -eq 1 ]] && echo "PORT=$PORT"
[[ $DEBUG -eq 1 ]] && echo "DOC =$DOC"
exec 3<>/dev/tcp/${HOST}/$PORT
echo -en "GET ${DOC} HTTP/1.1\r\nHost: ${HOST}\r\n${tag}\r\n\r\n" >&3
while read line; do
[[ $mark -eq 1 ]] && echo $line
if [[ "${line}" =~ "${tag}" ]]; then
mark=1
fi
done <&3
exec 3>&-
} Then you can execute it from the shell as follows: __wget http://example.iana.org/ Source: Moreaki 's answer upgrading and installing packages through the cygwin command line? Update: as mentioned in the comment, the approach outlined above is simplistic: the read will trash backslashes and leading whitespace. Bash can't deal with NUL bytes very nicely, so binary files are out. unquoted $line will glob.
"source": [
"https://unix.stackexchange.com/questions/83926",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/24554/"
]
} |
84,060 | Is there a way to convert an existing pair of OpenSSH keys to an SSH2 (ssh.com format) pair of keys? UPD : since some answers about ssh-keygen suddenly appeared, I'll explain where I came from (it will also be a nice answer to "what have you tried?"). $> diff --report-identical-files <(ssh-keygen -e -f ~/.ssh/id_dsa) <(ssh-keygen -e -f ~/.ssh/id_dsa.pub)
Files /tmp/zshAGGWAK and /tmp/zshPZiIr6 are identical In other words, ssh-keygen returns the same key for private and public input keys (hashes of the original files are obviously different, I've checked them twice to ensure that they are valid private and public keys). It seems that ssh-keygen generates only a public key, whether the input is a private or a public key. Am I doing it wrong, or is this normal behavior? | This tutorial titled: SSH: Convert OpenSSH to SSH2 and vise versa appears to offer what you're looking for. Convert OpenSSH key to SSH2 key Run the OpenSSH version of ssh-keygen on your OpenSSH public key to convert it into the format needed by SSH2 on the remote machine. This must be done on the system running OpenSSH. $ ssh-keygen -e -f ~/.ssh/id_dsa.pub > ~/.ssh/id_dsa_ssh2.pub Convert SSH2 key to OpenSSH key Run the OpenSSH version of ssh-keygen on your ssh2 public key to convert it into the format needed by OpenSSH. This needs to be done on the system running OpenSSH. $ ssh-keygen -i -f ~/.ssh/id_dsa_1024_a.pub > ~/.ssh/id_dsa_1024_a_openssh.pub The tutorial goes on to show how to both generate the various types of keys and how to export them to other formats. Use this for private & public keys? According to the man page, the answer would be a yes. Looking at the man page for ssh-keygen it states the following for the -e switch:
the key in RFC 4716 SSH Public Key File Format to stdout. This option
allows exporting keys for use by several commercial SSH implementations. But in practice it would appear that ssh-keygen can't convert private keys, only public ones. For example: # Make a new RSA key-pair
$ ssh-keygen -t rsa -f newkey
# attempt to extract the private key
$ ssh-keygen -e -f newkey > newkey_e
# attempt to extract the public key
$ ssh-keygen -e -f newkey.pub > newkey.pub_e
# Notice the supposed extracted private key (newkey_e) and the corresponding extracted public key (newkey.pub_e) have identical `md5sum`'s.
$ for i in *;do md5sum $i;done
d1bd1c12c4a2b9fee4b5f8f83150cf1a newkey
8b67a7be646918afc7a041119e863be5 newkey_e
13947789d5dcc5322768bd8a2d3f562a newkey.pub
8b67a7be646918afc7a041119e863be5 newkey.pub_e Looking at the resulting extracted keys confirms this: $ grep BEGIN newkey_e newkey.pub_e
newkey_e:---- BEGIN SSH2 PUBLIC KEY ----
newkey.pub_e:---- BEGIN SSH2 PUBLIC KEY ---- Googling a bit I came across this blurb from an article titled: How do you convert OpenSSH Private key files to SSH . The site seemed to be up and down but looking in Google's cache for this page I found the following blurb: How do you convert OpenSSH Private key files to SSH.com Private key files? It cannot be done by the ssh-keygen program even though most man pages
say it can. They discourage it so that you will use multiple public
keys. The only problem is that RCF will not allow you to register more
than one public key. The article goes on to cover a method for converting a openssh private key to a ssh.com private key through the use of PuTTY's puttygen tool. NOTE: puttygen can be run from Windows & Linux. Open 'puttygen' and generate a 2048 bit rsa public/private key pair.
Make sure you add a password after it is generated. Save the public
key as "puttystyle.pub" and save the private key as "puttystyle". The
putty program and SSH.com programs share a common public-key format
but the putty program and OpenSSH have different public-key formats.
We will come back to this, later. You should be able to load both
puttystyle keys into the putty program. However, the private key
formats for putty and SSH.com are not the same and so you will have to
create a converted file. Go to the conversions menu and export an
SSH.com key. Save it as "sshstyle". Now go back to the conversions
menu and export an openssh key. Save it as "openssh". These names
are arbitrary and you can choose your own. You will have to change
the names for installation on an OpenSSH machine, later. See below. Given the above I worked out the following using puttygen , using our previously generated private/public openssh key-pair: # generate ssh.com private key from private openssh key
$ puttygen newkey -O private-sshcom -o newkey.puttygen-sshcom
# generate ssh.com public key from private openssh key
$ puttygen newkey -O public -o newkey.pub_puttygen-sshcom
# generate openssh public key from private openssh key (for confirmation)
$ puttygen newkey -O public-openssh -o newkey.pub_puttygen-openssh The commenting is different so you can't just compare the resulting files, so if you look at the first few lines of the keys, that's a pretty good indicator that the above commands were successful. Comparison of public ssh.com keys: $ tail -n +3 newkey.pub_e | head -1 | cut -c 1-60
AAAAB3NzaC1yc2EAAAADAQABAAABAQDFkZdpmbze9c6pT883rE1i64TJd4wb
$ tail -n +3 newkey.pub_puttygen-sshcom | head -1 | cut -c 1-60
AAAAB3NzaC1yc2EAAAADAQABAAABAQDFkZdpmbze9c6pT883rE1i64TJd4wb Comparison of public openssh keys: $ cut -c 1-100 newkey.pub
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDFkZdpmbze9c6pT883rE1i64TJd4wbz9x/w6I2DmSZVI9TJa6M9jgGE952QsOY
$ cut -c 1-100 newkey.pub_puttygen-openssh
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDFkZdpmbze9c6pT883rE1i64TJd4wbz9x/w6I2DmSZVI9TJa6M9jgGE952QsOY | {
"source": [
"https://unix.stackexchange.com/questions/84060",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/3737/"
]
} |
84,093 | I am trying to view the content of a .war file. I first set its permissions with chmod 777, then when I try to access it using: cd /usr/local/standalone/deployments/Sample.war/WEB-INF/classes/ It is giving cd: /usr/local/standalone/deployments/Sample.war/WEB-INF/classes/: Not a directory and I am not able to proceed further. Can someone help me with this issue? | .war files are packed. You can extract the information by using either of the following commands: jar -xvf Sample.war
unzip Sample.war You should then be able to run cd /usr/local/standalone/deployments/Sample.war/WEB-INF/classes/ | {
"source": [
"https://unix.stackexchange.com/questions/84093",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/43638/"
]
} |
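If the goal is only to view the contents without unpacking anything, both tools mentioned in the answer above can also list the archive; these are the standard listing flags for jar and unzip, added here as a convenience rather than part of the original answer:

jar -tvf Sample.war
unzip -l Sample.war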
84,146 | The layout of my netbook's keyboard means that using the arrow keys for navigation is slightly uncomfortable. Is there a way to make GNU Info pages use vim-style hjkl navigation? I know I can info printf | less ...and use j and k to scroll up and down, which is good enough as I use info pages for reading so navigating to specific characters isn't vital; but it would be nice if I could do this within info , rather than resorting to a pipe. | Yes, info has support for pretty much any key binding scheme you like; see http://www.gnu.org/software/texinfo/manual/info-stnd/html_node/Custom-Key-Bindings.html and note in particular the --vi-keys startup option for Info. | {
"source": [
"https://unix.stackexchange.com/questions/84146",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/34494/"
]
} |
84,160 | Is it possible to run the Python interpreter on a ChromeOS machine? I've found various editors you can use, but I would like the ability to run python applications as well. I would like to purchase the Samsung Chromebook, and being a computer science student, I'd love to be able to do my CS homework on it instead of carrying around my 15 inch Macbook or Toshiba. | Python Shell You can install this plugin, Python Shell into Chrome. Here's some info from that extensions info page in the store: Python shell for your browser. A Python shell for Chrome. Features: Python 2.7 Ruby 1.8 JavaScript These are the only languages that have been currently compiled to
JavaScript by the jsrepl project at this time. Developer Mode Alternatively you can go put your device in Developer Mode and gain access to a shell from where you can install/launch Python. Skulpt Interpreter Lastly you can check out the Skulpt Interpreter . Main site's here . Skulpt is an entirely in-browser implementation of Python. Crouton You can install a full-fledged Linux on the Chromebook hardware using the project Crouton . crouton is a set of scripts that bundle up into an easy-to-use,
Chromium OS-centric chroot generator. Currently Ubuntu and Debian are
supported (using debootstrap behind the scenes), but "Chromium OS
Debian, Ubuntu, and Probably Other Distros Eventually Chroot
Environment" doesn't acronymize as well (crodupodece is admittedly
pretty fun to say, though). There's an easy-to-follow tutorial on Life Hacker which walks you through the installation and setup, titled: How to Install Linux on a Chromebook and Unlock Its Full Potential . Which way to go? If you're serious about using the Chromebook hardware as a development box I would go with Crouton. The other options only give you pieces of Python. If you're serious about doing any real development this is really the only option. | {
"source": [
"https://unix.stackexchange.com/questions/84160",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/43670/"
]
} |
84,166 | I accidentally forgot to specify destination before hitting the Return key. Where does mv ./* without specifying destination move the files and directories under current directory to? | If the last argument was a directory, you just moved all of the files and directories in your current working directory (except those whose names begin with dots) into that directory. If there were two files, the first file may have overwritten the second file. Here are some demonstrations: More than two files and the last argument is a file $ mkdir d1 d2 d3
$ touch a b c e
$ mv *
mv: target 'e' is not a directory More than two files and the last argument is a directory $ mkdir d1 d2 d3
$ touch a b c
$ mv -v *
'a' -> 'd3/a'
'b' -> 'd3/b'
'c' -> 'd3/c'
'd1' -> 'd3/d1'
'd2' -> 'd3/d2' Two files $ touch a b
$ mv -v *
'a' -> 'b' Further explanation The shell expands the glob ( * ) into arguments for mv . The glob is usually expanded in alphabetical order. mv always sees a list of files and directories. It never sees the glob itself. The command mv supports two types of moving. One is mv file ... directory . The other is mv old-file-name new-file-name (or mv old-file-name directory/new-file-name ). | {
"source": [
"https://unix.stackexchange.com/questions/84166",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/674/"
]
} |
84,175 | I'm trying to create a symbolic link in my home directory that points to a directory on my external HDD. It works fine when I specify it like this: cd ~
ln -s /run/media/name/exhdd/Data/ Data However it creates a faulty link when I try this: cd /run/media/name/exhdd
ln -s Data/ ~/Data This creates a link that I cannot cd into. When I try, bash complains: bash: cd: Data: Too many levels of symbolic links The Data symbolic link in my home is also colored in red when ls is set to display colored output. Why is this happening? How can I create a link in that manner? (I want to create a symlink to a directory in my working directory in another directory.) Edit: according to this StackOverflow answer, if the second argument (in my case that'd be ~/Data) already exists and is a directory, ln will create a symlink to the target inside that directory. However, I'm experiencing the same issue with: ln -s Data/ ~/ | Here's what's happening. If you make a symlink with a relative path, the symlink will be relative. Symlinks just store the paths that you give them. They never resolve paths to full paths. Running $ pwd
/usr/bin
$ ln -s ls /usr/bin/ls2 creates a symlink named ls2 in /usr/bin to ls (viz. /usr/bin/ls ) relative to the directory that the symlink is in ( /usr/bin ). The above command would create a functional symlink from any directory. $ pwd
/home/me
$ ln -s ls /usr/bin/ls2 If you moved the symlink to a different directory, it would cease to point to the file at /usr/bin/ls . You are making a symlink that points to Data , and naming it Data . It is pointing to itself. You have to make a symlink with the absolute path of the directory. ln -s "$(realpath Data)" ~/Data | {
"source": [
"https://unix.stackexchange.com/questions/84175",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/17008/"
]
} |
84,191 | Apparently, it's not possible to create a nested directory in a single command? $ sudo mkdir x/y/z
mkdir: cannot create directory 'x/y/z': No such file or directory | The command you are looking for is mkdir -p x/y/z . The -p switch creates parent directories. ~$ mkdir -p d/s/a/e
~$ cd d/s/a/e/
~/d/s/a/e$ | {
"source": [
"https://unix.stackexchange.com/questions/84191",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/42571/"
]
} |
84,227 | I'm trying to install 389-ds , and it gives me this warning: WARNING: There are only 1024 file descriptors (hard limit) available, which limit the number of simultaneous connections. I understand about file descriptors, but I don't understand about soft and hard limits. When I run cat /proc/sys/fs/file-max , I get back 590432 . This should imply that I can open up to 590432 files (i.e. have up to 590432 file descriptors). But when I run ulimit , it gives me different results: $ ulimit
unlimited
$ ulimit -Hn # Hard limit
4096
$ ulimit -Sn # Soft limit
1024 But what are the hard / soft limit from ulimit , and how do they relate to the number stored at /proc/sys/fs/file-max ? | According to the kernel documentation , /proc/sys/fs/file-max is the maximum, total, global number of file handles the kernel will allocate before choking. This is the kernel's limit, not your current user's. So you can open 590432, provided you're alone on an idle system (single-user mode, no daemons running). File handles ( struct file in the kernel) are different from file descriptors: multiple file descriptors can point to the same file handle, and file handles can also exist without an associated descriptor internally. No system-wide file descriptor limit is set; this can only be mandated per process. Note that the documentation is out of date: the file has been /proc/sys/fs/file-max for a long time. Thanks to Martin Jambon for pointing this out. The difference between soft and hard limits is answered here, on SE . You can raise or lower a soft limit as an ordinary user, provided you don't overstep the hard limit. You can also lower a hard limit (but you can't raise it again for that process). As the superuser, you can raise and lower both hard and soft limits. The dual limit scheme is used to enforce system policies, but also allow ordinary users to set temporary limits for themselves and later change them. Note that if you try to lower a hard limit below the soft limit (and you're not the superuser), you'll get EINVAL back (Invalid Argument). So, in your particular case, ulimit (which is the same as ulimit -Sf ) says you don't have a soft limit on the size of files written by the shell and its subprocesses . (that's probably a good idea in most cases) Your other invocation, ulimit -Hn reports on the -n limit (maximum number of open file descriptors), not the -f limit, which is why the soft limit seems higher than the hard limit. If you enter ulimit -Hf you'll also get unlimited . | {
"source": [
"https://unix.stackexchange.com/questions/84227",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/11133/"
]
} |
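A short session illustrating the soft/hard interplay described in the answer above, using the numbers from the question (the exact figures are a sketch, but the error text is bash's usual message):

$ ulimit -Sn         # current soft limit on open file descriptors
1024
$ ulimit -Sn 4096    # an ordinary user may raise the soft limit up to the hard limit...
$ ulimit -Sn 2048    # ...and lower it again later
$ ulimit -Sn 8192    # but not push it past the hard limit of 4096
bash: ulimit: open files: cannot modify limit: Operation not permitted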
84,279 | Note that the order of redirections is significant. For example, the command
ls > dirlist 2>&1
directs both standard output and standard error to the file dirlist,
while the command
ls 2>&1 > dirlist
directs only the standard output to file dirlist, because the
standard error was duplicated from the standard output before the standard
output was redirected to dirlist. Now, that last part is confusing to me. In that case, any standard error would be printed to the terminal and any STDOUT would go to the dirlist file. That is what would happen, but that is not how I understand the manual. It seems like it should say "because the standard error was duplicated from the standard output AFTER the standard output was redirected to dirlist". If STDERR was sent to STDOUT before STDOUT was directed to a file, then wouldn't the file contain STDOUT AND STDERR? Can someone please clear this up for me? Is it just poor reading comprehension on my part? The use of the word duplication seems a little strange to me in this context. Perhaps that is throwing me. | Duplication is really the important part here. Let's see where the file descriptors are going to before redirection. This is normally the current terminal, e.g.: STDOUT ---> /dev/pts/1
STDERR ---> /dev/pts/1 Now, if we call ls -l without redirection, output and error messages go to my terminal under /dev/pts/1 . If we first redirect the STDOUT to a file ( ls -l > dirlist ), it looks like this: STDOUT ---> /home/bon/dirlist
STDERR ---> /dev/pts/1 When we then redirect STDERR to a duplicate of STDOUT 's file descriptor ( ls -l > dirlist 2>&1 ), STDERR goes to a duplicate of /home/bon/dirlist : STDOUT ---> /home/bon/dirlist
STDERR ---> /home/bon/dirlist If we would first redirect STDERR to a duplicate of STDOUT 's file descriptor ( ls -l 2>&1 ): STDOUT ---> /dev/pts/1
STDERR ---> /dev/pts/1 and then STDOUT to a file ( ls -l 2>&1 > dirlist ), we would get this: STDOUT ---> /home/bon/dirlist
STDERR ---> /dev/pts/1 Here, STDERR is still going to the terminal. You see, the order in the man page is correct. Testing Redirection Now, you can test that yourself. Using ls -l /proc/$$/fd/ , you see where STDOUT (with fd 1) and STDERR (with fd 2) are going for the current process: $ ls -l /proc/$$/fd/
total 0
lrwx------ 1 bon bon 64 Jul 24 18:19 0 -> /dev/pts/1
lrwx------ 1 bon bon 64 Jul 24 18:19 1 -> /dev/pts/1
lrwx------ 1 bon bon 64 Jul 24 07:41 2 -> /dev/pts/1
lrwx------ 1 bon bon 64 Jul 24 18:19 255 -> /dev/pts/1 Let's create a small shell script that shows where your file descriptors are pointed. This way, we always get the state when calling ls , including any redirection from the calling shell. $ cat > lookfd.sh
#!/bin/sh
ls -l /proc/$$/fd/
^D
$ chmod +x lookfd.sh (With Ctrl D , you send an end-of-file and so stop the cat command reading from STDIN .) Now, call this script with varying combinations of redirection: $ ./lookfd.sh
total 0
lrwx------ 1 bon bon 64 Jul 24 19:08 0 -> /dev/pts/1
lrwx------ 1 bon bon 64 Jul 24 19:08 1 -> /dev/pts/1
lrwx------ 1 bon bon 64 Jul 24 19:08 2 -> /dev/pts/1
lr-x------ 1 bon bon 64 Jul 24 19:08 255 -> /home/bon/lookfd.sh
$ ./lookfd.sh > foo.out
$ cat foo.out
total 0
lrwx------ 1 bon bon 64 Jul 24 19:10 0 -> /dev/pts/1
l-wx------ 1 bon bon 64 Jul 24 19:10 1 -> /home/bon/foo.out
lrwx------ 1 bon bon 64 Jul 24 19:10 2 -> /dev/pts/1
lr-x------ 1 bon bon 64 Jul 24 19:10 255 -> /home/bon/lookfd.sh
$ ./lookfd.sh 2>&1 > foo.out
$ cat foo.out
total 0
lrwx------ 1 bon bon 64 Jul 24 19:10 0 -> /dev/pts/1
l-wx------ 1 bon bon 64 Jul 24 19:10 1 -> /home/bon/foo.out
lrwx------ 1 bon bon 64 Jul 24 19:10 2 -> /dev/pts/1
lr-x------ 1 bon bon 64 Jul 24 19:10 255 -> /home/bon/lookfd.sh
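# (added illustration, not part of the original session) With 2>&1 written
# before the file redirection, a failing command still prints its error
# message on the terminal, while foo.out only receives standard output:
$ ls /nonexistent 2>&1 > foo.out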
$ ./lookfd.sh > foo.out 2>&1
$ cat foo.out
total 0
lrwx------ 1 bon bon 64 Jul 24 19:11 0 -> /dev/pts/1
l-wx------ 1 bon bon 64 Jul 24 19:11 1 -> /home/bon/foo.out
l-wx------ 1 bon bon 64 Jul 24 19:11 2 -> /home/bon/foo.out
lr-x------ 1 bon bon 64 Jul 24 19:11 255 -> /home/bon/lookfd.sh You can see that the file descriptors 1 (for STDOUT ) and 2 (for STDERR ) vary. For fun, you could also redirect STDIN and see the result: $ ./lookfd.sh < /dev/zero
total 0
lr-x------ 1 bon bon 64 Jul 24 19:18 0 -> /dev/zero
lrwx------ 1 bon bon 64 Jul 24 19:18 1 -> /dev/pts/1
lrwx------ 1 bon bon 64 Jul 24 19:18 2 -> /dev/pts/1
lr-x------ 1 bon bon 64 Jul 24 19:18 255 -> /home/bon/lookfd.sh (Question left to the reader: Where does file descriptor 255 point? ;-)) | {
"source": [
"https://unix.stackexchange.com/questions/84279",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/43342/"
]
} |
84,317 | I would like to delete every file, but keep the folder structure. Is there a way? NOTE: (I'm using GNU bash 4.1.5). | Try this: find . ! -type d -exec rm '{}' \; This will delete every single file, excluding directories, below the current working directory. Be extremely careful with this command. If the version of find on your machine supports it, you can also use find . ! -type d -delete | {
"source": [
"https://unix.stackexchange.com/questions/84317",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/3608/"
]
} |
84,335 | I have set up a backup script to back up world data on my Minecraft server hourly using cron, but because the worlds are constantly being edited by players, tar was telling me that files changed while they were read. I added --ignore-command-error to the tar in the script and that suppresses any errors when I run it manually, however cron still sends a mail message saying that files were changed while being read, and ends up flooding my mail because it's run once an hour. Anyone know how to fix this? This is the script: filename=$(date +%Y-%m-%d)
cd /home/minecraft/Server/
for world in survival survival_nether survival_the_end creative superflat
do
if [ ! -d "/home/minecraft/backups/$world" ]; then
mkdir /home/minecraft/backups/$world
fi
find /home/minecraft/backups/$world -mtime +1 -delete
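    # stream the world directory through tar and compress it with pigz, niced to keep the CPU impact low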
tar --ignore-command-error -c $world/ | nice -n 10 pigz -9 > /home/minecraft/backups/$world/$filename.tar.gz
done | Cron will attempt to send an email with any output that may have occurred when the command was run. From cron's man page: When executing commands, any output is mailed to the owner of the
crontab (or to the user specified in the MAILTO environment variable
in the crontab, if such exists). Any job output can also be sent to
syslog by using the -s option. So to disable it for a specific crontab entry, just capture all of the command's output and either direct it to a file or to /dev/null . 30 * * * * notBraiamsBackup.sh >/dev/null 2>&1 | {
"source": [
"https://unix.stackexchange.com/questions/84335",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/43208/"
]
} |
84,380 | Sometimes in the sources of projects I see "*.in" files. For example, a bunch of "Makefile.in"s. What are they for and/or what does the ".in" part mean? I assume that this has something to do with autoconf or make or something like those, but I'm not sure. I've tried searching for ".in file extension", "autoconf .in file extension", "autoconf .in", "autoconf dot in", and other variants, with no luck. | It's just a convention that signifies the given file is for input ; in my experience, these files tend to be a generic template from which a specific output file or script is generated (typically, a configure script fills in placeholder values to turn a file such as Makefile.in into the final Makefile ). | {
"source": [
"https://unix.stackexchange.com/questions/84380",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/29146/"
]
} |
84,381 | How can two dates be compared in a shell? Here is an example of how I would like to use this, though it does not work as-is: todate=2013-07-18
cond=2013-07-15
if [ $todate -ge $cond ];
then
break
fi How can I achieve the desired result? | The right answer is still missing: todate=$(date -d 2013-07-18 +%s)
cond=$(date -d 2014-08-19 +%s)
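# both variables now hold plain epoch-second integers, so the numeric -ge test below is valid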
if [ $todate -ge $cond ];
then
break
fi Note that this requires GNU date. The equivalent date syntax for BSD date (like found in OSX by default) is date -j -f "%F" 2014-08-19 +"%s" | {
"source": [
"https://unix.stackexchange.com/questions/84381",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/34169/"
]
} |
84,394 | After changing the keyboard layout with setxkbmap , if I type something in a GUI app, it results in blank squares or a dotted circle. Is there anything I am missing in LFS OS? | The right answer is still missing: todate=$(date -d 2013-07-18 +%s)
cond=$(date -d 2014-08-19 +%s)
if [ $todate -ge $cond ];
then
break
fi Note that this requires GNU date. The equivalent date syntax for BSD date (like found in OSX by default) is date -j -f "%F" 2014-08-19 +"%s" | {
"source": [
"https://unix.stackexchange.com/questions/84394",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/4843/"
]
} |
84,445 | I often use cd - to go back to where I was. How can I do this multiple times in bash? Or would zsh or some other tool support this? | In zsh, there's an auto_pushd option. This option makes cd behave like pushd . Then you can just use popd to go back to previous directories. ~ $ setopt auto_pushd
~ $ cd /
/ $ cd /var
/var $ cd /usr
/usr $ dirs
/usr /var / ~
/usr $ popd
/var $ popd
/ $ popd
~ $ In Bash, you can alias cd to pushd . alias cd=pushd The one downside of this is that you will lose cd 's three flags. From the cd help entry: -L force symbolic links to be followed -P use the physical directory structure without following symbolic links -e if the -P option is supplied, and the current working directory
cannot be determined successfully, exit with a non-zero status If you ever have to use the actual cd builtin instead of the alias, you can use one of these: 'cd' - Quoting the command makes the shell not resolve the alias and use the
normal cd. \cd - Backslashes quote characters. If you quote one character of a word, the
shell treats the whole word as quoted. builtin cd - This directly tells the shell to use the builtin instead of the alias. | {
"source": [
"https://unix.stackexchange.com/questions/84445",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/36178/"
]
} |
84,453 | During installation of most (if not all) distro's of linux, the hard drive is partitioned to include a swap partition by default. It is possible to change this behavior with swapon -p priority According to the man pages, the priority is: PRIORITY
Each swap area has a priority, either high or low. The default priority is
low. Within the low-priority areas, newer areas are even lower priority
than older areas.
All priorities set with swapflags are high-priority, higher than default.
They may have any non-negative value chosen by the caller. Higher numbers
mean higher priority.
Swap pages are allocated from areas in priority order, highest priority
first. For areas with different priorities, a higher-priority area is
exhausted before using a lower-priority area. If two or more areas have the
same priority, and it is the highest priority available, pages are
allocated on a round-robin basis between them.
As of Linux 1.3.6, the kernel usually follows these rules, but there are
exceptions Why would you ever need more than one swap file? Is it common practice for system administrators to configure more than one swap? | There are oh so many reasons to have multiple swap areas (they don't need to be files), even if you only have a single spindle. 20-20 hindsight: You deployed a machine with a single swap area, then eventually realised it's not enough. You can't redeploy the machine at will, but you can make another swap area (probably a file) until redoing the partition layout becomes an option. Resizing or moving swap areas: You can't resize swap areas (as mentioned by Evan Teitelman ). And you can't just swapoff , make a new swap area and then swapon again unless you have enough RAM: swapoff wants to move all the swapped out pages to RAM before letting go of the swap area. So you make a temporary swap area, swapoff the original, wait till all the pages have moved from the old swap area to the temporary one, resize the original swap partition, mkswap it, then swapon the resized one and swapoff the temporary one. The swapped pages are copied from the temporary swap area to the resized one, and you're done. If you're moving swap areas, you don't even need a temporary area. mkswap the new one, swapon it, then swapoff the old one and everything's moved. Crazy fast swapping: modern disks employ zone bit recording . The first zone of the disk is the fastest. You may want to measure the disk, and create a partition covering exactly the first, fastest zone of the drive. This may be smaller than your intended swap size. So you add multiple partitions on several disks, using the same technique. Crazy fast swapping, the sequel: alternatively, once you know where your disks' fastest zones are, you can make high priority swap areas in the first zone, lower priority swap areas in the second zone, etc. This way your swapping system automatically knows to load balance across all fast disk zones, prefer the faster zones, and use the slower zones as an overflow area when the need arises. Symmetric load balancing: on a nicely built system with many spindles (like a server), I like to have multiple swap partitions occupying the beginning of every disk (to take advantage of zone bit recording ). They all have identical priorities, so the kernel will load-balance the swap. One spindle may give you 100 MB/s, but swap across all spindles could give you a multiple of that. (naïvely speaking) Bottleneck-aware load balancing: in practice, however, there are other bottlenecks in place. So, for instance, a 16 disk server may have four 6 Gbps SATA ports, each with a four-port multiplier and four disks sharing the bandwidth. If you know about this, you can organise your swap spaces so Disk 1 on Ports 1–4 have the highest priority, the second disks on ports 1–4 have the second highest priority, etc. This will load balance swapping but not overwhelm the port multipliers. Swapping across devices with different performance: (as mentioned by Luke) if your system isn't a brand new server, and it's grown organically over the years, it may have block devices that are significantly faster than others. You'll want to swap to the fastest device first, then to the next fastest, etc. Size considerations: (courtesy of David Kohen ) maybe putting all your swap on one drive leaves a few gigs free on the drive (this sounds like a 2001 scenario, but there are plenty of old or embedded devices where this could be an issue). 
Split it across all drives, and on top of all the other benefits above, you get better disk space usage per drive. It's one thing to lose a couple of gigs per spindle, and another to lose 300 gigs from one disk. Emergencies: you have exactly 96 hours to submit your PhD thesis, and your last experiment (the one that's likely to get you that Nobel prize as well as funky mixed-case letters after your name) is sucking memory at impressive rates. You're almost out of swap. You create a swap file with a priority less than the priority of your main swap device — the kernel will use it as overflow swap space. You could even install swapd to do this for you automatically, so you'll also have plenty of swap space for those huge emacs and LaTeX runs. Swapping across different media: Linux can't swap to character devices, but there are lots of different media, physical and virtual: SSDs (note: you probably don't want to swap on SSDs), dozens of shockingly different types of spinning hard disks, floppies (yes, you can swap on a floppy — you can always shoot yourself in the foot with Unix), DRBD volumes, iSCSI, LVM volumes, LUKS encrypted partitions, etc (including surreal, mind-boggling layered combinations of these — swap on LUKS on LVM on a parallel port ZIP drive over iSCSI over IEEE802.3ad aggregated Ethernet? No problem, you filthy pervert). These are niche scenarios, and are meant to support niche requirements. | {
"source": [
"https://unix.stackexchange.com/questions/84453",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/39263/"
]
} |
84,615 | I am wondering, if cd alone sends me to my home folder and cd ~ does the same, why was the ~ added in the first place then? Is it something specific to BASH or would they behave differently in another Shell? | The ~ can be used for more than just that. Any command can profit from having a shortcut to the home folder. So it is not necessary if you want to cd into your home, but what about ~/.config ? $ cd ~/.config Otherwise you'd have to write out the home path, use the $HOME var, or do two cd s. Also for copying or moving files: $ cp ~/downloads/some-file some/path/ Since most of your files are in the home it's always good to have a shortcut. | {
"source": [
"https://unix.stackexchange.com/questions/84615",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/3357/"
]
} |
84,639 | I got this when trying to copy this gist on vim: https://gist.github.com/w0ng/3278077 I notice the same behaviour everywhere: If I paste something that includes a comment, everything that follows the comment is pasted commented. Maybe it has something to do with the fact that if I write a comment on insert mode and press enter, the following line automatically appends a comment. How can I get rid of this behaviour? | Enter paste mode before you paste: :set paste To switch back to "normal" mode: :set nopaste | {
"source": [
"https://unix.stackexchange.com/questions/84639",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/43448/"
]
} |
84,640 | pacman refuses to upgrade my Arch Linux system because of a conflict between open-vm-tools-modules and the linux kernel package: $ sudo pacman -Syu
:: Synchronizing package databases...
core is up to date
extra is up to date
community 1988.4 KiB 970K/s 00:02 [#############################################################################] 100%
multilib is up to date
:: Starting full system upgrade...
resolving dependencies...
looking for inter-conflicts...
error: failed to prepare transaction (could not satisfy dependencies)
:: open-vm-tools-modules: requires linux<3.10 Do you have any suggestions how to analyze the problem? | Enter paste mode before you paste: :set paste To switch back to "normal" mode: :set nopaste | {
"source": [
"https://unix.stackexchange.com/questions/84640",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/29509/"
]
} |
84,686 | Can anyone guide me to create custom commands in Unix/Linux. | Create a bash script in your /usr/bin folder, it should look something like this #!/bin/bash
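# (hypothetical example) a custom "hello" command could simply contain:
#   echo "Hello, $USER - today is $(date +%A)"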
Whatever combination of commands you want to run when you type this thing. It's really that easy. Just name the bash script whatever you want to type into the terminal, and make it executable: chmod +x filename and you're good to go! | {
"source": [
"https://unix.stackexchange.com/questions/84686",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/43965/"
]
} |
84,813 | Here are some options I thought of, not sure which is the right one. There was an I/O error reading from the pipe. The process writing to the other end of the pipe died with a failure. All processes who could write to the pipe have closed it. The write buffer of the pipe is full. The peer has closed the other direction of the duplex pipe. Writing failed because there are no processes which could read from the pipe. A system call returned the EPIPE error, and there was no error handler installed. | A process receives a SIGPIPE when it attempts to write to a pipe (named or not) or socket of type SOCK_STREAM that has no reader left. It's generally wanted behaviour. A typical example is: find . | head -n 1 You don't want find to keep on running once head has terminated (and then closed the only file descriptor open for reading on that pipe). The yes command typically relies on that signal to terminate. yes | some-command Will write "y" until some-command terminates. Note that it's not only when commands exit, it's when all the reader have closed their reading fd to the pipe. In: yes | ( sleep 1; exec <&-; ps -fC yes)
1 2 1 0 There will be 1 (the subshell), then 2 (subshell + sleep), then 1 (subshell) then 0 fd reading from the pipe after the subshell explicitly closes its stdin, and that's when yes will receive a SIGPIPE. Above, most shells use a pipe(2) while ksh93 uses a socketpair(2) , but the behaviour is the same in that regard. When a process ignores the SIGPIPE, the writing system call (generally write , but could be pwrite , send , splice ...) returns with an EPIPE error. So processes wanting to handle the broken pipe manually would typically ignore SIGPIPE and take action upon an EPIPE error. | {
"source": [
"https://unix.stackexchange.com/questions/84813",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/9908/"
]
} |
84,814 | I'd like to do a health check of a service by calling a specific url on it. Feels like the simplest solution would be to use cron to do the check every minute or so. In case of errors, cron sends me an email. I tried using cUrl for this but I can't get it to output messages only on errors. If I try to direct output to /dev/null, it prints out progress report. % Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 5559 100 5559 0 0 100k 0 --:--:-- --:--:-- --:--:-- 106k I tried looking through the curl options but I just can't find anything to suit the situation where you want it to be silent on success but make noise on errors. Is there a way to make curl do what I want or is there some other tool I should be looking at? | What about -sSf ? From the man pages: -s/--silent
Silent or quiet mode. Do not show progress meter or error messages.
Makes Curl mute.
-S/--show-error
When used with -s it makes curl show an error message if it fails.
-f/--fail
(HTTP) Fail silently (no output at all) on server errors. This is mostly
done to better enable scripts etc to better deal with failed attempts. In
normal cases when a HTTP server fails to deliver a document, it returns
an HTML document stating so (which often also describes why and more).
This flag will prevent curl from outputting that and return error 22.
This method is not fail-safe and there are occasions where non-successful
response codes will slip through, especially when authentication is
involved (response codes 401 and 407). For example: curl -sSf http://example.org > /dev/null | {
"source": [
"https://unix.stackexchange.com/questions/84814",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/44044/"
]
} |
84,844 | In Fish when you start typing, autocompletion automatically shows the first autocompleted guess on the line itself. In zsh you have to hit tab, and it shows the autocompletion below. Is there anyway to make zsh behave more like fish in this regard? (I am using Oh My Zsh ...) | I have implemented a zsh-autosuggestions plugin. It should integrate nicely with zsh-history-substring-search and zsh-syntax-highlighting which are features ported from fish. | {
"source": [
"https://unix.stackexchange.com/questions/84844",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/44062/"
]
} |
84,847 | I am trying to find how to log a specific instantiation of rrdtool to see whether the path it is receiving is incorrect. I know I could wrap the executable in a shell script that would log the parameters, but I was wondering if there was a more kernel-specific way to monitor for that, perhaps a filesystem callback that sees when a particular /proc/pid/exe matches a given binary? | Yes, there is a kernel facility: the audit subsystem. The auditd daemon does the logging, and the command auditctl sets up the logging rules. You can log all calls to a specific system alls, with some filtering. If you want to log all commands executed and their arguments, log the execve system call: auditctl -a exit,always -S execve To specifically trace the invocation of a specific program, add a filter on the program executable: auditctl -a exit,always -S execve -F path=/usr/bin/rrdtool The logs show up in /var/log/audit.log , or wherever your distribution puts them. You need to be root to control the audit subsystem. Once you're done investigating, use the same command line with -d instead of -a to delete a logging rule, or run auditctl -D to delete all audit rules. For debugging purposes, replacing the program by a wrapper script gives you more flexibility to log things like the environment, information about the parent process, etc. | {
"source": [
"https://unix.stackexchange.com/questions/84847",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/15189/"
]
} |
84,848 | Say I'm on machine local and log into machine remote with ssh, using X11 forwarding. Is there any way for programs running within the ssh session on remote can know they are being displayed on local ? Ideally, I'd like to know the hostname of the computer that the X server is running on. My goal is to cause different behavior in a program (emacs) based on which machine it's displayed on. | Yes, there is a kernel facility: the audit subsystem. The auditd daemon does the logging, and the command auditctl sets up the logging rules. You can log all calls to a specific system alls, with some filtering. If you want to log all commands executed and their arguments, log the execve system call: auditctl -a exit,always -S execve To specifically trace the invocation of a specific program, add a filter on the program executable: auditctl -a exit,always -S execve -F path=/usr/bin/rrdtool The logs show up in /var/log/audit.log , or wherever your distribution puts them. You need to be root to control the audit subsystem. Once you're done investigating, use the same command line with -d instead of -a to delete a logging rule, or run auditctl -D to delete all audit rules. For debugging purposes, replacing the program by a wrapper script gives you more flexibility to log things like the environment, information about the parent process, etc. | {
"source": [
"https://unix.stackexchange.com/questions/84848",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/19334/"
]
} |
84,852 | I need to recursively remove all files in all subdirs where the filename contains a number followed by an 'x' followed by a number, at least two times. Example: I'd want to remove these files: 'aaa-12x123-123x12.jpg'
'aaa-12x12-123x12-12x123.jpg' But I do NOT want to remove these files: 'aaa.jpg'
'aaa-12x12.jpg'
'aaaxaaa-123x123.jpg'
'aaaxaaa-aaaxaaa.jpg' How can I do that (from the bash shell) | A string contains “a number followed by an x followed by a number” if and only if it contains a digit followed by an x followed by a digit, i.e. if it contains a substring matching the pattern [0-9]x[0-9] . So you're looking to remove the files whose name matches the pattern *[0-9]x[0-9]*[0-9]x[0-9]*.jpg . find /path/to/directory -type f -name '*[0-9]x[0-9]*[0-9]x[0-9]*.jpg' -delete If your find doesn't have -delete , call rm to delete the files. find /path/to/directory -type f -name '*[0-9]x[0-9]*[0-9]x[0-9]*.jpg' -exec rm {} + | {
"source": [
"https://unix.stackexchange.com/questions/84852",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/44075/"
]
} |
84,915 | I am looking for a software in Linux, that will add the album-art/cover to each of the selected files in batch-mode. The album art is a jpg/png stored in my computer. It will be awesome if it can import from internet. Currently I have tried both Rhythmbox and Banshee. I have also tried lame and easytag , but seems they do not support batch mode. lame is not adding the properties, but overwriting. (I know lame is the only command line s/w i have used so far). So, basically I am looking for: <some magic s/w> --picture=<my chosen picture> Music/Artist/*.mp3 That will add the picture to the metadata of the file, permanently. Can you suggest me any such software? | lame Using lame you can do this using a little scripting: $ lame --ti /path/to/file.jpg audio.mp3 If the files are named something like this you can make a shell script to do what you want: for i in file1.mp3 file2.mp3 file3.mp3; do
albart=$(echo $i | sed 's/.mp3/.jpg/')
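    # embed the matching .jpg (derived from the mp3's name above) as the track's album art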
lame --ti /path/to/$albart $i
done You can make the above slightly more compact and eliminate the need for sed by letting bash itself remove the matching suffix: ...
albart="${i%.mp3}.jpg"
... Picard/MusicBrainz If you want to do this on a large scale I'd suggest using Picard which is the frontend tool for using the MusicBrainz database. There is a plugin to Picard called "Cover Art Downloader", which can do this to batches of your collection. MusicBrainz Picard Picard Plugins The above doesn't appear to be command line driven however. beets Another option would be to use beets . This can be driven from the command-line and makes use of MusicBrainz database for sourcing the album art. http://beets.radbox.org/ You can either source the album art using FetchArt Plugin or embed it using the EmbedArt Plugin . Other options? Also take a look at this previously asked U&L Q&A titled: Which mp3 tagging tool for Linux? . There are several alternative tools listed in this thread. | {
"source": [
"https://unix.stackexchange.com/questions/84915",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/43074/"
]
} |
84,922 | I want to read one part of one line from a file.
For example: POP3_SERVER_NAME = localhost I want to return only localhost , using sed. This text is on the third line. I do this to extract the line: sed -n '3p' installation.sh How do I extract only the localhost part? | awk might be a better tool here. $ cat test.dat
LINE 1
LINE 2
POP3_SERVER_NAME = localhost Search for lines that contain "POP3_SERVER_NAME"; print the last field. This doesn't depend on POP3_SERVER_NAME always being on line 3, which is probably a Good Thing. $ awk '/POP3_SERVER_NAME/{print $NF}' test.dat
localhost Depending on your application, you might need to make the regular expression more stringent. For example, you might want to match only that line that starts with POP3_SERVER_NAME. $ awk '/^POP3_SERVER_NAME/{print $NF}' test.dat
localhost Using sed is a little less intuitive. (Thanks, I'm aware of the irony.) Address the line that contains POP3_SERVER_NAME anywhere. Substitute an empty string for all the text from the beginning of the line to the optional space following "=". Then print. sed -n -e '/POP3_SERVER_NAME/ s/.*\= *//p' test.dat | {
"source": [
"https://unix.stackexchange.com/questions/84922",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/42897/"
]
} |
84,929 | I have learned from this Stack Overflow question that it is possible to use vi / vim to comment out a specified range of line numbers. For example, suppose I have the following bash script: #!/bin/bash
This
is
my
very
very
great
script Now suppose that I want to comment out line numbers 6 through 8 (which contain the words very , very , and great ) using the # comment character. In vi / vim , I can simply type :6,8s/^/# to obtain the following: #!/bin/bash
This
is
my
#very
#very
#great
script which comments out lines 6 through 8. My question is, is it possible to type a similar one liner that will remove the # comment character from lines 6 through 8 (but not any other commented lines in the file)? Having said this, I realize that there is some debate about whether I am actually using vi or vim . In practice, I open a file script.sh with the command vi script.sh . Also, when I type the command which vi , I obtain /usr/bin/vi . Nevertheless, when I simply type vi and press Enter , I obtain this: ~ VIM - Vi IMproved
~
~ version 7.2.330
~ by Bram Moolenaar et al.
~ Vim is open source and freely distributable
~
~ Sponsor Vim development!
~ type :help sponsor<Enter> for information
~
~ type :q<Enter> to exit
~ type :help<Enter> or <F1> for on-line help
~ type :help version7<Enter> for version info which seems to suggest that I'm actually using vim . I am accessing a remote Ubuntu Linux cluster using SSH from my PC. I am not using a Ubuntu Linux GUI. | You can use: :6,8s/^#// But much easier is to use Block Visual selection mode: Go to beginning of line 6, press Ctrl-v , go down to line 8 and press x . There is also "The NERD Commenter" plugin. | {
"source": [
"https://unix.stackexchange.com/questions/84929",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/9605/"
]
} |
85,021 | If I'm assigning a variable with temp=$! what would its value be? | $! contains the process ID of the most recently executed background pipeline. From man bash : Special Parameters The shell treats several parameters specially. These parameters may only be referenced; assignment to them is not allowed. ... ! - Expands to the process ID of the most recently executed background (asynchronous) command. For example: $ sleep 60 &
[1] 6238
$ echo "$!"
6238 | {
"source": [
"https://unix.stackexchange.com/questions/85021",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/8893/"
]
} |
85,060 | Say I have two paths: <source_path> and <target_path> . I would like my shell (zsh) to automatically find out if there is a way to represent <target_path> from <source_path> as a relative path. E.g. Let's assume <source_path> is /foo/bar/something <target_path> is /foo/hello/world The result would be ../../hello/world Why I need this: I need like to create a symbolic link from <source_path> to <target_path> using a relative symbolic link whenever possible, since otherwise our samba server does not show the file properly when I access these files on the network from Windows (I am not the sys admin, and don't have control over this setting) Assuming that <target_path> and <source_path> are absolute paths, the following creates a symbolic link pointing to an absolute path . ln -s <target_path> <source_path> so it does not work for my needs. I need to do this for hundreds of files, so I can't just manually fix it. Any shell built-ins that take care of this? | Try using realpath command (part of GNU coreutils ; >=8.23 ), e.g.: realpath --relative-to=/foo/bar/something /foo/hello/world If you're using macOS, install GNU version via: brew install coreutils and use grealpath . Note that both paths need to exist for the command to be successful. If you need the relative path anyway even if one of them does not exist then add the -m switch. For more examples, see Convert absolute path into relative path given a current directory . | {
"source": [
"https://unix.stackexchange.com/questions/85060",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/4531/"
]
} |
85,194 | I'd like to download, and extract an archive under a given directory. Here is how I've been doing it so far: wget http://downloads.mysql.com/source/dbt2-0.37.50.3.tar.gz
tar zxf dbt2-0.37.50.3.tar.gz
mv dbt2-0.37.50.3 dbt2 I'd like instead to download and extract the archive on the fly , without having the tar.gz written to the disk. I think this is possible by piping the output of wget to tar , and giving tar a target, but in practice I don't know how to put the pieces together. | You can do it by telling wget to output its payload to stdout (with the flag -O- ) and suppress its own output (with the flag -q ): wget -qO- your_link_here | tar xvz To specify a target directory: wget -qO- your_link_here | tar xvz -C /target/directory If you happen to have GNU tar , you can also rename the output dir: wget -qO- your_link_here | tar --transform 's/^dbt2-0.37.50.3/dbt2/' -xvz | {
"source": [
"https://unix.stackexchange.com/questions/85194",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/30018/"
]
} |
85,244 | I'm trying to learn systemd services by trying to start xclock as a service; the service file is below [Unit]
Description=clock
[Service]
Environment=DISPLAY=:0
ExecStart=/usr/bin/xclock
[Install]
WantedBy=graphical.target Any ideas what's wrong here? I'm getting an error saying "cannot connect to display." | An application needs two things to open a window on an X display. It needs to know the location of the X display; that's conveyed by the DISPLAY environment variable. It also needs to authenticate with the X server. This is conveyed through a cookie, which is a secret value generated by the X server when it starts and stored in a file that only the user who started the X server can access. The default cookie file is ~/.Xauthority . If your X server is using the default cookie file location, then adding Environment=XAUTHORITY=/home/dogs/.Xauthority will work (assuming /home/dogs is the home directory of the user who is logged in under X). If you need to find the location, see Can I launch a graphical program on another user's desktop as root? and Open a window on a remote X display (why “Cannot open display”)? Alternatively, running the program as the user who is running the X server will work, provided that the cookie file is in the default location (if not, you'll have to locate the cookie file, like in the root case). Add the User directive (e.g. User=dogs ). Of course the service won't run if there isn't an X display by that number owned by the user you specify. It's rather bizarre to start a GUI program from Systemd. It wasn't designed for this. GUI programs live in an X session, started by a user. Systemd is for system processes. You should experiment with daemons instead. | {
"source": [
"https://unix.stackexchange.com/questions/85244",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/44336/"
]
} |
85,249 | When looking for the path to an executable or checking what would happen if you enter a command name in a Unix shell, there's a plethora of different utilities ( which , type , command , whence , where , whereis , whatis , hash , etc). We often hear that which should be avoided. Why? What should we use instead? | Here is all you never thought you would ever not want to know about it: Summary To get the pathname of an executable in a Bourne-like shell script (there are a few caveats; see below): ls=$(command -v ls) To find out if a given command exists: if command -v given-command > /dev/null; then
echo given-command is available
else
echo given-command is not available
fi At the prompt of an interactive Bourne-like shell: type ls The which command is a broken heritage from the C-Shell and is better left alone in Bourne-like shells. Use Cases There's a distinction between looking for that information as part of a script or interactively at the shell prompt. At the shell prompt, the typical use case is: this command behaves weirdly, am I using the right one? What exactly happened when I typed mycmd ? Can I look further at what it is? In that case, you want to know what your shell does when you invoke the command without actually invoking the command. In shell scripts, it tends to be quite different. In a shell script there's no reason why you'd want to know where or what a command is if all you want to do is run it. Generally, what you want to know is the path of the executable, so you can get more information out of it (like the path to another file relative to that, or read information from the content of the executable file at that path). Interactively, you may want to know about all the my-cmd commands available on the system, in scripts, rarely so. Most of the available tools (as is often the case) have been designed to be used interactively. History A bit of history first. The early Unix shells until the late 70s had no functions or aliases. Only the traditional looking up of executables in $PATH . csh introduced aliases around 1978 (though csh was first released in 2BSD , in May 1979), and also the processing of a .cshrc for users to customize the shell (every shell, as csh , reads .cshrc even when not interactive like in scripts). While the Bourne shell was first released in Unix V7 earlier in 1979, function support was only added much later (1984 in SVR2), and anyway, it never had some rc file (the .profile is to configure your environment, not the shell per se ). csh got a lot more popular than the Bourne shell as (though it had an awfully worse syntax than the Bourne shell) it was adding a lot of more convenient and nice features for interactive use. In 3BSD (1980), a which csh script was added for the csh users to help identify an executable, and it's a hardly different script you can find as which on many commercial Unices nowadays (like Solaris, HP/UX, AIX or Tru64). That script reads the user's ~/.cshrc (like all csh scripts do unless invoked with csh -f ), and looks up the provided command name(s) in the list of aliases and in $path (the array that csh maintains based on $PATH ). Here you go: which came first for the most popular shell at the time (and csh was still popular until the mid-90s), which is the main reason why it got documented in books and is still widely used. Note that, even for a csh user, that which csh script does not necessarily give you the right information. It gets the aliases defined in ~/.cshrc , not the ones you may have defined later at the prompt or for instance by source ing another csh file, and (though that would not be a good idea), PATH might be redefined in ~/.cshrc . Running that which command from a Bourne shell would still lookup aliases defined in your ~/.cshrc , but if you don't have one because you don't use csh , that would still probably get you the right answer. A similar functionality was not added to the Bourne shell until 1984 in SVR2 with the type builtin command. The fact that it is builtin (as opposed to an external script) means that it can give you the right information (to some extent) as it has access to the internals of the shell. 
The initial type command suffered from a similar issue as the which script in that it didn't return a failure exit status if the command was not found. Also, for executables, contrary to which , it output something like ls is /bin/ls instead of just /bin/ls which made it less easy to use in scripts. Unix Version 8's (not released in the wild) Bourne shell had its type builtin renamed to whatis and extended to also report about parameters and print function definitions. It also fixed type issue of not returning failure when failing to find a name. rc , the shell of Plan9 (the once-to-be successor of Unix) (and its derivatives like akanga and es ) have whatis as well. The Korn shell (a subset of which the POSIX sh definition is based on), developed in the mid-80s but not widely available before 1988, added many of the csh features (line editor, aliases...) on top of the Bourne shell. It added its own whence builtin (in addition to type ) which took several options ( -v to provide with the type -like verbose output, and -p to look only for executables (not aliases/functions...)). Coincidental to the turmoil with regards to the copyright issues between AT&T and Berkeley, a few free software shell implementations came out in the late 80s early 90s. All of the Almquist shell ( ash , to be replacement of the Bourne shell in BSDs), the public domain implementation of ksh ( pdksh ), bash (sponsored by the FSF), zsh came out in-between 1989 and 1991. Ash, though meant to be a replacement for the Bourne shell, didn't have a type builtin until much later (in NetBSD 1.3 and FreeBSD 2.3), though it had hash -v . OSF/1 /bin/sh had a type builtin which always returned 0 up to OSF/1 v3.x. bash didn't add a whence but added a -p option to type to print the path ( type -p would be like whence -p ) and -a to report all the matching commands. tcsh made which builtin and added a where command acting like bash 's type -a . zsh has them all. The fish shell (2005) has a type command implemented as a function. The which csh script meanwhile was removed from NetBSD (as it was builtin in tcsh and of not much use in other shells), and the functionality added to whereis (when invoked as which , whereis behaves like which except that it only looks up executables in $PATH ). In OpenBSD and FreeBSD, which was also changed to one written in C that looks up commands in $PATH only. Implementations There are dozens of implementations of a which command on various Unices with different syntax and behaviour. On Linux (beside the builtin ones in tcsh and zsh ) we find several implementations. On recent Debian systems for instance, it's a simple POSIX shell script that looks for commands in $PATH . busybox also has a which command. There is a GNU which which is probably the most extravagant one. It tries to extend what the which csh script did to other shells: you can tell it what your aliases and functions are so that it can give you a better answer (and I believe some Linux distributions set some global aliases around that for bash to do that). zsh has a couple of operators to expand to the path of executables: the = filename expansion operator and the :c history expansion modifier (here applied to parameter expansion ): $ print -r -- =ls
/bin/ls
$ cmd=ls; print -r -- $cmd:c
/bin/ls zsh , in the zsh/parameters module also makes the command hash table as the commands associative array: $ print -r -- $commands[ls]
/bin/ls The whatis utility (except for the one in Unix V8 Bourne shell or Plan 9 rc / es ) is not really related as it's for documentation only (greps the whatis database, that is the man page synopsis'). whereis was also added in 3BSD at the same time as which though it was written in C , not csh and is used to lookup at the same time, the executable, man page and source but not based on the current environment. So again, that answers a different need. Now, on the standard front, POSIX specifies the command -v and -V commands (which used to be optional until POSIX.2008). UNIX specifies the type command (no option). That's all ( where , which , whence are not specified in any standard). Up to some version, type and command -v were optional in the Linux Standard Base specification which explains why for instance some old versions of posh (though based on pdksh which had both) didn't have either. command -v was also added to some Bourne shell implementations (like on Solaris). Status Today The status nowadays is that type and command -v are ubiquitous in all the Bourne-like shells (though, as noted by @jarno, note the caveat/bug in bash when not in POSIX mode or some descendants of the Almquist shell below in comments). tcsh is the only shell where you would want to use which (as there's no type there and which is builtin). In the shells other than tcsh and zsh , which may tell you the path of the given executable as long as there's no alias or function by that same name in any of our ~/.cshrc , ~/.bashrc or any shell startup file and you don't define $PATH in your ~/.cshrc . If you have an alias or function defined for it, it may or may not tell you about it, or tell you the wrong thing. If you want to know about all the commands by a given name, there's nothing portable. You'd use where in tcsh or zsh , type -a in bash or zsh , whence -a in ksh93 and in other shells, you can use type in combination with which -a which may work. Recommendations Getting the pathname to an executable Now, to get the pathname of an executable in a script, there are a few caveats: ls=$(command -v ls) would be the standard way to do it. There are a few issues though: It is not possible to know the path of the executable without executing it. All the type , which , command -v ... all use heuristics to find out the path. They loop through the $PATH components and find the first non-directory file for which you have execute permission. However, depending on the shell, when it comes to executing the command, many of them (Bourne, AT&T ksh, zsh, ash...) will just execute them in the order of $PATH until the execve system call doesn't return with an error. For instance if $PATH contains /foo:/bar and you want to execute ls , they will first try to execute /foo/ls or if that fails /bar/ls . Now execution of /foo/ls may fail because you don't have execution permission but also for many other reasons, like it's not a valid executable. command -v ls would report /foo/ls if you have execution permission for /foo/ls , but running ls might actually run /bar/ls if /foo/ls is not a valid executable. if foo is a builtin or function or alias, command -v foo returns foo . With some shells like ash , pdksh or zsh , it may also return foo if $PATH includes the empty string and there's an executable foo file in the current directory. There are some circumstances where you may need to take that into account. 
Keep in mind for instance that the list of builtins varies with the shell implementation (for instance, mount is sometimes builtin for busybox sh ), and for instance bash can get functions from the environment. if $PATH contains relative path components (typically . or the empty string which both refer to the current directory but could be anything), depending on the shell, command -v cmd might not output an absolute path. So the path you obtain at the time you run command -v will no longer be valid after you cd somewhere else. Anecdotal: with the ksh93 shell, if /opt/ast/bin (though that exact path can vary on different systems I believe) is in your $PATH , ksh93 will make available a few extra builtins ( chmod , cmp , cat ...), but command -v chmod will return /opt/ast/bin/chmod even if that path doesn't exist. Determining whether a command exists To find out if a given command exists standardly, you can do: if command -v given-command > /dev/null 2>&1; then
echo given-command is available
else
echo given-command is not available
fi Where one might want to use which (t)csh In csh and tcsh , you don't have much choice. In tcsh , that's fine as which is builtin. In csh , that will be the system which command, which may not do what you want in a few cases. Find commands only in some shells A case where it might make sense to use which is if you want to know the path of a command, ignoring potential shell builtins or functions in bash , csh (not tcsh ), dash , or Bourne shell scripts, that is shells that don't have whence -p (like ksh or zsh ), command -ev (like yash ), whatis -p ( rc , akanga ) or a builtin which (like tcsh or zsh ) on systems where which is available and is not the csh script. If those conditions are met, then: echo=$(which echo) would give you the path of the first echo in $PATH (except in corner cases), regardless of whether echo also happens to be a shell builtin/alias/function or not. In other shells, you'd prefer: zsh : echo==echo or echo=$commands[echo] or echo=${${:-echo}:c} ksh , zsh : echo=$(whence -p echo) yash : echo=$(command -ev echo) rc , akanga : echo=`whatis -p echo` (beware of paths with spaces) fish : set echo (type -fp echo) Note that if all you want to do is run that echo command, you don't have to get its path, you can just do: env echo this is not echoed by the builtin echo For instance, with tcsh , to prevent the builtin which from being used: set Echo = "`env which echo`" When you do need an external command Another case where you may want to use which is when you actually need an external command. POSIX requires that all shell builtins (like command ) be also available as external commands, but unfortunately, that's not the case for command on many systems. For instance, it's rare to find a command command on Linux based operating systems while most of them have a which command (though different ones with different options and behaviours). Cases where you may want an external command would be wherever you would execute a command without invoking a POSIX shell. The system("some command line") , popen() ... functions of C or various languages do invoke a shell to parse that command line, so system("command -v my-cmd") do work in them. An exception to that would be perl which optimises out the shell if it doesn't see any shell special character (other than space). That also applies to its backtick operator: $ perl -le 'print system "command -v emacs"'
-1
$ perl -le 'print system ":;command -v emacs"'
/usr/bin/emacs
0
$ perl -e 'print `command -v emacs`'
$ perl -e 'print `:;command -v emacs`'
/usr/bin/emacs The addition of that :; above forces perl to invoke a shell there. By using which , you wouldn't have to use that trick. | {
"source": [
"https://unix.stackexchange.com/questions/85249",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/22565/"
]
} |
85,336 | I need to test aspects of my software that only happen at certain times of the day. Rather than waiting whole days (and getting here at 2:00 AM), I'd like to change the time. But I'd rather not change it permanently . I know I can change the time using date , and then change it back again, but is there a better way? OS in question is RHEL6 running in a VM. | There's a library called libfaketime (also on GitHub ) which allows you to make the system report a given time to your application. You can either have the system report a fixed time for the duration of the program execution, or start the clock at some specific time (for example, 01:59:30). Basically, you hook the faketime library into your program's in-memory image through the library loader, and it captures and handles in its own way all system calls which relate to system time. It doesn't exactly change the system time, but it changes what time is reported to your specific application without affecting anything else that is running, which is probably what you really are after (otherwise, I see no reason to not just change the global system time). There's a number of possible variants on how to use it, but it looks like Changing what time a process thinks it is with libfaketime has a pretty thorough listing along with sample code to try them out. Google should also be able to unearth some examples given that you know what to search for. It would appear that it isn't available prepackaged through the RHEL repositories, but for example Debian provides it under the package name faketime . It also looks straight forward to build from source code (it apparently doesn't even need a configure step or anything like that). | {
"source": [
"https://unix.stackexchange.com/questions/85336",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/19581/"
]
} |
85,352 | I'm trying to set an alias for sudo !! in Bash. I tried alias sbb='sudo !! ', but it interprets that as a literal !! and prints sudo: !!: command not found If I use double quotes, it substitutes the double bang in the string itself, so that doesn't work. Is there any way to make this work? Or an alternative alias?
| !! is expanded by bash when you type it. It's not expanded by alias substitution. You can use the history built-in to do the expansion: alias sbb='sudo $(history -p !!)' If the command is more than a simple command (e.g. it contains redirections or pipes), you need to invoke a shell under sudo: alias sbb='sudo "$BASH" -c "$(history -p !!)"' | {
"source": [
"https://unix.stackexchange.com/questions/85352",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/17002/"
]
} |
85,364 | How can I verify whether a running process will catch a signal, or ignore it, or block it? Ideally I'd like to see a list of signals, or at least not have to actually send the signal to check. | Under Linux, you can find the PID of your process, then look at /proc/$PID/status . It contains lines describing which signals are blocked (SigBlk), ignored (SigIgn), or caught (SigCgt). # cat /proc/1/status
...
SigBlk: 0000000000000000
SigIgn: fffffffe57f0d8fc
SigCgt: 00000000280b2603
... The number to the right is a bitmask. If you convert it from hex to binary, each 1-bit represents a caught signal, counting from right to left starting with 1. So by interpreting the SigCgt line, we can see that my init process is catching the following signals: 00000000280b2603 ==> 101000000010110010011000000011
| | | || | || |`-> 1 = SIGHUP
| | | || | || `--> 2 = SIGINT
| | | || | |`----------> 10 = SIGUSR1
| | | || | `-----------> 11 = SIGSEGV
| | | || `--------------> 14 = SIGALRM
| | | |`-----------------> 17 = SIGCHLD
| | | `------------------> 18 = SIGCONT
| | `--------------------> 20 = SIGTSTP
| `----------------------------> 28 = SIGWINCH
`------------------------------> 30 = SIGPWR (I found the number-to-name mapping by running kill -l from bash.) EDIT : And by popular demand, a script, in POSIX sh. sigparse () {
i=0
# bits="$(printf "16i 2o %X p" "0x$1" | dc)" # variant for busybox
bits="$(printf "ibase=16; obase=2; %X\n" "0x$1" | bc)"
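    # scan the binary string from its low-order end: each 1 bit at position $i names signal number $i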
while [ -n "$bits" ] ; do
i="$(expr "$i" + 1)"
case "$bits" in
*1) printf " %s(%s)" "$(kill -l "$i")" "$i" ;;
esac
bits="${bits%?}"
done
}
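# decode every Sig* mask line from /proc/<pid>/status for the PID given as $1, printing the signal names after each mask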
grep "^Sig...:" "/proc/$1/status" | while read a b ; do
printf "%s%s\n" "$a" "$(sigparse "$b")"
done # | fmt -t # uncomment for pretty-printing | {
"source": [
"https://unix.stackexchange.com/questions/85364",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/3629/"
]
} |
85,379 | What do I need to do to have read permissions on /dev/hidraw*? I'm seeing stuff about udev rules and saw this on the net, but the world of udev is like a foreign land to me, and if there's some sort of a simpler solution where I just add myself to a group that'd be dandy... (Ubuntu 13.10 Preview) Feel free to retag the question - I'm not too keen on what 'hidraw' exactly goes under. EDIT: H'okay, so, just some more information to clarify the issue:
I literally stepped through code that called the POSIX open() method, and got the errno for insufficient permissions. Running cat on the file as a normal user results in an insufficient permissions error, while running under su results in a successful (albeit meaningless) cat operation. EDIT EDIT: At request, I'm providing the relevant code with POSIX call. It's from the HIDAPI library by Signal11 (function hid_open_path ). I trust that this code is correct, as it has apparently been in use for quite some time now. I've added a comment located where the relevant errno reading took place in GDB. hid_device *dev = NULL;
hid_init();
dev = new_hid_device();
if (kernel_version == 0) {
struct utsname name;
int major, minor, release;
int ret;
uname(&name);
ret = sscanf(name.release, "%d.%d.%d", &major, &minor, &release);
if (ret == 3) {
kernel_version = major << 16 | minor << 8 | release;
//printf("Kernel Version: %d\n", kernel_version);
}
else {
printf("Couldn't sscanf() version string %s\n", name.release);
}
}
/* OPEN HERE */
dev->device_handle = open(path, O_RDWR);
// errno at this location is 13: insufficient permissions
/* If we have a good handle, return it. */
if (dev->device_handle > 0) {
/* Get the report descriptor */
int res, desc_size = 0;
struct hidraw_report_descriptor rpt_desc;
memset(&rpt_desc, 0x0, sizeof(rpt_desc));
/* Get Report Descriptor Size */
res = ioctl(dev->device_handle, HIDIOCGRDESCSIZE, &desc_size);
if (res < 0)
perror("HIDIOCGRDESCSIZE");
/* Get Report Descriptor */
rpt_desc.size = desc_size;
res = ioctl(dev->device_handle, HIDIOCGRDESC, &rpt_desc);
if (res < 0) {
perror("HIDIOCGRDESC");
} else {
/* Determine if this device uses numbered reports. */
dev->uses_numbered_reports =
uses_numbered_reports(rpt_desc.value,
rpt_desc.size);
}
return dev;
}
else {
/* Unable to open any devices. */
free(dev);
return NULL;
} | I gave up running around trying to figure out some other means of doing it than udev rules, and instead just learned a bit about udev and wrote a flippin' rule. The following line was placed in a .rules file (I named mine 99-hidraw-permissions.rules ) located under /etc/udev/rules.d . KERNEL=="hidraw*", SUBSYSTEM=="hidraw", MODE="0664", GROUP="plugdev" Basically this assigns all devices coming out of the hidraw subsystem in the kernel to the group plugdev and sets the permissions to r/w r/w r (for root [the default owner], plugdev, and everyone else respectively). With myself added to the plugdev group, everything's dandy. Not quite as brain melting as I'd expected. Udev rules actually seem pretty straightforward... I mean, they look like they can get ridiculous if you're dealing with individual product IDs and whatnot, but they seem pretty damn tame for what they do. | {
"source": [
"https://unix.stackexchange.com/questions/85379",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/44416/"
]
} |
85,383 | I tried to do it with the command startx 1 . It seemed to flicker to a different screen momentarily, but then exited. Got the following error: /usr/bin/xterm: No absolute path found for shell: :1 Any ideas? | I think you can do it with this: $ startx -- :1 Note that you need to be on a text console. If you do this from an X session, you may not be authorized. First Ctrl + Alt + F1 to switch to a text console and log in there. Press Ctrl + Alt + F7 and Ctrl + Alt + F8 to switch between the X sessions (the F key numbers may vary depending on your distribution). If you want more control you can add more options to the command like so: $ startx gnome-session -- :1 vt8 This will start up gnome-session on display :1 and run it on virtual console 8 ( Ctrl + Alt + F8 ). | {
"source": [
"https://unix.stackexchange.com/questions/85383",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/40454/"
]
} |
85,390 | This question follows directly from the answer . In this case I am specifically unable to understand the part which says: In that regard, its behaviour is closer to emacs' than with
bash(readline)/ksh/zsh emacs mode, but departs from the terminal
driver embedded line editor (in canonical mode), where Ctrl-W deletes
the previous word (werase, also in vi). Here we are talking about shells and not editors which are two completely different programs. What does it mean to say shell is in some editor mode? P.S: You can base your answer on the premise that I understand what a shell is and how to use vim for basic editing. | In "vi" mode you can edit/navigate on the current shell prompt like a line in the vi editor. You can look at it like a one-line text file. Analogously in "emacs" mode you can edit/navigate the current command line using (some) of Emacs' shortcuts. Example For example in vi-mode you can do something like (in bash): $ set -o vi
$ ls hello world
<ESC>
bbdw # results in
$ ls world In emacs-mode you can hit e.g. Ctrl + A to jump at the start of a line (vi: Ctrl + [ , 0 or ESC , 0 ). You can turn on emacs mode via set -o emacs (in bash, ksh, zsh etc.). Readline A lot of interactive command line programs (including bash ) use the readline library. Thus, you can configure which input mode to use (vi or emacs) and other options in one place such that every program using readline has the exact same editing/navigating interface. For example my readline configuration looks like: $ cat ~/.inputrc
set editing-mode vi
set blink-matching-paren on For example, zsh and ksh do not use readline as far as I know, but they also support vi/emacs modes that are very much like the bash/readline one. Of course, the vi/emacs mode in a command line shell is just a subset of the complete editor feature set. Not every feature makes sense in a command line shell, and some features are more complicated to support than others. Canonical Mode
"source": [
"https://unix.stackexchange.com/questions/85390",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/28032/"
]
} |
85,391 | I have recently discovered that if we press Ctrl + X Ctrl + E , bash opens the current command in an editor (set in $VISUAL or $EDITOR ) and executes it when the editor is closed. But it doesn't seem to be documented in the man pages. Is it documented, and if so where? | I have found it out now. I should have read it more carefully before asking this. The man page says: edit-and-execute-command (C-xC-e)
Invoke an editor on the current command line, and execute the
result as shell commands. Bash attempts to invoke $VISUAL,
$EDITOR, and emacs as the editor, in that order. | {
"source": [
"https://unix.stackexchange.com/questions/85391",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/40290/"
]
} |
85,466 | I want to see a list of processes created by a specific user or group of users in Linux.
Can I do it using the ps command, or is there any other command to achieve this? | To view only the processes owned by a specific user, use the following command: top -U [username] Replace the [username] with the required username. If you want to use ps then ps -u [username] OR ps -ef | grep <username> OR ps -efl | grep <username> for the extended listing. Check out the man ps page for options. Another alternative is to use pstree which prints the process tree of the user: pstree <username or pid> | {
"source": [
"https://unix.stackexchange.com/questions/85466",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/44476/"
]
} |
85,505 | I am trying to get Protractor working for performing e2e angular testing, but protractor requires Selenium which requires ChromeDriver which requires glibc 2.14. My current development box is running Debian Wheezy which comes with glibc 2.13. I have read that switching over to Debian's unstable branch would provide access to glibc 2.14, but from what I have heard unstable is pretty...unstable. Is there any way I can upgrade glibc to 2.14 or 2.15 without the risk of breaking everything? Or is it possible to switch back from the unstable Debian branch if things start to break? 12:15:22.784 INFO - Executing: [new session: {browserName=chrome}] at URL: /session)
12:15:22.796 INFO - Creating a new session for Capabilities [{browserName=chrome}]
/home/chris/projects/personal/woddy/client/selenium/chromedriver: /lib/x86_64-linux-gnu/libc.so.6: version `GLIBC_2.15' not found (required by /home/chris/projects/personal/woddy/client/selenium/chromedriver)
/home/chris/projects/personal/woddy/client/selenium/chromedriver: /lib/x86_64-linux-gnu/libc.so.6: version `GLIBC_2.14' not found (required by /home/chris/projects/personal/woddy/client/selenium/chromedriver)
12:15:43.032 WARN - Exception thrown
java.util.concurrent.ExecutionException: org.openqa.selenium.WebDriverException: java.lang.reflect.InvocationTargetException | You don't have to switch to unstable to get glibc >= 2.14. In fact, the testing branch (now stable, or Jessie) has glibc 2.17, which you can pick up by just adding the testing repository and launching: sudo apt-get install libc6-dev=2.17-7 or, sudo apt-get -t testing install libc6-dev You can add the switch --dry-run to see what will be installed beforehand. You can see the status of the glibc package in the Debian Package Tracker System (Debian renamed the eglibc package to simply glibc from Jessie onwards). You can also just wait for the Jessie release on April 25. | {
"source": [
"https://unix.stackexchange.com/questions/85505",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/1782/"
]
} |
85,521 | # dd if=2013-Aug-uptime.csv bs=1 count=1 skip=3 2> /dev/null
d
# dd if=2013-Aug-uptime.csv bs=1 count=1 skip=0x3 2> /dev/null
f Why the second command outputs a different value? Is it possible to pass the skip|seek offset to dd as an hexadecimal value? | Why the second command outputs a different value? For historical reasons, dd considers x to be a multiplication operator. So 0x3 is evaluated to be 0. Is it possible to pass the skip|seek offset to dd as an hexadecimal value? Not directly, as far as I know. As well as multiplication using the operator x , you can suffix any number with b to mean "multiply by 512" (0x200) and with K to mean "multiply by 1024" (0x400). With GNU dd you can also use suffixes M , G , T , P , E , Z and Y to mean multiply by 2 to the power of 20, 30, 40, 50, 60, 70, 80 or 90, respectively, and you can use upper or lower case except for the b suffix. (There are many other possible suffixes. For example, EB means "multiply by 10 18 " and PiB means "multiply by 2 50 ". See info coreutils "block size" for more information, if you have a GNU installation.) You might find the above arcane, anachronistic, and geeky to the point of absurdity. Not to worry: you are not alone. Fortunately, you can just ignore it all and use your shell's arithmetic substitution instead (bash and other Posix compliant shells will work, as well as some non-Posix shells). The shell does understand hexadecimal numbers, and it allows a full range of arithmetic operators written in the normal way. You just need to surround the expression with $((...)) : # dd if=2013-Aug-uptime.csv bs=1 count=$((0x2B * 1024)) skip=$((0x37)) | {
"source": [
"https://unix.stackexchange.com/questions/85521",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/31724/"
]
} |
85,578 | If I have the following 2 files and 1 folder: someuser@computer:~/Desktop/test$ ls -l
total 340
-rw-r--r-- 1 someuser someuser 45082 ago 5 09:56 file1.pdf
-rw-r--r-- 1 someuser someuser 291836 ago 5 09:56 file2.pdf
drwxrwxr-x 2 someuser someuser 4096 ago 5 09:56 this_is_a_folder.pdf And I run the following command (notice that I omit the destination): cp *.pdf file1.pdf and file2.pdf are copied into the this_is_a_folder.pdf folder. someuser@computer00:~/Desktop/test$ ls this_is_a_folder.pdf/
file1.pdf file2.pdf Obviously *.pdf is expanding into matching items, so it's equivalent to cp file1.pdf file2.pdf this_is_a_folder.pdf ... and as this_is_a_folder.pdf is a folder, the two files are copied to it. Is this a bug? It's obviously a side effect of wildcard expansion and it's not what I would expect to happen. I would have expected a missing destination file error. | This is not a bug in the cp command. When you enter cp *.pdf , cp never sees the actual wildcards because the wildcards are expanded by bash , not by cp . cp has no way of knowing that you typed a single wildcard argument rather than an explicit list of files. This is a side effect of bash wildcard expansion and cannot be called a bug. | {
"source": [
"https://unix.stackexchange.com/questions/85578",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/40195/"
]
} |
85,663 | To run the command poweroff or reboot one needs to be super user. Is there any way I can run this as a normal user? I just don't want to sudo and enter my password every time I reboot or power off. | I changed /etc/sudoers so that every user that is in the admin group can execute the following commands without being asked for a password. sudo halt
sudo reboot
sudo poweroff You just need to add the following lines to /etc/sudoers ## Admin user group is allowed to execute halt and reboot
%admin ALL=NOPASSWD: /sbin/halt, /sbin/reboot, /sbin/poweroff and add yourself to the admin group. If you want only one user to be able to do this just remove the %admin and replace it with username like this ## user is allowed to execute halt and reboot
stormvirux ALL=NOPASSWD: /sbin/halt, /sbin/reboot, /sbin/poweroff You can find out more about /etc/sudoers with man sudoers or the online manpage | {
"source": [
"https://unix.stackexchange.com/questions/85663",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/33793/"
]
} |
85,789 | I have a directory with roughly 26,000 files and I need to grep in all of them. The problem is that I need it to be as fast as possible, so it's not ideal to write a script where grep takes the name of one file at a time from the find command and writes the matches to a file. Before the "argument list too long" issue, it took roughly 2 minutes to grep all these files.
Any ideas how to do it?
edit: there is a script that is making new files all the time, so it's not possible to put the files into different directories. | With find : cd /the/dir
find . -type f -exec grep pattern {} + ( -type f is to only search in regular files (also excluding symlinks even if they point to regular files). If you want to search in any type of file except directories (but beware there are some types of files like fifos or /dev/zero that you generally don't want to read), replace -type f with the GNU-specific ! -xtype d ( -xtype d matches for files of type directory after symlink resolution)). With GNU grep : grep -r pattern /the/dir (but beware that unless you have a recent version of GNU grep, that will follow symlinks when descending into directories). Non-regular files won't be searched unless you add a -D read option. Recent versions of GNU grep will still not search inside symlinks though. Very old versions of GNU find did not support the standard {} + syntax, but there you could use the non-standard: cd /the/dir &&
find . -type f -print0 | xargs -r0 grep pattern Performances are likely to be I/O bound. That is the time to do the search would be the time needed to read all that data from storage. If the data is on a redundant disk array, reading several files at a time might improve performance (and could degrade them otherwise). If the performances are not I/O bound (because for instance all the data is in cache), and you have multiple CPUs, concurrent greps might help as well. You can do that with GNU xargs 's -P option. For instance, if the data is on a RAID1 array with 3 drives, or if the data is in cache and you have 3 CPUs whose time to spare: cd /the/dir &&
find . -type f -print0 | xargs -n1000 -r0P3 grep pattern (here using -n1000 to spawn a new grep every 1000 files, up to 3 running in parallel at a time). However note that if the output of grep is redirected, you'll end up with badly interleaved output from the 3 grep processes, in which case you may want to run it as: find . -type f -print0 | stdbuf -oL xargs -n1000 -r0P3 grep pattern (on a recent GNU or FreeBSD system) or use the --line-buffered option of GNU grep . If pattern is a fixed string, adding the -F option could improve matters. If it's not multi-byte character data, or if for the matching of that pattern, it doesn't matter whether the data is multi-byte character or not, then: cd /the/dir &&
LC_ALL=C grep -r pattern . could improve performance significantly. If you end up doing such searches often, then you may want to index your data using one of the many search engines out there. | {
"source": [
"https://unix.stackexchange.com/questions/85789",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/44676/"
]
} |
85,808 | I use the following command to find files with a given string: find /var/www/http -type f | xargs grep -iR "STRING1" But how can I find files which include "STRING1" OR "STRING2" OR "STRING3"? This code doesn't work: find /var/www/http -type f | xargs grep -iR "STRING1" | xargs grep -iR "STRING2" | POSIXly, using grep with -E option: find /var/www/http -type f -exec grep -iE 'STRING1|STRING2' /dev/null {} + Or -e : find /var/www/http -type f -exec grep -i -e 'STRING' -e 'STRING2' /dev/null {} + With some implementations, at least on GNU systems, OSX and FreeBSD, you can escape | : find /var/www/http -type f -exec grep -i 'STRING1\|STRING2' /dev/null {} + | {
"source": [
"https://unix.stackexchange.com/questions/85808",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/44686/"
]
} |
85,865 | I tried to setup TRIM with LVM and dm-crypt on ubuntu 13.04 following this tutorial: http://blog.neutrino.es/2013/howto-properly-activate-trim-for-your-ssd-on-linux-fstrim-lvm-and-dmcrypt/ See the notes about my configuration and my testing procedure below. Questions Is there a reliable test if TRIM works properly? Is my test routine wrong or is my TRIM not working? If it's not working: what is wrong with my setup? How can I debug TRIM for my setup and make TRIM work? Configuration Here ist my configuration: cat /etc/crypttab sda3_crypt UUID=[...] none luks,discard and cat /etc/lvm/lvm.conf # [...]
devices {
# [ ... ]
issue_discards = 1
# [ ... ]
}
# [...] The SSD is a Samsung 840 Pro. Here is my test-procedure To test the setup I just did sudo fstrim -v / which resulted in /: [...] bytes were trimmed Doing this again resulted in /: 0 bytes were trimmed which seems to make sense and indicated that TRIM seems to work. However then I did this test: dd if=/dev/urandom of=tempfile count=100 bs=512k oflag=direct sudo hdparm --fibmap tempfile
tempfile:
filesystem blocksize 4096, begins at LBA 0; assuming 512 byte sectors.
byte_offset begin_LBA end_LBA sectors
0 5520384 5521407 1024
524288 5528576 5529599 1024
1048576 5523456 5525503 2048
2097152 5607424 5619711 12288
8388608 5570560 5603327 32768
25165824 5963776 5980159 16384
33554432 6012928 6029311 16384
41943040 6275072 6291455 16384
50331648 6635520 6639615 4096 sync sudo hdparm --read-sector 5520384 /dev/sda
/dev/sda:
reading sector 5520384: succeeded
7746 4e11 bf42 0c93 25d3 2825 19fd 8eda
bd93 8ec6 9942 bb98 ed55 87eb 53e1 01d5
c61a 3f52 19a1 0ae5 0798 c6e2 39d9 771a
b89f 3fc5 e786 9b1d 3452 d5d7 9479 a80d
114a 7528 a79f f475 57dc aeaf 25f4 998c
3dd5 b44d 23bf 77f3 0ad9 8688 6518 28ee
81db 1473 08b5 befe 8f2e 5b86 c84e c7d2
1bdd 1065 6a23 fd0f 2951 d879 e823 021b
fa84 b9c1 eadd 9154 c9f4 2ebe cd70 64ec
75a8 4d93 c8fa 3174 7277 1ffb e858 5eca
7586 8b2e 9dbc ab12 40ab eb17 8187 e67d
5e0d 0005 5867 b924 5cfd 6723 9e4a 6f5f
99a4 a3b0 eeac 454a 83b6 c528 1106 6682
ca77 4edf 2180 bf0c b175 fabb 3d4b 37e2
b834 9e3e 82f2 2fdd 2c6a c6ca 873f e71e
f979 160f 5778 356f 2aea 6176 46b6 72b9
f76e ee51 979c 326b 1436 7cfe f677 bfcd
4c3c 9e11 4747 45c1 4bb2 4137 03a1 e4c8
e9dd 43b4 a3b4 ce1b d218 4161 bf64 727b
75d8 dcc2 e14c ebec 2126 25da 0300 12bd
6b1a 28b3 824f 3911 c960 527d 97cd de1b
9f08 9a8e dcdc e65f 1875 58ca be65 82bf
e844 50b8 cc1b 7466 58b8 e708 bd3d c01f
64fb 9317 a77a e43b 671f e1fb e328 93a9
c9c7 291c 56e0 c6c1 f011 b94d 9dc7 71e6
c8b1 5720 b8c9 b1a6 14f1 7299 9122 912b
312a 0f2f a31a 8bf9 9f8c 54e6 96f3 60b8
04a7 7dc9 3caa db0a a837 e5d7 2752 b477
c22d 7598 44e1 84e9 25d4 5db5 9f19 f73b
85a0 c656 373a ec34 55fb e1fc 124e 4674
1ba8 1a84 6aa4 7cb5 455e f416 adc6 a125
c4d4 8323 4eee 2493 2920 4e38 524c 1981 sudo rm tempfile sync sudo fstrim / sync sudo hdparm --read-sector 5520384 /dev/sda
/dev/sda:
reading sector 5520384: succeeded
7746 4e11 bf42 0c93 25d3 2825 19fd 8eda
bd93 8ec6 9942 bb98 ed55 87eb 53e1 01d5
c61a 3f52 19a1 0ae5 0798 c6e2 39d9 771a
b89f 3fc5 e786 9b1d 3452 d5d7 9479 a80d
114a 7528 a79f f475 57dc aeaf 25f4 998c
3dd5 b44d 23bf 77f3 0ad9 8688 6518 28ee
81db 1473 08b5 befe 8f2e 5b86 c84e c7d2
1bdd 1065 6a23 fd0f 2951 d879 e823 021b
fa84 b9c1 eadd 9154 c9f4 2ebe cd70 64ec
75a8 4d93 c8fa 3174 7277 1ffb e858 5eca
7586 8b2e 9dbc ab12 40ab eb17 8187 e67d
5e0d 0005 5867 b924 5cfd 6723 9e4a 6f5f
99a4 a3b0 eeac 454a 83b6 c528 1106 6682
ca77 4edf 2180 bf0c b175 fabb 3d4b 37e2
b834 9e3e 82f2 2fdd 2c6a c6ca 873f e71e
f979 160f 5778 356f 2aea 6176 46b6 72b9
f76e ee51 979c 326b 1436 7cfe f677 bfcd
4c3c 9e11 4747 45c1 4bb2 4137 03a1 e4c8
e9dd 43b4 a3b4 ce1b d218 4161 bf64 727b
75d8 dcc2 e14c ebec 2126 25da 0300 12bd
6b1a 28b3 824f 3911 c960 527d 97cd de1b
9f08 9a8e dcdc e65f 1875 58ca be65 82bf
e844 50b8 cc1b 7466 58b8 e708 bd3d c01f
64fb 9317 a77a e43b 671f e1fb e328 93a9
c9c7 291c 56e0 c6c1 f011 b94d 9dc7 71e6
c8b1 5720 b8c9 b1a6 14f1 7299 9122 912b
312a 0f2f a31a 8bf9 9f8c 54e6 96f3 60b8
04a7 7dc9 3caa db0a a837 e5d7 2752 b477
c22d 7598 44e1 84e9 25d4 5db5 9f19 f73b
85a0 c656 373a ec34 55fb e1fc 124e 4674
1ba8 1a84 6aa4 7cb5 455e f416 adc6 a125
c4d4 8323 4eee 2493 2920 4e38 524c 1981 This seems to indicate that TRIM doesn't work. Since sudo hdparm -I /dev/sda | grep -i TRIM
* Data Set Management TRIM supported (limit 8 blocks)
* Deterministic read ZEROs after TRIM Edit Here is the output of sudo dmsetup table lubuntu--vg-root: 0 465903616 linear 252:0 2048
lubuntu--vg-swap_1: 0 33308672 linear 252:0 465905664
sda3_crypt: 0 499222528 crypt aes-xts-plain64 00000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000 0 8:3 4096 1 allow_discards Here is my /etc/fstab : # <file system> <mount point> <type> <options> <dump> <pass>
/dev/mapper/lubuntu--vg-root / ext4 errors=remount-ro 0 1
# /boot was on /dev/sda2 during installation
UUID=f700d855-96d0-495e-a480-81f52b965bda /boot ext2 defaults 0 2
# /boot/efi was on /dev/sda1 during installation
UUID=2296-2E49 /boot/efi vfat defaults 0 1
/dev/mapper/lubuntu--vg-swap_1 none swap sw 0 0
# tmp
tmpfs /tmp tmpfs nodev,nosuid,noexec,mode=1777 0 0 Edit: I finally reported it as a bug in https://bugs.launchpad.net/ubuntu/+source/lvm2/+bug/1213631 Hope somebody will find a solution there or at least test the setup and verify the bug. Update Now it works, see accepted answer. | I suggest using a different testing method. hdparm is a bit weird as it gives device addresses rather than filesystem addresses, and it doesn't say which device those addresses relate to (e.g. it resolves partitions, but not devicemapper targets, etc.). Much easier to use something that sticks with filesystem addresses, that way it's consistent (maybe except for non-traditional filesystems like zfs/btrfs). Create a test file: (not random on purpose) # yes | dd iflag=fullblock bs=1M count=1 of=trim.test Get the address, length and blocksize: (exact command depends on filefrag version) # filefrag -s -v trim.test
File size of trim.test is 1048576 (256 blocks, blocksize 4096)
ext logical physical expected length flags
0 0 34048 256 eof
trim.test: 1 extent found Get the device and mountpoint: # df trim.test
/dev/mapper/something 32896880 11722824 20838512 37% /mount/point With this set up, you have a file trim.test filled with yes -pattern on /dev/mapper/something at address 34048 with length of 256 blocks of 4096 bytes. Reading that from the device directly should produce the yes -pattern: # dd bs=4096 skip=34048 count=256 if=/dev/mapper/something | hexdump -C
00000000 79 0a 79 0a 79 0a 79 0a 79 0a 79 0a 79 0a 79 0a |y.y.y.y.y.y.y.y.|
*
00100000 If TRIM is enabled, this pattern should change when you delete the file. Note that caches need to be dropped also, otherwise dd will not re-read the data from disk. # rm trim.test
# sync
# fstrim -v /mount/point/ # when not using 'discard' mount option
# echo 1 > /proc/sys/vm/drop_caches
# dd bs=4096 skip=34048 count=256 if=/dev/mapper/something | hexdump -C On most SSD that would result in a zero pattern: 00000000 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 |................|
*
00100000 If encryption is involved, you will see a random pattern instead: 00000000 1f c9 55 7d 07 15 00 d1 4a 1c 41 1a 43 84 15 c0 |..U}....J.A.C...|
00000010 24 35 37 fe 05 f7 43 93 1e f4 3c cc d8 83 44 ad |$57...C...<...D.|
00000020 46 80 c2 26 13 06 dc 20 7e 22 e4 94 21 7c 8b 2c |F..&... ~"..!|.,| That's because physically trimmed, the crypto layer reads zeroes and decrypts those zeroes to "random" data. If the yes -pattern persists, most likely no trimming has been done. | {
"source": [
"https://unix.stackexchange.com/questions/85865",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/5289/"
]
} |
85,868 | Suppose that I have a desktop computer such that everything works except the keyboard. Is it possible to install and run Linux on this computer? I guess the answer is no, as entering the super user password requires a keyboard, but I'm not completely sure about this. | {
"source": [
"https://unix.stackexchange.com/questions/85868",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/44721/"
]
} |
85,873 | I have a scanned copy of my written signature and I need to apply it to some documents in the signature block. I used to do this on Windows all the time but I now have only Linux. Is this possible? How can I add a signature image to a PDF file in Linux (Gnome 3)? | Using Xournal you can annotate PDFs and add custom images (e.g. a transparent PNG). Although it is used for taking freehand notes and drawing, it can also annotate PDFs. On Ubuntu: Install Xournal through the Ubuntu Software Center Open Xournal Select "Annotate PDF" from the File menu and select your PDF file to be signed. Click the "Image" button in the toolbar (it looks like a silhouette of a person). Click on document. A file browser dialog will open. Select a PNG image of your signature. Resize and position the image on the PDF. Select "Export to PDF" from the File menu. More info at http://www.howtogeek.com/215485/sign-pdf-documents-without-printing-and-scanning-them-from-any-device/ | {
"source": [
"https://unix.stackexchange.com/questions/85873",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/34855/"
]
} |
85,925 | What command(s) can I use to examine the contents of the timezone files, such as /etc/localtime or the files under /usr/share/zoneinfo/* ? | The most appropriate command would appear to be zdump . $ zdump /etc/localtime
/etc/localtime Wed Aug 7 23:52:25 2013 EDT
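If your zdump supports the -v option, it can also print the daylight-saving transition times recorded in a zoneinfo file (handy for checking when a zone's rules change), e.g. $ zdump -v /usr/share/zoneinfo/America/New_York | tail -4 To look at every zone at once: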
$ zdump /usr/share/zoneinfo/* | tail -10
/usr/share/zoneinfo/Singapore Thu Aug 8 11:52:48 2013 SGT
/usr/share/zoneinfo/Turkey Thu Aug 8 06:52:48 2013 EEST
/usr/share/zoneinfo/UCT Thu Aug 8 03:52:48 2013 UCT
/usr/share/zoneinfo/Universal Thu Aug 8 03:52:48 2013 UTC
/usr/share/zoneinfo/US Thu Aug 8 03:52:48 2013
/usr/share/zoneinfo/UTC Thu Aug 8 03:52:48 2013 UTC
/usr/share/zoneinfo/WET Thu Aug 8 04:52:48 2013 WEST
/usr/share/zoneinfo/W-SU Thu Aug 8 07:52:48 2013 MSK
/usr/share/zoneinfo/zone.tab Thu Aug 8 03:52:48 2013
/usr/share/zoneinfo/Zulu Thu Aug 8 03:52:48 2013 UTC You can also interrogate these files using the file command: $ file /etc/localtime
/etc/localtime: timezone data, version 2, 4 gmt time flags, 4 std time flags, no leap seconds, 235 transition times, 4 abbreviation chars
$ file /usr/share/zoneinfo/Singapore
/usr/share/zoneinfo/Singapore: timezone data, version 2, 8 gmt time flags, 8 std time flags, no leap seconds, 8 transition times, 8 abbreviation chars | {
"source": [
"https://unix.stackexchange.com/questions/85925",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/7453/"
]
} |
85,932 | I'm trying to locally redirect ports on my Ubuntu machine using iptables . Similar to transparent proxying. I want to catch anything trying to leave my system on port 80 and redirect it to a remote host and port. Can I achieve this using the NAT and pre-routing functions of iptables ? | Try this iptables rule: $ sudo iptables -t nat -A OUTPUT -p tcp --dport 80 -j DNAT --to-destination IP:80 The above says to: Add the following rule to the NAT table ( -t nat ). This rule will be appended ( -A ) to the outbound traffic ( OUTPUT ). We're only interested in TCP traffic ( -p tcp ). We're only interested in traffic who's destination port is 80 ( --dport 80 ). When we have a match, jump to DNAT ( -j DNAT ). Route this traffic to some other server's IP @ port 80 ( --to-destination IP:80 ). What's DNAT? DNAT
This target is only valid in the nat table, in the PREROUTING and OUTPUT
chains, and user-defined chains which are only called from those chains.
It specifies that the destination address of the packet should be modified
(and all future packets in this connection will also be mangled), and
rules should cease being examined. References iptables man page | {
"source": [
"https://unix.stackexchange.com/questions/85932",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/43423/"
]
} |
85,994 | Using flock , several processes can have a shared lock at the same time, or be waiting to acquire a write lock. How do I get a list of these processes? That is, for a given file X, ideally to find the process id of each process which either holds, or is waiting for, a lock on the file. It would be a very good start though just to get a count of the number of processes waiting for a lock. | lslocks , from the util-linux package , does exactly this. In the MODE column, processes waiting for a lock will be marked with a * . | {
"source": [
"https://unix.stackexchange.com/questions/85994",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/5032/"
]
} |
86,000 | How can you log every command someone has entered into the shell? I'm asking on both the basis of protecting yourself if you are logged into someone else's server and something breaks, or if someone else is logged into your server (either intentionally or maliciously). Even a novice can bypass history with unset history or create a new shell to hide their tracks. I'm curious how the senior Linux admins track what commands have been entered or what changes made to the system. | Check out auditd . If you add -a exit,always -F arch=b64 -S execve
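# the same rule again, for programs running in 32-bit mode: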
-a exit,always -F arch=b32 -S execve to /etc/audit/audit.rules every executed command will be logged. See: https://whmcr.com/2011/10/14/auditd-logging-all-commands/ Then send it to a syslog server. | {
"source": [
"https://unix.stackexchange.com/questions/86000",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/39263/"
]
} |
86,012 | If you run hash it shows the path of all commands run since the hash was last reset ( hash -r ) [root@c04c ~]# hash
hash: hash table empty
[root@c04c ~]# whoami
root
[root@c04c ~]# hash
hits command
1 /usr/bin/whoami
[root@c04c ~]# whoami
root
[root@c04c ~]# hash
hits command
2 /usr/bin/whoami According to the man pages, the purpose of hash is: The /usr/bin/hash utility affects the way the current shell
environment remembers the locations of utilities found.
Depending on the arguments specified, it adds utility locations
to its list of remembered locations or it purges the
contents of the list. When no arguments are specified, it
reports on the contents of the list. The -r option causes
the shell to forget all remembered locations. Utilities provided as built-ins to the shell are not
reported by hash. Other than seeing how many times I've entered a command, I can't see the utility of hash . It was even featured in thegeekstuff.com's top 15 useful commands In what ways is hash useful? | hash is a bash built-in command. The hash table is a feature of bash that prevents it from having to search $PATH every time you type a command by caching the results in memory. The table gets cleared on events that obviously invalidate the results (such as modifying $PATH ) The hash command is just how you interact with that system (for whichever reason you feel you need to). Some use cases: Like you saw it prints out how many times you hit which commands if you type it with no arguments. This might tell you which commands you use most often. You can also use it to remember executables in non-standard locations. Example: [root@policyServer ~]# hash -p /lol-wut/whoami whoami
[root@policyServer ~]# whoami
Not what you’re thinking
[root@policyServer ~]# which whoami
/usr/bin/whoami
[root@policyServer ~]# /usr/bin/whoami
root
[root@policyServer ~]# Which might be useful if you just have a single executable in a directory outside of $PATH that you want to run by just typing the name instead of including everything in that directory (which would be the effect if you added it to $PATH ). An alias can usually do this as well, though, and since you're modifying the current shell's behavior, it isn't mapped in programs you kick off. A symlink to the lone executable is probably the preferable option here. hash is one way of doing it. You can use it to un-remember file paths. This is useful if a new executable pops up in an earlier PATH directory or gets mv 'd to somewhere else and you want to force bash to go out and find it again instead of the last place it remembers finding it. Example: [root@policyServer ~]# hash
hits command
1 /bin/ls
[root@policyServer ~]# cp /bin/ls /lol-wut
[root@policyServer ~]# hash
hits command
1 /bin/cp
1 /bin/ls
[root@policyServer ~]# hash -d ls
[root@policyServer ~]# ls
default.ldif newDIT.ldif notes.txt users.ldif
[root@policyServer ~]# hash
hits command
1 /bin/cp
1 /lol-wut/ls
[root@policyServer ~]# The cp command caused a new version of the ls executable to show up earlier in my $PATH but didn't trigger a purge of the hash table. I used hash -d to selectively purge the entry for ls from the hash table. Bash was then forced to look through $PATH again and when it did, it found it in the newer location (earlier in $PATH than it was running before). You can selectively invoke this "find new location of executable from $PATH " behavior, though: [root@policyServer ~]# hash
hits command
1 /bin/ls
[root@policyServer ~]# hash ls
[root@policyServer ~]# hash
hits command
0 /lol-wut/ls
[root@policyServer ~]# You'd mostly just want to do this if you wanted something out of the hash table and weren't 100% that you could logout and then back in successfully, or you wanted to preserve some modifications you've made to your shell. To get rid of stale mappings, you can also do hash -r (or export PATH=$PATH ) which effectively just purges bash's entire hash table. There are lots of little situations like that. I don't know if I'd call it one of the "most useful" commands but it does have some use cases. | {
"source": [
"https://unix.stackexchange.com/questions/86012",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/39263/"
]
} |
86,036 | Here's something I find myself doing often: less super/long/file/name Followed by: vim super/long/file/name Is there an easy way to pass the args of the previous command over to the next? SO I'd like to do something like vim !!! And have it automatically open super/long/file/name in vim . | Using !$ should work to access the last argument of the previous command in the bash shell: less super/long/file/name
vim !$ Also Meta + . or Esc + . can be used to paste the last argument if the readline library is enabled in emacs mode (default option). | {
"source": [
"https://unix.stackexchange.com/questions/86036",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/44788/"
]
} |
86,045 | Some quick background, I am running Fedora 19 x86_64 on a Dell Latitude with a 2nd gen i7 and discrete nvidia graphics card. I have some rather obnoxious problems where the screen doesn't seem to render consistently. The freezes are irregular and short, but frequent. The system has behaved like that since install, and initially I noticed it in a certain online multiplayer 3D Java game. I thought it was lag, but single player and other games behave similarly. Then I realized it actually consistently happens in the Desktop Environment (Gnome 3) and at times is almost unusable. I had to wait for about 30 seconds typing a sentence in this question. So what do I do to diagnose this problem? Who is most likely at fault? X? OpenGL? Graphics driver? Gnome 3? Kernel? Hardware? I am not even sure how to check what driver is being used or whether the discrete card is being taken advantage of. Also, the cursor retains the ability to move during the freezes which even further confuses me. Why might I be able to wiggle the cursor, but nothing else (like text) will render? | {
"source": [
"https://unix.stackexchange.com/questions/86045",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/29691/"
]
} |
86,147 | I have a need to see some additional file properties for exe and dll files. If I open windows explorer and add the additional columns to my view, I can see things like Company, Copyright, Product name and Product version when it exists for that file. This data is available via windows explorer so it stands to reason that while the data/string may exist somewhere in the file itself I should be able to extract that information via command line in linux. I've tried using 'strings' but have been met with limited success. Files where I know all the aforementioned data fields I cannot always see with 'strings' I'm hoping that someone may have an alternative solution. Maybe something I haven't thought of yet, to see this information. | You can use ExifTool . Here is an example of its usage: $ exiftool somefile.exe
ExifTool Version Number : 9.27
File Name : somefile.exe
Directory : .
File Size : 4.4 MB
File Modification Date/Time : 2013:08:09 12:43:10-04:00
File Access Date/Time : 2013:08:09 12:43:19-04:00
File Inode Change Date/Time : 2013:08:09 12:43:10-04:00
File Permissions : rw-------
File Type : Win32 EXE
MIME Type : application/octet-stream
Machine Type : Intel 386 or later, and compatibles
Time Stamp : 1992:06:19 18:22:17-04:00
PE Type : PE32
Linker Version : 2.25
Code Size : 37888
Initialized Data Size : 96256
Uninitialized Data Size : 0
Entry Point : 0x9c40
OS Version : 1.0
Image Version : 6.0
Subsystem Version : 4.0
Subsystem : Windows GUI
File Version Number : 3.3.0.0
Product Version Number : 3.3.0.0
File Flags Mask : 0x003f
File Flags : (none)
File OS : Win32
Object File Type : Executable application
File Subtype : 0
Language Code : Neutral
Character Set : Unicode
Comments : This installation was built with Inno Setup.
Company Name : Some company
File Description : Some company
File Version : 3.3
Legal Copyright : Copyright(c) 2009-2013 Some company
Product Name : Some company somefile
Product Version : 3.3 ExifTool supports a number of file types and meta information formats. From the exiftool(1) manpage: Below is a list of file types and meta information formats currently
supported by ExifTool (r = read, w = write, c = create):
File Types
------------+-------------+-------------+-------------+------------
3FR r | EIP r | LA r | ORF r/w | RSRC r
3G2 r | EPS r/w | LNK r | OTF r | RTF r
3GP r | ERF r/w | M2TS r | PAC r | RW2 r/w
ACR r | EXE r | M4A/V r | PAGES r | RWL r/w
AFM r | EXIF r/w/c | MEF r/w | PBM r/w | RWZ r
AI r/w | EXR r | MIE r/w/c | PCD r | RM r
AIFF r | F4A/V r | MIFF r | PDF r/w | SO r
APE r | FFF r/w | MKA r | PEF r/w | SR2 r/w
ARW r/w | FLA r | MKS r | PFA r | SRF r
ASF r | FLAC r | MKV r | PFB r | SRW r/w
AVI r | FLV r | MNG r/w | PFM r | SVG r
BMP r | FPF r | MODD r | PGF r | SWF r
BTF r | FPX r | MOS r/w | PGM r/w | THM r/w
CHM r | GIF r/w | MOV r | PLIST r | TIFF r/w
COS r | GZ r | MP3 r | PICT r | TTC r
CR2 r/w | HDP r/w | MP4 r | PMP r | TTF r
CRW r/w | HDR r | MPC r | PNG r/w | VRD r/w/c
CS1 r/w | HTML r | MPG r | PPM r/w | VSD r
DCM r | ICC r/w/c | MPO r/w | PPT r | WAV r
DCP r/w | IDML r | MQV r | PPTX r | WDP r/w
DCR r | IIQ r/w | MRW r/w | PS r/w | WEBP r
DFONT r | IND r/w | MXF r | PSB r/w | WEBM r
DIVX r | INX r | NEF r/w | PSD r/w | WMA r
DJVU r | ITC r | NRW r/w | PSP r | WMV r
DLL r | J2C r | NUMBERS r | QTIF r | WV r
DNG r/w | JNG r/w | ODP r | RA r | X3F r/w
DOC r | JP2 r/w | ODS r | RAF r/w | XCF r
DOCX r | JPEG r/w | ODT r | RAM r | XLS r
DV r | K25 r | OFR r | RAR r | XLSX r
DVB r | KDC r | OGG r | RAW r/w | XMP r/w/c
DYLIB r | KEY r | OGV r | RIFF r | ZIP r
Meta Information
----------------------+----------------------+---------------------
EXIF r/w/c | CIFF r/w | Ricoh RMETA r
GPS r/w/c | AFCP r/w | Picture Info r
IPTC r/w/c | Kodak Meta r/w | Adobe APP14 r
XMP r/w/c | FotoStation r/w | MPF r
MakerNotes r/w/c | PhotoMechanic r/w | Stim r
Photoshop IRB r/w/c | JPEG 2000 r | APE r
ICC Profile r/w/c | DICOM r | Vorbis r
MIE r/w/c | Flash r | SPIFF r
JFIF r/w/c | FlashPix r | DjVu r
Ducky APP12 r/w/c | QuickTime r | M2TS r
PDF r/w/c | Matroska r | PE/COFF r
PNG r/w/c | GeoTIFF r | AVCHD r
Canon VRD r/w/c | PrintIM r | ZIP r
Nikon Capture r/w/c | ID3 r | (and more) | {
"source": [
"https://unix.stackexchange.com/questions/86147",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/44835/"
]
} |
86,161 | In the example below what do the channel numbers correspond to? Which are on the server? Which are on the client? $ ssh -L1570:127.0.0.1:8899 root@thehost
Password:
Last login: Fri Aug 9 13:08:44 2013 from theclientip
Sun Microsystems Inc. SunOS 5.10 Generic January 2005
You have new mail.
# channel 2: open failed: administratively prohibited: open failed
channel 3: open failed: administratively prohibited: open failed
channel 2: open failed: administratively prohibited: open failed The ssh client is running on Windows 7 and the server has a Tomcat server running on port 8899. Tomcat is not listening on 127.0.0.1 on the remote machine so if I change the command to ssh -L1570:thehostpublicip:8899 root@thehost the port forward works. So I know that port forwarding seems to be working just fine on the server. my sshd config file contains the following two lines: # Port forwarding
AllowTcpForwarding yes
# If port forwarding is enabled, specify if the server can bind to INADDR_ANY.
# This allows the local port forwarding to work when connections are received
# from any remote host.
GatewayPorts yes I'm trying to setup port forwarding for another process not Tomcat and I get the error messages similar to the stuff above so I'm trying to understand the meaning of the error messages. | From the SSH Protocol documentation , regarding channels: All terminal sessions, forwarded connections, etc., are channels.
Either side may open a channel. Multiple channels are multiplexed
into a single connection. Channels are identified by numbers at each end. The number referring
to a channel may be different on each side. Requests to open a
channel contain the sender's channel number. Any other channel
related messages contain the recipient's channel number for the
channel. Channels are flow-controlled. No data may be sent to a channel until
a message is received to indicate that window space is available. Port forwarding The command you have looks fine. Are you sure that the service you're trying to connect to is up and accepting connections? The channel errors would seem to indicate that it's not. What are my active channels? If you have an active ssh connection you can use the following key combination to get help: Shift + ~ followed by Shift + ? $ ~?
Supported escape sequences:
~. - terminate connection (and any multiplexed sessions)
~B - send a BREAK to the remote system
~C - open a command line
~R - Request rekey (SSH protocol 2 only)
~^Z - suspend ssh
~# - list forwarded connections
~& - background ssh (when waiting for connections to terminate)
~? - this message
~~ - send the escape character by typing it twice
(Note that escapes are only recognized immediately after newline.)
debug2: channel 2: written 480 to efd 8 You can then use this key combination to get a list of the active channels: Shift + ~ followed by Shift + # $ ~#
The following connections are open:
#2 client-session (t4 r0 i0/0 o0/0 fd 6/7 cc -1)
debug2: channel 2: written 93 to efd 8 | {
"source": [
"https://unix.stackexchange.com/questions/86161",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/16389/"
]
} |
86,165 | I have an NIC card on a Debian machine somewhere. The machine is turned off, but I need to know whether the NIC card is turned on so that I can send a wake-on-lan magic packet later (from another Debian machine) to wake it up. I have the MAC address of the card. Is there any way I can ping the ethernet card by MAC to see whether it is on? I tried creating an ARP entry: arp -s 192.168.2.2 00-0c-0d-ef-02-03
ping 192.168.2.2 That didn't work, since the NIC card does not have this ip address. So the NIC card would receive the ping request but would not reply to it. Is there any way around this? I am using the etherwake package to send a wake-on-lan message. | You might have better luck using the tool arping instead. The tool ping works at the layer 3 level of the OSI model , whereas arping works at layer 2. You still need to know the IP of the system however with this tool. There are 2 versions of it, the standard one included with most Unixes (Alexey Kuznetsov's) is the version that can only deal with IP addresses. The other version (Thomas Habets') supposedly can query using MAC addresses. $ sudo arping 192.168.1.1 -c 1
ARPING 192.168.1.1 from 192.168.1.218 eth0
Unicast reply from 192.168.1.1 [00:90:7F:85:BE:9A] 1.216ms
Sent 1 probes (1 broadcast(s))
Received 1 response(s) arping works similarly to ping except instead of sending ICMP packets, it sends ARP packets. Getting a system's IP using just the MAC Here are a couple of methods for doing the reverse lookup of MAC to IP. nmap $ nmap -sP 192.168.1.0/24 Then look in your arp cache for the corresponding machine arp -an . fping $ fping -a -g 192.168.1.0/24 -c 1 Then look in your arp cache, same as above. ping $ ping -b -c1 192.168.1.255 Then look in your arp cache, same as above. nbtscan (windows only hosts) $ nbtscan 192.168.1.0/24
Doing NBT name scan for addresses from 192.168.1.0/24
IP address NetBIOS Name Server User MAC address
------------------------------------------------------------------------------
192.168.1.0 Sendto failed: Permission denied
192.168.1.4 MACH1 <server> <unknown> 00-0b-12-60-21-dd
192.168.1.5 MACH2 <server> <unknown> 00-1b-a0-3d-e7-be
192.168.1.6 MACH3 <server> <unknown> 00-21-9b-12-b6-a7 | {
"source": [
"https://unix.stackexchange.com/questions/86165",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/44840/"
]
} |
86,221 | There is no "Lock" option showing up in the user menu, and the standard shortcuts ( Ctrl + L or Ctrl + Alt + L ) don't do anything. I'm running Fedora 19 with Gnome Shell 3.8.3, and XDM 1.1.11. I'm using XDM because of broken XDMCP support in GDM - but before I upgraded to Fedora 19, I did have the lock option, even when using XDM. I've posted an answer reflecting the results of my own research. It basically says that it's not possible to have screen-lock integrated into Gnome 3.8 without running GDM. I really hope there's a better answer available though - so please add your own answer if there's any way to do this that I overlooked. | After some research, I think I've got enough information to post an answer to my own question. In Gnome Shell 3.6 and earlier, the old gnome-screensaver program was present, and if GDM was not running, gnome-screensaver would be invoked - allowing you to lock the screen. Starting in Gnome Shell 3.8 (included in Fedora 19), gnome-screensaver support has been dropped completely. This was done for three reasons: reduced code complexity coupled with the fact that the screensaver is seen as an unneeded feature, and the fact that the eventual move to Wayland will require screensaver, locking, etc. support to be in the compositor. So the only Gnome-integrated way of locking the screen is to have GDM running, which will respond to a dbus message telling it to lock the screen. Other display managers (such as XDM) have not been designed to respond to this dbus message, and so the screen cannot be locked. From this link : In old versions of gnome the command gnome-screensaver-command -l
would lock your screen. As gnome-screensaver is no more in gnome 3.8
you now have to send a dbus call. I think this is then handled by GDM. $ dbus-send --type=method_call --dest=org.gnome.ScreenSaver \
/org/gnome/ScreenSaver org.gnome.ScreenSaver.Lock | {
"source": [
"https://unix.stackexchange.com/questions/86221",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/41532/"
]
} |
86,247 | sh sys-snap.sh & What is sh ?
What is sys-snap.sh ?
Why should I put & at the end of the line?
Can anyone explain the syntax? Without the & the script won't go back to the prompt till I press Ctrl + C .
With & I can press enter and it works. | sh is the default Bourne-compatible shell (usually bash or dash) sys-snap.sh is a shell script, which contains commands that sh executes.
As you did not post its content, I can only guess from its name what it does.
I can find a script related to CPanel with the same file name that makes a log file with all current processes, current memory usage, database status, etc.
If the script starts with a shebang line ( #!/bin/sh or similar), you can make it executable with chmod +x sys-snap.sh and start it directly by using ./sys-snap.sh if it is in the current directory. With & the process starts in the background, so you can continue to use the shell and do not have to wait until the script is finished.
If you forget it, you can stop the current running process with Ctrl-Z and continue it in the background with bg (or in the foreground with fg ).
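A rough interactive session (prompts and job-control output abbreviated) could look like this:
$ sh sys-snap.sh            # started without &, so the prompt does not come back
^Z                          # Ctrl-Z suspends the script
[1]+  Stopped               sh sys-snap.sh
$ bg %1                     # let it continue in the background
$ jobs                      # list the background jobs
[1]+  Running               sh sys-snap.sh &
$ fg %1                     # bring it back to the foreground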
For more information, see job control | {
"source": [
"https://unix.stackexchange.com/questions/86247",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/29357/"
]
} |
86,266 | How can I make bash use time binary (/usr/bin/time) by default instead of the shell keyword? which time returns /usr/bin/time type time returns time is a shell keyword Running time is obviously executing the shell keyword: $ time
real 0m0.000s
user 0m0.000s
sys 0m0.000s
$ /usr/bin/time
Usage: /usr/bin/time [-apvV] [-f format] [-o file] [--append] [--verbose]
[--portability] [--format=format] [--output=file] [--version]
[--quiet] [--help] command [arg...] enable -n time returns bash: enable: time: not a shell builtin | You can use the command shell built-in to bypass the normal lookup process and run the given command as an external command regardless of any other possibilities (shell built-ins, aliases, etc.). This is often done in scripts which need to be portable across systems, although probably more commonly using the shorthand \ (as in \rm rather than command rm or rm , as especially the latter may be aliased to something not known like rm -i ). $ time
real 0m0.000s
user 0m0.000s
sys 0m0.000s
$ command time
Usage: time [-apvV] [-f format] [-o file] [--append] [--verbose]
[--portability] [--format=format] [--output=file] [--version]
[--quiet] [--help] command [arg...]
$ This can be used with an alias, like so: $ alias time='command time'
$ time
Usage: time [-apvV] [-f format] [-o file] [--append] [--verbose]
[--portability] [--format=format] [--output=file] [--version]
[--quiet] [--help] command [arg...]
$ The advantage of this over e.g. alias time=/usr/bin/time is that you aren't specifying the full path to the time binary, but instead falling back to the usual path search mechanism. The alias command itself can go into e.g. ~/.bashrc or /etc/bash.bashrc (the latter is global for all users on the system). For the opposite case (forcing use of the shell built-in in case there's an alias defined), you'd use something like builtin time , which again overrides the usual search process and runs the named shell built-in. The bash man page mentions that this is often used in order to provide custom cd functionality with a function named cd , which in turn uses the builtin cd to do the real thing. | {
"source": [
"https://unix.stackexchange.com/questions/86266",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/28779/"
]
} |
86,270 | Can someone provide a couple of examples on how to use coproc ? | co-processes are a ksh feature (already in ksh88 ). zsh has had the feature from the start (early 90s), while it has just only been added to bash in 4.0 (2009). However, the behaviour and interface is significantly different between the 3 shells. The idea is the same, though: it allows to start a job in background and being able to send it input and read its output without having to resort to named pipes. That is done with unnamed pipes with most shells and socketpairs with recent versions of ksh93 on some systems. In a | cmd | b , a feeds data to cmd and b reads its output. Running cmd as a co-process allows the shell to be both a and b . ksh co-processes In ksh , you start a coprocess as: cmd |& You feed data to cmd by doing things like: echo test >&p or print -p test And read cmd 's output with things like: read var <&p or read -p var cmd is started as any background job, You can use fg , bg , kill on it and refer it by %job-number or via $! . To close the writing end of the pipe cmd is reading from, you can do: exec 3>&p 3>&- And to close the reading end of the other pipe (the one cmd is writing to): exec 3<&p 3<&- You cannot start a second co-process unless you first save the pipe file descriptors to some other fds. For instance: tr a b |&
exec 3>&p 4<&p
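# fds 3 and 4 now hold the first co-process's pipes, so a second co-process can be started: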
tr b c |&
echo aaa >&3
echo bbb >&p zsh co-processes In zsh , co-processes are nearly identical to those in ksh . The only real difference is that zsh co-processes are started with the coproc keyword. coproc cmd
echo test >&p
read var <&p
print -p test
read -p var Doing: exec 3>&p Note: This doesn't move the coproc file descriptor to fd 3 (like in ksh ), but duplicates it. So, there's no explicit way to close the feeding or reading pipe, other starting another coproc . For instance, to close the feeding end: coproc tr a b
echo aaaa >&p # send some data
exec 4<&p # preserve the reading end on fd 4
coproc : # start a new short-lived coproc (runs the null command)
cat <&4 # read the output of the first coproc In addition to pipe based co-processes, zsh (since 3.1.6-dev19, released in 2000) has pseudo-tty based constructs like expect . To interact with most programs, ksh-style co-processes won't work, since programs start buffering when their output is a pipe. Here are some examples. Start the co-process x : zmodload zsh/zpty
zpty x cmd (Here, cmd is a simple command. But you can do fancier things with eval or functions.) Feed a co-process data: zpty -w x some data Read co-process data (in the simplest case): zpty -r x var Like expect , it can wait for some output from the co-process matching a given pattern. bash co-processes The bash syntax is a lot newer, and builds on top of a new feature recently added to ksh93, bash, and zsh that provides a syntax to allow handling of dynamically-allocated file descriptors above 10. bash offers a basic coproc syntax, and an extended one. Basic syntax The basic syntax for starting a co-process looks like zsh 's: coproc cmd In ksh or zsh , the pipes to and from the co-process are accessed with >&p and <&p . But in bash , the file descriptors of the pipe from the co-process and the other pipe to the co-proccess are returned in the $COPROC array (respectively ${COPROC[0]} and ${COPROC[1]} . So… Feed data to the co-process: echo xxx >&"${COPROC[1]}" Read data from the co-process: read var <&"${COPROC[0]}" With the basic syntax, you can start only one co-process at the time. Extended syntax In the extended syntax, you can name your co-processes (like in zsh zpty co-proccesses): coproc mycoproc { cmd; } The command has to be a compound command. (Notice how the example above is reminiscent of function f { ...; } .) This time, the file descriptors are in ${mycoproc[0]} and ${mycoproc[1]} . You can start more than one co-process at a time—but you do get a warning when you start a co-process while one is still running (even in non-interactive mode). You can close the file descriptors when using the extended syntax. coproc tr { tr a b; }
echo aaa >&"${tr[1]}"
exec {tr[1]}>&-
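# with its stdin now closed, tr sees end-of-file, flushes its output and exits,
# so the cat below gets the data and then terminates: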
cat <&"${tr[0]}" Note that closing that way doesn't work in bash versions prior to 4.3 where you have to write it instead: fd=${tr[1]}
exec {fd}>&- As in ksh and zsh , those pipe file descriptors are marked as close-on-exec. But in bash , the only way to pass those to executed commands is to duplicate them to fds 0 , 1 , or 2 . That limits the number of co-processes you can interact with for a single command. (See below for an example.) yash process and pipeline redirection yash doesn't have a co-process feature per se, but the same concept can be implemented with its pipeline and process redirection features. yash has an interface to the pipe() system call, so this kind of thing can be done relatively easily by hand there. You'd start a co-process with: exec 5>>|4 3>(cmd >&5 4<&- 5>&-) 5>&- That first creates a pipe(4,5) (5 the writing end, 4 the reading end), then redirects fd 3 to a pipe to a process that runs with its stdin at the other end, and with its stdout going to the pipe created earlier. Then we close the writing end of that pipe in the parent, since we won't need it there. So now in the shell we have fd 3 connected to cmd's stdin and fd 4 connected to cmd's stdout with pipes. Note that the close-on-exec flag is not set on those file descriptors. To feed data: echo data >&3 4<&- To read data: read var <&4 3>&- And you can close fds as usual: exec 3>&- 4<&- Now, why they are not so popular hardly any benefit over using named pipes Co-processes can easily be implemented with standard named pipes. I don't know exactly when named pipes were introduced, but it's possible it was after ksh came up with co-processes (probably in the mid 80s; ksh88 was "released" in 88, but I believe ksh was used internally at AT&T a few years before that), which would explain why. cmd |&
echo data >&p
read var <&p Can be written with: mkfifo in out
cmd <in >out &
exec 3> in 4< out
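# keeping the fifos open on fds 3 and 4 avoids re-opening (and blocking on)
# them for every command, and keeps cmd from seeing an early end-of-file: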
echo data >&3
read var <&4 Interacting with those is more straightforward, especially if you need to run more than one co-process. (See examples below.) The only benefit of using coproc is that you don't have to clean up those named pipes after use. deadlock-prone Shells use pipes in a few constructs: shell pipes: cmd1 | cmd2 , command substitution: $(cmd) , and process substitution: <(cmd) , >(cmd) . In those, the data flows in only one direction between different processes. With co-processes and named pipes, though, it's easy to run into deadlock. You have to keep track of which command has which file descriptor open, to prevent one staying open and holding a process alive. Deadlocks can be tricky to investigate, because they may occur non-deterministically; for instance, only when enough data to fill up one pipe has been sent. works worse than expect for what it's been designed for The main purpose of co-processes was to provide the shell with a way to interact with commands. However, it does not work so well. The simplest form of deadlock mentioned above is: tr a b |&
echo a >&p
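# the next read deadlocks: tr's output goes to a pipe, so it is block-buffered
# (on GNU systems, starting it as "stdbuf -o0 tr a b |&" would be one way around that)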
read var <&p Because its output doesn't go to a terminal, tr buffers its output. So it won't output anything until either it sees end-of-file on its stdin , or it has accumulated a buffer-full of data to output. So above, after the shell has output a\n (only 2 bytes), the read will block indefinitely because tr is waiting for the shell to send it more data. In short, pipes aren't good for interacting with commands. Co-processes can only be used to interact with commands that don't buffer their output, or commands which can be told not to buffer their output; for example, by using stdbuf with some commands on recent GNU or FreeBSD systems. That's why expect or zpty use pseudo-terminals instead. expect is a tool designed for interacting with commands, and it does it well. File descriptor handling is fiddly, and hard to get right Co-processes can be used to do some more complex plumbing than what simple shell pipes allow. That other Unix.SE answer has an example of coproc usage. Here's a simplified example: Imagine you want a function that feeds a copy of a command's output to 3 other commands, and then concatenates the output of those 3 commands. All using pipes. For instance: feed the output of printf '%s\n' foo bar to tr a b , sed 's/./&&/g' , and cut -b2- to obtain something like: foo
bbr
ffoooo
bbaarr
oo
ar First, it's not necessarily obvious, but there's a possibility for deadlock there, and it will start to happen after only a few kilobytes of data. Then, depending on your shell, you'll run into a number of different problems that have to be addressed differently. For instance, with zsh , you'd do it with: f() (
coproc tr a b
exec {o1}<&p {i1}>&p
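# each further coproc is started with the earlier co-process fds explicitly
# closed ({i1}>&- {o1}<&- ...), so that no stray copy of a pipe end keeps a
# reader or writer alive: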
coproc sed 's/./&&/g' {i1}>&- {o1}<&-
exec {o2}<&p {i2}>&p
coproc cut -c2- {i1}>&- {o1}<&- {i2}>&- {o2}<&-
tee /dev/fd/$i1 /dev/fd/$i2 >&p {o1}<&- {o2}<&- &
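# exec replaces the subshell with cat, so no shell process is left behind
# holding duplicates of the pipe fds open: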
exec cat /dev/fd/$o1 /dev/fd/$o2 - <&p {i1}>&- {i2}>&-
)
printf '%s\n' foo bar | f Above, the co-process fds have the close-on-exec flag set, but not the ones that are duplicated from them (as in {o1}<&p ). So, to avoid deadlocks, you’ll have to make sure they're closed in any processes that don't need them. Similarly, we have to use a subshell and use exec cat in the end, to ensure there's no shell process lying about holding a pipe open. With ksh (here ksh93 ), that would have to be: f() (
tr a b |&
exec {o1}<&p {i1}>&p
sed 's/./&&/g' |&
exec {o2}<&p {i2}>&p
cut -c2- |&
exec {o3}<&p {i3}>&p
eval 'tee "/dev/fd/$i1" "/dev/fd/$i2"' >&"$i3" {i1}>&"$i1" {i2}>&"$i2" &
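# fds above 2 are close-on-exec in ksh, so {i1}>&"$i1" duplicates each one onto
# a new fd (whose number is stored back in i1) that tee can inherit; the eval
# makes /dev/fd/$i1 expand to that new value: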
eval 'exec cat "/dev/fd/$o1" "/dev/fd/$o2" -' <&"$o3" {o1}<&"$o1" {o2}<&"$o2"
)
printf '%s\n' foo bar | f ( Note: That won’t work on systems where ksh uses socketpairs instead of pipes , and where /dev/fd/n works like on Linux.) In ksh , fds above 2 are marked with the close-on-exec flag, unless they’re passed explicitly on the command line. That’s why we don't have to close the unused file descriptors like with zsh —but it’s also why we have to do {i1}>&$i1 and use eval for that new value of $i1 , to be passed to tee and cat … In bash this cannot be done, because you can't avoid the close-on-exec flag. Above, it's relatively simple, because we use only simple external commands. It gets more complicated when you want to use shell constructs in there instead, and you start running into shell bugs. Compare the above with the same using named pipes: f() {
mkfifo p{i,o}{1,2,3}
tr a b < pi1 > po1 &
sed 's/./&&/g' < pi2 > po2 &
cut -c2- < pi3 > po3 &
tee pi{1,2} > pi3 &
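# tee copies the function's stdin to pi1 and pi2, and a third copy goes to pi3
# via its stdout; cat then concatenates the three outputs: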
cat po{1,2,3}
rm -f p{i,o}{1,2,3}
}
printf '%s\n' foo bar | f Conclusion If you want to interact with a command, use expect , or zsh 's zpty , or named pipes. If you want to do some fancy plumbing with pipes, use named pipes. Co-processes can do some of the above, but be prepared to do some serious head scratching for anything non-trivial. | {
"source": [
"https://unix.stackexchange.com/questions/86270",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/7453/"
]
} |
86,321 | I would like to display the contents of a text file on the command line. The file only contains 5-6 characters. Is there an easy way to do this? | Using cat Since your file is short, you can use cat . cat filename Using less If you have to view the contents of a longer file, you can use a pager such as less . less filename You can make less behave like cat when invoked on small files and behave
normally otherwise by passing it the -F and -X flags. less -FX filename I have an alias for less -FX . You can make one yourself like so: alias aliasname='less -FX' If you add the alias to your shell
configuration , you can use it
forever. Using od If your file contains strange or unprintable characters, you can use od to examine the characters. For example, $ cat file
(ÐZ4 ?o=÷jï
$ od -c file
0000000 202 233 ( 320 K j 357 024 J 017 h Z 4 240 ? o
0000020 = 367 \n
0000023 | {
"source": [
"https://unix.stackexchange.com/questions/86321",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/44536/"
]
} |
86,330 | I can fire ls -lrt to get files and folders sorted by modification date, but this does not separate directories from files. I want ls to show me directories first, sorted by modification date, and then files, sorted by modification date. How do I do that? | What about something like this: ls -ltr --group-directories-first | {
"source": [
"https://unix.stackexchange.com/questions/86330",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/27887/"
]
} |
86,556 | I have numerous Linux boxes with a very limited set of commands and disk space. But they do have the telnet command. I remotely connect to each of these probes (programmatically) and issue one-line Linux commands through SSH. I need to run a single command to connect to a specific machine, using telnet, and then disconnect right away . I can do all of that except the disconnecting-right-away part. Telnet opens some sort of console or terminal, and I can't figure out a one-line command to run the telnet command and then disconnect right away. Once I can do that, I can easily parse the textual output for error messages about not being able to connect to the machine on the specified port, and that's exactly what I am looking for. So how can I run a one-line command to connect to a machine using telnet and disconnect afterwards ? | You ought to be able to pipe the exit command into STDIN in telnet .
Try: echo 'exit' | telnet {site} {port} and see if that works. (it seems to work on my web server, but YMMV). | {
"source": [
"https://unix.stackexchange.com/questions/86556",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/25989/"
]
} |
86,720 | I am running Arch based on the Linux 3.10.5-1 kernel. The system uses the new de-facto naming conventions for ethernet interfaces, enp*s* and wlp* etc. This is a problem, however, as my educational institution is using a program called Maple 17 . Maple's licensing system is dependent on the existence of an interface named eth0 because it must retrieve the MAC address of it to verify the license. It's a bad solution, but I have to work around it. This means I will need an eth0 interface with any MAC address at all (as I can retrieve a new license file for the new MAC address) that doesn't necessarily have to work. In fact, it should just be down at all times. I reckon there are several ways to attempt to solve this issue, but I haven't been able to find anything about any of the ideas. Creating an adapter without connectivity Creating an alias for enp3s0 named eth0 Renaming enp3s0 or the loopback interface. The things I was able to find only covered changing to the newer conventions, and only for older versions of udev. They only worked on RHEL and SuSe anyway. I tried it without luck though. (persistent-net-names.rules and net-name-slot.rules, both of them just made my actual interface stop working and my wlan interface disappear) | Sure. You can create a tap device fairly easily, either with tunctl (from uml-utilities, at least on Debian): # tunctl -t eth0
Set 'eth0' persistent and owned by uid 0
# ifconfig eth0
eth0 Link encap:Ethernet HWaddr a6:9b:fe:d8:d9:5e
BROADCAST MULTICAST MTU:1500 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:500
RX bytes:0 (0.0 B) TX bytes:0 (0.0 B) Or with ip : # ip tuntap add dev eth0 mode tap
# ip link ls dev eth0
7: eth0: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN mode DEFAULT qlen 500
link/ether 0e:55:9b:6f:57:6c brd ff:ff:ff:ff:ff:ff Probably you should prefer the second method, as ip is the preferred network tool on Linux, and you likely already have it installed. Also, both of these create the tap device with (I'd guess) a random local MAC; you can set the MAC to a fixed value in any of the normal ways. | {
"source": [
"https://unix.stackexchange.com/questions/86720",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/28919/"
]
} |
86,722 | I have a folder with some directories and some files (some are hidden, beginning with dot). for d in *; do
echo $d
done will loop through all files and directories, but I want to loop only through directories. How do I do that? | You can specify a slash at the end to match only directories: for d in */ ; do
echo "$d"
done If you want to exclude symlinks, use a test to continue the loop if the current entry is a link. You need to remove the trailing slash from the name in order for -L to be able to recognise it as a symbolic link: for d in */ ; do
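# "${d%/}" below strips the trailing slash so that -L tests the link itself
# rather than the directory it points to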
[ -L "${d%/}" ] && continue
echo "$d"
done | {
"source": [
"https://unix.stackexchange.com/questions/86722",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/20661/"
]
} |
86,729 | I have a URL which has a space in its query param. I want to use this in curl,
e.g. curl -G "http://localhost:30001/data?zip=47401&utc_begin=2013-8-1 00:00:00&utc_end=2013-8-2 00:00:00&country_code=USA" which gives out Malformed Request-Line As per my understanding, the output is due to the space present in the query param. Is there any way to encode the URL automatically before providing it to the curl command? | curl supports url-encoding internally with --data-urlencode : $ curl -G -v "http://localhost:30001/data" --data-urlencode "msg=hello world" --data-urlencode "msg2=hello world2" -G is also necessary to append the data to the URL. Trace headers > GET /data?msg=hello%20world&msg2=hello%20world2 HTTP/1.1
> User-Agent: curl/7.19.7 (x86_64-redhat-linux-gnu)
> Host: localhost
> Accept: */* | {
"source": [
"https://unix.stackexchange.com/questions/86729",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/45138/"
]
} |
86,813 | In XFCE, holding down Alt and then clicking anywhere in a window with the left button will move that window (as if by dragging the title bar), or resize the window when clicking and dragging with the right button. I like this feature very much, and I'm used to it, since forever. But I've always had it bound to Super-L (the left "Windows" key). How do I change this behavior from Alt to Super-L ? I was looking in the XFCE Settings Manager but I couldn't find it anywhere. | In the Settings Manager choose Window Manager Tweaks , then on the third tab, Accessibility , you will find the control Key used to grab and move windows : | {
"source": [
"https://unix.stackexchange.com/questions/86813",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/1290/"
]
} |
86,875 | This is a simple problem but the first time I've ever had to actually fix it: finding which specific files/inodes are the targets of the most I/O. I'd like to be able to get a general system overview, but if I have to give a PID or TID I'm alright with that. I'd like to go without having to do a strace on the program that pops up in iotop . Preferably, using a tool in the same vein as iotop but one that itemizes by file. I can use lsof to see which files mailman has open but it doesn't indicate which file is receiving I/O or how much. I've seen it suggested elsewhere to use auditd , but I'd prefer not to do that since it would put the information into our audit files, which we use for other purposes, and this seems like an issue I ought to be able to research in this way. The specific problem I have right now is with LVM snapshots filling too rapidly. I've since resolved the problem but would like to have been able to fix it this way rather than just doing an ls on all the open file descriptors in /proc/<pid>/fd to see which one was growing fastest. | There are several aspects to this question which have been addressed partially through other tools, but there doesn't appear to be a single tool that provides all the features you're looking for. iotop This tool shows which processes are consuming the most I/O. But it lacks options to show specific file names. $ sudo iotop
Total DISK READ: 0.00 B/s | Total DISK WRITE: 0.00 B/s
TID PRIO USER DISK READ DISK WRITE SWAPIN IO> COMMAND
1 be/4 root 0.00 B/s 0.00 B/s 0.00 % 0.00 % init
2 be/4 root 0.00 B/s 0.00 B/s 0.00 % 0.00 % [kthreadd]
3 be/4 root 0.00 B/s 0.00 B/s 0.00 % 0.00 % [ksoftirqd/0]
5 be/4 root 0.00 B/s 0.00 B/s 0.00 % 0.00 % [kworker/u:0]
6 rt/4 root 0.00 B/s 0.00 B/s 0.00 % 0.00 % [migration/0]
7 rt/4 root 0.00 B/s 0.00 B/s 0.00 % 0.00 % [watchdog/0] By default it does what regular top does for processes vying for the CPU's time, except for disk I/O. You can coax it to give you a 30,000 foot view by using the -a switch so that it shows an accumulation by process, over time. $ sudo iotop -a
Total DISK READ: 0.00 B/s | Total DISK WRITE: 0.00 B/s
TID PRIO USER DISK READ DISK WRITE SWAPIN IO> COMMAND
258 be/3 root 0.00 B 896.00 K 0.00 % 0.46 % [jbd2/dm-0-8]
22698 be/4 emma 0.00 B 72.00 K 0.00 % 0.00 % chrome
22712 be/4 emma 0.00 B 172.00 K 0.00 % 0.00 % chrome
1177 be/4 root 0.00 B 36.00 K 0.00 % 0.00 % cupsd -F
22711 be/4 emma 0.00 B 120.00 K 0.00 % 0.00 % chrome
22703 be/4 emma 0.00 B 32.00 K 0.00 % 0.00 % chrome
22722 be/4 emma 0.00 B 12.00 K 0.00 % 0.00 % chrome i* tools (inotify, iwatch, etc.) These tools provide access to file access events; however, they need to be targeted at specific directories or files. So they aren't that helpful when trying to trace down a rogue file access by an unknown process, when debugging performance issues. Also, the inotify framework doesn't provide any particulars about the files being accessed. Only the type of access is reported, so no information about the amount of data being moved back and forth is available using these tools. iostat Shows overall performance (reads & writes) based on access to a given device (hard drive) or partition. But it doesn't provide any insight into which files are generating these accesses. $ iostat -htx 1 1
Linux 3.5.0-19-generic (manny) 08/18/2013 _x86_64_ (3 CPU)
08/18/2013 10:15:38 PM
avg-cpu: %user %nice %system %iowait %steal %idle
18.41 0.00 1.98 0.11 0.00 79.49
Device: rrqm/s wrqm/s r/s w/s rkB/s wkB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
sda
0.01 0.67 0.09 0.87 1.45 16.27 37.06 0.01 10.92 11.86 10.82 5.02 0.48
dm-0
0.00 0.00 0.09 1.42 1.42 16.21 23.41 0.01 9.95 12.22 9.81 3.19 0.48
dm-1
0.00 0.00 0.00 0.02 0.01 0.06 8.00 0.00 175.77 24.68 204.11 1.43 0.00 blktrace This option is too low level. It lacks visibility as to which files and/or inodes are being accessed, just raw block numbers. $ sudo blktrace -d /dev/sda -o - | blkparse -i -
8,5 0 1 0.000000000 258 A WBS 0 + 0 <- (252,0) 0
8,0 0 2 0.000001644 258 Q WBS [(null)]
8,0 0 3 0.000007636 258 G WBS [(null)]
8,0 0 4 0.000011344 258 I WBS [(null)]
8,5 2 1 1266874889.709032673 258 A WS 852117920 + 8 <- (252,0) 852115872
8,0 2 2 1266874889.709033751 258 A WS 852619680 + 8 <- (8,5) 852117920
8,0 2 3 1266874889.709034966 258 Q WS 852619680 + 8 [jbd2/dm-0-8]
8,0 2 4 1266874889.709043188 258 G WS 852619680 + 8 [jbd2/dm-0-8]
8,0 2 5 1266874889.709045444 258 P N [jbd2/dm-0-8]
8,0 2 6 1266874889.709051409 258 I WS 852619680 + 8 [jbd2/dm-0-8]
8,0 2 7 1266874889.709053080 258 U N [jbd2/dm-0-8] 1
8,0 2 8 1266874889.709056385 258 D WS 852619680 + 8 [jbd2/dm-0-8]
8,5 2 9 1266874889.709111456 258 A WS 482763752 + 8 <- (252,0) 482761704
...
^C
...
Total (8,0):
Reads Queued: 0, 0KiB Writes Queued: 7, 24KiB
Read Dispatches: 0, 0KiB Write Dispatches: 3, 24KiB
Reads Requeued: 0 Writes Requeued: 0
Reads Completed: 0, 0KiB Writes Completed: 5, 24KiB
Read Merges: 0, 0KiB Write Merges: 3, 12KiB
IO unplugs: 2 Timer unplugs: 0
Throughput (R/W): 0KiB/s / 510KiB/s
Events (8,0): 43 entries
Skips: 0 forward (0 - 0.0%) fatrace This is a new addition to the Linux kernel and a welcome one; being new, it's only in newer distros such as Ubuntu 12.10. My Fedora 14 system was lacking it 8-). It provides the same access that you can get through inotify, without having to target a particular directory and/or files. $ sudo fatrace
pickup(4910): O /var/spool/postfix/maildrop
pickup(4910): C /var/spool/postfix/maildrop
sshd(4927): CO /etc/group
sshd(4927): CO /etc/passwd
sshd(4927): RCO /var/log/lastlog
sshd(4927): CWO /var/log/wtmp
sshd(4927): CWO /var/log/lastlog
sshd(6808): RO /bin/dash
sshd(6808): RO /lib/x86_64-linux-gnu/ld-2.15.so
sh(6808): R /lib/x86_64-linux-gnu/ld-2.15.so
sh(6808): O /etc/ld.so.cache
sh(6808): O /lib/x86_64-linux-gnu/libc-2.15.so The above shows you the process ID that's doing the accessing and which file it's accessing, but it doesn't give you any overall bandwidth usage, so each access is indistinguishable from any other access. So what to do? The fatrace option shows the most promise for FINALLY providing a tool that can show you aggregate disk I/O usage based on the files being accessed, rather than the processes doing the accessing. References fatrace: report system wide file access events fatrace - report system wide file access events Another new ABI for fanotify blktrace User Guide | {
"source": [
"https://unix.stackexchange.com/questions/86875",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/34520/"
]
} |
86,933 | I have a MacBook Air that runs Linux. I want to swap the Alt and Super keys on both sides of the keyboard with each other. How do I do this with CLI tools? Update Following Drav Sloan's answer, I used the following: keycode 64 = Alt_L
keycode 133 = Super_L
remove Mod1 = Alt_L
remove Mod4 = Super_L
add Mod1 = Super_L
add Mod4 = Alt_L
keycode 108 = Alt_R
keycode 134 = Super_R
remove Mod1 = Alt_R
remove Mod4 = Super_R
add Mod1 = Super_R
add Mod4 = Alt_R | One way to achieve that is via xmodmap . You can run xev to get key events. On running xev a box should appear; you can focus it and press the keys you want to swap. It should output details similar to the following for the Alt key: KeyPress event, serial 28, synthetic NO, window 0x8800001,
root 0x25, subw 0x0, time 2213877115, (126,91), root:(1639,475),
state 0x0, keycode 14 (keysym 0xffe9, Alt_L), same_screen YES,
XLookupString gives 0 bytes:
XmbLookupString gives 0 bytes:
XFilterEvent returns: False I'm on a PC, and don't have a "Command Key", but have the equivalent "Windows Key", and
xev gives: KeyPress event, serial 28, synthetic NO, window 0x8000001,
root 0x25, subw 0x0, time 2213687746, (111,74), root:(1624,98),
state 0x0, keycode 93 (keysym 0xffeb, Super_L), same_screen YES,
XLookupString gives 0 bytes:
XmbLookupString gives 0 bytes:
XFilterEvent returns: False Because xmodmap has no idea of state , and can easily break key mappings, I suggest you first save the current map with: xmodmap -pke > defaults Then we create an xmodmap file: keycode 14 = Alt_L
keycode 93 = Super_L
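! lines starting with "!" are comments in an xmodmap file; the "remove" lines
! below detach each keysym from its old modifier set before the "add" lines
! re-attach them swapped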
remove Mod1 = Alt_L
remove Mod4 = Super_L
add Mod1 = Super_L
add Mod4 = Alt_L Note how I'm using the keycodes that xev returned. Also here I'm only replacing the left super and alt keys (and leaving the right ones to their old behavior). Then we can simply run xmodmap , to set these keys: $ xmodmap -v modmap.file
! modmap:
! 1: keycode 14 = Alt_L
keycode 0xe = Alt_L
! 2: keycode 93 = Super_L
keycode 0x5d = Super_L
! 3: remove Mod1 = Alt_L
! Keysym Alt_L (0xffe9) corresponds to keycode(s) 0xe
remove mod1 = 0xe
! 4: remove Mod4 = Super_L
! Keysym Super_L (0xffeb) corresponds to keycode(s) 0x5d
remove mod4 = 0x5d
! 5: add Mod1 = Super_L
add mod1 = Super_L
! 6: add Mod4 = Alt_L
add mod4 = Alt_L
!
! executing work queue
!
keycode 0xe = Alt_L
keycode 0x5d = Super_L
remove mod1 = 0xe
remove mod4 = 0x5d
add mod1 = Super_L
add mod4 = Alt_L You can run without the -v (verbose) switch for silent running, but I find it useful in case you made mistakes in your modmap file. If things get messy, just reapply your defaults: xmodmap defaults xmodmap is often run at startup of X, so you can have these applied as defaults if you put your modmap commands in ~/.xmodmaprc . | {
"source": [
"https://unix.stackexchange.com/questions/86933",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/-1/"
]
} |
86,941 | I frequently log in to a server, then cd into a specific directory. Is it possible to simplify these two commands into one? ssh bob@foo
cd /home/guest I'd like to avoid changing anything on 'foo' if possible, as I'll have to clear it with the server administrator. I use bash, but I am open to answers in other shells. | This works with OpenSSH: ssh -t bob@foo 'cd /home/guest && exec bash -l' The last argument runs in your login shell. The -t flag passed to ssh forces ssh to allocate a pseudo-terminal, which is necessary for an interactive shell. The -l flag passed to bash starts bash as a login shell . | {
"source": [
"https://unix.stackexchange.com/questions/86941",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/39263/"
]
} |
86,955 | I am learning device drivers and kernel programming. According to Jonathan Corbet's book, there is no main() function in device drivers. So I have two questions: Why don't we need a main() function in device drivers? Does the kernel itself have a main() function? Can someone explain this to me? | In user space programs, main() is the entry point to the program that is called by the libc initialization code when the binary is executed. Kernel code does not have the luxury of relying on libc, as libc itself relies on the kernel syscall interface for memory allocation, I/O, process management, etc. That said, the equivalent of main() in kernel code is start_kernel() , which is called by the bootloader after having loaded the kernel image, decompressed it into memory and set up essential hardware and memory paging. start_kernel() performs the majority of the system setup and eventually spawns the init process. The entry point to Linux kernel modules is an init function that is registered with the kernel by calling the module_init() macro. The registered module init function is then called by kernel code through the do_initcalls() function during kernel startup. | {
"source": [
"https://unix.stackexchange.com/questions/86955",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/40082/"
]
} |
86,971 | This question explains how to do it with curl . The accepted answer also points out that If wget is available, that would be far simpler. I looked through man wget but got lost in there, and didn't find an option to follow redirects. | wget follows redirects automatically. Just give the URL as an argument: wget 'http://downloads.sourceforge.net/project/romfs/genromfs/0.5.2/genromfs-0.5.2.tar.gz' | {
"source": [
"https://unix.stackexchange.com/questions/86971",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/20506/"
]
} |