198,423
I am trying to figure out why this happens in bash. Ok this is easy enough.

$ echo -e 'a\txy\bc'
a xc

Ok this is easy enough.

$ echo -e 'a\txy\b\b\b\b\b\b\b\b\bc'
ac xy

Ok this is easy enough.

$ echo -e 'a\txy\b\b\b\b\b\b\b\b\b\bc'
c xy

Now, why has c not dropped off the left end?

$ echo -e 'a\txy\b\b\b\b\b\b\b\b\b\b\b\bc'
c xy

I expected the output to be:

<a tab>xy

But clearly that isn't the case. Anyone got a pointer as to what might be happening? Thanks.
You can use a "here-document" with the - modifier. It can be indented by tab characters. You must switch from echo to cat . cat <<-EOF > /etc/apache/sites-availabe/000-default.conf <VirtualHost *:80> redirect 404 / ErrorDocument 404 </VirtualHost>EOF Or, to keep tabs in the result, you can pre-process the HERE document by let's say sed and indent with 4 spaces instead: sed 's/^ //' <<EOF....{....(------)func () {....(------)return....(------)}....}EOF I used . instead of a space and (------) instead of a tab to show how to format the script.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/198423", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/111813/" ] }
198,425
I want to have an alias that executes a command and then, whether it fails or not, executes other commands that depend on the success of each other. So I have something like this in .gitconfig:

getpull = !sh -c 'git remote add $0 $1; git fetch $0 && git checkout -b $2 $0/$2'

With that command I get the following error (I don't know why, as when I copy this to the shell it works fine):

sh -c 'git remote add $0 $1: 1: sh -c 'git remote add $0 $1: Syntax error: Unterminated quoted string
I figured it out: it seems to be something with the .gitconfig parser. To solve it, we just need to wrap the whole command in double quotes, as follows:

"!sh -c 'git remote add $0 $1; git fetch $0 && git checkout -b $2 $0/$2'"
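Put together, the entry in .gitconfig would then look roughly like this (the [alias] section header is assumed):

[alias]
    getpull = "!sh -c 'git remote add $0 $1; git fetch $0 && git checkout -b $2 $0/$2'"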
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/198425", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/111815/" ] }
198,444
I would like to execute a script every 30 min after booting into the system. I know you can use cron, but I don't plan to use this feature often, therefore I'd like to try it with systemd. So far I have only found the monotonic timers, which allow executing something once (at least I think so). What would foo.timer and foo@.service look like if I wanted to execute something every 30 minutes from boot/system start?

foo@.service

[Unit]
Description=run foo
Wants=foo.timer

[Service]
User=%I
Type=simple
ExecStart=/bin/bash /home/user/script.sh

foo.timer

[Unit]
Description=run foo

[Timer]
where I am stuck... ???
You need to create two files: one for the service, the other for the timer, with the same name.

Example:

/etc/systemd/system/test.service

[Unit]
Description=test job

[Service]
Type=oneshot
ExecStart=/bin/bash /tmp/1.sh

/etc/systemd/system/test.timer

[Unit]
Description=test

[Timer]
OnUnitActiveSec=10s
OnBootSec=10s

[Install]
WantedBy=timers.target

After that, reload systemd using systemctl daemon-reload and start your timer with systemctl start test.timer, or enable it by default (systemctl enable test.timer).

Test content of 1.sh:

#!/bin/bash
echo `date` >> /tmp/2

And the command to check all available timers:

systemctl list-timers --all

More detailed info is on the project page, and there are examples on the ArchLinux wiki page.
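Applied to the original question, a sketch of a timer that fires every 30 minutes could look like this (by default foo.timer activates a unit named foo.service; the interval values here are assumptions chosen to match the question):

/etc/systemd/system/foo.timer

[Unit]
Description=Run foo every 30 minutes

[Timer]
OnBootSec=30min
OnUnitActiveSec=30min

[Install]
WantedBy=timers.target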
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/198444", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/111041/" ] }
198,452
I'm trying to ls dirs that have .png files inside (no need for recursiveness, though it would be extra useful), except one. I want to exclude one directory. ls */*.png works fine; ls (^one)*/*.png returns no stdout. How do I achieve it? I am blind and piping the output to espeak, so I can only hear stdout for now.
Option 1 - using just ls: With extended bash globbing turned on (shopt -s extglob) you can do:

ls !(one*)/*.png

Option 2 - combining ls and grep: You can combine ls with grep -v, e.g.

ls */*.png | grep -v "one/"

Option 3 - (the best IMO) but uses find, not ls: For recursive searching of all subdirectories using find:

find . -type f -name "*.png" -not -path "*/one/*"

All of the above one-liners will list directories with .png files while filtering out any path matching one/; only Option 3 will do this recursively.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/198452", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/111839/" ] }
198,458
I've recently started using nano quite a bit for code editing. I've written custom syntax files, and I can include them in my local ~/.nanorc . However, I do work across multiple accounts, so I manually have to apply the include to each user's .nanorc . Is there a system-wide nanorc file I can edit, so the changes take effect for all users?
The system-wide nanorc file is at /etc/nanorc. You can also add a .nanorc file to /etc/skel so all new users have a local nanorc file added to their home folder.
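As a sketch, an include line in /etc/nanorc would look like this (the path to the syntax file is just an example; point it at wherever your custom syntax files live):

include "/usr/share/nano/mysyntax.nanorc"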
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/198458", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/59616/" ] }
198,532
I am using multiple versions of python, and I alias python3 to py:

alias py="/opt/python3.4/bin/python3"

I want to run python in vim, so I write this:

map <F9> :call SpecialCompileRun()<CR>
func! SpecialCompileRun()
    exec "w"
    if &filetype == 'python'
        exec '!time py %'
    endif
endfunc

but when I press F9 in vim, it tells me that:

/bin/bash py cannot find command

When I change py to python3 it works, but I still want to know why.
Because the way you define it, py is a shell alias, and Vim doesn't know (nor care) about shell aliases. Use an environment variable instead, perhaps like this:

$ PY=/opt/python3.4/bin/python3
$ export PY

then in Vim:

...
exec '!time ' . fnameescape($PY) . ' %'
...

Edit: Added fnameescape(). It's needed if $PY contains characters that have a special meaning to Vim (f.i. # and %).
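Put together, the mapping from the question might then read like this (a sketch, assuming $PY is exported before Vim is started):

map <F9> :call SpecialCompileRun()<CR>
func! SpecialCompileRun()
    exec "w"
    if &filetype == 'python'
        exec '!time ' . fnameescape($PY) . ' %'
    endif
endfunc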
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/198532", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/109888/" ] }
198,563
I have a script which uses rsync to sync data in a remote -> local scenario. Immediately after the rsync command is run, there is a check to see whether the exit code equals zero. If it's zero, further commands are performed. This however doesn't take into account the fact that rsync might have run successfully but not actually made any changes; because of this, the equals-zero branch will run regardless, which is a little redundant.

rsync -aEivm --delete /path/to/remote/ /path/to/local/
if [ $? -eq 0 ]; then
    # Success do some more work!
else
    # Something went wrong!
    exit 1;
fi

What would be the best approach to expand this to check whether there were actually any changes, based on the rsync command that ran? I've read that the -i flag can provide output to stdout, but how can this be placed in a conditional block?
Based on the comments to my original question: make rsync output to stdout with the -i flag, and use a non-empty-string check inside the exit code check to see if anything actually changed. Capturing the rsync output in a variable allows the check to be done.

RSYNC_COMMAND=$(rsync -aEim --delete /path/to/remote/ /path/to/local/)

if [ $? -eq 0 ]; then
    # Success do some more work!
    if [ -n "${RSYNC_COMMAND}" ]; then
        # Stuff to run, because rsync has changes
    else
        # No changes were made by rsync
    fi
else
    # Something went wrong!
    exit 1
fi

Potential downside: you have to lose the verbose output, but you can always log it to a file instead.
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/198563", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/88576/" ] }
198,590
What is a “bind mount”? How do I make one? What is it good for? I've been told to use a bind mount for something, but I don't understand what it is or how to use it.
What is a bind mount?

A bind mount is an alternate view of a directory tree. Classically, mounting creates a view of a storage device as a directory tree. A bind mount instead takes an existing directory tree and replicates it under a different point. The directories and files in the bind mount are the same as the original. Any modification on one side is immediately reflected on the other side, since the two views show the same data.

For example, after issuing the Linux command

mount --bind /some/where /else/where

the directories /some/where and /else/where have the same content, which is the content of /some/where. (If /else/where was not empty, its previous content is now hidden.)

Unlike a hard link or symbolic link, a bind mount doesn't affect what is stored on the filesystem. It's a property of the live system.

How do I create a bind mount?

bindfs

The bindfs filesystem is a FUSE filesystem which creates a view of a directory tree. For example, the command

bindfs /some/where /else/where

makes /else/where a mount point under which the contents of /some/where are visible.

Since bindfs is a separate filesystem, the files /some/where/foo and /else/where/foo appear as different files to applications (the bindfs filesystem has its own st_dev value). Any change on one side is "magically" reflected on the other side, but the fact that the files are the same is only apparent when one knows how bindfs operates.

Bindfs has no knowledge of mount points, so if there is a mount point under /some/where, it appears as just another directory under /else/where. Mounting or unmounting a filesystem underneath /some/where appears under /else/where as a change of the corresponding directory.

Bindfs can alter some of the file metadata: it can show fake permissions and ownership for files. See the manual for details, and see below for examples.

A bindfs filesystem can be mounted as a non-root user; you only need the privilege to mount FUSE filesystems. Depending on your distribution, this may require being in the fuse group or be allowed to all users. To unmount a FUSE filesystem, use fusermount -u instead of umount, e.g.

fusermount -u /else/where

nullfs

FreeBSD provides the nullfs filesystem which creates an alternate view of a filesystem. The following two commands are equivalent:

mount -t nullfs /some/where /else/where
mount_nullfs /some/where /else/where

After issuing either command, /else/where becomes a mount point at which the contents of /some/where are visible.

Since nullfs is a separate filesystem, the files /some/where/foo and /else/where/foo appear as different files to applications (the nullfs filesystem has its own st_dev value). Any change on one side is "magically" reflected on the other side, but the fact that the files are the same is only apparent when one knows how nullfs operates.

Unlike the FUSE bindfs, which acts at the level of the directory tree, FreeBSD's nullfs acts deeper in the kernel, so mount points under /else/where are not visible: only the tree that is part of the same mount point as /some/where is reflected under /else/where.

The nullfs filesystem may be usable under other BSD variants (OS X, OpenBSD, NetBSD) but it is not compiled as part of the default system.

Linux bind mount

Under Linux, bind mounts are available as a kernel feature. You can create one with the mount command, by passing either the --bind command line option or the bind mount option.
The following two commands are equivalent:

mount --bind /some/where /else/where
mount -o bind /some/where /else/where

Here, the "device" /some/where is not a disk partition like in the case of an on-disk filesystem, but an existing directory. The mount point /else/where must be an existing directory as usual. Note that no filesystem type is specified either way: making a bind mount doesn't involve a filesystem driver, it copies the kernel data structures from the original mount.

mount --bind also supports mounting a non-directory onto a non-directory: /some/where can be a regular file (in which case /else/where needs to be a regular file too).

A Linux bind mount is mostly indistinguishable from the original. The command df -T /else/where shows the same device and the same filesystem type as df -T /some/where. The files /some/where/foo and /else/where/foo are indistinguishable, as if they were hard links. It is possible to unmount /some/where, in which case /else/where remains mounted.

With older kernels (I don't know exactly when, I think until some 3.x), bind mounts were truly indistinguishable from the original. Recent kernels do track bind mounts and expose the information through /proc/PID/mountinfo, which allows findmnt to indicate a bind mount as such.

You can put bind mount entries in /etc/fstab. Just include bind (or rbind etc.) in the options, together with any other options you want. The "device" is the existing tree. The filesystem column can contain none or bind (it's ignored, but using a filesystem name would be confusing). For example:

/some/where /readonly/view none bind,ro

If there are mount points under /some/where, their contents are not visible under /else/where. Instead of bind, you can use rbind, to also replicate mount points underneath /some/where. For example, if /some/where/mnt is a mount point then

mount --rbind /some/where /else/where

is equivalent to

mount --bind /some/where /else/where
mount --bind /some/where/mnt /else/where/mnt

In addition, Linux allows mounts to be declared as shared, slave, private or unbindable. This affects whether that mount operation is reflected under a bind mount that replicates the mount point. For more details, see the kernel documentation.

Linux also provides a way to move mounts: where --bind copies, --move moves a mount point.

It is possible to have different mount options in two bind-mounted directories. There is a quirk, however: making the bind mount and setting the mount options cannot be done atomically, they have to be two successive operations. (Older kernels did not allow this.) For example, the following commands create a read-only view, but there is a small window of time during which /else/where is read-write:

mount --bind /some/where /else/where
mount -o remount,ro,bind /else/where

I can't get bind mounts to work!

If your system doesn't support FUSE, a classical trick to achieve the same effect is to run an NFS server, make it export the files you want to expose (allowing access to localhost) and mount them on the same machine. This has a significant overhead in terms of memory and performance, so bind mounts have a definite advantage where available (which is on most Unix variants thanks to FUSE).

Use cases

Read-only view

It can be useful to create a read-only view of a filesystem, either for security reasons or just as a layer of safety to ensure that you won't accidentally modify it.
With bindfs:

bindfs -r /some/where /mnt/readonly

With Linux, the simple way:

mount --bind /some/where /mnt/readonly
mount -o remount,ro,bind /mnt/readonly

This leaves a short interval of time during which /mnt/readonly is read-write. If this is a security concern, first create the bind mount in a directory that only root can access, make it read-only, then move it to a public mount point. In the snippet below, note that it's important that /root/private (the directory above the mount point) is private; the original permissions on /root/private/mnt are irrelevant since they are hidden behind the mount point.

mkdir -p /root/private/mnt
chmod 700 /root/private
mount --bind /some/where /root/private/mnt
mount -o remount,ro,bind /root/private/mnt
mount --move /root/private/mnt /mnt/readonly

Remapping users and groups

Filesystems record users and groups by their numerical ID. Sometimes you end up with multiple systems which assign different user IDs to the same person. This is not a problem with network access, but it makes user IDs meaningless when you carry data from one system to another on a disk. Suppose that you have a disk created with a multi-user filesystem (e.g. ext4, btrfs, zfs, UFS, …) on a system where Alice has user ID 1000 and Bob has user ID 1001, and you want to make that disk accessible on a system where Alice has user ID 1001 and Bob has user ID 1000. If you mount the disk directly, Alice's files will appear as owned by Bob (because the user ID is 1001) and Bob's files will appear as owned by Alice (because the user ID is 1000).

You can use bindfs to remap user IDs. First mount the disk partition in a private directory, where only root can access it. Then create a bindfs view in a public area, with user ID and group ID remapping that swaps Alice's and Bob's user IDs and group IDs.

mkdir -p /root/private/alice_disk /media/alice_disk
chmod 700 /root/private
mount /dev/sdb1 /root/private/alice_disk
bindfs --map=1000/1001:1001/1000:@1000/1001:@1001/1000 /root/private/alice_disk /media/alice_disk

See How does one permissibly access files on non-booted system's user's home folder? and mount --bind other user as myself for other examples.

Mounting in a jail or container

A chroot jail or container runs a process in a subtree of the system's directory tree. This can be useful to run a program with restricted access (e.g. run a network server with access to only its own files and the files that it serves, but not to other data stored on the same computer). A limitation of chroot is that the program is confined to one subtree: it can't access independent subtrees. Bind mounts allow grafting other subtrees onto that main tree. This makes them fundamental to most practical usage of containers under Linux.

For example, suppose that a machine runs a service /usr/sbin/somethingd which should only have access to data under /var/lib/something. The smallest directory tree that contains both of these files is the root. How can the service be confined? One possibility is to make hard links to all the files that the service needs (at least /usr/sbin/somethingd and several shared libraries) under /var/lib/something. But this is cumbersome (the hard links need to be updated whenever a file is upgraded), and doesn't work if /var/lib/something and /usr are on different filesystems.
A better solution is to create an ad hoc root and populate it using bind mounts:

mkdir /run/something
cd /run/something
mkdir -p etc/something lib usr/lib usr/sbin var/lib/something
mount --bind /etc/something etc/something
mount --bind /lib lib
mount --bind /usr/lib usr/lib
mount --bind /usr/sbin usr/sbin
mount --bind /var/lib/something var/lib/something
mount -o remount,ro,bind etc/something
mount -o remount,ro,bind lib
mount -o remount,ro,bind usr/lib
mount -o remount,ro,bind usr/sbin
chroot . /usr/sbin/somethingd &

Linux's mount namespaces generalize chroots. Bind mounts are how namespaces can be populated in flexible ways. See Making a process read a different file for the same filename for an example.

Running a different distribution

Another use of chroots is to install a different distribution in a directory and run programs from it, even when they require files at hard-coded paths that are not present or have different content on the base system. This can be useful, for example, to install a 32-bit distribution on a 64-bit system that doesn't support mixed packages, to install older releases of a distribution or other distributions to test compatibility, to install a newer release to test the latest features while maintaining a stable base system, etc. See How do I run 32-bit programs on a 64-bit Debian/Ubuntu? for an example on Debian/Ubuntu.

Suppose that you have an installation of your distribution's latest packages under the directory /f/unstable, where you run programs by switching to that directory with chroot /f/unstable. To make home directories available from this installation, bind mount them into the chroot:

mount --bind /home /f/unstable/home

The program schroot does this automatically.

Accessing files hidden behind a mount point

When you mount a filesystem on a directory, this hides what is behind the directory. The files in that directory become inaccessible until the directory is unmounted. Because BSD nullfs and Linux bind mounts operate at a lower level than the mount infrastructure, a nullfs mount or a bind mount of a filesystem exposes directories that were hidden behind submounts in the original.

For example, suppose that you have a tmpfs filesystem mounted at /tmp. If there were files under /tmp when the tmpfs filesystem was mounted, these files may still remain, effectively inaccessible but taking up disk space. Run

mount --bind / /mnt

(Linux) or

mount -t nullfs / /mnt

(FreeBSD) to create a view of the root filesystem at /mnt. The directory /mnt/tmp is the one from the root filesystem.

NFS exports at different paths

Some NFS servers (such as the Linux kernel NFS server before NFSv4) always advertise the actual directory location when they export a directory. That is, when a client requests server:/requested/location, the server serves the tree at the location /requested/location. It is sometimes desirable to allow clients to request /requested/location but actually serve files under /actual/location. If your NFS server doesn't support serving an alternate location, you can create a bind mount for the expected request, e.g.

/requested/location *.localdomain(rw,async)

in /etc/exports and the following in /etc/fstab:

/actual/location /requested/location bind bind

A substitute for symbolic links

Sometimes you'd like to make a symbolic link to make a file /some/where/is/my/file appear under /else/where, but the application that uses file expands symbolic links and rejects /some/where/is/my/file.
A bind mount can work around this: bind-mount /some/where/is/my to /else/where/is/my, and then realpath will report /else/where/is/my/file to be under /else/where, not under /some/where.

Side effects of bind mounts

Recursive directory traversals

If you use bind mounts, you need to take care of applications that traverse the filesystem tree recursively, such as backups and indexing (e.g. to build a locate database).

Usually, bind mounts should be excluded from recursive directory traversals, so that each directory tree is only traversed once, at the original location. With bindfs and nullfs, configure the traversal tool to ignore these filesystem types, if possible. Linux bind mounts cannot be recognized as such: the new location is equivalent to the original. With Linux bind mounts, or with tools that can only exclude paths and not filesystem types, you need to exclude the mount points for the bind mounts.

Traversals that stop at filesystem boundaries (e.g. find -xdev, rsync -x, du -x, …) will automatically stop when they encounter a bindfs or nullfs mount point, because that mount point is a different filesystem. With Linux bind mounts, the situation is a bit more complicated: there is a filesystem boundary only if the bind mount is grafting a different filesystem, not if it is grafting another part of the same filesystem.

Going beyond bind mounts

Bind mounts provide a view of a directory tree at a different location. They expose the same files, possibly with different mount options and (with bindfs) different ownership and permissions. Filesystems that present an altered view of a directory tree are called overlay filesystems or stackable filesystems. There are many other overlay filesystems that perform more advanced transformations. Here are a few common ones. If your desired use case is not covered here, check the repository of FUSE filesystems.

loggedfs — log all filesystem access for debugging or monitoring purposes (configuration file syntax, Is it possible to find out what program or script created a given file?, List the files accessed by a program)

Filter visible files

clamfs — run files through a virus scanner when they are read
filterfs — hide parts of a filesystem
rofs — a read-only view. Similar to bindfs -r, just a little more lightweight.
Union mounts — present multiple filesystems (called branches) under a single directory: if tree1 contains foo and tree2 contains bar then their union view contains both foo and bar. New files are written to a specific branch, or to a branch chosen according to more complex rules. There are several implementations of this concept, including:
  aufs — Linux kernel implementation, but rejected upstream many times
  funionfs — FUSE implementation
  mhddfs — FUSE, write files to a branch based on free space
  overlay — Linux kernel implementation, merged upstream in Linux v3.18
  unionfs-fuse — FUSE, with caching and copy-on-write features

Modify file names and metadata

ciopfs — case-insensitive filenames (can be useful to mount Windows filesystems)
convmvfs — convert filenames between character sets (example)
posixovl — store Unix filenames and other metadata (permissions, ownership, …) on more restricted filesystems such as VFAT (example)

View altered file contents

avfs — for each archive file, present a directory with the content of the archive (example, more examples). There are also many FUSE filesystems that expose specific archives as directories.
fuseflt — run files through a pipeline when reading them, e.g. to recode text files or media files (example)
lzopfs — transparent decompression of compressed files
mp3fs — transcode FLAC files to MP3 when they are read (example)
scriptfs — execute scripts to serve content (a sort of local CGI) (example)

Modify the way content is stored

chironfs — replicate files onto multiple underlying storage (RAID-1 at the directory tree level)
copyfs — keep copies of all versions of the files
encfs — encrypt files
pcachefs — on-disk cache layer for slow remote filesystems
simplecowfs — store changes via the provided view in memory, leaving the original files intact
wayback — keep copies of all versions of the files
{ "score": 11, "source": [ "https://unix.stackexchange.com/questions/198590", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/885/" ] }
198,673
On many *nix systems like OS X and Ubuntu, we can see that the inode of the root directory is 2. So what is inode 1 used for?
Inode 0 is used as a NULL value to indicate that there is no inode. Inode 1 is used to keep track of any bad blocks on the disk; it is essentially a hidden file containing the bad blocks, which are recorded using e2fsck -c. Inode 2 is used by the root directory, and indicates the start of the filesystem's inodes.
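For example, on an ext2/3/4 filesystem you can inspect that inode directly with debugfs (a read-only sketch; replace /dev/sdXN with a real device):

debugfs -R 'stat <1>' /dev/sdXN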
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/198673", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/111987/" ] }
198,678
How should I choose the proper uid (or gid) for system users being created in a custom .rpm package? The user will be used to run a daemon as non-root. I think it's not acceptable to let adduser choose the next ID by omitting --uid, because this could create a conflict with the fixed uids of official packages. Further, this could lead to different IDs across multiple systems, making further administration harder. Are there ranges of unused and unreserved uids that can be used (as long as they're not reused in the local environment)? Is there an algorithm to generate a uid/gid? This question applies to RedHat EL 6+, CentOS 6+ and Fedora. The package's .srpm or .spec should get published.
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/198678", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/29510/" ] }
198,703
I'm trying to run yum update and I'm getting this error:

rpmdb: PANIC: fatal region error detected; run recovery
error: db3 error(-30974) from dbenv->open: DB_RUNRECOVERY: Fatal error, run database recovery
error: cannot open Packages index using db3 - (-30974)
error: cannot open Packages database in /var/lib/rpm
CRITICAL:yum.main:Error: rpmdb open failed

I checked pages like this one, but running yum clean all gives the same error. How can I solve this?
This is how I fixed my problem. You may fix this by cleaning out the rpm database. But first, in order to minimize the risk, make sure you create a backup of the files in /var/lib/rpm/ using cp:

mkdir /root/backups.rpm.mm_dd_yyyy/
cp -avr /var/lib/rpm/ /root/backups.rpm.mm_dd_yyyy/

Then try this to fix the problem:

# rm -f /var/lib/rpm/__db*
# db_verify /var/lib/rpm/Packages
# rpm --rebuilddb
# yum clean all

And finally verify that the error has gone with the following yum command:

# yum update
{ "score": 8, "source": [ "https://unix.stackexchange.com/questions/198703", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/112017/" ] }
198,739
I have a dash script, and I need to parse $1, which is a string containing two parts separated by ':', such as foo:123. I would like to save foo in $X and 123 in $Y. I thought I could use read:

$ echo "foo:123" | tr ':' ' ' | read X Y

but that does not work (no error given):

$ echo $X

gives an empty line as output. Why does my read construct not work? And how could I achieve my goal (any solution; it does not have to use read)?
In dash, each command in a pipeline runs in a subshell (in zsh and AT&T ksh, the rightmost command in a pipeline doesn't), so the variables X and Y no longer exist when your command is done. Instead, you can simply use Parameter Expansion; try:

$ set -- foo:123
$ X=${1%:*}
$ Y=${1#*:}

The set -- foo:123 is only there for this interactive example. Inside your script you don't need it, since $1 is already set.
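If you prefer to keep read, a variant that works in dash is to feed $1 through a here-document, so that read runs in the current shell rather than in a pipeline subshell (a sketch):

IFS=: read -r X Y <<EOF
$1
EOF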
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/198739", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/43007/" ] }
198,756
rsync -avP /home/user/.profile hpux3:/home/user/.profile
bash: rsync: command not found

But if I ssh to the hpux3 machine, rsync is there:

rsync  version 3.1.1  protocol version 31
Copyright (C) 1996-2014 by Andrew Tridgell, Wayne Davison, and others.
Web site: http://rsync.samba.org/
[output truncated]

I have set PATH in $HOME/.profile and $HOME/.bashrc. Should I set it in the /etc/profile file?
Your .profile is only read when you log in interactively. When rsync connects to another machine to execute a command, /etc/profile and ~/.profile are not read. If your login shell is bash, then ~/.bashrc may be read (this is a quirk of bash — ~/.bashrc is read by non-login interactive shells, and in some circumstances by login non-interactive shells). This doesn't apply to all versions of bash though.

The easiest way to make rsync work is probably to pass the --rsync-path option, e.g.

rsync --rsync-path=/home/elbarna/bin/rsync -avP /home/user/.profile hpux3:/home/user/.profile

If you log in over SSH with key-based authentication, you can set the PATH environment variable via your ~/.ssh/authorized_keys. See "sh startup files over ssh" for explanations of how to arrange to load .profile when logging in over SSH with a key.
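A sketch of the authorized_keys approach (the key itself is shortened here; this also requires PermitUserEnvironment yes in the server's sshd_config):

environment="PATH=/home/elbarna/bin:/usr/bin:/bin" ssh-rsa AAAA... user@host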
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/198756", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/80389/" ] }
198,787
If I have an array with 5 elements, for example: [a][b][c][d][e] Using echo ${myarray[4]} I can see what it holds. But what if I didn't know the number of elements in a given array? Is there a way of reading the last element of an unknown length array? i.e. The first element reading from the right to the left for any array? I would like to know how to do this in bash.
As of bash 4.2, you can just use a negative index ${myarray[-1]} to get the last element. You can do the same thing for the second-last, and so on; in Bash:

If the subscript used to reference an element of an indexed array evaluates to a number less than zero, it is interpreted as relative to one greater than the maximum index of the array, so negative indices count back from the end of the array, and an index of -1 refers to the last element.

The same also works for assignment. When it says "expression" it really means an expression; you can write any arithmetic expression there to compute the index, including one that computes using the length of the array ${#myarray[@]} explicitly, like ${myarray[${#myarray[@]} - 1]} for earlier versions.
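A quick illustration:

myarray=(a b c d e)
echo "${myarray[-1]}"                     # e  (bash 4.2 and later)
echo "${myarray[${#myarray[@]} - 1]}"     # e  (works in older bash too)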
{ "score": 8, "source": [ "https://unix.stackexchange.com/questions/198787", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/102813/" ] }
198,794
When I open a terminal window with the GNOME Terminal emulator in the desktop GUI, the shell's TERM environment variable defaults to the value xterm. If I use CTL+ALT+F1 to switch to a console TTY window and echo $TERM, the value is set to linux. My motivation for asking is that inside my ~/.bashrc file a variable is used to determine if a color shell is provided or just good old fashioned monochrome.

# set a fancy prompt (non-color, unless we know we "want" color)
case "$TERM" in
    xterm-color) color_prompt=yes;;
esac

In both the console shell and the GNOME Terminal emulator shell, if I type

export TERM=xterm-color
source ~/.bashrc

both shells change to color mode (something I'd like to have happen always in both). Where do the default TERM values get set, please, and where is the best place to change their defaults, if at all possible? There appears to be nothing in the terminal emulator GUI to select or set the default TERM value. I did consider just adding the line export TERM=xterm-color to the top of my ~/.bashrc file, but my gut instinct tells me this is not the best solution and my Google searches haven't yet led me to a good answer. I'm running Ubuntu 15.04 Desktop Edition (Debian based).
In lots of places, depending.

On virtual terminals and real terminals, the TERM environment variable is set by the program that chains to login, and is inherited all of the way along to the interactive shell that executes once one has logged on. Where, precisely, this happens varies from system to system, and according to the kind of terminal.

real terminals

Real, serial, terminals can vary in type, according to what's at the other end of the wire. So conventionally the getty program is invoked with an argument that specifies the terminal type, or is passed the TERM value from a service manager's service configuration data.

On van Smoorenburg init systems, one can see this in /etc/inittab entries, which will read something along the lines of

S0:3:respawn:/sbin/agetty ttyS0 9600 vt100-nav

The last argument to agetty in that line, vt100-nav, is the terminal type set for /dev/ttyS0. So /etc/inittab is where to change the terminal type for real terminals on such systems.

On systemd systems, one used to be able to see this in the /usr/lib/systemd/system/serial-getty@.service unit file (/lib/systemd/system/serial-getty@.service on un-merged systems), which used to read

Environment=TERM=vt100

setting the TERM variable in the environment passed to agetty.

On the BSDs, init takes the terminal type from the third field of each terminal's entry in the /etc/ttys database, and sets TERM from that in the environment that it executes getty with. So /etc/ttys is where one changes the terminal type for real terminals on the BSDs.

systemd's variability

The serial-getty@.service unit file, or drop-in files that apply thereto, is where to change the terminal type for real terminals on systemd systems. Note that such a change applies to all terminal login services that employ this service unit template. (To change it for only individual terminals, one has to manually instantiate the template, or add drop-ins that only apply to instantiations.)

systemd has had at least four mechanisms during its lifetime for picking up the value of the TERM environment variable. At the time of first writing this answer, as can be seen, there was an Environment=TERM=something line in the template service unit files. At other times, the types linux and vt102 were hard-wired into the getty and serial-getty service unit files respectively. More recently, the environment variable has been inherited from process #1, which has set it in various ways.

As of 2020, the way that systemd decides what terminal type to specify in a service's TERM environment variable is quite complex, and not documented at all. The way to change it remains a drop-in configuration file with Environment=TERM=something. But where the default value originates from is quite variable. Subject to some fairly complex to explain rules that involve the TTYPath= settings of individual service units, it can be one of three values: a hardwired linux, a hardwired vt220 (no longer vt102), or the value of the TERM environment variable that process #1 inherited, usually from the kernel/bootstrap loader.

(Ironically, the getttyent() mechanism still exists in the GNU C library, and systemd could have re-used the /etc/ttys mechanism.)

kernel virtual terminals

Kernel virtual terminals, as you have noted, have a fixed type. Unlike NetBSD, which can vary the kernel virtual terminal type on the fly, Linux and the other BSDs have a single fixed terminal type implemented in the kernel's built-in terminal emulation program.
On Linux, that type matches linux from the terminfo database. (FreeBSD's kernel terminal emulation since version 9 has been teken. Prior to version 9 it was cons25; OpenBSD's is pccon.)

On systems using mingetty or vc-get-tty (from the nosh package), the program "knows" that it can only be talking to a virtual terminal, and they hardwire the "known" virtual terminal types appropriate to the operating system that the program was compiled for.

On systemd systems, one used to be able to see this in the /usr/lib/systemd/system/getty@.service unit file (/lib/systemd/system/getty@.service on un-merged systems), which read

Environment=TERM=linux

setting the TERM variable in the environment passed to agetty.

For kernel virtual terminals, one does not change the terminal type. The terminal emulator program in the kernel doesn't change, after all. It is incorrect to change the type. In particular, this will screw up cursor/editing key CSI sequence recognition. The linux CSI sequences sent by the Linux kernel terminal emulator are different to the xterm or vt100 CSI sequences sent by GUI terminal emulator programs in DEC VT mode. (In fact, they are highly idiosyncratic and non-standard, and different both to all real terminals that I know of, and to pretty much all other software terminal emulators apart from the one built into Linux.)

GUI terminal emulators

Your GUI terminal emulator is one of many programs, from the SSH dæmon to screen, that uses pseudo-terminals. What the terminal type is depends on what terminal emulator program is running on the master side of the pseudo-terminal, and how it is configured. Most GUI terminal emulators will start the program on the slave side with a TERM variable whose value matches their terminal emulation on the master side. Programs like the SSH server will attempt to "pass through" the terminal type that is on the client end of the connection. Usually there is some menu or configuration option to choose amongst terminal emulations.

The gripping hand

The right way to detect colour capability is not to hardwire a list of terminal types in your script. There are an awful lot of terminal types that support colour. The right way is to look at what termcap/terminfo says about your terminal type.

colour=0
if tput Co > /dev/null 2>&1
then
    test "`tput Co`" -gt 2 && colour=1
elif tput colors > /dev/null 2>&1
then
    test "`tput colors`" -gt 2 && colour=1
fi

Further reading

Jonathan de Boyne Pollard (2018). TERM. nosh Guide. Softwares.
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/198794", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/112077/" ] }
198,810
I am not able to open any https URLs using wget or curl:

$ wget https://www.python.org
--2015-04-27 17:17:33--  https://www.python.org/
Resolving www.python.org (www.python.org)... 103.245.222.223
Connecting to www.python.org (www.python.org)|103.245.222.223|:443... connected.
ERROR: cannot verify www.python.org's certificate, issued by "/C=US/O=DigiCert Inc/OU=www.digicert.com/CN=DigiCert SHA2 Extended Validation Server CA":
  Unable to locally verify the issuer's authority.
To connect to www.python.org insecurely, use '--no-check-certificate'.

$ curl https://www.python.org
curl: (60) SSL certificate problem: unable to get local issuer certificate
More details here: http://curl.haxx.se/docs/sslcerts.html

curl performs SSL certificate verification by default, using a "bundle" of Certificate Authority (CA) public keys (CA certs). If the default bundle file isn't adequate, you can specify an alternate file using the --cacert option.
If this HTTPS server uses a certificate signed by a CA represented in the bundle, the certificate verification probably failed due to a problem with the certificate (it might be expired, or the name might not match the domain name in the URL).
If you'd like to turn off curl's verification of the certificate, use the -k (or --insecure) option.

This is using wget 1.12 and curl 7.30.0 on CentOS 5.5. It sounds like something is wrong with my local certificate store, but I have no idea how to proceed from here. Any ideas?

Update: After upgrading the openssl package from 0.9.8e-12.el5_4.6 to 0.9.8e-33.el5_11, there is now a different error:

$ wget https://pypi.python.org
--2015-04-28 10:27:35--  https://pypi.python.org/
Resolving pypi.python.org (pypi.python.org)... 103.245.222.223
Connecting to pypi.python.org (pypi.python.org)|103.245.222.223|:443... connected.
ERROR: certificate common name "www.python.org" doesn't match requested host name "pypi.python.org".
To connect to pypi.python.org insecurely, use '--no-check-certificate'.
I was having a similar error with https://excellmedia.dl.sourceforge.net/project/astyle/astyle/astyle%203.0.1/astyle_3.0.1_linux.tar.gz on a docker image (circleci/jdk8:0.1.1). In my case, upgrading ca-certificates solved the issue:

sudo apt-get install ca-certificates
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/198810", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/57331/" ] }
198,821
I want to install Uberstudent (based on Ubuntu 14.04) on a Samsung Chromebook 2. I'm new to looking at specs, so I don't know if it will work. The Uberstudent documentation states: "2 GB of memory or more is strongly recommended. 4GB of memory is optimal." Is this referring to RAM or hard drive/SSD space? What kind of problems will appear if the computer only has 2 GB of RAM (if it is referring to RAM)?
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/198821", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/112094/" ] }
198,832
I am running Linux version 3.4.8 on an at91sam9g20. I want to take a large record and split it into multiple files. I have tried a number of methods, but none appear to function correctly, e.g.

tar -c -M --tape-length=102400 --file=disk1.tar mytest.tar.gz
tar: invalid option -- M
BusyBox v1.20.2 (2012-09-24 16:21:25 CEST) multi-call binary.

Usage: tar -[cxthvO] [-X FILE] [-T FILE] [-f TARFILE] [-C DIR] [FILE]...

Create, extract, or list files from a tar file

Operation:
	c	Create
	x	Extract
	t	List
	f	Name of TARFILE ('-' for stdin/out)
	C	Change to DIR before operation
	v	Verbose
	O	Extract to stdout
	h	Follow symlinks
	exclude	File to exclude
	X	File with names to exclude
	T	File with names to include

It appears busybox has a slimmed down version of tar that does not allow certain parameters. When I try split, I get the following:

/:# split
-sh: split: not found

Is there any method of splitting a large file to multiple files using the busybox command set? Currently defined functions:

[, [[, addgroup, adduser, ar, arping, ash, awk, basename, blkid, bunzip2, bzcat, cat, catv, chattr, chgrp, chmod, chown, chroot, chrt, chvt, cksum, clear, cmp, cp, cpio, crond, crontab, cut, date, dc, dd, deallocvt, delgroup, deluser, devmem, df, diff, dirname, dmesg, dnsd, dnsdomainname, dos2unix, du, dumpkmap, echo, egrep, eject, env, ether-wake, expr, false, fdflush, fdformat, fgrep, find, fold, free, freeramdisk, fsck, fuser, getopt, getty, grep, gunzip, gzip, halt, hdparm, head, hexdump, hostid, hostname, hwclock, id, ifconfig, ifdown, ifup, inetd, init, insmod, install, ip, ipaddr, ipcrm, ipcs, iplink, iproute, iprule, iptunnel, kill, killall, killall5, klogd, last, less, linux32, linux64, linuxrc, ln, loadfont, loadkmap, logger, login, logname, losetup, ls, lsattr, lsmod, lsof, lspci, lsusb, lzcat, lzma, makedevs, md5sum, mdev, mesg, microcom, mkdir, mkfifo, mknod, mkswap, mktemp, modprobe, more, mount, mountpoint, mt, mv, nameif, netstat, nice, nohup, nslookup, od, openvt, passwd, patch, pidof, ping, pipe_progress, pivot_root, poweroff, printenv, printf, ps, pwd, rdate, readlink, readprofile, realpath, reboot, renice, reset, resize, rm, rmdir, rmmod, route, run-parts, runlevel, sed, seq, setarch, setconsole, setkeycodes, setlogcons, setserial, setsid, sh, sha1sum, sha256sum, sha512sum, sleep, sort, start-stop-daemon, strings, stty, su, sulogin, swapoff, swapon, switch_root, sync, sysctl, syslogd, tail, tar, tee, telnet, test, tftp, time, top, touch, tr, traceroute, true, tty, udhcpc, umount, uname, uniq, unix2dos, unlzma, unxz, unzip, uptime, usleep, uudecode, uuencode, vconfig, vi, vlock, watch, watchdog, wc, wget, which, who, whoami, xargs, xz, xzcat, yes, zcat
You can use busybox's dd applet with its bs, count and skip arguments to split a large file into chunks.

The dd section of the busybox manpage:

dd [if=FILE] [of=FILE] [ibs=N] [obs=N] [bs=N] [count=N] [skip=N] [seek=N] [conv=notrunc|noerror|sync|fsync]

Copy a file with converting and formatting

	if=FILE		Read from FILE instead of stdin
	of=FILE		Write to FILE instead of stdout
	bs=N		Read and write N bytes at a time
	ibs=N		Read N bytes at a time
	obs=N		Write N bytes at a time
	count=N		Copy only N input blocks
	skip=N		Skip N input blocks
	seek=N		Skip N output blocks
	conv=notrunc	Don't truncate output file
	conv=noerror	Continue after read errors
	conv=sync	Pad blocks with zeros
	conv=fsync	Physically write data out before finishing

So basically you would do something like this:

$ dd if=bigfile of=part.0 bs=1024 count=1024 skip=0
$ dd if=bigfile of=part.1 bs=1024 count=1024 skip=1024
$ dd if=bigfile of=part.2 bs=1024 count=1024 skip=2048

For each part.X file, dd writes count * bs bytes, ignoring the first skip input blocks of the input file.

A very basic one-liner (combining seq, xargs and the dd applet from busybox) could look like this:

seq 0 19 | xargs -n1 sh -c 'dd if=bigfile of=part.$0 bs=1024 count=1024 skip=$(expr $0 \* 1024)'

producing 20 part.X files of at most 1048576 bytes in size.

Example splitting bigfile:

$ ls -l
total 2940
-rw-rw-r-- 1 user user 3000000 Apr 27 13:21 bigfile
$ seq 0 20 | xargs -n1 sh -c 'dd if=bigfile of=part.$0 bs=1024 count=1024 skip=$(expr $0 \* 1024)'
1024+0 records in
1024+0 records out
1024+0 records in
1024+0 records out
881+1 records in
881+1 records out
0+0 records in
0+0 records out
[...]
$ ls -l
total 5968
-rw-rw-r-- 1 user user 3000000 Apr 27 13:21 bigfile
-rw-rw-r-- 1 user user 1048576 Apr 27 13:43 part.0
-rw-rw-r-- 1 user user 1048576 Apr 27 13:43 part.1
-rw-rw-r-- 1 user user       0 Apr 27 13:43 part.10
-rw-rw-r-- 1 user user       0 Apr 27 13:43 part.11
-rw-rw-r-- 1 user user       0 Apr 27 13:43 part.12
-rw-rw-r-- 1 user user       0 Apr 27 13:43 part.13
-rw-rw-r-- 1 user user       0 Apr 27 13:43 part.14
-rw-rw-r-- 1 user user       0 Apr 27 13:43 part.15
-rw-rw-r-- 1 user user       0 Apr 27 13:43 part.16
-rw-rw-r-- 1 user user       0 Apr 27 13:43 part.17
-rw-rw-r-- 1 user user       0 Apr 27 13:43 part.18
-rw-rw-r-- 1 user user       0 Apr 27 13:43 part.19
-rw-rw-r-- 1 user user  902848 Apr 27 13:43 part.2
-rw-rw-r-- 1 user user       0 Apr 27 13:43 part.3
-rw-rw-r-- 1 user user       0 Apr 27 13:43 part.4
-rw-rw-r-- 1 user user       0 Apr 27 13:43 part.5
-rw-rw-r-- 1 user user       0 Apr 27 13:43 part.6
-rw-rw-r-- 1 user user       0 Apr 27 13:43 part.7
-rw-rw-r-- 1 user user       0 Apr 27 13:43 part.8
-rw-rw-r-- 1 user user       0 Apr 27 13:43 part.9

Restoring can easily be done with cat (or dd again with the seek parameter). 0-byte files can be skipped:

$ cat part.0 part.1 part.2 > bigfile.res
$ diff bigfile bigfile.res

Depending on your needs, instead of using seq with a fixed range you could calculate the specific size of your bigfile and do all of this in a shell script.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/198832", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/105845/" ] }
198,838
I have written a loop to iterate over all .out C binary files and copy their outputs into a text file (the output of every binary is just a one-line hash value, so one line of output per program). Here is my code so far:

for j in {1..10}
do
    ./gcc-$j.out >> gcc-result.txt
done

Unfortunately, some binary files have some unknown issues and cannot be correctly executed (they get stuck and the loop cannot proceed to the next program). I am not going to fix that C code, but I want my bash script to automatically jump to executing the next program after a given timeout (say 10 secs), and also write "0" to gcc-result.txt. Thanks in advance if you have an idea how to solve this issue.
You could use the timeout command:

if timeout 10 ping google.fr > /dev/null
then
    echo "process successful"
else
    echo "process killed"
fi

shows process killed, and

if timeout 10 ls /usr/bin | wc -l > /dev/null
then
    echo "process successful"
else
    echo "process killed"
fi

shows process successful.

Based on this, you could run each command using such an if; then; else; fi, redirect the standard output to a temporary file, and copy that temporary file to the target file in the successful case, while generating the target file content in the failure case. "How can I kill a process and be sure the PID hasn't been reused" could be helpful in case you don't have timeout.
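Applied to the loop in the question, a sketch along those lines (the temporary file name is arbitrary):

for j in {1..10}
do
    if timeout 10 ./gcc-$j.out > /tmp/gcc-out.$$
    then
        cat /tmp/gcc-out.$$ >> gcc-result.txt
    else
        echo "0" >> gcc-result.txt
    fi
done
rm -f /tmp/gcc-out.$$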
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/198838", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/29124/" ] }
198,842
I would like to dual boot Arch Linux with my Ubuntu. I would like some hints on how to do it without messing up my partitions too much. Presently, my partition scheme goes like this (Ubuntu only):

sda
  sda1 [boot loader]
  sda2 [root]
  sda3 [swap]
  sda4 [home]

If I were to install Arch only, I would have the same partition scheme. Now how should I prepare my partitions in order to successfully dual-boot? I have a suggestion, though it may be a naive one:

sda
  sda1 [bootloader]   -> Will it detect Arch?
  sda2 [root_ubuntu]
  sda3 [swap]
  sda4 [home_ubuntu]  -> I have a lot of space, I could just resize and divide this partition.
  sda5 [root_arch]
  sda6 [home_arch]

Will the above scheme be a workable implementation? In any case, what do you suggest I do? Which files (config, etc.) will I have to create or modify?
This scheme is certainly workable. You are right, the best solution is to transform your current layout as little as possible. If you don't ask Arch Linux to install its GRUB bootloader, you'll have to run grub-mkconfig -o /boot/grub/grub.cfg in Ubuntu (if you have os-prober installed, it will find your Arch installation and update all the config files automatically). You may find the information you need about GRUB in the Arch Linux wiki: https://wiki.archlinux.org/index.php/GRUB (almost all the instructions are applicable to current versions of Ubuntu).
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/198842", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/86588/" ] }
198,846
Is there a way to make a linux system react or do something when pinged from another device, besides just sending a reply (or not)?
You can use Linux netfilter to intercept incoming pings and send them to userspace. This will do it:

iptables -I INPUT -p icmp --icmp-type echo-request -j NFLOG

You can add any kind of iptables criteria like -s (source) in order to intercept only some pings and not others. Note that this does not cancel the kernel's handling of the original ping. It only sends a copy to userspace. If you plan to respond to the ping from userspace then you will want to prevent the kernel from also handling it so that there are not 2 replies. To implement that, just follow up the above with another iptables rule to drop the original:

iptables -I INPUT -p icmp --icmp-type echo-request -j DROP

In order to receive the ping from userspace, you have to write some C code. The libnetfilter_log library is what you need. Here is some sample code which I wrote a few years ago that does exactly that:

#include <libnetfilter_log/libnetfilter_log.h>

[...]

    struct nflog_handle *h;
    struct nflog_g_handle *qh;
    ssize_t rv;
    char buf[4096];

    h = nflog_open();
    if (!h) {
        fprintf(stderr, "error during nflog_open()\n");
        return 1;
    }
    if (nflog_unbind_pf(h, AF_INET) < 0) {
        fprintf(stderr, "error nflog_unbind_pf()\n");
        return 1;
    }
    if (nflog_bind_pf(h, AF_INET) < 0) {
        fprintf(stderr, "error during nflog_bind_pf()\n");
        return 1;
    }
    qh = nflog_bind_group(h, 0);
    if (!qh) {
        fprintf(stderr, "no handle for group 0\n");
        return 1;
    }
    if (nflog_set_mode(qh, NFULNL_COPY_PACKET, 0xffff) < 0) {
        fprintf(stderr, "can't set packet copy mode\n");
        return 1;
    }

    nflog_callback_register(qh, &callback, NULL);
    fd = nflog_fd(h);
    while ((rv = recv(fd, buf, sizeof(buf), 0)) && rv >= 0) {
        nflog_handle_packet(h, buf, rv);
    }

callback is a function that gets called for each incoming packet. It is defined as something like this:

static int
callback(struct nflog_g_handle *gh, struct nfgenmsg *nfmsg,
         struct nflog_data *ldata, void *data)
{
    payload_len = nflog_get_payload(ldata, (char **)(&ip));
    ....
    /* now "ip" points to the packet's IP header */
    /* ...do something with it... */
    ....
}
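Assuming the completed program is saved as ping_watch.c (a name chosen here) and the libnetfilter_log development headers are installed, it would be built with something like:

gcc -o ping_watch ping_watch.c -lnetfilter_log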
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/198846", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/112126/" ] }
198,849
Sometimes, I'd like to know the name of a glyph. For example, if I see − , I may want to know if it's a hyphen - , an en-dash – , an em-dash — , or a minus symbol − . Is there a way that I can copy-paste this into a terminal to see what it is? I am unsure if my system knows the common names to these glyphs, but there is certainly some (partial) information available, such as in /usr/share/X11/locale/en_US.UTF-8/Compose . For example, <Multi_key> <exclam> <question> : "‽" U203D # INTERROBANG Another example glyph: .
Try the unicode utility:

$ unicode ‽
U+203D INTERROBANG
UTF-8: e2 80 bd  UTF-16BE: 203d  Decimal: &#8253;
‽
Category: Po (Punctuation, Other)
Bidi: ON (Other Neutrals)

Or the uconv utility from the ICU package:

$ printf %s ‽ | uconv -x any-name
\N{INTERROBANG}

You can also get information via the recode utility:

$ printf %s ‽ | recode ..dump
UCS2   Mne   Description
203D   point exclarrogatif

Or with Perl:

$ printf %s ‽ | perl -CLS -Mcharnames=:full -lne 'print charnames::viacode(ord) for /./g'
INTERROBANG

Note that those give information on the characters that make up that glyph, not on the glyph as a whole. For instance, for é (e with combining acute accent):

$ printf é | uconv -x any-name
\N{LATIN SMALL LETTER E}\N{COMBINING ACUTE ACCENT}

Different from the standalone é character:

$ printf é | uconv -x any-name
\N{LATIN SMALL LETTER E WITH ACUTE}

You can ask uconv to recombine those (for those that have a combined form):

$ printf 'e\u0301b\u0301' | uconv -x '::nfc;::name;'
\N{LATIN SMALL LETTER E WITH ACUTE}\N{LATIN SMALL LETTER B}\N{COMBINING ACUTE ACCENT}

(é has a combined form, but not b́.)
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/198849", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/18887/" ] }
198,887
In Fedora, instead of typing ls -l, you can type ll. What causes this behavior? I've installed a new Debian system that does not appear to have this functionality. They're both running Bash. I can define an alias with alias ll='ls -l', but I'm wondering what the root difference is between these two bashes.
From the Fedora /etc/bashrc:

for i in /etc/profile.d/*.sh; do
    if [ -r "$i" ]; then
        if [ "$PS1" ]; then
            . "$i"
        else
            . "$i" >/dev/null
        fi
    fi
done

And in /etc/profile.d are files in which aliases are defined. ll is defined in /etc/profile.d/colorls.sh
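On Debian, a sketch of the same approach is to drop your own file into /etc/profile.d (the file name is arbitrary; /etc/profile typically sources these files for login shells, so for non-login interactive shells you may also want it sourced from /etc/bash.bashrc):

# /etc/profile.d/local-aliases.sh
alias ll='ls -l'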
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/198887", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/34796/" ] }
198,899
Is there a tool (or diff option combination) which would produce output similar to git diff --stat? I have two kernel sources and I would like to get a general idea of what was changed between them. Displaying all the changed lines is overkill; I would like a simple

file_name1 <number of changes>
file_name2 <number of changes>
...

summary.
Yes, use Tom Dickey's diffstat : diff -ur dir1 dir2 | diffstat You can summarise any (well, most ) diffs / patches with it, not just directory diffs.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/198899", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/112158/" ] }
198,925
[username@notebook ~]$ cat foo.sh #!/bin/bashecho "$0"[username@notebook ~]$ ./foo.sh./foo.sh[username@notebook ~]$ Question : How can I output the "foo.sh"? No matter how was it executed.
Use basename : #!/bin/bashbasename -- "$0" If you want to assign it to a variable, you'd do: my_name=$(basename -- "$0")
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/198925", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/105588/" ] }
198,938
Which file in /proc gets read by the kernel during the boot up process? This was a question on my LPIC 101 test that I think I might have answered wrong. I searched on google and some other places but wasn't able to find an answer. Hoping one of you could provide. Thanks!
My question is, which file in /proc gets read by the kernel during the boot up process? This was a question on my LPIC 101 test... Sounds like a trick question. The files in /proc aren't real files on disk (this is why they have a size of 0) and the nodes don't exist until the kernel mounts a procfs file system there and populates it. Procfs and sysfs files are kernel interfaces. When you read a file in /proc , you are asking the kernel for information and it will supply it. That information is not stored in that file -- nothing is. When you write to a file in /proc , you are sending the kernel information, but again, the information will not be stored in that file. This is possible because the kernel is the gatekeeper to file access generally. All file access involves system calls, i.e., they must pass through the kernel. So I would say the answer here is that it does not read any files in /proc at boot or at any other time. This would be like dialing your own phone number.
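A quick way to convince yourself of this (a small illustrative sketch; the exact output will differ on your machine):
ls -l /proc/cpuinfo    # the file is listed with a size of 0 bytes
wc -c < /proc/cpuinfo  # yet reading it returns data, generated by the kernel at the moment you read
The content is not stored anywhere; the kernel produces it on each read.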
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/198938", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/95779/" ] }
198,950
The Linux Programming Interface states: Each device driver registers its association with a specific major device ID, and this association provides the connection between the device special file and the device driver. Is it possible to obtain the list of those associations?
Documentation/admin-guide/devices.txt in the kernel source code documents the allocation process and lists all the allocated device numbers. sd gets a whole bunch of major device numbers because of the large number of devices it can handle: major 8 covers /dev/sda to /dev/sdp , major 65 covers /dev/sdq to /dev/sdaf , 66 /dev/sdag to /dev/sdav and so on all the way to 135 for /dev/sdig to /dev/sdiv (for a total of 256 disk devices).
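If you also want to see what is actually registered on a running system (as opposed to the documented allocations), a couple of illustrative commands:
cat /proc/devices         # major numbers currently registered, listed per driver, for character and block devices
ls -l /dev/sda /dev/sda1  # for device nodes, ls prints "major, minor" (e.g. 8, 0) where a size would normally appear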
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/198950", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/111198/" ] }
198,952
This is a very simple script #!/usr/local/bin/bashset -eif [ "$#" -lt 1 ]thenecho "Please insert at least one argument"exitelseecho -e "\c"fiif [ -h "$1" ]then echo "$1 is a symbolic link"else echo "$1 in not a symbolic link"fi My question is: how to change the script to recognize more than 1 argument?I have 4 files, and I want the script to return $1 is a symbolic link$2 is not a symbolic link$3 is not a symbolic link etc How could I do this?
Using the same structure as your original script, you just need to iterate over the $@ array (that's the list of arguments given in the command line): #!/usr/local/bin/bashset -eif [ "$#" -lt 1 ]thenecho "Please insert at least one argument"exitelseecho -e "\c"fifor file in "$@"do if [ -h "$file" ] then echo "$file is a symbolic link" else echo "$file is not a symbolic link" fidone A simplified version of the same thing would be: #!/usr/bin/env bash[ "$#" -lt 1 ] && printf "Please give at least one argument\n" && exit for file do [ -h "$file" ] && printf "%s is a symbolic link\n" "$file" || printf "%s is not a symbolic link\n" "$file"done
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/198952", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/80389/" ] }
198,971
I am trying to find and delete all .swp files in my current directory. I was using the below command: rm -i `find . | grep .swp$` But every time it gives me this error: rm: missing operandTry `rm --help' for more information. Is there anything wrong with what I am doing?
The error message you received probably indicates that no file matched the name pattern .swp$ . A generally safer way to do what you wrote (because it will handle any file name): find . -name '*.swp' -print0 | xargs -0 rm -i -- In the snippet above, I used -print0 so that find separates records with null characters; the default is newlines, which unfortunately is valid within a filename. With -print0 , the command works with any filenames, including any that embed newlines. Likewise, xargs -0 processes its input as null-separated records.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/198971", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/22434/" ] }
198,977
After installing the Dropbox DEB package from their site and starting the Dropbox daemon, the tray icon for Dropbox doesn't show up in the tray. I have verified that Dropbox is, in fact, running, but the icon still doesn't show up. How can I get the tray icon working in Elementary OS Luna/Freya?
As of 2015/04/27, the Dropbox daemon looks for a couple of environment variables on startup to try and correctly display the tray icon. Since these environment variables aren't set by Elementary OS, Dropbox just gives up and doesn't try to display a tray icon. To test this theory, stop the Dropbox daemon like so: dropbox stop Next, start it with these two environment variables set: DROPBOX_USE_LIBAPPINDICATOR=1 XDG_CURRENT_DESKTOP=Unity \ dropbox start Hooray, the tray icon is there! To make this change permanent, you'll need to edit the autostart command for the Dropbox daemon. This desktop entry lives at $HOME/.config/autostart/dropbox.desktop . Since "Dropbox knows best™," the start command automatically regenerates this file, overwriting any changes you'd make there. Therefore, copy it to $HOME/.config/autostart/dropbox-better.desktop . Next, create a script somewhere which will start Dropbox properly: #!/bin/bash# stop it if it's runningdropbox stop &>/dev/null# start it properlyDROPBOX_USE_LIBAPPINDICATOR=1 XDG_CURRENT_DESKTOP=Unity \ dropbox start -i Now open the dropbox-better.desktop file in your favorite text editor and modify it to this: [Desktop Entry]Name=Dropbox (Better)GenericName=File SynchronizerComment=Sync your files across computers and to the webExec=/absolute/path/to/start-dropbox.shTerminal=falseType=ApplicationIcon=dropboxCategories=Network;FileTransfer;StartupNotify=false Log out and back in again to test that it's working, and you, like me, will finally have a Dropbox tray icon after something like 18 months without one!
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/198977", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/5614/" ] }
198,996
When I open the Gtk file dialog, there is a box called “Places” on the left-hand side which lists “Search”, “Recently Used”, a bunch of directories, and several things that appear to be volumes. I don't care about any of these entries, but for the most part I don't mind, except for one. One of the volumes is on an external hard disk that spends most of its time spun down. Opening the Gtk file dialog makes this disk wake up (presumably because the application reads the disk size or label and that information isn't in the cache). I want this to stop. etch200808 is the label of a mounted filesystem. I have two 500MB filesystems mounted, one of them is on the external disk that I don't want to spin up. I'm not sure what the 412 GB one is: I have no filesystem anywhere near this size; I do have an LVM physical volume that's the right size. I have no idea why these are displayed and not any other volume of various types on this system. How can I force this volume (or all volume, or all directories) off the “Places” box? Note that this isn't just about not being listed, this is about the mount point not being accessed , so that my disk doesn't spin up just because I wanted to open or save a file from a Gtk application. I'm running Debian wheezy, but I want to know the answer for other distributions and generations as well — if only because this machine will be upgraded to jessie soon.
The GVFS documentation has a file about Controlling What is Shown in the User Interface . In short, you have two ways to do this: If it's in /etc/fstab , add x-gvfs-hide as one of the options (or, for older versions of udisks2, comment=gvfs-hide ). Configure udev to set the $ENV{UDISKS_IGNORE}="1" for the relevant device. For example, here is how I hide logical volumes on my system (which are all things I don't want to mount via the GUI): ENV{DM_VG_NAME}=="Zia", ENV{UDISKS_IGNORE}="1" For a partition on a disk, reasonable things to match on would include $ENV{ID_WWN} or $ENV{ID_SERIAL} along with $ENV{ID_PART_ENTRY_NUMBER} . So, for example: ENV{ID_WWN}=="0x5000c5001c33a889", ENV{ID_PART_ENTRY_NUMBER}=="1", ENV{UDISKS_IGNORE}="1" should match the first partition on one of my disks and set it ignored. ID_FS_UUID would be another possibility. If you're running udisks v. 1 (e.g, in Debian Wheezy), the udev environment variable to set is ENV{UDISKS_PRESENTATION_HIDE}="1" . and it appears from Gilles' testing that the /etc/fstab method does not work reliably. Note that it's possible to be running both v. 1 and v. 2, in which case you'll have to set both.
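For the fstab route mentioned above, a minimal illustrative entry might look like this (the device, mount point and filesystem type are placeholders for your own values — only the x-gvfs-hide option is the relevant part):
UUID=XXXX-XXXX  /media/external  ext4  defaults,noauto,x-gvfs-hide  0  2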
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/198996", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/885/" ] }
199,038
I am executing a tar command to compress files which are present in another directory. I executed following command: tar -czf /backupmnt/abc.tar.gz -C /backupmnt/statusService/ * I want to create abc.tar.gz file in /backupmnt which should include all files in /backupmnt/statusService/ directory, but I am getting below error: tar: components: Cannot stat: No such file or directorytar: Exiting with failure status due to previous errors components is present in current directory from where I am executing the command. Below are the contents of /backupmnt/statusService SS-01:/ # ls /backupmnt/statusServiceMJ.netact.xml.tar.gz gmoTemp_fm.tar.gz mr.properties.tar.gz probe.properties.tar.gz relay_logs.tar.gz ss_logs.tar.gz tomcat_conf.tar.gzesymac_config.txt.tar.gz gmoTemp_pm.tar.gz o2ml.tar.gz probes_logs.tar.gz ss_conf.tar.gz ss_pm.tar.gz tomcat_logs.tar.gz I am not able to get where I am wrong.
* is expanded by the shell before tar gets executed. So, making tar change the directory invalidates the arguments that * expanded into. You can simply tell tar to compress the directory instead: tar -czf /backupmnt/abc.tar.gz -C /backupmnt/statusService/ . The . represents the current directory, which will change when tar changes directories. This will result in hidden files (those beginning with . ) being included in the archive.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/199038", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/108657/" ] }
199,049
$ id userauid=830(usera) gid=799(groupa) groups=799(groupa) I need to extract the group name from the output of id and store it in a variable. In this case it's groupname=groupa
id also accepts parameters, so you don't have to grep for it ( -g to print only the group, and -n to print names instead of ids): $ id -gn useragroupa To save that into a variable, use: groupname=$(id -gn usera)
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/199049", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/104837/" ] }
199,052
When I login on some particular server through SSH (which I do not have admin access to), I get the following error: urxvt-unicode: Unknown terminal type (I also don't want to change my terminal type permanently). It is important because depending on the terminal type I get different colors when logged in. Is it possible to change the terminal type just when logging through SSH?
If you have root access to the remote box, install the package ncurses-term .This will provide the rxvt-256color terminfo entry. As a non-root user, you can also copy over the rxvt terminfo entries to $HOME/.terminfo/r/ on the remote machine, and export TERMINFO=$HOME/.terminfo . ssh <host> 'mkdir -p .terminfo/r'scp /usr/share/terminfo/r/rxvt-unicode-256color <host>:~/.terminfo/r/
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/199052", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/20334/" ] }
199,059
this is not a duplicate of creating a tar archive without including parent directory If I do tar -czf archive directory , directory will be added to the archive. I want to add the files of the folder directory not the directory itself, how? please suggest answer that does not involve cd ing to directory this is not a duplicate of what its showing: I have a directory called somedir with contents as abc , xyz : ls somedirabc xyz I want to make archive that will contain files abc , xyz , not somedir folder Update :if i use the command tar -C /home/username/dir1/dir2 -cvf temp.tar yourarchive (which the answer to the question of which my question is called duplicate of) i get this: what I wanted is tar czf archive.tar.gz -C yourarchive . , I get this ( which is close enough of what I wanted ): what I wanted was is this (directly files, no folder):
Use -C : tar czf archive -C directory . This instructs tar to use directory as its working directory, and archive everything contained in the directory ( . ). Using -C is nearly equivalent to using cd ; archive above is interpreted relative to the current directory when tar starts, then tar changes its working directory to directory before interpreting . . I'm not sure how widespread support for tar -C is, but any Linux distribution should support it.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/199059", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/52733/" ] }
199,072
Gedit 3.14 in Debian 8 has no window manager decoration and the window cannot be resized. Do I need to install any additional package to make it work or has Gedit become unusable outside of the Gnome desktop? I use the window manager Blackbox. Edit: Window resizing works in Openbox. Screenshot: .
Pluma is a Gedit fork without client-side decorations, which means it includes the usual window borders and title bar. # apt-get install pluma Below is a screenshot with the window manager Blackbox.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/199072", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/36718/" ] }
199,081
I am writing a systemd unit file for a Java application and I'd like to control the version of Java used to start it up. My (simplified) service file is [Service]Type=simpleEnvironmentFile=%h/Documents/apps/app/app-%i/app.cfgExecStart=${JAVA_HOME}/bin/java ${JAVA_OPTS} -jar %h/Documents/apps/app/app-%i/myapp.jarSuccessExitStatus=143 When trying to start it up I get an error back Apr 28 12:43:37 rombert systemd[1613]: [/home/robert/.config/systemd/user/[email protected]:7] Executable path is not absolute, ignoring: ${JAVA_HOME}/bin/java ${JAVA_OPTApr 28 12:43:37 rombert systemd[1613]: [email protected] lacks both ExecStart= and ExecStop= setting. Refusing. I know that JAVA_HOME is correctly set ; if I change the ExecStart line to start with /usr/bin/java and then add something like -DsomeOption=${JAVA_HOME} I can see it just fine. The obvious workaround is to create a wrapper script but I feel that it defeats the point of using a service file. How can I set JAVA_HOME for my Java application using a unit file?
From the "Command lines" section in systemd.service(5): Note that the first argument (i.e. the program to execute) may not be a variable. I was going to suggest using the instance specifier %i (you can read more about it in systemd.unit(5)), but (now we're back in systemd.service(5)): the first argument of the command line (i.e. the program to execute) may not include specifiers. I think the best option at this point really is creating a shell script that wraps the execution of the java binary as suggested by Warren Young or you could ExecStart a shell directly like in the example for shell command lines in the "Command Lines" section of systemd.service(5) which has the following example: ExecStart=/bin/sh -c 'dmesg | tac' so you could do (untested): ExecStart=/bin/sh -c '${JAVA_HOME}....'
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/199081", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/2235/" ] }
199,157
I'm currently working on a pretty simple zsh script. What I often do is something like: mv */*.{a,b} . When I run that within a zsh script, it seems to expand differently and fail while it works in interactive mode. % mkdir dir% touch dir/file.a% ls file.als: cannot access file.a: No such file or directory% mv */*.{a,b} .% ls file.afile.a So, this works, but as a script: % mkdir dir% touch dir/file.a% ls file.als: cannot access file.a: No such file or directory% cat script.sh#!/usr/bin/zshmv */*.{a,b} .% ./script.sh./script.sh:2: no matches found: */*.b So, what's different? What am I doing wrong?
Both are wrong with the zsh default option settings. You can easily see what's going on by using echo as the command instead of mv . Interactively, it looks like you have the null_glob option set. According to the zsh documentation that option is not set by default. What happens with that option unset depends on whether another option, nomatch , is set or unset. With nomatch unset ( nonomatch ) you would get this: % mkdir dir% touch dir/file.a% ls file.als: cannot access file.a: No such file or directory% echo */*.{a,b} .dir/file.a */*.b . The expansion happens in 2 steps. First, */*.{a,b} is expanded to 2 words: */*.a and */*.b . Then each word is expanded as a glob pattern. The first expands to dir/file.a and the second expands to itself because it doesn't match anything. All of this means that, if you use mv and not echo , mv ought to try to move 2 files: dir/file.a (fine) and */*.b (no such file). This is what happens by default in most shells, like sh and ksh and bash . The zsh default option settings are that null_glob is unset and nomatch is set. Scripts run with the default option settings (unless you change them in ~/.zshenv or /etc/zshenv , which you really shouldn't). That means that in scripts, you get this: % mkdir dir% touch dir/file.a% ls file.als: cannot access file.a: No such file or directory% cat script.sh#!/usr/bin/zshecho */*.{a,b} .% ./script.sh./script.sh:2: no matches found: */*.b Since */*.b does not match anything, you get an error due to nomatch . If you insert setopt nonomatch in the script before the echo / mv command, you get back to the wrong behaviour that I describe above: it tries to move a file that does not exist. If you insert setopt null_glob in the script before the echo / mv command, you get the behaviour you got in your interactive shell, which is that it works.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/199157", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/112283/" ] }
199,160
The files are .serverauth.##### where ##### is a 5-digit number. I have a handful of these files in my home directory with a broad range of creation dates spanning a couple of years. What are these files from? Is it safe to delete them?
You can remove all of them except the newest one. They are created by the startx script. If X does not shut down gracefully, that file is not removed and stays forever (see that bug ). You can change the relevant line in the /usr/bin/startx file to handle this more cleanly: Search for xserverauthfile= in the script and replace the line with: xserverauthfile=$XAUTHORITY
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/199160", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/44406/" ] }
199,164
All LVM commands give me the error /run/lvm/lvmetad.socket: connect failed: No such file or directory . I Googled this error and found only postings related to Grub and Grub-install. wish to get rid of those errors # pvs /run/lvm/lvmetad.socket: connect failed: No such file or directory WARNING: Failed to connect to lvmetad: No such file or directory. Falling back to internal scanning. /run/lvm/lvmetad.socket: connect failed: No such file or directory /run/lvm/lvmetad.socket: connect failed: No such file or directory /run/lvm/lvmetad.socket: connect failed: No such file or directory /run/lvm/lvmetad.socket: connect failed: No such file or directory /run/lvm/lvmetad.socket: connect failed: No such file or directory /run/lvm/lvmetad.socket: connect failed: No such file or directory /run/lvm/lvmetad.socket: connect failed: No such file or directory /run/lvm/lvmetad.socket: connect failed: No such file or directory /run/lvm/lvmetad.socket: connect failed: No such file or directory /run/lvm/lvmetad.socket: connect failed: No such file or directory /run/lvm/lvmetad.socket: connect failed: No such file or directory /run/lvm/lvmetad.socket: connect failed: No such file or directory /run/lvm/lvmetad.socket: connect failed: No such file or directory /run/lvm/lvmetad.socket: connect failed: No such file or directory /run/lvm/lvmetad.socket: connect failed: No such file or directory /run/lvm/lvmetad.socket: connect failed: No such file or directory /run/lvm/lvmetad.socket: connect failed: No such file or directory /run/lvm/lvmetad.socket: connect failed: No such file or directory /run/lvm/lvmetad.socket: connect failed: No such file or directory /run/lvm/lvmetad.socket: connect failed: No such file or directory /run/lvm/lvmetad.socket: connect failed: No such file or directory /run/lvm/lvmetad.socket: connect failed: No such file or directory /run/lvm/lvmetad.socket: connect failed: No such file or directory /run/lvm/lvmetad.socket: connect failed: No such file or directory PV VG Fmt Attr PSize PFree /dev/vdb1 vdatavg lvm2 a-- 16.00g 6.00g
If you are using lvm and systemd do: systemctl enable lvm2-lvmetad.service systemctl enable lvm2-lvmetad.socket systemctl start lvm2-lvmetad.service systemctl start lvm2-lvmetad.socket BTW this is grub related as well. I think grub gets the kernel parameter root from /run/lvm/lvmetad.socket. I wasn't patient enough to test all this in detail since it ended up working. Please someone correct me if I'm wrong. Edit: This is only relevant for systems using systemd for init. If you are on an older Ubuntu you might be using upstart instead, and on other systems openrc.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/199164", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/26612/" ] }
199,166
I'm using Elementary OS, and just found that I don't have the jar command. I tried to install it through the "typical": apt-get install jar But this doesn't work, since "jar" is not found in the repositories or isn't the name of the package. I've tried to look for it on Google, but jar is such a common word that I've found nothing useful. Could you tell me how to install it? Thank you in advance Update : $ java -versionjava version "1.8.0_40"Java(TM) SE Runtime Environment (build 1.8.0_40-b25)Java HotSpot(TM) Client VM (build 25.40-b25, mixed mode)$ javac -versionjavac 1.8.0_40
jar is part of the JDK. If you installed the JDK correctly, you should have it. As far as I'm concerned, the path to jar is /usr/lib/jvm/java-7-openjdk-amd64/bin/jar . The version and architecture are the main variables. In most cases, the binary should be made available to your shell's PATH through a few symlinks. For instance, on my Ubuntu machine, jar is found at /usr/bin/jar , which is itself a symlink to /etc/alternatives/jar (another symlink). The final destination is /usr/lib/jvm/java-7-openjdk-amd64/bin/jar . It is possible that you may not have these links correctly set up (especially if you don't use the update-alternatives mechanism), which might make your shell unable to find the jar executable. The first step to solve this is to locate it. Have a look at the various paths I've given earlier, and try to locate it. Note: as a last resort, you can use the following find command to have it looked up system-wide: $ find / -type f -name "jar" Once you have found it, make sure the directory in which it lies is within your PATH . For instance, let's assume you don't want to create links. If you were to add the /usr/lib/jvm/java-7-openjdk-amd64/bin directory to your PATH , you'd add the following to your ~/.bashrc file: export PATH="$PATH:/usr/lib/jvm/java-7-openjdk-amd64/bin" After re-sourcing the file, or reopening your terminal, you should be able to run jar . Now, if you don't want to use that trick, and prefer to use a symlink, you could do something like... $ sudo ln -s /usr/lib/jvm/java-7-openjdk-amd64/bin/jar /usr/bin/jar Of course, you'd have to make sure that /usr/bin is within your PATH , or you would just end up with the same problem all over again.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/199166", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/97123/" ] }
199,178
I put the following script in /etc/init.d/rc3.d on my Debian 7 but it doesn't work on my computer #! /bin/sh # . /etc/rc.d/init.d/functions # uncomment/modify for your killproc case "$1" in start) echo "Starting noip2." /usr/local/bin/noip2 ;; stop) echo -n "Shutting down noip2." killproc -TERM /usr/local/bin/noip2 ;; *) echo "Usage: $0 {start|stop}" exit 1 esac exit 0 How can I automatically run noip2 daemon when the machine is booted? Here is the full documetation from the noip2 source folder: This file describes noip2, a second-generation Linux client for the no-ip.com dynamic DNS service. NEW: This code will build and run on Solaris/Intel and BSD also. Edit the Makefile for Solaris and the various BSDs. For BSD users wanting to use a tun interface, see below. Let me know about any other changes needed for noip2 to operate correctly on your non-Linux OS. Mac OS X is a BSD variant. Please read this short file before using noip2. ########################################################################### HOW TO BUILD AN EXECUTABLE FOR YOUR SYSTEM The command make will build a binary of the noip2 client that will run on your system. If you do not have 'make' installed and you have an i686 Linux machine with libc6, a binary for i686 systems is located in the binaries directory called noip2-Linux. Copy that binary to the build directory 'cp binaries/noip2-Linux noip2' The command make install (which must be run as root) will install the various pieces to their appropriate places. This will ask questions and build a configuration data file. See below if you can't become root or can't write in /usr/local/*.########################################################################### HOW TO USE THE CLIENT WITHOUT READING THE REST OF THIS TEXT Usual operation? /usr/local/bin/noip2 -C configure a client /usr/local/bin/noip2 run a client /usr/local/bin/noip2 -S display info about running clients /usr/local/bin/noip2 -D pid toggle the debug state for client pid /usr/local/bin/noip2 -K pid terminate client pid Have more than one internet access device? /usr/local/bin/noip2 -M -c file start additional instances ########################################################################### HOW TO START THE CLIENT The noip2 executable can be run by typing /usr/local/bin/noip2 If you want it to run automatically when the machine is booted, then place the following script in your startup directory. (/etc/init.d/rcX.d or /sbin/init.d/rcX.d or ???) ####################################################### #! /bin/sh # . /etc/rc.d/init.d/functions # uncomment/modify for your killproc case "$1" in start) echo "Starting noip2." /usr/local/bin/noip2 ;; stop) echo -n "Shutting down noip2." killproc -TERM /usr/local/bin/noip2 ;; *) echo "Usage: $0 {start|stop}" exit 1 esac exit 0 ####################################################### Where the 'X' in rcX.d is the value obtained by running the following command grep initdefault /etc/inittab | awk -F: '{print $2}' Killproc can be downloaded from ftp://ftp.suse.com/pub/projects/init Alternatively, you can uncomment the line after #! /bin/sh If you have a recent RedHat version, you may want to use the startup script supplied by another user. It's in this package called redhat.noip.sh It may need some modification for your system. There is a startup script for Debian called debian.noip2.sh. It also has been supplied by another user and is rumored to fail in some situations. Another user has supplied a proceedure to follow for MAc OS X auto startup. 
It's called mac.osx.startup. Mac users may wish to read that file. Here is a script which will kill all running copies of noip2. #!/bin/sh for i in `noip2 -S 2>&1 | grep Process | awk '{print $2}' | tr -d ','` do noip2 -K $i done These four lines can replace 'killproc' and 'stop_daemon' in the other scripts. If you are behind a firewall, you will need to allow port 8245 (TCP) through in both directions. ####################################################################### IMPORTANT!! Please set the permissions correctly on your executable. If you start noip2 using one of the above methods, do the following: chmod 700 /usr/local/bin/noip2 chown root:root /usr/local/bin/noip2 If you start noip2 manually from a non-root account, do the chmod 700 as above but chown the executable to the owner:group of the non-root account, and you will need to substitute your new path if the executable is not in /usr/local/bin. ########################################################################### SAVED STATE Noip2 will save the last IP address that was set at no-ip.com when it ends. This setting will be read back in the next time noip2 is started. The configuration data file must be writable for this to happen! Nothing happens if it isn't, the starting 0.0.0.0 address is left unchanged. If noip2 is started as root it will change to user 'nobody', group 'nobody'. Therefore the file must be writeable by user 'nobody' or group 'nobody' in this case! ########################################################################### BSD USING A TUN DEVICE Recent BSD systems will use getifaddrs() to list ALL interfaces. Set the 'bsd_wth_getifaddrs' define in the Makefile if using a version of BSD which supports getifaddrs() and ignore the rest of this paragraph. Mac OS X users should have a versdion of BSD which supports getifaddrs(). Otherwise set the 'bsd' define. The 'bsd' setting will not list the tun devices in BSD. Therefore a tun device cannot be selected from the menu. If you want to use a tun device you will need to edit the Makefile and change the line ${BINDIR}/${TGT} -C -Y -c /tmp/no-ip2.conf to ${BINDIR}/${TGT} -C -Y -c /tmp/no-ip2.conf -I 'your tun device' COMMAND LINE ARGUMENTS WHEN INVOKING THE CLIENT The client will put itself in the background and run as a daemon. This means if you invoke it multiple times, and supply the multiple-use flag, you will have multiple instances running. If you want the client to run once and exit, supply the '-i IPaddress' argument. The client will behave well if left active all the time even on intermittent dialup connections; it uses very few resources. The actions of the client are controlled by a configuration data file. It is usually located in /usr/local/etc/no-ip2.conf, but may be placed anywhere if the '-c new_location' parameter is passed on the startup line. The configuration data file can be generated with the '-C' parameter. There are some new command line arguments dealing with default values in the configuration data file. They are -F, -Y and -U. The interval between successive testing for a changed IP address is controlled the '-U nn' parameter. The number is minutes, a minimum of 1 is enforced by the client when running on the firewall machine, 5 when running behind a router/firewall. A usual value for clients behind a firewall is 30. One day is 1440, one week is 10080, one month is 40320, 41760, 43200 or 44640. 
One hour is left as an exercise for the reader :-) The configuration builder code will allow selection among the hosts/groups registered at no-ip.com for the selected user. The '-Y' parameter will cause all the hosts/groups to be selected for update. Some sites have multiple connections to the internet. These sites confuse the auto NAT detection. The '-F' parameter will force the non-NAT or "firewall" setting. The client can be invoked with the '-i IPaddress' parameter which will force the setting of that address at no-ip.com. The client will run once and exit. The -I parameter can be used to override the device name in the configuration data file or to force the supplied name into the configuration data file while it is being created. Please use this as a last resort! The '-S' parameter is used to display the data associated with any running copies of noip2. If nothing is running, it will display the contents of the configuration data file that is selected. It will then exit. The '-K process_ID' parameter is used to terminate a running copy of noip2. The process_ID value can be obtained by running noip2 -S. The '-M' parameter will permit multiple running copies of the noip2 client. Each must have it's own configuration file. Up to 4 copies may run simultaneously. All errors and informational messages are stored via the syslog facility. A line indicating a successful address change at no-ip.com is always written to the syslog. The syslog is usually /var/log/messages. If the client has been built with debugging enabled, the usual state, the '-d' parameter will activate the debug output. This will produce a trace of the running program and should help if you are having problems getting the connection to no-ip.com established. All errors, messages and I/O in both directions will be displayed on the stderr instead of syslog. The additional '-D pid' parameter will toggle the debug state of a running noip2 process. This will not change where the output of the process is appearing; if it was going to the syslog, it will still be going to the syslog. One final invocation parameter is '-h'. This displays the help screen as shown below and ends. USAGE: noip2 [ -C [ -F][ -Y][ -U #min]][ -c file] [ -d][ -D pid][ -i addr][ -S][ -M][ -h] Version Linux-2.x.x Options: -C create configuration data -F force NAT off -Y select all hosts/groups -U minutes set update interval -c config_file use alternate data path -d increase debug verbosity -D processID toggle debug flag for PID -i IPaddress use supplied address -I interface use supplied interface -S show configuration data -M permit multiple instances -K processID terminate instance PID -h help (this text) ########################################################################### HOW TO CONFIGURE THE CLIENT The command noip2 -C will create configuration data in the /usr/local/etc directory. It will be stored in a file called no-ip2.conf. If you can't write in /usr/local/*, or are unable to become root on the machine on which you wish to run noip2, you will need to include the '-c config_file_name' on every invocation of the client, including the creation of the datafile. Also, you will probably need to put the executable somewhere you can write to. Change the PREFIX= line in the Makefile to your new path and re-run make install to avoid these problems. 
You will need to re-create the datafile whenever your account or password changes or when you add or delete hosts and/or groups at www.no-ip.com Each invocation of noip2 with '-C' will destroy the previous datafile. Other options that can be used here include '-F' '-Y' -U' You will be asked if you want to run a program/script upon successful update at no-ip.com. If you specify a script, it should start with #!/bin/sh or your shell of choice. If it doesn't, you will get the 'Exec format error' error. The IP address that has just been set successfully will be delivered as the first argument to the script/program. The host/group name will be delivered as the second argument. Some machines have multiple network connections. In this case, you will be prompted to select the device which connects to outside world. The -I flag can be supplied to select an interface which is not shown. Typically, this would be one of the pppx interfaces which do not exist until they are active. The code will prompt for the username/email used as an account identifier at no-ip.com. It will also prompt for the password for that account. The configuration data contains no user-serviceable parts!! IMPORTANT!! Please set the permissions correctly on the configuration data. chmod 600 /usr/local/etc/no-ip2.conf. chown root:root /usr/local/etc/no-ip2.conf. If you start noip2 manually from a non-root account, do the chmod as above but chown the no-ip2.conf file to the owner:group of the non-root account. Make sure the directory is readable! The program will drop root privileges after acquiring the configuration data file. And the sample init script for debian: #! /bin/sh # /etc/init.d/noip2.sh # Supplied by no-ip.com # Modified for Debian GNU/Linux by Eivind L. Rygge <[email protected]> # corrected 1-17-2004 by Alex Docauer <[email protected]> # . /etc/rc.d/init.d/functions # uncomment/modify for your killproc DAEMON=/usr/local/bin/noip2 NAME=noip2 test -x $DAEMON || exit 0 case "$1" in start) echo -n "Starting dynamic address update: " start-stop-daemon --start --exec $DAEMON echo "noip2." ;; stop) echo -n "Shutting down dynamic address update:" start-stop-daemon --stop --oknodo --retry 30 --exec $DAEMON echo "noip2." ;; restart) echo -n "Restarting dynamic address update: " start-stop-daemon --stop --oknodo --retry 30 --exec $DAEMON start-stop-daemon --start --exec $DAEMON echo "noip2." ;; *) echo "Usage: $0 {start|stop|restart|force-reload}" exit 1 esac exit 0
Two steps for you to solve this. Your script ( /etc/init.d/noip2 ) should look like: #! /bin/sh# /etc/init.d/noip2# Supplied by no-ip.com# Modified for Debian GNU/Linux by Eivind L. Rygge <[email protected]># Updated by David Courtney to not use pidfile 130130 for Debian 6.# Updated again by David Courtney to "LSBize" the script for Debian 7.### BEGIN INIT INFO# Provides: noip2# Required-Start: networking# Required-Stop:# Should-Start:# Should-Stop:# Default-Start: 2 3 4 5# Default-Stop: 0 1 6# Short-Description: Start noip2 at boot time# Description: Start noip2 at boot time### END INIT INFO# . /etc/rc.d/init.d/functions # uncomment/modify for your killprocDAEMON=/usr/local/bin/noip2NAME=noip2test -x $DAEMON || exit 0case "$1" in start) echo -n "Starting dynamic address update: " start-stop-daemon --start --exec $DAEMON echo "noip2." ;; stop) echo -n "Shutting down dynamic address update:" start-stop-daemon --stop --oknodo --retry 30 --exec $DAEMON echo "noip2." ;; restart) echo -n "Restarting dynamic address update: " start-stop-daemon --stop --oknodo --retry 30 --exec $DAEMON start-stop-daemon --start --exec $DAEMON echo "noip2." ;; *) echo "Usage: $0 {start|stop|restart}" exit 1esacexit 0 Then make it executable, i.e run # chmod a+x /etc/init.d/noip2# update-rc.d noip2 defaults
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/199178", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/61633/" ] }
199,203
In Vim, if I paste this script: #!/bin/shVAR=1while ((VAR < 10)) do echo "VAR1 is now $VAR" ((VAR = VAR +2)) done echo "finish" I get these strange results: #!/bin/sh#VAR=1#while ((VAR < 10))# do# echo "VAR1 is now $VAR"# ((VAR = VAR +2))# done# echo "finish"# Hash signs (#) and tabs have appeared. Why?
There're two reasons: Auto insert comment Auto indenting For pasting in vim while auto-indent is enabled, you must change to paste mode by typing: :set paste Then you can change to insert mode and paste your code. After pasting is done, type: :set nopaste to turn off paste mode. Since this is a common and frequent action, vim offers toggling paste mode: set pastetoggle=<F2> You can change F2 to whatever key you want, and now you can turn pasting on and off easily. To turn off auto-insert of comments, you can add these lines to your vimrc : augroup auto_comment au! au FileType * setlocal formatoptions-=c formatoptions-=r formatoptions-=oaugroup END vim also provides a pasting register for you to paste text from the system clipboard. You can use "*p or "+p depending on your system. On a system without X11, such as OSX or Windows, you have to use the * register. On an X11 system, like Linux, you can use both. Further reading Accessing the system clipboard How can I paste something to the VIM from the clipboard fakeclip
{ "score": 8, "source": [ "https://unix.stackexchange.com/questions/199203", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/80389/" ] }
199,220
How can I loop over all users in a shell script? I am writing a shell script to perform cleanup on a system. And if the script is run as root, I want to perform the requested actions in all users' home directories. I thought I could maybe loop over the lines in /etc/passwd ; however, I see many more entries there than there are living, breathing users . Is there some special way I can weed out dummy users from /etc/passwd ? I would prefer solutions tailored for the Bourne Shell (for portability). Pseudocode and mere explanations are also welcome.
There's no standard command to enumerate all existing user accounts. On most Unix variants, /etc/passwd contains the list of local accounts, always with the same traditional format (colon-separated columns). On Linux with Glibc (i.e. any non-embedded Linux), you can use the getent command: getent passwd is similar to cat /etc/passwd , but also includes remote accounts (NIS, LDAP, …). The following snippet enumerates user accounts: getent passwd | while IFS=: read -r name password uid gid gecos home shell; do echo "$name's home directory is $home"done Filtering the flesh-and-blood users is not possible in a completely reliable way, because nothing in the user database says whether a user is flesh-and-blood. (Plus: do test accounts count? Do guest accounts count? etc.) Here are some heuristics you can apply. You can filter users whose home directory seems not to be a system directory. top=${home#/}; top=${top%%/*}case $top in |bin|dev|etc|lib*|no*|proc|sbin|usr|var) echo "Looks like a system user";; *) echo "Looks like a user with a real home directory";;esac You can test if the user owns their home directory. That's almost always the case for flesh-and-blood users, and usually not the case for system users, but this is not very reliable because some system users do own their home directory. Furthermore it's possible for the home directory of flesh-and-blood users not to exist if it's a remote directory that isn't currently available, though normally the directory would be automounted. On Linux: if [ -d "$home" ] && [ "$(stat -c %u "$home")" = "$uid" ]; then echo "Likely flesh-and-blood"else echo "Probably a system user"fi You can test if the user has a shell that's an actual shell. But many system users also do, so that's even less reliable than the home directory. You can test whether the account has a password. This isn't fully reliable because flesh-and-blood users don't always have a password (for example they might only have SSH keys). Occasionally system accounts have a password — in particular the root account often does. How to do this depends on the Unix variant and often requires root access. On Linux with shadow passwords (which is the normal thing for Linux), you need to look in /etc/shadow for the password. case $(getent shadow "$name" | awk -F: '{print $2}') in ?|??) echo "No password on this account";; |\$*) echo "This account has a password";; *) echo "Weird password field";;esac Many systems define UID ranges for system accounts vs flesh-and-blood users. The ranges are system-dependent: the default depends on the Unix variant and distribution, and it can usually be configured by the system administrator. On most Linux distributions, the ranges are defined in login.defs : UID_MIN to UID_MAX for human users, other values for system accounts. I think the path of the home directory is the most reliable single indicator, but you may want to combine it with other heuristics.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/199220", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/83381/" ] }
199,234
Aside from checking the manual for a particular DVD drive, is there any way to determine if a DVD drive supports DVD +R/-R (DL) for both reading and writing? In Windows, Nero InfoTool is a convenient way to determine the capabilities of a drive. Is there an equivalent tool (or method) for linux?
You should be able to use less /proc/sys/dev/cdrom/info to display the functionality of the DVD drive. 0 means an option is not enabled and a 1 signifies an option that is available. If you have libcdio installed you can use the cd-drive command for more detailed drive information. K3b is a graphical tool you can use that is similar to Nero . Navigate to: Settings ==> Configure K3b ==> Devices This should display the DVD+R status.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/199234", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/112330/" ] }
199,315
I'm using the Linux "top" command to monitor %CPU of particular process. As the values keep on changing every few seconds, is there any way to keep track of values in a separate file or as a graphical representation? Is it possible to do it using any shell scripts?
The answer to this question can range from a simple command, to complex monitoring tools, depending on your needs. You can start by simply running top -b -n 1 >> file.txt (-b for batch mode, -n 1 for running a single iteration of top) and store the output (appended) in the file.txt. You can also filter "top" output like top -b -n 1 | grep init to see only the data for the "init" process or top -b -n 1 | grep "init" | head -1 |awk '{print $9}' to get the 9th column of the init process data (the CPU value). If you want to use it in a shell script, you could: CPU=$(top -b -n1 | grep "myprocess" | head -1 | awk '{print $9}') MEM=$(top -b -n1 | grep "myprocess" | head -1 | awk '{print $10}') Or, with a single execution of top: read CPU MEM <<<$(top -b -n1 | grep "myprocess" | head -1 | awk '{print $9 " " $10}') (note that grep, head and awk could be merged in a single awk command but for the sake of simplicity I'm using separate commands). We used top in this example but there are alternate methods for other metrics (check sar , iostat , vmstat , iotop , ftop , and even reading /proc/*). Now you have a way to access the data (CPU usage). And in our example we are appending it to a text file. But you can use other tools to store the data and even graph them: store in csv and graph with gnuplot/python/openoffice, use monitoring & graphing tools like zabbix, rrdtools, cacti, etc. There is a big world of monitoring tools that allow you to collect and graph the data like CPU usage, memory usage, disk io, and even custom metrics (number of mysql connections, etc). EDIT: finally, to specifically answer your question, if you want to keep track of changes easily for a simple test, you can run top -b -n 1 >> /tmp/file.txt in your /etc/crontab file, by running top every 5 minutes (or any other time interval if you replace the /5 below). 0-59/5 * * * * root top -b -n1 >>/tmp/output.txt (and a grep + head -1 in the command above if you're only interested in a single process's data). Note that the output.txt will grow, so if you want to reset it daily or weekly, you can "rm" it with another crontab entry.
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/199315", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/112381/" ] }
199,328
I have a machine with 90% hard-disk usage. I want to compress its 500+ log files into a smaller new file. However, the hard disk is too small to keep both the original files and the compressed ones. So what I need is to compress all log files into a single new file one by one, deleting each original once compressed. How can I do that in Linux?
gzip or bzip2 will compress the file and remove the non-compressed one automatically (this is their default behaviour). However, keep in mind that during the compression process, both files will exist. If you want to compress log files (i.e. files containing text), you may prefer bzip2 , since it has a better ratio for text files. bzip2 -9 myfile # will produce myfile.bz2 Comparison and examples: $ ls -l myfile-rw-rw-r-- 1 apaul apaul 585999 29 april 10:09 myfile$ bzip2 -9 myfile$ ls -l myfile*-rw-rw-r-- 1 apaul apaul 115780 29 april 10:09 myfile.bz2$ bunzip2 myfile.bz2$ gzip -9 myfile$ ls -l myfile*-rw-rw-r-- 1 apaul apaul 146234 29 april 10:09 myfile.gz UPDATE as @Jjoao told me in a comment, interestingly, xz seems to have the best ratio on plain files with its default options: $ xz -9 myfile$ ls -l myfile*-rw-rw-r-- 1 apaul apaul 109384 29 april 10:09 myfile.xz For more information, here is an interesting benchmark for different tools: http://binfalse.de/2011/04/04/comparison-of-compression/ For the example above, I use -9 for the best compression ratio, but if the time needed to compress data is more important than the ratio, you'd better not use it (use a lower option, i.e. -1 , or something in between).
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/199328", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/74226/" ] }
199,348
This has been asked several times, but none of the methods work. I would like to dynamically create arrays with array names taken from the variables. So lets start with just one array for now: #!/bin/bashi="aaa"eval ${i}=("1") results in ./1.sh: line 3: syntax error near unexpected token `('./1.sh: line 3: `eval ${i}=("1")' same result with: $(eval ${i})=('1')$(eval echo ${i})=('1')$(eval "echo ${i}")=('1') I do not want to eval everything, justthe array name. If it is possible I would like to avoid using eval at all
eval expects a string as the argument. You can't use ( unquoted, it has a special meaning in shell. i=aaaeval "$i=(1 2)" # Use a string, $i will expand in double quotes.echo ${aaa[1]} You can also use declare or typeset instead of eval : declare -a $i='(1 2)' You still have to quote the parentheses and spaces. To avoid eval completely, you can assign one by one: #! /bin/bashname=aaavalues=(1 2)for ((i=0; i<${#values[@]}; ++i)) ; do read "$name[$i]" <<< "${values[i]}"doneecho ${aaa[1]}
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/199348", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/77420/" ] }
199,360
When I use kill on command line it works. kill -SIGSTOP 1234 But if I use in bash script file I get this error: kill: SIGSTOP: invalid signal specification sh file is #!/bin/shkill -SIGSTOP 1234 How can I use kill in bash script? I tried this: #!/bin/sh/bin/bash -c "kill -SIGSTOP 1234" but it does not work.
The standard (POSIX) syntax is: kill -s STOP "$pid" That is, without the SIG prefix. Some shell implementations support kill -s SIGSTOP or kill -SIGSTOP as an extension, but that's neither standard nor portable. The UNIX specification (POSIX+XSI) also allows: kill -STOP "$pid" And kill -19 "$pid" Though which number corresponds to SIGSTOP is not specified and may change between systems and even between architectures of the same system, so it should be avoided.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/199360", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/112408/" ] }
199,449
Taking example of Ubuntu, can we tell if the kernel was custom compiled rather than what comes with distro?
Sure, just check whether dpkg knows about it. First check the kernel version you are running. uname -aLinux orwell 3.2.0-4-amd64 #1 SMP Debian 3.2.65-1+deb7u2 x86_64 GNU/Linux Then tell dpkg to search for the kernel image file in the dpkg database. dpkg -S /boot/vmlinuz-3.2.0-4-amd64linux-image-3.2.0-4-amd64: /boot/vmlinuz-3.2.0-4-amd64 Or, better, use dlocate from the dlocate package. dlocate first builds a cache from the dpkg database, and uses that. So it is fast. dlocate /boot/vmlinuz-3.2.0-4-amd64linux-image-3.2.0-4-amd64: /boot/vmlinuz-3.2.0-4-amd64 Finally, check that the Debian archives contain this package. apt-cache policy linux-image-3.2.0-4-amd64linux-image-3.2.0-4-amd64: Installed: 3.2.68-1+deb7u1 Candidate: 3.2.68-1+deb7u1 Version table: *** 3.2.68-1+deb7u1 0 500 http://security.debian.org/ wheezy/updates/main amd64 Packages 100 /var/lib/dpkg/status 3.2.65-1 0 500 http://httpredir.debian.org/debian/ wheezy/main amd64 Packages If they don't, then it is a custom package. Of course, if dpkg doesn't know about the image file, then your kernel is not part of a package at all, but has been locally compiled. Note that apt can tell the difference between a package in the Debian archive and a locally compiled one of the same name. I think it checks the md5sum of the package, but I forget the details of how it does that. The binary packages contain information about hashes, see the bottom of apt-cache show linux-image-3.2.0-4-amd64 , for example. e.g. Package: linux-image-3.2.0-4-amd64Source: linuxVersion: 3.2.68-1+deb7u1Installed-Size: 105729[...]Size: 23483788MD5sum: f9736f30f8b68ae79b2747d8a710ce28SHA1: 64bfde903892801dccd04b52b12316901a02cd96SHA256: 775814b3eff4a964b593c0bdeaac20587a4e3ddb1257a9d2bfcf1e9d3b9bfd15
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/199449", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/77199/" ] }
199,464
Is there any shell/shellscript which identifies that there is already an alias for the command I have typed on the command line. Eg On shell I type git checkout master Shell prints "You can use alias you have for that : gcm" Then Shell proceeds with checking out master like normal. I am aware of the alias command which lists all the available aliases. I want shell to remind me to use alias when I am not using them and typing out the full command.
Here's a conceptual approach that might work. The alias command generates a list of all aliases, each followed by an = and it's definition. This output could be parsed by extracting the alias name from between the first blank and the first = . Everything after the first = would be the definition to be matched against. This sounds like something easily handled using associative arrays in awk. Two complications to this (that I see): 1) Since the definition can consist of multiple words, you'd have to do something like measuring the length of the definition minus any enclosing quotes and limit your compare against the entered command to that length. 2) Aliases can hold some weird stuff. Here's my weirdest one (and I'm sure they can get a lot weirder): alias pqp='cd ~/pq;[ -n "$(ls)" ] && mprb * || echo "Print Queue Empty"' The point here is that aliases can contain (almost?) all the special characters the shell wants to interpret and expand, so it would be best to code this in something else like awk or Python where the two strings you need to compare will be treated as simple character strings. The next issue is that you want to actually run the command after you're done matching and issuing any messages. The concern here would be that anything you run from within your program, even if it's just another script, will (normally) be executed in a child process. This means that any (shell-level) side effects of the command will be lost when the command terminates and the subshell closes. The most obvious thing is the return code, $? , which you would have to capture and reissue as the return code of your utility. If the alias included an invocation using the source , . command, then you'd be pretty much out of luck because running that in a subshell would completely cancel the intended side effects. Shells contain a number of built-in commands like cd . Some of these have equivalent programs as does [ and /usr/bin/test , but some, like cd don't and, even for those which do, the built-in versions don't always have the identical behavior and options that the external ones do. The above problems exist because the final command is coming from somewhere other than the standard input of the current shell level and is processed differently because of that. This is where something like AutoKey may be able to help. AutoKey is a macro processor which can be used to automate various actions in an X (gui) environment. In this context, it could be used to invoke a macro when you press a user defined hotkey. The macro could read and analyse a command line you would type into it, issue any messages (about aliases, etc.), and then retype the command as if you typed it on your keyboard. The advantage here is that Linux (really your desktop environment/terminal emulator) can't tell it's not you typing the command and everything proceeds in your original, native environment as if AutoKey wasn't there at all. A second advantage of this approach is that if you just enter a command without first pressing the hotkey, it's business as usual with no overhead, etc. This bring me to a final point which has been discussed at length in various places on stackexchange and elsewhere. It's almost always a bad idea to take a normal part of any system and change how it works by default under the hood without that being totally visible to the user (you, in this case). In brief, if you customize a system to behave in a non-standard way, there are two major consequences. 1) If someone else uses the system (e.g. 
to help you debug a problem), they will encounter unexpected results which could have any number of unpleasant consequences. 2) You become accustomed to the modified behavior and then (unconsciously) expect it to work when you are using another system which has not been customized in the same way. The classic example of this is defining an alias such as alias rm='rm -i' This sounds like a great idea until you type rm on another system and the files are immediately deleted when you were expecting to be prompted first. The way around this is something like: alias rmi='rm -i' If you type it, it works, but if you type it on another system, it probably won't do anything other than issue a command not found error. This is particularly important in the middle of the night when something important is broken, you're not as alert as you'd like to be, and you're under pressure to fix it now . That's when you really don't want any unexpected behaviors or things you have to waste time explaining to people. It works the same way with doing alias checking with an AutoKey macro activated by a hotkey. If you don't go out of your way to activate the macro by pressing a hotkey, then everything works as anyone would expect it to. A properly chosen hotkey probably won't be pressed by accident and, if pressed on another system by habit, probably won't do anything harmful.
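To make the parsing idea above concrete, here is a minimal bash sketch. The function name remind is made up for illustration, and it only copes with simple aliases whose definitions contain no embedded quotes, exactly the limitation discussed above; it prints a hint and then runs the command as typed.

remind() {
    local cmd="$*" line name def
    while IFS= read -r line; do
        name=${line#alias }
        name=${name%%=*}
        def=${line#*=}
        def=${def#\'}; def=${def%\'}            # strip the surrounding single quotes
        [[ $cmd == "$def"* ]] && printf 'You have an alias for that: %s\n' "$name"
    done < <(alias)
    "$@"                                         # then run the command exactly as typed
}
# usage: remind git checkout master

Note that this is opt-in: you still have to type remind in front of the command, which is in the spirit of the hotkey approach above rather than a silent under-the-hood change.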
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/199464", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/112479/" ] }
199,472
On AIX (but this happens on HP-UX as well), I have GNU ls in my path and it is also aliased as ls . When I use xargs, it instead uses the standard Unix ls instead of the alias. For example ( flocate is a function which finds the exact path of the search subject):

flocate mirrorvg | xargs ls -lh
ls: illegal option -- h
usage: ls [-1ACFHLNRSabcdefgiklmnopqrstuxEUX] [File...]

ls -lh /usr/sbin/mirrorvg
-r-xr-x--- 1 root system 37K apr 3 2014 /usr/sbin/mirrorvg*

Why doesn't xargs use the ls alias?
The command xargs is only able to run commands, not aliases. GNU parallel, however, is able to run functions: The command must be an executable, a script, a composed command, or a function. If it is a function you need to export -f the function first. An alias will, however, not work (see why: http://www.perlmonks.org/index.pl?node_id=484296). So I would recommend either: Giving xargs the full path to the version of ls you want to use (or an unambiguous name, perhaps gls depending on how it was installed on your system) or, if your shell allows it, Defining ls as a function ( function ls { gls "$@"; }; export -f ls in bash) and using GNU parallel instead of xargs ( parallel -j1 if you would like to use a single CPU).
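As a concrete sketch of the two suggestions (the path /opt/freeware/bin/ls is only an assumption about where an AIX Toolbox-style install puts GNU ls; adjust it for your system):

# 1) give xargs an unambiguous path instead of relying on the alias
flocate mirrorvg | xargs /opt/freeware/bin/ls -lh

# 2) in bash: wrap it in an exported function and call it through a child bash
myls() { /opt/freeware/bin/ls "$@"; }
export -f myls
flocate mirrorvg | xargs -I{} bash -c 'myls -lh "$1"' _ {}

Exported functions travel through the environment, so the bash started by xargs can see myls even though xargs itself cannot run it directly.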
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/199472", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/80389/" ] }
199,554
One of my folders contains files in the following format: 3_20150412104422154033.txt3_2015041211022775012.txt3_20150412160410171639.txt3_20150412160815638933.txt3_20150413161046573097.txt3_20150413161818852312.txt3_20150413163054600311.txt3_20150413163514489159.txt3_2015041321292659391.txt3_20150414124528747462.txt3_20150414125110440425.txt3_20150414134437706174.txt3_20150415085045179056.txt3_20150415100637970281.txt3_20150415101749513872.txt I want to retrieve those files having a date value less than or equal to my input date value. For example, if I give "3_20150414" which is (3_YYYYMMDD), I want the output to be the file names 3_20150412104422154033.txt3_2015041211022775012.txt3_20150412160410171639.txt3_20150412160815638933.txt3_20150413161046573097.txt3_20150413161818852312.txt3_20150413163054600311.txt3_20150413163514489159.txt3_2015041321292659391.txt3_20150414124528747462.txt3_20150414125110440425.txt3_20150414134437706174.txt I can list the files by issuing a command like this: ls -l | grep '20150413\|20150414' |awk '{print $NF}' But I am struggling to find a <= match.
You can use awk and its string comparison operator.

ls | awk '$0 < "3_20150415"'

In a variable:

max=3_20150414
export max
ls | LC_ALL=C awk '$0 <= ENVIRON["max"] "z"'

concatenating with "z" here makes sure that the comparison is a string comparison, and allows any time on that day since in the C locale, digits sort before z . In zsh , you can also do:

print -rC1 -- *.txt(e['[[ $REPLY < ${max}z ]]'])
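A slightly different way to pass the bound, using awk -v instead of the environment (same idea; run it under LC_ALL=C as above if you want strict C-locale collation):

max=3_20150414
ls | LC_ALL=C awk -v max="$max" '$0 <= max "z"'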
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/199554", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/101660/" ] }
199,586
I read somewhere that it is not recommended to put the boot partition on an LVM-based partition, but I'm doing it anyway. The only problem I've faced with this so far is that sometimes, when I install a new Linux distro and put its boot partition on LVM, grub can't detect it. The grub-mkconfig command usually makes a mistake when generating the grub.cfg file. But if this is the only problem with an LVM-based boot partition, I think it's okay, because I know how to fix it: just give the proper address of the intended boot partition and then everything boots fine. So, is there anything other than this where LVM can cause problems? Because, in my opinion, LVM is very flexible and doesn't slow down the system.
It's not a performance problem, it's a troubleshooting and fixing-things problem. /boot is the bootstrap location - in there are a few files that start off everything else in your system. And sometimes you need to poke in there to fix a problem (such as grub config or similar). If you have to do this, it's useful to have a lowest-common-denominator sort of filesystem, to make it as easy as possible if e.g. you have to remove the drive and put it in another box to edit a config file. If you're in this position, you don't want to be having to 'fudge' your LVM into life just to be able to read it :).
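To illustrate the "fudge your LVM into life" point: from a rescue or live environment, an LVM-backed /boot roughly needs something like the following before you can even read it (the volume group and logical volume names vg0 and boot here are assumptions):

vgscan
vgchange -ay vg0
mount /dev/vg0/boot /mnt

With a plain partition the last line alone would do, which is the whole argument for keeping /boot simple.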
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/199586", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/102187/" ] }
199,598
In UNIX systems you can press top and bottom arrows to navigate through the previous commands. This is extremely handy. Sometimes, I go up and find a command that I want to use again, but with some variations. If I do such changes, then I don't have a way to get the original command back, unless I check it in history . Is there any way to "undo" the changes to the command in the history accessed through keys? My current workaround is to prepend a # to the command. This way the current command is performed as a comment, so nothing happens. Then, I can browse again through the commands with the keys. The problem is that the command I was using may be veeeery far away in the list, so going up again two hundred times is a bit . Control + R is not a solution either, since I may not remember exactly what I was looking for. Example I typed the following "50 commands ago": ls -l /etc/httpd/conf/ Now I went up to that line and changed it to ls -l /etc/init.d/ but did not press enter. Now, I want to get to the ls -l /etc/httpd/conf/ again. My environment $ echo $SHELL/bin/bash$ echo $TERMxterm
As long as you've edited a history entry but not pressed Enter yet, to go back to the original entry, repeatedly press Ctrl + _ — the undo command — until it doesn't make any further change. You're back to the original entry.
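Two related readline details may also help (behaviour of a default bash setup; your bindings may differ): Meta+r, i.e. Esc then r, runs revert-line, which undoes all edits to the current history entry in one keystroke, and you can list the relevant bindings yourself:

bind -p | grep -E 'undo|revert-line'
# on a default setup this typically shows something like:
#   "\C-x\C-u": undo
#   "\C-_": undo
#   "\er": revert-line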
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/199598", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/40596/" ] }
199,613
How can I set the library path for the current script that's running? I mean I don't want to list a new path for the libraries in a text file. I tried it using export LD_LIBRARY_PATH=$(pwd)/lib/ This is the script:

#!/bin/bash
LD_LIBRARY_PATH="$(pwd)/lib/"
export LD_LIBRARY_PATH
./X3TC_config
In your script, these two lines close to the top should do the trick:

LD_LIBRARY_PATH="$(pwd)/lib"
export LD_LIBRARY_PATH

Although bash allows you to set and export a variable in a single statement, not all shells do, so the two step approach is more portable, if that's a concern. If this isn't working for you, check that you are running the script from the right place - using $(pwd) like this ties you to running the script from the directory that contains the required ./lib subdirectory. If you want to be able to run the script from anywhere, you need to use the absolute path to the ./lib subdir, or construct a relative path from the directory portion of the path to the script using, e.g., $(dirname $0)
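A sketch of the location-independent variant mentioned at the end, assuming the lib directory sits next to the script:

#!/bin/bash
script_dir=$(cd "$(dirname "$0")" && pwd)
LD_LIBRARY_PATH="$script_dir/lib"
export LD_LIBRARY_PATH
"$script_dir/X3TC_config"

This way the script can be started from any working directory.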
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/199613", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/109103/" ] }
199,615
I often use the bc utility for converting hex to decimal and vice versa. However, it is always a bit of trial and error how ibase and obase should be configured. For example, here I want to convert the hex value C0 to decimal:

$ echo "ibase=F;obase=A;C0" | bc
180
$ echo "ibase=F;obase=10;C0" | bc
C0
$ echo "ibase=16;obase=A;C0" | bc
192

What is the logic here? obase ( A in my third example) needs to be in the same base as the value which is converted ( C0 in my examples) and ibase ( 16 in my third example) has to be in the base where I am converting to?
What you actually want to say is: $ echo "ibase=16; C0" | bc192 for hex-to-decimal, and: $ echo "obase=16; 192" | bcC0 for decimal-to-hex. You don't need to give both ibase and obase for any conversion involving decimal numbers, since these settings default to 10. You do need to give both for conversions such as binary-to-hex. In that case, I find it easiest to make sense of things if you give obase first: $ echo "obase=16; ibase=2; 11000000" | bcC0 If you give ibase first instead, it changes the interpretation of the following obase setting, so that the command has to be: $ echo "ibase=2; obase=10000; 11000000" | bcC0 This is because in this order, the obase value is interpreted as a binary number, so you need to give 10000₂=16 to get output in hex. That's clumsy. Now let’s work out why your three examples behave as they do. echo "ibase=F;obase=A;C0" | bc 180 That sets the input base to 15 and the output base to 10, since a single-digit value is interpreted in hex, according to POSIX . This asks bc to tell you what C0₁₅ is in base A₁₅=10, and it is correctly answering 180₁₀, though this is certainly not the question you meant to ask. echo "ibase=F;obase=10;C0" | bc C0 This is a null conversion in base 15. Why? First, because the single F digit is interpreted in hex, as I pointed out in the previous example. But now that you've set it to base 15, the following output base setting is interpreted that way, and 10₁₅=15, so you have a null conversion from C0₁₅ to C0₁₅. That's right, the output isn't in hex as you were assuming, it's in base 15! You can prove this to yourself by trying to convert F0 instead of C0 . Since there is no F digit in base 15, bc clamps it to E0 , and gives E0 as the output. echo "ibase=16; obase=A; C0" 192 This is the only one of your three examples that likely has any practical use. It is changing the input base to hex first , so that you no longer need to dig into the POSIX spec to understand why A is interpreted as hex, 10 in this case. The only problem with it is that it is redundant to set the output base to A₁₆=10, since that's its default value.
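If you do this often, two small shell wrappers keep the ibase/obase bookkeeping out of sight (a sketch; ${1^^} needs bash 4+ and is there because bc wants hex digits in upper case):

h2d() { echo "ibase=16; ${1^^}" | bc; }    # hex -> decimal
d2h() { echo "obase=16; $1" | bc; }        # decimal -> hex
h2d c0    # 192
d2h 192   # C0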
{ "score": 7, "source": [ "https://unix.stackexchange.com/questions/199615", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/33060/" ] }
199,633
I would like to write an installation bash script, where I would like to install MySQL server. On Linux Mint I had the following code: apt-get -y --force-yes install mysql-server-5.6 but I installed the new Debian 8 and there is no mysql-server - instead there is mariadb . How can I find out if a package exists? I just know that there is dpkg -s which should tell whether a package is installed.
(the below is from Ubuntu, but the same technique obviously works on Debian as well) $ apt-cache show screenPackage: screenPriority: optionalSection: miscInstalled-Size: 950Maintainer: Ubuntu Developers <[email protected]>Original-Maintainer: Axel Beckert <[email protected]>Architecture: amd64Version: 4.1.0~20120320gitdb59704-9Depends: libc6 (>= 2.15), libpam0g (>= 0.99.7.1), libtinfo5Suggests: iselect (>= 1.4.0-1) | screenie | byobuFilename: pool/main/s/screen/screen_4.1.0~20120320gitdb59704-9_amd64.debSize: 645730... If the package exists, information will be displayed. If not, you'll see something like: $ apt-cache show foobarN: Unable to locate package foobarE: No packages found Additionally, the exit code of apt-cache will be non-zero if no matching packages are found. Additional note: If you're using apt-cache show package where package is a virtual one (one that doesn't exist, but is, for example, referenced by other packages), you'll get: N: Can't select versions from package 'package' as it is purely virtualN: No packages found The exit code of this is zero (which is a bit misleading in my opinion.)
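For the installation-script use case in the question, the exit code can drive a fallback; a hedged sketch (the mariadb-server package name for Debian 8 is an assumption, check your release):

if apt-cache show mysql-server-5.6 > /dev/null 2>&1; then
    apt-get -y --force-yes install mysql-server-5.6
else
    apt-get -y install mariadb-server
fi

Keep the virtual-package caveat above in mind: apt-cache show can exit 0 for a purely virtual name even though nothing directly installable carries that name.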
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/199633", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/50966/" ] }
199,638
What is the equivalent to sudo apt-get install texlive-full on a Fedora system? I read it is yum install texlive-scheme-full . Am I correct?
Yes. dnf install texlive-scheme-full (or yum install texlive-scheme-full , in older versions) is the way to go. While the installed packages are not fully equivalent the intention is the same. As stated here: https://ask.fedoraproject.org/en/question/44989/how-to-install-latex-for-fedora-19/ there are the following schemes: texlive-scheme-basic : basic scheme (plain and latex)texlive-scheme-context : ConTeXt schemetexlive-scheme-full : full scheme (everything)texlive-scheme-gust : GUST TeX Live schemetexlive-scheme-medium : medium scheme (small + more packages and languages)texlive-scheme-minimal : minimal scheme (plain only)texlive-scheme-small : small scheme (basic + xetex, metapost, a few languages)texlive-scheme-tetex : teTeX scheme (more than medium, but nowhere near full)texlive-scheme-xml : XML scheme and various collections (if you want some finer control over what you install): texlive-collection-basic : Essential programs and filestexlive-collection-bibtexextra : BibTeX additional stylestexlive-collection-binextra : TeX auxiliary programstexlive-collection-context : ConTeXt and packagestexlive-collection-fontsextra : Additional fontstexlive-collection-fontsrecommended : Recommended fontstexlive-collection-fontutils : Graphics and font utilitiestexlive-collection-formatsextra : Additional formatstexlive-collection-games : Games typesettingtexlive-collection-genericextra : Generic additional packagestexlive-collection-genericrecommended : Generic recommended packagestexlive-collection-htmlxml : HTML/SGML/XML supporttexlive-collection-humanities : Humanities packagestexlive-collection-langafrican : African scriptstexlive-collection-langarabic : Arabictexlive-collection-langcjk : Chinese/Japanese/Koreantexlive-collection-langcyrillic : Cyrillictexlive-collection-langczechslovak : Czech/Slovaktexlive-collection-langenglish : US and UK Englishtexlive-collection-langeuropean : Other European languagestexlive-collection-langfrench : Frenchtexlive-collection-langgerman : Germantexlive-collection-langgreek : Greektexlive-collection-langindic : Indic scriptstexlive-collection-langitalian : Italiantexlive-collection-langother : Other languagestexlive-collection-langpolish : Polishtexlive-collection-langportuguese : Portuguesetexlive-collection-langspanish : Spanishtexlive-collection-latex : LaTeX fundamental packagestexlive-collection-latexextra : LaTeX additional packagestexlive-collection-latexrecommended : LaTeX recommended packagestexlive-collection-luatex : LuaTeX packagestexlive-collection-mathextra : Mathematics packagestexlive-collection-metapost : MetaPost and Metafont packagestexlive-collection-music : Music packagestexlive-collection-omega : Omega packagestexlive-collection-pictures : Graphics, pictures, diagramstexlive-collection-plainextra : Plain TeX packagestexlive-collection-pstricks : PSTrickstexlive-collection-publishers : Publisher styles, theses, etctexlive-collection-science : Natural and computer sciencestexlive-collection-xetex : XeTeX and packages
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/199638", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/108183/" ] }
199,679
My script is not running on boot in a vagrant box under Ubuntu. My script looks like this - #!/bin/bash# /etc/init.d/mailcatcher### BEGIN INIT INFO# Provides: scriptname# Required-Start: $remote_fs $syslog# Required-Stop: $remote_fs $syslog# Default-Start: 2 3 4 5# Default-Stop: 0 1 6# Short-Description: Start daemon at boot time# Description: Enable service provided by daemon.### END INIT INFOmailcatcher --http-ip 192.168.50.10 My permissions on the file look like this - -rwxr-xr-x 1 root root 352 Apr 30 09:59 mailcatcher.sh I run the command - sudo update-rc.d "mailcatcher.sh" defaults If I run the script manually, it works and starts mailcatcher. If I reboot the computer, the mailcatcher daemon does not start. Am I missing something?
And now for the Ubuntu answers. This is an Ubuntu Linux question, and version 15 is now released. The Ubuntu world now has systemd. But even before version 15 the Ubuntu world had upstart. There really isn't a reason to write System 5 rc scripts; and there is certainly no good reason for starting from there. Both upstart and systemd do all of the "service controls". All that you need to do is describe the service . systemd A systemd service unit, to be placed in /etc/systemd/system/mailcatcher.service , is [Unit]Description=Ruby MailCatcherDocumentation=http://mailcatcher.me/[Service]# Ubuntu/Debian convention:EnvironmentFile=-/etc/default/mailcatcherType=simpleExecStart=/usr/bin/mailcatcher --foreground --http-ip 192.168.50.10[Install]WantedBy=multi-user.target This automatically gets one all of the systemd controls, such as: systemctl enable mailcatcher.service to set the service to be auto-started at boot. systemctl preset mailcatcher.service to set the service to be auto-started at boot, if the local policy permits it. systemctl start mailcatcher.service to start the service manually. systemctl status mailcatcher.service to see the service status. upstart Upstart is similar, and modifying Fideloper LLC's upstart job file to this question gives this for /etc/init/mailcatcher.conf : description "Mailcatcher"start on runlevel [2345]stop on runlevel [!2345]respawnexec /usr/bin/mailcatcher --foreground --http-ip=192.168.50.10 This automatically gets one all of the upstart controls, such as: initctl start mailcatcher to start the service manually. initctl status mailcatcher to see the service status. Bonus daemontools section For kicks, for the entertainment of any daemontools-family-using people who reach this via a WWW search, and to demonstrate another reason why not to begin at System 5 rc scripts, I ran that systemd service unit through the nosh toolset's convert-systemd-units command to produce the following daemontools-family run script: #!/bin/nosh#Run file generated from ./mailcatcher.service#Ruby MailCatcherchdir /read-conf --oknofile /etc/default/mailcatcher/usr/bin/mailcatcher --foreground --http-ip 192.168.50.10 Actually, the convert-systemd-units command generates a whole nosh service bundle directory. With that directory, which specifies dependency and ordering information, installed as /var/sv/mailcatcher in a system with the nosh service-manager one gets all of the nosh controls, such as: system-control enable mailcatcher.service to set the service to be auto-started at boot. system-control start mailcatcher.service to start the service manually. system-control status mailcatcher.service to see the service status. system-control preset mailcatcher.service to set the service to be auto-started at boot, if the local configuration (systemd-style presets or /etc/rc.conf{,.local} ) permits it. Don't even begin with System 5 rc files. Look at this template used by SaltStack for System 5 rc scripts. Even with the SaltStack parameterization eliminated that is 59 lines of shell script code, most of which is generic boilerplate that you'd be having to re-invent and re-write. Again. And Celada has already pointed out where you've re-invented it badly. The systemd unit file is 11 lines long. The upstart job file is 8 lines. The nosh run script is 6. And they do all of the start/stop/status mechanics for you. Don't start with System V rc , especially not on Ubuntu Linux. Further reading Setting Up Mailcatcher . 2014-10-21. Servers for Hackers. Fideloper LLC. 
James Hunt and Clint Byrum (2014). "Utilities" . Upstart Cookbook . Jonathan de Boyne Pollard (2014). A side-by-side look at run scripts and service units. . Frequently Given Answers.
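As a practical addendum to the systemd route above, a typical check sequence after dropping the unit file in place looks like this (service name as defined above):

systemctl daemon-reload                   # after creating or editing the unit file
systemctl enable mailcatcher.service
systemctl start mailcatcher.service
systemctl status mailcatcher.service
journalctl -u mailcatcher.service -b      # the service's output and errors since boot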
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/199679", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/37640/" ] }
199,683
My mappings which contain Alt aren't working in vim when used in urxvt . They work fine in gvim . An example of such a mapping: map <silent> <A-h> <C-w>< In insert mode, when I type Alt-h , only h is printed. How do I need to configure urxvt such that mappings containing Alt work in vim? I am using Ubuntu. Oops, I had set my modifier to super in urxvt. That's why none of the solutions were working for me. Make sure that you set the modifier to Alt to get the proposed solutions to work.
When you make an <A-x> mapping in Vim where x is a printable character (i.e., not a cursor or arrow key), it tells Vim to expect that character with the 8th/high bit set (aka, add 128 to the ASCII value). In your example, <A-h> means Vim will trigger the mapping when you type è. The ASCII value of h is 104 (binary 01101000) and when you set the 8th bit of that number, you get è's character value of 232 (binary 11101000). What happens in urxvt and many other terminals is that the Alt key is set to send the character typed prefixed with the Escape character instead of "adding 128". In this case, Vim sees <Esc>h instead of è, so the mapping isn't triggered. This leaves you two options: re-configure your terminal to do something different with the Alt modifier or add more mappings to Vim with <Esc>x in addition to <A-x> .
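A sketch of the second option, keeping both forms in ~/.vimrc so the mapping works in gvim and in urxvt-style terminals alike (note that mapping Esc sequences can add a slight delay after pressing Esc on its own):

cat >> ~/.vimrc <<'EOF'
" decrease window width with Alt-h, whether Alt sets the high bit or sends Esc
map <silent> <A-h>  <C-w><
map <silent> <Esc>h <C-w><
EOF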
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/199683", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/73864/" ] }
199,684
I am installing Emacs 24.5.0, and I want to install it with X. I am on Fedora. The configuration script cannot find any X toolkit. How do I find out which one is good for me, and in which folder do I find it? Somewhere I have read that the X toolkit for Fedora should be GTK. How can I check if and where this library is installed?
When you make an <A-x> mapping in Vim when x is a printable character (i.e., not a cursor or arrow key), it tells Vim to expect that character with the 8th/high bit set (aka, add 128 to the ASCII value). In your example, <A-h> means Vim will trigger the mapping when you type è. The ASCII value of h is 104 (binary 01101000) and when you set the 8th bit of that number, you get è's ASCII value of 232 (binary 11101000). What happens in urxvt and many other terminals is that the Alt key is set to send the character typed prefixed with the Escape character instead of "adding 128". In this case, Vim seed <Esc>h instead of è, so the mapping isn't triggered. This leaves you two options: re-configure your terminal to do something different with the Alt modifier or add more mappings to Vim with <Esc>x in addition to <A-x> .
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/199684", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/15525/" ] }
199,686
I have some confusion regarding fork and clone. I have seen it claimed that: 1) fork is for processes and clone is for threads; 2) fork just calls clone, and clone is used for all processes and threads. Is either of these accurate? What is the distinction between these 2 syscalls with a 2.6 Linux kernel?
fork() was the original UNIX system call. It can only be used to create new processes, not threads. Also, it is portable. In Linux, clone() is a new, versatile system call which can be used to create a new thread of execution. Depending on the options passed, the new thread of execution can adhere to the semantics of a UNIX process, a POSIX thread, something in between, or something completely different (like a different container). You can specify all sorts of options dictating whether memory, file descriptors, various namespaces, signal handlers, and so on get shared or copied. Since clone() is the superset system call, the implementation of the fork() system call wrapper in glibc actually calls clone() , but this is an implementation detail that programmers don't need to know about. The actual real fork() system call still exists in the Linux kernel for backward compatibility reasons even though it has become redundant, because programs that use very old versions of libc, or another libc besides glibc, might use it. clone() is also used to implement the pthread_create() POSIX function for creating threads. Portable programs should call fork() and pthread_create() , not clone() .
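You can see the "fork is implemented via clone" point for yourself with strace, if it is installed; on a glibc-based Linux system the child is typically created with a clone call rather than a fork call (the exact flags shown will vary with the libc version):

strace -f -e trace=clone,fork,vfork sh -c 'ls > /dev/null' 2>&1 | grep -E 'clone|fork'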
{ "score": 7, "source": [ "https://unix.stackexchange.com/questions/199686", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/43342/" ] }
199,694
This is a cents 5.7 32bit VM running inside a vmware 5.5. host.I see high values of load average with low CPU usage. The VM has 4 vCPU’s and load sometimes reach 20. When I run vmstat I see high values in the 'r' column. The question is how I find which process are inside the kernel run queue?. I've tried what ever combintation of ps I've found in internet with no luck things like ps r -A vmstat output: [ ~]# vmstat 1 10procs -----------memory---------- ---swap-- -----io---- --system-- -----cpu------ r b swpd free buff cache si so bi bo in cs us sy id wa st 9 0 8 822516 322880 1593592 0 0 1 65 9 6 1 1 98 0 0 7 0 8 823136 322880 1593584 0 0 0 0 9387 97411 8 9 84 0 053 0 8 823508 322880 1593588 0 0 0 236 8332 108913 9 12 79 0 064 0 8 818424 322888 1597548 0 0 0 116 9027 140988 10 11 79 0 069 0 8 820284 322888 1597548 0 0 0 0 9095 128715 8 10 83 0 064 0 8 820284 322888 1597692 0 0 0 0 8701 119305 9 11 80 0 0 3 0 8 819540 322888 1597688 0 0 0 4704 9531 112734 8 8 84 0 081 0 8 818052 322888 1599452 0 0 0 224 8324 102409 10 13 77 0 0 8 0 8 816192 322888 1601788 0 0 0 3240 9181 98478 9 11 80 0 0 7 0 8 815076 322888 1601872 0 0 0 0 9250 104422 10 9 81 0 0 mpstat 1 1006:04:03 PM CPU usr nice sys iowait irq soft steal guest idle06:04:04 PM all 9.32 0.00 8.82 0.00 0.25 4.03 0.00 0.00 77.5806:04:05 PM all 9.85 0.00 8.84 0.00 0.25 4.29 0.00 0.00 76.7706:04:06 PM all 8.29 0.00 5.78 0.00 0.50 4.77 0.00 0.00 80.6506:04:07 PM all 9.82 0.00 7.81 0.00 0.25 4.28 0.00 0.00 77.8306:04:08 PM all 8.84 0.00 5.30 0.00 0.25 4.29 0.00 0.00 81.3106:04:09 PM all 10.05 0.00 9.05 0.00 0.50 4.02 0.00 0.00 76.3806:04:10 PM all 9.60 0.00 7.32 0.00 0.51 4.04 0.00 0.00 78.5406:04:11 PM all 8.33 0.00 5.81 0.00 0.25 4.29 0.00 0.00 81.3106:04:12 PM all 9.57 0.00 7.05 0.00 0.25 4.03 0.00 0.00 79.0906:04:13 PM all 7.83 0.00 5.05 0.00 0.25 3.79 0.00 0.00 83.08Average: all 9.15 0.00 7.08 0.00 0.33 4.18 0.00 0.00 79.25
fork() was the original UNIX system call. It can only be used to create new processes, not threads. Also, it is portable. In Linux, clone() is a new, versatile system call which can be used to create a new thread of execution. Depending on the options passed, the new thread of execution can adhere to the semantics of a UNIX process, a POSIX thread, something in between, or something completely different (like a different container). You can specify all sorts of options dictating whether memory, file descriptors, various namespaces, signal handlers, and so on get shared or copied. Since clone() is the superset system call, the implementation of the fork() system call wrapper in glibc actually calls clone() , but this is an implementation detail that programmers don't need to know about. The actual real fork() system call still exists in the Linux kernel for backward compatibility reasons even though it has become redundant, because programs that use very old versions of libc, or another libc besides glibc, might use it. clone() is also used to implement the pthread_create() POSIX function for creating threads. Portable programs should call fork() and pthread_create() , not clone() .
{ "score": 7, "source": [ "https://unix.stackexchange.com/questions/199694", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/112633/" ] }
199,698
What is the difference between uname -n and hostname ? Are there any real differences in what they return? Are there any differences in availability on different OSes? Is one of them included in POSIX and the other not?
There is no difference. hostname and uname -n output the same information. They both obtain it from the uname() system call. One difference is that the hostname command can be used to set the hostname as well as getting it. uname cannot do that. (Normally this is done only once, early in the boot process!)
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/199698", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/83381/" ] }
199,708
This may be a rather simple question but here goes. I am thinking of learning to use a CLI as I have been told it is more efficient for a technician to use than a GUI. Could someone please outline why it would be beneficial to use a CLI?
It's all about metaphors for communication. GUIs are picture books - they let you move things to other things, or right click on things and select options. But you still type a post when you want to ask a question. Command lines are much the same thing - they're about telling a computer what to do. If what you're trying to do is simple, then pictures and words are about the same. If what you're trying to do is complex, then words allow you to explain better. One of my favourite examples is find . find . -mtime -60 -name '*.txt' -exec sed -i.bak 's/fish/carrot/g' {} \; This will: Search the current directory structure for all files modified within the last 60 days, called '*.txt' replace the word 'fish' with 'carrot' in all of them. leaving a backup copy suffixed .bak . How would you illustrate that pictorially using a GUI? (And to extend it - the next phase of the command line is to learn scripting, which lets you explain in more detail what you want the computer to do). This principle goes double for long running and time consuming tasks. You write your command to do it, press 'enter' and save the output. - go home and leave it going. Where complex GUI commands you have to run one at a time. The core advantage of GUIs in my mind is that they do let you have a range of things to do without needing to know specifically . Control Panel in Windows is the place you look when you want to 'fiddle with some settings' - and you can flick through them looking for something appropriate. I would suggest for infrequent tasks, that's beneficial. It's probably still (in my opinion) easier to do that with Windows than it would be on Linux. But that's really the same point - for infrequent tasks performed occasionally, GUIs are good. For complex and regular repeatable commands, CLI every time.
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/199708", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/84597/" ] }
199,787
Here is output from ps : $ ps aux | grep blobubuntu 4286 0.0 0.1 34748 9592 ? S Jan14 0:00 /usr/bin/python /usr/local/bin/pynt start_blob_readerubuntu 4287 0.0 0.1 34748 9596 ? S Jan14 0:00 /usr/bin/python /usr/local/bin/pynt start_blob_readerubuntu 4288 0.0 0.0 4444 656 ? S Jan14 0:00 /bin/sh -c python -m blob_manager blobubuntu 4289 1.2 0.2 65512 20668 ? S Jan14 1974:18 python -m blob_manager blobubuntu 4290 0.0 0.0 4444 656 ? S Jan14 0:00 /bin/sh -c python -m blob_manager blobubuntu 4291 1.2 0.2 65404 20624 ? S Jan14 1978:24 python -m blob_manager blobubuntu 19849 0.0 0.0 10464 896 pts/0 S+ 05:43 0:00 grep blob What is the easiest to kill these jobs (except 19849 because it is the grep process itself) via shell scripting? Either bash or zsh is OK.
Use pkill : pkill blob That would kill all processes matching the pattern blob . Another approach would be killall , but you should call it with -r so that the pattern is interpreted as a regex: killall -r blob
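Two refinements that may help here: previewing the match before killing, and matching against the full command line. Since several of these processes are named python or sh and only carry blob in their arguments, -f (match the whole command line) is what actually catches them:

pgrep -fl blob        # preview what would match (newer procps also has: pgrep -fa blob)
pkill -f blob         # kill everything whose command line matches blob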
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/199787", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/7003/" ] }
199,820
How do I disable the beep/bell sound that is emitted in Secure Shell in Chrome OS? When certain commands fail, such as pressing backspace on an empty command line or on a failing tab autocompletion, I hear a very loud and annoying sound. It seems like a silly question, but I use the Secure Shell continuously and the sounds happen a lot. I'm talking about the Secure Shell here that can be accessed when Chrome OS is in developer mode by pressing Ctrl + Alt + T and then typing 'shell'.
A friendly user of Chrome OS, Aseda, pointed me to the hterm/Secure Shell FAQ . The audible bell can be disabled by opening the Javascript console by pressing Ctrl + Shift + J and then typing: term_.prefs_.set('audible-bell-sound', '')
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/199820", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/112702/" ] }
199,827
I'm trying to boot Solaris 10 or Solaris 11 for SPARC using qemu-system-sparc64 but keep running into problems early on. I'm using the DVD images sol-10-u11-ga-sparc-dvd.iso and sol-11_2-text-sparc.iso available from SUN's^WOracle's web site. Attempting Solaris 10: $ qemu-system-sparc64 -m 1024 -cdrom /tank/images/sol-10-u11-ga-sparc-dvd.iso -boot d -nographicOpenBIOS for Sparc64Configuration device id QEMU version 1 machine id 0kernel cmdlineCPUs: 1 x SUNW,UltraSPARC-IIiUUID: 00000000-0000-0000-0000-000000000000Welcome to OpenBIOS v1.1 built on Mar 12 2015 08:09 Type 'help' for detailed informationTrying cdrom:f...Not a bootable ELF imageNot a bootable a.out imageLoading FCode image...Loaded 7420 bytesentry point is 0x4000Ignoring failed claim for va 1000000 memsz af6d6!Ignoring failed claim for va 1402000 memsz 4dcc8!Ignoring failed claim for va 1800000 memsz 510c8!Jumping to entry point 00000000010071d8 for type 0000000000000001...switching to new context: entry point 0x10071d8 stack 0x00000000ffe8aa09warning:interpret: exception -13 caughtCopyright (c) 1983, 2013, Oracle and/or its affiliates. All rights reserved.spacex@:interpret: exception -13 caughtinterpret h# d constant MMU_PAGESHIFT h# 0 constant TTE8K h# 20 constant SFHME_SIZE h# 0 constant SFHME_TTE h# 8 constant HMEBLK_TAG h# 0 constant HMEBLK_NEXT h# 2c constant HMEBLK_MISC h# 38 constant HMEBLK_HME1 h# 8 constant NHMENTS h# 7 constant HBLK_SZMASK h# 10 constant HBLK_RANGE_SHIFT h# 8 constant HMEBP_HBLK h# 1 constant HMEBLK_ENDPA h# 20 constant HMEBUCKET_SIZE h# 0 constant HTAG_SFMMUPSZ h# d constant HTAG_BSPAGE_SHIFT h# a constant HTAG_REHASH_SHIFT h# 3ff constant SFMMU_INVALID_SHMERID h# 3 ccould not find debugger-vocabulary-hook>threads:interpret: exception -13 caughtinterpret \ Copyright (c) 1995-1999 by Sun Microsystems, Inc.\ All rights reserved.\\ ident "@(#)data64.fth 1.3 00/07/17 SMI"hexonly forth also definitionsvocabulary kdbg-wordsalso kdbg-words definitionsdefer p@defer p!['] x@ is p@['] x! is p!8 constant ptrsized# 32 constant nbitsminorh# ffffffff constant maxmin\\ Copyright 2008 Sun Microsystems, Inc. 
All rights reserved.\ Use is subject to license terms.\\ #pragma ident "@(#)kdbg.fth 1.20 08/06/06 SMI"h# 7ff constant v9biash# Unhandled Exception 0x0000000000000008PC = 0x0000000000000000 NPC = 0x0000000000000000Stopping executionqemu: fatal: Trap 0x0032 while trap level (5) >= MAXTL (5), Error statepc: 00000000ffd04640 npc: 00000000ffd04644%g0-3: 0000000000000000 00000000c40aaab5 00000000c3fb6875 00000000ffe11e38%g4-7: 0000000000000000 0000000000000000 0000000000000000 0000000000000000%o0-3: 000001fe020003f8 000001fff0080886 0000000000000000 0000000000000000%o4-7: 00000000ffeabc00 0000000000000000 00000000ffe812c1 000001fff000ccb8%l0-3: 0000000000000000 0000000000000000 0000000000000000 0000000000000000%l4-7: 0000000000000000 0000000000000000 0000000000000000 0000000000000000%i0-3: 0000000000000000 0000030000f8de5d 0000000000000000 0000000000000002%i4-7: 0000000000000012 00000000ffe8b000 00000000ffe81371 00000000ffd0c6c0%f00: 0000000000000000 0000000000000000 0000000000000000 0000000000000000%f08: 0000000000000000 0000000000000000 0000000000000000 0000000000000000%f16: 0000000000000000 0000000000000000 0000000000000000 0000000000000000%f24: 0000000000000000 0000000000000000 0000000000000000 0000000000000000%f32: 0000000000000000 0000000000000000 0000000000000000 0000000000000000%f40: 0000000000000000 0000000000000000 0000000000000000 0000000000000000%f48: 0000000000000000 0000000000000000 0000000000000000 0000000000000000%f56: 0000000000000000 0000000000000000 0000000000000000 0000000000000000pstate: 00000015 ccr: 44 (icc: -Z-- xcc: -Z--) asi: 80 tl: 5 pil: 0cansave: 7 canrestore: 0 otherwin: 0 wstate: 0 cleanwin: 7 cwp: 1fsr: 0000000000000000 y: 0000000000000000 fprs: 0000000000000000Abort trap Attempting Solaris 11: qemu-system-sparc64 -m 1024 -cdrom /tank/images/sol-11_2-text-sparc.iso -boot d -nographicOpenBIOS for Sparc64Configuration device id QEMU version 1 machine id 0kernel cmdlineCPUs: 1 x SUNW,UltraSPARC-IIiUUID: 00000000-0000-0000-0000-000000000000Welcome to OpenBIOS v1.1 built on Mar 12 2015 08:09 Type 'help' for detailed informationTrying cdrom:f...Not a bootable ELF imageNot a bootable a.out imageLoading FCode image...Loaded 6636 bytesentry point is 0x4000Ignoring failed claim for va 1000000 memsz c107e!Ignoring failed claim for va 1402000 memsz 5a6e0!Ignoring failed claim for va 1800000 memsz 52240!Jumping to entry point 00000000010071f8 for type 0000000000000001...switching to new context: entry point 0x10071f8 stack 0x00000000ffe8aa09'SUNW,UltraSPARC-IIi' is not supported by this release of Solaris.EXIT0 > I have tried using the -cpu help option to find a supported CPU, but no matter what string I provide, it is not understood. Has anyone gotten original Solaris 10 or 11 to boot with sparc64 under QEMU?What else should I try? PS: It is not an option to buy SPARC hardware or emulate 32bit SPARC and ancient Solaris 9 or older or use Solaris x86.
A friendly user of the chrome-os, Aseda,pointed me to the hterm/Secure Shell FAQ . The audible bell can be disabled by opening the Javascript console by pressing Ctrl + Shift + J and then typing: term_.prefs_.set('audible-bell-sound', '')
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/199827", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/7107/" ] }
199,836
I've taken a backup of the file where my dconf database is stored ( ~/.config/dconf/user , which is a binary file), and now I need to move some keys from the backup to the dconf in use. How can I view the content of the backed-up dconf without putting it "in place" and viewing it with, for example, dconf-editor ?
To view the content of that file you could rename it - e.g. test - place it under ~/.config/dconf/ and then have dconf read/dump the settings from that file. By default , dconf reads the user-db found in $XDG_CONFIG_HOME/dconf/ : A "user-db" line specifies a user database. These databases are found in $XDG_CONFIG_HOME/dconf/ . The name of the file to open in that directory is exactly as it is written in the profile. This file is expected to be in the binary dconf database format. Note that XDG_CONFIG_HOME cannot be set/modified per terminal or session, because then the writer and reader would be working on different DBs (the writer is started by DBus and cannot see that variable). As a result, you would need a custom profile that points to that particular db file - e.g. user-db:test and then instruct dconf to dump the data (using the custom profile) via the DCONF_PROFILE environment variable: cdcp /path_to_backup_dconf/user ~/.config/dconf/testprintf %s\\n "user-db:test" > db_profileDCONF_PROFILE=~/db_profile dconf dump / > old_settings The result is a file ( old_settings ) containing the settings from your backed up dconf file, e.g.: [org/gnome/desktop/interface]font-name='DejaVu Sans Oblique 10'document-font-name='DejaVu Sans Oblique 10'gtk-im-module='gtk-im-context-simple'clock-show-seconds=trueicon-theme='HighContrast'monospace-font-name='DejaVu Sans Mono Oblique 10'[org/gnome/desktop/input-sources]sources=@a(ss) []xkb-options=@as [][org/gnome/desktop/wm/preferences]num-workspaces=4titlebar-font='DejaVu Sans Bold Oblique 10'....... You could then remove those files: rm -f ~/db_profile ~/.config/dconf/test and load the old settings into the current database: dconf load / < old_settings If you want to dump only specific settings just provide the path: DCONF_PROFILE=~/db_profile dconf dump /org/gnome/desktop/wm/preferences/[/]num-workspaces=4titlebar-font='DejaVu Sans Bold Oblique 10' but note that for each path you should have a different file and when you load it you should specify the path accordingly: dconf load /org/gnome/desktop/wm/preferences/ < old_wm_settings Also note that, due to upstream changes, older dconf databases might contain paths, keys and values that are invalid in newer versions so full compatibility between db-files created by different versions of dconf isn't always guaranteed. In that case, you would have to inspect the resulting old_settings file and manually remove or edit the entries that are invalid before loading it into your current database.
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/199836", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/36186/" ] }
199,839
After some issues with Ubuntu, I have decided to go to Mint. Now new issues appear: I have three monitors, two horizontal and one vertical. The problem is that after each reboot, Mint does not remember anything about the horizontal monitor and turns it back to vertical. Any idea how to tell Mint that I do not want it to change my monitor setup after every reboot? I am using nVidia and I am making the setup via the nVidia X Server Settings tool. Info for my video:

$ lspci | grep VGA
00:02.0 VGA compatible controller: Intel Corporation Device 041e (rev 06)
01:00.0 VGA compatible controller: NVIDIA Corporation Device 0fc8 (rev a1)
To view the content of that file you could rename it - e.g. test - place it under ~/.config/dconf/ and then have dconf read/dump the settings from that file. By default , dconf reads the user-db found in $XDG_CONFIG_HOME/dconf/ : A "user-db" line specifies a user database. These databases are found in $XDG_CONFIG_HOME/dconf/ . The name of the file to open in that directory is exactly as it is written in the profile. This file is expected to be in the binary dconf database format. Note that XDG_CONFIG_HOME cannot be set/modified per terminal or session, because then the writer and reader would be working on different DBs (the writer is started by DBus and cannot see that variable). As a result, you would need a custom profile that points to that particular db file - e.g. user-db:test and then instruct dconf to dump the data (using the custom profile) via the DCONF_PROFILE environment variable: cdcp /path_to_backup_dconf/user ~/.config/dconf/testprintf %s\\n "user-db:test" > db_profileDCONF_PROFILE=~/db_profile dconf dump / > old_settings The result is a file ( old_settings ) containing the settings from your backed up dconf file, e.g.: [org/gnome/desktop/interface]font-name='DejaVu Sans Oblique 10'document-font-name='DejaVu Sans Oblique 10'gtk-im-module='gtk-im-context-simple'clock-show-seconds=trueicon-theme='HighContrast'monospace-font-name='DejaVu Sans Mono Oblique 10'[org/gnome/desktop/input-sources]sources=@a(ss) []xkb-options=@as [][org/gnome/desktop/wm/preferences]num-workspaces=4titlebar-font='DejaVu Sans Bold Oblique 10'....... You could then remove those files: rm -f ~/db_profile ~/.config/dconf/test and load the old settings into the current database: dconf load / < old_settings If you want to dump only specific settings just provide the path: DCONF_PROFILE=~/db_profile dconf dump /org/gnome/desktop/wm/preferences/[/]num-workspaces=4titlebar-font='DejaVu Sans Bold Oblique 10' but note that for each path you should have a different file and when you load it you should specify the path accordingly: dconf load /org/gnome/desktop/wm/preferences/ < old_wm_settings Also note that, due to upstream changes, older dconf databases might contain paths, keys and values that are invalid in newer versions so full compatibility between db-files created by different versions of dconf isn't always guaranteed. In that case, you would have to inspect the resulting old_settings file and manually remove or edit the entries that are invalid before loading it into your current database.
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/199839", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/112719/" ] }
199,840
How can I log and plot a graph of all available hardware temperatures (CPU, SSD, etc.) and the CPU load over a given time (say a day or a week) in Linux? The CPU is an i7 Haswell, if this matters, and I have both an SSD and an HDD in this box.
To view the content of that file you could rename it - e.g. test - place it under ~/.config/dconf/ and then have dconf read/dump the settings from that file. By default , dconf reads the user-db found in $XDG_CONFIG_HOME/dconf/ : A "user-db" line specifies a user database. These databases are found in $XDG_CONFIG_HOME/dconf/ . The name of the file to open in that directory is exactly as it is written in the profile. This file is expected to be in the binary dconf database format. Note that XDG_CONFIG_HOME cannot be set/modified per terminal or session, because then the writer and reader would be working on different DBs (the writer is started by DBus and cannot see that variable). As a result, you would need a custom profile that points to that particular db file - e.g. user-db:test and then instruct dconf to dump the data (using the custom profile) via the DCONF_PROFILE environment variable: cdcp /path_to_backup_dconf/user ~/.config/dconf/testprintf %s\\n "user-db:test" > db_profileDCONF_PROFILE=~/db_profile dconf dump / > old_settings The result is a file ( old_settings ) containing the settings from your backed up dconf file, e.g.: [org/gnome/desktop/interface]font-name='DejaVu Sans Oblique 10'document-font-name='DejaVu Sans Oblique 10'gtk-im-module='gtk-im-context-simple'clock-show-seconds=trueicon-theme='HighContrast'monospace-font-name='DejaVu Sans Mono Oblique 10'[org/gnome/desktop/input-sources]sources=@a(ss) []xkb-options=@as [][org/gnome/desktop/wm/preferences]num-workspaces=4titlebar-font='DejaVu Sans Bold Oblique 10'....... You could then remove those files: rm -f ~/db_profile ~/.config/dconf/test and load the old settings into the current database: dconf load / < old_settings If you want to dump only specific settings just provide the path: DCONF_PROFILE=~/db_profile dconf dump /org/gnome/desktop/wm/preferences/[/]num-workspaces=4titlebar-font='DejaVu Sans Bold Oblique 10' but note that for each path you should have a different file and when you load it you should specify the path accordingly: dconf load /org/gnome/desktop/wm/preferences/ < old_wm_settings Also note that, due to upstream changes, older dconf databases might contain paths, keys and values that are invalid in newer versions so full compatibility between db-files created by different versions of dconf isn't always guaranteed. In that case, you would have to inspect the resulting old_settings file and manually remove or edit the entries that are invalid before loading it into your current database.
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/199840", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/5289/" ] }
199,863
I am looking for a command to create multiple (thousands of) files containing at least 1KB of random data. For example,

Name      size
file1.01  2K
file2.02  3K
file3.03  5K
etc.

How can I create many files like this?
Since you don't have any other requirements, something like this should work:

#! /bin/bash
for n in {1..1000}; do
    dd if=/dev/urandom of=file$( printf %03d "$n" ).bin bs=1 count=$(( RANDOM + 1024 ))
done

(this needs bash at least for {1..1000} ).
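A variant of the same idea that avoids the slow bs=1 dd and keeps the sizes in the 1K-5K range of the example (head -c is GNU coreutils):

for n in {1..1000}; do
    head -c "$(( RANDOM % 4096 + 1024 ))" /dev/urandom > "file$( printf %03d "$n" ).bin"
done
ls -lS file*.bin | head -3    # quick sanity check of the largest generated files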
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/199863", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/105827/" ] }
199,891
I am attempting to run an application (ParaView) in client-server mode with its graphics rendering being done on the remote (server) end. I am using SSH as my means of connecting to the server, but do not wish to use X-forwarding since it slows down the rendering process. However, every time I try to open the application on the server's display, I get an error to this effect: Invalid MIT-MAGIC-COOKIE-1 keyError: cannot open display ':0' I have conducted extensive research into this matter and have already tried the following suggested procedure to no avail: Used "xauth list" to get the MIT-MAGIC-COOKIE-1 value for my local host's display. Logged into the remote host via ssh. Used "export DISPLAY=:0" on the remote host. On the remote host, used "xauth add" to overwrite the cookie value for the remote host's display with that of the local host's. I'm convinced that this is the correct procedure, but that I'm just not transferring the right cookies to the right displays. Again, I would like to be able to use ssh to effect the opening of applications on the remote computer's display. Ideally, I would like the entire process to be done via xauth rather than xhost, and once again, I have no need to use X-forwarding. What might I be missing or doing wrong?
Try xhost +local: before running it.
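For context, a sketch of how that is typically wrapped around the remote-rendering session, run on the server inside the SSH session; this assumes the SSH user also owns the X session on :0, and includes revoking the access afterwards:

export DISPLAY=:0
xhost +local:      # allow local (unix-socket) clients regardless of the xauth cookie
# ... start the ParaView server / rendering here ...
xhost -local:      # withdraw the permission when done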
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/199891", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/112750/" ] }
199,966
I have docker installed on CentOS 7 and I am running firewallD. From inside my container, going to the host (default 172.17.42.1):

With the firewall on:
container# nc -v 172.17.42.1 4243
nc: connect to 172.17.42.1 port 4243 (tcp) failed: No route to host

With the firewall shut down:
container# nc -v 172.17.42.1 4243
Connection to 172.17.42.1 4243 port [tcp/*] succeeded!

I've read the docs on firewalld and I don't fully understand them. Is there a way to simply allow everything in a docker container (I guess on the docker0 adapter) unrestricted access to the host?
Maybe better than earlier answer;

firewall-cmd --permanent --zone=trusted --change-interface=docker0
firewall-cmd --permanent --zone=trusted --add-port=4243/tcp
firewall-cmd --reload
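After reloading, it is worth confirming that the zone picked up both changes and that the container can reach the port again; the expected outcome is sketched in the comments:

firewall-cmd --zone=trusted --list-all      # should list docker0 under interfaces and 4243/tcp under ports
# and from inside the container:
nc -v 172.17.42.1 4243                      # should now report the connection as succeeded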
{ "score": 7, "source": [ "https://unix.stackexchange.com/questions/199966", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/110922/" ] }
199,992
Today I wanted to upgrade my system from Debian Wheezy to Jessie. As a first step I thought it was a good idea to upgrade the current wheezy packages:

sudo apt-get update
sudo apt-get upgrade

... however, on the "upgrade" command I got an error (sorry, I only have the text in German):

Paketlisten werden gelesen... Fertig
E: Der Wert »stable« ist für APT::Default-Release ungültig, da solch eine Veröffentlichung in den Paketquellen nicht verfügbar ist.

A translation of the error could be: E: The value "stable" is invalid for APT::Default-Release, since such a release is not available in the package sources.
The value for APT::Default-Release can be modified in: /etc/apt/apt.conf/10defaultRelease Since the "stable" version has changed from "wheezy" to "jessie", you need to replace "stable" with "oldstable" in that file. If you want to upgrade to jessie (and if you have updated your sources.list), you can replace the string with "stable" again. Edit: When looking on a different Debian system, the file "10defaultRelease" does not even exist. It seems like this file is only needed if repositories of two different Debian versions are mixed.
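Since the exact file holding the setting varies between installations (as the edit above notes, it may not exist at all), a quick way to locate it, plus the line you would end up with:

grep -rn 'Default-Release' /etc/apt/apt.conf /etc/apt/apt.conf.d/ 2>/dev/null
# the relevant line should then read, e.g.:
#   APT::Default-Release "oldstable";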
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/199992", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/57118/" ] }
200,125
I have an HP15 r007TX laptop with Debian 8 (Jessie) installed. Whenever I close the lid and then reopen it, the laptop stops working. It gets stuck showing a blank screen. From there nothing happens and I have to hard-reboot it. I even changed the setting to do nothing when the laptop lid is closed and still have the issue.
To disable the Lid Switch: Open the file /etc/systemd/logind.conf as root. Find this: HandleLidSwitch If it's commented, uncomment and change the value to ignore. The line after editing should be: HandleLidSwitch=ignore Restart computer and your problem should be gone. Or better restart logind service: sudo service systemd-logind restart ( Source )
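If you prefer a one-liner over editing the file by hand, something like this should cover both the commented and uncommented forms (GNU sed; -i.bak keeps a backup of the original file):

sudo sed -i.bak 's/^#\?HandleLidSwitch=.*/HandleLidSwitch=ignore/' /etc/systemd/logind.conf
sudo systemctl restart systemd-logind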
{ "score": 7, "source": [ "https://unix.stackexchange.com/questions/200125", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/112900/" ] }
200,188
Could you lend your expertise in understanding how to go aboutconfiguring the separation of network traffic on two network interfaces? As I understand thus far, static routes are used for network traffic thatis not designed to use a default gateway. The default gateway is used forall traffic which is not destined for the local network and for which nopreferred route has been specified in a routing table. The scenario is as follows. Each computer in the network has two network cards. The production interface for each is eth0 (GW = 10.10.10.1). The management interface for each is eth1 (GW = 192.168.100.1). Production and Management traffic should be totally separated. I have posted, below, what things I have tried with Debian Wheezy.And, my problem is that, although I have hosts set up in such a way thatthey do communicate on both interfaces, individual hosts seem to "hear"traffic on the wrong interface. For example: Host 140 eth0 Link encap:Ethernet HWaddr 08:00:27:d1:b6:8f inet addr:10.10.10.140 Bcast:10.10.10.255 Mask:255.255.255.0 inet6 addr: fe80::a00:27ff:fed1:b68f/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:1341 errors:0 dropped:0 overruns:0 frame:0 TX packets:2530 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:641481 (626.4 KiB) TX bytes:241124 (235.4 KiB)eth1 Link encap:Ethernet HWaddr 08:00:27:ad:14:b6 inet addr:192.168.100.140 Bcast:192.168.100.255 Mask:255.255.255.0 inet6 addr: fe80::a00:27ff:fead:14b6/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:7220 errors:0 dropped:0 overruns:0 frame:0 TX packets:5257 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:602485 (588.3 KiB) TX bytes:1022906 (998.9 KiB) From host 140, I execute this command: tcpdump -i eth0 . In a separatesession on host 140, I execute ping 192.168.100.50 . 19:17:29.301565 IP 192.168.100.140 > 192.168.100.50: ICMP echo request, id 1400, seq 10, length 6419:17:30.301561 IP 192.168.100.140 > 192.168.100.50: ICMP echo request, id 1400, seq 11, length 6419:17:31.301570 IP 192.168.100.140 > 192.168.100.50: ICMP echo request, id 1400, seq 12, length 6419:17:32.301580 IP 192.168.100.140 > 192.168.100.50: ICMP echo request, id 1400, seq 13, length 64 Why do I see the above output on eth0 ? I think I should only see traffic for 10.10.10.140.I also see this on eth1 , as expected: 19:18:47.805408 IP 192.168.100.50 > 192.168.100.140: ICMP echo request, id 1605, seq 247, length 64 If I ping from Host 50 (same ifconfig results - just a different last quad),then eth0 is silent, and I see the ICMP echos on eth1 , as expected. I would like to understand how to configure each interface to handle onlythe traffic for which it is responsible in two major Linux varieties.I think I am almost there, but I am missing something I just can't seem to find. Debian Wheezy (7.x) or Debian Jessie (8.x) Enterprise Linux (6.x) (RedHat/CentOS/Scientific/Oracle). I know that a solution for Debian should be good for both Wheezy and Jessie,and that a solution for an EL should be the same for all the EL 6.x versions.I would like to avoid using an RC script to execute commands, opting insteadfor using the configuration files. 
In Debian the relevant configuration files that I know about are: /etc/network/interfaces In EL 6.x, the relevant configuration files that I know about are: /etc/sysconfig/network /etc/sysconfig/network-scripts/ifcfg-eth0 /etc/sysconfig/network-scripts/ifcfg-eth1 /etc/sysconfig/network-scripts/route-eth0 /etc/sysconfig/network-scripts/route-eth1 /etc/sysconfig/network-scripts/rule-eth0 /etc/sysconfig/network-scripts/rule-eth1 My Debian 8 "Jessie" /etc/network/interfaces file: source /etc/network/interfaces.d/*# The loopback network interfaceauto loiface lo inet loopback# Production interfaceauto eth0allow-hotplug eth0iface eth0 inet static address 10.10.10.140 netmask 255.255.255.0 gateway 10.10.10.1# Management interfaceauto eth1allow-hotplug eth1iface eth1 inet static address 192.168.100.140 netmask 255.255.255.0 I think netstat -anr might illustrate the problem: Kernel IP routing tableDestination Gateway Genmask Flags MSS Window irtt Iface0.0.0.0 10.10.10.1 0.0.0.0 UG 0 0 0 eth010.10.10.0 0.0.0.0 255.255.255.0 U 0 0 0 eth0192.168.100.0 0.0.0.0 255.255.255.0 U 0 0 0 eth0192.168.100.0 0.0.0.0 255.255.255.0 U 0 0 0 eth1
I'd love to know more about this topic to refine the configuration to be the best that it can be, but here's what I have so far. Even without enabling ARP filtering on all network interfaces ( net.ipv4.conf.all.arp_filter = 0 ), as mentioned by @spuk,traffic seems to be completely separated in this configuration. The file, /etc/iproute2/rt_tables , is the same in EL 6.x and DEB 7/8, at least. This is the file that creates a named routing table for static routes. ## reserved values#255 local254 main253 default0 unspec## local#252 mgmt Above, the number of the named, static route, 252, is essentially arbitrary; or, each static route gets its own unique number between 1 and 252. The file, /etc/network/interfaces in DEB 7/8, at least: source /etc/network/interfaces.d/*# The loopback network interfaceauto lo iface lo inet loopback# The production network interface# The 'gateway' directive is the default route.# Were eth0 configured via DHCP, the default route would also be here.auto eth0allow-hotplug eth0iface eth0 inet static address 10.10.10.140 netmask 255.255.255.0 gateway 10.10.10.1# The management network interface# The 'gateway' directive cannot be used again because there can be# one, and only one, default route. Instead, the 'post-up' directives# use the `mgmt` static route.auto eth1allow-hotplug eth1iface eth1 inet static address 192.168.100.140 netmask 255.255.255.0 post-up ip route add 192.168.100.0/24 dev eth1 src 192.168.100.140 table mgmt post-up ip route add default via 192.168.100.1 dev eth1 table mgmt post-up ip rule add from 192.168.100.140/32 table mgmt post-up ip rule add to 192.168.100.140/32 table mgmt The result of ip route show on Debian: default via 10.10.10.1 dev eth010.10.10.0/24 dev eth0 proto kernel scope link src 10.10.10.140192.168.100.0/24 dev eth1 proto kernel scope link src 192.168.100.140 The EL 6.x /etc/sysconfig/network file: NETWORKING=yesHOSTNAME=localhost.localdomainGATEWAY=10.10.10.1 Above, GATEWAY is the default route. Below, were BOOTPROTOCOL set to DHCP, the default route would be acquired from DHCP. THE EL 6.x /etc/sysconfig/network-scripts/ifcfg-eth0 file, without "HWADDR" and "UUID": DEVICE=eth0TYPE=EthernetONBOOT=yesNM_CONTROLLED=noBOOTPROTOCOL=noneIPADDR=10.10.10.140NETMASK=255.255.255.0NETWORK=10.10.10.0BROADCAST=10.10.10.255 THE EL 6.x /etc/sysconfig/network-scripts/ifcfg-eth1 file, without "HWADDR" and "UUID": DEVICE=eth0TYPE=EthernetONBOOT=yesNM_CONTROLLED=noBOOTPROTOCOL=noneIPADDR=192.168.100.140NETMASK=255.255.255.0NETWORK=192.168.100.0BROADCAST=192.168.100.255 The EL 6.x /etc/sysconfig/network-scripts/route-eth1 file: 192.168.100.0/24 dev eth1 table mgmtdefault via 192.168.100.1 dev eth1 table mgmt The EL 6.x /etc/sysconfig/network-scripts/rule-eth1 file: from 192.168.100.0/24 lookup mgmt The result of ip route show on EL 6.x: 192.168.100.0/24 dev eth1 proto kernel scope link src 192.168.100.16010.10.10.0/24 dev eth0 proto kernel scope link src 10.10.10.160default via 10.10.10.1 dev eth0 Update for RHEL8 This method described above works with RHEL 6 & RHEL 7 as well as the derivatives, but for RHEL 8 and derivatives, one must first install network-scripts to use the method described above. dnf install network-scripts The installation produces a warning that network-scripts will be removed in one of the next major releases of RHEL and that NetworkManager provides ifup / ifdown scripts as well. Update for Ubuntu 20.04 LTS Creating a named routing table is ok, but not required with netplan , which will not use the name anyway. 
Nonetheless the number of the named routing table from the rt_tables file can be used for netplan . Corresponding NICs are enps03 ( eth0 ) and enp0s8 ( eth1 ). network: version: 2 ethernets: enp0s3: addresses: - 10.10.10.140/24 dhcp4: false dhcp6: false gateway4: 10.10.10.1 nameservers: addresses: - 1.2.3.4 - 1.2.3.5 search: - your-search-domain-name.com enp0s8: dhcp4: false dhcp6: false addresses: - 192.168.100.140/24 routes: - to: 192.168.100.0/24 via: 192.168.100.1 table: 252 routing-policy: - from: 192.168.100.0/24 table: 252 This results in the following routes from ip r s . default via 10.10.10.1 dev enp0s3 proto static10.10.10.0/24 dev enp0s3 proto kernel scope link src 10.10.10.140192.168.100.0/24 dev enp0s8 proto kernel scope link src 192.168.100.140
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/200188", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/29678/" ] }
200,194
I'm creating a new basic rule /etc/udev/rules.d/10-myrule.rules containing: KERNEL!="sdb*", GOTO="auto_mount_end" ACTION=="add", RUN+="/usr/bin/mount /dev/sdb1 /media" LABEL="auto_mount_end" I saved, rebooted, and inserted an SD card (recognized as /dev/sdb1 , I see it with dmesg ), but nothing happens. When I manually run mount /dev/sdb1 /media , it works. How can I troubleshoot / debug such a udev rule? Note: I'm using ArchLinux, but it should be the same on any distro?
Update Reference: udev_237 - man udev (Ubuntu_18.04) RUN{type} ︙ Note that running programs that access the network or mount/unmount filesystems is not allowed inside of udev rules, due to the default sandbox that is enforced on systemd-udevd.service. Original Answer These debugging hints are valid for other udev rule applications as well. 10- prefix: as mentioned by jasonwryan, use a high number (90 is good), so your rule is not going to be overridden by another one. Use only the keys you really need. For example, instead of != & GOTO / LABEL , use == directly: ACTION=="add", KERNEL=="sdb*", RUN+="/usr/bin/mount /dev/sdb1 /media" Your target was sdb1 with a fixed command, so minimize the blind match by using KERNEL=="sdb1" . I find it useful to create a shadow debugging rule; I call it a shadow rule because I always leave it there in the same file and use it when I need it. ACTION=="add", KERNEL=="sdb*", RUN+="/bin/sh -c 'echo == >> /home/user/Desktop/udev-env.txt; env >> /home/user/Desktop/udev-env.txt'" #ACTION=="add", KERNEL=="sdb*", RUN+="/usr/bin/mount /dev/sdb1 /media" Notes: udev-env.txt is created whenever the rule is triggered. Each == line corresponds to one matching node. The environment recorded in that file can be a mixture from two or more nodes created at almost the same time; that is a stdout buffering problem. Some environment variables that show up in this debug output may not be usable in match conditions, because at the time udev processes the matches they are not yet populated (from previous rules). See https://www.suse.com/support/kb/doc/?id=000016106 (mentioned by @clonejo in the comments). Use udevadm monitor -u , udevadm test ... and udevadm trigger ... to verify which rules processed the events. Inside the scripts it is up to you to keep a debug log and to catch failed commands by saving their return values as well as their stdout & stderr messages.
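As a concrete illustration of the udevadm part (the device node below is just an example, use whatever name your dmesg output shows):
udevadm test "$(udevadm info -q path -n /dev/sdb1)" 2>&1 | less    # dry-run the rules against the device
udevadm monitor --udev --property                                  # watch events and their properties live as you plug the card in
The test output lists every rules file that was read and which rules matched, which usually points straight at a typo or an overriding rule.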
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/200194", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/59989/" ] }
200,202
I mean, did the developers think people would enjoy reading 19478204 rather than 19 GB? Could it be that in the older systems there wasn't as much disk space, so it was OK back then to count each digit?
It is more precise to output the byte count rather than the human readable numbers. I use both, but when copying data or verifying file sizes, non-human readable is a must. Since one person's sensible default is another's constant annoyance, there's really no 'right' or 'wrong' answer. However, it is easy to force df to output human readable numbers: $ echo "alias df='df -h'" >> ~/.bashrc If you ever want to use df in its default mode, escape it with a \ like this: $ \df
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/200202", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/99425/" ] }
200,211
I am writing a script which loops into recent files in a folder and executes a command... #!/bin/bashcd /home/Downloadsrecent_files = ($(ls -t | head -20))for file in "${recent_files[@]}"do ./cmd $filedone I get the following syntax error: line 3: syntax error near unexpected token `('line 3: `recent_files = ($(\ls -t | head -20))'
The Bash variable assignment syntax is var=value , i.e. extra spaces are not allowed. Removing them will correctly assign the output of ls -t | head -20 as an array. So your script should be: #!/bin/bashcd /home/Downloadsrecent_files=($(ls -t | head -20))for file in "${recent_files[@]}"do ./cmd $filedone
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/200211", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/-1/" ] }
200,222
How can I set the sender name and email address using the mail command in a shell script?
Try this: mail -s 'Some Subject' -r 'First Last <[email protected]>' [email protected] This sets both From: and the envelope sender.
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/200222", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/112950/" ] }
200,235
I am putting together a presentation for a non-technical audience. I have a program running in bash that outputs a continuous stream of values, a few of which are important. I would like to highlight the important results as they are displayed so the audience can get an idea of their frequency. The issue is that I can't get sed to operate on a running stream. It works fine if I put the results in a file, as in: cat output.txt | sed "s/some text/some text bolded/" But if I try the same thing on the running output, like this: command | sed "s/some text/some text bolded/" sed does nothing. Any thoughts? As Lambert was helpful enough to point out, my saying that sed does nothing was vague. What is happening is that the program outputs to stdout (I'm pretty sure it's not writing to stderr ) as it normally would, even if it's piped through sed . The issue seems to be that the command calls a second program, which then outputs to stdout. There are a few lines printed by the first program; these I can edit. Then there is a stream of values printed by the second program; these I cannot edit. Perl and awk methods do not work either.
Chances are that the command's output is buffered. When the command writes to a terminal, the buffer is flushed on every newline, so you see it appear at the expected rate. When the command writes to a pipe, the buffer is only flushed when it reaches a few kilobytes, so it lags a lot. This is the default behavior of the standard input/output library. To force the command not to buffer its output, you can use unbuffer (from expect) or stdbuf (from GNU coreutils): either unbuffer command | sed … or stdbuf -o0 command | sed …
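For the highlighting itself, a minimal sketch that bolds the interesting matches as they stream past (it assumes a terminal that understands ANSI escape sequences; & in the replacement stands for the matched text):
command | stdbuf -oL sed "s/some text/$(printf '\033[1m')&$(printf '\033[0m')/"
If sed's own output is piped into yet another command, GNU sed's -u ( --unbuffered ) option may be needed as well.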
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/200235", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/112963/" ] }
200,239
I have ServerAliveInterval and, in the case of a few machines, also ClientAliveInterval set to 540 in the SSH client/server configuration files (I suppose setting it to more than that would not be a good idea). I work with many SSH sessions, which currently freeze after a few minutes. How can I fix it? What I want is for a session not to freeze at all, so that if I open a session at 8 and don't use it for 4 hours, for example, I can still use it at 12 without having to log in again.
The changes you've made in /etc/ssh/ssh_config and /etc/ssh/sshd_config are correct but will still not have any effect. To get your configuration working, make these configuration changes on the client: /etc/ssh/ssh_config Host *ServerAliveInterval 100 ServerAliveInterval The client will send a null packet to the server every 100 seconds to keep the connection alive NULL packet Is sent by the server to the client. The same packet is sent by the client to the server. A TCP NULL packet does not contain any controlling flag like SYN, ACK, FIN etc. because the server does not require a reply from the client. The NULL packet is described here: https://www.rfc-editor.org/rfc/rfc6592 Then configuring the sshd part on the server. /etc/ssh/sshd_config ClientAliveInterval 60TCPKeepAlive yesClientAliveCountMax 10000 ClientAliveInterval The server will wait 60 seconds before sending a null packet to the client to keep the connection alive TCPKeepAlive Is there to ensure that certain firewalls don't drop idle connections. ClientAliveCountMax Server will send alive messages to the client even though it has not received any message back from the client. Finally restart the ssh server service ssh restart or service sshd restart depending on what system you are on.
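If you only want this behaviour for your own account rather than system-wide, the same client-side setting can go into ~/.ssh/config instead (create the file with mode 600 if it does not exist):
Host *
    ServerAliveInterval 100
    ServerAliveCountMax 3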
{ "score": 8, "source": [ "https://unix.stackexchange.com/questions/200239", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/20334/" ] }
200,280
I'm writing systemd unit files for OSSEC HIDS. The problem is that when systemd starts the services it immediately stops them. When I use the following ExecStart directive everything is working fine. ExecStart=/var/ossec/bin/ossec-control start But when I make ths following small improvement, I find in OSSEC logs that it receives SIG 15 after start. ExecStart=/bin/sh -c '${DIRECTORY}/bin/ossec-control start' If I make another small change, the service will receive SIG 15 after 20 seconds. ExecStart=/bin/sh -c '${DIRECTORY}/bin/ossec-control start && sleep 20' So, I guess, that systemd kills /bin/sh process after service start, and /bin/sh then kills OSSEC . How can I solve this problem?
readiness protocol mismatch As Wieland implied, the Type of the service is important. That setting denotes what readiness protocol systemd expects the service to speak. A simple service is assumed to be immediately ready. A forking service is taken to be ready after its initial process forks a child and then exits. A dbus service is taken to be ready when a server appears on the Desktop Bus. And so forth. If you don't get the readiness protocol declared in the service unit to match what the service does, then things go awry. Readiness protocol mismatches cause services not to start correctly, or (more usually) to be (mis-)diagnosed by systemd as failing. When a service is seen as failing to start systemd ensures that every orphaned additional process of the service that might have been left running as part of the failure (from its point of view) is killed in order to bring the service properly back to the inactive state. You're doing exactly this. First of all, the simple stuff: sh -c doesn't match Type=simple or Type=forking . In the simple protocol, the initial process is taken to be the service process. But in fact a sh -c wrapper runs the actual service program as a child process . So MAINPID goes wrong and ExecReload stops working, for starters. When using Type=simple , one must either use sh -c 'exec …' or not use sh -c in the first place. The latter is more often the correct course than some people think. sh -c doesn't match Type=forking either. The readiness protocol for a forking service is quite specific. The initial process has to fork a child, and then exit. systemd applies a timeout to this protocol. If the initial process doesn't fork within the allotted time, it's a failure to become ready. If the initial process doesn't exit within the allotted time, that too is a failure. the unnecessary horror that is ossec-control Which brings us to the complex stuff: that ossec-control script. It turns out that it's a System 5 rc script that forks off between 4 and 10 processes, which themselves in their turn fork and exit too. It's one of those System 5 rc scripts that attempts to manage a whole set of server processes in one single script, with for loops, race conditions, arbitrary sleep s to try to avoid them, failure modes that can choke the system in a half-started state, and all of the other horrors that got people inventing things like the AIX System Resource Controller and daemontools two decades ago. And let's not forget the hidden shell script in a binary directory that it rewrites on the fly, to implement idiosyncratic enable and disable verbs. So when you /bin/sh -c '/var/ossec/bin/ossec-control start' what happens is that: systemd forks what it expects to be the service process. That's the shell, which forks ossec-control . That in turn forks between 4 and 10 grandchildren. The grandchildren all fork and exit in turn. The great-grandchildren all fork and exit in parallel. ossec-control exits. The first shell exits. The service processes were the great-great- grandchildren, but because this way of working matches neither the forking nor the simple readiness protocol, systemd considers the service as a whole to have failed and shuts it back down. None of this horror is actually necessary under systemd at all. None of it. 
a systemd template service unit Instead, one writes a very simple template unit : [Unit]Description=The OSSEC HIDS %i serverAfter=network.target [Service]Type=simpleExecStartPre=/usr/bin/env /var/ossec/bin/%p-%i -tExecStart=/usr/bin/env /var/ossec/bin/%p-%i -f[Install]WantedBy=multi-user.target Save this this as /etc/systemd/system/[email protected] . The various actual services are instantiations of this template, named: [email protected] [email protected] [email protected] [email protected] [email protected] [email protected] [email protected] [email protected] [email protected] [email protected] [email protected] Then enable and disable function comes straight from the service management system (with RedHat bug 752774 fixed), with no need for hidden shell scripts. systemctl enable ossec@dbd ossec@agentlessd ossec@csyslogd ossec@maild ossec@execd ossec@analysisd ossec@logcollector ossec@remoted ossec@syscheckd ossec@monitord Moreover, systemd gets to know about, and to track, each actual service directly. It can filter their logs with journalctl -u . It can know when an individual service has failed. It knows what services are supposed to be enabled and running. By the way: Type=simple and the -f option are as right here as they are in many other cases. Very few services in the wild actually signal their readiness by dint of the exit , and these here are not such cases either. But that's what the forking type means. Services in the wild in the main just fork and exit because of some mistaken received wisdom notion that that's what dæmons are supposed to do. In fact, it's not. It hasn't been since the 1990s. It's time to catch up. Further reading Jonathan de Boyne Pollard (2015). Readiness protocol problems with Unix dæmons . Frequently Given Answers.
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/200280", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/112974/" ] }
200,337
Say I had a config file /etc/emails.conf email1 = [email protected] email2 = [email protected] = [email protected] and I wanted to get email2 I could do a: grep email2 /etc/emails.conf | cut -d'=' -f2 to get the email2, but how do I do it "cooler" with one sed or awk command and remove the whitespace that the cut command would leave?
How about using awk? awk -F = '/email2/ { print $2}' /etc/emails.conf -F = Fields are separated by '=' '/email2/ { print $2}' On lines that match "email2", print the second field
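Note that with -F '=' the second field still starts with the space that follows the equals sign. If you want that stripped as well, one variant is to make the separator a small regular expression (awk treats a multi-character FS as a regex):
awk -F' *= *' '/^email2/ { print $2 }' /etc/emails.conf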
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/200337", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/4330/" ] }
200,355
I'm using ack to search for a string. When I run it without a file argument, I get line numbers: $> ack functionthemes/README.txt7:Drupal's sub-theme functionality to ensure easy maintenance and upgrades.sites/default/default.services.yml48: # - The dump() function can be used in Twig templates to output information... But when I try to specify a file, I don't get line numbers. $> ack function themes/README.txtDrupal's sub-theme functionality to ensure easy maintenance and upgrades. I've done some googling for a switch, but found no results. How do I get ack to show me line numbers on results from a single file?
When you don't provide any file, ack will search all files in the current directory and its subdirectories. If a file contains the matching pattern, ack prints that filename, the line number and the line which matched the pattern. This behaviour does not apply to a single file (see the ack documentation, search for the -H option). Since ack doesn't have a -n option like grep, which prints each matching line with its line number, you have two choices to work around this issue. Force ack to print the filename with -H : ack -H pattern file or pass /dev/null as the second file: ack pattern file /dev/null
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/200355", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/394/" ] }
200,381
In the script below - which prompts the user to confirm that they want to proceed with running a potentially bad script - when the user enters Y at the prompt - it will break out of the case block, only to be sent back into the while loop again. #! /bin/bashset -eechoecho "bad install start"echo "-----------------------------------------"while true; do read -p "this script will probably fail - do you want to run anyway?" yn case $yn in [Yy]*) ##### WHAT GOES HERE?? ##### ;; [Nn]*) exit ;; *) echo "answer y or n" ;; esac echo "script has broken out of case back into while loop"doneecho -e "\e[33m Installing bad packagename \e[0m"apt-get install sdfsdfdfsdecho "rest of script - will i keep running?" When n is entered, the script exists entirely as desired. I'd like to know how to make it so that when Y is entered the script breaks out of both the case and the while block, but does not exit entirely. Is there something I can put in for the placeholder ("What goes here??") to do that?
In the case where the user entered "y", you can exit both while and case: break [n] Exit from within a for, while, until, or select loop. If n is specified, break n levels. n must be ≥ 1. If n is greater than the number of enclosing loops, all enclosing loops are exited. The return value is 0 unless n is not greater than or equal to 1. In your case, you want to do break 2 .
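Applied to the script in the question, the branch would look something like this (a sketch, the rest of the loop stays unchanged):
[Yy]*) break 2 ;;
Strictly speaking, since case is not itself a loop, a plain break would already leave the surrounding while loop here; break 2 simply makes the intent explicit, and it is harmless because, as quoted above, all enclosing loops are exited when n exceeds their number.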
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/200381", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/106525/" ] }
200,437
If I wanted to search for all lines in a file that start with a date, such as "May 1", how would I do that with sed or awk? I need to extract this data and either send to screen or a file. Thanks.
To extract lines that start with May 1 : grep "^May 1\b" file Or: sed -n '/^May 1\>/p' file Or: awk '/^May 1\>/' file The above two assume a tool, such as the GNU awk or sed, that supports \> as a word boundary regex. The purpose of the word boundary is to prevent the regex from matching, for example, May 10 . More If you are looking for any day in May: grep -E "^May [[:digit:]]{1,2}\b" file If you are looking for any day of any month: grep -E "^(Jan|Feb|Mar|Apr|May|Jun|Jul|Aug|Sep|Oct|Nov|Dec) [[:digit:]]{1,2}\b" file
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/200437", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/81926/" ] }
200,540
I'm interested in a single command that would download the contents of a torrent (and perhaps participate as a seed following the download, until I stop it). Usually, there is a torrent-client daemon which should be started separately beforehand, and a client to control (like transmission-remote ). But I'm looking for the simplicity of wget or curl : give one command, get the result after a while.
Check out transmission-cli . The usage is as simple as running transmission-cli <torrent-file> , but you can obviously tune it to your needs with several options. Just a side comment: you could actually use many other tools apart from transmission-cli, and many other suggestions will probably appear here (like deluge, suggested by Benjamin B. in the comments). I've read somewhere that any well-behaved program should be written so that it can be controlled via the command line, with the GUI only an addition to that -- an interface to make the program easier or more convenient to use.
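A minimal invocation, assuming the .torrent file is already on disk and you want the payload in a specific directory ( -w / --download-dir is transmission-cli's option for that):
transmission-cli -w ~/Downloads file.torrent
It keeps running (and seeding) after the download completes until you stop it, e.g. with Ctrl-C, which matches the behaviour asked for.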
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/200540", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/4319/" ] }
200,553
I have seen in forums and manuals that you have to add Option "Coolbits" "value" to xorg.conf or similar files. I have been able to get this working for the first GPU, the one rendering the display. I have not been able to get overclocking options in nvidia-settings for the second GPU, not rendering any display. I have tried things like Section "Device" Identifier "Videocard0" Driver "nvidia" BusID "PCI:2:00:0" Option "Coolbits" "12"EndSectionSection "Device" Identifier "Videocard1" Driver "nvidia" BusID "PCI:3:00:0" Option "Coolbits" "12"EndSection in the various files: xorg.conf, 99-nvidia.conf, nvidia-xorg.conf. Everything I have tried has led to black screens, no overclocking capability or overclocking capability on the first GPU only. Is it possible to unlock overclocking for both GPUs, if so how? I have not found this question asked anywhere. I am running 346.59 drivers on Fedora 21.
I never was able to get it to work by hand editing xorg.conf. What did work was to execute on the command line which sets it all up for you: sudo nvidia-xconfig -a --cool-bits=28 --allow-empty-initial-configuration Then edit xorg.conf. For me that was sudo vi /etc/X11/xorg.conf and prepend "#" to each line containing allow-empty-initial-configuration to comment it out. Reboot. Then to overclock run: /usr/bin/nvidia-settings To restore your settings after a reboot create an executable file that you call from startup applications containing the text below which will set the gpu clock offset and set the gpu to prefer maximum performance. My example sets the offset to 50. Don't set the offset too high in the file for your actual display gpu until you know for sure what you want or you may end up with a system where the display won't work: nvidia-settings -a [gpu:0]/GpuPowerMizerMode=1nvidia-settings -a [gpu:0]/GPUGraphicsClockOffset[3]=50nvidia-settings -a [gpu:1]/GpuPowerMizerMode=1nvidia-settings -a [gpu:1]/GPUGraphicsClockOffset[3]=50nvidia-settings -a [gpu:2]/GpuPowerMizerMode=1nvidia-settings -a [gpu:2]/GPUGraphicsClockOffset[3]=50nvidia-settings -a [gpu:3]/GpuPowerMizerMode=1nvidia-settings -a [gpu:3]/GPUGraphicsClockOffset[3]=50 If you want to overclock memory too it's nvidia-settings -a [gpu:0]/GPUMemoryTransferRateOffset[3]=800 And of related interest, you can also modify power to the cards. To see the valid values enter a value of 1000 sudo -n nvidia-smi -i 0 --persistence-mode=1sudo -n nvidia-smi -i 0 --power-limit=145 And just to display power nvidia-smi
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/200553", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/113176/" ] }
200,582
I am partitioning eMMC using the following commands in a script: parted /dev/mmcblk0 --script mklabel gpt parted /dev/mmcblk0 --script mkpart primary ext4 32MB 132MB parted /dev/mmcblk0 --script mkpart primary ext4 233MB 433MB parted /dev/mmcblk0 --script mkpart primary ext4 433MB 533MB parted /dev/mmcblk0 --script mkpart primary ext4 533MB 593MB parted /dev/mmcblk0 --script mkpart primary ext4 593MB 793MB parted /dev/mmcblk0 --script mkpart primary ext4 793MB 3800MB parted /dev/mmcblk0 --script align-check min 1 Is this the correct way to create partitions in a script? Is there any better way? After creating the first partition I am getting the following warning: Warning: The resulting partition is not properly aligned for best performance. Do I need to worry about it? I tried parted /dev/mmcblk0 --script align-check min 1 but I'm not sure that's the solution. Any pointers for that? I am going through this link; meanwhile, any other suggestions? Edit: Just a quick reference for frostschutz's reply: MiB = Mebibyte = 1024 KiB KiB = Kibibyte = 1024 Bytes MB = Megabyte = 1,000 KB KB = Kilobyte = 1,000 Bytes
It's correct in principle but you might consider reducing it to a single parted call. parted --script /device \ mklabel gpt \ mkpart primary 1MiB 100MiB \ mkpart primary 100MiB 200MiB \ ... Your alignment issue is probably because you use MB instead of MiB . You should not need an actual align-check command when creating partitions on MiB boundaries / on a known device.
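Applied to the layout from the question, that single invocation would look roughly like this (boundaries switched from MB to MiB; adjust them to taste):
parted --script /dev/mmcblk0 \
    mklabel gpt \
    mkpart primary ext4 32MiB 132MiB \
    mkpart primary ext4 233MiB 433MiB \
    mkpart primary ext4 433MiB 533MiB \
    mkpart primary ext4 533MiB 593MiB \
    mkpart primary ext4 593MiB 793MiB \
    mkpart primary ext4 793MiB 3800MiB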
{ "score": 6, "source": [ "https://unix.stackexchange.com/questions/200582", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/60966/" ] }
200,583
I don't want this to become a "my distribution is better" flamewar, so please read the whole question and only answer if you know a distribution that fulfils all requirements. The situation is the following: We have servers running Proxmox and a lot of OpenVZ VMs on them. The kernel used is old and has problems with systemd. We currently run mostly Debian Wheezy, except for some software only supporting Ubuntu. Debian Jessie just got released, as you probably know, and has systemd as standard. I tried upgrading a VM without installing systemd and it worked OK, but then the problems so many feared started. The first was php5-fpm : Depends: libsystemd0 As stated, there is a reason I can't use systemd (apart from my dislike for it), and I am not really fond of the thought of starting to compile and distribute parts of the core infrastructure we use. (And PHP is an important part because we host websites for customers.) Is there any stable distribution left one can use for servers that should run safely without non-security-related updates? Something like CentOS or Debian stable without systemd? Or is there no other way than switching the whole hosting setup to something that supports systemd?
You can run Debian Jessie without running systemd. On upgrades, just make sure sysvinit-core remains installed (see the release notes for details; they specifically address LXC concerns which are similar to yours on OpenVZ). On new installs, see https://wiki.debian.org/systemd#Installing_without_systemd for instructions. libsystemd0 provides systemd support on systems using systemd. It can be installed without requiring systemd to be actually used... As long as the systemd and systemd-sysv packages are not installed then you're not using systemd.
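If you additionally want to make sure that a later upgrade never pulls in systemd as the init system by accident, you can pin the systemd-sysv package to a negative priority. A sketch, placed in a file under /etc/apt/preferences.d/ (the file name is arbitrary; libsystemd0 itself stays installable, which is what packages such as php5-fpm need):
Package: systemd-sysv
Pin: release *
Pin-Priority: -1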
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/200583", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/74958/" ] }
200,616
I am looking for a command line or bash script that would add 5 spaces before the beginning of each line in a file. For example: abc after adding the 5 spaces becomes      abc
With GNU sed: sed -i -e 's/^/     /' <file> will insert 5 spaces at the start of each line. The -i modifies the file in place, -e gives some code for sed to execute. s tells sed to do a substitution, ^ matches the start of the line, and the part between the second and third / characters is what replaces the matched part, i.e., the (empty) start of the line in this example.
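If you are not on GNU sed (so no -i ), the same edit can be done through a temporary file, or with awk:
sed 's/^/     /' file > file.tmp && mv file.tmp file
awk '{ print "     " $0 }' file > file.tmp && mv file.tmp file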
{ "score": 7, "source": [ "https://unix.stackexchange.com/questions/200616", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/52733/" ] }
200,637
Is there some way of saving all the terminal output to a file with a command? I'm not talking about redirection command > file.txt Not the history history > file.txt , I need the full terminal text Not with hotkeys ! Something like terminal_text > file.txt
You can use script . It will basically save everything printed on the terminal in that script session. From man script : script makes a typescript of everything printed on your terminal. It is useful for students who need a hardcopy record of an interactive session as proof of an assignment, as the typescript file can be printed out later with lpr(1). You can start a script session by just typing script in the terminal, all the subsequent commands and their outputs will all be saved in a file named typescript in the current directory. You can save the result to a different file too by just starting script like: script output.txt To logout of the script session (stop saving the contents), just type exit . Here is an example: $ script output.txtScript started, file is output.txt$ lsoutput.txt testfile.txt foo.txt$ exitexitScript done, file is output.txt Now if I read the file: $ cat output.txtScript started on Mon 20 Apr 2015 08:00:14 AM BDT$ lsoutput.txt testfile.txt foo.txt$ exitexitScript done on Mon 20 Apr 2015 08:00:21 AM BDT script also has many options e.g. running quietly -q ( --quiet ) without showing/saving program messages, it can also run a specific command -c ( --command ) rather than a session, it also has many other options. Check man script to get more ideas.
{ "score": 8, "source": [ "https://unix.stackexchange.com/questions/200637", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/-1/" ] }
200,639
How can I check if a command is a built-in command for ksh ? In tcsh you can use where ; in zsh and bash you can use type -a ; and in some modern versions of ksh you can use whence -av . What I want to do is write an isbuiltin function that works in any version of ksh (including ksh88 and any other "old" versions of ksh ) that behaves like this: Accept multiple arguments and check if each is built-in Return 0 (success) if all of the given commands are built-in At the first non-built-in command, stop checking, return 1 (failure), and print a message to stderr. I already have working functions like this for zsh and bash using the aforementioned commands. Here is what I have for ksh : isbuiltin() { if [[ "$#" -eq 0 ]]; then echo "Usage: isbuiltin cmd" >&2 return 1 fi for cmd in "$@" do if [[ $cmd = "builtin" ]]; then #Handle the case of `builtin builtin` echo "$cmd is not a built-in" >&2 return 1 fi if ! whence -a "$cmd" 2> /dev/null | grep 'builtin' > /dev/null ; then echo "$cmd is not a built-in" >&2 return 1 fi done} This function works for ksh93. However, it appears that ksh88's version of whence doesn't support the -a option, which is the option to make it display all occurrences. Without the ability to display all occurrences, I can only use whence -v , which does tell me whether a command is built-in but only if there isn't also an alias or function of the same name. Question: Is there something else I can use in place of whence -av in ksh88 ? Solution Using the accepted answer (opening a subshell), here is my updated solution. Place the following in .kshrc: isbuiltin() { if [[ "$#" -eq 0 ]]; then printf "Usage: isbuiltin cmd\n" >&2 return 1 fi for cmd in "$@" do if ( #Open a subshell so that aliases and functions can be safely removed, # allowing `whence -v` to see the built-in command if there is one. unalias "$cmd"; if [[ "$cmd" != '.' ]] && typeset -f | egrep "^(function *$cmd|$cmd\(\))" > /dev/null 2>&1 then #Remove the function iff it exists. #Since `unset` is a special built-in, the subshell dies if it fails unset -f "$cmd"; fi PATH='/no'; #NOTE: we can't use `whence -a` because it's not supported in older versions of ksh whence -v "$cmd" 2>&1 ) 2> /dev/null | grep -v 'not found' | grep 'builtin' > /dev/null 2>&1 then #No-op. Needed to support some old versions of ksh : else printf "$cmd is not a built-in\n" >&2 return 1 fi done return 0} I have tested this with ksh88 in Solaris, AIX, and HP-UX. It works in all the cases I tested. I have also tested this with the modern versions of ksh in FreeBSD, Ubuntu, Fedora, and Debian.
If your concern is about aliases, just do: [[ $(unalias -- "$cmd"; type -- "$cmd") = *builtin ]] ( $(...) creates a subshell environment, so unalias is only in effect there). If you're also concerned about functions, also run command unset -f -- "$cmd" before type .
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/200639", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/83381/" ] }
201,666
There are createuser & dropuser commands: createuser - define a new PostgreSQL user accountdropuser - remove a PostgreSQL user account Is there a corresponding way to list the user accounts? These two commands do not require the user to invoke psql nor understand details of using it.
Use the psql shell and: \deu[+] [PATTERN] such as: postgres=# \deu+ List of user mappings Server | User name | FDW Options --------+-----------+-------------(0 rows) And for all users: postgres=# \du List of roles Role name | Attributes | Member of ------------+------------------------------------------------+----------- chpert.net | | {} postgres | Superuser, Create role, Create DB, Replication | {} Also such as MySQL, you can do : $ psql -c "\du" List of roles Role name | Attributes | Member of -----------+------------------------------------------------+----------- chpert | | {} postgres | Superuser, Create role, Create DB, Replication | {} test | | {}
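If you want a plain list that is easy to consume from a shell script (no psql meta-commands, no table decoration), you can also query the catalog directly; a sketch, assuming peer authentication for the postgres system user:
sudo -u postgres psql -tAc 'SELECT rolname FROM pg_roles;'
Here -t suppresses headers and footers, -A turns off column alignment, and -c runs the single query.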
{ "score": 7, "source": [ "https://unix.stackexchange.com/questions/201666", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/3999/" ] }
201,694
How would I go about discovering all of the files created by a particular user and display them to the screen? I've started a script that prompts the current user to enter the username of whom they wish to view all the files of. I've thought about using an if statement considering I'd like to include error checking. echo -e "Option 11: Display all the Files a Particular User Has Created\n\n"echo -e "Enter Username below\n"read username
You cannot do that on the usual Linux filesystems, as they don't keep track of the creator of a file, only of its owner. The creator and owner are usually, but not necessarily, the same. If you want to find the files owned by a user, you can, as Bratchley indicated, use find / -type f -user user_name to find those files and display the names. To display the files you would need some program that can show the content for any file type you might find that way. If you have such a show_file utility that takes a single file_name as argument, you can do: find / -type f -user user_name -exec show_file {} \;
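Putting that together with the prompt from the question, a minimal sketch of the script (it only prints the file names; run it with enough privileges to read the directories you care about):
#!/bin/bash
read -r -p "Enter username: " username
if ! id "$username" >/dev/null 2>&1; then        # basic error check: does the user exist?
    echo "Error: no such user: $username" >&2
    exit 1
fi
find / -type f -user "$username" 2>/dev/null     # lists files owned by (not necessarily created by) the user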
{ "score": 5, "source": [ "https://unix.stackexchange.com/questions/201694", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/114260/" ] }
201,704
We can set multiple IP addresses on a single interface, for example using NetworkManager. How can I make any connection to the outside of this PC use a different IP? For example, if I have 8 IP addresses ( 10.7.4.x , 10.7.4.x+1 , 10.7.4.x+2 , ...), I want to connect to each destination address using a different IP, either a random IP or a sequential mod (when destination IP mod 8 = 0, then use x ; when destination mod 8 = 1, then use x+1 ; and so on).
In Linux the selection of source addresses for outgoing connections can be controlled by the routing table: ip route add 10.11.12.0/24 via 10.7.4.1 src 10.7.4.200 This is enough if you just need to use different source addresses for some fixed IP ranges. However, by combining the power of Linux netfilter ( iptables ) and policy routing ( ip rule ) you can get dynamic selection of the source address. The basic procedure is as follows: Set the appropriate marks on the packets in the PREROUTING chain of the mangle table. Use different IP routing tables for packets with different marks ( ip rule add fwmark X table Y ). In each routing table use the required src address for packets. The netfilter setup for marking packets according to the destination IP for the "mod 4" setup may look like this: iptables -A PREROUTING -t mangle -j CONNMARK --restore-markiptables -A PREROUTING -t mangle -m mark --mark 0x0 -d 0.0.0.0/0.0.0.3 \ -j MARK --set-mark 1iptables -A PREROUTING -t mangle -m mark --mark 0x0 -d 0.0.0.1/0.0.0.3 \ -j MARK --set-mark 2iptables -A PREROUTING -t mangle -m mark --mark 0x0 -d 0.0.0.2/0.0.0.3 \ -j MARK --set-mark 3iptables -A PREROUTING -t mangle -m mark --mark 0x0 -d 0.0.0.3/0.0.0.3 \ -j MARK --set-mark 4iptables -A POSTROUTING -t mangle -j CONNMARK --save-mark (For this particular case you can omit two CONNMARK commands, because the other marking commands will give the same result for all packets in the same connection; however, for more complex cases, like the round-robin usage of source addresses, these commands are required to ensure that all packets in the connection will use the same route.) The IP routing setup may then look like this: ip route add default via 10.7.4.1 src 10.7.4.200 table 1ip route add default via 10.7.4.1 src 10.7.4.201 table 2ip route add default via 10.7.4.1 src 10.7.4.202 table 3ip route add default via 10.7.4.1 src 10.7.4.203 table 4ip rule add fwmark 1 pref 1 table 1ip rule add fwmark 2 pref 2 table 2ip rule add fwmark 3 pref 3 table 3ip rule add fwmark 4 pref 4 table 4
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/201704", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/27996/" ] }
201,720
I'm am using servers (debian 7) and I'm currently running cron-apt to e-mail me when there are new upgrades available. Is the following command safe to run when new upgrades are shown? sudo apt-get dist-upgrade Are there any checks I should do before upgrading? I'm a little concerned that simply upgrading everything every time I get an email might cause failures.
sudo apt-get dist-upgrade is very safe to run as it won't do anything to the system, instead stopping to ask for your confirmation ;) You would have to add a -y switch, which is intended for unattended upgrades and makes apt assume that you always answer 'yes' to questions: sudo apt-get -y dist-upgrade . The man page states that If an undesirable situation, such as changing a held package, trying to install a unauthenticated package or removing an essential package occurs then apt-get will abort but running dist-upgrade unattended is always risky, so you may want to avoid that. You can always check what apt would do by adding a -s switch, like so: sudo apt-get -s dist-upgrade . This switches apt into simulation mode, in which no changes are made and you can safely review all the changes apt would make to the system. There is also a more conservative mode of running apt, namely apt-get upgrade . The man page for apt-get is very clear on what it does: Packages currently installed with new versions available are retrieved and upgraded; under no circumstances are currently installed packages removed, or packages not already installed retrieved and installed. New versions of currently installed packages that cannot be upgraded without changing the install status of another package will be left at their current version. In my original answer I somehow assumed you were going to run dist-upgrade via cron, which, after reading more carefully, does not seem to be the case. However, I'm leaving the relevant paragraph as a general comment: It is not advisable to run sudo apt-get -y dist-upgrade via cron, especially if your apt sources happen to point to a testing branch (which generally should not happen on servers, especially in production), as you may end up with an unusable system. You're relatively safe if you're using Debian's stable branch, but I'd still recommend attending upgrades. Anyway, if you're doing a dist-upgrade that is going to perform serious changes you should always have a backup. Just in case.
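A simple routine when one of those cron-apt mails arrives could therefore be to simulate first, read what would change, and only then apply (the grep just filters the lines apt prints for packages it would install or upgrade):
sudo apt-get -s dist-upgrade | grep '^Inst'   # review the planned changes
sudo apt-get dist-upgrade                     # apply them after reviewing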
{ "score": 4, "source": [ "https://unix.stackexchange.com/questions/201720", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/114297/" ] }