source_id (int64, 1 – 4.64M) | question (string, 0 – 28.4k chars) | response (string, 0 – 28.8k chars) | metadata (dict)
---|---|---|---|
226,546 | After reinstalling the server I can not mount it: sshfs [email protected]:/var /remote_mount
fuse: bad mount point `/remote_mount': Transport endpoint is not connected When I SSH, I get an error: # ssh [email protected]
The authenticity of host 'example.com (xxx.xxx.xxx.xxx)' can't be established.
ECDSA key fingerprint is 57:b6:bd:76:17:80:73:85:4a:14:8a:6f:dc:fa:fe:7c.
Are you sure you want to continue connecting (yes/no)? | This error popped up for me after I had been using sshfs on and off for years. A search found this page, but all the "set up sshd" answers were not much help, as sshfs had been working well until it suddenly didn't, and ssh worked just fine to other locations. However, after a bit of frustrating poking and testing I found the solution. The problem started with an sshfs mount that failed because of a bad hostname in it. As ls -l $mountpoint failed with this error, I tried clearing the trouble with fusermount -u $mountpoint , and the mount started to work again! Even a simple ls $mountpoint produced the error after the failed sshfs. | {
"source": [
"https://unix.stackexchange.com/questions/226546",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/83275/"
]
} |
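A minimal recovery sketch for the sshfs answer above (226,546); the host and mount point are taken from the question and may differ on your system: fusermount -u /remote_mount            # clear the stale FUSE mount left by the failed sshfs
sshfs root@example.com:/var /remote_mount   # mount again now that the stale endpoint is gone
ls /remote_mount                            # should list the remote /var without the error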
226,563 | From the post Why can rm remove read-only files? I understand that rm just needs write permission on the directory to remove the file. But I find it hard to digest the behaviour where we can easily delete a file whose owner and group are different. I tried the following ( mtk : my username, abc : a newly created user): $ ls -l file
-rw-rw-r-- 1 mtk mtk 0 Aug 31 15:40 file
$ sudo chown abc file
$ sudo chgrp abc file
$ ls -l file
-rw-rw-r-- 1 abc abc 0 Aug 31 15:40 file
$ rm file
$ ls -l file
<deleted> I was thinking this shouldn't have been allowed. Shouldn't a user be able to delete only files under his ownership? Can someone shed light on why this is permitted, and what is the way to avoid it? I can only think of restricting the write permission of the parent directory to disallow surprise deletions of files. | The reason why this is permitted is related to what removing a file actually does. Conceptually, rm 's job is to remove a name entry from a directory. The fact that the file may then become unreachable if that was the file's only name, and that the inode and space occupied by the file can therefore be recovered at that point, is almost incidental. The name of the system call that the rm command invokes, which is unlink , is even suggestive of this fact. And removing a name entry from a directory is fundamentally an operation on that directory , so that directory is the thing that you need to have permission to write. The following scenario may make it feel more comfortable. Suppose there are directories: /home/me # owned and writable only by me
/home/you # owned and writable only by you And there is a file which is owned by me and which has two hard links: /home/me/myfile
/home/you/myfile Never mind how that hard link /home/you/myfile got there in the first place. Maybe root put it there. The idea of this example is that you should be allowed to remove the hard link /home/you/myfile . After all, it's cluttering up your directory. You should be able to control what does and doesn't exist inside /home/you . And when you do remove /home/you/myfile , notice that you haven't actually deleted the file. You've only removed one link to it. Note that if the sticky bit is set on the directory containing a file (shows up as t in ls ), then you do need to be the owner of the file in order to be allowed to delete it (unless you own the directory). The sticky bit is usually set on /tmp . | {
"source": [
"https://unix.stackexchange.com/questions/226563",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/17265/"
]
} |
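A short demonstration of the point made in the answer above (226,563): deletion is governed by the directory, not by file ownership, and the sticky bit changes that. The paths below are illustrative assumptions, run as an ordinary user: mkdir /tmp/demo && cd /tmp/demo
sudo touch rootfile          # the file is owned by root:root
rm -f rootfile               # succeeds: we have write permission on /tmp/demo
sudo touch /tmp/rootfile     # /tmp itself has the sticky bit (drwxrwxrwt)
rm -f /tmp/rootfile          # fails with "Operation not permitted" for a non-owner
sudo rm -f /tmp/rootfile     # clean up as root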
226,716 | I'm looking for the latest source code of the man command; the version in my Linux is pretty old (v1.6f), but I failed to find it after googling for a while. I mean the latest source code of man itself, not man-pages but the binary in /usr/bin/man , which can be compiled and installed. | You can usually query your distribution to see where sources come from. For example, I'm on Fedora, and I can see that the man command comes from the man-db package: $ rpm -qf /usr/bin/man
man-db-2.6.7.1-16.fc21.x86_64 I can then query the man-db package for the upstream url: $ rpm -qi man-db | grep -i url
URL : http://www.nongnu.org/man-db/ And there you are, http://www.nongnu.org/man-db/ . You can perform a similar sequence of steps with the packaging systems used on other distributions. | {
"source": [
"https://unix.stackexchange.com/questions/226716",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/22322/"
]
} |
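To illustrate the closing remark of the answer above (226,716), the same lookup on a Debian/Ubuntu-style system might look roughly like this (package names can differ, so treat it as a sketch): dpkg -S /usr/bin/man                        # e.g. "man-db: /usr/bin/man"
apt-cache show man-db | grep -i homepage    # prints the upstream URL of the package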
226,831 | My font rendering in Firefox looks terrible on pages such as facebook.com and twitter.com: I'm running Debian 8 and have been fiddling with hardware acceleration, and it doesn't seem to help. | I've had this issue for ages, maybe it's time to do something about it! It comes down to ClearType , Microsoft and patents, from what I read. Most *nix distros disable any patent-protected font rendering by default. Read about Debian and fonts here ; you want the Subpixel-hinting and Font-smoothing section. There's a config file on that page, but I will add it here for future reference. Create a file called .fonts.conf in your home directory, and add the following: <?xml version='1.0'?>
<!DOCTYPE fontconfig SYSTEM 'fonts.dtd'>
<fontconfig>
<match target="font">
<edit mode="assign" name="rgba">
<const>rgb</const>
</edit>
</match>
<match target="font">
<edit mode="assign" name="hinting">
<bool>true</bool>
</edit>
</match>
<match target="font">
<edit mode="assign" name="hintstyle">
<const>hintslight</const>
</edit>
</match>
<match target="font">
<edit mode="assign" name="antialias">
<bool>true</bool>
</edit>
</match>
<match target="font">
<edit mode="assign" name="lcdfilter">
<const>lcddefault</const>
</edit>
</match>
</fontconfig> | {
"source": [
"https://unix.stackexchange.com/questions/226831",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/131210/"
]
} |
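A quick, hedged way to check that the ~/.fonts.conf settings from the answer above (226,831) are being picked up; fc-match is part of the fontconfig utilities, and the grep is just a filter: fc-match --verbose sans | grep -E 'rgba|hinting|hintstyle|antialias|lcdfilter'
# restart Firefox afterwards so it re-reads the fontconfig configuration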
226,832 | I've issued a command gzip -r project.zip project/* in projects home directory and I've messed things up; every file in project directory and all other subdirectories have .gz extension. How do I undo this operation, i.e., how do I remove .gz extension from script, so I do not need to rename every file by hand? | | {
"source": [
"https://unix.stackexchange.com/questions/226832",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/40993/"
]
} |
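As a hedged illustration of what question 226,832 above asks for (offered as a sketch, not the original answer), gzip's own decompression options can reverse the accidental recursive compression and strip the .gz extensions again: gunzip -r project/        # same as gzip -d -r: decompresses every *.gz under project/
gunzip project.zip.gz     # and the archive that was compressed alongside it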
226,872 | I find myself needing to rearrange a system's partitions to move data previously under the root filesystem into dedicated mount points. The volumes are all in LVM, so this is relatively easy: create new volumes, move data into them, shrink the root filesystem, then mount the new volumes at the appropriate points. The issue is step 3, shrinking the root filesystem. The filesystems involved are ext4, so online resizing is supported; however, while mounted, the filesystems can only be grown. To shrink the partition requires unmounting it, which of course is not possible for the root partition in normal operation. Answers around the Web seem to revolve around booting a LiveCD or other rescue media, doing the shrink operation, then booting back into the installed system. However, the system in question is remote, and I have access only via SSH. I can reboot, but booting a rescue disc and doing operations from the console is not possible. How can I unmount the root filesystem while maintaining remote shell access? | In solving this issue, the information provided at http://www.ivarch.com/blogs/oss/2007/01/resize-a-live-root-fs-a-howto.shtml was pivotal. However, that guide is for a very old version of RHEL, and various information was obsolete. The instructions below are crafted to work with CentOS 7, but they should be easily enough transferable to any distro that runs systemd. All commands are run as root. Ensure the system is in a stable state Make sure no one else is using it and nothing else important is going on. It's probably a good idea to stop service-providing units like httpd or ftpd, just to ensure external connections don't disrupt things in the middle. systemctl stop httpd
systemctl stop nfs-server
# and so on.... Unmount all unused filesystems umount -a This will print a number of 'Target is busy' warnings, for the root volume itself and for various temporary/system FSs. These can be ignored for the moment. What's important is that no on-disk filesystems remain mounted, except the root filesystem itself. Verify this: # mount alone provides the info, but column makes it possible to read
mount | column -t If you see any on-disk filesystems still mounted, then something is still running that shouldn't be. Check what it is using fuser : # if necessary:
yum install psmisc
# then:
fuser -vm <mountpoint>
systemctl stop <whatever>
umount -a
# repeat as required... Make the temporary root
Note: if /tmp is a directory on /, we will not be able to unmount / later in this procedure if we use /tmp/tmproot. Thus it may be necessary to use an alternative mountpoint such as /tmproot instead. mkdir /tmp/tmproot
mount -t tmpfs none /tmp/tmproot
mkdir /tmp/tmproot/{proc,sys,dev,run,usr,var,tmp,oldroot}
cp -ax /{bin,etc,mnt,sbin,lib,lib64} /tmp/tmproot/
cp -ax /usr/{bin,sbin,lib,lib64} /tmp/tmproot/usr/
cp -ax /var/{account,empty,lib,local,lock,nis,opt,preserve,run,spool,tmp,yp} /tmp/tmproot/var/ This creates a very minimal root system, which breaks (among other things) manpage viewing (no /usr/share ), user-level customizations (no /root or /home ) and so forth. This is intentional, as it constitutes encouragement not to stay in such a jury-rigged root system any longer than necessary. At this point you should also ensure that all the necessary software is installed, as it will also assuredly break the package manager. Glance through all the steps, and make sure you have the necessary executables. Pivot into the root mount --make-rprivate / # necessary for pivot_root to work
pivot_root /tmp/tmproot /tmp/tmproot/oldroot
for i in dev proc sys run; do mount --move /oldroot/$i /$i; done systemd causes mounts to allow subtree sharing by default (as with mount --make-shared ), and this causes pivot_root to fail. Hence, we turn this off globally with mount --make-rprivate / . System and temporary filesystems are moved wholesale into the new root. This is necessary to make it work at all; the sockets for communication with systemd, among other things, live in /run , and so there's no way to make running processes close it. Ensure remote access survived the changeover systemctl restart sshd
systemctl status sshd After restarting sshd, ensure that you can get in, by opening another terminal and connecting to the machine again via ssh. If you can't, fix the problem before moving on. Once you've verified you can connect in again, exit the shell you're currently using and reconnect. This allows the remaining forked sshd to exit and ensures the new one isn't holding /oldroot . Close everything still using the old root fuser -vm /oldroot This will print a list of processes still holding onto the old root directory. On my system, it looked like this: USER PID ACCESS COMMAND
/oldroot: root kernel mount /oldroot
root 1 ...e. systemd
root 549 ...e. systemd-journal
root 563 ...e. lvmetad
root 581 f..e. systemd-udevd
root 700 F..e. auditd
root 723 ...e. NetworkManager
root 727 ...e. irqbalance
root 730 F..e. tuned
root 736 ...e. smartd
root 737 F..e. rsyslogd
root 741 ...e. abrtd
chrony 742 ...e. chronyd
root 743 ...e. abrt-watch-log
libstoragemgmt 745 ...e. lsmd
root 746 ...e. systemd-logind
dbus 747 ...e. dbus-daemon
root 753 ..ce. atd
root 754 ...e. crond
root 770 ...e. agetty
polkitd 782 ...e. polkitd
root 1682 F.ce. master
postfix 1714 ..ce. qmgr
postfix 12658 ..ce. pickup You need to deal with each one of these processes before you can unmount /oldroot . The brute-force approach is simply kill $PID for each, but this can break things. To do it more softly: systemctl | grep running This creates a list of running services. You should be able to correlate this with the list of processes holding /oldroot , then issue systemctl restart for each of them. Some services will refuse to come up in the temporary root and enter a failed state; these don't really matter for the moment. If the root drive you want to resize is an LVM drive, you may also need to restart some other running services, even if they do not show up in the list created by fuser -vm /oldroot . You might be unable to resize an LVM drive under Step 7 because of this error: fsadm: Cannot proceed with mounted filesystem "/oldroot" You can try systemctl restart systemd-udevd and if that fails, you can find the leftover mounts with grep system /proc/*/mounts | column -t Look for processes that say mounts:none and try restarting these: PATH BIN FSTYPE
/proc/16395/mounts:tmpfs /run/systemd/timesync tmpfs
/proc/16395/mounts:none /var/lib/systemd/timesync tmpfs
/proc/18485/mounts:tmpfs /run/systemd/inhibit tmpfs
/proc/18485/mounts:tmpfs /run/systemd/seats tmpfs
/proc/18485/mounts:tmpfs /run/systemd/sessions tmpfs
/proc/18485/mounts:tmpfs /run/systemd/shutdown tmpfs
/proc/18485/mounts:tmpfs /run/systemd/users tmpfs
/proc/18485/mounts:none /var/lib/systemd/linger tmpfs Some processes can't be dealt with via simple systemctl restart . For me these included auditd (which doesn't like to be killed via systemctl , and so just wanted a kill -15 ). These can be dealt with individually. The last process you'll find, usually, is systemd itself. For this, run systemctl daemon-reexec . Once you're done, the table should look like this: USER PID ACCESS COMMAND
/oldroot: root kernel mount /oldroot Unmount the old root umount /oldroot At this point, you can carry out whatever manipulations you require. The original question needed a simple resize2fs invocation, but you can do whatever you want here; one other use case is transferring the root filesystem from a simple partition to LVM/RAID/whatever. Pivot the root back mount <blockdev> /oldroot
mount --make-rprivate / # again
pivot_root /oldroot /oldroot/tmp/tmproot
for i in dev proc sys run; do mount --move /tmp/tmproot/$i /$i; done This is a straightforward reversal of step 4. Dispose of the temporary root Repeat steps 5 and 6, except using /tmp/tmproot in place of /oldroot . Then: umount /tmp/tmproot
rmdir /tmp/tmproot Since it's a tmpfs, at this point the temporary root dissolves into the ether, never to be seen again. Put things back in their places Mount filesystems again: mount -a At this point, you should also update /etc/fstab and grub.cfg in accordance with any adjustments you made during step 7. Restart any failed services: systemctl | grep failed
systemctl restart <whatever> Allow shared subtrees again: mount --make-rshared / Start the stopped service units - you can use this single command: systemctl isolate default.target And you're done. Many thanks to Andrew Wood, who worked out this evolution on RHEL4, and steve, who provided me the link to the former. | {
"source": [
"https://unix.stackexchange.com/questions/226872",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/103923/"
]
} |
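For step 7 of the answer above (226,872), the "simple resize2fs invocation" the original question needed could look roughly like this; the LV path and sizes are assumptions you must adapt to your own volume group: e2fsck -f /dev/vg0/root          # mandatory check before shrinking
resize2fs /dev/vg0/root 19G      # shrink the ext4 filesystem slightly below the target
lvreduce -L 20G /dev/vg0/root    # shrink the logical volume to the target size
resize2fs /dev/vg0/root          # grow the filesystem back to exactly fill the LV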
226,936 | How to set up the email client Mutt to send, receive and read email under CentOS and Ubuntu using a Gmail account as a relay | Gmail Setup For authentication, you'll have to do either of two things: Generate an application-specific password for your Google Account (your only option if you're using 2FA), Turn on less-secure app access (not an option with 2FA) In Gmail, click the gear icon, go to Settings , go to the tab Forwarding POP/IMAP , and click the Configuration instructions link in the IMAP Access row. Then click I want to enable IMAP . At the bottom of the page, under the paragraph about configuring your mail client, select Other . Note the mail server information and use it for the further settings shown in the next section. Install mutt CentOS yum install mutt Ubuntu sudo apt-get install mutt Configure Mutt Create the cache and certificate paths: mkdir -p ~/.mutt/cache/headers
mkdir ~/.mutt/cache/bodies
touch ~/.mutt/certificates Create mutt configuration file muttrc touch ~/.mutt/muttrc Open muttrc vim ~/.mutt/muttrc Add following configurations set ssl_starttls=yes
set ssl_force_tls=yes
set imap_user = "[email protected]"
set imap_pass = "PASSWORD"
set from="[email protected]"
set realname="Your Name"
set folder = "imaps://imap.gmail.com/"
set spoolfile = "imaps://imap.gmail.com/INBOX"
set postponed="imaps://imap.gmail.com/[Gmail]/Drafts"
set header_cache = "~/.mutt/cache/headers"
set message_cachedir = "~/.mutt/cache/bodies"
set certificate_file = "~/.mutt/certificates"
set smtp_url = "smtps://[email protected]:[email protected]:465/"
set move = no
set imap_keepalive = 900 Make the appropriate changes, like changing change_this_user_name to your Gmail user name and PASSWORD to your Gmail password, and save the file. Now you are ready to send, receive and read email using the email client Mutt by simply typing mutt . The first time, it will prompt you to accept SSL certificates; press a to always accept these certificates. Then you will be presented with your Gmail inbox. | {
"source": [
"https://unix.stackexchange.com/questions/226936",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/130286/"
]
} |
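Once the muttrc from the answer above (226,936) is in place, a quick non-interactive test send might look like this (the recipient address is a placeholder): echo "This is a test body" | mutt -s "mutt + Gmail test" [email protected]
mutt    # then open the interactive index to check the Gmail INBOX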
227,017 | In the company where I am working now there is a legacy service whose init script uses old SysV init, but it runs under systemd (CentOS 7). Because there's a lot of computation, this service takes around 70 seconds to finish. I didn't configure any timeout for systemd, and didn't change the default configs at /etc/systemd/system.conf , but still, when I execute service SERVICE stop , my service times out after 60 seconds. Checking with journalctl -b -u SERVICE.service I find this log: Sep 02 11:27:46 service.hostname systemd[1]: Stopping LSB: Start/Stop
Sep 02 11:28:46 service.hostname SERVICE[24151]: Stopping service: Error code: 255
Sep 02 11:28:46 service.hostname SERVICE[24151]: [FAILED] I already tried changing the DefaultTimeoutStopSec property at /etc/systemd/system.conf to 90s , but the timeout still happens. Does anyone have any idea why it is timing out at 60s? Is there somewhere else that this timeout value is configured? Is there a way I can check it? This service runs with Java 7, and to daemonize it, it uses JSVC . I configured the -wait parameter with the value 120 . | My systemd service kept timing out because of how long it would take to boot up also, so this fixed it for me: Edit your systemd file: For modern versions of systemd : Run systemctl edit --full node.service ( replace "node" with your service name ). This will create a system file under /etc/systemd/system/ that will override the system file at /usr/lib/systemd/system/node.service . This is the proper way to configure your system files. More information about how to use systemctl edit is here . Directly editing the system file : The system file for me is at /usr/lib/systemd/system/node.service . Replace "node" with your application name. However, it is not safe to directly edit files in /usr/lib/systemd/ (see comments). Use TimeoutStartSec , TimeoutStopSec or TimeoutSec (more info here ) to specify how long the timeout should be for starting & stopping the process. Afterwards, this is how my systemd file looked: [Unit]
Description=MyProject
Documentation=man:node(1)
After=rc-local.service
[Service]
WorkingDirectory=/home/myproject/GUIServer/Server/
Environment="NODE_PATH=/usr/lib/node_modules"
ExecStart=-/usr/bin/node Index.js
Type=simple
Restart=always
KillMode=process
TimeoutSec=900
[Install]
WantedBy=multi-user.target You can also view the current Timeout status by running any of these (but you'll need to edit your service to make changes! See step 1). Confusingly, the associated properties have a "U" in their name for microseconds. See this Github issue for more information: systemctl show node.service -p TimeoutStartUSec systemctl show node.service -p TimeoutStopUSec systemctl show node.service -p TimeoutUSec Next you'll need to reload the systemd with systemctl reload node.service Now try to start your service with systemctl start node.service If that didn't work , try to reboot systemctl with systemctl reboot If that didn't work , try using the --no-block option for systemctl like so: systemctl --no-block start node.service . This option is described here : "Do not synchronously wait for the requested operation to finish. If this is not specified, the job will be verified, enqueued and systemctl will wait until the unit's start-up is completed. By passing this argument, it is only verified and enqueued." There is also the option to use systemctl mask instead of systemctl start . For more info see here . Updates from Comments: TimeoutSec=infinity : Instead of using "infinity" here, put a large amount of time instead, like TimeoutSec=900 (15 min). If the application takes "forever" to exit, then it's possible that it will block a reboot indefinitely. Credit @Alexis Wilke and @JCCyC Instead of editing /usr/lib/systemd/system , try systemctl edit instead or edit /etc/systemd/system to override them instead. You should never edit service files in /usr/lib/ . Credit @ryeager and @0xC0000022L ** Update from systemd source docs **
When "infinity" is specified as the value for any of these timeout parameters ( JobTimeoutSec=, JobRunningTimeoutSec=, TimeoutStartSec=, TimeoutAbortSec= ), the timeout logic is disabled . The default is "infinity" (job timeouts disabled), except for device units, where JobRunningTimeoutSec= defaults to DefaultTimeoutStartSec=. Reference: enter link description here Similarly, this logic applies at the service level and is laid out clearly in the URL below.
Reference: enter link description here | {
"source": [
"https://unix.stackexchange.com/questions/227017",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/20763/"
]
} |
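A sketch of the drop-in approach from the answer above (227,017), written as shell commands; the service name is hypothetical and the values mirror the 900-second example: sudo mkdir -p /etc/systemd/system/myservice.service.d
sudo tee /etc/systemd/system/myservice.service.d/timeout.conf > /dev/null <<'EOF'
[Service]
TimeoutStartSec=900
TimeoutStopSec=900
EOF
sudo systemctl daemon-reload
sudo systemctl restart myservice.service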
227,070 | In all shells I am aware of, rm [A-Z]* removes all files that start with an uppercase letter, but with bash this removes all files that start with a letter. As this problem exists on Linux and Solaris with bash-3 and bash-4, it cannot be a bug caused by a buggy pattern matcher in libc or a misconfigured locale definition. Is this strange and risky behavior intended, or is it just a bug that has existed unfixed for many years? | Note that when using range expressions like [a-z], letters of the other case may be included, depending on the setting of LC_COLLATE. LC_COLLATE is a variable which determines the collation order used when sorting the results of pathname expansion, and determines the behavior of range expressions, equivalence classes, and collating sequences within pathname expansion and pattern matching. Consider the following: $ touch a A b B c C x X y Y z Z
$ ls
a A b B c C x X y Y z Z
$ echo [a-z] # Note the missing uppercase "Z"
a A b B c C x X y Y z
$ echo [A-Z] # Note the missing lowercase "a"
A b B c C x X y Y z Z Notice when the command echo [a-z] is called, the expected output would be all files with lower case characters. Also, with echo [A-Z] , files with uppercase characters would be expected. Standard collations with locales such as en_US have the following order: aAbBcC...xXyYzZ Between a and z (in [a-z] ) are ALL uppercase letters, except for Z . Between A and Z (in [A-Z] ) are ALL lowercase letters, except for a . See: aAbBcC[...]xXyYzZ
| |
from a to z
aAbBcC[...]xXyYzZ
| |
from A to Z If you change the LC_COLLATE variable to C it looks as expected: $ export LC_COLLATE=C
$ echo [a-z]
a b c x y z
$ echo [A-Z]
A B C X Y Z So, it's not a bug , it's a collation issue . Instead of range expressions you can use POSIX defined character classes , such as upper or lower . They work also with different LC_COLLATE configurations and even with accented characters : $ echo [[:lower:]]
a b c x y z à è é
$ echo [[:upper:]]
A B C X Y Z | {
"source": [
"https://unix.stackexchange.com/questions/227070",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/120884/"
]
} |
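Applying the answer above (227,070) to the original rm [A-Z]* problem, two safe variants are to scope the collation change to a subshell, or to use a POSIX character class instead of a range: ( export LC_COLLATE=C; rm -- [A-Z]* )   # only upper-case-initial names; the parent shell's locale is untouched
rm -- [[:upper:]]*                      # locale-independent equivalent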
227,343 | Can someone explain to me what this command does? EDIT: Do not run this command! It will break your installation. sudo find / -exec rm {} \; | Bad Things ® ™. It's (almost) the equivalent of sudo rm -rf / - it will, as root, find all files or directories starting from / and recursively descending from there, and then execute the rm command against each file/directory it finds. It won't actually delete directories, as no -f or -r options are passed to rm , but it will remove all the file entries. Hint: don't run this unless you feel like reinstalling your operating system. | {
"source": [
"https://unix.stackexchange.com/questions/227343",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/131562/"
]
} |
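As a hedged contrast to the destructive command discussed above (227,343), a safer pattern is to constrain the search path and preview the matches before handing them to rm; the directory and pattern here are made up for illustration: find /var/tmp/build -type f -name '*.o' -print            # dry run: just list what would go
find /var/tmp/build -type f -name '*.o' -exec rm -- {} +   # then delete only those files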
227,577 | I've been a Linux user for over 15 years, but one thing I hate with a passion is the mandated directory structure. I don't like that /usr/bin is the dumping ground for binaries, or the libs in /usr/lib , /usr/lib32 , /usr/libx32 , /lib , /lib32 etc., or the random stuff in /usr/share etc. It's dumb and confusing. But some like it, and tastes differ. I want a directory structure where each package is isolated. Imagine instead if the media player dragon had its own structure: /software/dragon
/software/dragon/bin/x86/dragon
/software/dragon/doc/README
/software/dragon/doc/copyright
/software/dragon/lib/x86/libdragon.so Or: /software/zlib/include/zlib.h
/software/zlib/lib/1.2.8/x86/libz.so
/software/zlib/lib/1.2.8/x64/libz.so
/software/zlib/doc/examples/...
/software/zlib/man/... You get the point. What are my options? Is there any Linux distro that uses something like my scheme? Can some distro be modified to work like I want it (Gentoo??) or do I need LFS? Is there any prior art in this area? Like publications on if the scheme is feasible or unfeasible? Not looking for OS X. :) But OS X-inspired is totally ok. Edit : I have no idea how PATH , LD_LIBRARY_PATH and other environment variables that depend on a small set of paths should work out. I'm thinking that if I have the KDE editor Kate installed in /software/kate/bin/x86/bin/kate then I'm ok with having to type the full path to the binary to start it. How it should work for dynamic libraries and dlopen calls, I don't know but it can't be an unsolvable engineering problem. | First, an up-front conflict-of-interest disclaimer: I am a long-time GoboLinux developer. Second, an up-front claim of domain expertise: I am a long-time GoboLinux developer. There are a few different structures in current use. GoboLinux has one, and tools like GNU Stow , Homebrew , etc, use something quite similar (primarily for user programs). NixOS also uses a non-standard hierarchy for programs, and philosophy of life. It's also a reasonably common LFS experiment. I'm going to describe all of those, and then comment from experience on how that works out in practice ("feasibility"). The short answer is that yes, it's feasible, but you have to really want it . GoboLinux GoboLinux has a structure very similar to what you describe. Software is installed under /Programs : /Programs/ZSH/5.0.8 contains all the files belonging to ZSH 5.0.8, in the usual bin / lib /... directories. The system tools create symlinks to those files under a /System/Links hierarchy, which maps onto /usr ¹. The PATH variable contains only the single unified executable directory, and LD_LIBRARY_PATH is unused. Multiple versions of software can coexist at once, but only one file by a given name ( bin/zsh ) will be linked actively at once. You can access the others by their full paths. A set of compatibility symlinks also exists, so /bin and /usr/bin map to the unified executables directory, and so on. This makes life easier for software at run time. A kernel patch, GoboHide, allows those compatibility symlinks to be hidden from file listings (but still traversable). Contra another answer, you do not need to modify kernel code: GoboHide is purely cosmetic, and the kernel does not depend on user-space paths in general². GoboLinux does have a bespoke init system, but that is also not required to do this. The tagline has always been "the filesystem is the package manager", but there are reasonably ordinary package-management tools in the system. You can do everything using cp , rm , and ln , though. If you want to use GoboLinux, you are very welcome. I will note, though, that it's a small development team, and you're likely to find that some software you want isn't packaged up if nobody has wanted to use it before. The good news is that it's generally fairly easy to build a program for the system (a standard "recipe" is about three lines long); the bad news is that sometimes it's unpleasantly complicated, which I'll cover more below. Publications There are a few "publications". I gave a presentation at linux.conf.au 2010 on the system as a whole that covers everything generally, which is available in video: ogv mp4 (also on your local Linux Australia mirror); I also wrote up my notes into prose. 
There are also a few older documents, including the famous " I am not clueless ", on the GoboLinux website , which addresses some objections and issues. I think that we're all a bit less gung-ho these days, and I suspect that a future release will adopt /usr as the base location for the symlinks. NixOS NixOS puts each installed program into its own directory under /nix/store . Those directories are named something like /nix/store/5rnfzla9kcx4mj5zdc7nlnv8na1najvg-firefox-3.5.4/ — there is a cryptographic hash representing the whole set of dependencies and configuration leading to that program. Inside that directory are all the associated files, with more-or-less normal locations locally. It also allows you to have multiple versions around at once, and to use any of them. NixOS has a whole philosophy associated with it of reproducible configuration: it's essentially got a configuration management system baked into it from the start. It relies on some environmental manipulation to present the right world of installed programs to the user. LFS It's fairly straightforward to go through Linux From Scratch and set up exactly the hierarchy you want: just make the directories and configure everything to install in the right place. I've done it a few times in building GoboLinux experiments, and it's not substantially harder than plain LFS. You do need to make the compatibility symlinks in that case; otherwise it is substantially harder, but careful use of union mounts could probably avoid that if you really wanted. I feel like there was an LFS Hint about exactly that at one point, but I can't seem to find it now. On Feasibility The thing about the FHS is that it's a standard, it's very common, and it broadly reflects the existing usage at the time it was written. Most users will never be on a system that doesn't follow that layout in essence. The result of that is that lots of software has latent dependencies on it that nobody realises, often completely unintentionally. All those scripts with #!/bin/bash ? No good if you don't have Bash there. That is why GoboLinux has all those compatibility symlinks; it's just practical. A lot of software fails to function either at build time or at run time under a non-standard layout, and then it requires patching to correct, often quite intrusively. Your basic Autoconf program will usually happily install itself wherever you tell it, and it's fairly easy to automate the process of passing in the correct --prefix . Other build systems aren't always so nice, either by intentionally baking in the hierarchy, or by leading authors to write non-portable configuration. CMake is a major offender in the latter category. That means that if you want to live in this world you have to be prepared to do a lot of fiddly work up front in other people's build systems. It is a real hassle to have to dynamically patch generated files during compilation. Runtime is another matter again. Many programs have assumptions about where their own files, or someone else's files, are found either relative to them or absolutely. When you start using symlinks to present a consistent view, lots of programs have bugs handling them (or sometimes, arguably correct behaviour that is unhelpful to you). For example, a tool foobar may expect to find the baz executable next to it, or in ../sbin . Depending on whether it reads its symlink or not, those can be two different places, and neither of them may be correct anyway. A combined problem is the /usr/share directory. 
It's for shared files, of course, but when you put every program in its own prefix they're no longer actually shared. That leads to programs unable to find standard icons and the like. GoboLinux dealt with this in a pretty ugly way: at build time, $prefix/share was a symlink to $prefix/Shared , and after building the link was pointed to the global share directory instead. It now uses compile-time sandboxing and file movement to deal with share (and the other directories), but runtime errors from reading links can still be an issue. Suites of multiple programs are another problem. GoboLinux has never gotten GNOME working fully, and I don't believe NixOS has either, because the layout interdependencies are so baked in that it's just intractable to cure them all. So, yes, it's feasible , but: There is quite a lot of work involved in just making things function. Some software may just never work. People will look at you funny. All of those may or may not be a problem for you. ¹ Version 14.01 uses /System/Index , which maps directly onto /usr . I suspect a future version may drop the Links/Index hierarchy and use /usr across the board. ² It does require /bin/sh to exist by default. | {
"source": [
"https://unix.stackexchange.com/questions/227577",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/57817/"
]
} |
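The answer above (227,577) mentions that GNU Stow achieves a similar per-package layout for user-installed software; a rough sketch of that workflow (the paths are conventional Stow defaults, not GoboLinux's): ./configure --prefix=/usr/local/stow/zlib-1.2.8
make && sudo make install        # all files land under the package's own directory
cd /usr/local/stow
sudo stow zlib-1.2.8             # symlink its bin/, lib/, include/ ... into /usr/local
sudo stow -D zlib-1.2.8          # -D removes the symlinks again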
227,662 | I want to rename multiple files (file1 ... fileN to file1_renamed ... fileN_renamed) using the find command: find . -type f -name 'file*' -exec mv filename='{}' $(basename $filename)_renamed ';' But I am getting this error: mv: cannot stat ‘filename=./file1’: No such file or directory This is not working because filename is not interpreted as a shell variable. | The following is a direct fix of your approach: find . -type f -name 'file*' -exec sh -c 'x="{}"; mv "$x" "${x}_renamed"' \; However, this is very expensive if you have lots of matching files, because you start a fresh shell (that executes a mv ) for each match. And if you have funny characters in any file name, this will explode. A more efficient and secure approach is this: find . -type f -name 'file*' -print0 | xargs --null -I{} mv {} {}_renamed It also has the benefit of working with strangely named files. If find supports it, this can be reduced to find . -type f -name 'file*' -exec mv {} {}_renamed \; The xargs version is useful when not using {} , as in find .... -print0 | xargs --null rm Here rm gets called once (or with lots of files several times), but not for every file. I removed the basename in your question, because it is probably wrong: you would move foo/bar/file8 to file8_renamed , not foo/bar/file8_renamed . Edits (as suggested in comments): Added shortened find without xargs Added security sticker | {
"source": [
"https://unix.stackexchange.com/questions/227662",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/90223/"
]
} |
227,876 | I'm a new Linux user trying to change the screen resolution, as there is no suitable option under the display settings. I have managed to add new resolutions by fluke by following an online guide. I don't have a GPU; I don't know if this is the issue. Below is my xrandr -q output. root@kali:~# xrandr -q
xrandr: Failed to get size of gamma for output default
Screen 0: minimum 1280 x 1024, current 1280 x 1024, maximum 1280 x 1024
default connected 1280x1024+0+0 0mm x 0mm
1280x1024 0.0*
1920x1200_60.00 (0x145) 193.2MHz
h: width 1920 start 2056 end 2256 total 2592 skew 0 clock 74.6KHz
v: height 1200 start 1203 end 1209 total 1245 clock 59.9Hz
1440x900_59.90 (0x156) 106.3MHz
h: width 1440 start 1520 end 1672 total 1904 skew 0 clock 55.8KHz
v: height 900 start 901 end 904 total 932 clock 59.9Hz | Here are the steps you need to add a new custom resolution and apply it. Following steps are for adding a 1920x1080 resolution, but you can use it for any other resolution you want. But make sure your monitor and onboard graphics support that resolution. # First we need to get the modeline string for xrandr
# Luckily, the tool "gtf" will help you calculate it.
# All you have to do is to pass the resolution & the-
# refresh-rate as the command parameters:
gtf 1920 1080 60
# In this case, the horizontal resolution is 1920px the
# vertical resolution is 1080px & refresh-rate is 60Hz.
# IMPORTANT: BE SURE THE MONITOR SUPPORTS THE RESOLUTION
# Typically, it outputs a line starting with "Modeline"
# e.g. "1920x1080_60.00" 172.80 1920 2040 2248 2576 1080 1081 1084 1118 -HSync +Vsync
# Copy this entire string (except for the starting "Modeline")
# Now, use "xrandr" to make the system recognize a new
# display mode. Pass the copied string as the parameter
# to the --newmode option:
xrandr --newmode "1920x1080_60.00" 172.80 1920 2040 2248 2576 1080 1081 1084 1118 -HSync +Vsync
# Well, the string within the quotes is the nick/alias
# of the display mode - you can as well pass something
# as "MyAwesomeHDResolution". But, careful! :-|
# Then all you have to do is to add the new mode to the
# display you want to apply, like this:
xrandr --addmode VGA1 "1920x1080_60.00"
# VGA1 is the display name, it might differ for you.
# Run "xrandr" without any parameters to be sure.
# The last parameter is the mode-alias/name which
# you've set in the previous command (--newmode)
# It should add the new mode to the display & apply it.
# Usually unlikely, but if it doesn't apply automatically
# then force it with this command:
xrandr --output VGA1 --mode "1920x1080_60.00" Original source: https://gist.github.com/debloper/2793261 I also wrote a script that does all these steps automatically. You can try it out if the above steps seem too complicated for you: https://gist.github.com/chirag64/7853413 | {
"source": [
"https://unix.stackexchange.com/questions/227876",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/132940/"
]
} |
227,891 | diff -u file1.txt file2.txt > patchfile creates a patch file which consists of instructions for patch to convert file1.txt to be exactly like file2.txt Can't this be done using the cp command instead? I can imagine this to be useful for when the file is too large and has to be transferred over a network, where this approach might save bandwidth. Is there any other way to use diff/patch which would be advantageous in other scenarios? | Diffs can be more complicated than just comparing one file versus another. They can compare entire directory hierarchies. Consider the example where I want to fix a bug in GCC. My change adds a line or two in 4 or 5 files and deletes a handful of lines in those and other files. If I want to communicate these changes to someone, potentially for inclusion into GCC, my options are: Copy the entire source tree Copy only the files that were changed Supply just the changes I've made Copying the entire source tree doesn't make sense, but what about the other two options, which gets at the core of your question. Now consider that someone else also worked on the same file as I did and we both give our changes to someone. How will this person know what we've done and if the changes are compatible (different parts of the file) or conflict (same lines of the file)? He will diff them! The diff can tell him how the files differ from each other and from the unmodified source file. If the diff is what is needed, it just makes more sense to send the diff in the first place. A diff can also contain changes from more than one file, so while I edited 9 files in total, I can provide a single diff file to describe those changes. Diffs can also be used to provide history. What if a change three months ago caused a bug I only discovered today? If I can narrow down when the bug was introduced and can isolate it to a specific change, I can use the diff to "undo" or revert the change. This is not something I could as easily do if I were only copying files around. This all ties into source version control, where programs may record a file's history as a series of diffs from the time it was created until today. The diffs provide history (I can recreate the file as it was on any particular day), I can see who to blame for breaking something (the diff has an owner) and I can easily submit changes to upstream projects by giving them specific diffs (maybe they are only interested in one change when I've made many). In summary, yes, cp is easier than diff and patch , but the utility of diff and patch is greater than cp for situations where how files change is important to track. | {
"source": [
"https://unix.stackexchange.com/questions/227891",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/132948/"
]
} |
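A minimal round trip illustrating the answer above (227,891), using the file names from the question: diff -u file1.txt file2.txt > patchfile
patch file1.txt < patchfile        # file1.txt now matches file2.txt
patch -R file1.txt < patchfile     # -R reverts the change, restoring the original file1.txt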
227,910 | I found a good replacement IDE for Delphi called Lazarus. But I don't have a question for programmers. Will the statically linked Linux binary work on all Linux distributions? I.e. it does not matter on what Linux distro I built it; it will work on Debian / ArchLinux / Ubuntu / OpenSUSE / ... whatever? As a result of my findings, does it really only matter whether it is 32-bit vs 64-bit? I want to be sure before I publish. | This answer was first written for the more general question "will my binary run on all distros", but it addresses statically linked binaries in the second half. For anything that is more complex than a statically linked hello world, the answer is probably no . Without testing it on distribution X, assume the answer is no for X. If you want to ship your software in binary form, restrict yourself to a few popular distributions for the field of use of your software (desktop, server, embedded, ...) and to the latest one or two versions of each. Otherwise you end up with hundreds of distributions of all sizes, versions and ages (ten-year-old distributions are still in use and supported). Test for those. Just a few pointers on what can (and will) go wrong otherwise: The package of a tool/library you need is named differently across distributions and even versions of the same distribution. The libraries you need are too new or too old (wrong version). Don't assume just because your program can link, it links with the right library. The same library (file on disk) is named differently on different distributions, making linking impossible. 32bit on 64bit: the 32bit environment might not be installed, or some non-essential 32bit library is moved into an extra package apart from the 32on64 environment, so you have an extra dependency just for this case. Shell: don't assume your version of Bash. Don't assume even Bash. Tools: don't assume some non-POSIX command line tool exists anywhere. Tools: don't assume the tool recognizes an option just because the GNU version of your distro does. Kernel interfaces: don't assume the existence or structure of files in /proc just because they exist/have that structure on your machine. Java: are you really sure your program runs on IBM's JRE as shipped with SLES without testing it? Bonus: Instruction sets: a binary compiled on your machine does not run on older hardware. Is linking statically (or: bundling all the libraries you need with your software) a solution? Even if it works technically, the associated costs might be too high. So unfortunately, the answer is probably no here either. Security: you shift the responsibility to update the libraries from the user of your software to yourself. Size and complexity: just for fun, try to build a statically linked GUI program. Interoperability: if your software is a "plugin" of any kind, you depend on the software which calls you. Library design: if you link your program statically to GNU libc and use name services ( getpwnam() etc.), you end up linked dynamically against libc's NSS (name service switch). Library design: the library you link your program statically with uses data files or other resources (like timezones or locales). For all the reasons mentioned above, testing is essential. Get familiar with KVM or other virtualization techniques and have a VM of every distribution you plan to support. Test your software on every VM. Use minimal installations of those distributions. Create a VM with a restricted instruction set (e.g. no SSE 4).
Statically linked or bundled only: check your binaries with ldd to see whether they are really statically linked / use only your bundled libraries. Statically linked or bundled only: create an empty directory and copy your software into it. chroot into that directory and run your software. | {
"source": [
"https://unix.stackexchange.com/questions/227910",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/126755/"
]
} |
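Two of the checks suggested at the end of the answer above (227,910), written out as a sketch; the binary name and its --version flag are hypothetical: file myprog        # should report "statically linked" for a truly static build
ldd myprog         # should print "not a dynamic executable"
mkdir /tmp/statictest && cp myprog /tmp/statictest/
sudo chroot /tmp/statictest /myprog --version   # crude empty-chroot smoke test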
227,951 | This is a situation I am frequently in: I have a source server with a 320GB hard-drive inside of it, and 16GB of ram ( exact specs available here , but as this is an issue I run into frequently on other machines as well, I would prefer the answer to work on any "reasonable" Linux machine) I have a backup server with several terabytes of hard-drive space ( exact specs here , see disclaimer above) I want to transfer 320GB of data from the source server to the target server (specifically, the data from /dev/sda ). The two computers are physically next to each other, so I can run cables between them. I'm on a LAN, and I'm using a new-ish router , which means my network speeds should "ideally" be 1000Mbit, right? Security is not an issue. I am on a local network, and I trust all machines on the network, including the router. (optional) I don't necessarily need a signed checksum of the data, but basic error checking (such as dropped packets, or the drive becoming unreadable) should be detected rather than just disappear into the output. I searched for this question online, and have tested several commands. The one that appears the most often is this: ssh [email protected] 'dd bs=16M if=/dev/sda | gzip' > backup_sda.gz This command has proven too slow (it ran for an hour, only got about 80GB through the data). It took about 1 minute and 22 seconds for the 1GB test packet, and ended up being twice as fast when not compressed. The results may also have been skewed by the fact that the transferred file is less than the amount of RAM on the source system. Moreover (and this was tested on 1GB test pieces), I'm getting issues if I use the gzip command and dd ; the resulting file has a different checksum when extracted on the target, than it does if piped directly. I'm still trying to figure out why this is happening. | Since the servers are physically next to each other, and you mentioned in the comments you have physical access to them, the fastest way would be to take the hard-drive out of the first computer, place it into the second, and transfer the files over the SATA connection. | {
"source": [
"https://unix.stackexchange.com/questions/227951",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/5769/"
]
} |
228,015 | Can someone give me a command that would: move a file to a new directory and leave a symlink in its old location pointing to its new one | mv moves a file, and ln -s creates a symbolic link, so the basic task is accomplished by a script that executes these two commands: #!/bin/sh
mv -- "$1" "$2"
ln -s -- "$2" "$1" There are a few caveats. If the second argument is a directory, then mv would move the file into that directory, but ln -s would create a link to the directory rather than to the moved file. #!/bin/sh
set -e
original="$1" target="$2"
if [ -d "$target" ]; then
target="$target/${original##*/}"
fi
mv -- "$original" "$target"
ln -s -- "$target" "$original" Another caveat is that the first argument to ln -s is the exact text of the symbolic link. It's relative to the location of the target, not to the directory where the command is executed. If the original location is not in the current directory and the target is not expressed by an absolute path, the link will be incorrect. In this case, the path needs to be rewritten. In this case, I'll create an absolute link (a relative link would be preferable, but it's harder to get right). This script assumes that you don't have file names that end in a newline character. #!/bin/sh
set -e
original="$1" target="$2"
if [ -d "$target" ]; then
target="$target/${original##*/}"
fi
mv -- "$original" "$target"
case "$original" in
*/*)
case "$target" in
/*) :;;
*) target="$(cd -- "$(dirname -- "$target")" && pwd)/${target##*/}"
esac
esac
ln -s -- "$target" "$original" If you have multiple files, process them in a loop. #!/bin/sh
while [ $# -gt 1 ]; do
eval "target=\${$#}"
original="$1"
if [ -d "$target" ]; then
target="$target/${original##*/}"
fi
mv -- "$original" "$target"
case "$original" in
*/*)
case "$target" in
/*) :;;
*) target="$(cd -- "$(dirname -- "$target")" && pwd)/${target##*/}"
esac
esac
ln -s -- "$target" "$original"
shift
done | {
"source": [
"https://unix.stackexchange.com/questions/228015",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/133016/"
]
} |
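Hypothetical usage of the final script from the answer above (228,015), assuming it has been saved as mvln.sh; the file and directory names are made up: chmod +x mvln.sh
./mvln.sh notes.txt /srv/archive/   # moves the file and leaves a symlink behind
ls -l notes.txt                     # notes.txt is now a symlink into /srv/archive/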
228,018 | My script executes a subshell command along the lines of: ( while ..... ) $3>$testdir/$testfile.log I get the error: line 75: syntax error near unexpected token `$3'
line 75: ` ) $3>$testdir/$testfile.log' I've tried several options, and it seems > is only happy when it's a hard-coded number rather than a variable. Am I missing a parenthesis? | | {
"source": [
"https://unix.stackexchange.com/questions/228018",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/61859/"
]
} |
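As a hedged illustration only (not the original answer), one way to approach question 228,018 above, where the file-descriptor number arrives in $3, is to open the descriptor with eval first and then duplicate it for the subshell; this is a sketch, not a drop-in fix: fdnum=$3
eval "exec $fdnum>\"\$testdir/\$testfile.log\""   # open the log file on descriptor $3
(
    while true; do
        date
        sleep 1
    done
) >&"$fdnum"                                      # send the subshell's output to that descriptor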
228,235 | It's very easy to use split() in JavaScript to break a string into an array. What about shell scripts? Say I want to do this: $ script.sh var1_var2_var3 When the user gives such a string, var1_var2_var3 , to script.sh, inside the script it should convert the string into an array like array=( var1 var2 var3 )
for name in ${array[@]}; do
# some code
done | Bourne/POSIX-like shells have a split+glob operator and it's invoked every time you leave a parameter expansion ( $var , $- ...), command substitution ( $(...) ), or arithmetic expansion ( $((...)) ) unquoted in list context. Actually, you invoked it by mistake when you did for name in ${array[@]} instead of for name in "${array[@]}" . (Actually, you should beware that invoking that operator like that by mistake is source of many bugs and security vulnerabilities ). That operator is configured with the $IFS special parameter (to tell what characters to split on (though beware that space, tab and newline receive a special treatment there)) and the -f option to disable ( set -f ) or enable ( set +f ) the glob part. Also note that while the S in $IFS was originally (in the Bourne shell where $IFS comes from) for S eparator, in POSIX shells, the characters in $IFS should rather be seen as delimiters or terminators (see below for an example). So to split on _ : string='var1_var2_var3'
IFS=_ # delimit on _
set -f # disable the glob part
array=($string) # invoke the split+glob operator
for i in "${array[@]}"; do # loop over the array elements. To see the distinction between separator and delimiter , try on: string='var1_var2_' That will split it into var1 and var2 only (no extra empty element). So, to make it similar to JavaScript's split() , you'd need an extra step: string='var1_var2_var3'
IFS=_ # delimit on _
set -f # disable the glob part
temp=${string}_ # add an extra delimiter
array=($temp) # invoke the split+glob operator (note that it would split an empty $string into 1 (not 0 ) element, like JavaScript's split() ). To see the special treatments tab, space and newline receive, compare: IFS=' '; string=' var1 var2 ' (where you get var1 and var2 ) with IFS='_'; string='_var1__var2__' where you get: '' , var1 , '' , var2 , '' . Note that the zsh shell doesn't invoke that split+glob operator implicitly like that unless in sh or ksh emulation. There, you have to invoke it explicitely. $=string for the split part, $~string for the glob part ( $=~string for both), and it also has a split operator where you can specify the separator: array=(${(s:_:)string}) or to preserve the empty elements: array=("${(@s:_:)string}") Note that there s is for splitting , not delimiting (also with $IFS , a known POSIX non-conformance of zsh ). It's different from JavaScript's split() in that an empty string is split into 0 (not 1) element. A notable difference with $IFS -splitting is that ${(s:abc:)string} splits on the abc string, while with IFS=abc , that would split on a , b or c . With zsh and ksh93 , the special treatment that space, tab or newline receive can be removed by doubling them in $IFS . As a historic note, the Bourne shell (the ancestor or modern POSIX shells) always stripped the empty elements. It also had a number of bugs related to splitting and expansion of $@ with non-default values of $IFS . For instance IFS=_; set -f; set -- $@ would not be equivalent to IFS=_; set -f; set -- $1 $2 $3... . Splitting on regexps Now for something closer to JavaScript's split() that can split on regular expressions, you'd need to rely on external utilities. In the POSIX tool-chest, awk has a split operator that can split on extended regular expressions (those are more or less a subset of the Perl-like regular expressions supported by JavaScript). split() {
awk -v q="'" '
function quote(s) {
gsub(q, q "\\" q q, s)
return q s q
}
BEGIN {
n = split(ARGV[1], a, ARGV[2])
for (i = 1; i <= n; i++) printf " %s", quote(a[i])
exit
}' "$@"
}
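Called on its own, the helper prints each piece as a singly-quoted word with a leading space, ready to be dropped into an array assignment; for instance split 'a__b_+c' '[_+]+' prints 'a' 'b' 'c' . That is what the eval below relies on to rebuild the array: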
string=a__b_+c
eval "array=($(split "$string" '[_+]+'))" The zsh shell has builtin support for Perl-compatible regular expressions (in its zsh/pcre module), but using it to split a string, though possible is relatively cumbersome. | {
"source": [
"https://unix.stackexchange.com/questions/228235",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/45317/"
]
} |
228,314 | I have added a custom path to PATH variable in my /root/.bashrc file When i do sudo su; echo $PATH , it shows the entry, '/path/to/custom/bins'. But i do sudo sh -c 'echo $PATH' , it shows, /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin The folder paths added in .bashrc file are not visible. Doesn't sudo command have the same environment as a root user? | .bashrc is a configuration file of bash, only when it's executed interactively. It's only loaded when you start bash, not when you run some other program such as sh (not even if bash is invoked via the name sh ). And it's only loaded when bash is interactive, not when it's executing a script or a command with -c . sudo sh -c 'echo $PATH' or sudo bash -c 'echo $PATH' doesn't invoke an interactive shell, so .bashrc is not involved. sudo su; echo $PATH runs an interactive instance of root's shell. If that's bash, then ~root/.bashrc is loaded. This snippet executes echo $PATH once this interactive shell terminates, so whatever happens in the interactive shell has no influence on what the snippet prints at the end. But if you type echo $PATH at the prompt of the interactive shell started by sudo su , you will see the value set by ~root/.bashrc . Since .bashrc is invoked in each interactive shell, not by login shells (not even by interactive login shells, which is a design defect in bash), it's the wrong place to define environment variables. Use .bashrc for interactive bash settings such as key bindings, aliases and completion settings. Set environment variables in files that are loaded when you log in: ~/.pam_environment or ~/.profile . So set PATH in .profile instead of .bashrc , and either run a login shell with sudo -i 'echo $PATH' , or explicitly source .profile with sudo sh -c '. ~/.profile; echo $PATH' . | {
"source": [
"https://unix.stackexchange.com/questions/228314",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/-1/"
]
} |
228,331 | I have a script that looks like: c=0
for f in */*; do
cp -v "$f" "/myhome/CE$(printf '%0*d' 2 $BATCHNUM)-new-stuctures_extracted/test-$(printf '%0*d' 5 $c)"
c=$((c=c+1))
done However, the user must provide a variable called BATCHNUM, otherwise I need to force this script to stop running. It would be better if I could also force the script that calls this script to stop (or even the #1 script that calls the #2 script which calls this script). | The quickest way is probably to add these two lines to the start of the script: set -u # or set -o nounset
: "$BATCHNUM" The first line sets the nounset option in the shell running the script, which aborts if you try to expand an unset variable; the second expands $BATCHNUM in the context of a no-op, to trigger the abort before doing anything else. If you want a more helpful error message, you could instead write: if [[ -z "$BATCHNUM" ]]; then
echo "Must provide BATCHNUM in environment" 1>&2
exit 1
fi Or similar. | {
"source": [
"https://unix.stackexchange.com/questions/228331",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/118930/"
]
} |
228,558 | I'm trying to remove some characters from a file (UTF-8). I'm using tr for this purpose: tr -cs '[[:alpha:][:space:]]' ' ' <testdata.dat The file contains some foreign characters (like "Латвийская" or "àé"). tr doesn't seem to understand them: it treats them as non-alpha and removes them too. I've tried changing some of my locale settings: LC_CTYPE=C LC_COLLATE=C tr -cs '[[:alpha:][:space:]]' ' ' <testdata.dat
LC_CTYPE=ru_RU.UTF-8 LC_COLLATE=C tr -cs '[[:alpha:][:space:]]' ' ' <testdata.dat
LC_CTYPE=ru_RU.UTF-8 LC_COLLATE=ru_RU.UTF-8 tr -cs '[[:alpha:][:space:]]' ' ' <testdata.dat Unfortunately, none of these worked. How can I make tr understand Unicode? | That's a known ( 1 , 2 , 3 , 4 , 5 , 6 ) limitation of the GNU implementation of tr . It's not as much that it doesn't support foreign , non-English or non-ASCII characters, but that it doesn't support multi-byte characters. Those Cyrillic characters would be treated OK, if written in the iso8859-5 (single-byte per character) character set (and your locale was using that charset), but your problem is that you're using UTF-8 where non-ASCII characters are encoded in 2 or more bytes. GNU's got a plan (see also ) to fix that and work is under way but not there yet. FreeBSD or Solaris tr don't have the problem. In the mean time, for most use cases of tr , you can use GNU sed or GNU awk which do support multi-byte characters. For instance, your: tr -cs '[[:alpha:][:space:]]' ' ' could be written: gsed -E 's/( |[^[:space:][:alpha:]])+/ /' or: gawk -v RS='( |[^[:space:][:alpha:]])+' '{printf "%s", sep $0; sep=" "}' To convert between lower and upper case ( tr '[:upper:]' '[:lower:]' ): gsed 's/[[:upper:]]/\l&/g' (that l is a lowercase L , not the 1 digit). or: gawk '{print tolower($0)}' For portability, perl is another alternative: perl -Mopen=locale -pe 's/([^[:space:][:alpha:]]| )+/ /g'
perl -Mopen=locale -pe '$_=lc$_' If you know the data can be represented in a single-byte character set, then you can process it in that charset: (export LC_ALL=ru_RU.iso88595
iconv -f utf-8 |
tr -cs '[:alpha:][:space:]' ' ' |
iconv -t utf-8) < Russian-file.utf8 | {
"source": [
"https://unix.stackexchange.com/questions/228558",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/129998/"
]
} |
228,597 | When I use cp -R inputFolder outputFolder the result is context-dependent : if outputFolder does not exist, it will be created, and the cloned folder path will be outputFolder . if outputFolder exists, then the clone created will be outputFolder/inputFolder This is horrible , because I want to create some installation script, and if the user runs it twice by mistake, he will have outputFolder created the first time, then on the second run all the stuff will be created once again in outputFolder/inputFolder . I want always the first behavior: create a clone next to the original (as a sibling). I want to use cp to be portable (e.g. MINGW does not have rsync shipped) I checked cp -R --parents but this recreates the path all the way up the directory tree (so the clone will not be outputFolder but some/path/outputFolder ) --remove-destination or --update in case 2 do not change anything, still things are copied into outputFolder/inputFolder Is there a way to do this without first checking for existence of the outputFolder (if folder does not exist then...) or using rm -rf outputFolder ? What is the agreed, portable UNIX way of doing that? | Use this instead: cp -R inputFolder/. outputFolder This works in exactly the same way that, say, cp -R aaa/bbb ccc works: if ccc doesn't exist then it's created as a copy of bbb and its contents; but if ccc already exists then ccc/bbb is created as the copy of bbb and its contents. For almost any instance of bbb this gives the undesirable behaviour that you noted in your Question. However, in this specific situation the bbb is just . , so aaa/bbb is really aaa/. , which in turn is really just aaa but by another name. So we have these two scenarios: ccc does not exist: The command cp -R aaa/. ccc means "create ccc and copy the contents of aaa/. into ccc/. , i.e. copy aaa into ccc . ccc does exist: The command cp -R aaa/. ccc means "copy the contents of aaa/. into ccc/. , i.e. copy aaa into ccc . | {
"source": [
"https://unix.stackexchange.com/questions/228597",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/10745/"
]
} |
228,603 | On my Amazon Linux (RHEL-derived) system, within /etc/login.defs, I've set the minimum number of days between password changes to 1 day: PASS_MIN_DAYS 1 . I thought I should be doing that with PAM configuration files in /etc/pam.d/. However, I cannot find any documentation for doing so. Is there a way to control when passwords can be changed using PAM, and what is it, please? | Use this instead: cp -R inputFolder/. outputFolder This works in exactly the same way that, say, cp -R aaa/bbb ccc works: if ccc doesn't exist then it's created as a copy of bbb and its contents; but if ccc already exists then ccc/bbb is created as the copy of bbb and its contents. For almost any instance of bbb this gives the undesirable behaviour that you noted in your Question. However, in this specific situation the bbb is just . , so aaa/bbb is really aaa/. , which in turn is really just aaa but by another name. So we have these two scenarios: ccc does not exist: The command cp -R aaa/. ccc means "create ccc and copy the contents of aaa/. into ccc/. , i.e. copy aaa into ccc . ccc does exist: The command cp -R aaa/. ccc means "copy the contents of aaa/. into ccc/. , i.e. copy aaa into ccc . | {
"source": [
"https://unix.stackexchange.com/questions/228603",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/54643/"
]
} |
228,699 | I have a multi line log entry format that I need to process. The log looks something like this: --START--
Device=B
Data=asdfasdf
Lorem=Ipsum
--END--
--START--
Device=A
Data=asdfasdf
Lorem=Ipsum
--END--
--START--
Device=B
Data=asdfasdf
--END--
--START--
Device=A
Data=asdfasdf
--END--
--START--
Device=B
Data=asdfasdf
--END--
--START--
Device=C
Data=asdfasdf
Lorem=Ipsum
--END-- I want to print everything between --START-- and --END-- if a particular pattern is matched. e.g: Print all entries where Device=A --START--
Device=A
Data=asdfasdf
Lorem=Ipsum
--END--
--START--
Device=A
Data=asdfasdf
--END-- All I've been able to do so far is write: sed -e -n '/--START--/,/--END--/p' < input Which effectively prints the input but I think I need to add {} to filter with N and then print if that condition matches. I also think I'm completely lost. Any idea on how to print multiple lines if a single line matches a condition? | $ sed -n '/--START--/{:a;N;/--END--/!ba; /Device=A/p}' file
--START--
Device=A
Data=asdfasdf
Lorem=Ipsum
--END--
--START--
Device=A
Data=asdfasdf
--END-- (The above was tested on GNU sed. It would have to be massaged to run on BSD/OSX.) How it works: /--START--/{...} Every time we reach a line that contains --START-- , run the commands inside the braces {...} . :a Define a label a . N Read the next line and add it to the pattern space. /--END--/!ba Unless the pattern space now contains --END-- , jump back to label a . /Device=A/p If we get here, that means that the patterns space starts with --START-- and ends with --END-- . If, in addition, the pattern space contains Device=A , then print ( p ) it. | {
"source": [
"https://unix.stackexchange.com/questions/228699",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/133477/"
]
} |
229,022 | Today I'm learning something about fifo with this article: Introduction to Named Pipes , which mentions cat <(ls -l) . I did some experiments by using sort < (ls -l) , which pops out an error: -bash: syntax error near unexpected token `('` Then I found I had mistakenly added an extra space in the command. But why does this extra space lead to the failure? Why must the redirect symbol be close to the ( ? | Because that's not an < , it's a <() which is completely different. This is called process substitution ; it is a feature of certain shells that allows you to use the output of one process as input for another. The > and < operators redirect output to and input from files . The <() operator deals with commands (processes), not files. When you run sort < (ls) you are attempting to run the command ls in a subshell (that's what the parentheses mean), then to pass that subshell as an input file to sort . This, however, is not accepted syntax and you get the error you saw. | {
"source": [
"https://unix.stackexchange.com/questions/229022",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/74226/"
]
} |
229,049 | Data 1
\begin{document}
3 Code #!/bin/bash
function getStart {
local START="$(awk '/begin\{document\}/{ print NR; exit }' data.tex)"
echo $START
}
START2=$(getStart)
echo $START2 which returns 2 but I want 3 .
I unsuccessfully tried changing the end, following this answer about How can I add numbers in a bash script : START2=$((getStart+1)) How can you increment a local variable in a Bash script? | I'm getting 2 from your code. Nevertheless, you can use the same technique for any variable or number: local start=1
(( start++ )) or (( ++start )) or (( start += 1 )) or (( start = start + 1 )) or just local start=1
echo $(( start + 1 )) etc. | {
"source": [
"https://unix.stackexchange.com/questions/229049",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/16920/"
]
} |
229,188 | How can I keep long strings from truncating in terminal? For example if I run journalctl -xn There's a lot of text that I cannot read. I am open to using other programs/tools. | From the journalctl manpage: The output is paged through less by default, and long lines are
"truncated" to screen width. The hidden part can be viewed by using the
left-arrow and right-arrow keys. Paging can be disabled; see the
--no-pager option and the "Environment" section below. If you don't want to constantly be using the left and right arrow keys, simply pipe it directly to less : $ journalctl -xn | less This will wrap lines that are too long for your terminal (the default behavior of less , which journalctl overrides). Or, of course, if you don't mind possibly having to use your terminal's scrollback, you could use no pager at all: $ journalctl -xn --no-pager | {
"source": [
"https://unix.stackexchange.com/questions/229188",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/104015/"
]
} |
229,234 | Having a CSV file like this: HEADER
"first, column"|"second "some random quotes" column"|"third ol' column"
FOOTER and looking for result like: HEADER
first, column|second "some random quotes" column|third ol' column in other words removing "FOOTER", quotes in beginning, end and around |. So far this code works: sed '/FOOTER/d' csv > csv1 | #remove FOOTER
sed 's/^\"//' csv1 > csv2 | #remove quote at the beginning
sed 's/\"$//' csv2 > csv3 | #remove quote at the end
sed 's/\"|\"/|/g' csv3 > csv4 #remove quotes around pipe As you see the problem is it creates 4 extra files. Here is another solution, that has a goal not to create extra files and to do the same thing in a single script. It doesn't work very well. #!/bin/ksh
sed '/begin/, /end/ {
/FOOTER/d
s/^\"//
s/\"$//
s/\"|\"/|/g
}' csv > csv4 | First of all, as Michael showed, you can just combine all of these into a single command: sed '/^FOOTER/d; s/^\"//; s/\"$//; s/\"|\"/|/g' csv > csv1 I think some sed implementations can't cope with that and might need: sed -e '/^FOOTER/d' -e 's/^\"//' -e 's/\"$//' -e 's/\"|\"/|/g' csv > csv1 That said, it looks like your fields are defined by | and you just want to remove " around the entire field, leaving those that are within the field. In that case, you could do: $ sed '/FOOTER/d; s/\(^\||\)"/\1/g; s/"\($\||\)/\1/g' csv
HEADER
first, column|second "some random quotes" column|third ol' column Or, with GNU sed : sed -r '/FOOTER/d; s/(^|\|)"/\1/g; s/"($|\|)/\1/g' csv You could also use Perl: $ perl -F"|" -lane 'next if /FOOTER/; s/^"|"$// for @F; print @F' csv
HEADER
first, column|second some random quotes column|third ol' column | {
"source": [
"https://unix.stackexchange.com/questions/229234",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/66371/"
]
} |
229,247 | I have a column of words in which English words are glued to Chinese words like this: abominate******** abhor************* (The stars represent the Chinese alphabet) I want to write a script to separate the English words and put it in another file. Is sth like this possible by script writing? Any suggestion is welcome. | First of all, as Michael showed, you can just combine all of these into a single command: sed '/^FOOTER/d; s/^\"//; s/\"$//; s/\"|\"/|/g' csv > csv1 I think some sed implementations can't cope with that and might need: sed -e '/^FOOTER/d' -e 's/^\"//' -e 's/\"$//' -e 's/\"|\"/|/g' csv > csv1 That said, it looks like your fields are defined by | and you just want to remove " around the entire field, leaving those that are within the field. In that case, you could do: $ sed '/FOOTER/d; s/\(^\||\)"/\1/g; s/"\($\||\)/\1/g' csv
HEADER
first, column|second "some random quotes" column|third ol' column Or, with GNU sed : sed -r '/FOOTER/d; s/(^|\|)"/\1/g; s/"($|\|)/\1/g' csv You could also use Perl: $ perl -F"|" -lane 'next if /FOOTER/; s/^"|"$// for @F; print @F' csv
HEADER
first, column|second some random quotes column|third ol' column | {
"source": [
"https://unix.stackexchange.com/questions/229247",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/133863/"
]
} |
229,431 | I'm new to using SSH and related technologies, so it's very possible I'm not understanding something basic. I'm trying to SSH into a web server (that I own) and the connection is never established due to timeout. ~ $ ssh -vvv example.com
OpenSSH_6.2p2, OSSLShim 0.9.8r 8 Dec 2011
debug1: Reading configuration data /Users/USER/.ssh/config
debug1: Reading configuration data /etc/ssh_config
debug1: /etc/ssh_config line 20: Applying options for *
debug1: /etc/ssh_config line 102: Applying options for *
debug2: ssh_connect: needpriv 0
debug1: Connecting to example.com [123.45.67.89] port 22.
debug1: connect to address IPADD port 22: Operation timed out
ssh: connect to host example.com port 22: Operation timed out My first thought was that I had somehow specified the domain wrong, or that something was wrong with my site. So I tried connecting to the same domain via FTP, and that worked fine (was prompted for user name): ~ $ ftp
ftp> open
(to) example.com
Connected to example.com.
220---------- Welcome to Pure-FTPd [privsep] [TLS] ----------
220-You are user number 2 of 50 allowed.
220-Local time is now 12:47. Server port: 21.
220-This is a private system - No anonymous login
220-IPv6 connections are also welcome on this server.
220 You will be disconnected after 15 minutes of inactivity.
Name (example.com:USER): So then I thought maybe I was just using SSH wrong. I started watching this tutorial video . At about 1 minute in he does ssh [email protected] and gets a username prompt, but it gives me the same timeout as above. I then tried ssh google.com which does the same. ssh localhost , on the other hand, works fine. So the problem seems to be something to do with SSH requests over a network. My next thought was that it may be a firewall issue. I do have Sophos installed on this machine, but according to my administrator it "should not" block outgoing SSH requests. Can anyone help figure out why this is happening? | That error message means the server to which you are connecting does not reply to SSH connection attempts on port 22. There are three possible reasons for that: You're not running an SSH server on the machine. You'll need to install it to be able to ssh to it. You are running an SSH server on that machine, but on a nonstandard port. You need to figure out on which port it is running; say it's on port 2222, you then run ssh -p 2222 hostname . You are running an SSH server on that machine, and it does use the port on which you are trying to connect, but the machine has a firewall that does not allow you to connect to it. You'll need to figure out how to change the firewall, or maybe you need to ssh from a different host to be allowed in. EDIT : as (correctly) pointed out in the comments, the third is certainly the case; the other two would result in the server sending a TCP "reset" package back upon the client's connection attempt, resulting in a "connection refused" error message, rather than the timeout you're getting. The other two might also be the case, but you need to fix the third first before you can move on. | {
"source": [
"https://unix.stackexchange.com/questions/229431",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/133987/"
]
} |
229,541 | When running ps with the -f option in PuTTY (to see the command corresponding to each process), lines which are longer than the terminal width are not fully visible (they are not wrapped on multiple lines). How can I force line wrapping so that I can see the full commands (on multiple lines, if necessary) when running ps -f ? | If you have a POSIX-conforming ps implementation, you may try ps -f | more Note that we¹ recently changed the behavior and if you have an implementation that follows POSIX issue 7 tc2, you may try: ps -wwf | more ¹ We being the people who have weekly teleconferences to discuss the evolvement of the POSIX standard. | {
"source": [
"https://unix.stackexchange.com/questions/229541",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/101052/"
]
} |
229,787 | I would like to find an application external to the internet browser that would play only youtube sound. Preferably a very light one, CLI or GUI. | There is youtube-dl that lets you download youtube videos from the cli. There is also a new(ish) tool called mps-youtube, that I haven't personally used, but looks like it does exactly what you want. https://github.com/mps-youtube/mps-youtube Give it a try and let us know if it works MPS is available in Ubuntu repos. Launch the MPS console with mpsyt To search youtube in mps console: /<your_search_term> After searching a term, and then selecting a number, the stream will play sound; there are play/pause, seek, volume options: To see options: mpsyt h More detailed options: mpsyt help search mpsyt help download After searching and then selecting the number of the stream with a command that would show download options: d <number> Playlists can also be searched in the PLS console with pls <search_term> or even simpler //<serch_term> | {
"source": [
"https://unix.stackexchange.com/questions/229787",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/-1/"
]
} |
229,931 | Given a (really long) list of zip files, how can you tell the size of them once uncompressed? | You can do that using unzip -Zt zipname which prints a summary directly about the archive content, with total size. Here is an example on its output: unzip -Zt a.zip
1 file, 14956 bytes uncompressed, 3524 bytes compressed: 76.4% Then, using awk, you can extract the number of bytes: unzip -Zt a.zip | awk '{print $3}'
14956 Finally, put it in a for loop as in Tom's answer: total=0
for file in *.zip; do # or whichever files you want
(( total += $(unzip -Zt $file |awk '{ print $3 }') ))
done
echo $total | {
"source": [
"https://unix.stackexchange.com/questions/229931",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/46600/"
]
} |
230,047 | Let's say you have a project structure with lots of Makefiles and there is a top level Makefile that includes all the other. How can you list all the possible targets? I know writing make and then tabbing to get the suggestions would generally do the trick, but in my case there are 10000 targets. Doing this passes the results through more and also for some reason scrolling the list results in a freeze. Is there another way? | This is how the bash completion module for make gets its list: make -qp |
awk -F':' '/^[a-zA-Z0-9][^$#\/\t=]*:([^=]|$)/ {split($1,A,/ /);for(i in A)print A[i]}' |
sort -u It prints out a newline-delimited list of targets, without paging. | {
"source": [
"https://unix.stackexchange.com/questions/230047",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/134429/"
]
} |
230,206 | Answering this question caused me to ask another question: I thought the following scripts do the same thing and the second one should be much faster, because the first one uses cat that needs to open the file over and over but the second one opens the file only one time and then just echoes a variable: (See update section for correct code.) First: #!/bin/sh
for j in seq 10; do
cat input
done >> output Second: #!/bin/sh
i=`cat input`
for j in seq 10; do
echo $i
done >> output where input is about 50 megabytes. But when I tried the second one, it was far too slow, because echoing the variable i was a massive process. I also got some problems with the second script; for example, the size of the output file was lower than expected. I also checked the man pages of echo and cat to compare them: echo - display a line of text cat - concatenate files and print on the standard output But I didn't get the difference. So: Why is cat so fast and echo so slow in the second script? Or is the problem with the variable i ? (Because the man page of echo says it displays "a line of text", I guess it is
optimized only for short variables, not for very very long variables
like i . However, that is only a guess.) And why did I get problems when I used echo ? UPDATE I incorrectly used seq 10 instead of `seq 10` . This is the edited code: First: #!/bin/sh
for j in `seq 10`; do
cat input
done >> output Second: #!/bin/sh
i=`cat input`
for j in `seq 10`; do
echo $i
done >> output (Special thanks to roaima .) However, it is not the point of the problem. Even if the loop occurs only one time, I get the same problem: cat works much faster than echo . | There are several things to consider here. i=`cat input` can be expensive and there's a lot of variations between shells. That's a feature called command substitution. The idea is to store the whole output of the command minus the trailing newline characters into the i variable in memory. To do that, shells fork the command in a subshell and read its output through a pipe or socketpair. You see a lot of variation here. On a 50MiB file here, I can see for instance bash being 6 times as slow as ksh93 but slightly faster than zsh and twice as fast as yash . The main reason for bash being slow is that it reads from the pipe 128 bytes at a time (while other shells read 4KiB or 8KiB at a time) and is penalised by the system call overhead. zsh needs to do some post-processing to escape NUL bytes (other shells break on NUL bytes), and yash does even more heavy-duty processing by parsing multi-byte characters. All shells need to strip the trailing newline characters which they may be doing more or less efficiently. Some may want to handle NUL bytes more gracefully than others and check for their presence. Then once you have that big variable in memory, any manipulation on it generally involves allocating more memory and coping data across. Here, you're passing (were intending to pass) the content of the variable to echo . Luckily, echo is built-in in your shell, otherwise the execution would have likely failed with an arg list too long error. Even then, building the argument list array will possibly involve copying the content of the variable. The other main problem in your command substitution approach is that you're invoking the split+glob operator (by forgetting to quote the variable). For that, shells need to treat the string as a string of characters (though some shells don't and are buggy in that regard) so in UTF-8 locales, that means parsing UTF-8 sequences (if not done already like yash does), look for $IFS characters in the string. If $IFS contains space, tab or newline (which is the case by default), the algorithm is even more complex and expensive. Then, the words resulting from that splitting need to be allocated and copied. The glob part will be even more expensive. If any of those words contain glob characters ( * , ? , [ ), then the shell will have to read the content of some directories and do some expensive pattern matching ( bash 's implementation for instance is notoriously very bad at that). If the input contains something like /*/*/*/../../../*/*/*/../../../*/*/* , that will be extremely expensive as that means listing thousands of directories and that can expand to several hundred MiB. Then echo will typically do some extra processing. Some implementations expand \x sequences in the argument it receives, which means parsing the content and probably another allocation and copy of the data. On the other hand, OK, in most shells cat is not built-in, so that means forking a process and executing it (so loading the code and the libraries), but after the first invocation, that code and the content of the input file will be cached in memory. On the other hand, there will be no intermediary. cat will read large amounts at a time and write it straight away without processing, and it doesn't need to allocate huge amount of memory, just that one buffer that it reuses. 
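(If you want to put rough numbers on that yourself, a comparison along these lines should make the gap obvious; it reuses the ~50MB input file from the question and discards the output, and the exact figures will depend on the shell and the data:
time sh -c 'for j in 1 2 3 4 5 6 7 8 9 10; do cat input; done > /dev/null'
time sh -c 'i=$(cat input); for j in 1 2 3 4 5 6 7 8 9 10; do echo $i; done > /dev/null'
The second form pays for the command substitution, the unquoted split+glob expansion and echo's processing on every iteration, while the cat version just streams the file through.)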
It also means that it's a lot more reliable as it doesn't choke on NUL bytes and doesn't trim trailing newline characters (and doesn't do split+glob, though you can avoid that by quoting the variable, and doesn't expand escape sequence though you can avoid that by using printf instead of echo ). If you want to optimise it further, instead of invoking cat several times, just pass input several times to cat . yes input | head -n 100 | xargs cat Will run 3 commands instead of 100. To make the variable version more reliable, you'd need to use zsh (other shells can't cope with NUL bytes) and do it: zmodload zsh/mapfile
var=$mapfile[input]
repeat 10 print -rn -- "$var" If you know the input doesn't contain NUL bytes, then you can reliably do it POSIXly (though it may not work where printf is not builtin) with: i=$(cat input && echo .) || exit # add an extra .\n to avoid trimming newlines
i=${i%.} # remove that trailing dot (the \n was removed by cmdsubst)
n=10
while [ "$n" -gt 10 ]; do
printf %s "$i"
n=$((n - 1))
done But that is never going to be more efficient than using cat in the loop (unless the input is very small). | {
"source": [
"https://unix.stackexchange.com/questions/230206",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/132907/"
]
} |
230,238 | It seems like every application launched from the terminal gives warnings and error messages, even though it appears to run fine. Emacs: ** (emacs:5004): WARNING **: Couldn't connect to accessibility bus:
Failed to connect to socket /tmp/dbus-xxfluS2Izg: Connection refused Evince: ** (evince:5052): WARNING **: Couldn't connect to accessibility bus:
Failed to connect to socket /tmp/dbus-xxfluS2Izg: Connection refused
(evince:4985): Gtk-CRITICAL **: gtk_widget_show: assertion
'GTK_IS_WIDGET (widget)' failed
(evince:4985): Gtk-CRITICAL **: gtk_widget_show: assertion
'GTK_IS_WIDGET (widget)' failed Firefox: (process:5059): GLib-CRITICAL **: g_slice_set_config: assertion
'sys_page_size == 0' failed The list goes on. Is this behavior common or is there something wrong with my system? How I fix these issues? | Unfortunately, GTK libraries (used in particular by GNOME) tend to emit a lot of scary-looking messages. Sometimes these messages indicate potential bugs, sometimes they're totally spurious, and it's impossible to tell which is which without delving deep into the code. As an end user, you can't do anything about it. You can report those as bugs (even if the program otherwise behaves correctly, emitting spurious error messages is a bug), but when the program is basically working, these bugs are understandably treated as very low priority. The accessibility warning is a known bug with an easy workaround if you don't use any accessibility feature: export NO_AT_BRIDGE=1 In my experience, Gtk-CRITICAL bugs are completely spurious; while they do indicate a programming error somewhere, they shouldn't be reported to end-users, only to the developer who wrote the program (or the underlying library — often the developer of the program itself can't do anything about it because it's a bug in a library that's called by a library that's called by a library that's used in the program). | {
"source": [
"https://unix.stackexchange.com/questions/230238",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/134562/"
]
} |
230,308 | I just received a new USB flash drive, and set up 2 encrypted partitions on it. I used dm-crypt (LUKS mode) through cryptsetup . With an additional non-encrypted partition, the drive has the following structure: /dev/sdb1 , encrypted, hiding an ext4 filesystem labelled "Partition 1". /dev/sdb2 , encrypted, hiding another ext4 filesystem, labelled "Partition 2". /dev/sdb3 , clear, visible ext4 filesystem labelled "Partition 3". Because the labels are attached to the ext4 filesystems, the first two remain completely invisible as long as the partitions haven't been decrypted. This means that, in the meantime, the LUKS containers have no labels. This is particularly annoying when using GNOME (automount), in which case the partitions appear as " x GB Encrypted " and " y GB Encrypted " until I decide to unlock them. This isn't really a blocking problem, but it's quite annoying, since I really like my labels and would love to see them appear even when my partitions are still encrypted. Therefore, is there a way to attach labels to dm-crypt+LUKS containers, just like we attach labels to ext4 filesystems? Does the dm-crypt+LUKS header have some room for that, and if so, how may I set a label? Note that I don't want to expose my ext4 labels before decryption, that would be silly. I'd like to add other labels to the containers, which could appear while the ext4 labels are hidden. | For a permanent solution to change the label of the container , use: sudo cryptsetup config /dev/sdb1 --label YOURLABEL Edit: Notice that labeling only works with Luks2 headers. In any case, it is possible to convert a Luks1 header into Luks2 with: sudo cryptsetup convert /dev/sdb1 --type luks2 OBS: Please notice that Luks2 header occupy more space, which can reduce the total number of key slots. Converting Luks2 back to Luks1 is also possible, but there are reports of people who have had problems or difficulties in converting back. | {
"source": [
"https://unix.stackexchange.com/questions/230308",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/41892/"
]
} |
230,330 | I copied a snippet of Bash to background an ssh command executed remotely: ssh user@remote <<CMD
some process <&- >log 2>error &
CMD What does <&- do? My guess is that it is the same as < /dev/null My next understanding is that the three main file descriptors ( stdin , stdout , stderr ) need to be closed to prevent: The job being backgrounded and the script exiting -- conflicting
somehow? When the terminal closes, all processes that are
accepting stdin from terminal are closed? | <&- is not quite the same thing as < /dev/null . <&- closes fd 0, whereas < /dev/null redirects it from the device /dev/null , which never provides any data and always gives EOF on read. The difference is mostly that a read(2) call from a closed FD (the <&- case) will error with EBADF, whereas a call from a null-redirected FD will return no bytes read (end-of-file condition). If your program never reads from stdin, the distinction doesn't matter. Closing the FDs is good practice if you're backgrounding something, since a backgrounded process will hang if it tries to read anything from TTY. This example doesn't fully handle everything it should, though; ideally there would be a nohup or setsid invocation somewhere, to fully disassociate the background process. | {
"source": [
"https://unix.stackexchange.com/questions/230330",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/74847/"
]
} |
230,472 | Suppose my non-root 32-bit app runs on a 64-bit system, all filesystems of which are mounted as read-only. The app creates an image of a 64-bit ELF in memory. But due to read-only filesystems it can't dump this image to a file to do an execve on. Is there still a supported way to launch a process from this image? Note: the main problem here is to switch from 32-bit mode to 64-bit, not doing any potentially unreliable hacks . If this is solved, then the whole issue becomes trivial — just make a custom loader. | Yes, via memfd_create and fexecve : int fd = memfd_create("foo", MFD_CLOEXEC);
// write your image to fd however you want
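// the fd refers to an anonymous, memory-backed file, so nothing has to touch the read-only filesystems; fexecve executes it straight from that fd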
fexecve(fd, argv, envp); | {
"source": [
"https://unix.stackexchange.com/questions/230472",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/27672/"
]
} |
230,481 | I am using Ubuntu, and the youtube-dl command is working absolutely fine. However, now I want to download only a portion of a video that is too long. So I want to download only a few minutes of that video, e.g. from minute 13 to minute 17. Is there any way to do that? | I don't believe youtube-dl alone will do what you want. However, you can combine it with a command line utility like ffmpeg. First acquire the actual URL using youtube-dl: youtube-dl -g "https://www.youtube.com/watch?v=V_f2QkBdbRI" Copy the output of the command and paste it as part of the -i parameter of the next command: ffmpeg -ss 00:00:15.00 -i "OUTPUT-OF-FIRST URL" -t 00:00:10.00 -c copy out.mp4 The -ss parameter in this position states to discard all input up until 15 seconds into the video. The -t option states to capture for 10 seconds. The rest of the command tells it to store as an mp4. ffmpeg is a popular tool and should be in any of the popular OS repositories/package managers. | {
"source": [
"https://unix.stackexchange.com/questions/230481",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/134732/"
]
} |
230,634 | I have access to some Xeon machines for checking performance. I want to find out what microarchitecture they are using, such as Haswell, Sandy Bridge or Ivy Bridge. Is there a command to find this out? | It's a bit of a cheap workaround, but you could get that info from gcc !
I'll explain : gcc is able to optimize binaries for each subarch with the -march option. Moreover, it is able to detect yours and automatically optimize for your machine with -march=native
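(As an aside, if gcc happens not to be installed on those machines, you can instead look up the CPU model name reported by lscpu or grep 'model name' /proc/cpuinfo ; the Xeon E5 v3 parts, for example, are Haswell. That is a manual lookup rather than a direct answer, though.)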
Assuming gcc's detection works, you just have to call it with -march=native and ask it what flags it would use:
In short: gcc -march=native -Q --help=target | grep march For me it gives -march=bdver1 , since my PC runs an AMD Bulldozer processor. | {
"source": [
"https://unix.stackexchange.com/questions/230634",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/134842/"
]
} |
230,673 | I would like to generate a random string (e.g. passwords, user names, etc.). It should be possible to specify the needed length (e.g. 13 chars). What tools can I use? (For security and privacy reasons, it is preferable that strings are generated off-line, as opposed to online on a website.) | My favorite way to do it is by using /dev/urandom together with tr to delete unwanted characters. For instance, to get only digits and letters: tr -dc A-Za-z0-9 </dev/urandom | head -c 13 ; echo '' Alternatively, to include more characters from the OWASP password special characters list : tr -dc 'A-Za-z0-9!"#$%&'\''()*+,-./:;<=>?@[\]^_`{|}~' </dev/urandom | head -c 13 ; echo If you have some problems with tr complaining about the input, try adding LC_ALL=C like this: LC_ALL=C tr -dc 'A-Za-z0-9!"#$%&'\''()*+,-./:;<=>?@[\]^_`{|}~' </dev/urandom | head -c 13 ; echo | {
"source": [
"https://unix.stackexchange.com/questions/230673",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/55183/"
]
} |
230,800 | I am trying to convert my video library to HEVC format to gain space. I ran the following command on all of the video files in my library: #!/bin/bash
for i in *.mp4;
do
#Output new files by prepending "X265" to the names
avconv -i "$i" -c:v libx265 -c:a copy X265_"$i"
done Now, most videos convert fine and the quality is the same as before. However, a few videos which are of very high quality (e.g. one movie print which is of 5GB) loses quality -- the video is all pixelated. I am not sure what to do in this case. Do I need to modify the crf parameter in my command line? Or something else? The thing is, I am doing a bulk conversion. So, I need a method where avconv automatically adjusts whatever parameter needs adjustment, for each video. UPDATE-1 I found that crf is the knob I need to adjust. The default CRF is 28. For better quality, I could use something less than 28. For example: avconv -i input.mp4 -c:v libx265 -x265-params crf=23 -c:a copy output.mp4 However, the problem is that for some videos CRF value of 28 is good enough, while for some videos, lower CRF is required. This is something which I have to check manually by converting small sections of the big videos. But in bulk conversion, how would I check each video manually? Is their some way that avconv can adjust CRF according to the input video intelligently? UPDATE-2 I found that there is a --lossless option in x265: http://x265.readthedocs.org/en/default/lossless.html . However, I don't know how to use it correctly. I tried using it in the following manner but it yielded opposite results (the video was even more pixelated): avconv -i input.mp4 -c:v libx265 -x265-params lossless -c:a copy output.mp4 | From my own experience, if you want absolutely no loss in quality, --lossless is what you are looking for. Not sure about avconv but the command you typed looks identical to what I do with FFmpeg . In FFmpeg you can pass the parameter like this: ffmpeg -i INPUT.mkv -c:v libx265 -preset ultrafast -x265-params lossless=1 OUTPUT.mkv Most x265 switches (options with no value) can be specified like this (except those CLI-only ones, those are only used with x265 binary directly). With that out of the way, I'd like to share my experience with x265 encoding. For most videos (be it WMV, or MPEG, or AVC/H.264) I use crf=23 . x265 decides the rest of the parameters and usually it does a good enough job. However often before I commit to transcoding a video in its entirety, I test my settings by converting a small portion of the video in question. Here's an example, suppose an mkv file with stream 0 being video, stream 1 being DTS audio, and stream 2 being a subtitle: ffmpeg -hide_banner \
-ss 0 \
-i "INPUT.mkv" \
-attach "COVER.jpg" \
-map_metadata 0 \
-map_chapters 0 \
-metadata title="TITLE" \
-map 0:0 -metadata:s:v:0 language=eng \
-map 0:1 -metadata:s:a:0 language=eng -metadata:s:a:0 title="Surround 5.1 (DTS)" \
-map 0:2 -metadata:s:s:0 language=eng -metadata:s:s:0 title="English" \
-metadata:s:t:0 filename="Cover.jpg" -metadata:s:t:0 mimetype="image/jpeg" \
-c:v libx265 -preset ultrafast -x265-params \
crf=22:qcomp=0.8:aq-mode=1:aq_strength=1.0:qg-size=16:psy-rd=0.7:psy-rdoq=5.0:rdoq-level=1:merange=44 \
-c:a copy \
-c:s copy \
-t 120 \
"OUTPUT.HEVC.DTS.Sample.mkv" Note that the backslashes signal line breaks in a long command, I do it to help me keep track of various bits of a complex CLI input. Before I explain it line-by-line, the part where you convert only a small portion of a video is the second line and the second last line: -ss 0 means seek to 0 second before starts decoding the input, and -t 120 means stop writing to the output after 120 seconds. You can also use hh:mm:ss or hh:mm:ss.sss time formats. Now line-by-line: -hide_banner prevents FFmpeg from showing build information on start. I just don' want to see it when I scroll up in the console; -ss 0 seeks to 0 second before start decoding the input. Note that if this parameter is given after the input file and before the output file, it becomes an output option and tells ffmpeg to decode and ignore the input until x seconds, and then start writing to output. As an input option it is less accurate (because seeking is not accurate in most container formats), but takes almost no time. As an output option it is very precise but takes a considerable amount of time to decode all the stream before the specified time, and for testing purpose you don't want to waste time; -i "INPUT.mkv" : Specify the input file; -attach "COVER.jpg" : Attach a cover art (thumbnail picture, poster, whatever) to the output. The cover art is usually shown in file explorers; -map_metadata 0 : Copy over any and all metadata from input 0, which in the example is just the input; -map_chapters 0 : Copy over chapter info (if present) from input 0; -metadata title="TITLE" : Set the title of the video; -map 0:0 ... : Map stream 0 of input 0, which means we want the first stream from the input to be written to the output. Since this stream is a video stream, it is the first video stream in the output , hence the stream specifier :s:v:0 . Set its language tag to English; -map 0:1 ... : Similar to line 8, map the second stream (DTS audio), and set its language and title (for easier identification when choosing from players); -map 0:2 ... : Similar to line 9, except this stream is a subtitle; -metadata:s:t:0 ... : Set metadata for the cover art. This is required for mkv container format; -c:v libx265 ... : Video codec options. It's so long that I've broken it into two lines. This setting is good for high quality bluray video (1080p) with minimal banding in gradient (which x265 sucks at). It is most likely an overkill for DVDs and TV shows and phone videos. This setting is mostly stolen from this Doom9 post ; crf=22:... : Continuation of video codec parameters. See the forum post mentioned above; -c:a copy : Copy over audio; -c:s copy : Copy over subtitles; -t 120 : Stop writing to the output after 120 seconds, which gives us a 2-minute clip for previewing trancoding quality; "OUTPUT.HEVC.DTS.Sample.mkv" : Output file name. I tag my file names with the video codec and the primary audio codec. Whew. This is my first answer so if there is anything I missed please leave a comment. I'm not a video production expert, I'm just a guy who's too lazy to watch a movie by putting the disc into the player. PS. Maybe this question belongs to somewhere else as it isn't strongly related to Unix & Linux. | {
"source": [
"https://unix.stackexchange.com/questions/230800",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/89385/"
]
} |
231,265 | I am wondering about the difference between these two commands (i.e. only the order of their options is different): tar -zxvf foo.tar.gz tar -zfxv foo.tar.gz The first one ran perfectly but the second one said: tar: You must specify one of the `-Acdtrux' or `--test-label' options
Try `tar --help' or `tar --usage' for more information. And tar with --test-label and -zfxv said : tar (child): xv: Cannot open: No such file or directory
tar (child): Error is not recoverable: exiting now
tar: Child returned status 2
tar: Error is not recoverable: exiting now Then I looked at the tar manual and realised that all the examples there use the -f switch at the end! AFAICT there is no need for this restriction, or is there? Because in my view switches should be order-free. | The order of switches is free, but -f has a mandatory argument, which is the file that tar will read/write. You could do tar -zf foo.tar.gz -xv and that will work, satisfying your requirement of a non-specific order of switches. This is how all commands whose options take arguments work. | {
"source": [
"https://unix.stackexchange.com/questions/231265",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/135244/"
]
} |
231,273 | Currently I have Linux Mint installed on my PC with a USB hard drive partition mounted as /home . This is working well. If I install a second USB hard drive, is there any chance Linux will get confused between the two, and try mount the second hard drive's partition as /home on boot? That would be bad. Coming from Windows, I've seen it happen often that drive letters are not "remembered" correctly causing all sorts of issues. I guess the main question is: How does Linux actually know which USB hard drive is /dev/sdb and which is /media/misha/my_2nd_drive ? | Usually the location of the USB port (Bus/Device) determines the order it's detected on. However, don't rely on this. Each file system has a UUID which stands for universally unique identifier ( FAT and NTFS use a slightly different scheme, but they also have an identifier that can be used as a UUID). You can rely on the (Linux) UUID to be unique. For more information about UUIDs, see this Wikipedia article . Use the disk UUID as a mount argument. To find out what the UUID is, run this: $ sudo blkid /dev/sdb1 ( blkid needs to read the device, hence it needs root powers, hence the sudo . If you've already become root, the sudo is not needed.) You can then use that UUID in /etc/fstab like this: UUID=7e839ad8-78c5-471f-9bba-802eb0edfea5 /home ext4 defaults 0 2 There can then be no confusion about what disk is to be mounted on /home. For manual mounting you can use /dev/disk/by-uuid/..... | {
"source": [
"https://unix.stackexchange.com/questions/231273",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/133840/"
]
} |
231,386 | On Fedora 22, gpg doesn't find gpg-agent: % gpg-agent --daemon
% gpg -vvv --use-agent --no-tty --decrypt file.gpg
gpg: using character set `utf-8'
:pubkey enc packet: version 3, algo 1, keyid 3060B8F7271AFBAF
data: [4094 bits]
gpg: public key is 271AFBAF
gpg: using subkey 271AFBAF instead of primary key 50EA64D5
gpg: using subkey 271AFBAF instead of primary key 50EA64D5
gpg: gpg-agent is not available in this session
gpg: Sorry, no terminal at all requested - can't get input | Looking at the versions reveals the problem: % gpg-agent --version
gpg-agent (GnuPG) 2.1.7
% gpg --version
gpg (GnuPG) 1.4.19 The components come from different packages ( gnupg2-2.1.7-1.fc22.x86_64 and gnupg-1.4.19-2.fc22.x86_64 in my case). The solution is to use the gpg2 command instead of gpg . | {
"source": [
"https://unix.stackexchange.com/questions/231386",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/24042/"
]
} |
231,605 | We can use the up and down arrows to navigate the command history. In some IDEs, such as Matlab, if we input something and then press the arrow keys, we scroll among only the history commands starting with what we have input. That's really convenient, but in a shell terminal this doesn't work. Is there some way to gain a similar function in a shell terminal? And any other tips for improving efficiency in terminal use? | What you are looking for is Ctrl R . Type Ctrl R and then type part of the command you want. Bash will display the first matching command. Keep typing Ctrl R and bash will cycle through previous matching commands. To search in the opposite direction (forwards through the history), type Ctrl S instead. (If Ctrl S doesn't work that way for you, that likely means that you need to disable XON/XOFF flow control: to do that, run stty -ixon .) This is documented under "Searching" in man bash . | {
"source": [
"https://unix.stackexchange.com/questions/231605",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/67765/"
]
} |
231,676 | I have a simple bash script that starts two servers: #!/bin/bash
(cd ./frontend && gulp serve) & (cd ./backend && gulp serve --verbose) If the second command exits, it seems that the first command continues running. How can I change this so that if either command exits, the other is terminated? Note that we don't need to check the error levels of the background processes, just whether they have exited. | This starts both processes, waits for the first one that finishes and then kills the other: #!/bin/bash
{ cd ./frontend && gulp serve; } &
{ cd ./backend && gulp serve --verbose; } &
wait -n
pkill -P $$ How it works Start: { cd ./frontend && gulp serve; } &
{ cd ./backend && gulp serve --verbose; } & The above two commands start both processes in background. Wait wait -n This waits for either background job to terminate. Because of the -n option, this requires bash 4.3 or better. Kill pkill -P $$ This kills any job for which the current process is the parent. In other words, this kills any background process that is still running. If your system does not have pkill , try replacing this line with: kill 0 which also kills the current process group . Easily testable example By changing the script, we can test it even without gulp installed: $ cat script.sh
#!/bin/bash
{ sleep $1; echo one; } &
{ sleep $2; echo two; } &
wait -n
pkill -P $$
echo done The above script can be run as bash script.sh 1 3 and the first process terminates first. Alternatively, one can run it as bash script.sh 3 1 and the second process will terminate first. In either case, one can see that this operates as desired. | {
"source": [
"https://unix.stackexchange.com/questions/231676",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/61668/"
]
} |
232,384 | Trying to figure out how to convert an argument to an integer to perform arithmetic on, and then print it out, say for addOne.sh : echo $1 + 1
>>sh addOne.sh 1
prints 1 + 1 | In bash, one does not "convert an argument to an integer to perform arithmetic". In bash, variables are treated as integer or string depending on context. (If you are using a variable in an integer context, then, obviously, the variable better contain a string that looks like a valid integer. Otherwise, you'll get an error.) To perform arithmetic, you should invoke the arithmetic expansion operator $((...)) . For example: $ a=2
$ echo "$a + 1"
2 + 1
$ echo "$(($a + 1))"
3 or generally preferred: $ echo "$((a + 1))"
3 You should be aware that bash (as opposed to ksh93, zsh or yash) only performs integer arithmetic. If you have floating point numbers (numbers with decimals), then there are other tools to assist. For example, use bc : $ b=3.14
$ echo "$(($b + 1))"
bash: 3.14 + 1: syntax error: invalid arithmetic operator (error token is ".14 + 1")
$ echo "$b + 1" | bc -l
4.14 Or you can use a shell with floating point arithmetic support instead of bash: zsh> echo $((3.14 + 1))
4.14 | {
"source": [
"https://unix.stackexchange.com/questions/232384",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/135986/"
]
} |
232,657 | My sed command is sed '/(.*:)/d' <<< 'abcd:bcde:cdeaf' It must return bcde:cdeaf , i.e. all characters before the first colon in the line and the colon itself must be removed. But this is not removing anything. My confusion arises mainly from: 1) Do parens for pattern matching need to be escaped inside sed regexes? 2) In either case (with escaping/no escaping), it is not working.
I tried, sed -E '/\\(.*:\\)/d' <<< 'abcd:bcde' | The d command in sed deletes a whole line. What you want to use here is an s (substitution) command. $ echo 'abcd:bcde:cdeaf' | sed 's/[^:]*://'
bcde:cdeaf The [^:] is how to write "not a colon". The * after the not-a-colon expression means "any number of the things right before me" (in this case, the not-colon). Finally, the : selects a colon. Matches always happen from left to right in a greedy fashion, so this is guaranteed to match the first part of the given string, up to and including the first colon. In other words, select any number of things that aren't a colon and the first colon. The // means to replace the matched substring with nothing (i.e. delete it). | {
"source": [
"https://unix.stackexchange.com/questions/232657",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/-1/"
]
} |
232,666 | What is the difference between following 2 commands? cp -rp /dir1/*.* /dir2/
cp -rp /dir1/* /dir2/ | *.* only matches filenames with a dot in the middle or at the end. For example: abc.jpg
def. * matches the filenames above, plus the names which don't have a dot at all. for example: data | {
"source": [
"https://unix.stackexchange.com/questions/232666",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/42312/"
]
} |
232,708 | How to format output of ps -p command? To not show me something like this: PID TTY TIME CMD but just PIDs. I'm using Linux. | Use the -o option to select which columns are displayed. If you put = after the column name, the header line is suppressed. ps -o pid= -p 1 23 456
ps -o pid= -o ppid= -o pgid= -o sid= -p 1 23 456 | {
"source": [
"https://unix.stackexchange.com/questions/232708",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/136197/"
]
} |
232,718 | I have two files. The first file has 11 columns, for example: 1 2 3 4 5 6 7 8 9 10 11 The second has 10 columns and might look like this: 11 22 33 44 55 66 77 88 99 100 What I want to do is look at file1, and if column 7 is some value, for example say it's between 14 and 15, then replace column 9 of file1 with the the value of column 9 from file2. So in my example above, file1 would be rewritten as: 1 2 3 4 5 6 7 8 99 10 11 Checking to see if a column is between a certain value is trivial: awk '$7 < 15 && $7 >= 14 However, I'm having problems replacing column 9 of file1 with the value from the file2. file1 is NOT necessarily just one row. It could have any number of rows, and in every instance where the value is between 14 and 15 column 9 needs to be replaced. If the value is less than 14 or greater than 15 then the columns should remain as is. I don't believe this should be difficult, but I'm not having any luck. Help would be appreciated, and thanks in advance! | Use the -o option to select which columns are displayed. If you put = after the column name, the header line is suppressed. ps -o pid= -p 1 23 456
ps -o pid= -o ppid= -o pgid= -o sid= -p 1 23 456 | {
"source": [
"https://unix.stackexchange.com/questions/232718",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/130145/"
]
} |
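A minimal awk sketch for the two-file question above, assuming the two files have the same number of rows and that rows correspond by line number (file1 and file2 are the question's own names):
awk 'NR==FNR {col9[FNR] = $9; next}          # first pass: remember column 9 of file2
     $7 >= 14 && $7 < 15 {$9 = col9[FNR]}    # second pass: replace column 9 of file1
     1' file2 file1 > file1.new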
232,782 | Compare Debian (left) and Ubuntu (right): $ ifconfig $ ifconfig
bash: ifconfig: command not found eth0 Link encap ...
$ which ifconfig $ which ifconfig
$ /sbin/ifconfig Then as superuser: # ifconfig # ifconfig
eth0 Link encap ... eth0 Link encap ...
# which ifconfig # which ifconfig
/sbin/ifconfig /sbin/ifconfig Furthermore: # ls -l /sbin/ifconfig # ls -l /sbin/ifconfig
-rwxr-xr-x 1 root root 68360 ... -rwxr-xr-x 1 root root 68040 ... It seems to me the only reason I cannot run ifconfig without superpowers on Debian is that it's not in my path. When I use /sbin/ifconfig it does work. Is there any reason I should not add /usr/local/sbin:/usr/sbin:/sbin to my path on Debian? This is a personal computer, I am the only human user. Versions used ( uname -a ): Ubuntu: Linux ubuntu 3.13.0-51-generic #84-Ubuntu SMP Wed Apr 15 12:08:34 UTC 2015 x86_64 x86_64 x86_64 GNU/Linux Debian: Linux debian 3.16.0-4-amd64 #1 SMP Debian 3.16.7-ckt11-1+deb8u3 (2015-08-04) x86_64 GNU/Linux | In the Debian Policy is written that Debian follows the File Hierarchy Standard version 2.3. Note #19 on the standard says: Deciding what things go into "sbin" directories is simple: if a
normal (not a system administrator) user will ever run it directly,
then it must be placed in one of the "bin" directories. Ordinary users
should not have to place any of the sbin directories in their path. For example, files such as chfn which users only occasionally use must
still be placed in /usr/bin. ping, although it is absolutely necessary
for root (network recovery and diagnosis) is often used by users and
must live in /bin for that reason. We recommend that users have read and execute permission for
everything in /sbin except, perhaps, certain setuid and setgid
programs. The division between /bin and /sbin was not created for
security reasons or to prevent users from seeing the operating system,
but to provide a good partition between binaries that everyone uses
and ones that are primarily used for administration tasks. There is no
inherent security advantage in making /sbin off-limits for users. Short answer: Is there any reason I should not add /usr/local/sbin:/usr/sbin:/sbin to my path on Debian? As the note states, there is no reason why you should not do that. Since you're the only one using the system and you need the binaries in the sbin directories, feel free to add them to your $PATH . At this point let me guide you to an excellent answer on how to do that correctly. | {
"source": [
"https://unix.stackexchange.com/questions/232782",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/-1/"
]
} |
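A minimal sketch of persistently adding the sbin directories for a single user, assuming a Bourne-style login shell that reads ~/.profile (the guard just avoids adding the entries twice):
# in ~/.profile
case ":$PATH:" in
  *:/usr/sbin:*) ;;                                   # already present, nothing to do
  *) PATH="/usr/local/sbin:/usr/sbin:/sbin:$PATH" ;;
esac
export PATH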
232,881 | Is there a way to change the background color of an rxvt-unicode session on the fly? Like with a Ctrl key combination? I have a bunch of Urxvt windows and I would like to color some dynamically to help me distinguish them. But again, I mean on the fly... | urxvt 2.6 in 2004 added support for xterm's dynamic colors feature. In XTerm Control Sequences , this is OSC 11. OSC 10 sets the default text color. The changelog mentioned part of the change: 2.6 Fri Apr 2 03:24:10 CEST 2004
- minor doc corrections.
- WARNING: changed menu sequence from ESC ] 10 to ESC ] 703 to
avoid clashes with xterm.
- changed OSC701/OSC702 sequence to return standard escaped reports.
- xterm-compat: make set window colour and other requests report
window colour when arg is "?". but the source-code tells the story, as usual: /*
* XTerm escape sequences: ESC ] Ps;Pt (ST|BEL)
* 0 = change iconName/title
* 1 = change iconName
* 2 = change title
* 4 = change color
+ * 10 = change fg color
+ * 11 = change bg color
* 12 = change text color
* 13 = change mouse foreground color
* 17 = change highlight character colour
@@ -2949,20 +3236,21 @@
* 50 = change font
*
* rxvt extensions:
- * 10 = menu (may change in future)
* 20 = bg pixmap
* 39 = change default fg color
* 49 = change default bg color
* 55 = dump scrollback buffer and all of screen
* 701 = change locale
* 702 = find font
+ * 703 = menu
*/ The manual rxvt(7) gives no useful information: XTerm Operating System Commands
"ESC ] Ps;Pt ST"
Set XTerm Parameters. 8-bit ST: 0x9c, 7-bit ST sequence: ESC \
(0x1b, 0x5c), backwards compatible terminator BEL (0x07) is also
accepted. any octet can be escaped by prefixing it with SYN (0x16,
^V). This simple example sets both foreground (text) and background default colors: #!/bin/sh
printf '\033]10;red\007'
printf '\033]11;green\007' Like xterm , these default colors can be overridden temporarily by "ANSI" colors. The feature can be disabled in xterm using the dynamicColors resource. Unlike xterm , urxvt has no resource-setting for the feature. VTE also implements the feature, and likewise doesn't document it. urxvt at least started with documentation from rxvt . For VTE, you have to read the source code. The relevant feature in vteseq.cc looks like this: /* Change the default background cursor, BEL terminated */
static void
vte_sequence_handler_change_background_color_bel (VteTerminalPrivate *that, GValueArray *params)
{
vte_sequence_handler_change_special_color_internal (that, params,
VTE_DEFAULT_BG, -1, 11, BEL);
}
/* Change the default background cursor, ST terminated */
static void
vte_sequence_handler_change_background_color_st (VteTerminalPrivate *that, GValueArray *params)
{
vte_sequence_handler_change_special_color_internal (that, params,
VTE_DEFAULT_BG, -1, 11, ST);
} That code dates back to sometime in 2003 (when it was written in C): commit f39e281529827f68fd0e9bba41785d66a21efc1c
Author: Nalin Dahyabhai <[email protected]>
Date: Wed Jan 22 21:35:22 2003 +0000
accept OSC{number};{string}ST as set-text-parameters, per XTerm docs (part
* src/caps.c: accept OSC{number};{string}ST as set-text-parameters, per XTerm
docs (part of #104154).
* src/keymap.c: revert change to prepend "1;" to keys with modifiers (#104139). Further reading: Can I set a color by its number? (xterm FAQ) | {
"source": [
"https://unix.stackexchange.com/questions/232881",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/81854/"
]
} |
232,946 | My goal is to copy all files from ~/local_dir to [email protected] /var/www/html/target_dir using scp, without creating a local_dir directory inside the target: I want /var/www/html/target_dir/files.. but not /var/www/html/target_dir/local_dir/files.. when using the -r parameter | scp has the -r argument. So, try using: $ scp -r ~/local_dir [email protected]:/var/www/html/target_dir The -r argument works just like the -r arg in cp, it will transfer your entire folder and all the files and subdirectories inside. | {
"source": [
"https://unix.stackexchange.com/questions/232946",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/135235/"
]
} |
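A hedged note on the original goal of not recreating local_dir on the remote side: copying the directory's contents rather than the directory itself should achieve that. Illustrative commands, with user@host standing in for the obfuscated address above:
scp -r ~/local_dir/* user@host:/var/www/html/target_dir/     # contents only (skips hidden files)
rsync -av ~/local_dir/ user@host:/var/www/html/target_dir/   # contents only, including hidden files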
233,123 | I'm in an operating systems class. Coming up, we have to do some work modifying kernel code. We have been advised not to use personal machines to test (I suppose this means install it) as we could write bad code and write over somewhere we shouldn't. We are given access to a machine in a lab to be safe. If I were to test using a VM, would that protect the host system from potentially unsafe code? I really want to not have to be stuck to a system at school and snapshots will be useful. If it is still high risk, any suggestions on what I need to consider to test safely? We will be using something like linuxmint to start with. If anyone wants to see what will be in the current project: http://www.cs.fsu.edu/~cop4610t/assignments/project2/writeup/specification.pdf | The main risks developing kernel modules are that you can crash your system much more easily than with regular code, and you'll probably find that you sometimes create modules that can't be unloaded which means you'll have to reboot to re-load them after you fix what's wrong. Yes, a VM is fine for this kind of development and it's what I use when I'm working on kernel modules. The VM nicely isolates your test environment from your running system. If you're going to take and restore snapshots, you should keep your source code checked in to a version control repository outside the VM so you don't accidentally lose your latest code when you discard the VM's current state. | {
"source": [
"https://unix.stackexchange.com/questions/233123",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/136496/"
]
} |
233,275 | Which permissions affect hard link creation? Does file ownership itself matter? Suppose user alice wants to create a hard link to the file target.txt in a directory target-dir . Which permissions does alice need on both target.txt and target-dir ? If target.txt is owned by user bill and target-dir is owned by user chad , does that change anything? I've tried to simulate this situation by creating the following folder/file structure on an ext4 filesystem: #> ls -lh . *
.:
drwxr-xr-x 2 bill bill 60 Oct 1 11:29 source-dir
drwxrwxrwx 2 chad chad 60 Oct 1 11:40 target-dir
source-dir:
-r--r--r-- 1 bill bill 0 Oct 1 11:29 target.txt
target-dir:
-rw-rw-r-- 1 alice alice 0 Oct 1 11:40 dummy While alice can create a soft link to target.txt , she can't create a hard link: #> ln source-dir/target.txt target-dir/
ln: failed to create hard link ‘target-dir/target.txt’ => ‘source-dir/target.txt’: Operation not permitted If alice owns target.txt and no permissions are changed, the hard link succeeds. What am I missing here? | To create the hard link, alice will need write+execute permissions on target-dir in all cases. The permissions needed on target.txt will vary: If fs.protected_hardlinks = 1 then alice needs either ownership of target.txt or at least read+write permissions on it. If fs.protected_hardlinks = 0 then any set of permissions will do; even 000 is okay. This answer to a similar question had the missing piece of information to answer this question. From this commit message (emphasis mine): On systems that have user-writable directories on the same partition
as system files, a long-standing class of security issues is the
hardlink-based time-of-check-time-of-use race, most commonly seen in
world-writable directories like /tmp. The common method of exploitation
of this flaw is to cross privilege boundaries when following a given
hardlink (i.e. a root process follows a hardlink created by another
user). Additionally, an issue exists where users can "pin" a potentially
vulnerable setuid/setgid file so that an administrator will not actually
upgrade a system fully. The solution is to permit hardlinks to only be created when the user is
already the existing file's owner, or if they already have read/write
access to the existing file . | {
"source": [
"https://unix.stackexchange.com/questions/233275",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/134135/"
]
} |
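A short sketch of inspecting the sysctl mentioned in the answer (the paths are standard Linux, but the default value differs between distributions):
sysctl fs.protected_hardlinks              # or: cat /proc/sys/fs/protected_hardlinks
sudo sysctl -w fs.protected_hardlinks=0    # relax the restriction (usually not recommended)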
233,276 | I have a few years of experience with using Linux on the command line, but this is my first time trying to set it up with a GUI. I'm on CentOS 7 (64 bit) and I've run the following commands: yum groupinstall "X Window System" "Desktop"
yum install tigervnc-server xorg-x11-fonts-Type1
vncpasswd After using those commands to install stuff (a VNC server and Gnome, I think), I created this file at /root/.vnc/xstartup : #!/bin/sh
# Uncomment the following two lines for normal desktop:
unset SESSION_MANAGER
exec /etc/X11/xinit/xinitrc
[ -x /etc/vnc/xstartup ] && exec /etc/vnc/xstartup
[ -r $HOME/.Xresources ] && xrdb $HOME/.Xresources
xsetroot -solid grey
vncconfig -iconic &
xterm -geometry 80x24+10+10 -ls -title "$VNCDESKTOP Desktop" &
startx &
exec gnome-session & Then I tried starting the vnc server with just this: vncserver . This printed out: New '<VM-Name>:1 (root)' desktop is <VM-Name>:1
Starting applications specified in /root/.vnc/xstartup
Log file is /root/.vnc/<VM-Name>:1.log I launched VNC Viewer on my local machine (Windows 7, 64 bit) and connected to the VM, but all I saw was a dark gray background with 3 checkboxes in the top left corner regarding clipboards. I get an X for a cursor. Nothing I press on the keyboard seems to do anything. Everything VNC wise seems to be fine but I was expecting to have some sort of desktop from which I could browse my file system... or some other way of doing anything graphically with this VM. It seems like it must not be finding my window or desktop manager or something (my terminology might be off - please correct me if it is) - but my script said to launch gnome, and the VNC logs didn't indicate any issue, so shouldn't I see something other than a gray rectangle? Since I mentioned it, here's what's in my VNC logs ( /root/.vnc/<VM-Name>:1.log ): Xvnc TigerVNC 1.2.80 - built Jun 10 2014 06:14:52
Copyright (C) 1999-2011 TigerVNC Team and many others (see README.txt)
See http://www.tigervnc.org for information on TigerVNC.
Underlying X server release 11500000, The X.Org Foundation
Initializing built-in extension Generic Event Extension
Initializing built-in extension SHAPE
Initializing built-in extension MIT-SHM
Initializing built-in extension XInputExtension
Initializing built-in extension XTEST
Initializing built-in extension BIG-REQUESTS
Initializing built-in extension SYNC
Initializing built-in extension XKEYBOARD
Initializing built-in extension XC-MISC
Initializing built-in extension XFIXES
Initializing built-in extension RENDER
Initializing built-in extension RANDR
Initializing built-in extension COMPOSITE
Initializing built-in extension DAMAGE
Initializing built-in extension MIT-SCREEN-SAVER
Initializing built-in extension DOUBLE-BUFFER
Initializing built-in extension RECORD
Initializing built-in extension DPMS
Initializing built-in extension X-Resource
Initializing built-in extension XVideo
Initializing built-in extension XVideo-MotionCompensation
Initializing built-in extension VNC-EXTENSION
Initializing built-in extension GLX
Wed Sep 30 13:10:31 2015
vncext: VNC extension running!
vncext: Listening for VNC connections on all interface(s), port 5901
vncext: created VNC server for screen 0
Wed Sep 30 13:10:47 2015
Connections: accepted: <my ip>::47407
SConnection: Client needs protocol version 3.8
SConnection: Client requests security type VncAuth(2)
Wed Sep 30 13:11:02 2015
VNCSConnST: Server default pixel format depth 24 (32bpp) little-endian rgb888
VNCSConnST: Client pixel format depth 8 (8bpp) color-map
Wed Sep 30 14:27:49 2015
Connections: closed: <my ip>::47407 (Clean disconnection)
SMsgWriter: framebuffer updates 3
SMsgWriter: raw rects 1, bytes 16396
SMsgWriter: ZRLE rects 1, bytes 802
SMsgWriter: raw bytes equivalent 802840, compression ratio 46.682172 Nothing in here indicates any sort of error to me. Is there another log file I should check somewhere else? Should I somehow enter a debug mode for something (what/how?) Is there something missing from my xstartup script (with is +x executable, by the way). Is everything working fine and there's just some key combination I need to send to get a screen other than the blank gray screen? Is there something I should look for in netstat or ps that would indicate to me if things were or weren't working? Edit: After making changes suggested by roaima to my xstartup file and restarting VNC, this is the output I'm getting in the log file: Xvnc TigerVNC 1.2.80 - built Jun 10 2014 06:14:52
Copyright (C) 1999-2011 TigerVNC Team and many others (see README.txt)
See http://www.tigervnc.org for information on TigerVNC.
Underlying X server release 11500000, The X.Org Foundation
Initializing built-in extension Generic Event Extension
Initializing built-in extension SHAPE
Initializing built-in extension MIT-SHM
Initializing built-in extension XInputExtension
Initializing built-in extension XTEST
Initializing built-in extension BIG-REQUESTS
Initializing built-in extension SYNC
Initializing built-in extension XKEYBOARD
Initializing built-in extension XC-MISC
Initializing built-in extension XFIXES
Initializing built-in extension RENDER
Initializing built-in extension RANDR
Initializing built-in extension COMPOSITE
Initializing built-in extension DAMAGE
Initializing built-in extension MIT-SCREEN-SAVER
Initializing built-in extension DOUBLE-BUFFER
Initializing built-in extension RECORD
Initializing built-in extension DPMS
Initializing built-in extension X-Resource
Initializing built-in extension XVideo
Initializing built-in extension XVideo-MotionCompensation
Initializing built-in extension VNC-EXTENSION
Initializing built-in extension GLX
Thu Oct 1 12:01:36 2015
vncext: VNC extension running!
vncext: Listening for VNC connections on all interface(s), port 5901
vncext: created VNC server for screen 0
/root/.vnc/xstartup: line 8: gnome-session: command not found
/root/.vnc/xstartup: line 6: xterm: command not found
xauth: file /root/.serverauth.2286 does not exist
X.Org X Server 1.15.0
Release Date: 2013-12-27
X Protocol Version 11, Revision 0
Build Operating System: 2.6.32-220.17.1.el6.x86_64
Current Operating System: Linux InteractSL-TaylorCognosTest 3.10.0-229.7.2.el7.x86_64 #1 SMP Tue Jun 23 22:06:11 UTC 2015 x86_64
Kernel command line: BOOT_IMAGE=/vmlinuz-3.10.0-229.7.2.el7.x86_64 root=UUID=9bdbb9b7-a256-4676-8449-34b054b2950a ro vconsole.keymap=us crashkernel=auto vconsole.font=latarcyrheb-sun16 LANG=en_US.UTF-8
Build Date: 10 April 2015 11:44:42AM
Build ID: xorg-x11-server 1.15.0-33.el7_1
Current version of pixman: 0.32.4
Before reporting problems, check http://wiki.x.org
to make sure that you have the latest version.
Markers: (--) probed, (**) from config file, (==) default setting,
(++) from command line, (!!) notice, (II) informational,
(WW) warning, (EE) error, (NI) not implemented, (??) unknown.
(==) Log file: "/var/log/Xorg.0.log", Time: Thu Oct 1 12:01:39 2015
(==) Using config directory: "/etc/X11/xorg.conf.d"
(==) Using system config directory "/usr/share/X11/xorg.conf.d"
Initializing built-in extension Generic Event Extension
Initializing built-in extension SHAPE
Initializing built-in extension MIT-SHM
Initializing built-in extension XInputExtension
Initializing built-in extension XTEST
Initializing built-in extension BIG-REQUESTS
Initializing built-in extension SYNC
Initializing built-in extension XKEYBOARD
Initializing built-in extension XC-MISC
Initializing built-in extension XINERAMA
Initializing built-in extension XFIXES
Initializing built-in extension RENDER
Initializing built-in extension RANDR
Initializing built-in extension COMPOSITE
Initializing built-in extension DAMAGE
Initializing built-in extension MIT-SCREEN-SAVER
Initializing built-in extension DOUBLE-BUFFER
Initializing built-in extension RECORD
Initializing built-in extension DPMS
Initializing built-in extension Present
Initializing built-in extension X-Resource
Initializing built-in extension XVideo
Initializing built-in extension XVideo-MotionCompensation
Initializing built-in extension SELinux
Initializing built-in extension XFree86-VidModeExtension
Initializing built-in extension XFree86-DGA
Initializing built-in extension XFree86-DRI
Initializing built-in extension DRI2
Loading extension GLX
xinit: connection to X server lost
^M
waiting for X server to shut down
Thu Oct 1 12:01:39 2015
Connections: accepted: 129.42.208.178::30139
Thu Oct 1 12:01:40 2015
SConnection: Client needs protocol version 3.8
SConnection: Client requests security type VncAuth(2)
error setting MTRR (base = 0xf0000000, size = 0x00400000, type = 1) Invalid argument (22)
(EE) Server terminated successfully (0). Closing log file.
Thu Oct 1 12:01:41 2015
VNCSConnST: Server default pixel format depth 24 (32bpp) little-endian rgb888
VNCSConnST: Client pixel format depth 8 (8bpp) color-map
Thu Oct 1 12:05:11 2015
Connections: closed: 129.42.208.178::30139 (Clean disconnection)
SMsgWriter: framebuffer updates 3
SMsgWriter: raw rects 1, bytes 16396
SMsgWriter: ZRLE rects 1, bytes 773
SMsgWriter: raw bytes equivalent 802840, compression ratio 46.761023 | To create the hard link, alice will need write+execute permissions on target-dir on all cases. The permissions needed on target.txt will vary: If fs.protected_hardlinks = 1 then alice needs either ownership of target.txt or at least read+write permissions on it. If fs.protected_hardlinks = 0 then any set of permissions will do; Even 000 is okay. This answer to a similar question had the missing piece of information to answer this question. From this commit message (emphasis mine): On systems that have user-writable directories on the same partition
as system files, a long-standing class of security issues is the
hardlink-based time-of-check-time-of-use race, most commonly seen in
world-writable directories like /tmp. The common method of exploitation
of this flaw is to cross privilege boundaries when following a given
hardlink (i.e. a root process follows a hardlink created by another
user). Additionally, an issue exists where users can "pin" a potentially
vulnerable setuid/setgid file so that an administrator will not actually
upgrade a system fully. The solution is to permit hardlinks to only be created when the user is
already the existing file's owner, or if they already have read/write
access to the existing file . | {
"source": [
"https://unix.stackexchange.com/questions/233276",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/102759/"
]
} |
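A hedged sketch related to the VNC question above: the question's own log shows "gnome-session: command not found" and "xterm: command not found", which suggests the desktop packages are not actually installed on that host. After installing a desktop environment (on CentOS, a group such as "Desktop" or "GNOME Desktop" depending on the release), a much smaller xstartup is usually enough:
#!/bin/sh
# minimal ~/.vnc/xstartup, assuming a GNOME desktop is installed
unset SESSION_MANAGER
unset DBUS_SESSION_BUS_ADDRESS
[ -r "$HOME/.Xresources" ] && xrdb "$HOME/.Xresources"
exec gnome-session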
233,327 | I found a collection of slackbuilds on GitHub, and I need some of them:
https://github.com/PhantomX/slackbuilds/ I don't want to fetch the whole repository with git clone https://github.com/PhantomX/slackbuilds.git . I only want to get a single slackbuild, this one . How can I do this? Is it possible? | You will end up downloading the entire history, so I don't see much benefit in it, but you can checkout specific parts using a "sparse" checkout. Quoting this Stack Overflow post : The steps to do a sparse clone are as follows: mkdir <repo>
cd <repo>
git init
git remote add -f origin <url> I'm going to interrupt here. Since I'm quoting another post, I don't want to edit the quoted parts, but do not use -f with git remote add . It will do a fetch, which will pull in the entire history. Just add the remote without a fetch: git remote add origin <url> And then do a shallow fetch like described later. This creates an empty repository with your remote, and fetches all
objects but doesn't check them out. Then do: git config core.sparseCheckout true Now you need to define which files/folders you want to actually check
out. This is done by listing them in .git/info/sparse-checkout , eg: echo "some/dir/" >> .git/info/sparse-checkout
echo "another/sub/tree" >> .git/info/sparse-checkout [...] You might want to have a look at the extended tutorial and you
should probably read the official documentation for sparse
checkout . You might be better off using a shallow clone . Instead of a normal git pull , try: git pull --depth=1 origin master I had an occasion to test this again recently, trying to get only the Ubuntu Mono Powerline fonts . The steps above ended up downloading some 11 MB, where the Ubuntu Fonts themselves are ~900 KB: % git pull --depth=1 origin master
remote: Enumerating objects: 310, done.
remote: Counting objects: 100% (310/310), done.
remote: Compressing objects: 100% (236/236), done.
remote: Total 310 (delta 75), reused 260 (delta 71), pack-reused 0
Receiving objects: 100% (310/310), 10.40 MiB | 3.25 MiB/s, done.
Resolving deltas: 100% (75/75), done.
From https://github.com/powerline/fonts
* branch master -> FETCH_HEAD
* [new branch] master -> origin/master
% du -hxd1 .
11M ./.git
824K ./UbuntuMono
12M . A normal clone took about 20 MB. There's some savings, but not enough. Using the --filter + checkout method in Ciro Santilli's answer really cuts down the size, but as mentioned there, downloads each blob one by one, which is slow: % git fetch --depth=1 --filter=blob:none
remote: Enumerating objects: 52, done.
remote: Counting objects: 100% (52/52), done.
remote: Compressing objects: 100% (49/49), done.
remote: Total 52 (delta 1), reused 35 (delta 1), pack-reused 0
Receiving objects: 100% (52/52), 14.55 KiB | 1.32 MiB/s, done.
Resolving deltas: 100% (1/1), done.
From https://github.com/powerline/fonts
* [new branch] master -> origin/master
* [new branch] terminus -> origin/terminus
% git checkout origin/master -- UbuntuMono
remote: Enumerating objects: 1, done.
remote: Counting objects: 100% (1/1), done.
remote: Total 1 (delta 0), reused 0 (delta 0), pack-reused 0
Receiving objects: 100% (1/1), 1.98 KiB | 1.98 MiB/s, done.
remote: Enumerating objects: 1, done.
remote: Counting objects: 100% (1/1), done.
remote: Total 1 (delta 0), reused 1 (delta 0), pack-reused 0
Receiving objects: 100% (1/1), 581 bytes | 581.00 KiB/s, done.
remote: Enumerating objects: 1, done.
remote: Counting objects: 100% (1/1), done.
remote: Total 1 (delta 0), reused 1 (delta 0), pack-reused 0
Receiving objects: 100% (1/1), 121.43 KiB | 609.00 KiB/s, done.
remote: Enumerating objects: 1, done.
remote: Counting objects: 100% (1/1), done.
remote: Total 1 (delta 0), reused 1 (delta 0), pack-reused 0
Receiving objects: 100% (1/1), 100.66 KiB | 512.00 KiB/s, done.
remote: Enumerating objects: 1, done.
remote: Counting objects: 100% (1/1), done.
remote: Total 1 (delta 0), reused 1 (delta 0), pack-reused 0
Receiving objects: 100% (1/1), 107.62 KiB | 583.00 KiB/s, done.
remote: Enumerating objects: 1, done.
remote: Counting objects: 100% (1/1), done.
remote: Total 1 (delta 0), reused 1 (delta 0), pack-reused 0
Receiving objects: 100% (1/1), 112.15 KiB | 791.00 KiB/s, done.
remote: Enumerating objects: 1, done.
remote: Counting objects: 100% (1/1), done.
remote: Total 1 (delta 0), reused 1 (delta 0), pack-reused 0
Receiving objects: 100% (1/1), 454 bytes | 454.00 KiB/s, done.
remote: Enumerating objects: 1, done.
remote: Counting objects: 100% (1/1), done.
remote: Total 1 (delta 0), reused 1 (delta 0), pack-reused 0
Receiving objects: 100% (1/1), 468 bytes | 468.00 KiB/s, done.
% du -hxd1 .
692K ./.git
824K ./UbuntuMono
1.5M . TL;DR: Use all of --filter , sparse checkout and shallow clone to reduce the total download, or only use sparse checkout + shallow clone if you don't care about the total download and just want that one directory however it may be obtained. | {
"source": [
"https://unix.stackexchange.com/questions/233327",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/80389/"
]
} |
233,337 | You log in to an unfamiliar UNIX or Linux system (as root). Which commands do you run to orient yourself and figure out what kind of system you are on? How do you figure out what type of hardware is in use, which type of operating system is running and what the current situation is when it comes to permissions and security? What is the first and second command you type? | a dual-use question! Either a Software Archaeologist or an Evil Hacker could use the answers to this question! Now, which am I? I always used to use ps -ef versus ps -augxww to find out what I was on. Linux and System V boxes tended to like "-ef" and error on "-augxww", vice versa for BSD and old SunOS machines. The output of ps can let you know a lot as well. If you can log in as root, and it's a Linux machine, you should do lsusb and lspci - that will get you 80% of the way towards knowing what the hardware situation is. dmesg | more can help you understand any current problems on just about anything. It's beginning to be phased out, but doing ifconfig -a can usually tell you a lot about the network interfaces, and the networking. Running mii-tool and/or ethtool on the interfaces you see in ifconfig output that look like cabled ethernet can give you some info too. Runnin ip route or netstat -r can be informative about Internet Protocol routing, and maybe something about in-use network interfaces. A mount invocation can tell you about the disk(s) and how they're mounted. Running uptime , and then last | more can tell you something about the current state of maintenance. Uptimes of 100+ days probably means "it's time to change the oil and fluids", metaphorically speaking. Running who is also Looking at /etc/resolv.conf and /etc/hosts can tell you about the DNS setup of that machine. Maybe do nslookup google.com or dig bing.com to see if DNS is mostly functional. It's always worth watching what errors ("command not found") and what variants of commands ("ps -ef" vs "ps augxww") work to determine what variant of Unix or Linux or BSD you just ended up on. The presence or absence of a C compiler, and where it lives is important. Do which cc or better, which -a cc to find them. | {
"source": [
"https://unix.stackexchange.com/questions/233337",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/3920/"
]
} |
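A possible first-pass checklist, as a sketch complementing the answer above (GNU/Linux command names assumed; some will differ on other Unices):
uname -a                                                            # kernel, architecture, hostname
cat /etc/os-release 2>/dev/null || cat /etc/*release 2>/dev/null    # distribution, if any
id                                                                  # who you are and your groups
uptime; who                                                         # load, uptime, logged-in users
df -h; mount | head                                                 # filesystems and how they are mounted
ip addr 2>/dev/null || ifconfig -a                                  # network interfaces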
233,345 | About a month ago I switched from Ubuntu 14.04 LTS to Arch and I'm quite happy with this decision. However, I miss some features with my new distro, especially Shift + printscr which in Unity allows selection of a screen region to be captured. I use i3 WM. So, my question is: how can I configure Unity-like screenshot behaviour to be able to snap screen regions or windows with a keyboard shortcut or something (without digging into window id and console stuff)? | You can use import , part of ImageMagick. Capture a region This will change your cursor into a crosshair and when you click and drag to form a box, that box will be saved as ss.png . import ss.png Capture whole display import -window root ss.png You can also replace the word root with the window id to capture a specific window. | {
"source": [
"https://unix.stackexchange.com/questions/233345",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/136629/"
]
} |
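A hedged sketch of wiring this to a shortcut in i3, since the question asks for a keybinding (the key choice and target path are illustrative, not part of the original answer; --release keeps the key-grab from interfering with import):
# in ~/.config/i3/config
bindsym --release Print exec import -window root ~/screenshot-$(date +%s).png
bindsym --release Shift+Print exec import ~/screenshot-$(date +%s).png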
233,468 | I just switched to debian jessie, and most things run okay, including my graphical display manager wdm . The thing is, I just don't understand how this works. Obviously my /etc/init.d/wdm script is called, because when I put an early exit in there, wdm is not started. But when I alternatively rename the /etc/rc3.d directory (my default runlevel used to be 3), then wdm is still started. I could not find out how systemd finds this script and I do not understand what it does to all the other init.d scripts. When and how does systemd run init.d scrips? In the long run, should I get rid of all init.d scripts? | chaos' answer is what some documentation says. But it's not what systemd actually does. (It's not what van Smoorenburg rc did, either. The van Smoorenburg rc most definitely did not ignore LSB headers, which insserv used to calculate static orderings, for starters.) The Freedesktop documentation, such as that "Incompatibilities" page, is in fact wrong, on these and other points. (The HOME environment variable in fact is often set, for example. This went wholly undocumented anywhere for a long time. It's now documented in the manual, at least, but that Freedesktop WWW page still hasn't been corrected.) The native service format for systemd is the service unit . systemd's service management proper operates solely in terms of those, which it reads from one of nine directories where (system-wide) .service files can live. /etc/systemd/system , /run/systemd/system , /usr/local/lib/systemd/system , and /usr/lib/systemd/system are four of those directories. The compatibility with van Smoorenburg rc scripts is achieved with a conversion program, named systemd-sysv-generator . This program is listed in the /usr/lib/systemd/system-generators/ directory and is thus run automatically by systemd early in the bootstrap process at every boot, and again every time that systemd is instructed to re-load its configuration later on. This program is a generator , a type of ancillary utility whose job is to create service unit files on the fly, in a tmpfs where three more of those nine directories (which are intended to be used only by generators) are located. systemd-sysv-generator generates the service units that run the van Smoorenburg rc scripts from /etc/init.d , if it doesn't find a native systemd service unit by that name already existing in the other six locations. systemd service management only knows about service units. These automatically (re-)generated service units are written to invoke the van Smoorenburg rc scripts. They have, amongst other things: [Unit]
SourcePath=/etc/init.d/wibble
[Service]
ExecStart=/etc/init.d/wibble start
ExecStop=/etc/init.d/wibble stop Received wisdom is that the van Smoorenburg rc scripts must have an LSB header, and are run in parallel without honouring the priorities imposed by the /etc/rc?.d/ system. This is incorrect on all points. In fact, they don't need to have an LSB header, and if they do not systemd-sysv-generator can recognize the more limited old RedHat comment headers ( description: , pidfile: , and so forth). Moreover, in the absence of an LSB header it will fall back to the contents of the /etc/rc?.d symbolic link farms, reading the priorities encoded into the link names and constructing a before/after ordering from them, serializing the services. Not only are LSB headers not a requirement, and not only do they themselves encode before/after orderings that serialize things to an extent, the fallback behaviour in their complete absence is actually significantly non-parallelized operation. The reason that /etc/rc3.d didn't appear to matter is that you probably had that script enabled via another /etc/rc?.d/ directory. systemd-sysv-generator translates being listed in any of /etc/rc2.d/ , /etc/rc3.d/ , and /etc/rc4.d/ into a native Wanted-By relationship to systemd's multi-user.target . Run levels are "obsolete" in the systemd world, and you can forget about them. Further reading systemd-sysv-generator . systemd manual pages. Freedesktop.org. "Environment variables in spawned processes" . systemd.exec . systemd manual pages. Freedesktop.org. https://unix.stackexchange.com/a/394191/5132 https://unix.stackexchange.com/a/204075/5132 https://unix.stackexchange.com/a/196014/5132 https://unix.stackexchange.com/a/332797/5132 | {
"source": [
"https://unix.stackexchange.com/questions/233468",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/37945/"
]
} |
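A short sketch of inspecting what systemd-sysv-generator produced for a given init.d script (the unit name wdm.service follows the question's example):
systemctl cat wdm.service          # show the generated unit file and its SourcePath
systemctl status wdm.service       # see which unit actually started the script
ls /run/systemd/generator.late/    # directory where systemd-sysv-generator writes its units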
233,490 | Is there a way to show the connections of a process? Something like this: show PID in which show is a command to do this, and PID is the ID of the process.
The output that I want is composed of all the connections of the process (in real-time). For example, if the process tries to connect to 173.194.112.151 the output is 173.194.112.151 . A more specific example with Firefox: show `pidof firefox` and with Firefox I go at first to google.com , then to unix.stackexchange.com and finally to 192.30.252.129 . The output, when I close the browser, must be: google.com
stackexchange.com
192.30.252.129 (Obviously with the browser this output is not realistic, because there are a lot of other related connections, but this is only an example.) | You're looking for strace !
I found this answer on askubuntu , but it's valid for Unix: To start and monitor an new process: strace -f -e trace=network -s 10000 PROCESS ARGUMENTS To monitor an existing process with a known PID: strace -p $PID -f -e trace=network -s 10000 Otherwise, but that's specific to Linux, you can run the process in an isolated network namespace and use wireshark to monitor the traffic . This will probably be more convenient than reading the strace log: create a test network namespace: ip netns add test create a pair of virtual network interfaces (veth-a and veth-b): ip link add veth-a type veth peer name veth-b change the active namespace of the veth-a interface: ip link set veth-a netns test configure the IP addresses of the virtual interfaces: ip netns exec test ifconfig veth-a up 192.168.163.1 netmask 255.255.255.0
ifconfig veth-b up 192.168.163.254 netmask 255.255.255.0 configure the routing in the test namespace: ip netns exec test route add default gw 192.168.163.254 dev veth-a activate ip_forward and establish a NAT rule to forward the traffic coming in from the namespace you created (you have to adjust the network interface and SNAT ip address): echo 1 > /proc/sys/net/ipv4/ip_forward
iptables -t nat -A POSTROUTING -s 192.168.163.0/24 -o YOURNETWORKINTERFACE -j SNAT --to-source YOURIPADDRESS (You can also use the MASQUERADE rule if you prefer) finally, you can run the process you want to analyze in the new namespace, and wireshark too: ip netns exec test thebinarytotest
ip netns exec test wireshark You'll have to monitor the veth-a interface. | {
"source": [
"https://unix.stackexchange.com/questions/233490",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/136738/"
]
} |
233,552 | I am going to learn how to use the command line. Specifically, I will be using the book: "The Linux Command Line: A Complete Introduction" . Now, do I have to use a Linux distro to walk through the book, or would OS X be sufficient? If I need a Linux distro, then would using it through a VM be sufficient or do I need to install it natively? | I would highly recommend running Linux in a VM. All the software is available freely to download and there is no practical difference between running in a VM and running natively for the purposes of learning the command line. Furthermore, Linux command line mostly consists of bash + GNU coreutils , which is very different from BSD Unix (and OS X is a succedessor of BSD Unix). There is a very big difference of preferences in writing arguments in BSD Unix and GNU Linux. You can bite yourself even as non-newbie with different options to standard utilities like ps and tar if you work on both systems. Using OS X when your book is Linux specific will regularly throw up inconsistencies and differences that will appear superficial when you're more experienced, but will simply be confusing when you're learning. Keep things easy for yourself. This will also allow you to experiment without the worry of breaking your machine by deleting or changing any important files. And last, though certainly not least, it will allow you to set up a SSH connection to your VM from your OS X Terminal, so you can get used to using SSH keys, and to the idea that it makes no difference whether your Linux server is a native machine, a local VM, or running out on AWS or Digital Ocean: it all works the same! | {
"source": [
"https://unix.stackexchange.com/questions/233552",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/106637/"
]
} |
233,614 | I am trying to change my username, as per advice here however after running the following command: CurrentName@HostName ~ $ sudo usermod -l TheNameIWantToChange -d /home/TheNameIWantToChange -m CurrentName Terminal responds with: CurrentName@HostName ~ $ usermod: user CurrentName is currently used by process 2491 And the username stays the same. Does anybody know how I could fix this and change my username after all? | To quote man usermod : CAVEATS
You must make certain that the named user is not executing any
processes when this command is being executed
if the user's numerical user ID, the user's name, or the user's home
directory is being changed. usermod
checks this on Linux, but only check if the user is logged in
according to utmp on other architectures. So, you need to make sure the user you're renaming is not logged in. Also, I note you're not running this as root. Either run it as root, or run with "sudo usermod". | {
"source": [
"https://unix.stackexchange.com/questions/233614",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/134726/"
]
} |
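A hedged sketch of one possible sequence, run as root from a console or SSH session that does not belong to the user being renamed (names reuse the question's placeholders):
ps -u CurrentName                  # see what is still running as the old user
pkill -u CurrentName               # stop those processes (log that user out first)
usermod -l TheNameIWantToChange -d /home/TheNameIWantToChange -m CurrentName
groupmod -n TheNameIWantToChange CurrentName   # optionally rename the primary group too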
233,832 | I have two video clips. Both are 640x480 and last 10 minutes. One contains background audio, the other one a singing actor. I would like to create a single 10 minute video clip measuring 1280x480 (in other words, I want to place the videos next to each other and play them simultaneously, mixing audio from both clips). I've tried trying to figure out how to do this with ffmpeg/avidemux, but so far I came up empty. They all refer to concatenating when I search for merging. Any recommendations? | ffmpeg \
-i input1.mp4 \
-i input2.mp4 \
-filter_complex '[0:v]pad=iw*2:ih[int];[int][1:v]overlay=W/2:0[vid]' \
-map '[vid]' \
-c:v libx264 \
-crf 23 \
-preset veryfast \
output.mp4 This essentially doubles the size of input1.mp4 by padding the right side with black the same size as the original video, and then places input2.mp4 over the top of that black area with the overlay filter. Source: https://superuser.com/questions/153160/join-videos-split-screen | {
"source": [
"https://unix.stackexchange.com/questions/233832",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/14584/"
]
} |
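The command above maps only the video stream; since the question also asks for the audio of both clips to be mixed, one possible extension (using the amix filter) is sketched below:
ffmpeg -i input1.mp4 -i input2.mp4 \
  -filter_complex '[0:v]pad=iw*2:ih[int];[int][1:v]overlay=W/2:0[vid];[0:a][1:a]amix=inputs=2[aud]' \
  -map '[vid]' -map '[aud]' -c:v libx264 -crf 23 -preset veryfast output.mp4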
234,264 | I am trying to use a variable consisting of different strings separated with a | as a case statement test. For example: string="\"foo\"|\"bar\""
read choice
case $choice in
$string)
echo "You chose $choice";;
*)
echo "Bad choice!";;
esac I want to be able to type foo or bar and execute the first part of the case statement. However, both foo and bar take me to the second: $ foo.sh
foo
Bad choice!
$ foo.sh
bar
Bad choice! Using "$string" instead of $string makes no difference. Neither does using string="foo|bar" . I know I can do it this way: case $choice in
"foo"|"bar")
echo "You chose $choice";;
*)
echo "Bad choice!";;
esac I can think of various workarounds but I would like to know if it's possible to use a variable as a case condition in bash. Is it possible and, if so, how? | The bash manual states: case word in [ [(] pattern [ | pattern ] ... ) list ;; ] ... esac Each pattern examined is expanded using tilde expansion, parameter and variable expansion, arithmetic substitution, command substitution, and process substitution. No «Pathname expansion» Thus: a pattern is NOT expanded with «Pathname expansion». Therefore: a pattern could NOT contain "|" inside. Only: two patterns could be joined with the "|". This works: s1="foo"; s2="bar" # or even s1="*foo*"; s2="*bar*"
read choice
case $choice in
$s1|$s2 ) echo "Two val choice $choice"; ;; # not "$s1"|"$s2"
* ) echo "A Bad choice! $choice"; ;;
esac Using « Extended Globbing » However, word is matched with pattern using « Pathname Expansion » rules. And « Extended Globbing » here , here and, here allows the use of alternating ("|") patterns. This also work: shopt -s extglob
string='@(foo|bar)'
read choice
case $choice in
$string ) printf 'String choice %-20s' "$choice"; ;;&
$s1|$s2 ) printf 'Two val choice %-20s' "$choice"; ;;
*) printf 'A Bad choice! %-20s' "$choice"; ;;
esac
echo String content The next test script shows that the pattern that match all lines that contain either foo or bar anywhere is '*$(foo|bar)*' or the two variables $s1=*foo* and $s2=*bar* Testing script: shopt -s extglob # comment out this line to test unset extglob.
shopt -p extglob
s1="*foo*"; s2="*bar*"
string="*foo*"
string="*foo*|*bar*"
string='@(*foo*|*bar)'
string='*@(foo|bar)*'
printf "%s\n" "$string"
while IFS= read -r choice; do
case $choice in
"$s1"|"$s2" ) printf 'A first choice %-20s' "$choice"; ;;&
$string ) printf 'String choice %-20s' "$choice"; ;;&
$s1|$s2 ) printf 'Two val choice %-20s' "$choice"; ;;
*) printf 'A Bad choice! %-20s' "$choice"; ;;
esac
echo
done <<-\_several_strings_
f
b
foo
bar
*foo*
*foo*|*bar*
\"foo\"
"foo"
afooline
onebarvalue
now foo with spaces
_several_strings_ | {
"source": [
"https://unix.stackexchange.com/questions/234264",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/22222/"
]
} |
234,278 | I get permission denied when trying to move folder Music via mv although directory owner is set to my user and user permissions are set to 7. What's going on? (I know that I could use sudo but I want to find out what's wrong. Something smells fishy here). Ps: I am on Mac OS X El Capitan. | I was using Windows Subsystem for Linux. I had the directory open in a different bash instance. Closing it let me move the directory. | {
"source": [
"https://unix.stackexchange.com/questions/234278",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/137269/"
]
} |
234,311 | Following all the instructions from UNIXMEN , installed postgresql-9.4 in CentOS 6.4 . Everything went well, started the service and could access pgsql screen. But when I try to configure the phpPgAdmin , I couldn't find the files postgresql.conf pg_hba.conf config.inc.php phpPgAdmin.conf The instructions says, postgresql home directory will sit in /etc/../ and some say it will be in /var/lib/ . Where does the directory gets created (in CentOS)? Is installation directory path is different in centos, redhat(RHEL7) & ubuntu? Update: I ran a quick locate command for *postgresql.conf** and *hba.conf**, I found the sample files as postgresql.conf.sample and pg_hba.conf.sample (at /usr/pgsql-9.4/share/ ) | If you type the following: sudo su - postgres after installing postgresql-server, that should take you right to the home directory of postgres and will have the configuration files you are looking for. Usually in a RHEL environment, the configuration files would be stored in /var/lib/pgsql/ . On my test environment it is stored in /var/lib/pgsql/9.1/data . | {
"source": [
"https://unix.stackexchange.com/questions/234311",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/67557/"
]
} |
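If the server is already running, you can also ask PostgreSQL itself where its files live, which sidesteps the distribution-specific paths (a small sketch):
sudo -u postgres psql -c 'SHOW data_directory;'
sudo -u postgres psql -c 'SHOW config_file;'
sudo -u postgres psql -c 'SHOW hba_file;'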
234,402 | I was wondering if there was a way to register this, but since most modern search engines don't work well with phrases over about 5 words in length, I need some help on this one. I was wondering this because I'm making a bash script that has to register files as certain types and make decisions accordingly. This technically isn't important to my project, but I was curious. Also, if they are considered to be regular files, then is there a way to check if these files are hard linked without having to parse ls -i ? And is there a way to check if some arbitrary file, X, is hard linked to some other arbitrary file, Y, without using the find -i command? | In Unix-style systems, the data structure which represents filesystem objects (in other words, the data about a file), is stored in what's called an "inode". A file name is just a link to this inode, and is referred to as a "hard link". There is no difference between the first name a file is given and any subsequent link. So the answer is, "yes": a hard link is a regular file and, indeed, a regular file is a hard link. The ls command will show you how many hard links there are to the file. For example: seumasmac@comp:~$ echo Hello > /tmp/hello.txt
seumasmac@comp:~$ ls -l /tmp/hello.txt
-rw-rw-r-- 1 seumasmac seumasmac 6 Oct 4 13:05 /tmp/hello.txt Here we've created a file called /tmp/hello.txt . The 1 in the output from ls -l indicates that there is 1 hard link to this file. This hard link is the filename itself /tmp/hello.txt . If we now create another hard link to this file: seumasmac@comp:~$ ln /tmp/hello.txt /tmp/helloagain.txt
seumasmac@comp:~$ ls -l /tmp/hello*
-rw-rw-r-- 2 seumasmac seumasmac 6 Oct 4 13:05 /tmp/helloagain.txt
-rw-rw-r-- 2 seumasmac seumasmac 6 Oct 4 13:05 /tmp/hello.txt you can now see that both filenames indicate there are 2 hard links to the file. Neither of these is the "proper" filename, they're both equally valid. We can see that they both point to the same inode (in this case, 5374043): seumasmac@comp:~$ ls -i /tmp/hello*
5374043 /tmp/helloagain.txt 5374043 /tmp/hello.txt There is a common misconception that this is different for directories. I've heard people say that the number of links returned by ls for a directory is the number of subdirectories, including . and .. which is incorrect . Or, at least, while it will give you the correct number, it's right for the wrong reasons! If we create a directory and do a ls -ld we get: seumasmac@comp:~$ mkdir /tmp/testdir
seumasmac@comp:~$ ls -ld /tmp/testdir
drwxrwxr-x 2 seumasmac seumasmac 4096 Oct 4 13:20 /tmp/testdir This shows there are 2 hard links to this directory. These are: /tmp/testdir
/tmp/testdir/. Note that /tmp/testdir/.. is not a link to this directory, it's a link to /tmp . And this tells you why the "number of subdirectories" thing works. When we create a new subdirectory: seumasmac@comp:~$ mkdir /tmp/testdir/dir2
seumasmac@comp:~$ ls -ld /tmp/testdir
drwxrwxr-x 3 seumasmac seumasmac 4096 Oct 4 13:24 /tmp/testdir you can now see there are 3 hard links to /tmp/testdir directory. These are: /tmp/testdir
/tmp/testdir/.
/tmp/testdir/dir2/.. So every new sub-directory will increase the link count by one, because of the .. entry it contains. | {
"source": [
"https://unix.stackexchange.com/questions/234402",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/130361/"
]
} |
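A hedged sketch for the question's side points about detecting hard links without parsing ls -i (GNU stat/find syntax assumed; -ef is a common test extension in bash and ksh):
[ fileX -ef fileY ] && echo "same inode"                        # are X and Y hard links to the same file?
[ "$(stat -c %h -- fileX)" -gt 1 ] && echo "linked elsewhere"   # link count above 1
find /some/mountpoint -xdev -samefile fileX                     # list every name sharing fileX's inode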
234,432 | I want to delete the last column of a txt file, while I do not know what the column number is. How could I do this? Example: Input: 1223 1234 1323 ... 2222 123
1233 1234 1233 ... 3444 125
0000 5553 3455 ... 2334 222 And I want my output to be: 1223 1234 1323 ... 2222
1233 1234 1233 ... 3444
0000 5553 3455 ... 2334 | With awk : awk 'NF{NF-=1};1' <in >out or: awk 'NF{NF--};1' <in >out or: awk 'NF{--NF};1' <in >out Although this looks like voodoo, it works.
There are three parts to each of these awk commands. The first is NF , which is a precondition for the second part. NF is a variable containing the number of fields in a line. In AWK, things are true if they're not 0 or the empty string "" . Hence, the second part (where NF is decremented) only happens if NF is not 0. The second part (either NF-=1 , NF-- or --NF ) just subtracts one from the NF variable. This prevents the last field from being printed, because when you change a field (removing the last field in this case), awk reconstructs $0 , concatenating all fields separated by a space by default. $0 no longer contains the last field. The final part is 1 . It's not magical, it's just used as an expression that means true . If an awk expression evaluates to true without any associated action, awk's default action is print $0 . | {
"source": [
"https://unix.stackexchange.com/questions/234432",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/133262/"
]
} |
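One hedged caveat to the awk approach: because awk rebuilds $0, any runs of whitespace between the remaining fields are squeezed to single spaces (the default OFS). If the original spacing must be preserved, a sed alternative is:
sed 's/[[:blank:]]*[^[:blank:]]*$//' <in >out     # strip the last field, keep the original separators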
234,832 | Is it possible for ping command in Linux(CentOS) to send 0 bytes. In windows one can define using -l argument command tried ping localhost -s 0
PING localhost (127.0.0.1) 0(28) bytes of data.
8 bytes from localhost (127.0.0.1): icmp_seq=1 ttl=64
8 bytes from localhost (127.0.0.1): icmp_seq=2 ttl=64
^C
--- localhost ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 999ms
man ping
-s packetsize
Specifies the number of data bytes to be sent. The default is
56, which translates into 64 ICMP data bytes when combined with
the 8 bytes of ICMP header data. Edit1: adding windows output of ping just in case some one needs it ping 127.0.0.1 -l 0
Pinging 127.0.0.1 with 0 bytes of data:
Reply from 127.0.0.1: bytes=0 time<1ms TTL=128
Reply from 127.0.0.1: bytes=0 time<1ms TTL=128
Reply from 127.0.0.1: bytes=0 time<1ms TTL=128
Ping statistics for 127.0.0.1:
Packets: Sent = 3, Received = 3, Lost = 0 (0% loss),
Approximate round trip times in milli-seconds:
Minimum = 0ms, Maximum = 0ms, Average = 0ms
ping 127.0.0.1
Pinging 127.0.0.1 with 32 bytes of data:
Reply from 127.0.0.1: bytes=32 time<1ms TTL=128
Reply from 127.0.0.1: bytes=32 time<1ms TTL=128
Reply from 127.0.0.1: bytes=32 time<1ms TTL=128
Ping statistics for 127.0.0.1:
Packets: Sent = 3, Received = 3, Lost = 0 (0% loss),
Approximate round trip times in milli-seconds:
Minimum = 0ms, Maximum = 0ms, Average = 0ms | A ping cannot be 0 bytes on Linux, Windows or any other platform that claims to be able to send pings. At the very least the packet must contain an IP header and a non-malformed no-trick-playing ping will also include an ICMP header, which is 8 bytes long. It is possible that windows differs in how they output the bytes received. Linux tells you the size of the ICMP portion of the packet (8 bytes for the ICMP header plus any ICMP data present). Windows may instead print the number of ICMP payload data bytes so that while it tells you "0", those 8 ICMP header bytes are still there. To truly have 0 ICMP bytes that means your packet is a raw IP header and no longer an ICMP ping request. The point is, even if windows is telling you the ping packet is 0 bytes long, it isn't. The minimum size of an ICMP echo request or echo reply packet is 28 bytes: 20 byte IP header, 4 byte ICMP header, 4 byte echo request/reply header data, 0 bytes of ICMP payload data. When ping on linux prints: 8 bytes from localhost (127.0.0.1): icmp_seq=1 ttl=64 Those 8 bytes are the 4 byte ICMP header and the 4 byte ICMP echo reply header data and reflect an ICMP payload data size of 0 bytes. | {
"source": [
"https://unix.stackexchange.com/questions/234832",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/133967/"
]
} |
234,903 | I'm trying to tunnel to a server via a bridge server. So far, I've been able to get it working from the command shell properly using the following command: ssh -A -t [email protected] ssh -A [email protected] But I've been trying to wrap this into my ~/.ssh/config file and I have troubles. I've tried: Host axp
User remote_userid
HostName remoteserver.com
IdentityFile ~/.ssh/id_rsa.eric
ProxyCommand ssh -A -t bridge_userid@bridge_userid.com ssh -A remote_userid@%h But when I do, I get the following error message from remoteserver.com and I'm not sure what is causing it: ksh: SSH-2.0-OpenSSH_6.8^M: not found I know that when I log into remoteserver.com , my shell is /usr/bin/ksh . I've tried to add path arguments to the ssh commands in the config file, but it made no difference. Any ideas what it can be? | Jakuje's answer is right, but since OpenSSH 7.3 , you can now use -J ProxyJump which is easier. See my notes: OpenSSH 7.3 or above Use ProxyJump . As explained in the manual: -J [user@]host[:port] Connect to the target host by first making an ssh connection to the jump host and then establishing a TCP forwarding to the ultimate destination from there. Multiple jump hops may be specified separated by comma characters. This is a shortcut to specify a ProxyJump configuration directive. ProxyJump ~/.ssh/config example ~/.ssh/config Host server1
Hostname server1.example.com
IdentityFile ~/.ssh/id_rsa
Host server2_behind_server1
Hostname server2.example.com
IdentityFile ~/.ssh/id_rsa
ProxyJump server1 Connect with ssh server2_behind_server1 -v Add -v for verbose output ProxyJump -J Command line example ~/.ssh/config Host server1
Hostname server1.example.com
IdentityFile ~/.ssh/id_rsa
Host server2
Hostname server2.example.com
IdentityFile ~/.ssh/id_rsa Connect with ssh server2 -J server1 -v Or use -o ssh server2 -o 'ProxyJump server1' -v OpenSSH 5.4 or above Use ProxyCommand with -W ~/.ssh/config Host server1
Hostname server1.example.com
IdentityFile ~/.ssh/id_rsa
Host server2
Hostname server2.example.com
IdentityFile ~/.ssh/id_rsa
ProxyCommand ssh server1 -W %h:%p Connect with ssh server2 -v Or use -o ssh server2 -o 'ProxyCommand ssh server1 -W %h:%p' -v OpenSSH below 5.4 ~/.ssh/config Host server1
Hostname server1.example.com
IdentityFile ~/.ssh/id_rsa
Host server2
Hostname server2.example.com
IdentityFile ~/.ssh/id_rsa
ProxyCommand ssh server1 nc %h %p 2> /dev/null Connect with: ssh server2 -v Or use -o ssh server2 -o 'ProxyCommand ssh server1 nc %h %p 2> /dev/null' -v Sources -J added in OpenSSH 7.3 ssh(1): Add a ProxyJump option and corresponding -J command-line
flag to allow simplified indirection through a one or more SSH
bastions or "jump hosts". -W added in OpenSSH 5.4 Added a 'netcat mode' to ssh(1): "ssh -W host:port ..." This connects
stdio on the client to a single port forward on the server. This
allows, for example, using ssh as a ProxyCommand to route connections
via intermediate servers. bz#1618 | {
"source": [
"https://unix.stackexchange.com/questions/234903",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/28391/"
]
} |
234,970 | I have several SSL certificates, and I would like to be notified when a certificate has expired. My idea is to create a cronjob, which executes a simple command every day. I know that the openssl command in Linux can be used to display the certificate info of a remote server, i.e.: openssl s_client -connect www.google.com:443 But I don't see the expiration date in this output. Also, I have to terminate this command with CTRL + c . How can I check the expiration of a remote certificate from a script (preferably using openssl ) and do it in "batch mode" so that it runs automatically without user interaction? | Your command opens the connection and then waits for an HTTP request to be typed (such as GET index.php ), which is why it has to be terminated manually. Use this instead: if true | openssl s_client -connect www.google.com:443 2>/dev/null | \
openssl x509 -noout -checkend 0; then
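# -checkend takes a window in seconds and exits 0 only if the certificate stays valid for that long; for a daily cron job a larger window such as 2592000 (30 days) gives advance warning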
echo "Certificate is not expired"
else
echo "Certificate is expired"
fi true : will just give no input followed by EOF, so that openssl exits after connecting.
openssl ... : the command from your question.
2>/dev/null : error output will be ignored.
openssl x509 : activates X.509 Certificate Data Management; it reads from standard input by default.
-noout : suppresses output of the whole certificate.
-checkend 0 : checks whether the certificate expires within the next 0 seconds, i.e. whether it has already expired. | {
"source": [
"https://unix.stackexchange.com/questions/234970",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/43007/"
]
} |
235,017 | Help required - in the context of shell scripting on a GNU/LINUX bash: I always use set -e . Often, I would like to grep and do not always want the script to terminate execution if grep has an exit status of 1 indicating pattern not found. Some things I have tried to solve this problem are as follows: (Try I) If set +o pipefail and invoke grep with something like grep 'p' | wc -l then I get the desired behaviour until a future maintainer enables pipefail . Also, I like enabling pipefail so this does not work for me. (Try II) Use a sed or awk and only print lines matching pattern, then wc matched lines to test for matched pattern. I don't like this option because using sed to grep seems like a workaround for my true problem. (Try III) This one is my least favorite - something like: set +e; grep 'p'; set -e Any insight/idioms would be most appreciated - thank you. | You can put the grep in an if condition, or if you don't care about the exit status, add || true . Example: grep kills the shell $ bash
$ set -e
$ echo $$
33913
$ grep foo /etc/motd
$ echo $$
9233 solution 1: throw away the non-zero exit status $ bash
$ set -e
$ echo $$
34074
$ grep foo /etc/motd || true
$ echo $$
34074 solution 2: explicitly test the exit status $ if ! grep foo /etc/motd; then echo not found; fi
not found
$ echo $$
34074 From the bash man page discussing set -e : The shell does not exit if the command
that fails is part of the command list immediately following a while or until keyword, part of the test following the if or elif reserved words, part of
any command executed in a && or || list except the command following the
final && or || , any command in a pipeline but the last, or if the command’s
return value is being inverted with ! . | {
"source": [
"https://unix.stackexchange.com/questions/235017",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/24113/"
]
} |
235,100 | I can't remember the trick where I could get the last command without running it: let's say I want to be able to access the command !1255 when pressing the up arrow key and modify the command. So what's the trick to call the command, make it show up in the command line but not be executed, and afterwards accessible via the arrow key up? I tried with putting an echo, but then I have an echo before the command, I don't remember how to do it correctly. | !1255:p Will do this ! is history recall 1255 is the line number :p prints but does not execute Then you can use up-arrow to get the previous (unexecuted) command back and you can change it as you need. I often combine this with hg ("History Grep") - my favorite alias. $ alias hg # Maybe use hgr instead if you are a Mercurial CLI user.
alias hg='history | tail -200 | grep -i' This searches for text on a recent history line, regardless of case and is used this way: When I want to search for recent vi commands to edit a certain file and then I want to re-use one of them to edit the same file but with a different file extension. $ hg variables
6153 vi Variables/user-extensions.js
6176 vi Variables/user-extensions.js
6178 vi Variables/user-extensions.js
6190 vi Variables/user-extensions.js
6230 hg variables
$ # Notice the difference in case with V and v is ignored
$ !6190:p
vi Variables/user-extensions.js
$ ["up-arrow"]
$ vi Variables/user-extensions.[now change .js to .html] I also define hga ("History Grep All") to search my entire history: $ alias hga
alias hga='history | grep -i' but I don't use it much because my history is (intentionally) very large and I get too much output that later affects scrolling back thru pages in my terminal. | {
"source": [
"https://unix.stackexchange.com/questions/235100",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/137808/"
]
} |
235,158 | How to check the filesystem type of a logical volume using lvm or any other utility? For example, if my logical volume is /dev/vg1/lv1 then how to check its filesystem type? I have made a ext4 filesystem in the logical volume using mkfs -t ext4 /dev/vg1/lv1 . But don't know how to verify it. I could not see any option for thin is lvm ? | Same as you would with any other block device. e.g. file -s /dev/vg1/lv1 If /dev/vg1/lv1 is a symbolic link, you'll also need file 's -L (aka --dereference ) option to de-reference it (i.e. follow it to the real device node it's pointing to): file -L -s /dev/vg1/lv1 BTW, it's OK to use -L on a regular file. If it's ext4, it'll say something like: /dev/vg1/lv1: Linux rev 1.0 ext4 filesystem data, UUID=xxxx, volume name "yyyy" (needs journal recovery) (extents) (large files) (huge files) Alternatively, you could run blkid /dev/vg1/lv1 . That would report something like: /dev/vg1/lv1: LABEL="yyyy" UUID="xxxx" TYPE="ext4" From man file : -s, --special-files Normally, file only attempts to read and determine the type of argument files which stat(2) reports are ordinary files. This prevents problems, because reading special files may have peculiar consequences. Specifying the -s option causes file to also read argument files which are block or character special files. This is useful for determining the filesystem types of the data in raw disk partitions, which are block special files. This option also causes file to disregard the file size as reported by stat(2) since on some systems it reports a zero size for raw disk partitions. | {
"source": [
"https://unix.stackexchange.com/questions/235158",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/109220/"
]
} |
235,223 | I'm trying to include some env vars into a Makefile. The env file looks like: FOO=bar
BAZ=quux Note there's no leading export to each env var. If I add the leading export and just include the env file in the Makefile, everything works as it should. But I need to keep the env vars sans leading export . That prevents me from just using include envfile in the Makefile. I've also tried doing something like this: sed '/^#/!s/^/export /' envfile > $(BUILDDIR)/env
include $(BUILDDIR)/env But doing that causes make to throw an error because the env file isn't there for including. | If you are using GNU make, what should work is to include the envfile file, then
export the list of variables obtained from the same file: #!make
include envfile
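# the sed call below strips '=value' from every line of envfile, leaving only the variable names for make to export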
export $(shell sed 's/=.*//' envfile)
test:
env | {
"source": [
"https://unix.stackexchange.com/questions/235223",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/14476/"
]
} |
235,335 | I started thinking about this issue in the context of etiquette on the Linux Kernel Mailing list. As the world's best known and arguably most successful and important free software project, the Linux kernel gets plenty of press. And the project founder and leader, Linus Torvalds, clearly needs no introduction here. Linus occasionally attracts controversy with his flames on the LKML. These flames are frequently, by his own admission, to do with breaking user space. Which brings me to my question. Can I have some historical perspective on why breaking user space is such a bad thing? As I understand it, breaking user space would require fixes on the application level, but is this such a bad thing, if it improves the kernel code? As I understand it, Linus' stated policy is that not breaking user space trumps everything else, including code quality. Why is this so important, and what are the pros and cons of such a policy? (There are clearly some cons to such a policy, consistently applied, since Linus occasionally has "disagreements" with his top lieutenants on the LKML on exactly this topic. As far as I can tell, he always gets his way in the matter.) | The reason is not a historical one but a practical one. There are many many many programs that run on top of the Linux kernel; if a kernel interface breaks those programs then everybody would need to upgrade those programs. Now it's true that most programs do not in fact depend on kernel interfaces directly (the system calls ), but only on interfaces of the C standard library (C wrappers around the system calls). Oh, but which standard library? Glibc? uClibC? Dietlibc? Bionic? Musl? etc. But there are also many programs that implement OS-specific services and depend on kernel interfaces that are not exposed by the standard library. (On Linux, many of these are offered through /proc and /sys .) And then there are statically compiled binaries. If a kernel upgrade breaks one of these, the only solution would be to recompile them. If you have the source: Linux does support proprietary software too. Even when the source is available, gathering it all can be a pain. Especially when you're upgrading your kernel to fix a bug with your hardware. People often upgrade their kernel independently from the rest of their system because they need the hardware support. In the words of Linus Torvalds : Breaking user programs simply isn't acceptable. (…) We know that people use old binaries for years and years, and that making a new release doesn't mean that you can just throw that out. You can trust us. He also explains that one reason to make this a strong rule is to avoid dependency hell where you'd not only have to upgrade another program to get some newer kernel to work, but also have to upgrade yet another program, and another, and another, because everything depends on a certain version of everything. It's somewhat ok to have a well-defined one-way dependency. It's sad, but inevitable sometimes. (…) What is NOT ok is to have a two-way dependency. If user-space HAL code depends on a new kernel, that's ok, although I suspect users would hope that it wouldn't be "kernel of the week", but more a "kernel of the last few months" thing. But if you have a TWO-WAY dependency, you're screwed. That means that you have to upgrade in lock-step, and that just IS NOT ACCEPTABLE. 
It's horrible for the user, but even more importantly, it's horrible for developers, because it means that you can't say "a bug happened" and do things like try to narrow it down with bisection or similar. In userspace, those mutual dependencies are usually resolved by keeping different library versions around; but you only get to run one kernel, so it has to support everything people might want to do with it. Officially , backward compatibility for [system calls declared stable] will be guaranteed for at least 2 years. In practice though, Most interfaces (like syscalls) are expected to never change and always be available. What does change more often is interfaces that are only meant to be used by hardware-related programs, in /sys . ( /proc , on the other hand, which since the introduction of /sys has been reserved for non-hardware-related services, pretty much never breaks in incompatible ways.) In summary, breaking user space would require fixes on the application level and that's bad because there's only one kernel, which people want to upgrade independently of the rest of their system, but there are many many applications out there with complex interdependencies. It's easier to keep the kernel stable that to keep thousands of applications up-to-date on millions of different setups. | {
"source": [
"https://unix.stackexchange.com/questions/235335",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/4671/"
]
} |
236,084 | I'm using CentOS 7. My aim is to create a cron job that runs every five seconds, but from what I researched cron can only run at one-minute intervals, so what I have done instead is create a shell file. hit.sh while sleep 5; do curl http://localhost/test.php; done but so far I have run it manually by right-clicking it. What I want is to create a service for that file so that I can start and stop it automatically. I found the script to create a service #!/bin/bash
# chkconfig: 2345 20 80
# description: Description comes here....
# Source function library.
. /etc/init.d/functions
start() {
# code to start app comes here
# example: daemon program_name &
}
stop() {
# code to stop app comes here
# example: killproc program_name
}
case "$1" in
start)
start
;;
stop)
stop
;;
restart)
stop
start
;;
status)
# code to check status of app comes here
# example: status program_name
;;
*)
echo "Usage: $0 {start|stop|status|restart}"
esac
exit 0 But I don't know what to write in the start or stop methods. I tried placing the same content of hit.sh in start(){} but it gave an error for } in the stop method. | Users trying to run a script as a daemon on a modern system should be using systemd : [Unit]
Description=hit service
After=network-online.target
[Service]
ExecStart=/path/to/hit.sh
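# optional extras such as Restart=on-failure could be added here; they are not required for a basic setup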
[Install]
WantedBy=multi-user.target Save this as /etc/systemd/system/hit.service , and then you will be able to start/stop/enable/disable it with systemctl start hit , etc. Old answer from 2015: If you'd like to reuse your code sample, it could look something like: #!/bin/bash
case "$1" in
start)
/path/to/hit.sh &
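# $! expands to the PID of the process just started in the background; saving it lets stop/status find the daemon later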
echo $!>/var/run/hit.pid
;;
stop)
kill `cat /var/run/hit.pid`
rm /var/run/hit.pid
;;
restart)
$0 stop
$0 start
;;
status)
if [ -e /var/run/hit.pid ]; then
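# the PID file exists only while the service is running (created by start, removed by stop)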
echo hit.sh is running, pid=`cat /var/run/hit.pid`
else
echo hit.sh is NOT running
exit 1
fi
;;
*)
echo "Usage: $0 {start|stop|status|restart}"
esac
exit 0 Naturally, the script you want to be executed as a service should go to e.g. /usr/local/bin/hit.sh , and the above code should go to /etc/init.d/hitservice . For each runlevel which needs this service running, you will need to create a respective symlink. For example, a symlink named /etc/rc.d/rc5.d/S99hitservice will start the service for runlevel 5. Of course, you can still start and stop it manually via service hitservice start / service hitservice stop | {
"source": [
"https://unix.stackexchange.com/questions/236084",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/131129/"
]
} |
236,127 | I have to disable some event to avoid an immediate wakeup after suspend in my desktop machine, and I made it by trial and error (works well, so that is not a problem). But I wonder... for example in my laptop I have a long list in /proc/acpi/wakeup : [...]
RP03 S4 *disabled
PXSX S4 *disabled
RP04 S4 *disabled pci:0000:00:1c.3
PXSX S4 *enabled pci:0000:03:00.0
RP06 S4 *disabled
[...] I have searched around and I can't find a place where a list with the meaning of the 4-letter code in the first column is explained. I imagine that the events with a device name after them are linked/generated by that device, but I am at a loss with most of the rest... minus wild guesses. How can I know what, for example, event RP06 is? Is there a list anywhere? Or are those codes vendor-specific? | The codes come from the DSDT (Differentiated System Description Table) of your BIOS.
This "table" describes the integrated devices on your mainboard, their dependencies and power-management functions. Devices in the DSDT are arranged in a tree and each path component is limited to 4 characters. The codes in /proc/acpi/wakeup are the last path components (aka the names) the vendor used for the devices. They are inherently vendor-specific, as the vendor may name any device as he likes. But there are some names that are common between many vendors, either because they are used as examples in the ACPI specification or because they are obvious abbreviations:
PS2K: PS/2 keyboard
PS2M: PS/2 mouse
PWRB or PBTN: Power button
SLPB: Sleep button
LID: Laptop lid
RP0x or EXPx: PCIE slot #x (aka PCI Express Root Port #x)
EHCx or USBx: USB 2.0 (EHCI) chip
XHC: USB 3.0 (XHCI) chip
PEGx: PCI Express for Graphics slot #x
GLAN or IGBE: Gigabit Ethernet | {
"source": [
"https://unix.stackexchange.com/questions/236127",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/52205/"
]
} |
236,128 | I have these variant of urls RewriteRule ^/trendy/the-reason-for-example/? http://www.example.com/trendy/the-reason [NC, L, R=301]
RewriteRule ^/lol/2015/10..._for-example http://www.example.com/lol/the-reason [NC, L, R=301]
RewriteRule ^/sports/this-one***-as-well/ http://www.example.com/sports/this-one [NC, L, R=301]
RewriteRule ^/buzz/the-#reason-for-buzz http://www.example.com/buzz/buzz-sells [NC, L, R=301]
RewriteRule ^/omg/ what-the-hell http://www.example.com/omg/wthell [NC, L, R=301]
RewriteRule ^/hash/HELL-YEAH http://www.example.com/hash/oh-yes [NC, L, R=301]
RewriteRule ^/celeb/he-did-it! http://www.example.com/celeb/we-believe [NC, L, R=301] and I want to edit them using awk (sed or any other tool) to help edit these variant of URLs so it passes apache's rewriterule config test Notice the characters like (*), (.), (#), (!) and even the space on the 5th line
How do I edit these set of lines so that everything looks correct to be deployed to apache and pass apache config test httpd -t ? EDIT: v1 Here is something that I am looking for that will pass apache's test RewriteRule ^/trendy/the-reason-for-example/? http://www.example.com/trendy/the-reason [NC, L, R=301]
RewriteRule ^/lol/2015/10\.\.\._for-example http://www.example.com/lol/the-reason [NC, L, R=301]
RewriteRule ^/sports/this-one\*\*\*-as-well/ http://www.example.com/sports/this-one [NC, L, R=301]
RewriteRule ^/buzz/the-\#reason-for-buzz http://www.example.com/buzz/buzz-sells [NC, L, R=301]
RewriteRule ^/omg/\ what-the-hell http://www.example.com/omg/wthell [NC, L, R=301]
RewriteRule ^/hash/HELL-YEAH http://www.example.com/hash/oh-yes [NC, L, R=301]
RewriteRule ^/celeb/he-did-it\! http://www.example.com/celeb/we-believe [NC, L, R=301] NOTE: please note line 5 had space and I had to escape the space. So there needs to be a way to detect is there is space in column 2 then escape it -- something along that line. | The codes come from the DSDT (Differentiated System Description Table) of your BIOS.
This "table" describes the integrated devices on your mainboard, their dependencies and power-management functions. Devices in the DSDT are arranged in a tree and each path component is limited to 4 characters. The codes in /proc/acpi/wakeup are the last path components (aka the names) of the devices the vendor used for the devices. They are inherently vendor-specific, as the vendor may name any device as he likes. But there are some names that are common between many vendors, either because they are used as examples in the ACPI specification or because they are obvious abbreviations: PS2K: PS/2 keyboard PS2M: PS/2 mouse PWRB or PBTN: Power button SLPB: Sleep button LID: Laptop lid RP0x or EXPx: PCIE slot #x (aka PCI Express Root Port #x) EHCx or USBx: USB 2.0 (EHCI) chip XHC: USB 3.0 (XHCI) chip PEGx: PCI Express for Graphics slot #x GLAN or IGBE: Gigabit Ethernet | {
"source": [
"https://unix.stackexchange.com/questions/236128",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/138406/"
]
} |
236,147 | The path is as follows: /usr/lib64/qt3.3/bin:/usr/local/bin:/usr/local/sbin:/usr/bin:/usr/sbin:/bin:/sbin:/home/mj1210/.local/bin:/home/mj1210/bin | The codes come from the DSDT (Differentiated System Description Table) of your BIOS.
This "table" describes the integrated devices on your mainboard, their dependencies and power-management functions. Devices in the DSDT are arranged in a tree and each path component is limited to 4 characters. The codes in /proc/acpi/wakeup are the last path components (aka the names) of the devices the vendor used for the devices. They are inherently vendor-specific, as the vendor may name any device as he likes. But there are some names that are common between many vendors, either because they are used as examples in the ACPI specification or because they are obvious abbreviations: PS2K: PS/2 keyboard PS2M: PS/2 mouse PWRB or PBTN: Power button SLPB: Sleep button LID: Laptop lid RP0x or EXPx: PCIE slot #x (aka PCI Express Root Port #x) EHCx or USBx: USB 2.0 (EHCI) chip XHC: USB 3.0 (XHCI) chip PEGx: PCI Express for Graphics slot #x GLAN or IGBE: Gigabit Ethernet | {
"source": [
"https://unix.stackexchange.com/questions/236147",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/138498/"
]
} |
236,365 | In bash scripts I try to keep my variables local to functions wherever I can and then pass what I need out of functions like below #!/bin/bash
function FUNCTION() {
local LOCAL="value"
echo "$LOCAL" # return this variable
}
GLOBAL=$(FUNCTION)
echo "$GLOBAL" But is it possible to do this while including the function's own echoes, so that if the function has its own messages to output I don't have to catch them in a variable? #!/bin/bash
function FUNCTION() {
local LOCAL="value"
echo "$LOCAL" # return this variable
echo "This function is done now" # do not return this variable
}
GLOBAL=$(FUNCTION)
echo "$GLOBAL" # should only echo 'value' | Anything that's printed by the function can be captured if you capture the right output stream. So the easiest way to print something and save some other output is to redirect the superfluous output to standard error: function FUNCTION() {
local LOCAL="value"
echo "$LOCAL"
echo "This function is done now" >&2
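# anything written to stderr shows up on the terminal but is not captured by GLOBAL=$(FUNCTION)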
} Another possibility is to log to a file rather than printing log messages directly, for example using something like this: log() {
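# write log messages to a file rather than to stdout, so command substitution in the caller does not capture them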
printf '%s\n' "$@" >> my.log
} That said, Bash functions cannot return variables . The only actual "return" value is the exit code. For this reason ( and many others ), if you want reliable logging, return values, exception handling and more you'll want to use a different language like Python, Ruby or Java. | {
"source": [
"https://unix.stackexchange.com/questions/236365",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/89568/"
]
} |
236,382 | The install guide for ack suggests installing the ack script using this command: curl http://beyondgrep.com/ack-2.14-single-file > ~/bin/ack && chmod 0755 !#:3 I assume that the !#:3 at the end is some kind of back-reference, but what does it mean? Is there an equivalent in zsh? Google has not been helpful. | This is a special syntax, expanded by bash. It also works for zsh. According to the bash man page (section HISTORY EXPANSION), the pattern
expands as follows: The event designator !# refers to the entire command line typed so far, which is curl http://beyondgrep.com/ack-2.14-single-file > ~/bin/ack && chmod 0755 : splits between the event designator (in this case the entire line)
and the word designator (which selects a sub-part); the word designator 3 selects the third word/argument (counting of words starts at zero), in this case ~/bin/ack . The final command line (usually displayed before being executed) is: curl http://beyondgrep.com/ack-2.14-single-file > ~/bin/ack && chmod 0755 ~/bin/ack . For details, see the bash manual or, very similarly, the zsh manual | {
"source": [
"https://unix.stackexchange.com/questions/236382",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/2171/"
]
} |
236,391 | I have a graphical machine of rhel7, when i run who command , one of the outputs in 2nd column is ':0'. What does ':0' mean ? | This is a special syntax, expanded by bash. It also works for zsh. According to the bash man page (section HISTORY EXPANSION), the pattern
expands as following: The event designator !# refers to the entire command line typed so far which is curl http://beyondgrep.com/ack-2.14-single-file > ~/bin/ack && chmod 0755 : splits between the event designator (this case the entire line)
and the word designator (selects a sub-part) the word designator 3 which selects the third word/argument (counting of words starts at zero), in this case ~/bin/ack . The final command line (usually displayed before executed) is: curl http://beyondgrep.com/ack-2.14-single-file > ~/bin/ack && chmod 0755 ~/bin/ack . For details, see the bash manual or very similar the zsh manual | {
"source": [
"https://unix.stackexchange.com/questions/236391",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/138665/"
]
} |
236,659 | The GNU Coreutils manual for mv says: -f --force Do not prompt the user before removing a destination file. However, this already seems to be the default behaviour for mv , so the -f option appears to be superfluous. E.g. in GNU Bash version 4.3.11: $ ls -l
total 0
$ touch 1 2; mv -f 1 2; ls
2
$ touch 1 2; mv 1 2; ls
2 It seems unlikely the intention of the -f flag is to override alias mv="mv -i" , because there are several standard ways of overriding an alias (e.g. using \mv ) that would do this more concisely and in a way that is consistent across commands. The manual notes that, "If you specify more than one of the -i, -f, -n options, only the final one takes effect," but it still seems unlikely the intention of the -f flag is to override the -i flag in general, because equivalent behaviour can be achieved by simply using mv , which is much more concise and comprehensible than using mv -if . That being the case, what is the purpose of the -f flag? Why does it exist? | The usage of -f is more clearly described in the man page from 4BSD, which was where the -f and -i options were added: If file2 already exists, it is removed before file1 is moved. If file2 has a mode which forbids writing, mv prints the mode and reads the standard input to obtain a line; if the line begins with y, the move takes place; if not, mv exits. Options: -i stands for interactive mode. Whenever a move is to supercede an existing file, the user is prompted by the name of the file followed by a question mark. If he answers with a line starting with 'y', the move continues. Any other reply prevents the move from occurring. -f stands for force. This option overrides any mode restrictions or the -i switch. An even more precise definition of how mv operates is given in the POSIX standard , which adds that -f only overrides -i if it occurs later in the command line. So the default behavior is a bit different from -f . The default is to ask for confirmation only when the target isn't writable. (This behavior goes back at least as far as V4 , where mv didn't take any options.) If the -i option is given, mv will additionally ask for confirmation whenever the target exists. The -f option will inhibit asking in both of those cases (if it occurs after any -i ). | {
"source": [
"https://unix.stackexchange.com/questions/236659",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/-1/"
]
} |
236,751 | I have a file which has the following content: zdk
aaa
b12
cdn
dke
kdn Input1: aaa and cdn Output 1: aaa
b12
cdn Input 2: zdk and dke Output 2: zdk
aaa
b12
cdn
dke I could use the commands below to achieve this: grep -a aaa -A2 file # Output 1
grep -a aaa -A4 file # Output 2 But in the file I don't know the exact occurrence (position) of the end string pattern (the file has 20000 rows). How can I achieve this? | grep won't help you here. This is a job better accomplished with sed using range expressions: $ sed -n '/aaa/,/cdn/p' file
aaa
b12
cdn
$ sed -n '/zdk/,/dke/p' file
zdk
aaa
b12
cdn
dke sed -n suppresses the automatic printing, so that lines are printed only when explicitly asked for, which here happens while the range /aaa/,/cdn/ is active. These range expressions are also available in awk , where you can say: awk '/zdk/,/dke/' file Of course, all these conditions can be expanded to a stricter regex like sed -n '/^aaa$/,/^cdn$/p' file to check that the lines consist of exactly aaa and cdn , nothing else. | {
"source": [
"https://unix.stackexchange.com/questions/236751",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/81640/"
]
} |
237,050 | I am making a backup of some files and am creating a checksum file, checksums.txt , to later check the integrity of my data. However pasting each checksum manually into the file is inconvenient since my backup consists of hundreds of files, so asking around I got the suggestion to simplify the process by redirecting the output to the checksum file: $ md5sum file > checksums.txt or $ sha512sum file > checksums.txt However doing this replaces the contents of the checksums.txt file with the checksum of, in this example, the file file; I would like, instead, to append the checksum of file to checksums.txt without deleting its contents. So my question is: how do I do this? Just one more thing. Since I am a basic user, try (only if possible) to make easy-to-follow suggestions. | If you want to append to a file you have to use >> . So your examples would be $ md5sum file >> checksums.txt and $ sha512sum file >> checksums.txt | {
"source": [
"https://unix.stackexchange.com/questions/237050",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/93996/"
]
} |
237,063 | I was happily watching a TV show episode and 5 minutes later I am with a fried computer. I was using elementaryOS. I am going step by step with what just happened: I was using VLC when suddently the video stopped working and after 10-20 seconds a warning message from VLC popped up, said something like it couldn't reproduce the file. Then everything started freezing really fast, I could barely do one task, minimize and maximize some windows and 10 seconds later the system completely froze. I forced shutdown the computer holding the button and started it again. It starts, shows the Acer logo, then the Windows startup menu, I press Escape so Grub can show up, here's where I have elementary OS as well as Ubuntu. Surprise. Frozen black screen and Grub does not show up. After a few seconds I get a screen saying error: unknown filesystem. Entering rescue mode... and a grub rescue> prompt. I shut down and restart again 3-4 more times and the same thing keeps happening. Then the 4th or 5th time I restart the computer not even the Windows startup menu is showing up, it just froze at the Acer logo. Shutdown again and now the Windows menu is up, press Escape, and same story with Grub. It's gone. Again that error. I shutdown again and it stalls for a while in the Acer screen but finally the Windows menu shows up. And I'm like well "would even Windows work?". Nope, it doesn't. Trying to start Windows just brings back the Acer screen and it's locked there as I am typing this. Latest update: when I start up the computer there's a weird cracking repeating sound. Fried hard drive? | If you want to append to a file you have to use >> . So your examples would be $ md5sum file >> checksums.txt and $ sha512sum file >> checksums.txt | {
"source": [
"https://unix.stackexchange.com/questions/237063",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/64512/"
]
} |
237,072 | Recently, I installed Kali Linux 2.0 as a third OS on my Dell Latitude E7240. I used Unetbootin to make a bootable USB of Kali Linux. When I booted from it, it gave me options I wasn't used to. I chose Default the first time, and I was met with a black screen, so I manually shut my computer off, rebooted, and then chose Live Encrypted USB Persistence. After a while of outputting stuff while booting which I did not understand at all, I finally got into a live session of Kali Linux 2.0. From there, I searched in the applications and found Install Kali, which I clicked. I was given a graphical interface which removed the dash and bar at the top, and only showed me the installer, which was partially cut off. So, for I believe two things (one of which was configuring the network), I could not see any options, and blindly hit enter. However, I'm fairly certain nothing went wrong here, as the installation carried through smoothly and asked me mostly what I would expect to be asked while installing. However, since I already have Ubuntu, the first time, I selected no for installing Grub, then it said I had to make my OS bootable and so I had to install something somewhere, so I just chose my hard drive, /dev/sda , rather than entering the device manually, which I don't have any experience with. I finished the installation successfully, then rebooted. My Ubuntu Grub loaded, but I didn't see Kali Linux. I tried following tutorials to add it to Grub, but had no luck. So, I reinstalled Kali, this time choosing to install Grub. Then I rebooted, and the Kali Grub showed up. However, now I get warnings when booting into Kali ( Using Kali Linux ) and when booting into Ubuntu. This post is about booting into Ubuntu. When I boot into Ubuntu, I get the warnings shown in the following image: EDIT2: New Image with more of the warnings (the warnings that flash for less than a second). I don't think I have experienced any problems yet. However, just to be safe, I would like to know what this means. So what does that mean? If I am missing information, please tell me, and I will add it. EDIT1: This all began AFTER I installed Kali Linux. | If you want to append to a file you have to use >> . So your examples would be $ md5sum file >> checksums.txt and $ sha512sum file >> checksums.txt | {
"source": [
"https://unix.stackexchange.com/questions/237072",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/139098/"
]
} |
237,221 | If I understand the Linux philosophy correctly, sudo should be used sparingly, and most operations should be performed as an under-privileged user. But that doesn't seem to make sense, since I'm always having to input sudo , whether I'm managing packages, editing config files, installing a program from source, or what have you. These are not even technical stuff, just anything a regular user does. It reminds me very much of Window's UAC, which people either disable, or configure to not require a password (just a click). Furthermore, many people's Windows users are administrator accounts as well. Also, I've seen some people display commands that require sudo privileges without sudo . Do they have their system configured in such a way that sudo is not required? | You mentioned these system adminstration functions managing packages, editing config files, installing a program from source as things that anything a regular user does In a typical multiuser system these are not ordinary user actions; a systems administrator would worry about this. Ordinary users (not "under privileged") can then use the system without worrying about its upkeep. On a home system, yes, you end up having to administer the system as well as using it. Is it really such a hardship to use sudo ? Remember that if it's just your system there's no reason why you can't either pop into a root shell ( sudo -s - see this post for an overview of various means of getting a root shell) and/or configure sudo not to prompt for a password. | {
"source": [
"https://unix.stackexchange.com/questions/237221",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/139248/"
]
} |
237,443 | I know how to pass arguments into a shell script. These arguments are declared in AWS datapipeline and passed through. This is what a shell script would look like: firstarg=$1
secondarg=$2 How do I do this in Python? Is it the exact same? | This worked for me: import sys
firstarg=sys.argv[1]
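# sys.argv[0] is the script name itself; positional arguments start at index 1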
secondarg=sys.argv[2]
thirdarg=sys.argv[3] | {
"source": [
"https://unix.stackexchange.com/questions/237443",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/133594/"
]
} |
237,531 | If you issue the ls -all command some files are displayed with the timestamp containing the year without the time and others with the timestamp containing the time but not the year. Why does this happen? Is the timestamp representative of the time the file was created at? | By default, file timestamps are listed in abbreviated form, using a
date like ‘Mar 30 2002’ for non-recent timestamps, and a
date-without-year and time like ‘Mar 30 23:45’ for recent timestamps.
This format can change depending on the current locale as detailed
below. A timestamp is considered to be recent if it is less than six
months old, and is not dated in the future. If a timestamp dated today
is not listed in recent form, the timestamp is in the future, which
means you probably have clock skew problems which may break programs
like make that rely on file timestamps. Source: http://www.gnu.org/software/coreutils/manual/coreutils.html#Formatting-file-timestamps To illustrate: $ for i in {1..7}; do touch -d "$i months ago" file$i; done
$ ls -l
total 0
-rw-r--r-- 1 terdon terdon 0 Sep 21 02:38 file1
-rw-r--r-- 1 terdon terdon 0 Aug 21 02:38 file2
-rw-r--r-- 1 terdon terdon 0 Jul 21 02:38 file3
-rw-r--r-- 1 terdon terdon 0 Jun 21 02:38 file4
-rw-r--r-- 1 terdon terdon 0 May 21 02:38 file5
-rw-r--r-- 1 terdon terdon 0 Apr 21 2015 file6
-rw-r--r-- 1 terdon terdon 0 Mar 21 2015 file7 | {
"source": [
"https://unix.stackexchange.com/questions/237531",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/122887/"
]
} |
237,854 | When you paste some command in terminal, it will sometimes automatically execute the command (just like if the "Enter" key was pressed), sometimes not. I've been using Linux for ages, pasted thousands of commands in various consoles on many distros, and I am still unable to tell if the command I'm about to paste will be executed automatically or not. What triggers this behavior? | It's the return character in the text you are copying that's triggering the automatic execution. Let's take a different example, copy these lines all at once and paste them into your terminal: echo "Hello";
echo "World"; If you look in your terminal, you will not see this: $ echo "Hello";
echo "World"; You will see this (there may also be a line saying World ): $ echo "Hello";
Hello
$ echo "World"; Instead of waiting for all the input to be pasted in, the first line executes (and for the same reason, the second line may or may not do so as well). This is because there is a RETURN character between the two lines. When you press the ENTER key on your keyboard, all you are doing is sending the character with the ASCII value of 13 . That character is detected immediately by your terminal, and knows it has special instructions to execute what you have typed so far. When stored on your computer or printed on your screen, the RETURN character is just like any other letter of the alphabet, number, or symbol. This character can be deleted with backspace, or copied to the clipboard just like any other regular character. The only difference is, when your browser sees the character, it knows that instead of printing a visible character, it should treat it differently, and has special instructions to move the next set of text down to the next line. The RETURN character and the SPACE character (ascii 32 ), along with a few other seldom used characters, are known as "non-printing characters" for this reason. Sometimes when you copy text from a website, it's difficult to copy only the text and not the return at the end (and is often made more difficult by the styling on the page). Experiment time! Below you will find two commands that will illustrate the problem, and that you can "practice" on. Start your cursor right before echo and drag until the highlight is right before the arrow: echo "Wait for my signal...";<- End cursor here right after the semicolon And now try the second command. Start your cursor right before echo and drag down until the cursor is on the second line, but is right in front of the <- arrow. Copy it, and then paste it into your terminal: echo 'Go go go!';
<- End cursor here right before the arrow Depending on your browser, it may not even be visible that the text you selected went over two lines. But when you paste it into the terminal, you will find that it executes the line, because it found a RETURN character in the copied text. | {
"source": [
"https://unix.stackexchange.com/questions/237854",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/54995/"
]
} |
237,861 | I am trying to set up a LAN chat with two users using Linux server and none of them is root. I have tried this two methods: write account_name on both computers And: nc -l port_number on first computer nc IP_adress port_number on second computer But the problem is whenever I write something and person on the other side hits enter it breaks also my line e.g: I am typing: "This is just a sim enter ple text". And this enter from another person breaks my line. Is there way how can I fix that? Or another way I can set up this chat? | It's the return character in the text you are copying that's triggering the automatic execution. Let's take a different example, copy these lines all at once and paste them into your terminal: echo "Hello";
echo "World"; If you look in your terminal, you will not see this: $ echo "Hello";
echo "World"; You will see this (there may also be a line saying World ): $ echo "Hello";
Hello
$ echo "World"; Instead of waiting for all the input to be pasted in, the first line executes (and for the same reason, the second line may or may not do so as well). This is because there is a RETURN character between the two lines. When you press the ENTER key on your keyboard, all you are doing is sending the character with the ASCII value of 13 . That character is detected immediately by your terminal, and knows it has special instructions to execute what you have typed so far. When stored on your computer or printed on your screen, the RETURN character is just like any other letter of the alphabet, number, or symbol. This character can be deleted with backspace, or copied to the clipboard just like any other regular character. The only difference is, when your browser sees the character, it knows that instead of printing a visible character, it should treat it differently, and has special instructions to move the next set of text down to the next line. The RETURN character and the SPACE character (ascii 32 ), along with a few other seldom used characters, are known as "non-printing characters" for this reason. Sometimes when you copy text from a website, it's difficult to copy only the text and not the return at the end (and is often made more difficult by the styling on the page). Experiment time! Below you will find two commands that will illustrate the problem, and that you can "practice" on. Start your cursor right before echo and drag until the highlight is right before the arrow: echo "Wait for my signal...";<- End cursor here right after the semicolon And now try the second command. Start your cursor right before echo and drag down until the cursor is on the second line, but is right in front of the <- arrow. Copy it, and then paste it into your terminal: echo 'Go go go!';
<- End cursor here right before the arrow Depending on your browser, it may not even be visible that the text you selected went over two lines. But when you paste it into the terminal, you will find that it executes the line, because it found a RETURN character in the copied text. | {
"source": [
"https://unix.stackexchange.com/questions/237861",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/139074/"
]
} |
237,914 | I have a 2013 Retina MacBook Pro, and I really want to install Debian on it. I have the know-how and have had at least three Debian systems before this. I am very knowledgable with the command-line and Linux's inner workings, and partitioning isn't an issue for me. So, I just have one question before I install Debian. My dad has warned me that Linux, in particular, can make laptop batteries explode and/or ruin hardware on MacBooks. I find this very strange, but don't really have any research to disprove it. I can't seem to find anything about it on the Internet, so can someone help me out? | Laptop batteries typically have onboard firmware to control safe charging & discharging of the battery, report battery charge level to the OS, and prevent thermal runaway , which is what will cause an Li-ion battery to explode (or more accurately, catch fire). Most modern ones also contain mechanical failsafes to prevent such fires & explosions. This firmware is stored on the battery, separate from the OS. While it can be updated from the OS (although this depends on the battery & laptop), it's not something that is altered when installing a new OS or something that is typically ever tampered with unless done so by the user running a battery firmware update. The only thing changing OS will affect is the load on the system & the hardware drivers used, not the safety features of the battery. Load on the system in and of itself will not normally cause issues with the battery other than faster discharging. Interestingly, according to this forbes article , there was actually a vulnerability in Apple laptops (running OSX, not Linux) that could do nasty things to the firmware on the batteries - perhaps your Dad has read something like that which is why he seems to think the OS can do this? (It's more than likely been fixed since 2011 when the article was written). EDIT - in conclusion, aside from possible attack vectors for battery firmware hacks, the choice of OS alone cannot cause a battery to explode. | {
"source": [
"https://unix.stackexchange.com/questions/237914",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/133466/"
]
} |
238,152 | I'm trying to copy a batch of files with scp but it is very slow. This is an example with 10 files: $ time scp cap_* user@host:~/dir
cap_20151023T113018_704979707.png 100% 413KB 413.2KB/s 00:00
cap_20151023T113019_999990226.png 100% 413KB 412.6KB/s 00:00
cap_20151023T113020_649251955.png 100% 417KB 416.8KB/s 00:00
cap_20151023T113021_284028464.png 100% 417KB 416.8KB/s 00:00
cap_20151023T113021_927950468.png 100% 413KB 413.0KB/s 00:00
cap_20151023T113022_567641507.png 100% 413KB 413.1KB/s 00:00
cap_20151023T113023_203534753.png 100% 414KB 413.5KB/s 00:00
cap_20151023T113023_855350640.png 100% 412KB 411.7KB/s 00:00
cap_20151023T113024_496387641.png 100% 412KB 412.3KB/s 00:00
cap_20151023T113025_138012848.png 100% 414KB 413.8KB/s 00:00
cap_20151023T113025_778042791.png 100% 413KB 413.4KB/s 00:00
real 0m43.932s
user 0m0.074s
sys 0m0.030s The strange thing is that the transfer rate is about 413KB/s and the file size is about 413KB so really it should transfer one file per second, however it's taking about 4.3 seconds per file. Any idea where this overhead comes from, and is there any way to make it faster? | You could use rsync (over ssh ), which uses a single connection to transfer all the source files. rsync -avP cap_* user@host:dir If you don't have rsync (and why not!?) you can use tar with ssh like this, which avoids creating a temporary file (these two alternatives are equivalent): tar czf - cap_* | ssh user@host tar xvzfC - dir
tar cf - cap_* | gzip | ssh user@host 'cd dir && gzip -d | tar xvf -' The rsync is to be preferred, all other things being equal, because it's restartable in the event of an interruption. | {
"source": [
"https://unix.stackexchange.com/questions/238152",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/38085/"
]
} |