Dataset columns: source_id (int64, values 1 to 4.64M), question (string, length 0 to 28.4k), response (string, length 0 to 28.8k), metadata (dict)
123,714
Not sure how to google this (I tried with some things and didn't get much) or how this is even called. This is my typical workflow: SSH to a server run something there and leave it running in the background (bg command for example) exit SSH and come back later Is there any way I could, when I come back later, see if the process was killed and on what status it was killed (segfault or something else) or get any kind of exit status information of the process I was running? For example, if I run it normally and leave my terminal open, it will print segmentation fault or something else, but I have no idea how to obtain this status once I disconnected and connected back again.
This vulnerability has a high potential impact because if your system has been attacked, it will remain vulnerable even after patching, and attacks may not have left any traces in logs. Chances are that if you patched quickly and you aren't a high-profile target, nobody will have gotten around to attacking you, but it's hard to be sure. Am I vulnerable? The buggy version of OpenSSL The buggy software is the OpenSSL library 1.0.1 up to 1.0.1f , and OpenSSL 1.0.2 up to beta1. Older versions (0.9.x, 1.0.0) and versions where the bug has been fixed (1.0.1g onwards, 1.0.2 beta 2 onwards) are not affected. It's an implementation bug, not a flaw in the protocol, so only programs that use the OpenSSL library are affected. You can use the command line tool openssl version -a to display the OpenSSL version number. Note that some distributions port the bug fix to earlier releases; if your package's change log mentions the Heartbleed bug fix, that's fine, even if you see a version like 1.0.1f. If openssl version -a mentions a build date (not the date on the first line) of 2014-04-07 around evening UTC or later, you should be fine. Note that the OpenSSL package may have 1.0.0 in its name even though the version is 1.0.1 ( 1.0.0 refers to the binary compatibility). Affected applications Exploitation is performed through an application which uses the OpenSSL library to implement SSL connections . Many applications use OpenSSL for other cryptographic services, and that's fine: the bug is in the implementation of a particular feature of the SSL protocol, the "heartbeat". You may want to check which programs are linked against the library on your system. On systems that use dpkg and apt (Debian, Ubuntu, Mint, …), the following command lists installed packages other than libraries that use libssl1.0.0 (the affected package): apt-cache rdepends libssl1.0.0 | tail -n +3 | xargs dpkg -l 2>/dev/null | grep '^ii' | grep -v '^ii lib' If you run some server software that's on this list and listens to SSL connections , you're probably affected. This concerns web servers, email servers, VPN servers, etc. You'll know that you've enabled SSL because you had to generate a certificate, either by submitting a certificate signing request to a certification authority or by making your own self-signed certificate. (It's possible that some installation procedure has generated a self-signed certificate without you noticing, but that's generally done only for internal servers, not for servers exposed to the Internet.) If you ran a vulnerable server exposed to the Internet, consider it compromised unless your logs show no connection since the announcement on 2014-04-07. (This assumes that the vulnerability wasn't exploited before its announcement.) If your server was only exposed internally, whether you need to change the keys will depend on what other security measures are in place. Client software is affected only if you used it to connect to a malicious server. So if you connected to your email provider using IMAPS, you don't need to worry (unless the provider was attacked -- but if that's the case they should let you know), but if you browsed random websites with a vulnerable browser you may need to worry. So far it seems that the vulnerability wasn't being exploited before it was discovered, so you only need to worry if you connected to malicious servers since 2014-04-08.
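If you just want to pull out the build date mentioned above, something like this works (a sketch; the exact wording of the line varies between builds):
    openssl version -a | grep -i 'built on'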
The following programs are unaffected because they don't use OpenSSL to implement SSL: SSH (the protocol is not SSL) Chrome/Chromium ( uses NSS ) Firefox (uses NSS) (at least with Firefox 27 on Ubuntu 12.04, but not with all builds?) What is the impact? The bug allows any client who can connect to your SSL server to retrieve about 64kB of memory from the server at a time. The client doesn't need to be authenticated in any way. By repeating the attack, the client can dump different parts of the memory in successive attempts. This potentially allows the attacker to retrieve any data that has been in the memory of the server process, including keys, passwords, cookies, etc. One of the critical pieces of data that the attacker may be able to retrieve is the server's SSL private key. With this data, the attacker can impersonate your server. The bug also allows any server that your SSL client connected to to retrieve about 64kB of memory from the client at a time. This is a worry if you used a vulnerable client to manipulate sensitive data and then later connected to an untrusted server with the same client. The attack scenarios on this side are thus significantly less likely than on the server side. Note that for typical distributions, there is no security impact on package distribution as the integrity of packages relies on GPG signatures, not on SSL transport. How do I fix the vulnerability? Remediation of exposed servers Take all affected servers offline. As long as they're running, they're potentially leaking critical data. Upgrade the OpenSSL library package . All distributions should have a fix out by now (either with 1.0.1g, or with a patch that fixes the bug without changing the version number). If you compiled from source, upgrade to 1.0.1g or above. Make sure that all affected servers are restarted. On Linux, you can check if potentially affected processes are still running with grep 'libssl.*(deleted)' /proc/*/maps Generate new keys . This is necessary because the bug might have allowed an attacker to obtain the old private key. Follow the same procedure you used initially. If you use certificates signed by a certification authority, submit your new public keys to your CA. When you get the new certificate, install it on your server. If you use self-signed certificates, install the new certificate on your server. Either way, move the old keys and certificates out of the way (but don't delete them, just ensure they aren't getting used any more). Now that you have new uncompromised keys, you can bring your server back online . Revoke the old certificates. Damage assessment : any data that has been in the memory of a process serving SSL connections may potentially have been leaked. This can include user passwords and other confidential data. You need to evaluate what this data may have been. If you're running a service that allows password authentication, then the passwords of users who connected since a little before the vulnerability was announced should be considered compromised. Check your logs and change the passwords of any affected user. Also invalidate all session cookies, as they may have been compromised. Client certificates are not compromised. Any data that was exchanged since a little before the vulnerability may have remained in the memory of the server and so may have been leaked to an attacker. If someone has recorded an old SSL connection and retrieved your server's keys, they can now decrypt their transcript. (Unless PFS was ensured -- if you don't know, it wasn't.)
Remediation in other cases Servers that only listen on localhost or on an intranet are only to be considered exposed if untrusted users can connect to them. With clients, there are only rare scenarios where the bug can have been exploited: an exploit would require that you used the same client process to manipulate confidential data (e.g. passwords, client certificates, …); and then, in the same process, connected to a malicious server over SSL. So for example an email client that you only use to connect to your (not completely untrusted) mail provider is not a concern (not a malicious server). Running wget to download a file is not a concern (no confidential data to leak). If you did that between 2014-04-07 evening UTC and upgrading your OpenSSL library, consider any data that was in the client's memory to be compromised. References The Heartbleed Bug (by one of the two teams who independently discovered the bug) How exactly does the OpenSSL TLS heartbeat (Heartbleed) exploit work? Does Heartbleed mean new certificates for every SSL server? Heartbleed: What is it and what are options to mitigate it?
{ "source": [ "https://unix.stackexchange.com/questions/123714", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/63545/" ] }
123,717
On my server I have a directory structure looking something like this: /myproject/code I normally have an ssh connection to the server and 'stand' in that directory: root@machine:/myproject/code# When I deploy a new version of my code, the code directory is removed so I'm left with: root@machine:/myproject/code# ./run -bash: ./run: No such file or directory And the only solution I've found is to cd out and back in: root@machine:/myproject/code# cd ../code root@machine:/myproject/code# ./run Running... Can I avoid this? It's a somewhat strange behavior. If you have a nice explanation why this happens I would appreciate it.
To me the "cd ../code" is a noop. I'm very interested into hearing why it isn't. Because files and directories are fundamentally filesystem inodes , not names -- this is perhaps an implementation detail specific to the filesystem type, but it is true for all the ext systems, so I'll stick to it here. When a new directory code is created, it is associated with a new inode, and that's where it is. There is no record kept of previously deleted files and directories, so there is no means by which the system could check what inode it used to occupy and perhaps shuffle things around so that it is the same again; such a system would quickly become unworkable, and in any case, it is probably no guarantee that you would be back there again -- that would be sort of undesirable, since it means you could also accidentally end up somewhere else if a directory is created that takes your (currently unused) inode. I'm not sure if this last possibility exists, or if the inode of the deleted directory currently assigned to your present working directory is tracked so that nothing will be assigned to it for the duration, etc.
{ "source": [ "https://unix.stackexchange.com/questions/123717", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/21718/" ] }
123,725
I'm trying to setup the WiFi connection on my Lenovo B590 but I get the following error : ifdown: interface wlan0 not configured wpa_supplicant: /sbin/wpa_supplicant daemon failed to start run-parts: /etc/network/if-pre-up.d/wpasupplicant exited with return code 1 Internet Systems Consortium DHCP Client 4.2.2 Copyright 2004-2011 Internet Systems Consortium. All rights reserved. For info, please visit https://www.isc.org/software/dhcp/ Cannot find device "wlan0" Bind socket to interface: No such device Failed to bring up work-network So I looked up which firmware I need so my Debian 7 Wheezy system could find wlan0. NOTE: the wlan chipset is a Broadcom Corp. BCM43142 and its PCI-ID is 14e4:4365 host@user $ lspci -vnn -d 14e4: |grep Network 02:00.0 Network controller [0280]: Broadcom Corporation BCM43142 802.11b/g/n [14e4:4365] (rev 01) I found this article on the Debian Wiki giving all the steps needed to get it working, I followed them and got no errors anywhere. I set up the WiFi interface with wpa_supplicant but when I run $ ifdown wlan0 $ ifup wlan0=work-network I still get the error Cannot find device "wlan0" Bind socket to interface: No such device Do you know what is missing for it to work? EDIT: below is the dmesg output $ dmesg |grep broadcom -i [ 2.574645] usb 1-1.4: Manufacturer: Broadcom Corp [ 6.828086] eth1: Broadcom BCM4365 802.11 Hybrid Wireless Controller 6.20.55.19 (r300276) [ 10.343512] Broadcom 43xx driver loaded [ Features: PMNLS ] And here is the content of /etc/network/interfaces # The loopback network interface auto lo # iface lo inet loopback iface work-network inet dhcp wpa-conf /etc/wpa_supplicant/work-network.conf # The primary network interface auto eth0 allow-hotplug eth0
To me the "cd ../code" is a noop. I'm very interested into hearing why it isn't. Because files and directories are fundamentally filesystem inodes , not names -- this is perhaps an implementation detail specific to the filesystem type, but it is true for all the ext systems, so I'll stick to it here. When a new directory code is created, it is associated with a new inode, and that's where it is. There is no record kept of previously deleted files and directories, so there is no means by which the system could check what inode it used to occupy and perhaps shuffle things around so that it is the same again; such a system would quickly become unworkable, and in any case, it is probably no guarantee that you would be back there again -- that would be sort of undesirable, since it means you could also accidentally end up somewhere else if a directory is created that takes your (currently unused) inode. I'm not sure if this last possibility exists, or if the inode of the deleted directory currently assigned to your present working directory is tracked so that nothing will be assigned to it for the duration, etc.
{ "source": [ "https://unix.stackexchange.com/questions/123725", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/-1/" ] }
123,849
I'm trying to update the timestamp to the current time on all of the xml files in my directory (recursively). I'm using Mac OSX 10.8.5. On about 300,000 files, the following echo command takes 10 seconds : for file in `find . -name "*.xml"`; do echo >> $file; done However, the following touch command takes 10 minutes ! : for file in `find . -name "*.xml"`; do touch $file; done Why is echo so much faster than touch here?
In bash, touch is an external binary, but echo is a shell builtin :
$ type echo
echo is a shell builtin
$ type touch
touch is /usr/bin/touch
Since touch is an external binary, and you invoke touch once per file, the shell must create 300,000 instances of touch , which takes a long time. echo , however, is a shell builtin, and the execution of shell builtins does not require forking at all. Instead, the current shell does all of the operations and no external processes are created; this is the reason why it is so much faster. Here are two profiles of the shell's operations. You can see that a lot of time is spent cloning new processes when using touch . Using /bin/echo instead of the shell builtin should show a much more comparable result. Using touch
$ strace -c -- bash -c 'for file in a{1..10000}; do touch "$file"; done'
% time     seconds  usecs/call     calls    errors syscall
------ ----------- ----------- --------- --------- ----------------
 56.20    0.030925           2     20000     10000 wait4
 38.12    0.020972           2     10000           clone
  4.67    0.002569           0     80006           rt_sigprocmask
  0.71    0.000388           0     20008           rt_sigaction
  0.27    0.000150           0     10000           rt_sigreturn
[...]
Using echo
$ strace -c -- bash -c 'for file in b{1..10000}; do echo >> "$file"; done'
% time     seconds  usecs/call     calls    errors syscall
------ ----------- ----------- --------- --------- ----------------
 34.32    0.000685           0     50000           fcntl
 22.14    0.000442           0     10000           write
 19.59    0.000391           0     10011           open
 14.58    0.000291           0     20000           dup2
  8.37    0.000167           0     20013           close
[...]
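For the original problem of touching 300,000 files, the fork overhead can also be avoided by batching many files into each touch invocation (a sketch using find/xargs):
    find . -name '*.xml' -print0 | xargs -0 touch
xargs packs as many file names as fit onto each command line, so touch is forked a few hundred times instead of 300,000.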
{ "source": [ "https://unix.stackexchange.com/questions/123849", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/-1/" ] }
123,917
I tried to display only hidden files but don't know how to do it. This works (but also matches dots in other places): ls -la | grep '\.' I tried adding ^ but didn't find the solution.
ls -ld .* will do what you want.
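Note that .* also matches the special entries . and .. ; if you want to skip those, a common glob trick is the following (a sketch -- if either pattern matches nothing, it is passed to ls literally and produces an error message):
    ls -ld .[!.]* ..?*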
{ "source": [ "https://unix.stackexchange.com/questions/123917", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/64780/" ] }
124,052
If I run tar -cvf on a directory of size 937MB to create an easily downloadable copy of a deeply nested folder structure, do I risk filling the disk given the following df -h output: /dev/xvda1 7.9G 3.6G 4.3G 46% / tmpfs 298M 0 298M 0% /dev/shm Related questions: If the disk might fill up, why i.e. what will Linux (Amazon AMI) and/or tar be doing under the hood? How can I accurately determine this information myself without asking again?
tar -c data_dir | wc -c without compression or tar -cz data_dir | wc -c with gzip compression or tar -cj data_dir | wc -c with bzip2 compression will print the size of the archive that would be created in bytes, without writing to disk. You can then compare that to the amount of free space on your target device. You can check the size of the data directory itself, in case an incorrect assumption was made about its size, with the following command: du -h --max-depth=1 data_dir As already answered, tar adds a header to each record in the archive and also rounds up the size of each record to a multiple of 512 bytes (by default). The end of an archive is marked by at least two consecutive zero-filled records. So it is always the case that you will have an uncompressed tar file larger than the files themselves; the number of files and how they align to 512-byte boundaries determines the extra space used. Of course, filesystems themselves use block sizes that may be bigger than an individual file's contents, so be careful where you untar it: the filesystem may not be able to hold lots of small files even though it has free space greater than the tar size! https://en.wikipedia.org/wiki/Tar_(computing)#Format_details
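Putting the pieces together, a sketch of a pre-flight check (assumes GNU df for the --output option; /target/dir is a placeholder):
    need=$(tar -c data_dir | wc -c)                         # archive size in bytes
    avail=$(df --output=avail -B1 /target/dir | tail -n 1)  # free bytes on the target filesystem
    [ "$need" -lt "$avail" ] && echo "archive fits" || echo "not enough space"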
{ "source": [ "https://unix.stackexchange.com/questions/124052", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/50131/" ] }
124,081
Attention please: I am not asking how to make a file from the command line! I have been using touch for making files for years without paying attention that its main purpose is something else. If one wants to create a file from command line there are so many possibilities: touch foo.bar > foo.bar cat > foo.bar echo -n > foo.bar printf '' > foo.bar And I'm sure there are more. But the fact is, none of the commands above are actually designed for creating files. For example, man touch suggests this command is for changing file timestamps. Why doesn't an OS as complete as Unix (or Linux) have a command solely designed for creating files?
I would say because it's hardly ever necessary to create an empty file that you won't fill with content immediately on the command line or in shell scripting. There is absolutely no benefit in creating a file first and then using I/O redirection to write to the file if you can do so in one step. In those cases where you really want to create an empty file and leave it, I'd argue that > "${file}" could not be briefer and more elegant. TL;DR : It doesn't exist because creating empty files most often has no use, and in the cases where it does, there are already a myriad of options available to achieve this goal. On a side note, touch never truncates an existing file (it only updates its timestamps), whereas the options using redirection will always truncate the file if it exists (so technically, those solutions are not identical). > foo is the preferred method since it saves a fork and echo -n should be avoided in general since it's highly unportable.
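The truncation difference is easy to demonstrate in bash (a minimal sketch):
    echo data > foo.bar
    touch foo.bar    # timestamps updated; the file still contains "data"
    > foo.bar        # the file is now truncated to zero bytes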
{ "source": [ "https://unix.stackexchange.com/questions/124081", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/56144/" ] }
124,127
I'm writing an application. It has the ability to spawn various external processes. When the application closes, I want any processes it has spawned to be killed. Sounds easy enough, right? Look up my PID, and recursively walk the process tree, killing everything in sight, bottom-up style. Except that this doesn't work . In one specific case, I spawn foo , but foo just spawns bar and then immediately exits, leaving bar running. There is now no record of the fact that bar was once part of the application's process tree. And hence, the application has no way of knowing that it should kill bar . I'm pretty sure I can't be the first person on Earth to try to do this. So what's the standard solution? I guess really I'm looking for some way to "tag" a process in such a way that any process it spawns will unconditionally inherit the same tag. (So far, the best I can come up with is running the application as a different user. That way, you can just indiscriminately kill all processes belonging to that user. But this has all sorts of access permission problems...)
Update This is one of those ones where I clearly should have read the question more carefully (though seemingly this is the case with most answers to this question). I have left the original answer intact because it gives some good information, even though it clearly misses the point of the question. Using SID I think the most general, robust approach here (at least for Linux) is to use the SID (Session ID) rather than the PPID or PGID. This is much less likely to be changed by child processes and, in the case of a shell script, the setsid command can be used to start a new session. Outside of the shell, the setsid system call can be used. For a shell that is a session leader, you can kill all the other processes in the session by doing (the shell won't kill itself): kill $(ps -s $$ -o pid=) Note: The trailing equals sign in the argument pid= removes the PID column header. Otherwise, using system calls, calling getsid for each process seems like the only way. Using a PID namespace This is the most robust approach, however the downsides are that it is Linux only and that it needs root privileges. Also the shell tools (if used) are very new and not widely available. For a more detailed discussion of PID namespaces, please see this question - Reliable way to jail child processes using nsenter . The basic approach here is that you can create a new PID namespace by using the CLONE_NEWPID flag with the clone system call (or via the unshare command). When a process in a PID namespace is orphaned (i.e. when its parent process finishes), it is re-parented to the top level PID namespace process rather than to init . This means that you can always identify all the descendants of the top level process by walking the process tree. In the case of a shell script the PPID approach below would then reliably kill all descendants. Further reading on PID namespaces: Namespaces in operation, part 3: PID namespaces Namespaces in operation, part 4: more on PID namespaces Original Answer Killing child processes The easy way to do this in a shell script, provided pkill is available, is: pkill -P $$ This kills all children of the current process ( $$ expands to the PID of the current shell). If pkill isn't available, a POSIX compatible way is: kill $(ps -o pid= --ppid $$) Killing all descendant processes Another situation is that you may want to kill all the descendants of the current shell process as well as just the direct children. In this case you can use the recursive shell function below to list all the descendant PIDs, before passing them as arguments to kill: list_descendants () { local children=$(ps -o pid= --ppid "$1"); for pid in $children; do list_descendants "$pid"; done; echo "$children"; } kill $(list_descendants $$) Double forks One thing to beware of, which might prevent the above from working as expected, is the double fork() technique. This is commonly used when daemonising a process. As the name suggests, the process that is to be started runs in the second fork of the original process. Once the process is started, the first fork then exits meaning that the process becomes orphaned. In this case it will become a child of the init process instead of the original process that it was started from. There is no robust way to identify which process was the original parent, so if this is the case, you can't expect to be able to kill it without having some other means of identification (a PID file for example). However, if this technique has been used, you shouldn't try to kill the process without good reason.
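As a usage sketch of the SID approach from outside the session: given the PID of any process known to belong to the application (here $pid , a placeholder), you can kill everything still sharing its session (assumes the procps pkill , whose -s option matches on session ID):
    sid=$(ps -o sid= -p "$pid")   # look up the session ID
    pkill -s "$sid"               # kill every process still in that session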
Further Reading: Why fork() twice What is the reason for performing a double fork when creating a daemon?
{ "source": [ "https://unix.stackexchange.com/questions/124127", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/26776/" ] }
124,212
my Dell laptop is subject to this bug with kernel 3.14. As a workaround I wrote a simple script /usr/bin/brightness-fix: #!/bin/bash echo 0 > /sys/class/backlight/intel_backlight/brightness (and made executable: chmod +x /usr/bin/brightness-fix ) and a systemd service calling it that is executed at startup: /etc/systemd/system/brightness-fix.service [Unit] Description=Fixes intel backlight control with Kernel 3.14 [Service] Type=forking ExecStart=/usr/bin/brightness-fix TimeoutSec=0 StandardOutput=syslog #RemainAfterExit=yes #SysVStartPriority=99 [Install] WantedBy=multi-user.target and enabled: systemctl enable /etc/systemd/system/brightness-fix.service That works like a charm and I can control my display brightness as wanted. The problem comes when the laptop resumes after going to sleep mode (e.g. when closing the laptop lid): brightness control doesn't work anymore unless I manually execute my first script above: /usr/bin/brightness-fix How can I create another systemd service like mine above to be executed at resume time? EDIT: According to comments below I have modified my brightness-fix.service like this: [Unit] Description=Fixes intel backlight control with Kernel 3.14 [Service] Type=oneshot ExecStart=/usr/local/bin/brightness-fix TimeoutSec=0 StandardOutput=syslog [Install] WantedBy=multi-user.target sleep.target I have also added echo "$1 $2" > /home/luca/br.log to my script to check whether it is actually executed. The script is actually executed at resume ( post suspend ) as well, but it has no effect (backlight is 100% and cannot be changed). I also tried logging $DISPLAY and $USER and, at resume time, they are empty. So my guess is that the script is executed too early when waking up from sleep. Any hint?
I know this is an old question, but the following unit file worked for me to run a script upon resume from sleep: [Unit] Description=<your description> After=suspend.target [Service] User=root Type=oneshot ExecStart=<your script here> TimeoutSec=0 StandardOutput=syslog [Install] WantedBy=suspend.target I believe it is the After=suspend.target that makes it run on resume, rather than when the computer goes to sleep.
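For completeness, a sketch of installing such a unit (the file name is illustrative):
    sudo cp resume-fix.service /etc/systemd/system/
    sudo systemctl daemon-reload
    sudo systemctl enable resume-fix.service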
{ "source": [ "https://unix.stackexchange.com/questions/124212", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/57287/" ] }
124,225
Are the major and minor numbers unique? Are there any citations or references for this?
NAME   MAJ:MIN RM   SIZE RO MOUNTPOINT
sda      8:0    0 465.8G  0
├─sda1   8:1    0 298.2M  0
├─sda2   8:2    0     3G  0
├─sda3   8:3    0 458.7G  0 /
├─sda4   8:4    0     1K  0
└─sda5   8:5    0   3.8G  0
sr0     11:0    1  1024M  0
From The Linux Programming Interface , §14.1 Each device file has a major ID number and a minor ID number. The major ID identifies the general class of device, and is used by the kernel to look up the appropriate driver for this type of device. The minor ID uniquely identifies a particular device within a general class. The major and minor IDs of a device file are displayed by the ls -l command. [...] Each device driver registers its association with a specific major device ID, and this association provides the connection between the device special file and the device. The name of the device file has no relevance when the kernel looks for the device driver. See also this old (2001) Linux Device Drivers (2e) chapter . i.e. the intention is to provide a unique mapping of major:minor to device:instance for each type of device. Strictly, you can have two distinct devices with the same major:minor, as long as one is char and one is block:
# ls -l /dev/ram1 /dev/mem
crw-r----- 1 root kmem 1, 1 Jan 1 1970 /dev/mem
brw-rw---- 1 root disk 1, 1 Jan 1 1970 /dev/ram1
On Linux, at any point in time on one system the major:minor numbers for each type of device are unique. The numbers may however change over time, and need not be the same across different Linux systems (even the same distribution, kernel and hardware). Note that character and block devices have distinct numbering spaces, e.g. block major 1 is assigned to RAM disks, char major 1 is assigned to a set of kernel devices including null and zero. Historically device majors were (mostly) statically allocated through a registry (also still present, though unmaintained, in the kernel source Documentation/devices.txt ). These days many devices are allocated dynamically, this is managed by udev , and the mappings viewable in /proc/devices . The fixed devices still exist in include/uapi/linux/major.h (recently moved from include/major.h ) Now although the major:minor combination uniquely identifies specific device instances, there's nothing to stop you creating multiple device nodes (files) that refer to the same device. They don't even have to be created in /dev (but they do have to be on a filesystem that supports creating device nodes, and isn't mounted with the nodev option). A common use is creating duplicate zero, null and random devices in a chroot:
# find /dev /var/chroot -regextype posix-extended -regex ".*/(zero|null|random)" -type c | xargs ls -l
crwxrwxrwx 1 root root 1, 3 2012-11-21 03:22 /dev/null
crw-rw-r-- 1 root root 1, 8 2012-05-07 10:35 /dev/random
crw-rw-rw- 1 root root 1, 5 2012-11-21 03:22 /dev/zero
crwxrwxrwx 1 root root 1, 3 2012-11-21 03:22 /var/chroot/sendmail/dev/null
crw-rw-r-- 1 root root 1, 8 2012-05-07 10:35 /var/chroot/sendmail/dev/random
crw-rw-rw- 1 root root 1, 5 2012-11-21 03:22 /var/chroot/sendmail/dev/zero
The names are just aliases, the kernel doesn't care much about most names or locations, it cares about the major number so that it can select the correct driver, and the driver (usually) cares about the minor number so it can select the correct instance. Most names are simply convention (though some are defined by POSIX ). Note also that one driver may register multiple major numbers, check the sd driver in /proc/devices ; a driver module name ( .ko ) need not be the same as the device name, and need not be the same as the device node in /dev , and a single driver module may manage multiple logical/physical devices or device names.
To recap: you may have two or more device nodes (in /dev/ or elsewhere) which have the same major:minor numbers, but if they are the same type they refer to the same device. You can have one driver which can handle multiple major instances, but within the kernel and within the driver, for each type (char or block) the major:minor number is taken to refer to a specific device (major) and a specific instance (minor) of the device. You cannot have two device nodes with the same type and major:minor and expect them to access two different logical or physical devices. When a device is being accessed the kernel selects a driver based on the type and the major number (and not based on the device node name), and by convention the minor number deterministically selects a specific instance or sub-function. Update Some interesting history and some *BSD perspective can be found in Poul-Henning Kamp's 2002 BSDCon presentation: https://www.usenix.org/legacy/events/bsdcon/full_papers/kamp/kamp_html/ If you leap back in time to 1978 (courtesy of Alcatel-Lucent, the Bell System Technical Journal Jul-Aug 1978) the ' Unix Time Sharing System ' sets it out clearly (p1937): Devices are characterized by a major device number, a minor device number, and a class (block or character). For each class, there is an array of entry points into the device drivers. The major device number is used to index the array when calling the code for a particular device driver. The minor device number is passed to the device driver as an argument. The minor number has no significance other than that attributed to it by the driver. Usually, the driver uses the minor number to access one of several identical physical devices.
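If you want to read a node's numbers from a script rather than parse ls output, GNU stat can print them directly (a sketch; %t and %T are the major and minor numbers in hex):
    stat -c 'type=%F major=0x%t minor=0x%T' /dev/ram1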
{ "source": [ "https://unix.stackexchange.com/questions/124225", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/40151/" ] }
124,310
When I have a tmux window vertically split into two panes, how can I spawn a new third horizontal pane that stretches over the full width? e.g. How do I get from this: Ctrl-b %
+–––––––––+–––––––––+
|         |         |
|         |         |
|         |         |
|         |         |
|         |         |
+–––––––––+–––––––––+
to this? Ctrl-b % Ctrl-b ...now what?
+–––––––––+–––––––––+
|         |         |
|         |         |
|         |         |
|         |         |
|         |         |
+–––––––––+–––––––––+
|                   |
|                   |
+–––––––––––––––––––+
instead of this? Ctrl-b % Ctrl-b "
+––––––––+––––––––––+
|        |          |
|        |          |
|        |          |
|        |          |
|        |          |
|        +––––––––––+
|        |          |
|        |          |
+––––––––+––––––––––+
Note: I don't want to cycle through all possible layout combinations via Ctrl-b Space to eventually get to the desired layout - it should be achieved with as much brevity as possible.
You can use one of the five preset layout modes (tiled) to achieve this. From your starting point (a single vertical split), open a new pane, which by default will split the active pane, and then arrange the panes into tiled mode: Ctrl b , Alt 5 From man tmux : M-1 to M-5 Arrange panes in one of the five preset layouts: even-horizontal, even-vertical, main-horizontal, main-vertical, or tiled. You could optionally add a select-layout tiled to a keybind in your .tmux.conf if this was a layout you wanted regularly.
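If you want the tiled layout on a single keystroke, a sketch of a ~/.tmux.conf binding (the choice of T is arbitrary):
    bind-key T select-layout tiled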
{ "source": [ "https://unix.stackexchange.com/questions/124310", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/2407/" ] }
124,342
One of my servers is set up to automatically mount a Windows directory using fstab. However, after my last reboot it stopped working. The line in fstab is: //myserver/myfolder /mnt/backup cifs credentials=home/myfolder/.Smbcredentials The .Smbcredentials file is: username=myaccount password=mypassword domain=mydomain I do a mount -a and I receive mount error 13 = Permission denied . If I do this enough it will lock out my Windows account, so I know it's trying. I've checked that my password is correct. What am i doing wrong?
A couple of things to check out. I do something similar and you can test mount it directly using the mount command to make sure you have things set up right. Permissions on credentials file Make sure that this file is permissioned right. $ sudo ls -l /etc/smb_credentials.txt -rw-------. 1 root root 54 Mar 24 13:19 /etc/smb_credentials.txt Verbose mount You can coax more info out of mount using the -v switch which will oftentimes show you where things are getting tripped up. $ sudo mount -v -t cifs //server/share /mnt \ -o credentials=/etc/smb_credentials.txt Resulting in this output if it works: mount.cifs kernel mount options: ip=192.168.1.14,unc=\\server\share,credentials=/etc/smb_credentials.txt,ver=1,user=someuser,domain=somedom,pass=******** Check the logs After running the above mount command take a look inside your dmesg and /var/log/messages or /var/log/syslog files for any error messages that may have been generated when you attempted the mount . Type of security You can pass a lot of extra options via the -o .. switch to mount. These options are technology specific, so in your case they're applicable to mount.cifs specifically. Take a look at the mount.cifs man page for more on all the options you can pass. I would suspect you're missing an option to sec=... . Specifically one of these options:
sec= Security mode. Allowed values are:
· none - attempt to connection as a null user (no name)
· krb5 - Use Kerberos version 5 authentication
· krb5i - Use Kerberos authentication and forcibly enable packet signing
· ntlm - Use NTLM password hashing
· ntlmi - Use NTLM password hashing and force packet signing
· ntlmv2 - Use NTLMv2 password hashing
· ntlmv2i - Use NTLMv2 password hashing and force packet signing
· ntlmssp - Use NTLMv2 password hashing encapsulated in Raw NTLMSSP message
· ntlmsspi - Use NTLMv2 password hashing encapsulated in Raw NTLMSSP message, and force packet signing
The default in mainline kernel versions prior to v3.8 was sec=ntlm. In v3.8, the default was changed to sec=ntlmssp. You may need to adjust the sec=... option so that it's either sec=ntlm or sec=ntlmssp . References Thread: mount -t cifs results gives mount error(13): Permission denied
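Once a manual mount works, the matching fstab entry would look something like this (a sketch; note the absolute path to the credentials file and whichever sec= value proved to work):
    //myserver/myfolder /mnt/backup cifs credentials=/home/myfolder/.Smbcredentials,sec=ntlm 0 0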
{ "source": [ "https://unix.stackexchange.com/questions/124342", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/65040/" ] }
124,407
I used several colors in my bash PS1 prompt such as: \033]01;31\] # pink \033]00m\] # white \033]01;36\] # bold green \033]02;36\] # green \033]01;34\] # blue \033]01;33\] # bold yellow Where can I find a list of the color codes I can use? I looked at Colorize Bash Console Color but it didn't answer my question about a list of the actual codes. It would be nice if there was a more readable form also. See also: How can I get my PS1 prompt to show time, user, host, directories, and Git branch
Those are ANSI escape sequences ; that link is to a chart of color codes but there are other interesting things on that Wikipedia page as well. Not all of them work on (e.g.) a normal Linux console. This is incorrect: \033]00m\] # white 0 resets the terminal to its default (which is probably white). The actual code for white foreground is 37. Also, the escaped closing brace at the end ( \] ) is not part of the color sequence (see the last few paragraphs below for an explanation of their purpose in setting a prompt). Note that some GUI terminals allow you to specify a customized color scheme. This will affect the output. There's a list here which adds 7 foreground and 7 background colors I had not seen before, but they seem to work:
# Foreground colors
90 Dark gray
91 Light red
92 Light green
93 Light yellow
94 Light blue
95 Light magenta
96 Light cyan
# Background colors
100 Dark gray
101 Light red
102 Light green
103 Light yellow
104 Light blue
105 Light magenta
106 Light cyan
In addition, if you have a 256 color GUI terminal (I think most of them are now), you can apply colors from the standard 256-color chart [chart image not reproduced here]. The ANSI sequence to select these, using the number in the bottom left corner of each chart cell, starts 38;5; for the foreground and 48;5; for the background, then the color number, so e.g.: echo -e "\\033[48;5;95;38;5;214mhello world\\033[0m" Gives me a light orange on tan (meaning, the color chart is roughly approximated). You can see the colors in this chart [1] as they would appear on your terminal fairly easily:
#!/bin/bash
color=16
while [ $color -lt 245 ]; do
    echo -e "$color: \\033[38;5;${color}mhello\\033[48;5;${color}mworld\\033[0m"
    ((color++))
done
The output is self-explanatory. Some systems set the $TERM variable to xterm-256color if you are on a 256 color terminal via some shell code in /etc/profile . On others, you should be able to configure your terminal to use this. That will let TUI applications know there are 256 colors, and allow you to add something like this to your ~/.bashrc : if [[ "$TERM" =~ 256color ]]; then PS1="MyCrazyPrompt..." fi Beware that when you use color escape sequences in your prompt, you should enclose them in escaped ( \ prefixed) square brackets, like this: PS1="\[\033[01;32m\]MyPrompt: \[\033[0m\]" Notice the [ 's interior to the color sequence are not escaped, but the enclosing ones are. The purpose of the latter is to indicate to the shell that the enclosed sequence does not count toward the character length of the prompt. If that count is wrong, weird things will happen when you scroll back through the history, e.g., if it is too long, the excess length of the last scrolled string will appear attached to your prompt and you won't be able to backspace into it (it's ignored the same way the prompt is). Also note that if you want to include the output of a command run every time the prompt is used (as opposed to just once when the prompt is set), you should set it as a literal string with single quotes, e.g.: PS1='\[\033[01;32m\]$(date): \[\033[0m\]' Although this is not a great example if you are happy with using bash's special \d or \D{format} prompt escapes -- which are not the topic of the question but can be found in man bash under PROMPTING . There are various other useful escapes such as \w for current directory, \u for current user, etc. [1] The main portion of this chart, colors 16 - 231 (notice they are not in numerical order), forms a 6 x 6 x 6 RGB color cube.
"Color cube" refers to the fact that an RGB color space can be represented using a three dimensional array (with one axis for red, one for green, and one for blue). Each color in the cube here can be represented as coordinates in a 6 x 6 x 6 array, and the index in the chart calculated thusly: 16 + R * 36 + G * 6 + B The first color in the cube, at index 16 in the chart, is black (RGB 0, 0, 0). You could use this formula in shell script: #!/bin/sh function RGBcolor { echo "16 + $1 * 36 + $2 * 6 + $3" | bc } fg=$(RGBcolor 1 0 2) # Violet bg=$(RGBcolor 5 3 0) # Bright orange. echo -e "\\033[1;38;5;$fg;48;5;${bg}mviolet on tangerine\\033[0m"
{ "source": [ "https://unix.stackexchange.com/questions/124407", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/10043/" ] }
124,410
When I do a yum update or apt-get update , my machine is hitting several servers and downloading several packages. I would imagine that those servers are handling millions of similar requests on a daily basis. Who pays for the maintenance, existence, bandwidth of those servers? If the answer depends on the distro, then CentOS, Arch and Ubuntu would be good examples. I am wondering about this because I am using these free operating systems and I am consuming bandwidth, but I have not paid anybody for this privilege.
I would assume most distros accept individual private donations (they may also accept free hosting). However, that is probably not the bulk of their financing in most cases. Note that some of the major distros may have some paid staff, and possibly also office space, the cost of which likely exceeds that of hosting the repos [1]. This should not be taken to mean that they are not primarily volunteer based (except for the commercial variants, they are), just that they do have operating budgets. Fedora is owned by Redhat, and the latter is a publicly traded, billion dollar annual business. I would presume they do quite a bit to help support the former. According to wikipedia , CentOS is now also owned by Redhat and earlier this year Redhat announced their ongoing sponsorship of CentOS development. Ubuntu is owned by Canonical , which I do not think is on a par with Redhat, but they probably still have revenues into the tens of millions USD per year. Last time I downloaded an image, Ubuntu was pretty aggressive about encouraging you to make a small donation at the same time. $5 a year would I think cover the costs of repo hosting associated with the average installation. The Debian project has been around for nearly 20 years and surely has a substantial core of users willing to help support it. They also have a list of "partners" here which provide them with resources. I would think Canonical helps out significantly, since Ubuntu is reliant upon Debian, but judging from this link provided in Kiwi's answer, they are still having to beg publicly for $250K to cover meeting costs, which is pretty disappointing. Arch is likely much poorer than the other distros mentioned here, but they may still collect enough money from various sources to support some development staff and hosting. They do not appear to obviously solicit on their site, so I would guess this funding comes mostly from industry (and possibly, government) grants. [1] To get some idea of how much this hosting would actually cost, consider that GNU/Linux systems probably account for 1-2% of desktop systems worldwide and at least 40% of web servers . If we then assume this might amount to ~25 million systems, and that a large (theoretical) distro accounted for 10% of those with each user averaging 4 MB a day over time, this would amount to 10 TB/day. I would think if you know the right people, you could perhaps get 3000 TB/month for <$5000 US.
{ "source": [ "https://unix.stackexchange.com/questions/124410", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/65082/" ] }
124,444
I'd like a way to add things to $PATH, system-wide or for an individual user, without potentially adding the same path multiple times. One reason to want to do this is so that additions can be made in .bashrc , which does not require a login, and is also more useful on systems which use (e.g.) lightdm , which never calls .profile . I am aware of questions dealing with how to clean duplicates from $PATH, but I do not want to remove duplicates . I would like a way to add paths only if they are not already present.
Suppose that the new path that we want to add is: new=/opt/bin Then, using any POSIX shell, we can test to see if new is already in the path and add it if it isn't: case ":${PATH:=$new}:" in *:"$new":*) ;; *) PATH="$new:$PATH" ;; esac Note the use of colons. Without the colons, we might think that, say, new=/bin was already in the path because it pattern matched on /usr/bin . While PATHs normally have many elements, the special cases of zero and one elements in the PATH is also handled. The case of the PATH initially having no elements (being empty) is handled by the use of ${PATH:=$new} which assigns PATH to $new if it is empty. Setting default values for parameters in this way is a feature of all POSIX shells: see section 2.6.2 of the POSIX docs .) A callable function For convenience, the above code can be put into a function. This function can be defined at the command line or, to have it available permanently, put into your shell's initialization script (For bash users, that would be ~/.bashrc ): pupdate() { case ":${PATH:=$1}:" in *:"$1":*) ;; *) PATH="$1:$PATH" ;; esac; } To use this path update function to add a directory to the current PATH: pupdate /new/path
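Calling it twice shows the duplicate protection (usage sketch):
    pupdate /opt/bin
    pupdate /opt/bin    # no-op: /opt/bin is already in PATH
    echo "$PATH"        # /opt/bin appears exactly once, at the front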
{ "source": [ "https://unix.stackexchange.com/questions/124444", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/25985/" ] }
124,455
I have a program I need to run at startup; it has output on stdout and stderr that I want to redirect to the system log using the logger command. What I have in my startup script is this: /home/dirname/application_name -v|logger 2>&1 & This is redirecting the stdout to syslog just fine but stderr is coming to the console, so I need to refine the command.
You need to combine the output of STDERR and STDOUT prior to piping it to logger . Try this instead: /home/dirname/application_name -v 2>&1 | logger & Example
$ echo "hi" 2>&1 | logger &
[1] 26818
[1]+ Done    echo "hi" 2>&1 | logger
$ sudo tail /var/log/messages
Apr 12 17:53:57 greeneggs saml: hi
You can use the abbreviated notation here as well, if used cautiously in an actual Bash shell (not to be confused with Dash): $ echo "hi" |& logger & NOTE: This is equivalent to <cmd1> 2>&1 | <cmd2> . Again, only using the above when you are in an actual interactive Bash shell would be a good way to approach it. excerpt from ABSG # |& was added to Bash 4 as an abbreviation for 2>&1 |. References Advanced Bash Scripting Guide - Chapter 20. I/O Redirection
{ "source": [ "https://unix.stackexchange.com/questions/124455", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/42163/" ] }
124,462
If I do: $ ls -R .: 4Shared/ Cloud/ ./4Shared: UFAIZLV2R7.part3.rar ./Cloud: UFAIZLV2R7.part2.rar.part UFAIZLV2R7.part1.rar.part UFAIZLV2R7.part4.rar.part If I want to list the .rar files only, and I use grep , it will also show me the .rar.part files, which is not what I want. I am solving this using find or ls **/*.rar as told in this thread and they work fine, but I would like to learn if it is possible to do it via grep . I have tried (thinking about EOL ): ls -R | grep ".rar\n" with no results. I think the problem lies in discovering whether the match is at the end of the line , but I am not sure. Any help out here, please?
The $ anchor matches the end of a line. ls -R | grep '\.rar$' You can also use find for this: find . -name '*.rar'
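Applied to the listing in the question, only the complete archive should match (file names taken from the question):
    $ ls -R | grep '\.rar$'
    UFAIZLV2R7.part3.rar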
{ "source": [ "https://unix.stackexchange.com/questions/124462", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/57439/" ] }
124,518
What are the differences between dc and bc calculators? When should I use dc and when bc ?
dc is a very archaic tool and somewhat older than bc . To quote the Wikipedia page : It is one of the oldest Unix utilities, predating even the invention of the C programming language; like other utilities of that vintage, it has a powerful set of features but an extremely terse syntax. The syntax is a reverse polish notation, which basically means that the arguments (i.e. numbers) come first followed by the operator. A basic example of the dc usage is: echo '3 4 * p' | dc Where the p is required to print the result of the calculation. bc on the other hand uses the more familiar infix notation and thus is more intuitive to use. Here is an example of bc usage: echo '3 * 4' | bc Which one to use? bc is standardised by POSIX and so is probably the more portable of the two (at least on modern systems). If you are doing manual calculator work then it is definitely the choice (unless you are somewhat of a masochist). dc can still have its uses though, here is a case where the reverse polish notation comes in handy. Imagine you have a program which outputs a stream of numbers that you want to total up, e.g.: 23 7 90 74 29 To do this with dc is very simple (at least with modern implementations where each operator can take more than two numbers) since you only have to append a +p to the stream, e.g.: { gen_nums; echo +p; } | dc But with bc it is more complex since we not only need to put a + between each number and make sure everything is on the same line, but also make sure there is a newline at the end: { gen_nums | sed '$ !s/$/+/' | tr -d '\n'; echo; } | bc
{ "source": [ "https://unix.stackexchange.com/questions/124518", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/20334/" ] }
124,590
I was reading a bash script someone made and I noticed that the author doesn't use eval to evaluate a variable as a command The author used bash -c "$1" instead of eval "$1" I assume using eval is the preferred method and it's probably faster anyway. Is that true? Is there any practical difference between the two? What are notable differences between the two?
eval "$1" executes the command in the current script. It can set and use shell variables from the current script, set environment variables for the current script, set and use functions from the current script, set the current directory, umask, limits and other attributes for the current script, and so on. bash -c "$1" executes the command in a completely separate script, which inherits environment variables, file descriptors and other process environment (but does not transmit any change back) but does not inherit internal shell settings (shell variables, functions, options, traps, etc.). There is another way, (eval "$1") , which executes the command in a subshell: it inherits everything from the calling script but does not transmit any change back. For example, assuming that the variable dir isn't exported and $1 is cd "$foo"; ls , then: cd /starting/directory; foo=/somewhere/else; eval "$1"; pwd lists the content of /somewhere/else and prints /somewhere/else . cd /starting/directory; foo=/somewhere/else; (eval "$1"); pwd lists the content of /somewhere/else and prints /starting/directory . cd /starting/directory; foo=/somewhere/else; bash -c "$1"; pwd lists the content of /starting/directory (because cd "" doesn't change the current directory) and prints /starting/directory .
{ "source": [ "https://unix.stackexchange.com/questions/124590", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/65149/" ] }
124,681
How do I setup ssh from the host to the guest using qemu? I am able to use port redirection when I boot the VM without any special parameters, as follows: /usr/bin/qemu-system-x86_64 -hda ubuntu1204 -m 512 -redir tcp:7777::8001 But when I try to boot using the following: /usr/bin/qemu-system-x86_64 \ -m 1024 \ -name vserialtest \ -hda ubuntu1204 \ -chardev socket,host=localhost,port=7777,server,nowait,id=port1-char \ -device virtio-serial \ -device virtserialport,id=port1,chardev=port1-char,name=org.fedoraproject.port.0 \ -net user,hostfwd=tcp:7777::8001 I get the following error and the VM does not boot: qemu-system-x86_64: -net user,hostfwd=tcp:7777::8001: invalid host forwarding rule 'tcp:7777::8001' qemu-system-x86_64: -net user,hostfwd=tcp:7777::8001: Device 'user' could not be initialized Please note that I am able to boot the VM without the -net parameter without any issues, however, I want to setup ssh from the host to the guest. ssh from guest to host works fine as expected. Edit I have tried using -net user,hostfwd=tcp::7777-:8001 as well as -net user,hostfwd=tcp::7777:8001 but still the error persists and the VM does not boot.
I think that the error does not come from the -net statement, but from: -chardev socket,host=localhost,port=7777,server,nowait,id=port1-char The statement already uses port 7777 . The port forwarding itself, with -net user,hostfwd=tcp::7777-:8001 works fine when not setting up the virtio serial channel. If I understand right, you want to set up a virtio serial channel to communicate from the host to the VM using a Unix Domain Socket? In this case, the following could do the job: /usr/bin/qemu-system-x86_64 \ -m 1024 \ -name vserialtest \ -hda ubuntu1204 \ -chardev socket,path=/tmp/port1,server=on,wait=off,id=port1-char \ -device virtio-serial \ -device virtserialport,id=port1,chardev=port1-char,name=org.fedoraproject.port.0 \ -net user,hostfwd=tcp::7777-:8001 An example of how to connect from the host using ssh to the VM: -net user,hostfwd=tcp::10022-:22 -net nic This host-forwarding maps the localhost (host) port 10022 to port 22 on the VM. Once the VM is started like this, you can access it from the localhost as follows: ssh vmuser@localhost -p10022 The -net nic command initializes a very basic virtual network interface card.
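From the host you can then attach to the guest's virtio serial port through that Unix socket, for example with socat (a sketch; assumes socat is installed):
    socat - UNIX-CONNECT:/tmp/port1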
{ "source": [ "https://unix.stackexchange.com/questions/124681", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/29330/" ] }
124,757
How can I update the cache / index of locate ? I installed new packages and the files are clearly not yet indexed. So which command do I have to run in order to trigger the indexer? I'm currently working on debian jessie (testing): with Linux mbpc 3.13-1-amd64 #1 SMP Debian 3.13.7-1 (2014-03-25) x86_64 GNU/Linux
The command is: sudo updatedb See man updatedb for more details.
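A minimal usage sketch (locate -S is specific to the mlocate implementation, the Debian default; the file name is hypothetical):
    sudo updatedb
    locate newly-installed-file
    locate -S    # print statistics about the freshly rebuilt database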
{ "source": [ "https://unix.stackexchange.com/questions/124757", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/65400/" ] }
124,762
I was recently trying to learn more about how the shell works and was looking at how the clear command works. The executable is located in /usr/bin/clear and it seems to print out a bunch of blank lines (equal to the height of the terminal) and puts the cursor at the top-left of the terminal. The output of the command is always the same, regardless of the size of the terminal:
$ clear | hexdump -C
00000000  1b 5b 48 1b 5b 32 4a    |.[H.[2J|
00000007
and can be replicated with echo , with the exact same effect: $ /bin/echo -e "\x1b\x5b\x48\x1b\x5b\x32\x4a\c" I was really curious how the output of this command translates to clearing the console.
The output of the clear command is console escape codes. The exact codes required depend on the exact terminal you are using, however most use ANSI control sequences. Here is a good link explaining the various codes - http://www.termsys.demon.co.uk/vtansi.htm . The relevant snippets are: Cursor Home <ESC>[{ROW};{COLUMN}H Sets the cursor position where subsequent text will begin. If no row/column parameters are provided (ie. <ESC>[H), the cursor will move to the home position, at the upper left of the screen. And: Erase Screen <ESC>[2J Erases the screen with the background colour and moves the cursor to home. Where <ESC> is hex 1B or octal 033 . Another way to view the characters is with: clear | sed -n l
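You can reproduce the effect by hand, or ask the terminfo database for your terminal's own clear string (a sketch):
    printf '\033[H\033[2J'   # cursor home, then erase screen -- the same bytes as above
    tput clear               # looks up the appropriate sequence for $TERM in terminfo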
{ "source": [ "https://unix.stackexchange.com/questions/124762", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/65239/" ] }
124,811
Does the command line have a way to get a recommended list of programs used to open a particular file, based on the file type? For example, a .pdf file would have an open with... recommendation using the programs Evince and Document Viewer . I use the command line for most things, but sometimes I forget the name of a program that I want to use to open a particular type of file. BTW I am using Ubuntu 13.10. pro-tip Thanks to @slm 's selected answer below, I made the following bash script in a file called openwith.sh : xdg-mime query default $(xdg-mime query filetype $1) Add as an alias or execute directly as an openwith command.
There isn't a command that I've ever seen that will act as "open with..." but you can use the command xdg-open <file> to open a given <file> in the application that's associated with that particular type of file. Examples Opening a text file: $ xdg-open tstfile.txt $ Resulting in the file tstfile.txt being opened in gedit : Opening a LibreOffice Writer document: $ xdg-open tstfile.odt $ Resulting in the file tstfile.odt being opened in Writer: What apps get used? You can use xdg-mime to query the system to find out what applications are associated to a given file type. $ xdg-mime query default $(xdg-mime query filetype tstfile.txt) gedit.desktop calibre-ebook-viewer.desktop $ xdg-mime query default $(xdg-mime query filetype tstfile.odt) libreoffice-writer.desktop calibre-ebook-viewer.desktop This is a 2 step operation. First I'm querying for the mime-type of a given file, xdg-mime query filetype tstfile.txt , which will return text/plain . This is then used to perform another lookup to find out the list of applications that are associated with this mime-type. As you can see above I have 2 apps associated, gedit and calibre , for .txt files. You can use xdg-mime to change the associations too. See man xdg-mime for more details.
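As a side note on the pro-tip script in the question: quoting both the argument and the command substitution makes it safe for filenames containing spaces. A sketch: xdg-mime query default "$(xdg-mime query filetype "$1")"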
{ "source": [ "https://unix.stackexchange.com/questions/124811", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/59802/" ] }
124,816
For those out of the loop, sl is a humorous command line tool that is meant to trip people up if they mistype ls . When invoked it prints a Steam Locomotive. For example: ( ) (@@) ( ) (@) () @@ O @ O @ O (@@@) ( ) (@@@@) ( ) ==== ________ ___________ _D _| |_______/ \__I_I_____===__|_________| |(_)--- | H\________/ | | =|___ ___| _________________ / | | H | | | | ||_| |_|| _| \_____A | | | H |__--------------------| [___] | =| | | ________|___H__/__|_____/[][]~\_______| | -| | |/ | |-----------I_____I [][] [] D |=======|____|________________________|_ __/ =| o |=-O=====O=====O=====O \ ____Y___________|__|__________________________|_ |/-=|___|= || || || |_____/~\___/ |_D__D__D_| |_D__D__D_| \_/ \__/ \__/ \__/ \__/ \_/ \_/ \_/ \_/ \_/ However, in the man page for sl , it states the following bug: BUGS It rarely shows contents of current directory. So, the question remains, are there some conditions under which sl actually does show the current directory?
As far as I know, the only condition under which sl shows the current directory is when you mistype it as ls .
{ "source": [ "https://unix.stackexchange.com/questions/124816", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/-1/" ] }
124,855
On Linux (Debian, Ubuntu, Mint...), is there any option, command or something that I can use to transfer files to another user without having to do: sudo mv /home/poney/folderfulloffiles /home/unicorn/ sudo chown -R unicorn:unicorn /home/unicorn/folderfulloffiles
Use rsync(1) : rsync -a \ --remove-source-files \ --chown=unicorn:unicorn \ /home/poney/folderfulloffiles /home/unicorn/ (The -a flag is needed so rsync recurses into the directory and preserves its contents; without it, rsync skips directories entirely.)
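Note that --chown only appeared in rsync 3.1.0, and changing ownership generally requires root privileges, so in practice the call likely needs sudo . On older rsync versions, a sketch of a fallback is the two-step approach from the question: sudo rsync -a --remove-source-files /home/poney/folderfulloffiles /home/unicorn/ followed by sudo chown -R unicorn:unicorn /home/unicorn/folderfulloffiles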
{ "source": [ "https://unix.stackexchange.com/questions/124855", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/53092/" ] }
124,878
So... I have two monitors on my Ubuntu machine. And every time I launch some Windows OpenGL application under Wine it turns off the second monitor. And leaves it turned off when the application exits. I wonder, is there a shell command which will instantly turn the second monitor on?
The xrandr command is the one you are looking for. An example usage is: xrandr --output HDMI1 --auto --same-as LVDS1 You can have --left-of , --right-of . Run xrandr on its own to see the different outputs that are available.
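For instance, assuming the second monitor shows up as HDMI1 and the primary panel as LVDS1 (check the plain xrandr listing first, since output names vary), you could turn it off and back on like this: xrandr --output HDMI1 --off xrandr --output HDMI1 --auto --right-of LVDS1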
{ "source": [ "https://unix.stackexchange.com/questions/124878", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/14265/" ] }
124,918
I am using curl to upload a file to a server via an HTTP post. curl -X POST [email protected] server-URL When I manually execute this command on the command line, I get a response from the server like "Upload successful" . However, if I want to execute this curl command via a script, how can I find out if my POST request was successful?
The simplest way is to store the response and compare it: $ response=$(curl -X POST [email protected] server-URL); $ if [ "Upload successful" == "${response}" ]; then … fi; I haven't tested that. The syntax might be off, but that's the idea. I'm sure there are more sophisticated ways of doing it, such as checking curl's exit code. Update: curl returns quite a few exit codes. I'm guessing a failed post might result in 55 Failed sending network data. So you could probably just make sure the exit code was zero by comparing to $? ( Expands to the exit status of the most recently executed foreground pipeline. ): $ curl -X POST [email protected] server-URL; $ if [ 0 -eq $? ]; then … fi; Or if your command is relatively short and you want to do something when it fails, you could rely on the exit code as the condition in a conditional statement: $ if curl --fail -X POST [email protected] server-URL; then # …(success) else # …(failure) fi; I think this format is often preferred , but personally I find it less readable.
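Another sketch checks the HTTP status code instead of the response body, using curl's -w option (this assumes the server answers 200 on a successful upload): status=$(curl -s -o /dev/null -w '%{http_code}' -X POST [email protected] server-URL); if [ "$status" -eq 200 ]; then echo "upload ok"; fi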
{ "source": [ "https://unix.stackexchange.com/questions/124918", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/59877/" ] }
124,947
I was researching the other question , when I realized I don't understand what's happening under the hood, what are those /dev/fd/* files and how come child processes can open them.
Well, there are many aspects to it. File descriptors For each process, the kernel maintains a table of open files (well, it might be implemented differently, but since you are not able to see it anyway, you can just assume it's a simple table). That table contains information about which file it is/where it can be found, in which mode you opened it, at which position you are currently reading/writing, and whatever else is needed to actually perform I/O operations on that file. Now the process never gets to read (or even write) that table. When the process opens a file, it gets back a so-called file descriptor, which is simply an index into the table. The directory /dev/fd and its content On Linux /dev/fd is actually a symbolic link to /proc/self/fd . /proc is a pseudo file system in which the kernel maps several internal data structures to be accessed with the file API (so they just look like regular files/directories/symlinks to the programs). In particular, there's information about all processes (which is what gave it the name). The symbolic link /proc/self always refers to the directory associated with the currently running process (that is, the process requesting it; different processes therefore will see different values). In the process's directory, there's a subdirectory fd which for each open file contains a symbolic link whose name is just the decimal representation of the file descriptor (the index into the process's file table, see the previous section), and whose target is the file it corresponds to. File descriptors when creating child processes A child process is created by a fork . A fork makes a copy of the file descriptors, which means that the child process created has the very same list of open files as the parent process does. So unless one of the open files is closed by the child, accessing an inherited file descriptor in the child will access the very same file as accessing the original file descriptor in the parent process. Note that after a fork, you initially have two copies of the same process which differ only in the return value from the fork call (the parent gets the PID of the child, the child gets 0). Normally, a fork is followed by an exec to replace one of the copies by another executable. The open file descriptors survive that exec. Note also that before the exec, the process can do other manipulations (like closing files that the new process should not get, or opening other files). Unnamed pipes An unnamed pipe is just a pair of file descriptors created on request by the kernel, so that everything written to the first file descriptor is passed to the second. The most common use is for the piping construct foo | bar of bash , where the standard output of foo is replaced by the write part of the pipe, and the standard input of bar is replaced by the read part. Standard input and standard output are just the first two entries in the file table (entry 0 and 1; 2 is standard error), and therefore replacing one of them means just rewriting that table entry with the data corresponding to the other file descriptor (again, the actual implementation may differ). Since the process cannot access the table directly, there's a kernel function to do that. Process substitution Now we have everything together to understand how process substitution works: The bash process creates an unnamed pipe for communication between the two processes created later. Bash forks for the echo process.
The child process (which is an exact copy of the original bash process) closes the reading end of the pipe and replaces its own standard output with the writing end of the pipe. Given that echo is a shell builtin, bash might spare itself the exec call, but it doesn't matter anyway (the shell builtin might also be disabled, in which case it execs /bin/echo ). Bash (the original, parent one) replaces the expression <(echo 1) by the pseudo file link in /dev/fd referring to the reading end of the unnamed pipe. Bash forks for the PHP process (note that after the fork, we are still inside [a copy of] bash). The new process closes the inherited write end of the unnamed pipe (and does some other preparatory steps), but leaves the read end open. Then it executes PHP. The PHP program receives the name in /dev/fd/ . Since the corresponding file descriptor is still open, it still corresponds to the reading end of the pipe. Therefore if the PHP program opens the given file for reading, what it actually does is create a second file descriptor for the reading end of the unnamed pipe. But that's no problem, it can read from either. Now the PHP program can read the reading end of the pipe through the new file descriptor, and thus receive the standard output of the echo command which goes to the writing end of the same pipe.
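You can watch the mechanism from an interactive bash session; the descriptor number (63 here) is just whatever bash happened to pick and may differ on your system: $ echo <(echo 1) /dev/fd/63 $ cat <(echo 1) 1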
{ "source": [ "https://unix.stackexchange.com/questions/124947", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/29867/" ] }
125,132
I'm trying to create a bunch of symbolic links, but I can't figure out why this is working ln -s /Users/niels/something/foo ~/bin/foo_link while this cd /Users/niels/something ln -s foo ~/bin/foo_link is not. I believe it has something to do with foo_link linking to foo in /Users/niels/bin instead of /Users/niels/something So the question is, how do I create a symbolic link that points to an absolute path, without actually typing it? For reference, I am using Mac OS X 10.9 and Zsh.
The easiest way to link to the current directory as an absolute path, without typing the whole path string would be ln -s "$(pwd)/foo" ~/bin/foo_link The target (first) argument for the ln -s command works relative to the symbolic link's location, not your current directory. It helps to know that, essentially, the created symlink (the second argument) simply holds the text you provide for the first argument. Therefore, if you do the following: cd some_directory ln -s foo foo_link and then move that link around mv foo_link ../some_other_directory ls -l ../some_other_directory you will see that foo_link tries to point to foo in the directory it is residing in. This also works with symbolic links pointing to relative paths. If you do the following: ln -s ../foo yet_another_link and then move yet_another_link to another directory and check where it points to, you'll see that it always points to ../foo . This is the intended behaviour, since many times symbolic links might be part of a directory structure that can reside in various absolute paths. In your case, when you create the link by typing ln -s foo ~/bin/foo_link foo_link just holds a link to foo , relative to its location. Putting $(pwd) in front of the target argument's name simply adds the current working directory's absolute path, so that the link is created with an absolute target.
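You can confirm what text the link actually stores with readlink (the output assumes the asker's /Users/niels/something directory): $ readlink ~/bin/foo_link /Users/niels/something/foo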
{ "source": [ "https://unix.stackexchange.com/questions/125132", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/40820/" ] }
125,140
A typical interaction for a program I've written might look like this: Enter command: a_command Completed a command Enter command: another_command Completed another command I typically run my program like ./program < input.txt , where input.txt would contain: a_command another_command I want to be able to capture the entire interaction (not just the output) like above. How can I do this with bash? EDIT: program is a binary (specifically, it's in C++), not a bash script. I have access to the source code, but I'd like to do this without having to modify the source code.
{ "source": [ "https://unix.stackexchange.com/questions/125140", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/65450/" ] }
125,179
I want to know what exit 99 is, why one would use it, and what its significant uses are. For example, I'm using exit 99 .
There is no significance to exiting with code 99, other than there is perhaps in the context of a specific program. Either way, exit exits the shell with a certain exit code, in this case, 99. You can find more information in help exit : exit: exit [n] Exit the shell. Exits the shell with a status of N. If N is omitted, the exit status is that of the last command executed.
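A quick demonstration, using the special parameter $? , which expands to the exit status of the last command: $ bash -c 'exit 99' $ echo $? 99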
{ "source": [ "https://unix.stackexchange.com/questions/125179", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/40921/" ] }
125,183
The Question: I plugged in a device (i.e. a GSM modem) through a serial port (a.k.a. RS-232), and I need to see which file in the /dev/ filesystem this device was tied to, to be able to communicate with it. Unfortunately there is no newly created file in /dev/ , nor can anything be seen in the dmesg output. So this seems to be a hard question. Background: I had never worked with a serial device, so yesterday, when the need appeared, I tried to Google it but couldn't find anything helpful. I spent a few hours searching, and I want to share the answer I found, as it could be helpful for someone.
Unfortunately serial ports are non-Plug'n'Play, so the kernel doesn't know which device was plugged in. After reading a HowTo tutorial I got a working idea. The /dev/ directory of Unix-like OSes contains files named ttySn (with n being a number) . Most of them don't correspond to existing devices. To find out which ones do, issue the command: $ dmesg | grep ttyS [ 0.872181] 00:06: ttyS0 at I/O 0x3f8 (irq = 4) is a 16550A [ 0.892626] 00:07: ttyS1 at I/O 0x2f8 (irq = 3) is a 16550A [ 0.915797] 0000:01:01.0: ttyS4 at I/O 0x9800 (irq = 19) is a ST16650V2 [ 0.936942] 0000:01:01.1: ttyS5 at I/O 0x9c00 (irq = 18) is a ST16650V2 Above is example output from my PC. You can see the initialization of a few serial ports: ttyS0 , ttyS1 , ttyS4 , ttyS5 . One of them is going to show a positive voltage on its control lines once a device is plugged in. So by comparing the content of the file /proc/tty/driver/serial with and without the device plugged in, we can easily find the ttyS related to our device. So, now do: $ sudo cat /proc/tty/driver/serial > /tmp/1 (un)plug the device $ sudo cat /proc/tty/driver/serial > /tmp/2 Next check the difference between the two files. Below is the output from my PC: $ diff /tmp/1 /tmp/2 2c2 < 0: uart:16550A port:000003F8 irq:4 tx:6 rx:0 --- > 0: uart:16550A port:000003F8 irq:4 tx:6 rx:0 CTS|DSR By comparing the port and IRQ numbers with the dmesg output we can determine which one is the port: [ 0.872181] 00:06: ttyS0 at I/O 0x3f8 ( irq = 4 ) is a 16550A Hence, our device is /dev/ttyS0 , mission accomplished!
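To actually talk to the modem once found, one quick sketch is a terminal program such as screen (this assumes the device ended up on /dev/ttyS0 ; the 9600 baud rate is only a guess, check your modem's documentation): $ sudo screen /dev/ttyS0 9600 Then type AT and press Enter; a GSM modem should answer OK .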
{ "source": [ "https://unix.stackexchange.com/questions/125183", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/59928/" ] }
125,343
I would like to understand the term "system call". I am aware that system calls are used to get kernel services from a userspace application. The part I need clarification on is the difference between a "system call" and a "C implementation of the system call". Here is a quote that confuses me: On Unix-like systems, that API is usually part of an implementation of the C library (libc), such as glibc, that provides wrapper functions for the system calls, often named the same as the system calls that they call What are the "system calls that they call"? Where is their source? Can I include them directly in my code? Is a "system call", in the generic sense, just a POSIX-defined interface, such that to actually see the implementation one could examine the C source and see how the userspace-to-kernel communication actually happens? Background note: I'm trying to understand if, in the end, each C function ends up interacting with devices from /dev .
System calls per se are a concept. They represent actions that processes can ask the kernel to perform. Those system calls are implemented in the kernel of the UNIX-like system. This implementation (written in C, and in asm for small parts) actually performs the action in the system. Then, processes use an interface to ask the system for the execution of the system calls. This interface is specified by POSIX. It is a set of functions in the C standard library. They are actually wrappers; they may perform some checks and then call a system-specific function in the kernel that tells it to do the actions required by the system call. And the trick is that those functions which form the interface are named the same as the system calls themselves and are often referred to directly as "the system calls". You could call the function in the kernel that performs the system call directly through the system-specific mechanism. The problem is that it makes your code absolutely not portable. So, a system call is: a concept, a sequence of actions performed by the kernel to offer a service to a user process the function of the C standard library you should use in your code to get this service from the kernel.
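On Linux you can observe this boundary with strace , which prints each system call the libc wrapper functions end up making (the output lines shown are illustrative): $ strace -e trace=write /bin/echo hello write(1, "hello\n", 6) = 6 hello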
{ "source": [ "https://unix.stackexchange.com/questions/125343", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/29436/" ] }
125,346
My last rkhunter scan reported a couple of warnings that deserve to be checked. The main reason for my suspicion is that I wasn't on the machine at that time (03-Apr-2014 01:12:12 AM). I googled to understand the purpose of the 2 files mentioned in the question title, but I didn't find very helpful answers. Can anybody tell me what the aim of those files is, and maybe also why/when they would be modified by the system itself? [10:17:11] Warning: The file properties have changed: [10:17:11] File: /usr/sbin/sshd [10:17:11] Current hash: 900e153506754ceb7b19f3a01a3ad5e36d43d958 [10:17:11] Stored hash : 55a1a63a46d84eb9d0322f96bd9a61f070e90698 [10:17:11] Current inode: 149998 Stored inode: 142248 [10:17:11] Current file modification time: 1396480332 (03-Apr-2014 01:12:12) [10:17:11] Stored file modification time : 1360359087 (08-Feb-2013 22:31:27) [10:17:34] Warning: The file properties have changed: [10:17:34] File: /usr/bin/ssh [10:17:34] Current hash: 60366d414c711a70f9e313f5ff26213ca513b565 [10:17:34] Stored hash : 1b410fb0de841737f963e1ee011989f155f41259 [10:17:34] Current inode: 150030 Stored inode: 142203 [10:17:34] Current file modification time: 1396480332 (03-Apr-2014 01:12:12) [10:17:34] Stored file modification time : 1360359087 (08-Feb-2013 22:31:27) The apt log files make me worry; I censored a couple of pieces of info. Apparently on 03-Apr-2014 I didn't install anything. Start-Date: 2014-04-01 15:49:18 Commandline: *********** Install: *********** End-Date: 2014-04-01 15:49:29 Start-Date: 2014-04-08 14:03:52 Commandline: *********** Install: *********** End-Date: 2014-04-08 14:04:04 By the way, I think (hope) they are false positives [edit: not anymore]. Maybe the files were edited by some process of the system and normally not recorded in rkhunter's .dat file because I didn't update it. I came here to find some confirmation or some more paranoia.
{ "source": [ "https://unix.stackexchange.com/questions/125346", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/55995/" ] }
125,385
is there any way (what is the easiest way in bash) to combine the following: mkdir foo cd foo The manpage for mkdir does not describe anything like that, maybe there is a fancy version of mkdir ? I know that cd has to be shell builtin, so the same would be true for the fancy mkdir ... Aliasing?
Function? mkcdir () { mkdir -p -- "$1" && cd -P -- "$1" } Put the above code in the ~/.bashrc , ~/.zshrc or another file sourced by your shell. Then source it by running e.g. source ~/.bashrc to apply changes. After that simply run mkcdir foo or mkcdir "nested/path/in quotes" . Notes: "$1" is the first argument of the mkcdir command. Quotes around it protects the argument if it has spaces or other special characters. -- makes sure the passed name for the new directory is not interpreted as an option to mkdir or cd , giving the opportunity to create a directory that starts with - or -- . -p used on mkdir makes it create extra directories if they do not exist yet, and -P used makes cd resolve symbolic links. Instead of source -ing the rc, you may also restart the terminal emulator/shell.
{ "source": [ "https://unix.stackexchange.com/questions/125385", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/63453/" ] }
125,389
I am getting lots of incoming spam mails to mail IDs on my websites, such as ([email protected], [email protected]), mostly to info pages, not to users' mail accounts. How can I stop mail from such mail IDs? I have an Exim mail server with a cPanel and WHM account.
{ "source": [ "https://unix.stackexchange.com/questions/125389", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/63175/" ] }
125,399
I would like to shrink my logical volume for the home directory, so that I can extend the root volume. df -h /dev/mapper/vg_mitoscomp-lv_root 50G 33G 15G 70% / /dev/mapper/vg_mitoscomp-lv_home 53G 180M 51G 1% /home lsblk └─sdc2 8:34 0 111.3G 0 part ├─vg_mitoscomp-lv_root (dm-0) 253:0 0 50G 0 lvm / ├─vg_mitoscomp-lv_swap (dm-1) 253:1 0 7.6G 0 lvm [SWAP] └─vg_mitoscomp-lv_home (dm-2) 253:2 0 53.8G 0 lvm /home I can successfully unmount /home , but then I can't perform a health check, nor can I resize the volume. [root@MITOs-Comp ~]# e2fsck -f /dev/mapper/vg_mitoscomp-lv_home e2fsck 1.41.12 (17-May-2010) /dev/mapper/vg_mitoscomp-lv_home is in use. e2fsck: Cannot continue, aborting. [root@MITOs-Comp ~]# resize2fs /dev/mapper/vg_mitoscomp-lv_home 10G resize2fs 1.41.12 (17-May-2010) resize2fs: Device or resource busy while trying to open /dev/mapper/vg_mitoscomp-lv_home Couldn't find valid filesystem superblock.
{ "source": [ "https://unix.stackexchange.com/questions/125399", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/65565/" ] }
125,400
This is in regard to Linux, but if anyone knows of a general *nix method that would be good. I booted a system yesterday with an ethernet cable plugged in. "NetworkManager" is not installed, so once it started I went to look for the name of the ethernet interface with ifconfig to start a DHCP client manually, but it did not show anything other than lo . The NIC was listed via lspci , and the appropriate kernel driver was loaded. The system normally uses wifi, and I could remember the interface name for that was wlan0 . When I tried ifconfig wlan0 up , wlan0 appeared. But the only ethernet interface names I could remember were eth[N] and em[N] -- neither of which worked. This document refers to "predictable interface names" but does not do a good job of explaining what they might be in simple terms. It does refer to a piece of source code which implies the name in this case might be deduced from the PCI bus and slot numbers, which seems like an unnecessarily complicated hassle. Other searching around led me to believe that this might be determined by systemd in conjunction with udev , but there are almost 100 files in /usr/lib/udev/rules.d and spending an hour trying to determine where (and if ) there's a systemd config file for this also seems ridiculous. It would also be nice to know for certain that they are available, not just how they might be named if they are, so I can rule out hardware problems, etc. Isn't there a simple way to find the names of available network interfaces on Linux?
The simplest method I know to list all of your interfaces is ifconfig -a EDIT If you're on a system where that has been made obsolete, you can use ip link show
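If neither tool happens to be installed, on Linux the kernel exposes the same list through sysfs, so a plain ls works too: ls /sys/class/net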
{ "source": [ "https://unix.stackexchange.com/questions/125400", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/25985/" ] }
125,429
When administering Linux systems I often find myself struggling to track down the culprit after a partition goes full. I normally use du / | sort -nr but on a large filesystem this takes a long time before any results are returned. Also, this is usually successful in highlighting the worst offender but I've often found myself resorting to du without the sort in more subtle cases and then had to trawl through the output. I'd prefer a command line solution which relies on standard Linux commands since I have to administer quite a few systems and installing new software is a hassle (especially when out of disk space!)
Try ncdu , an excellent command-line disk usage analyser.
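If installing ncdu is not an option (the out-of-disk case), a sketch using only standard GNU tools, which asks du for the top-level directories of one filesystem instead of sorting every file: du -x --max-depth=1 / 2>/dev/null | sort -rn | head -20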
{ "source": [ "https://unix.stackexchange.com/questions/125429", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/-1/" ] }
125,526
Just upgraded to Ubuntu 14.04, which seems to also make a full reinstall of Chromium (as all my plugins were removed). Now, trying to access https://extensions.gnome.org/ to enable Gnome Shell extensions, but the message: We cannot detect a running copy of GNOME on this system, so some parts of the interface may be disabled. See our troubleshooting entry for more information. keeps appearing. There is nothing in my chrome://plugins page, but the site still seems to be whitelisted in my "click to play" settings. Has anyone found out how to force Chrome to get this plugin?
Chrome and Chromium dropped support for NPAPI plugins (Netscape Plugin Application Programming Interface) in favor of PPAPI (Pepper Plugin Application Programming Interface), so all plugins that use NPAPI (like the GNOME Extensions plugin) are simply not supported. The only alternative is using another browser that allows them (like Firefox) or asking the developers to move to PPAPI (unlikely). NOTE: This is the blog post from the Chromium blog mentioning this, titled: Saying Goodbye to Our Old Friend NPAPI .
{ "source": [ "https://unix.stackexchange.com/questions/125526", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/65626/" ] }
125,546
I have installed a vlc in centos 6.5. I want to run it as root. But I get the following error, "VLC is not supposed to be run as root. Sorry. If you need to use real-time priorities and/or privileged TCP ports you can use vlc-wrapper (make sure it is Set-UID root and cannot be run by non-trusted users first)."
This is what worked for me. No compilation required. sed -i 's/geteuid/getppid/' /usr/bin/vlc Using VLC media player 2.0.3 Twoflower on a Raspberry Pi. Explanation: The initialization code checks if the UID equals zero. Zero is reserved for the root user. Using sed to replace geteuid with getppid fools that check, because getppid() returns the parent process ID, which is greater than zero for any process started from a shell. (Both function names are the same length, so patching the binary this way is safe.) While running VLC as root is not recommended, it works. Be aware of the risks and obviously do not do it in production environments. For FreeBSD, other finicky Unices, and macOS, use the proper full syntax (BSD sed requires an explicit argument to -i ): sed -i '' 's/geteuid/getppid/' /usr/local/bin/vlc
{ "source": [ "https://unix.stackexchange.com/questions/125546", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/65639/" ] }
125,776
When I tried to change the file name from old.file(1).gz to new.file.gz , it gave a syntax error. I am using Ubuntu 12.04. mv old.file(1).gz new.file.gz bash: syntax error near unexpected token `('
Yeti's comment will work for you, but if you would like to know why, it's because parentheses are interpreted as special characters, and have to either be escaped with \ or the entire filename quoted (as above) [edit: sorry, only the ( and ) need to be quoted]. If you have tab completion enabled, just type the first few characters of the file name and hit tab. I.e., typing mv old and hitting tab, should turn into mv old.file\(1\).gz (unless there are other potential files that old* could refer to).
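Concretely, either of these forms works: mv 'old.file(1).gz' new.file.gz or mv old.file\(1\).gz new.file.gz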
{ "source": [ "https://unix.stackexchange.com/questions/125776", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/20346/" ] }
125,833
I followed this tutorial to set up IP rules on ubuntu 12.04. Everything worked fine on setup -- but now I've made changes to the firewall that do not persist upon reboot. I do not understand why that is. Here is a demonstration of how I am using iptables-persistent. What am I doing wrong? $ sudo service iptables-persistent start * Loading iptables rules... * IPv4... * IPv6... $ sudo iptables -L //shows a certain rule $ iptables -D INPUT ... //command successfully drops the rule $ sudo iptables -L //shows rule has been deleted $ sudo service iptables-persistent restart * Loading iptables rules... * IPv4... * IPv6... [ OK ] $ sudo iptables -L //rule is back
iptables-persistent does not work that way. Restarting the iptables-persistent "service" does not capture the current state of the iptables and save it; all it does is reinstate the iptables rules that were saved when the package was last configured. To configure iptables-persistent , you need to tell it about your current iptables ruleset. One way to accomplish that is as follows: iptables-save >/etc/iptables/rules.v4 ip6tables-save >/etc/iptables/rules.v6 Or, equivalently, the iptables-persistent package also provides the following: dpkg-reconfigure iptables-persistent (You will need to answer yes to the questions about whether to save the rules.) After that, the next time iptables-persistent is started/restarted, the iptables rulesets you expect will be loaded.
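On newer Debian/Ubuntu releases the same package also ships a netfilter-persistent wrapper; if your version has it, saving the live ruleset is a single command: netfilter-persistent save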
{ "source": [ "https://unix.stackexchange.com/questions/125833", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/18366/" ] }
125,834
Context: Recently I've bought an SSD. Before that I had 2 HDDs; one being a 500GB drive with a W7 installation, and another 320GB drive containing two partitions, one of which is for storage (NTFS) and another holds a Linux Mint installation (which I'm currently on). When the SSD arrived I set up another W7 installation. Once this was fully set up, I wanted to format the 500GB drive to use it for storage. I've done this today, and as a result wiped the Windows loader from my system. Ideally, what I would like is for the Windows boot manager to be on the SSD, and a grub installation on the 320GB drive. Then I'd like the system to boot to the 320GB drive, and give options to load the Windows boot manager, or boot to Linux. Since the Windows drive cannot be booted to, when running a W7 disc and using the CLI I get the error "element not found" when attempting to use "bootrec /fixboot". Using "/scanos" does find my W7 installation however. I've also set the W7 partition as active and rebooted, but had the same results. I've also attempted using boot-repair, however my system still cannot boot to Windows. My current boot summary is here . My current grub boot menu just lists the Mint installations. How can I fix this to include a Windows boot loader? @terdon: Generating grub.cfg ... Found linux image: /boot/vmlinuz-3.11.0-15-generic Found initrd image: /boot/initrd.img-3.11.0-15-generic Found linux image: /boot/vmlinuz-3.11.0-12-generic Found initrd image: /boot/initrd.img-3.11.0-12-generic No volume groups found done
{ "source": [ "https://unix.stackexchange.com/questions/125834", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/65780/" ] }
125,972
Why does cd .. , typed at the root folder, not warn or fail with an error? I would expect: /$ cd .. -bash: cd: ..: No such file or directory Instead, I'm left at / . Of course, this is because .. does exist in / , and is simply / , just like . . I just wonder why it is like that.
According to the Open Group (responsible for the POSIX standard): Each directory has exactly one parent directory which is represented by the name dot-dot in the first directory. [...] What the filename dot-dot refers to relative to the root directory is implementation-defined. In Version 7 it refers to the root directory itself; this is the behavior mentioned in POSIX.1-2008. In some networked systems the construction /../hostname/ is used to refer to the root directory of another host, and POSIX.1 permits this behavior. A.4.13 Pathname Resolution The dot-dot entry in the root directory is interpreted to mean the root directory itself. Thus, dot-dot cannot be used to access files outside the subtree rooted at the root directory. chroot - change root directory
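A quick demonstration that dot-dot in the root directory resolves to the root itself: $ cd / $ cd .. $ pwd /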
{ "source": [ "https://unix.stackexchange.com/questions/125972", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/59639/" ] }
126,009
I am relatively new to systemd and am learning its architecture. Right now, I'm trying to figure out how to cause a custom shell script to run. This script needs to run after the networking layer has started up. I'm running Arch, using systemd as well as netctl. To test, I wrote a simple script that simply executes ip addr list > /tmp/ip.txt . I created the following service file for this script. (/etc/systemd/system/test.service) [Unit] Description=test service [Service] ExecStart=/root/test.script [Install] WantedBy=multi-user.target I then enabled the script with, systemctl enable test Upon restarting, the script does indeed run, but it runs prior to the network being started. In other words, the output in ip.txt displays no IPv4 address assigned to the primary interface. By the time I login, the IPv4 address has indeed been assigned and networking is up. I'm guessing I could alter the point at which the script runs by messing with the WantedBy parameter, but I'm not sure how to do that. Could someone point me in the right direction?
On systemd network configuration dependencies It is very easy to affect systemd's unit ordering. On the other hand you need to be careful about what a completed unit guarantees. Configure your service On current systems, ordering after network.target just guarantees that the network service has been started, not that there's some actual configuration. You need to order after network-online.target and pull it in to achieve that. [Unit] Wants=network-online.target After=network-online.target For compatibility with older systems, you may need to order after network.target as well. [Unit] Wants=network-online.target After=network.target network-online.target That's for the unit file of your service and for systemd. Implementation in current versions of software Now you need to make sure that network-online.target works as expected (or that you at least can use network.target ). The current version of NetworkManager offers the NetworkManager-wait-online.service which gets pulled in by network-online.target and thus by your service. This special service ensures that your service will wait until all connections configured to be started automatically succeed, fail, or time out. The current version of systemd-networkd blocks your service until all devices are configured as requested. It is easier in that it currently only supports configurations that are applied at boot time (more specifically the startup time of systemd-networkd.service ). For the sake of completeness, the /etc/init.d/network service in Fedora, as interpreted by the current versions of systemd, blocks network.target and thus indirectly blocks network-online.target and your service. It's an example of a script based implementation. If your implementation, whether daemon based or script based, behaves as one of the network management services above, it will delay the start of your service until network configuration is either successfully completed, failed for a good reason, or timed out after a reasonable time frame to complete. You may want to check whether netctl works the same way and that information would be a valuable addition to this answer. Implementations in older versions of software I don't think you will see a sufficiently old version of systemd where this wouldn't work well. But you can check that at least network-online.target exists and that it gets ordered after network.target . Previously NetworkManager only guaranteed that at least one connection would get applied. And even for that to work, you would have to enable the NetworkManager-wait-online.service explicitly. This has been long fixed in Fedora but was only recently applied upstream. systemctl enable NetworkManager-wait-online.service Notes on network.target and network-online.target implementations You shouldn't ever need to make your software depend on NetworkManager.service or NetworkManager-wait-online.service nor any other specific services. Instead, all network management services should order themselves before network.target and optionally network-online.target . A simple script based network management service should finish network configuration before exiting and should order itself before network.target and thus indirectly before network-online.target . [Unit] Before=network.target [Service] Type=oneshot ExecStart=... RemainAfterExit=yes A daemon based network management service should also order itself before network.target even though it's not very useful. [Unit] Before=network.target [Service] Type=simple ExecStart=...
A service that waits for the daemon to finish should order itself after the specific service and before network-online.target . It should use Requisite on the daemon service so that it fails immediately if the respective network management service isn't being used. [Unit] Requisite=... After=... Before=network-online.target [Service] Type=oneshot ExecStart=... RemainAfterExit=yes The package should install a symlink to the waiting service in the wants directory for network-online.target so that it gets pulled in by services that want to wait for configured network. ln -s /usr/lib/systemd/system/... /usr/lib/systemd/system/network-online.target.wants/ Related documentation http://www.freedesktop.org/software/systemd/man/systemd.special.html#network-online.target http://www.freedesktop.org/wiki/Software/systemd/NetworkTarget/ Final notes I hope I not only helped to answer your question at the time you asked it, but also contributed to improving the situation in upstream and Linux distributions, so that I can now give a better answer than was possible at the time of writing the original one.
{ "source": [ "https://unix.stackexchange.com/questions/126009", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/65884/" ] }
126,014
I am trying to time something using: /usr/bin/time myCommand However, since /usr/bin/time writes to stderr, if myCommand also writes to stderr, I will get more than just time's output on the stream. What I want to do is, redirect all of myCommand's output to /dev/null , but still write time's output to stderr. Using an example myCommand that writes to stderr of ls /nofile , we see that (obviously) there is no output at all with the following: $ /usr/bin/time ls /nofile 2> /dev/null $ Without any redirection, we see both the output from ls (to stderr) and the output from time (also to stderr): $ /usr/bin/time ls /nofile ls: cannot access /nofile: No such file or directory 0.00user 0.00system 0:00.00elapsed 0%CPU (0avgtext+0avgdata 3776maxresident)k 0inputs+0outputs (0major+278minor)pagefaults 0swaps What I want is something that simply produces: $ /usr/bin/time ls /nofile > RedirectThatImAskingFor 0.00user 0.00system 0:00.00elapsed 0%CPU (0avgtext+0avgdata 3776maxresident)k 0inputs+0outputs (0major+278minor)pagefaults 0swaps Any ideas?
In ksh, bash and zsh, time is a keyword, not a builtin. Redirections on the same line apply only to the command being timed, not to the output of time itself. $ time ls -d / /nofile >/dev/null 2>/dev/null real 0m0.003s user 0m0.000s sys 0m0.000s To redirect the output from time itself in these shells, you need to use an additional level of grouping. { time mycommand 2>&3; } 3>&2 2>mycommand.time If you use the GNU version of the standalone time utility, it has a -o option to write the output of time elsewhere than stderr. You can make time write to the terminal: /usr/bin/time -o /dev/tty mycommand >/dev/null 2>/dev/null If you want to keep the output from time on its standard error, you need an extra level of file descriptor shuffling. /usr/bin/time -o /dev/fd/3 mycommand 3>&2 >/dev/null 2>/dev/null With any time utility, you can invoke an intermediate shell to perform the desired redirections. Invoking an intermediate shell to perform extra actions such as cd , redirections, etc. is pretty common β€”Β it's the kind of little things that shells are designed to do. /usr/bin/time sh -c 'exec mycommand >/dev/null 2>/dev/null'
{ "source": [ "https://unix.stackexchange.com/questions/126014", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/62940/" ] }
126,134
If the which command is not available, is there another 'standard' method to find out where a command's executable can be found? If there is no other 'standard' method available, the actual system I face currently is a bare Android emulator with an ash Almquist shell , if that means anything.
This should be a standard solution, using the type builtin: type yourcommand You can also use type -t to show what kind of command it is (alias, keyword, function, builtin or file), and type -p to print the path of the executable that would be run.
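The POSIX-specified alternative, which should be available even in minimal shells such as ash, is command -v : command -v yourprogram It prints the path of the executable that would run (or just the name, for builtins and functions).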
{ "source": [ "https://unix.stackexchange.com/questions/126134", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/18768/" ] }
126,238
I have a log file, and when I open it via vim, it looks unreadable, and it has [converted] at the bottom. What does [converted] mean? Is there a way to fix the format issue so that it is human readable?
It means that vim detected that the file did not match the charset given by your locale and made a conversion. If you run the command :set from within vim : :set --- Options --- autoindent fileformat=dos scroll=7 textwidth=70 background=dark filetype=asciidoc shiftwidth=2 ttyfast cscopetag helplang=en softtabstop=2 ttymouse=sgr cscopeverbose hlsearch syntax=asciidoc noendofline list tabpagemax=3 expandtab ruler textmode backspace=indent,eol,start comments=s1:/*,ex:*/,://,b:#,:%,:XCOMM,fb:-,fb:*,fb:+,fb:.,fb:> cscopeprg=/usr/bin/cscope fileencoding=utf-8 fileencodings=ucs-bom,utf-8,latin1 Notice the last 2 options, fileencoding & fileencodings . The first is the encoding used for the current file, the second is a comma separated list of recognized encodings. So when you see that message, vim is telling you that it has completed converting the file from fileencoding to encoding . Check out :help fileencoding or :help encoding for additional details. Reference I found the thread below, which I used as a source when this was answered. The original site is now gone (accessible in this answer's history), so I'm moving the contents of that thread here for posterity's sake. The link was still in the Wayback Machine . #1 Eli the Bearded January 21st, 2004 - 06:51 pm ET In comp.os.linux.misc, Leon. wrote: "Gaétan Martineau" wrote in message news:E9jLb.2903$ > [ system_notes]$ vi installation_chouette.txt > What means the [converted] at the bottom of the screen, as in: > "installation_chouette.txt" [converted] 2576L, 113642C It means that vim detected that the file did not match the charset given by your locale and made a conversion. What does :set tell you about "fileencoding" and "fileencodings"? The first is the encoding used for the current file, the second is a comma separated list of recognized encodings. > This file has accented characters. How can I save the file so that if I > reload if again, I do not see "converted"? Figure out what charset you want, and then :set fileencoding=[charset] :w > It means deleting the Microsoft Dos/ Windows CR LF end of lines, to just LF - unix standard end of lines. It does not. If you open a file with DOS line ends, vim reports [dos] after the filename, not [converted]. If you do have a dos file that you wish to convert to unix line ends, you can :set fileformat=unix :w Elijah
{ "source": [ "https://unix.stackexchange.com/questions/126238", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/39989/" ] }
126,297
I have a different (assigned, non-changable) username on one of the servers I log to regularly, and I would like to avoid writing it down every time. Can I make these lines [tohecz@localhost ~]$ ssh myserver.cz [tohecz@localhost ~]$ ssh anotherserver.cz behave as the following? [tohecz@localhost ~]$ ssh [email protected] [tohecz@localhost ~]$ ssh [email protected]
Add the following in your ~/.ssh/config file: Host myserver.cz User tohecz Host anotherserver.cz User anotheruser You can specify a lot of default parameters for your hosts using this file. Just have a look at man ssh_config for other possibilities.
{ "source": [ "https://unix.stackexchange.com/questions/126297", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/22723/" ] }
126,457
I have a repository installed. I want to update that repository with new rpm files. Can I rerun the createrepo command on this repository in order to make my new files available to the servers using that repository?
You need the --update option with createrepo createrepo --update: Sometimes you have a lot of packages in your repository and regenerating the metadata for each package when only a few packages have been added or changed is just too time consuming. This is where --update comes in handy. You run createrepo just like you did before but you pass the --update flag to it. Like this: createrepo --update </path/to/repo> Now, createrepo will only update the items which have been changed, been added or been removed since the last time the metadata was generated. Also mentioned in man createrepo ( Reference )
{ "source": [ "https://unix.stackexchange.com/questions/126457", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/66119/" ] }
126,514
It may sound like I'm asking the same thing as this question , but I have different requirements. This is an example of my filesystem: /code/ internal/ dev/ main/ public/ dev/ main/ release/ tools/ /code/internal/dev/ , /code/public/dev/ and /code/tools/ contain subdirectories for multiple projects. I work almost exclusively in the dev branches of /code/internal/ and /code/public/ , and often I want to search for a text string in those directories along with /code/tools/ (which has no branches). In these instances I have to do three separate commands: $ grep -r "some string" /code/internal/dev/ $ grep -r "some string" /code/public/dev/ $ grep -r "some string" /code/tools/ I'd like to know if there's a single command to do this. If not, I would most likely need to write a simple bash script.
You can pass several paths to grep and it will search all of them: grep -r "some string" /code/internal/dev/ /code/public/dev/ /code/tools/
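If your shell is bash or zsh, brace expansion gives a shorter equivalent: grep -r "some string" /code/{internal,public}/dev/ /code/tools/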
{ "source": [ "https://unix.stackexchange.com/questions/126514", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/22172/" ] }
126,630
I am looking for a program which I can use to create ASCII diagrams such these: +---------+ | | +--------------+ | NFS |--+ | | | | | +-->| CacheFS | +---------+ | +----------+ | | /dev/hda5 | | | | | +--------------+ +---------+ +-->| | | | | | |--+ | AFS |----->| FS-Cache | | | | |--+ +---------+ +-->| | | | | | | +--------------+ +---------+ | +----------+ | | | | | | +-->| CacheFiles | | ISOFS |--+ | /var/cache | | | +--------------+ +---------+ It should preferably be a package available in Debian . The wonderful diagram displayed above is taken from the Linux kernel documentation. I cannot believe they were created by hand. There must be some tool to create them.
asciio I've used asciio for several years. Many of the diagrams on this site I've created using asciio . example vncviewer .-,( ),-. __ _ .-( )-. gateway vncserver [__]|=| ---->( internet )-------> __________ ------> ____ __ /::/|_| '-( ).-' [_...__...Β°] | | |==| '-.( ).-' |____| | | /::::/ |__| The GUI looks like this. NOTE: Everything is driven from the right click menu as well as short-cut keys. DrawIt Using vim along with the DrawIt plugin you can also create basic diagrams. A good overview of how to install and use it is available here in this article titled: How To Create ASCII Drawings in Vim Editor (Draw Boxes, Lines, Ellipses, Arrows Inside Text File) . asciiflow There's a website called asciiflow which is probably the easiest way to draw these types of diagrams. JavE Another tool, JavE , written in Java that can create ascii diagrams like this as well. ,'''''''''''''| | Controller | | | '`'i'''''''''' ,' `. ,' `. - - ,'''''''''''''| ,''''''''''''`. | Model |______| View | | | | | `'''''''''''' '`''''''''''''' The GUI looks like this: Resources Flytrap and Asciio Installing Asciio in Ubuntu App :: Asciio - Graphical user interface for ASCII Charts
{ "source": [ "https://unix.stackexchange.com/questions/126630", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/43007/" ] }
126,719
Googling this didn't show up any results. Here's what I mean: I have a binary file named x in my path (not the current folder, but it is in the PATH ), and also a folder with the same name in the current working directory. If I type x , I want the binary to execute, but instead it cd's into that folder. How do I fix this?
TL; DR Add this line to your ~/.zshrc : unsetopt autocd The AUTO_CD option and how to find it First of all, the option you are looking for is AUTO_CD. You can easily find it by looking it up in `man zshoptions`. Use your pager's search function: usually you press / and enter the keyword, and with n you jump to the next occurrence. This will bring up the following: [..] Changing Directories AUTO_CD (-J) If a command is issued that can't be executed as a normal command, and the command is the name of a directory, perform the cd command to that directory. [..] The option can be unset using unsetopt AUTO_CD . Turning it properly off You are using oh-my-zsh, which is described as "A community-driven framework for managing your zsh configuration" Includes 120+ optional plugins (rails, git, OSX, hub, capistrano, brew, ant, macports, etc), ... So the next thing is to find out how to enable/disable options according to the framework. The readme.textile file states that the preferred way to enable/disable plugins would be an entry in your .zshrc: plugins=(git osx ruby) Find out which plugin uses the AUTO_CD option. As discovered from the manpage, it can be invoked via the -J switch or AUTO_CD. Since oh-my-zsh is available on github, searching for it will turn up the file lib/theme-and-appearance.zsh . If you don't want to disable the whole plugin "theme-and-appearance", put an unsetopt AUTO_CD in your .zshrc. Don't modify the files of oh-my-zsh directly, because if you update the framework, your changes will be lost. Why executables are not invoked directly Your third question is how to execute a binary directly: you have to execute your binary file via a path, for example with a prefixed `./` as in `./do-something`. This is a security feature and should not be changed. Think of plugging in a USB stick, mounting it and having a look at it with `ls`. If there were an executable called `ls` on it which deletes your home directory, everything would be gone, since the current directory would effectively take precedence over your $PATH. If you have commands you call repeatedly, setting up an alias in your .zshrc would be a common solution.
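To check whether the option is currently active in your running shell, a quick probe (assuming zsh, whose setopt builtin lists the options that are set): setopt | grep -i autocd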
{ "source": [ "https://unix.stackexchange.com/questions/126719", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/8376/" ] }
126,725
I have a USB flash drive and up till now it has worked well. Recently I recorded an ISO to it using dd. Now I want to delete it. $ lsblk NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT ....... sdb 8:16 1 14.6G 0 disk └─sdb1 8:17 1 14.5G 0 part /media/alex/ARCH_201404 sr0 11:0 1 1024M 0 rom $ mount /dev/sdb1 on /media/alex/ARCH_201404 type iso9660 (ro,nosuid,nodev,uid=1000,gid=1000,iocharset=utf8,mode=0400,dmode=0500,uhelper=udisks2) When I did this $ sudo dd ibs=4096 count=1 if=/dev/zero of=/dev/sdb1 1+0 records in 8+0 records out 4096 bytes (4.1 kB) copied, 0.00053675 s, 7.6 MB/s it seemed to succeed but when I explored the USB flash drive all the files were still there. When I did this: sudo rm -r /media/alex/ARCH_201404/* I got the error: .................. rm: cannot remove ‘/media/alex/ARCH_201404/loader/entries/uefi-shell-v1-x86_64.conf’: Read-only file system rm: cannot remove ‘/media/alex/ARCH_201404/loader/entries/uefi-shell-v2-x86_64.conf’: Read-only file system rm: cannot remove ‘/media/alex/ARCH_201404/loader/loader.conf’: Read-only file system ..................... What can I do about it?
{ "source": [ "https://unix.stackexchange.com/questions/126725", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/61718/" ] }
126,786
I'm trying to make a systemd timer that runs every 15 minutes. Right now I have: timer-fifteen.timer : [Unit] Description=15min timer [Timer] OnBootSec=0min OnCalendar=*:*:0,15,30,45 Unit=timer-fifteen.target [Install] WantedBy=basic.target timer-fifteen.target : [Unit] Description=15min Timer Target StopWhenUnneeded=yes This runs over and over again without stopping. Does it need to be *:0,15,30,45:* instead? How can I make this work?
Your syntax translates to every 15 seconds. If you want every 15 minutes, IMO the most readable way is: OnCalendar=*:0/15 An answer most similar to what you used in your question is: OnCalendar=*:0,15,30,45 More information: http://www.freedesktop.org/software/systemd/man/systemd.time.html
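For completeness, a minimal sketch of the whole pair of units, assuming the conventional pattern of a timer activating a matching .service (the unit names and the ExecStart path are illustrative, not from the question):

# timer-fifteen.timer
[Unit]
Description=15min timer

[Timer]
OnBootSec=0min
OnCalendar=*:0/15
Persistent=true

[Install]
WantedBy=timers.target

# timer-fifteen.service
[Unit]
Description=Work done every 15 minutes

[Service]
Type=oneshot
ExecStart=/usr/local/bin/do-work

With matching names, Unit= can be omitted; activate it with systemctl enable timer-fifteen.timer followed by systemctl start timer-fifteen.timer .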
{ "source": [ "https://unix.stackexchange.com/questions/126786", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/45476/" ] }
126,789
I just installed Arch Linux following a video tutorial from youtube for the first time. However, when I try starting the GNOME terminal it won't start. It will say Terminal at the top of my screen for a couple of seconds and there will be a loading symbol, but after a couple of seconds they both disappear and no terminal will appear. Because I can't load or start the terminal I can't do anything (I can't even answer the question to register to the Arch Linux forums). What am I doing wrong?
I had the same issue after a fresh install of Arch. I checked, double-checked and triple-checked locale.gen and even removed every locale except en_US.UTF-8. I was just about to give up when I looked under Settings, Region & Language and discovered the language was not set, even though I had run the command to set it. After picking English and rebooting, it works fine.
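If the underlying cause is an unset or ungenerated locale, the usual Arch fix is a sketch like this (assuming en_US.UTF-8 UTF-8 is uncommented in /etc/locale.gen):

$ sudo locale-gen                                # regenerate the locales
$ sudo localectl set-locale LANG=en_US.UTF-8     # write it to /etc/locale.conf

then log out and back in (or reboot) so gnome-terminal sees a valid LANG; it is known to fail to start without one.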
{ "source": [ "https://unix.stackexchange.com/questions/126789", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/66309/" ] }
126,812
I have a script that I want to be able to run on two machines. These two machines get copies of the script from the same git repository. The script needs to run with the right interpreter (e.g. zsh ). Unfortunately, both env and zsh live in different locations on the local and remote machines: Remote machine $ which env /bin/env $ which zsh /some/long/path/to/the/right/zsh Local machine $ which env /usr/bin/env $ which zsh /usr/local/bin/zsh How can I set up the shebang so that running the script as /path/to/script.sh always uses the Zsh available in PATH ?
You cannot solve this through shebang directly, since shebang is purely static. What you could do is to have some »least common multiplier« (from a shell perspective) in the shebang and re-execute your script with the right shell, if this LCM isn't zsh. In other words: Have your script executed by a shell found on all systems, test for a zsh -only feature and if the test turns out false, have the script exec with zsh , where the test will succeed and you just continue. One unique feature in zsh , for example, is the presence of the $ZSH_VERSION variable: #!/bin/sh - [ -z "$ZSH_VERSION" ] && exec zsh - "$0" ${1+"$@"} # zsh-specific stuff following here echo "$ZSH_VERSION" In this simple case, the script is first executed by /bin/sh (all post-80s Unix-like systems understand #! and have a /bin/sh , either Bourne or POSIX, but our syntax is compatible with both). If $ZSH_VERSION is not set, the script exec 's itself through zsh . If $ZSH_VERSION is set (resp. the script already is run through zsh ), the test is simply skipped. Voilà. This only fails if zsh isn't in the $PATH at all. Edit: To make sure you only exec a zsh in the usual places, you could use something like for sh in /bin/zsh \ /usr/bin/zsh \ /usr/local/bin/zsh; do [ -x "$sh" ] && exec "$sh" - "$0" ${1+"$@"} done This could save you from accidentally exec 'ing something in your $PATH which is not the zsh you're expecting.
{ "source": [ "https://unix.stackexchange.com/questions/126812", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/4531/" ] }
126,908
Is there a way to programmatically obtain a SSH server key fingerprint without authenticating to it? I'm trying ssh -v user@host false 2>&1 | grep "Server host key" , but this hangs waiting for a password if key based auth is not setup.
You could do this by combining ssh-keyscan and ssh-keygen : $ file=$(mktemp) $ ssh-keyscan host > $file 2> /dev/null $ ssh-keygen -l -f $file 521 de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef host (ECDSA) 4096 8b:ad:f0:0d:8b:ad:f0:0d:8b:ad:f0:0d:8b:ad:f0:0d host (RSA) $ rm $file Edit : since OpenSSH 7.2 this oneliner works: ssh-keyscan host | ssh-keygen -lf - (credits to @mykhal)
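If your shell has process substitution (bash/zsh) and your OpenSSH is reasonably recent, the temporary file can be skipped in one go, something like:

$ ssh-keygen -lf <(ssh-keyscan host 2>/dev/null)

This should behave like the temporary-file version above, since ssh-keygen just reads the keyscan output as a key file.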
{ "source": [ "https://unix.stackexchange.com/questions/126908", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/22022/" ] }
126,927
I've seen this comment many times on Unix & Linux as well as on other sites that use the phrasing "backticks have been deprecated", with respect to shells such as Bash & Zsh. Is this statement true or false?
There are two different meanings of "deprecated." be deprecated: (chiefly of a software feature) be usable but regarded as obsolete and best avoided, typically due to having been superseded. – New Oxford American Dictionary By this definition backticks are deprecated. Deprecated status may also indicate the feature will be removed in the future. – Wikipedia By this definition backticks are not deprecated. Still supported: Citing the Open Group Specification on Shell Command Languages, specifically section 2.6.3 Command Substitution , it can be seen that both forms of command substitution, backticks ( `..cmd..` ) or dollar parens ( $(..cmd..) ) are still supported insofar as the specification goes. Command substitution allows the output of a command to be substituted in place of the command name itself. Command substitution shall occur when the command is enclosed as follows: $(command) or (backquoted version): `command` The shell shall expand the command substitution by executing command in a subshell environment (see Shell Execution Environment) and replacing the command substitution (the text of command plus the enclosing $() or backquotes) with the standard output of the command, removing sequences of one or more <newline> characters at the end of the substitution. Embedded <newline> characters before the end of the output shall not be removed; however, they may be treated as field delimiters and eliminated during field splitting, depending on the value of IFS and quoting that is in effect. If the output contains any null bytes, the behavior is unspecified. Within the backquoted style of command substitution, <backslash> shall retain its literal meaning, except when followed by: '$', ' ` ', or <backslash> . The search for the matching backquote shall be satisfied by the first unquoted non-escaped backquote; during this search, if a non-escaped backquote is encountered within a shell comment, a here-document, an embedded command substitution of the $(command) form, or a quoted string, undefined results occur. A single-quoted or double-quoted string that begins, but does not end, within the " `...` " sequence produces undefined results. With the $(command) form, all characters following the open parenthesis to the matching closing parenthesis constitute the command. Any valid shell script can be used for command, except a script consisting solely of re-directions which produces unspecified results. So then why does everyone say that backticks have been deprecated? Because most of the use cases should be making use of the dollar parens form instead of backticks. (Deprecated in the first sense above.) Many of the most reputable sites (including U&L) often state this as well, throughout, so it's sound advice. This advice should not be confused with some non-existent plan to remove support for backticks from shells. BashFAQ #082 - Why is $(...) preferred over `...` (backticks)? `...` is the legacy syntax required by only the very oldest of non-POSIX-compatible bourne-shells. There are several reasons to always prefer the $(...) syntax: ... Bash Hackers Wiki - Obsolete and deprecated syntax This is the older Bourne-compatible form of the command substitution . Both the `COMMANDS` and $(COMMANDS) syntaxes are specified by POSIX, but the latter is greatly preferred, though the former is unfortunately still very prevalent in scripts. New-style command substitutions are widely implemented by every modern shell (and then some).
The only reason for using backticks is for compatibility with a real Bourne shell (like Heirloom). Backtick command substitutions require special escaping when nested, and examples found in the wild are improperly quoted more often than not. See: Why is $(...) preferred over `...` (backticks)? . POSIX standard rationale Because of these inconsistent behaviors, the backquoted variety of command substitution is not recommended for new applications that nest command substitutions or attempt to embed complex scripts. NOTE: This third excerpt (above) goes on to show several situations where backticks simply won't work, but the newer dollar parens method does, beginning with the following paragraph: Additionally, the backquoted syntax has historical restrictions on the contents of the embedded command. While the newer "$()" form can process any kind of valid embedded script, the backquoted form cannot handle some valid scripts that include backquotes. If you continue reading that section the failures are highlighted showing how they would fail using backticks, but do work using the newer dollar parens notation. Conclusions So it's preferable that you use dollar parens instead of backticks but you aren't actually using something that's been technically "deprecated" as in "this will stop working entirely at some planned point." After reading all this you should have the take away that you're strongly encouraged to use dollar parens unless you specifically require compatibility with a real original non-POSIX Bourne shell.
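The escaping burden the citations mention is easy to demonstrate with nesting; both commands below print the same thing, but the backtick form needs its inner backticks escaped:

$ echo "$(echo outer $(echo inner))"
outer inner
$ echo "`echo outer \`echo inner\``"
outer inner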
{ "source": [ "https://unix.stackexchange.com/questions/126927", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/7453/" ] }
126,938
I've just encountered several answers, such as one on parsing a delimited text file..., that use the construct: while IFS=, read xx yy zz; do echo $xx $yy $zz; done < input_file where the IFS variable is set before the read command. I've been reading through the bash reference but can't figure out why this is legal. I tried $ x="once upon" y="a time" echo $x $y from the bash command prompt but got nothing echoed. Can someone point me to where that syntax is defined in the reference that allows the IFS variable to be set in that way? Is it a special case or can I do something similar with other variables?
Relevant information can be found on the man page provided by the BASH maintainer (last checked August 2020). Section Shell Grammar, Simple Commands states (emphasis added): A simple command is a sequence of optional variable assignments followed by blank-separated words and redirections , and terminated by a control operator. The first word specifies the command to be executed, and is passed as argument zero. The remaining words are passed as arguments to the invoked command. So you can pass any variable you'd like. Your echo example does not work because the variables are passed to the command, not set in the shell. The shell expands $x and $y before invoking the command. This works, for example: $ x="once upon" y="a time" bash -c 'echo $x $y' once upon a time
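A quick bash demonstration that the assignment really is scoped to the one command (the <<< here-string is a bashism):

$ IFS=, read xx yy zz <<< 'a,b,c'
$ echo "$xx / $yy / $zz"
a / b / c
$ printf '%q\n' "$IFS"     # the shell's own IFS still has its default value
$' \t\n'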
{ "source": [ "https://unix.stackexchange.com/questions/126938", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/66383/" ] }
127,002
When I do: $ traceroute 8.8.8.8 the output shows only stars, even though pinging google.com ( 8.8.8.8 ) appears to be working. What do I need to fix in my files/conf in order to solve this issue? What do I need to check? Ping command output $ ping 8.8.8.8 PING 8.8.8.8 (8.8.8.8) 56(84) bytes of data. 64 bytes from 8.8.8.8: icmp_seq=1 ttl=46 time=73.4 ms 64 bytes from 8.8.8.8: icmp_seq=2 ttl=46 time=69.6 ms traceroute command output $ traceroute 8.8.8.8 traceroute to 8.8.8.8 (8.8.8.8), 30 hops max, 60 byte packets 1 * * * 2 * * * 3 * * * 4 * * * 5 * * * 6 * * * 7 * * * 8 * * * 9 * * * 10 * * * 11 * * * 12 * * * 13 * * * 14 * * * 15 * * * 16 * * * 17 * * * 18 * * * 19 * * * 20 * * * 21 * * * 22 * * * 23 * * * 24 * * * 25 * * * 26 * * * 27 * * * 28 * * * 29 * * * 30 * * *
I found that traceroute -I gave me more complete results. -I, --icmp Use ICMP ECHO for probes
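ICMP probes need raw-socket privileges, so on most systems this means root:

$ sudo traceroute -I 8.8.8.8

If ICMP is filtered too, traceroute -T (TCP SYN probes, on implementations that support it) is another common fallback.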
{ "source": [ "https://unix.stackexchange.com/questions/127002", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/-1/" ] }
127,063
The "tree" command uses nice box-drawing characters to show the tree but I want to use the output in a "code-page-neutral" context (I know that really there's always a code page, but by restricting it to the lower characters I hope to be free of worries that someone in Ulan Bator sees smiley faces, etc). For example instead of: β”œβ”€β”€ include β”‚Β Β  β”œβ”€β”€ foo β”‚Β Β  └── bar I'd like something like: +-- include | +-- foo | \-- bar but none of the "tree" switch combinations I tried gave this (seems more as if they take the box-drawing chars as the baseline and make it yet prettier) I also looked for box-drawing filters to perform such conversions without finding anything beyond an infinite amount of ASCII art :-). A generic filter smells like something to be cooked-up in 15 mins - plus two more incremental days stumbling into all the amusing corner cases :-)
I'm not sure about this but I think all you need is tree | sed 's/├/\+/g; s/─/-/g; s/└/\\/g; s/│/|/g' which maps each of the box-drawing characters (├ ─ └ │) to an ASCII counterpart. For example: $ tree . ├── file0 └── foo ├── bar │   └── file2 └── file1 2 directories, 3 files $ tree | sed 's/├/\+/g; s/─/-/g; s/└/\\/g; s/│/|/g' . +-- file0 \-- foo +-- bar |   \-- file2 \-- file1 2 directories, 3 files Alternatively, you can use the --charset option: $ tree --charset=ascii . |-- file0 `-- foo |-- bar | `-- file2 `-- file1 2 directories, 3 files
{ "source": [ "https://unix.stackexchange.com/questions/127063", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/60145/" ] }
127,076
I want to install a program in Linux and run it as a daemon. (Team Speak 3 in this case, but the question is general in nature). There is no package provided, only tarred binaries. Where in the directory structure should I put such a program by convention? On the web I found that /opt is for "optional addon apps", while /usr is for "user programs". I found one tutorial suggesting /opt while another suggested /usr . So which one is "more correct"?
The "more correct" depends on your distribution. You should check your distribution's guidelines on where to put software that isn't managed by the package manager (often /usr/local ) OR on how to create your own package for it. As you said, TeamSpeak just put everything in one folder (and may not be easy to reorganise), yes /opt/ is probably best. (But, for instance, in Arch Linux, the package manager can install there, so I'd still make a PKGBUILD to install in /opt .) Also distributions usually try to follow the Filesystem Hierarchy Standard , so this is where to look for more generic conventions.
{ "source": [ "https://unix.stackexchange.com/questions/127076", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/20908/" ] }
127,169
I usually assumed that tar was a compression utility, but I am unsure, does it actually compress files, or is it just like an ISO file, a file to hold files?
Tar is an archiving tool (Tape ARchive); it only collects files and their metadata together and produces one file. If you want to compress that file you can then use gzip/bzip2/xz. For convenience, tar provides arguments to compress the archive automatically for you. Check out the tar man page for more details.
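In other words, these two invocations produce equivalent results:

$ tar cf backup.tar mydir/ && gzip backup.tar   # archive first, compress separately -> backup.tar.gz
$ tar czf backup.tar.gz mydir/                  # the z flag makes tar run gzip for you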
{ "source": [ "https://unix.stackexchange.com/questions/127169", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/61161/" ] }
127,235
The default journal mode for Ext4 is data=ordered , which, per the documentation, means that "All data are forced directly out to the main file system prior to its metadata being committed to the journal." However, there is also the data=journal option, which means that "All data are committed into the journal prior to being written into the main file system. Enabling this mode will disable delayed allocation and O_DIRECT support." My understanding of this is that the data=journal mode will journal all data as well as metadata, which, on the face of it, appears to mean that this is the safest option in terms of data integrity and reliability, though maybe not so much for performance. Should I go with this option if reliability is of the utmost concern, but performance much less so? Are there any caveats to using this option? For background, the system in question is on a UPS and write caching is disabled on the drives.
Yes, data=journal is the safest way of writing data to disk. Since all data and metadata are written to the journal before being written to disk, you can always replay interrupted I/O jobs in the case of a crash. It also disables the delayed allocation feature, which may lead to data loss . The 3 modes are presented in order of safety in the manual : data=journal data=ordered data=writeback There's also another option which may interest you: commit=nrsec : Ext4 can be told to sync all its data and metadata every 'nrsec' seconds. The default value is 5 seconds. The only known caveat is that it can become terribly slow. You can reduce the performance impact by disabling the access time update with the noatime option.
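As a sketch, switching a non-root filesystem over via /etc/fstab (UUID illustrative):

UUID=xxxxxxxx-xxxx  /data  ext4  data=journal,noatime,commit=15  0  2

Note that the data= mode cannot be changed on a remount, so for the root filesystem it generally has to be passed on the kernel command line instead, e.g. rootflags=data=journal .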
{ "source": [ "https://unix.stackexchange.com/questions/127235", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/45219/" ] }
127,334
According to: 3.2.5.3 Grouping Commands {} { list; } Placing a list of commands between curly braces causes the list to be executed in the current shell context. No subshell is created . Using ps to see this in action This is the process hierarchy for a process pipeline executed directly on the command line. 4398 is the PID for the login shell: sleep 2 | ps -H; PID TTY TIME CMD 4398 pts/23 00:00:00 bash 29696 pts/23 00:00:00 sleep 29697 pts/23 00:00:00 ps Now follows the process hierarchy for a process pipeline between curly braces executed directly on the command line. 4398 is the PID for the login shell. It's similar to the hierarchy above, proving that everything is executed in the current shell context: { sleep 2 | ps -H; } PID TTY TIME CMD 4398 pts/23 00:00:00 bash 29588 pts/23 00:00:00 sleep 29589 pts/23 00:00:00 ps Now, this is the process hierarchy when the sleep in the pipeline is itself placed inside curly braces (so two levels of braces in all): { { sleep 2; } | ps -H; } PID TTY TIME CMD 4398 pts/23 00:00:00 bash 29869 pts/23 00:00:00 bash 29871 pts/23 00:00:00 sleep 29870 pts/23 00:00:00 ps Why does bash have to create a subshell to run sleep in the 3rd case when the documentation states that commands between curly braces are executed in the current shell context?
In a pipeline, all commands run concurrently (with their stdout/stdin connected by pipes) so in different processes. In cmd1 | cmd2 | cmd3 All three commands run in different processes, so at least two of them have to run in a child process. Some shells run one of them in the current shell process (if builtin like read or if the pipeline is the last command of the script), but bash runs them all in their own separate process (except with the lastpipe option in recent bash versions and under some specific conditions). {...} groups commands. If that group is part of a pipeline, it has to run in a separate process just like a simple command. In: { a; b "$?"; } | c We need a shell to evaluate that a; b "$?" is a separate process, so we need a subshell. The shell could optimise by not forking for b since it's the last command to be run in that group. Some shells do it, but apparently not bash .
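You can watch the extra process appear from within bash itself via $BASHPID (PID values illustrative):

$ echo "$BASHPID"
4398
$ { echo "$BASHPID"; } | cat
29900

The braced group reports a different PID because bash forked a subshell to run it as part of the pipeline.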
{ "source": [ "https://unix.stackexchange.com/questions/127334", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/21233/" ] }
127,335
I have a problem with themes in Openbox: I checked every theme and none of them changed the background or the style of the windows! Openbox only changes the titlebars and the right-click menus. What is happening?
{ "source": [ "https://unix.stackexchange.com/questions/127335", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/66568/" ] }
127,352
I need to make periodic backups of a directory on a remote server which is a virtual machine hosted by a research organisation. They mandate that access to VMs is through ssh keys, which is all good, except that I can't figure out how to point rsync to the ssh key for this server. Rsync has no problem if the key file is ~/.ssh/id_rsa , but when it is something else I get Permission denied (publickey) . With ssh I can specify the identity file with -i , but rsync appears to have no such option. I have also tried temporarily moving the key on the local machine to ~/.ssh/id_rsa , but that similarly does not work. tl;dr Can you specify an identity file with rsync?
You can specify the exact ssh command via the '-e' option: rsync -Pav -e "ssh -i $HOME/.ssh/somekey" username@hostname:/from/dir/ /to/dir/ Many ssh users are unfamiliar with their ~/.ssh/config file. You can specify default settings per host via the config file. Host hostname User username IdentityFile ~/.ssh/somekey In the long run it is best to learn the ~/.ssh/config file.
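Once the Host block is in place, the key no longer has to appear on the command line at all:

$ rsync -Pav hostname:/from/dir/ /to/dir/

rsync invokes ssh underneath, and ssh picks up the IdentityFile from ~/.ssh/config .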
{ "source": [ "https://unix.stackexchange.com/questions/127352", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/61344/" ] }
127,432
I've configured an ubuntu server with openssh in order to connect to it and execute commands from a remote system like a phone or a laptop. The problem is... I'm probably not the only one. Is there a way to know all the login attempts that have been made to the server?
On Ubuntu servers, you can find who logged in when (and from where) in the file /var/log/auth.log . There, you find entries like: May 1 16:17:02 owl CRON[9019]: pam_unix(cron:session): session closed for user root May 1 16:17:43 owl sshd[9024]: Accepted publickey for root from 192.168.0.101 port 37384 ssh2 May 1 16:17:43 owl sshd[9024]: pam_unix(sshd:session): session opened for user root by (uid=0)
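Some quick filters you might use on it (the exact message format can vary a little between releases):

$ grep 'sshd.*Failed' /var/log/auth.log       # failed login attempts
$ grep 'sshd.*Accepted' /var/log/auth.log     # successful logins
$ zgrep 'sshd.*Failed' /var/log/auth.log.*    # include rotated logs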
{ "source": [ "https://unix.stackexchange.com/questions/127432", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/65903/" ] }
127,443
My laptop has a touchscreen but I do not use this. How do I disable this functionality? I use Arch Linux. I figured I could try removing the related driver. According to this page the possible drivers are all named xf86-input* . However, it looks like I have nothing like that installed: # pacman -Qs xf86-input local/xf86-input-evdev 2.8.3-1 (xorg-drivers xorg) X.org evdev input driver local/xf86-input-joystick 1.6.2-3 (xorg-drivers xorg) X.Org Joystick input driver local/xf86-input-keyboard 1.8.0-2 (xorg-drivers xorg) X.Org keyboard input driver local/xf86-input-mouse 1.9.0-2 (xorg-drivers xorg) X.org mouse input driver local/xf86-input-synaptics 1.7.5-1 (xorg-drivers xorg) Synaptics driver for notebook touchpads local/xf86-input-vmmouse 13.0.0-3 (xorg-drivers xorg) X.org VMWare Mouse input driver local/xf86-input-void 1.4.0-6 (xorg-drivers xorg) X.org void input driver Any idea how I can track down the responsible driver or in some other way disable the touch screen functionality?
Besides uninstalling the appropriate drivers (which might fail to work since some devices act as usual mouse devices and only need specific drivers for more sophisticated features, and your list of installed drivers suggests this) you can also disable the device via the xinput tool or by explicitly matching in xorg.conf . To disable the device using xinput , you'll have to determine the device's XInput id: $ xinput ⎡ Virtual core pointer id=2 [master pointer (3)] ⎜ ↳ Virtual core XTEST pointer id=4 [slave pointer (2)] ⎜ ↳ SynPS/2 Synaptics TouchPad id=10 [slave pointer (2)] ⎜ ↳ TPPS/2 IBM TrackPoint id=11 [slave pointer (2)] ⎜ ↳ My annoying touchscreen id=14 [slave pointer (2)] ⎣ Virtual core keyboard id=3 [master keyboard (2)] ↳ Virtual core XTEST keyboard id=5 [slave keyboard (3)] ↳ Power Button id=6 [slave keyboard (3)] ↳ Video Bus id=7 [slave keyboard (3)] ↳ Sleep Button id=8 [slave keyboard (3)] ↳ AT Translated Set 2 keyboard id=9 [slave keyboard (3)] ↳ ThinkPad Extra Buttons id=12 [slave keyboard (3)] ↳ HID 0430:0005 id=13 [slave keyboard (3)] In this example, »My annoying touchscreen« has the id 14 . So to disable it, simply type $ xinput disable 14 To disable it via xorg.conf , you simply create a file under the /etc/X11/xorg.conf.d directory, for example 99-no-touchscreen.conf with the following content: Section "InputClass" Identifier "Touchscreen catchall" MatchIsTouchscreen "on" Option "Ignore" "on" EndSection This would ignore all touchscreen devices. In case you have more than one and want to use one or several of them, you could specify the match more exactly with one of the other Match directives. See the xorg.conf manpage for more details on this (simply search for »Match« and you should find what you're looking for).
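Since XInput ids are not stable across reboots or hotplugs, disabling by device name is often more robust (name illustrative):

$ xinput disable 'My annoying touchscreen'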
{ "source": [ "https://unix.stackexchange.com/questions/127443", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/16766/" ] }
127,538
(I've already noticed that this question was asked before but I think it has not been answered in a way I'd like to understand the topic.) What are the conceptual and structural differences between a Linux Kernel BSD Kernel (let's say FreeBSD) While at the end of the day they are both kernels - I would assume that there might be differences in structure, functionality and specialization. In which scenarios would one kind of kernel have an advantage over the other? (Web Server, Database, Computing, etc.) Are there any joint efforts to concentrate forces for one common kernel or certain modules or would that be pointless anyway? PS: Different license types or shipping/packaging/maintaining approaches are not of importance in this context. I'm really interested in understanding how they differ on structural, functional/feature level and specialization.
1. What are the conceptual and structural differences between a Linux kernel and a BSD kernel? Regarding architecture and internal structures, there are of course differences in how things are done (e.g. lvm vs geom , the early and complex jail feature of FreeBSD, ...), but overall there are not that many differences between the two: BSD kernels and the Linux kernel have both evolved from a purely monolithic approach to something hybrid/modular. Still, there are fundamental differences in their approach and history: BSD kernels use the BSD licence while the Linux kernel uses the GPL . BSD kernels are not stand-alone kernels but are developed as part of a whole . Of course, this is merely a philosophical point of view and not a technical one, but it gives the system coherence . BSD kernels are developed with a more conservative point of view and more concern about staying consistent with their approach than about having fancy features. The Linux kernel is more about drivers, features, ... (the more the better). As greatly stated somewhere else : it is Intelligent Design and Order (BSD*) versus Natural Selection and Chaos (GNU/Linux). 2. In which scenarios would one kind of kernel have an advantage over the other? Regarding their overall structure and concept, comparing an almost vanilla Linux kernel and a FreeBSD kernel, they are more or less at the same general-usage level , that is with no particular specialization (not real-time, not highly parallel, not game-oriented, not embedded, ...). Of course there are a few differences here and there, such as native ZFS support and the geom architecture for FreeBSD versus the many drivers and various file systems for Linux. But nothing that general software such as web servers or databases would really use to make a real difference. Comparisons in these cases would most likely end in some tuning battle between the two, nothing major. But some would argue that OpenBSD has a deep and consistent approach to security, while hardened Linux distributions are "just" modified versions of the vanilla Linux kernel. This might be true for such heavily specialized systems, just as SteamOS would be the obvious pick for gaming. 3. Are there any joint efforts to concentrate forces on one common kernel or certain modules? There is no joint effort to concentrate forces on one common kernel, as there are major licensing, philosophical or approach issues. Where real common efforts exist, such as OpenZFS , most of the time it is more about drivers and concepts taken from or inspired by one another.
{ "source": [ "https://unix.stackexchange.com/questions/127538", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/63578/" ] }
127,571
My SSH sessions seem to be dropping, though that is not my major problem - the major problem is that my previous sessions are still alive; worse yet, one of them is running visudo, preventing me from accessing it! who shows a number of sessions, all of which except my current one I know have disconnected. How can I terminate the old ones so they release their resources?
To solve the immediate problem, that the sudoers file is locked, you can simply delete the lock file. It will usually be /etc/sudoers.tmp ; check the man page for visudo to verify. If you delete the lock file, you can run visudo again. To delete all sessions which are still left hanging, first find out the PID of your own current session. Then, if your own PID is 12345, do ps -ef | grep sshd | grep -v -e grep -e root -e 12345 | awk '{print "sudo kill -9", $2}' | sh You may want to do it without the final | sh first just to check the PIDs you're planning on killing. If you're on Linux, you can instead use pkill -o -u $USER sshd to kill your oldest SSH session. Continue doing that until your current session is the only one left. You might also want to set ServerAliveInterval 15 in your .ssh/config to send a keepalive message every 15 seconds when no data has been sent. See man ssh_config for more information.
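One way to find the PID to exclude: the sshd process handling your current session is normally the parent of your login shell (numbers illustrative):

$ echo $$              # your shell's PID
12345
$ ps -o ppid= -p $$    # its parent, i.e. this session's sshd
12300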
{ "source": [ "https://unix.stackexchange.com/questions/127571", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/66712/" ] }
127,602
As I understand, the default stack size for a pthread on Linux is 16K. I am getting strange results on my 64-bit Ubuntu install. $ ulimit -s 8192 Also: pthread_attr_init(&attr); pthread_attr_getstacksize(&attr, &stacksize); printf("Thread stack size = %d bytes \n", stacksize); Prints Thread stack size = 8388608 bytes I'm quite sure the stack size is not "8388608". What could be wrong?
int pthread_attr_setstacksize(pthread_attr_t *attr, size_t stacksize); The stacksize attribute shall define the minimum stack size (in bytes) allocated for the created thread's stack. In your example, the stack size is set to 8388608 bytes, which corresponds to 8 MB, as returned by the command ulimit -s . So that matches. From the pthread_create() description: On Linux/x86-32 , the default stack size for a new thread is 2 megabytes . Under the NPTL threading implementation, if the RLIMIT_STACK soft resource limit at the time the program started has any value other than "unlimited", then it determines the default stack size of new threads. Using pthread_attr_setstacksize (3), the stack size attribute can be explicitly set in the attr argument used to create a thread, in order to obtain a stack size other than the default. So the thread stack size can be set either via the set function above, or the ulimit system property. For the 16k you're referring to, it's not clear on which platform you've seen that and/or if any system limit was set for this. See the pthread_create page and here for some interesting examples on this.
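To illustrate the attr route, a minimal C sketch that requests a non-default stack size (compile with gcc -pthread; error handling trimmed):

#include <pthread.h>
#include <stdio.h>

static void *worker(void *arg)
{
    (void)arg;
    puts("hello from the thread");
    return NULL;
}

int main(void)
{
    pthread_t tid;
    pthread_attr_t attr;
    size_t sz;

    pthread_attr_init(&attr);
    pthread_attr_setstacksize(&attr, 1024 * 1024);  /* ask for a 1 MiB stack */
    pthread_attr_getstacksize(&attr, &sz);
    printf("thread stack size = %zu bytes\n", sz);  /* prints 1048576 */

    pthread_create(&tid, &attr, worker, NULL);
    pthread_join(tid, NULL);
    pthread_attr_destroy(&attr);
    return 0;
}

The requested size must be at least PTHREAD_STACK_MIN, otherwise pthread_attr_setstacksize returns EINVAL.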
{ "source": [ "https://unix.stackexchange.com/questions/127602", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/29121/" ] }
127,610
Some of the servers I use traceroute on, are more than 30 hops away. How do I make traceroute trace beyond 30 hops?
From man 1 traceroute : -m max_ttl Specifies the maximum number of hops (max time-to-live value) traceroute will probe. The default is 30.
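So, for example, to probe up to 60 hops:

$ traceroute -m 60 example.com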
{ "source": [ "https://unix.stackexchange.com/questions/127610", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/64494/" ] }
127,712
If I use mv to move a folder called "folder" to a directory that already contains "folder" will they merge or will it be replaced?
mv cannot merge or overwrite directories, it will fail with the message "mv: cannot move 'a' to 'b': Directory not empty" , even when you're using the --force option. You can work around this using other tools (like rsync , find , or even cp ), but you need to carefully consider the implications: rsync can merge the contents of one directory into another (ideally with the --remove-source-files 1 option to safely delete only those source files that were transferred successfully, and with the usual permission/ownership/time preservation option -a if you wish) … but this is a full copy operation, and can therefore be very disk-intensive. You can use find to sequentially recreate the source directory structure at the target, then individually move the actual files … but this has to recurse through the source multiple times and can encounter race conditions (new directories being created at the source during the multi-step process) cp can create hard links (simply put, additional pointers to the same existing file), which creates a result very similar to a merging mv (and is very IO-efficient since only pointers are created and no actual data has to be copied) … but this again suffers from a possible race condition (new files at the source being deleted even though they weren't copied in the previous step) You can combine rsync 's --link-dest=DIR option (to create hardlinks instead of copying file contents, where possible) and --remove-source-files to get a semantic very similar to a regular mv . For this, --link-dest needs to be given an absolute path to the source directory (or a relative path from the destination to the source ). … but this is using --link-dest in an unintended way (which may or may not cause complications), requires knowing (or determining) the absolute path to the source (as an argument to --link-dest ), and again leaves an empty directory structure to be cleaned up as per 1 . (Note: This won't work anymore as of rsync version 3.2.6 ) Which of these workarounds (if any) is appropriate will very much depend on your specific use case. As always, think before you execute any of these commands, and have backups. 1: Note that rsync --remove-source-files won't delete any directories, so you will have to do something like find -depth -type d -empty -delete afterwards to get rid of the empty source directory tree.
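Putting the rsync variant together as a sketch (paths illustrative):

$ rsync -a --remove-source-files folder/ destination/folder/
$ find folder -depth -type d -empty -delete    # remove the now-empty source tree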
{ "source": [ "https://unix.stackexchange.com/questions/127712", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/56196/" ] }
127,723
With ifconfig , I am able to configure multiple IP addresses on a single network interface card . Why would I want to do that and how would I later utilize several addresses, e.g. how does software know which one to use? I have only used network interfaces with a single IP address so far.
Some (but not all) reasons: In order to host multiple SSL sites as already mentioned Because you may be consolidating services from multiple hosts and you need to preserve the addresses In order to use an IP address that can later be transferred to another host To compensate for a host that's down at that moment by adding its IP address to another one If you have multiple IP networks on the same physical/logical network/vlan it will prevent traffic from being exchanged via the gateway, speeding things up and reducing the load In order to setup a device that has a default IP address and thus you need to add an address on the same network In order to use different public IP addresses to avoid firewalls or to avoid being blacklisted in SPAM filters In order to make things less obvious to external people. E.g. you may be running apache on IP address 1.2.3.4 and only allow SSH on 1.2.3.5. That way if someone attempts to attack the IP address behind a site they won't find SSH running. In order to run the same service multiple times In order to use different hostnames in reverse DNS lookups. E.g. if you're connecting from this host to something external and you want to be presented as two different domain/hostnames In order not to expose commonality between services. E.g. if you host site1.example.com and site2.example.org and you map them on different IPs instead of using CNAMEs there won't be an obvious link between them
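For reference, adding and inspecting a secondary address with iproute2 looks like this (addresses illustrative); unless a program explicitly bind()s one of the addresses, the kernel picks the source IP based on the route used:

$ sudo ip addr add 192.0.2.5/24 dev eth0
$ ip -4 addr show dev eth0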
{ "source": [ "https://unix.stackexchange.com/questions/127723", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/64494/" ] }
127,818
I want to find all instances of "index" not followed by .php in a log using less . /index(?!\.php) does not work. Is this possible? What is the regex for less and vim (do they differ?). Is this not possible with these application's respective regex libraries?
In vim , you can do it like this: /index\(\.php\)\@! For more details, in command mode, try :h \@ : \@! Matches with zero width if the preceding atom does NOT match at the current position. /zero-width {not in Vi} Like '(?!pattern)' in Perl. Example matches foo\(bar\)\@! any "foo" not followed by "bar" a.\{-}p\@! "a", "ap", "aap", "app", etc. not immediately followed by a "p" if \(\(then\)\@!.\)*$ "if " not followed by "then"
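less itself typically uses the system regex library (POSIX ERE), which has no lookahead, so a common workaround is to pre-filter with a PCRE-capable grep (GNU grep built with -P support) before paging:

$ grep -P 'index(?!\.php)' access.log | less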
{ "source": [ "https://unix.stackexchange.com/questions/127818", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/43342/" ] }
127,829
The Linux README states that: Linux has also been ported to itself. You can now run the kernel as a userspace application - this is called UserMode Linux (UML). Why would someone want to do this?
UML is very fast for development and much easier to debug. If, for example, you use KVM, then you need to set up an environment that boots from the network or keep copying new kernels into the VM. With UML you just run the new kernel. At one point I was testing some networking code in the kernel, which means you get very, very frequent kernel panics and other issues. Debugging this with UML is very easy. Additionally, UML runs in places where there's no hardware-assisted virtualization, so it was used even more before KVM became commonplace.
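Running a UML kernel really is just executing a binary; a typical build-and-run cycle looks something like this (the root filesystem image name is illustrative):

$ make ARCH=um -j4                   # produces a ./linux userspace executable
$ ./linux ubd0=rootfs.img mem=256M   # boot it with a disk image and 256M of RAM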
{ "source": [ "https://unix.stackexchange.com/questions/127829", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/55517/" ] }
127,886
When I type service sshd restart I get an sshd: unrecognized service error. I do have, in /etc/ssh/ , a file sshd_config that I use to set the configuration. I can also PuTTY into the Ubuntu box (it is remote). When I type /etc/init.d/sshd restart I get No such file or directory . Under /usr/sbin/ there is an sshd file, but it is a binary. Is something wrong with my sshd? What do I do to fix this? To be clear, I want to be able to type service sshd restart (like all the online tutorials say) to be able to, well, restart my sshd, so that my port changes take effect.
Ubuntu calls the service ssh , not sshd . service ssh restart The service is also controlled by upstart, and not sysvinit. So you'll find it at /etc/init/ssh.conf instead of /etc/init.d/ssh .
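Since the job is managed by upstart on this release, these are equivalent ways to restart it:

$ sudo service ssh restart
$ sudo restart ssh        # upstart shorthand for initctl restart ssh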
{ "source": [ "https://unix.stackexchange.com/questions/127886", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/66854/" ] }
128,046
When I run the chroot command an error is given: failed to run command ‘/bin/bash’: No such file or directory
This error means that there is no /bin/bash file inside the chroot . Make sure you point it to where the bash (or other shell's) executable is inside the chroot directory. If you have /mnt/somedir/usr/bin/bash then execute chroot /mnt/somedir /usr/bin/bash . Apart from the above, you also need to add the shell's library dependencies (libc etc.), as mentioned in the answer here .
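A sketch for populating the chroot with the shell and the libraries it links against (the $CHROOT variable and the quick-and-dirty ldd parsing are illustrative):

$ CHROOT=/mnt/somedir
$ cp --parents /bin/bash "$CHROOT"
$ for lib in $(ldd /bin/bash | grep -o '/[^ ]*'); do cp --parents "$lib" "$CHROOT"; done

After that, chroot "$CHROOT" /bin/bash should start.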
{ "source": [ "https://unix.stackexchange.com/questions/128046", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/66865/" ] }
128,190
I'm trying to start with tmux but am failing to even run it. Immediately after starting it exits, similar to this question . It happens both without a .tmux.conf , and (following some examples) with this .tmux.conf : set -g default-shell /usr/bin/zsh set -g status on set -g history-limit 10000000 set -g prefix C-t set -g status-bg green setw -g window-status-current-bg cyan setw -g window-status-current-attr bold set -g status-right '#7H | %F %s' bind-key C-t last-window setw -g monitor-activity on set -g visual-activity on Adding new-session in the beginning of the .tmux.conf , as suggested some places doesn't help, and this happens when I use tmux new $SHELL also (again suggested to solve this somewhere). I am using: tmux 1.9a Ubuntu 14.04 zsh 5.0.2 My tmux-server.log file shows this after tmux -v : server started, pid 19654 socket path /tmp/tmux-1000/default new client 7 loading /home/alon/.tmux.conf /home/alon/.tmux.conf: new-session /home/alon/.tmux.conf: set -g default-shell /usr/bin/zsh /home/alon/.tmux.conf: set -g status on /home/alon/.tmux.conf: set -g history-limit 10000000 /home/alon/.tmux.conf: set -g prefix C-t /home/alon/.tmux.conf: set -g status-bg green /home/alon/.tmux.conf: setw -g window-status-current-bg cyan /home/alon/.tmux.conf: setw -g window-status-current-attr bold /home/alon/.tmux.conf: /home/alon/.tmux.conf: set -g status-right '#7H | %F %s' /home/alon/.tmux.conf: /home/alon/.tmux.conf: bind-key C-t last-window /home/alon/.tmux.conf: /home/alon/.tmux.conf: setw -g monitor-activity on /home/alon/.tmux.conf: set -g visual-activity on cmdq 0x7f75d1784b50: new-session (client -1) spawn: /usr/bin/zsh -- session 0 created cmdq 0x7f75d1784b50: set-option -g default-shell /usr/bin/zsh (client -1) cmdq 0x7f75d1784b50: set-option -g status on (client -1) cmdq 0x7f75d1784b50: set-option -g history-limit 10000000 (client -1) cmdq 0x7f75d1784b50: set-option -g prefix C-t (client -1) cmdq 0x7f75d1784b50: set-option -g status-bg green (client -1) cmdq 0x7f75d1784b50: set-window-option -g window-status-current-bg cyan (client -1) cmdq 0x7f75d1784b50: set-window-option -g window-status-current-attr bold (client -1) cmdq 0x7f75d1784b50: set-option -g status-right "#7H | %F %s" (client -1) cmdq 0x7f75d1784b50: bind-key C-t last-window (client -1) cmdq 0x7f75d1784b50: set-window-option -g monitor-activity on (client -1) cmdq 0x7f75d1784b50: set-option -g visual-activity on (client -1) got 100 from client 7 got 101 from client 7 got 102 from client 7 got 103 from client 7 got 104 from client 7 got 105 from client 7 got 105 from client 7 got 105 from client 7 got 105 from client 7 got 105 from client 7 got 105 from client 7 got 105 from client 7 got 105 from client 7 got 105 from client 7 got 105 from client 7 got 105 from client 7 got 105 from client 7 got 105 from client 7 got 105 from client 7 got 105 from client 7 got 105 from client 7 got 105 from client 7 got 105 from client 7 got 105 from client 7 got 105 from client 7 got 105 from client 7 got 105 from client 7 got 105 from client 7 got 105 from client 7 got 105 from client 7 got 105 from client 7 got 105 from client 7 got 105 from client 7 got 105 from client 7 got 105 from client 7 got 105 from client 7 got 105 from client 7 got 105 from client 7 got 105 from client 7 got 105 from client 7 got 105 from client 7 got 105 from client 7 got 105 from client 7 got 105 from client 7 got 105 from client 7 got 105 from client 7 got 105 from client 7 got 105 from client 7 got 105 from client 7 got 105 from client 7 got 105 from 
client 7 got 105 from client 7 got 105 from client 7 got 105 from client 7 got 105 from client 7 got 105 from client 7 got 105 from client 7 got 105 from client 7 got 105 from client 7 got 105 from client 7 got 105 from client 7 got 105 from client 7 got 105 from client 7 got 105 from client 7 got 105 from client 7 got 105 from client 7 got 105 from client 7 got 105 from client 7 got 105 from client 7 got 105 from client 7 got 105 from client 7 got 105 from client 7 got 105 from client 7 got 105 from client 7 got 105 from client 7 got 105 from client 7 got 105 from client 7 got 105 from client 7 got 105 from client 7 got 105 from client 7 got 105 from client 7 got 105 from client 7 got 105 from client 7 got 105 from client 7 got 105 from client 7 got 105 from client 7 got 105 from client 7 got 105 from client 7 got 105 from client 7 got 105 from client 7 got 105 from client 7 got 105 from client 7 got 105 from client 7 got 105 from client 7 got 105 from client 7 got 105 from client 7 got 105 from client 7 got 105 from client 7 got 106 from client 7 got 200 from client 7 cmdq 0x7f75d176b260: new-session (client 7) new term: xterm xterm override: XT xterm override: Ms ]52;%p1%s;%p2%s xterm override: Cs ]12;%p1%s xterm override: Cr ]112 xterm override: Ss [%p1%d q xterm override: Se [2 q new key Oo: 0x1021 (KP/) new key Oj: 0x1022 (KP*) new key Om: 0x1023 (KP-) new key Ow: 0x1024 (KP7) new key Ox: 0x1025 (KP8) new key Oy: 0x1026 (KP9) new key Ok: 0x1027 (KP+) new key Ot: 0x1028 (KP4) new key Ou: 0x1029 (KP5) new key Ov: 0x102a (KP6) new key Oq: 0x102b (KP1) new key Or: 0x102c (KP2) new key Os: 0x102d (KP3) new key OM: 0x102e (KPEnter) new key Op: 0x102f (KP0) new key On: 0x1030 (KP.) new key OA: 0x101d (Up) new key OB: 0x101e (Down) new key OC: 0x1020 (Right) new key OD: 0x101f (Left) new key [A: 0x101d (Up) new key [B: 0x101e (Down) new key [C: 0x1020 (Right) new key [D: 0x101f (Left) new key OH: 0x1018 (Home) new key OF: 0x1019 (End) new key [H: 0x1018 (Home) new key [F: 0x1019 (End) new key Oa: 0x501d (C-Up) new key Ob: 0x501e (C-Down) new key Oc: 0x5020 (C-Right) new key Od: 0x501f (C-Left) new key [a: 0x901d (S-Up) new key [b: 0x901e (S-Down) new key [c: 0x9020 (S-Right) new key [d: 0x901f (S-Left) new key [11^: 0x5002 (C-F1) new key [12^: 0x5003 (C-F2) new key [13^: 0x5004 (C-F3) new key [14^: 0x5005 (C-F4) new key [15^: 0x5006 (C-F5) new key [17^: 0x5007 (C-F6) new key [18^: 0x5008 (C-F7) new key [19^: 0x5009 (C-F8) new key [20^: 0x500a (C-F9) new key [21^: 0x500b (C-F10) new key [23^: 0x500c (C-F11) new key [24^: 0x500d (C-F12) new key [25^: 0x500e (C-F13) new key [26^: 0x500f (C-F14) new key [28^: 0x5010 (C-F15) new key [29^: 0x5011 (C-F16) new key [31^: 0x5012 (C-F17) new key [32^: 0x5013 (C-F18) new key [33^: 0x5014 (C-F19) new key [34^: 0x5015 (C-F20) new key [2^: 0x5016 (C-IC) new key [3^: 0x5017 (C-DC) new key [7^: 0x5018 (C-Home) new key [8^: 0x5019 (C-End) new key [6^: 0x501a (C-NPage) new key [5^: 0x501b (C-PPage) new key [11$: 0x9002 (S-F1) new key [12$: 0x9003 (S-F2) new key [13$: 0x9004 (S-F3) new key [14$: 0x9005 (S-F4) new key [15$: 0x9006 (S-F5) new key [17$: 0x9007 (S-F6) new key [18$: 0x9008 (S-F7) new key [19$: 0x9009 (S-F8) new key [20$: 0x900a (S-F9) new key [21$: 0x900b (S-F10) new key [23$: 0x900c (S-F11) new key [24$: 0x900d (S-F12) new key [25$: 0x900e (S-F13) new key [26$: 0x900f (S-F14) new key [28$: 0x9010 (S-F15) new key [29$: 0x9011 (S-F16) new key [31$: 0x9012 (S-F17) new key [32$: 0x9013 (S-F18) new key [33$: 0x9014 (S-F19) new key [34$: 0x9015 (S-F20) 
new key [2$: 0x9016 (S-IC) new key [3$: 0x9017 (S-DC) new key [7$: 0x9018 (S-Home) new key [8$: 0x9019 (S-End) new key [6$: 0x901a (S-NPage) new key [5$: 0x901b (S-PPage) new key [11@: 0xd002 (C-S-F1) new key [12@: 0xd003 (C-S-F2) new key [13@: 0xd004 (C-S-F3) new key [14@: 0xd005 (C-S-F4) new key [15@: 0xd006 (C-S-F5) new key [17@: 0xd007 (C-S-F6) new key [18@: 0xd008 (C-S-F7) new key [19@: 0xd009 (C-S-F8) new key [20@: 0xd00a (C-S-F9) new key [21@: 0xd00b (C-S-F10) new key [23@: 0xd00c (C-S-F11) new key [24@: 0xd00d (C-S-F12) new key [25@: 0xd00e (C-S-F13) new key [26@: 0xd00f (C-S-F14) new key [28@: 0xd010 (C-S-F15) new key [29@: 0xd011 (C-S-F16) new key [31@: 0xd012 (C-S-F17) new key [32@: 0xd013 (C-S-F18) new key [33@: 0xd014 (C-S-F19) new key [34@: 0xd015 (C-S-F20) new key [2@: 0xd016 (C-S-IC) new key [3@: 0xd017 (C-S-DC) new key [7@: 0xd018 (C-S-Home) new key [8@: 0xd019 (C-S-End) new key [6@: 0xd01a (C-S-NPage) new key [5@: 0xd01b (C-S-PPage) new key [I: 0x1031 ((null)) new key [O: 0x1032 ((null)) new key OP: 0x1002 (F1) new key OQ: 0x1003 (F2) new key OR: 0x1004 (F3) new key OS: 0x1005 (F4) new key [15~: 0x1006 (F5) new key [17~: 0x1007 (F6) new key [18~: 0x1008 (F7) new key [19~: 0x1009 (F8) new key [20~: 0x100a (F9) new key [21~: 0x100b (F10) new key [23~: 0x100c (F11) new key [24~: 0x100d (F12) new key [1;2P: 0x100e (F13) new key [1;2Q: 0x100f (F14) new key [1;2R: 0x1010 (F15) new key [1;2S: 0x1011 (F16) new key [15;2~: 0x1012 (F17) new key [17;2~: 0x1013 (F18) new key [18;2~: 0x1014 (F19) new key [19;2~: 0x1015 (F20) new key [2~: 0x1016 (IC) new key [3~: 0x1017 (DC) replacing key OH: 0x1018 (Home) replacing key OF: 0x1019 (End) new key [6~: 0x101a (NPage) new key [5~: 0x101b (PPage) new key [Z: 0x101c (BTab) replacing key OA: 0x101d (Up) replacing key OB: 0x101e (Down) replacing key OD: 0x101f (Left) replacing key OC: 0x1020 (Right) new key [3;2~: 0x9017 (S-DC) new key [3;3~: 0x3017 (M-DC) new key [3;4~: 0xb017 (M-S-DC) new key [3;5~: 0x5017 (C-DC) new key [3;6~: 0xd017 (C-S-DC) new key [3;7~: 0x7017 (C-M-DC) new key [1;2B: 0x901e (S-Down) new key [1;3B: 0x301e (M-Down) new key [1;4B: 0xb01e (M-S-Down) new key [1;5B: 0x501e (C-Down) new key [1;6B: 0xd01e (C-S-Down) new key [1;7B: 0x701e (C-M-Down) new key [1;2F: 0x9019 (S-End) new key [1;3F: 0x3019 (M-End) new key [1;4F: 0xb019 (M-S-End) new key [1;5F: 0x5019 (C-End) new key [1;6F: 0xd019 (C-S-End) new key [1;7F: 0x7019 (C-M-End) new key [1;2H: 0x9018 (S-Home) new key [1;3H: 0x3018 (M-Home) new key [1;4H: 0xb018 (M-S-Home) new key [1;5H: 0x5018 (C-Home) new key [1;6H: 0xd018 (C-S-Home) new key [1;7H: 0x7018 (C-M-Home) new key [2;2~: 0x9016 (S-IC) new key [2;3~: 0x3016 (M-IC) new key [2;4~: 0xb016 (M-S-IC) new key [2;5~: 0x5016 (C-IC) new key [2;6~: 0xd016 (C-S-IC) new key [2;7~: 0x7016 (C-M-IC) new key [1;2D: 0x901f (S-Left) new key [1;3D: 0x301f (M-Left) new key [1;4D: 0xb01f (M-S-Left) new key [1;5D: 0x501f (C-Left) new key [1;6D: 0xd01f (C-S-Left) new key [1;7D: 0x701f (C-M-Left) new key [6;2~: 0x901a (S-NPage) new key [6;3~: 0x301a (M-NPage) new key [6;4~: 0xb01a (M-S-NPage) new key [6;5~: 0x501a (C-NPage) new key [6;6~: 0xd01a (C-S-NPage) new key [6;7~: 0x701a (C-M-NPage) new key [5;2~: 0x901b (S-PPage) new key [5;3~: 0x301b (M-PPage) new key [5;4~: 0xb01b (M-S-PPage) new key [5;5~: 0x501b (C-PPage) new key [5;6~: 0xd01b (C-S-PPage) new key [5;7~: 0x701b (C-M-PPage) new key [1;2C: 0x9020 (S-Right) new key [1;3C: 0x3020 (M-Right) new key [1;4C: 0xb020 (M-S-Right) new key [1;5C: 0x5020 (C-Right) new key [1;6C: 0xd020 
(C-S-Right) new key [1;7C: 0x7020 (C-M-Right) new key [1;2A: 0x901d (S-Up) new key [1;3A: 0x301d (M-Up) new key [1;4A: 0xb01d (M-S-Up) new key [1;5A: 0x501d (C-Up) new key [1;6A: 0xd01d (C-S-Up) new key [1;7A: 0x701d (C-M-Up) spawn: /usr/bin/zsh -- session 1 created writing 207 to client 7 got 208 from client 7 keys are 9 ([?62;9;c) received service class 62 complete key [?62;9;c 0xfff session 0 destroyed session 1 destroyed writing 203 to client 7 got 205 from client 7 writing 204 to client 7 lost client 7 UPDATE: Here are the results from running strace tmux : execve("/usr/bin/tmux", ["tmux"], [/* 103 vars */]) = 0 brk(0) = 0x7f86ea097000 access("/etc/ld.so.nohwcap", F_OK) = -1 ENOENT (No such file or directory) mmap(NULL, 8192, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7f86e817b000 access("/etc/ld.so.preload", R_OK) = -1 ENOENT (No such file or directory) open("/etc/ld.so.cache", O_RDONLY|O_CLOEXEC) = 3 fstat(3, {st_mode=S_IFREG|0644, st_size=117788, ...}) = 0 mmap(NULL, 117788, PROT_READ, MAP_PRIVATE, 3, 0) = 0x7f86e815e000 close(3) = 0 access("/etc/ld.so.nohwcap", F_OK) = -1 ENOENT (No such file or directory) open("/lib/x86_64-linux-gnu/libutil.so.1", O_RDONLY|O_CLOEXEC) = 3 read(3, "\177ELF\2\1\1\0\0\0\0\0\0\0\0\0\3\0>\0\1\0\0\0\20\17\0\0\0\0\0\0"..., 832) = 832 fstat(3, {st_mode=S_IFREG|0644, st_size=10680, ...}) = 0 mmap(NULL, 2105624, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_DENYWRITE, 3, 0) = 0x7f86e7d58000 mprotect(0x7f86e7d5a000, 2093056, PROT_NONE) = 0 mmap(0x7f86e7f59000, 8192, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x1000) = 0x7f86e7f59000 close(3) = 0 access("/etc/ld.so.nohwcap", F_OK) = -1 ENOENT (No such file or directory) open("/lib/x86_64-linux-gnu/libtinfo.so.5", O_RDONLY|O_CLOEXEC) = 3 read(3, "\177ELF\2\1\1\0\0\0\0\0\0\0\0\0\3\0>\0\1\0\0\0\320\303\0\0\0\0\0\0"..., 832) = 832 fstat(3, {st_mode=S_IFREG|0644, st_size=167096, ...}) = 0 mmap(NULL, 2264288, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_DENYWRITE, 3, 0) = 0x7f86e7b2f000 mprotect(0x7f86e7b54000, 2093056, PROT_NONE) = 0 mmap(0x7f86e7d53000, 20480, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x24000) = 0x7f86e7d53000 close(3) = 0 access("/etc/ld.so.nohwcap", F_OK) = -1 ENOENT (No such file or directory) open("/usr/lib/x86_64-linux-gnu/libevent-2.0.so.5", O_RDONLY|O_CLOEXEC) = 3 read(3, "\177ELF\2\1\1\0\0\0\0\0\0\0\0\0\3\0>\0\1\0\0\0p\236\0\0\0\0\0\0"..., 832) = 832 fstat(3, {st_mode=S_IFREG|0644, st_size=276880, ...}) = 0 mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7f86e815d000 mmap(NULL, 2373864, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_DENYWRITE, 3, 0) = 0x7f86e78eb000 mprotect(0x7f86e792d000, 2097152, PROT_NONE) = 0 mmap(0x7f86e7b2d000, 8192, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x42000) = 0x7f86e7b2d000 close(3) = 0 access("/etc/ld.so.nohwcap", F_OK) = -1 ENOENT (No such file or directory) open("/lib/x86_64-linux-gnu/libresolv.so.2", O_RDONLY|O_CLOEXEC) = 3 read(3, "\177ELF\2\1\1\0\0\0\0\0\0\0\0\0\3\0>\0\1\0\0\0\320:\0\0\0\0\0\0"..., 832) = 832 fstat(3, {st_mode=S_IFREG|0644, st_size=101240, ...}) = 0 mmap(NULL, 2206376, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_DENYWRITE, 3, 0) = 0x7f86e76d0000 mprotect(0x7f86e76e7000, 2097152, PROT_NONE) = 0 mmap(0x7f86e78e7000, 8192, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x17000) = 0x7f86e78e7000 mmap(0x7f86e78e9000, 6824, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_ANONYMOUS, -1, 0) = 0x7f86e78e9000 close(3) = 0 access("/etc/ld.so.nohwcap", F_OK) = 
-1 ENOENT (No such file or directory) open("/lib/x86_64-linux-gnu/libc.so.6", O_RDONLY|O_CLOEXEC) = 3 read(3, "\177ELF\2\1\1\0\0\0\0\0\0\0\0\0\3\0>\0\1\0\0\0\320\37\2\0\0\0\0\0"..., 832) = 832 fstat(3, {st_mode=S_IFREG|0755, st_size=1845024, ...}) = 0 mmap(NULL, 3953344, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_DENYWRITE, 3, 0) = 0x7f86e730a000 mprotect(0x7f86e74c5000, 2097152, PROT_NONE) = 0 mmap(0x7f86e76c5000, 24576, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x1bb000) = 0x7f86e76c5000 mmap(0x7f86e76cb000, 17088, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_ANONYMOUS, -1, 0) = 0x7f86e76cb000 close(3) = 0 access("/etc/ld.so.nohwcap", F_OK) = -1 ENOENT (No such file or directory) open("/lib/x86_64-linux-gnu/libpthread.so.0", O_RDONLY|O_CLOEXEC) = 3 read(3, "\177ELF\2\1\1\0\0\0\0\0\0\0\0\0\3\0>\0\1\0\0\0po\0\0\0\0\0\0"..., 832) = 832 fstat(3, {st_mode=S_IFREG|0755, st_size=141574, ...}) = 0 mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7f86e815c000 mmap(NULL, 2217264, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_DENYWRITE, 3, 0) = 0x7f86e70ec000 mprotect(0x7f86e7105000, 2093056, PROT_NONE) = 0 mmap(0x7f86e7304000, 8192, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x18000) = 0x7f86e7304000 mmap(0x7f86e7306000, 13616, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_ANONYMOUS, -1, 0) = 0x7f86e7306000 close(3) = 0 mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7f86e815b000 mmap(NULL, 8192, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7f86e8159000 arch_prctl(ARCH_SET_FS, 0x7f86e8159740) = 0 mprotect(0x7f86e76c5000, 16384, PROT_READ) = 0 mprotect(0x7f86e7304000, 4096, PROT_READ) = 0 mprotect(0x7f86e78e7000, 4096, PROT_READ) = 0 mprotect(0x7f86e7b2d000, 4096, PROT_READ) = 0 mprotect(0x7f86e7d53000, 16384, PROT_READ) = 0 mprotect(0x7f86e7f59000, 4096, PROT_READ) = 0 mprotect(0x7f86e83ea000, 36864, PROT_READ) = 0 mprotect(0x7f86e817d000, 4096, PROT_READ) = 0 munmap(0x7f86e815e000, 117788) = 0 set_tid_address(0x7f86e8159a10) = 13671 set_robust_list(0x7f86e8159a20, 24) = 0 futex(0x7fffc947f210, FUTEX_WAIT_BITSET_PRIVATE|FUTEX_CLOCK_REALTIME, 1, NULL, 7f86e8159740) = -1 EAGAIN (Resource temporarily unavailable) rt_sigaction(SIGRTMIN, {0x7f86e70f29f0, [], SA_RESTORER|SA_SIGINFO, 0x7f86e70fc340}, NULL, 8) = 0 rt_sigaction(SIGRT_1, {0x7f86e70f2a80, [], SA_RESTORER|SA_RESTART|SA_SIGINFO, 0x7f86e70fc340}, NULL, 8) = 0 rt_sigprocmask(SIG_UNBLOCK, [RTMIN RT_1], NULL, 8) = 0 getrlimit(RLIMIT_STACK, {rlim_cur=8192*1024, rlim_max=RLIM64_INFINITY}) = 0 brk(0) = 0x7f86ea097000 brk(0x7f86ea0b8000) = 0x7f86ea0b8000 open("/usr/lib/locale/locale-archive", O_RDONLY|O_CLOEXEC) = 3 fstat(3, {st_mode=S_IFREG|0644, st_size=4427728, ...}) = 0 mmap(NULL, 4427728, PROT_READ, MAP_PRIVATE, 3, 0) = 0x7f86e6cb3000 close(3) = 0 getcwd("/tmp", 4096) = 5 access("/usr/bin/zsh", X_OK) = 0 access("/home/alon/.tmux.conf", R_OK) = 0 getuid() = 1000 mkdir("/tmp/user/1000/tmux-1000", 0700) = -1 EEXIST (File exists) lstat("/tmp/user/1000/tmux-1000", {st_mode=S_IFDIR|0700, st_size=4096, ...}) = 0 lstat("/tmp", {st_mode=S_IFDIR|S_ISVTX|0777, st_size=126976, ...}) = 0 lstat("/tmp/user", {st_mode=S_IFDIR|0711, st_size=4096, ...}) = 0 lstat("/tmp/user/1000", {st_mode=S_IFDIR|0700, st_size=4096, ...}) = 0 lstat("/tmp/user/1000/tmux-1000", {st_mode=S_IFDIR|0700, st_size=4096, ...}) = 0 getuid() = 1000 geteuid() = 1000 getgid() = 1000 getegid() = 1000 getuid() = 1000 geteuid() = 1000 getgid() = 1000 getegid() = 1000 socketpair(PF_LOCAL, 
SOCK_STREAM, 0, [3, 4]) = 0 fcntl(3, F_GETFD) = 0 fcntl(3, F_SETFD, FD_CLOEXEC) = 0 fcntl(4, F_GETFD) = 0 fcntl(4, F_SETFD, FD_CLOEXEC) = 0 fcntl(3, F_GETFL) = 0x2 (flags O_RDWR) fcntl(3, F_SETFL, O_RDWR|O_NONBLOCK) = 0 fcntl(4, F_GETFL) = 0x2 (flags O_RDWR) fcntl(4, F_SETFL, O_RDWR|O_NONBLOCK) = 0 getuid() = 1000 geteuid() = 1000 getgid() = 1000 getegid() = 1000 socket(PF_LOCAL, SOCK_STREAM, 0) = 5 connect(5, {sa_family=AF_LOCAL, sun_path="/tmp/user/1000/tmux-1000/default"}, 34) = -1 ECONNREFUSED (Connection refused) close(5) = 0 open("/tmp/user/1000/tmux-1000/default.lock", O_WRONLY|O_CREAT, 0600) = 5 fcntl(5, F_SETLK, {type=F_WRLCK, whence=SEEK_CUR, start=0, len=0}) = 0 unlink("/tmp/user/1000/tmux-1000/default") = 0 socketpair(PF_LOCAL, SOCK_STREAM, 0, [6, 7]) = 0 clone(child_stack=0, flags=CLONE_CHILD_CLEARTID|CLONE_CHILD_SETTID|SIGCHLD, child_tidptr=0x7f86e8159a10) = 13672 close(7) = 0 close(5) = 0 fcntl(6, F_GETFL) = 0x2 (flags O_RDWR) fcntl(6, F_SETFL, O_RDWR|O_NONBLOCK) = 0 --- SIGCHLD {si_signo=SIGCHLD, si_code=CLD_EXITED, si_pid=13672, si_status=0, si_utime=0, si_stime=0} --- fcntl(0, F_GETFL) = 0x8002 (flags O_RDWR|O_LARGEFILE) fcntl(0, F_SETFL, O_RDWR|O_NONBLOCK|O_LARGEFILE) = 0 rt_sigaction(SIGINT, {SIG_IGN, [], SA_RESTORER|SA_RESTART, 0x7f86e7340c30}, NULL, 8) = 0 rt_sigaction(SIGPIPE, {SIG_IGN, [], SA_RESTORER|SA_RESTART, 0x7f86e7340c30}, NULL, 8) = 0 rt_sigaction(SIGUSR2, {SIG_IGN, [], SA_RESTORER|SA_RESTART, 0x7f86e7340c30}, NULL, 8) = 0 rt_sigaction(SIGTSTP, {SIG_IGN, [], SA_RESTORER|SA_RESTART, 0x7f86e7340c30}, NULL, 8) = 0 rt_sigaction(SIGHUP, {0x7f86e790c770, ~[RTMIN RT_1], SA_RESTORER|SA_RESTART, 0x7f86e7340c30}, {SIG_DFL, [], 0}, 8) = 0 rt_sigaction(SIGCHLD, {0x7f86e790c770, ~[RTMIN RT_1], SA_RESTORER|SA_RESTART, 0x7f86e7340c30}, {SIG_DFL, [], 0}, 8) = 0 rt_sigaction(SIGCONT, {0x7f86e790c770, ~[RTMIN RT_1], SA_RESTORER|SA_RESTART, 0x7f86e7340c30}, {SIG_DFL, [], 0}, 8) = 0 rt_sigaction(SIGTERM, {0x7f86e790c770, ~[RTMIN RT_1], SA_RESTORER|SA_RESTART, 0x7f86e7340c30}, {SIG_DFL, [], 0}, 8) = 0 rt_sigaction(SIGUSR1, {0x7f86e790c770, ~[RTMIN RT_1], SA_RESTORER|SA_RESTART, 0x7f86e7340c30}, {SIG_DFL, [], 0}, 8) = 0 rt_sigaction(SIGWINCH, {0x7f86e790c770, ~[RTMIN RT_1], SA_RESTORER|SA_RESTART, 0x7f86e7340c30}, {SIG_DFL, [], 0}, 8) = 0 ioctl(0, SNDCTL_TMR_TIMEBASE or SNDRV_TIMER_IOCTL_NEXT_DEVICE or TCGETS, {B38400 opost isig icanon echo ...}) = 0 fstat(0, {st_mode=S_IFCHR|0620, st_rdev=makedev(136, 16), ...}) = 0 readlink("/proc/self/fd/0", "/dev/pts/16", 4095) = 11 stat("/dev/pts/16", {st_mode=S_IFCHR|0620, st_rdev=makedev(136, 16), ...}) = 0 open(".", O_RDONLY) = 5 dup(0) = 7 poll([{fd=4, events=POLLIN}, {fd=6, events=POLLIN|POLLOUT}], 2, 4294967295) = 1 ([{fd=6, revents=POLLOUT}]) sendmsg(6, {msg_name(0)=NULL, msg_iov(4)=[{"d\0\0\0\24\0\0\0\10\0\0\0\377\377\377\377\0\0\1\0", 20}, {"e\0\0\0\26\0\0\0\10\0\0\0\377\377\377\377xterm\0", 22}, {"f\0\0\0\34\0\0\0\10\0\0\0\377\377\377\377/dev/pts/16\0", 28}, {"g\0\0\0\20\0\1\0\10\0\0\0\377\377\377\377", 16}], msg_controllen=24, {cmsg_len=20, cmsg_level=SOL_SOCKET, cmsg_type=SCM_RIGHTS, {5}}, msg_flags=0}, 0) = 86 close(5) = 0 poll([{fd=4, events=POLLIN}, {fd=6, events=POLLIN|POLLOUT}], 2, 4294967295) = 1 ([{fd=6, revents=POLLOUT}]) sendmsg(6, {msg_name(0)=NULL, msg_iov(1)=[{"h\0\0\0\20\0\1\0\10\0\0\0\377\377\377\377", 16}], msg_controllen=24, {cmsg_len=20, cmsg_level=SOL_SOCKET, cmsg_type=SCM_RIGHTS, {7}}, msg_flags=0}, 0) = 16 close(7) = 0 poll([{fd=4, events=POLLIN}, {fd=6, events=POLLIN|POLLOUT}], 2, 4294967295) = 1 ([{fd=6, 
revents=POLLOUT}]) sendmsg(6, {msg_name(0)=NULL, msg_iov(106)=[{"i\0\0\0\37\0\0\0\10\0\0\0\377\377\377\377XDG_SEAT=seat0\0", 31}, {"i\0\0\0\"\0\0\0\10\0\0\0\377\377\377\377XDG_SESSION_ID=c"..., 34}, {"i\0\0\0.\0\0\0\10\0\0\0\377\377\377\377LC_IDENTIFICATIO"..., 46}, {"i\0\0\0r\0\0\0\10\0\0\0\377\377\377\377SESSION_MANAGER="..., 114}, {"i\0\0\0003\0\0\0\10\0\0\0\377\377\377\377GIO_LAUNCHED_DES"..., 51}, {"i\0\0\0\33\0\0\0\10\0\0\0\377\377\377\377DISPLAY=:0\0", 27}, {"i\0\0\0&\0\0\0\10\0\0\0\377\377\377\377TMPDIR=/tmp/user"..., 38}, {"i\0\0\0\31\0\0\0\10\0\0\0\377\377\377\377JOB=dbus\0", 25}, {"i\0\0\0D\0\0\0\10\0\0\0\377\377\377\377GNOME_KEYRING_CO"..., 68}, {"i\0\0\0<\0\0\0\10\0\0\0\377\377\377\377GNOME_DESKTOP_SE"..., 60}, {"i\0\0\0C\0\0\0\10\0\0\0\377\377\377\377DEFAULTS_PATH=/u"..., 67}, {"i\0\0\0001\0\0\0\10\0\0\0\377\377\377\377QT_QPA_PLATFORMT"..., 49}, {"i\0\0\0\35\0\0\0\10\0\0\0\377\377\377\377LOGNAME=alon\0", 29}, {"i\0\0\0%\0\0\0\10\0\0\0\377\377\377\377TEXTDOMAIN=im-co"..., 37}, {"i\0\0\0\32\0\0\0\10\0\0\0\377\377\377\377INSTANCE=\0", 26}, {"i\0\0\0$\0\0\0\10\0\0\0\377\377\377\377LC_TIME=en_US.UT"..., 36}, {"i\0\0\0#\0\0\0\10\0\0\0\377\377\377\377SHELL=/usr/bin/z"..., 35}, {"i\0\0\0!\0\0\0\10\0\0\0\377\377\377\377PAPERSIZE=letter"..., 33}, {"i\0\0\0\200\1\0\0\10\0\0\0\377\377\377\377PATH=/home/alon/"..., 384}, {"i\0\0\0'\0\0\0\10\0\0\0\377\377\377\377LC_NUMERIC=en_US"..., 39}, {"i\0\0\0%\0\0\0\10\0\0\0\377\377\377\377LC_PAPER=en_US.U"..., 37}, {"i\0\0\0\"\0\0\0\10\0\0\0\377\377\377\377IM_CONFIG_PHASE="..., 34}, {"i\0\0\0%\0\0\0\10\0\0\0\377\377\377\377WEBIDE_JDK=/opt/"..., 37}, {"i\0\0\0$\0\0\0\10\0\0\0\377\377\377\377XMODIFIERS=@im=n"..., 36}, {"i\0\0\0\"\0\0\0\10\0\0\0\377\377\377\377QT4_IM_MODULE=xi"..., 34}, {"i\0\0\0J\0\0\0\10\0\0\0\377\377\377\377XDG_SESSION_PATH"..., 74}, {"i\0\0\0\37\0\0\0\10\0\0\0\377\377\377\377SESSION=ubuntu\0", 31}, {"i\0\0\0001\0\0\0\10\0\0\0\377\377\377\377TEXTDOMAINDIR=/u"..., 49}, {"i\0\0\0@\0\0\0\10\0\0\0\377\377\377\377SSH_AUTH_SOCK=/r"..., 64}, {"i\0\0\0002\0\0\0\10\0\0\0\377\377\377\377XAUTHORITY=/home"..., 50}, {"i\0\0\0'\0\0\0\10\0\0\0\377\377\377\377XDG_MENU_PREFIX="..., 39}, {"i\0\0\0\"\0\0\0\10\0\0\0\377\377\377\377GDMSESSION=ubunt"..., 34}, ...], msg_controllen=0, msg_flags=0}, 0) = 5415 poll([{fd=4, events=POLLIN}, {fd=6, events=POLLIN}], 2, 4294967295) = 1 ([{fd=6, revents=POLLIN}]) recvmsg(6, {msg_name(0)=NULL, msg_iov(1)=[{"\314\0\0\0\20\0\0\0\10\0\0\0\377\377\377\377", 65535}], msg_controllen=0, msg_flags=0}, 0) = 16 poll([{fd=4, events=POLLIN}], 1, 0) = 0 (Timeout) fstat(1, {st_mode=S_IFCHR|0620, st_rdev=makedev(136, 16), ...}) = 0 mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7f86e817a000 write(1, "[exited]\n", 9[exited] ) = 9 getppid() = 13668 fcntl(0, F_GETFL) = 0x8002 (flags O_RDWR|O_LARGEFILE) fcntl(0, F_SETFL, O_RDWR|O_LARGEFILE) = 0 exit_group(0) = ? +++ exited with 0 +++
I took the following steps: commented out my whole tmux.conf and restarted tmux to see whether it still exited; commented out 50% of my tmux.conf and restarted tmux again; kept bisecting like this until I saw which part of my tmux configuration was to blame, then fixed that. In my case it was because I was using the following setting on both OSX and Linux: set-option -g default-command "reattach-to-user-namespace -l bash" This broke on my Linux box because reattach-to-user-namespace wasn't installed there; it's probably not even available on Linux. A guarded version of that setting is sketched below.
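A portable way to avoid this class of breakage is to guard the OSX-only setting with tmux's if-shell, so it only applies when the helper binary actually exists. A minimal sketch (assuming a tmux version with if-shell, which has been available for a long time): if-shell 'command -v reattach-to-user-namespace >/dev/null 2>&1' 'set-option -g default-command "reattach-to-user-namespace -l bash"' With this in ~/.tmux.conf, the same config file can be shared between OSX and Linux without the Linux side dying at startup.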
{ "source": [ "https://unix.stackexchange.com/questions/128190", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/54958/" ] }
128,213
Linux uses a virtual memory system where all of the addresses are virtual addresses and not physical addresses. These virtual addresses are converted into physical addresses by the processor. To make this translation easier, virtual and physical memory are divided into pages. Each of these pages is given a unique number; the page frame number. Some page sizes can be 2 KB, 4 KB, etc. But how is this page size number determined? Is it influenced by the size of the architecture? For example, a 32-bit bus will have 4 GB address space.
You can find out a system's default page size by querying its configuration via the getconf command: $ getconf PAGE_SIZE 4096 or $ getconf PAGESIZE 4096 NOTE: The units are bytes, so the 4096 equates to 4096 bytes, i.e. 4 KB. This is hardwired in the Linux kernel's source. Example: $ more /usr/src/kernels/3.13.9-100.fc19.x86_64/include/asm-generic/page.h ... ... /* PAGE_SHIFT determines the page size */ #define PAGE_SHIFT 12 #ifdef __ASSEMBLY__ #define PAGE_SIZE (1 << PAGE_SHIFT) #else #define PAGE_SIZE (1UL << PAGE_SHIFT) #endif #define PAGE_MASK (~(PAGE_SIZE-1)) How does shifting give you 4096? Shifting a value left by one bit is a binary multiplication by 2, so shifting 1 left by PAGE_SHIFT = 12 bits ( 1 << PAGE_SHIFT ) computes 2^12 = 4096. $ echo "2^12" | bc 4096
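If you'd rather confirm the page size that a running process actually sees, /proc exposes it per memory mapping. A quick sketch (the KernelPageSize field assumes a reasonably recent kernel): awk '/KernelPageSize/ {print $2, $3; exit}' /proc/self/smaps On a typical x86_64 box this prints 4 kB, matching the getconf output above.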
{ "source": [ "https://unix.stackexchange.com/questions/128213", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/15378/" ] }
128,220
Most of the info I see online says to edit /etc/resolv.conf , but any changes I make there just get overridden. $ cat /etc/resolv.conf # Dynamic resolv.conf(5) file for glibc resolver(3) generated by resolvconf(8) # DO NOT EDIT THIS FILE BY HAND -- # YOUR CHANGES WILL BE OVERWRITTEN nameserver 127.0.1.1 It seems that 127.0.1.1 is a local instance of dnsmasq . The dnsmasq docs say to edit /etc/resolv.conf . I tried putting custom nameservers in /etc/resolv.conf.d/base , but the changes didn't show up in /etc/resolv.conf after running sudo resolvconf -u . FYI, I don't want to change DNS on a per-connection basis, I want to set default DNS settings to use for all connections when not otherwise specified. UPDATE: I answered this question myself: https://unix.stackexchange.com/a/163506/67024 I think it's the best solution since: It works. It requires the least amount of changes and It still works in conjunction with dnsmasq's DNS cache, rather than bypassing it.
I believe if you want to override the DNS nameserver you merely add lines similar to these in your base file under resolv.conf.d . Example NOTE: Before we get started, make sure the following package is installed: apt install resolvconf . $ sudo vim /etc/resolvconf/resolv.conf.d/base Then put your nameserver list in like so: nameserver 8.8.8.8 nameserver 8.8.4.4 Finally update resolvconf : $ sudo resolvconf -u If you take a look at the man page for resolvconf it describes the various files under /etc/resolvconf/resolv.conf.d/ . /etc/resolvconf/resolv.conf.d/base File containing basic resolver information. The lines in this file are included in the resolver configuration file even when no interfaces are configured. /etc/resolvconf/resolv.conf.d/head File to be prepended to the dynamically generated resolver configuration file. Normally this is just a comment line. /etc/resolvconf/resolv.conf.d/tail File to be appended to the dynamically generated resolver configuration file. To append nothing, make this an empty file. This file is a good place to put a resolver options line if one is needed, e.g., options inet6 Even though there's a warning at the top of the head file: $ cat /etc/resolvconf/resolv.conf.d/head # Dynamic resolv.conf(5) file for glibc resolver(3) generated by resolvconf(8) # DO NOT EDIT THIS FILE BY HAND -- YOUR CHANGES WILL BE OVERWRITTEN this warning is there because, when these files are assembled, it ultimately works its way into the resulting resolv.conf file that they are used to build. So you could just as easily have added the nameserver lines described above for the base file to the head file too. References Persist dns nameserver for ubuntu 14.04 How do I add a DNS server via resolv.conf?
{ "source": [ "https://unix.stackexchange.com/questions/128220", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/67024/" ] }
128,336
On my Arch Linux system (Linux kernel 3.14.2) bind mounts do not respect the read-only option # mkdir test # mount --bind -o ro test/ /mnt # touch /mnt/foo creates the file /mnt/foo . The relevant entry in /proc/mounts is /dev/sda2 /mnt ext4 rw,noatime,data=ordered 0 0 The mount options do not match my requested options, but do match both the read/write behaviour of the bind mount and the options used to originally mount /dev/sda2 on / /dev/sda2 / ext4 rw,noatime,data=ordered 0 0 If, however, I remount the mount then it respects the read-only option # mount --bind -o remount,ro test/ /mnt # touch /mnt/bar touch: cannot touch '/mnt/bar': Read-only file system and the relevant entry in /proc/mounts /dev/sda2 /mnt ext4 ro,relatime,data=ordered 0 0 looks like what I might expect (although in truth I would expect to see the full path of the test directory). The entry in /proc/mounts for the original mount of /dev/sda2 on / is also unchanged and remains read/write /dev/sda2 / ext4 rw,noatime,data=ordered 0 0 This behaviour and the workaround have been known since at least 2008 and are documented in the man page of mount : Note that the filesystem mount options will remain the same as those on the original mount point, and cannot be changed by passing the -o option along with --bind/--rbind. The mount options can be changed by a separate remount command Not all distributions behave the same. Arch seems to silently fail to respect the options while Debian generates a warning when the bind mount does not get mounted read-only: mount: warning: /mnt seems to be mounted read-write. There are reports that this behaviour was "fixed" in Debian Lenny and Squeeze although it does not appear to be a universal fix nor does it still work in Debian Wheezy. What is the difficulty associated with making a bind mount respect the read-only option on the initial mount?
Bind mount is just... well... a bind mount. I.e. it's not a new mount. It just "links"/"exposes"/"considers" a subdirectory as a new mount point. As such it cannot alter the mount parameters. That's why you're getting complaints: # mount /mnt/1/lala /mnt/2 -o bind,ro mount: warning: /mnt/2 seems to be mounted read-write. But as you said a normal bind mount works: # mount /mnt/1/lala /mnt/2 -o bind And then a ro remount also works: # mount /mnt/1/lala /mnt/2 -o bind,remount,ro However what happens is that you're changing the whole mount and not just this bind mount. If you take a look at /proc/mounts you'll see that both bind mount and the original mount change to read-only: /dev/loop0 /mnt/1 ext2 ro,relatime,errors=continue,user_xattr,acl 0 0 /dev/loop0 /mnt/2 ext2 ro,relatime,errors=continue,user_xattr,acl 0 0 So what you're doing is like changing the initial mount to a read-only mount and then doing a bind mount which will of course be read-only. UPDATE 2016-07-20: The following are true for 4.5 kernels, but not true for 4.3 kernels (This is wrong. See update #2 below): The kernel has two flags that control read-only: The MS_READONLY : Indicating whether the mount is read-only The MNT_READONLY : Indicating whether the "user" wants it read-only On a 4.5 kernel, doing a mount -o bind,ro will actually do the trick. For example, this: # mkdir /tmp/test # mkdir /tmp/test/a /tmp/test/b # mount -t tmpfs none /tmp/test/a # mkdir /tmp/test/a/d # mount -o bind,ro /tmp/test/a/d /tmp/test/b will create a read-only bind mount of /tmp/test/a/d to /tmp/test/b , which will be visible in /proc/mounts as: none /tmp/test/a tmpfs rw,relatime 0 0 none /tmp/test/b tmpfs ro,relatime 0 0 A more detailed view is visible in /proc/self/mountinfo , which takes into consideration the user view (namespace). The relevant lines will be these: 363 74 0:49 / /tmp/test/a rw,relatime shared:273 - tmpfs none rw 368 74 0:49 /d /tmp/test/b ro,relatime shared:273 - tmpfs none rw Where on the second line, you can see that it says both ro ( MNT_READONLY ) and rw ( !MS_READONLY ). The end result is this: # echo a > /tmp/test/a/d/f # echo a > /tmp/test/b/f -su: /tmp/test/b/f: Read-only file system UPDATE 2016-07-20 #2: A bit more digging into this shows that the behavior in fact depends on the version of libmount which is part of util-linux. Support for this was added with this commit and was released with version 2.27: commit 9ac77b8a78452eab0612523d27fee52159f5016a Author: Karel Zak Date: Mon Aug 17 11:54:26 2015 +0200 libmount: add support for "bind,ro" Now it's necessary t use two mount(8) calls to create a read-only mount: mount /foo /bar -o bind mount /bar -o remount,ro,bind This patch allows to specify "bind,ro" and the remount is done automatically by libmount by additional mount(2) syscall. It's not atomic of course. Signed-off-by: Karel Zak which also provides the workaround. The behavior can be seen using strace on an older and a newer mount: Old: mount("/tmp/test/a/d", "/tmp/test/b", 0x222e240, MS_MGC_VAL|MS_RDONLY|MS_BIND, NULL) = 0 <0.000681> New: mount("/tmp/test/a/d", "/tmp/test/b", 0x1a8ee90, MS_MGC_VAL|MS_RDONLY|MS_BIND, NULL) = 0 <0.011492> mount("none", "/tmp/test/b", NULL, MS_RDONLY|MS_REMOUNT|MS_BIND, NULL) = 0 <0.006281> Conclusion: To achieve the desired result one needs to run two commands (as @Thomas already said): mount SRC DST -o bind mount DST -o remount,ro,bind Newer versions of mount (util-linux >=2.27) do this automatically when one runs mount SRC DST -o bind,ro
{ "source": [ "https://unix.stackexchange.com/questions/128336", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/22724/" ] }
128,379
I'm trying to run the following command: find a/folder b/folder -name *.c -o -name *.h -exec grep -I foobar '{}' + This is returning an error: find: missing argument to -exec I can't see what's wrong with this command, as it seems to match the man page: -exec command {} + This variant of the -exec option runs the specified command on the selected files, but the command line is built by appending each selected file name at the end; the total number of invocations of the command will be much less than the number of matched files. The command line is built in much the same way that xargs builds its command lines. Only one instance of '{}' is allowed within the command. The command is executed in the starting directory. I also tried: find a/folder b/folder -name *.c -o -name *.h -exec grep -I foobar {} + find a/folder b/folder -name *.c -o -name *.h -exec 'grep -I foobar' {} + find a/folder b/folder -name *.c -o -name *.h -exec 'grep -I foobar' '{}' + find a/folder b/folder -name "*.c" -o -name "*.h" -exec grep -I foobar '{}' + find a/folder b/folder \( -name *.c -o -name *.h \) -exec grep -I foobar '{}' + find a/folder b/folder -name *.c -o -name *.h -exec grep -I foobar '{}' \+
There were several issues with your attempts, including backticks used instead of quotes (removed in later edits to the question), missing quotes where they are required, extra quotes where they are useless, missing parentheses to group -o clauses, and different implementations of find used (see the comments and chat for details). Anyway, the command can be simplified like this: find a/folder b/folder -name "*.[ch]" -exec grep -I foobar {} + or, should you use an archaic GNU find version, this should always work: find a/folder b/folder -name "*.[ch]" -exec grep -I foobar {} \;
{ "source": [ "https://unix.stackexchange.com/questions/128379", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/22172/" ] }
128,396
I am using bash. To browse my command history I am calling the history command which I believe is calling the Gnu program of the same name. (I don't know if there's a better bash specific way). In my .bashrc I currently have a line export PROMPT_COMMAND='history -a' to preserve history from my multiple bash sessions I am running. If I do history I currently only see 524 entries. Is this configurable? I would like to increase this to a much larger number say 2000.
First of all, history is the bash specific way, none better. The history command is a bash builtin as you can see by running $ type history history is a shell builtin Now, the number of commands it remembers is controlled by the HISTSIZE variable. To set it to a larger number add this line to your .profile (for why this is a better place for it than .bashrc , see here ): export HISTSIZE=2000 From now on, history will return the last 2000 commands you ran.
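Note that HISTSIZE only controls the in-memory list of the current session; since you're sharing history across multiple sessions via history -a , you'll likely also want the on-disk file to keep that many lines. A sketch of common companion settings (the values are just examples): export HISTSIZE=2000 export HISTFILESIZE=2000 shopt -s histappend Here HISTFILESIZE bounds how many lines survive in ~/.bash_history , and histappend makes exiting shells append to the file rather than overwrite it.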
{ "source": [ "https://unix.stackexchange.com/questions/128396", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/46499/" ] }
128,434
I am in a folder with lots of .txt files, I would like to find all the files which contain stringA but don't contain stringB (they are not necessarily in the same line). Does anyone know how to do this?
As long as your filenames do not contain spaces, tabs, newline (assuming an unmodified $IFS ) or wildcard characters and don't start with - , and if your grep supports the -L option, you can do it as follows: $ cat file1 stringA stringC $ cat file2 stringA stringB $ grep -L stringB $(grep -l stringA file?) file1 The grep executed in the subshell $() , will print all filenames which contain stringA . This filelist is input for the main grep command, which lists all files that do not contain stringB . From man grep -v, --invert-match Invert the sense of matching, to select non-matching lines. (-v is specified by POSIX.) -L, --files-without-match Suppress normal output; instead print the name of each input file from which no output would normally have been printed. The scanning will stop on the first match. -l, --files-with-matches Suppress normal output; instead print the name of each input file from which output would normally have been printed. The scanning will stop on the first match. (-l is specified by POSIX.)
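If you do need to cope with whitespace or other odd characters in the filenames, GNU grep can emit NUL-delimited names with -Z , which pairs safely with xargs -0 . A sketch assuming GNU grep and xargs: grep -lZ stringA -- *.txt | xargs -0 grep -L stringB The -- guards against filenames that start with a dash; otherwise the logic is the same as above.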
{ "source": [ "https://unix.stackexchange.com/questions/128434", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/44648/" ] }
128,439
I understood the very basic concept of how to use /etc/network/interfaces , but all I find online are examples, example after example, which I can copy-paste from. What I miss is an explanation of the syntax, an explanation of the meaning of the commands and which order the commands require. I want to understand, because most of the time copy-paste is not enough, because I'm not working on a fresh machine, so I can't just overwrite existing configurations because it would break a lot of stuff. man interfaces was not very helpful since it is written very complicated. Example questions I have: what does inet in an iface line mean exactly (I could not even find it in the manpage), what does manual in an iface line mean exactly (many examples use it, but according to manpage it needs an extra config file then, which the examples don't present), when do I use or need them? When not? When I create a bridge, what exactly happens to the interfaces?
Well, let’s separate it into pieces, to make it easier to understand /etc/network/interfaces : Link layer +interface type options (generally the first of each interface stanza and called address family + method by interfaces(5) manpages): auto interface – Start the interface(s) at boot. That’s why the lo interface uses this kind of linking configuration. allow-auto interface – Same as auto allow-hotplug interface – Start the interface when a "hotplug" event is detected. In the real world, this is used in the same situations as auto but the difference is that it will wait for an event like "being detected by udev hotplug api" or "cable linked". See " Related Stuff(hotplug) " for additional info. These options are pretty much "layer 2" options, setting up link states on interfaces, and are not related with "layer 3" (routing and addressing). As an example you could have a link aggregation where the bond0 interface needs to be up whatever the link state is, and its members could be up after a link state event: auto bond0 iface bond0 inet manual down ip link set $IFACE down post-down rmmod bonding pre-up modprobe bonding mode=4 miimon=200 up ip link set $IFACE up mtu 9000 up udevadm trigger allow-hotplug eth0 iface eth0 inet manual up ifenslave bond0 $IFACE down ifenslave -d bond0 $IFACE 2> /dev/null allow-hotplug eth1 iface eth1 inet manual up ifenslave bond0 $IFACE down ifenslave -d bond0 $IFACE 2> /dev/null So, this way I create a link aggregation and the interfaces will be added to it and removed on cable link states. Most common interface types: All options below are a suffix to a defined interface ( iface <Interface_family> ). Basically the iface eth0 creates a stanza called eth0 on an Ethernet device. iface ppp0 should create a point-to-point interface, and it could have different ways to acquire addresses like inet wvdial that will forward the configuration of this interface to wvdialconf script. The tuple inet / inet6 + option will define the version of the IP protocol that will be used and the way this address will be configured ( static , dhcp , scripts ...). The online Debian manuals will give you more details about this. Options on Ethernet interfaces: inet static – Defines a static IP address. inet manual – Does not define an IP address for an interface. Generally used by interfaces that are bridge or aggregation members, interfaces that need to operate in promiscuous mode ( e.g. port mirroring or network TAPs ), or have a VLAN device configured on them. It's a way to keep the interface up without an IP address. inet dhcp – Acquire IP address through DHCP protocol. inet6 static – Defines a static IPv6 address. Example: # Eth0 auto eth0 iface eth0 inet manual pre-up modprobe 8021q pre-up ifconfig eth0 up post-down ifconfig eth0 down # Vlan Interface auto vlan10 iface vlan10 inet static address 10.0.0.1 netmask 255.255.255.0 gateway 10.0.0.254 vlan-raw-device eth0 ip_rp_filter 0 This example will bring eth0 up, and create a VLAN interface called vlan10 that will process the tag number 10 on an Ethernet frame. Common options inside an interface stanza(layer 2 and 3): address – IP address for a static IP configured interface netmask – Network mask. Can be omitted if you use cidr address. Example: iface eth1 inet static address 192.168.1.2/24 gateway 192.168.1.1 gateway – The default gateway of a server. Be careful to use only one of this guy. vlan-raw-device – On a VLAN interface, defines its "father". bridge_ports – On a bridge interface, define its members. 
down – Use the following command to down the interface instead of ifdown . post-down – Actions taken right after the interface is down. pre-up – Actions before the interface is up. up – Use the following command to up the interface instead of ifup . It is up to your imagination to use any option available on iputils . As an example we could use up ip link set $IFACE up mtu 9000 to enable jumbo frames during the up operation(instead of using the mtu option itself). You can also call any other software like up sleep 5; mii-tool -F 100baseTx-FD $IFACE to force 100Mbps Full duplex 5 seconds after the interface is up. hwaddress ether 00:00:00:00:00:00 - Change the mac address of the interface instead of using the one that is hardcoded into rom, or generated by algorithms. You can use the keyword random to get a randomized mac address. dns-nameservers – IP addresses of nameservers. Requires the resolvconf package. It’s a way to concentrate all the information in /etc/network/interfaces instead of using /etc/resolv.conf for DNS-related configurations. Do not edit the resolv.conf configuration file manually as it will be dynamically changed by programs in the system. dns-search example.net – Append example.net as domain to queries of host, creating the FQDN. Option domain of /etc/resolv.conf wpa-ssid – Wireless: Set a wireless WPA SSID. mtu - MTU size. mtu 9000 = Jumbo Frame. Useful if your Linux box is connected with switches that support larger MTU sizes. Can break some protocols(I had bad experiences with snmp and jumbo frames). wpa-psk – Wireless: Set a hexadecimal encoded PSK for your SSID. ip_rp_filter 1 - Reverse path filter enabled. Useful in situations where you have 2 routes to a host, and this will force the packet to come back from where it came(same interface, using its routes). Example: You are connected on your lan( 192.168.1.1/24 ) and you have a dlna server with one interface on your lan( 192.168.1.10/24 ) and other interface on dmz to execute administrative tasks( 172.16.1.1/24 ). During a ssh session from your computer to dlna dmz ip, the information needs to come back to you, but will hang forever because your dlna server will try to deliver the response directly through it's lan interface. With rp_filter enabled, it will ensure that the connection will come back from where it came from. More information here . Some of those options are not optional. Debian will warn you if you put an IP address on an interface without a netmask, for example. You can find more good examples of network configuration here . Related Stuff : Links that have information related to /etc/network/interfaces network configuration file: HOWTO: Wireless Security - WPA1, WPA2, LEAP, etc . How can I bridge two interfaces with ip/iproute2? . What is a hotplug event from the interface?
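Since the question also asks what happens when you create a bridge: the bridge_ports option listed earlier is the key one. A hedged example stanza (the bridge_stp and bridge_fd options assume the bridge-utils package is installed): auto br0 iface br0 inet static address 192.168.1.2 netmask 255.255.255.0 gateway 192.168.1.1 bridge_ports eth0 eth1 bridge_stp off bridge_fd 0 The member interfaces ( eth0 and eth1 here) are typically declared inet manual , because the IP address lives on the bridge itself: when you create a bridge, the enslaved interfaces become layer-2 ports and no longer carry their own layer-3 configuration.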
{ "source": [ "https://unix.stackexchange.com/questions/128439", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/50666/" ] }
128,462
I want to learn more about binary packages and how to run them on Linux. I am running Debian-based Linux distributions (Ubuntu/Linux Mint). How do I build a binary package from source? And can I directly download binary packages for applications (like Firefox, etc.) and games (like Bos Wars, etc.)? I have run some packages that come in an "xyz.linux.run" format. What are these packages? Are they independent of dependencies, or are they pre-built binary packages? How are they built so that they can be run directly as "xyz.linux.run" on Linux? And what is the difference between a binary package and a deb package?
In a strict sense a binary file is one which is not character encoded as human readable text. More colloquially, a "binary" refers to a file that is compiled, executable code, although the file itself may not be executable (referring not so much to permissions as to the capacity to be run alone; some binary code files such as libraries are compiled, but regardless of permissions, they cannot be executed all by themselves). A binary which runs as a standalone executable is an "executable", although not all executable files are binaries (and this is about permissions: executable text files which invoke an interpreter via a shebang such as #!/bin/sh are executables too). What is a binary package? A binary package in a linux context is an application package which contains (pre-built) executables, as opposed to source code. Note that this does not mean a package file is itself an executable. A package file is an archive (sort of like a .zip ) which contains other files, and a "binary" package file is one which specifically contains executables (although again, executables are not necessarily truly binaries, and in fact binary packages may be used for compiled libraries which are binary code, but not executables). However, the package must be unpacked in order for you to access these files. Usually that is taken care of for you by a package management system (e.g. apt/dpkg) which downloads the package and unpacks and installs the binaries inside for you. What is the difference between a binary package and a deb package? There isn't one -- .deb packages are binary packages, although there are .debs which contain source instead; these usually have -src appended to their name. I have run some packages that come in an "xyz.linux.run" format. What are these packages? Those are generally self-extracting binary packages; they work by embedding a binary payload into a shell script. "Self-extracting" means you don't have to invoke another application (such as a package manager) in order to unpack and use them. However, since they do not work with a package manager, resolving their dependencies may be more of a crapshoot and hence some such packages use statically linked executables (they have all necessary libraries built into them), which wastes a bit of memory when they are used.
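Since a .deb is just an archive, you can see this for yourself without installing anything. A quick sketch (commands assume dpkg and binutils are present, as they are on Debian-based systems, and some-package.deb is a hypothetical file name): ar t some-package.deb # a .deb is an ar archive: debian-binary plus control and data tarballs dpkg-deb -c some-package.deb # list the files (including any compiled binaries) the package would install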
{ "source": [ "https://unix.stackexchange.com/questions/128462", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/66803/" ] }
128,468
Below is the format of some table data contained within a single file: ;NULL;ABCD;ABHJARS;;ABCD;ABCD;Y;;;;;;;;;A; ;NULL;XEU-ANKD;XEU-AJKD;;ABCD;ABCD;Y;;;;;;;;;A; . . ;11744;AMKDIONSKH;AMKDJ AN DJ JAHF AS CPFVH MTM;;QWERDF;QWERDF;Y;;;;;;;;;A; (5436rowsaffected) (returnstatus=0) Returnparameters: ;; ;5436; (1rowaffected) ;;; ;-------;-----------; ;grepkey;5436; (1rowaffected) NOTE: Above grepkey=5436 (Count of the records present in table). Below is the expected output: 1;NULL;ABCD;ABHJARS;;ABCD;ABCD;Y;;;;;;;;;A; 2;NULL;XEU-ANKD;XEU-AJKD;;ABCD;ABCD;Y;;;;;;;;;A; . . 5436;11744;AMKDIONSKH;AMKDJ AN DJ JAHF AS CPFVH MTM;;QWERDF;QWERDF;Y;;;;;;;;;A; I need the data in the above format. I'd like to prefix the row number and exclude the additional data that is present at the end of the file, like count of records in the table etc. Additionally I'd like to accomplish the above using awk .
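A minimal awk sketch for this, assuming (as in your sample) that real data rows are the only lines with more than a handful of semicolon-separated fields, while trailer lines such as (5436rowsaffected) and ;grepkey;5436; have only a few: awk -F';' 'NF > 10 { print ++n $0 }' file Each qualifying row gets its running row number prepended in front of its leading ; , producing 1;NULL;... through 5436;11744;... , and everything else (the row counts, return status and grepkey summary at the end) is dropped. The threshold 10 is an assumption based on your sample layout; adjust it if your real rows have fewer fields.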
{ "source": [ "https://unix.stackexchange.com/questions/128468", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/66967/" ] }
128,471
If I do # cd / # ln -s /home test # cd test # mount --bind $PWD /mnt the entry in /proc/mounts is /dev/sda2 /mnt ext4 rw,noatime,data=ordered 0 0 which is the device that is mounted to /home and is not easily deducible from $PWD which is /test . How can I determine which device (i.e., /dev/sda2) is going to show up in /proc/mounts in general given that the bind mount may be to a directory/file that is potentially "obscured" by symlinks, other bind mounts, etc?
If I understand your question, you want to know which device was used for a given mount. For this you can use the df command: $ df -h Filesystem Size Used Avail Use% Mounted on /dev/mapper/fedora_greeneggs-root 50G 21G 27G 44% / devtmpfs 3.8G 0 3.8G 0% /dev tmpfs 3.8G 14M 3.8G 1% /dev/shm tmpfs 3.8G 984K 3.8G 1% /run tmpfs 3.8G 0 3.8G 0% /sys/fs/cgroup tmpfs 3.8G 3.4M 3.8G 1% /tmp /dev/sda1 477M 99M 349M 23% /boot /dev/mapper/fedora_greeneggs-home 402G 184G 198G 49% /home To find which device a particular file/directory is found on, give the file as an argument to df . Using your example: $ df -h /mnt Filesystem Size Used Avail Use% Mounted on /dev/sda1 477M 99M 349M 23% / You can also use the mount command: $ mount | grep '^/dev' /dev/mapper/fedora_greeneggs-root on / type ext4 (rw,relatime,seclabel,data=ordered) /dev/sda1 on /boot type ext4 (rw,relatime,seclabel,data=ordered) /dev/mapper/fedora_greeneggs-home on /home type ext4 (rw,relatime,seclabel,data=ordered) The directory mounted for each device is the 3rd argument in the output above. So for device /dev/sda1 it would be /boot . The other devices are making use of LVM (Logical Volume Management) and would need to be further queried to know which actual device is being used by LVM.
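On systems with a reasonably recent util-linux you can also ask findmnt directly, which handles bind mounts gracefully (a sketch; the exact output columns may vary by version): findmnt -T /mnt It prints the target, source and filesystem type for the mount containing the given path, and for a bind mount the SOURCE column typically shows the backing device together with the bound subdirectory in brackets, e.g. /dev/sda2[/home] .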
{ "source": [ "https://unix.stackexchange.com/questions/128471", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/22724/" ] }
128,559
I have a folder with more than a million files that need sorting, but I can't really do anything because mv outputs this message all the time: -bash: /bin/mv: Argument list too long I'm using this command to move the files that don't match the image extensions: mv -- !(*.jpg|*.png|*.bmp) targetdir/
xargs is the tool for the job. That, or find with -exec … {} + . These tools run a command several times, with as many arguments as can be passed in one go. Both methods are easier to carry out when the variable argument list is at the end, which isn't the case here: the final argument to mv is the destination. With GNU utilities (i.e. on non-embedded Linux or Cygwin), the -t option to mv is useful, to pass the destination first. If the file names have no whitespace nor any of \"' and don't start with - ¹, then you can simply provide the file names as input to xargs (the echo command is a bash builtin, so it isn't subject to the command line length limit; if you see !: event not found , you need to enable globbing syntax with shopt -s extglob ): echo !(*.jpg|*.png|*.bmp) | xargs mv -t targetdir -- You can use the -0 option to xargs to use null-delimited input instead of the default quoted format. printf '%s\0' !(*.jpg|*.png|*.bmp) | xargs -0 mv -t targetdir -- Alternatively, you can generate the list of file names with find . To avoid recursing into subdirectories, use -type d -prune . Since no action is specified for the listed image files, only the other files are moved. find . -name . -o -type d -prune -o \ -name '*.jpg' -o -name '*.png' -o -name '*.bmp' -o \ -exec mv -t targetdir/ {} + (This includes dot files, unlike the shell wildcard methods.) If you don't have GNU utilities, you can use an intermediate shell to get the arguments in the right order. This method works on all POSIX systems. find . -name . -o -type d -prune -o \ -name '*.jpg' -o -name '*.png' -o -name '*.bmp' -o \ -exec sh -c 'mv "$@" "$0"' targetdir/ {} + In zsh, you can load the mv builtin : setopt extended_glob zmodload zsh/files mv -- ^*.(jpg|png|bmp) targetdir/ or if you prefer to let mv and other names keep referring to the external commands: setopt extended_glob zmodload -Fm zsh/files b:zf_\* zf_mv -- ^*.(jpg|png|bmp) targetdir/ or with ksh-style globs: setopt ksh_glob zmodload -Fm zsh/files b:zf_\* zf_mv -- !(*.jpg|*.png|*.bmp) targetdir/ Alternatively, using GNU mv and zargs : autoload -U zargs setopt extended_glob zargs -- ./^*.(jpg|png|bmp) -- mv -t targetdir/ -- ¹ with some xargs implementations, file names must also be valid text in the current locale. Some would also consider a file named _ as indicating the end of input (can be avoided with -E '' )
{ "source": [ "https://unix.stackexchange.com/questions/128559", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/56196/" ] }
128,593
Is there a way to comment/uncomment a shell/config/ruby script using command line? for example: $ comment 14-18 bla.conf $ uncomment 14-18 bla.conf this would add or remove # sign on bla.conf on line 14 to 18 . Normally I use sed , but I must know the contents of those lines and then do a find-replace operation, and that would give a wrong result when the there are more than one needle (and we're only want to replace the N-th one).
To comment lines 2 through 4 of bla.conf: sed -i '2,4 s/^/#/' bla.conf To make the command that you wanted, just put the above into a shell script called comment: #!/bin/sh sed -i "$1"' s/^/#/' "$2" This script is used the same as yours with the exception that the first and last lines are to be separated by a comma rather than a dash. For example: comment 2,4 bla.conf An uncomment command can be created analogously (a sketch is given after the BSD note below). Advanced feature sed 's line selection is quite powerful. In addition to specifying first and last lines by number, it is also possible to specify them by a regex. So, if you want to comment all lines from the one containing foo to the one containing bar , use: comment '/foo/,/bar/' bla.conf BSD (OSX) Systems With BSD sed, the -i option needs an argument even if it is just an empty string. Thus, for example, replace the top command above with: sed -i '' '2,4 s/^/#/' bla.conf And, replace the command in the script with: sed -i '' "$1"' s/^/#/' "$2"
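For completeness, the analogous uncomment script might look like this (same assumptions as above; it strips a single leading # from each selected line): #!/bin/sh sed -i "$1"' s/^#//' "$2" It would be invoked the same way, e.g. uncomment 2,4 bla.conf , and the same BSD caveat about -i '' applies.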
{ "source": [ "https://unix.stackexchange.com/questions/128593", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/27996/" ] }
128,642
The following report is thrown in my messages log: kernel: Out of memory: Kill process 9163 (mysqld) score 511 or sacrifice child kernel: Killed process 9163, UID 27, (mysqld) total-vm:2457368kB, anon-rss:816780kB, file-rss:4kB It doesn't matter whether this problem is for httpd , mysqld or postfix , but I am curious how I can continue debugging it. How can I get more info about why PID 9163 was killed? I am not sure whether Linux keeps a history of terminated PIDs somewhere. If this occurred in your messages log file, how would you troubleshoot this issue step by step? # free -m total used free shared buffers cached Mem: 1655 934 721 0 10 52 -/+ buffers/cache: 871 784 Swap: 109 6 103
The kernel will have logged a bunch of stuff before this happened, but most of it will probably not be in /var/log/messages , depending on how your (r)syslogd is configured. Try: grep oom /var/log/* grep total_vm /var/log/* The former should show up a bunch of times and the latter in only one or two places. That is the file you want to look at. Find the original "Out of memory" line in one of the files that also contains total_vm . Thirty seconds to a minute (could be more, could be less) before that line you'll find something like: kernel: foobar invoked oom-killer: gfp_mask=0x201da, order=0, oom_score_adj=0 You should also find a table somewhere between that line and the "Out of memory" line with headers like this: [ pid ] uid tgid total_vm rss nr_ptes swapents oom_score_adj name This may not tell you much more than you already know, but the fields are: pid The process ID. uid User ID. tgid Thread group ID. total_vm Virtual memory use (in 4 kB pages) rss Resident memory use (in 4 kB pages) nr_ptes Page table entries swapents Swap entries oom_score_adj Usually 0; a lower number indicates the process will be less likely to die when the OOM killer is invoked. You can mostly ignore nr_ptes and swapents although I believe these are factors in determining who gets killed. This is not necessarily the process using the most memory, but it very likely is. For more about the selection process, see here . Basically, the process that ends up with the highest oom score is killed -- that's the "score" reported on the "Out of memory" line; unfortunately the other scores aren't reported but that table provides some clues in terms of factors. Again, this probably won't do much more than illuminate the obvious: the system ran out of memory and mysqld was chosen to die because killing it would release the most resources. This does not necessarily mean mysqld is doing anything wrong. You can look at the table to see if anything else went way out of line at the time, but there may not be any clear culprit: the system can run out of memory simply because you misjudged or misconfigured the running processes.
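If, after investigating, you decide mysqld should be the last thing the OOM killer picks, you can lower its oom_score_adj . A sketch, assuming a single mysqld process and a kernel recent enough to have oom_score_adj (note that -1000 exempts the process from OOM killing entirely, so use it with care): echo -1000 > /proc/$(pidof mysqld)/oom_score_adj This must be done as root and repeated whenever the process restarts, so in practice it belongs in the service's init script or unit file.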
{ "source": [ "https://unix.stackexchange.com/questions/128642", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/65413/" ] }
128,894
I'm not using hosts.allow or hosts.deny ; furthermore, SSH works from my Windows machine (same laptop, different hard drive) but not my Linux machine. ssh -vvv root@host -p port gives: OpenSSH_6.6, OpenSSL 1.0.1f 6 Jan 2014 debug1: Reading configuration data /etc/ssh/ssh_config debug1: /etc/ssh/ssh_config line 20: Applying options for * debug2: ssh_connect: needpriv 0 debug1: Connecting to host [host] port <port>. debug1: Connection established. debug1: identity file /home/torxed/.ssh/id_dsa type -1 debug1: identity file /home/torxed/.ssh/id_dsa-cert type -1 debug1: Enabling compatibility mode for protocol 2.0 debug1: Local version string SSH-2.0-OpenSSH_6.6 ssh_exchange_identification: read: Connection reset by peer On the Windows machine, everything works fine, so I checked the security logs and the lines in there are identical; the server treats the two different "machines" no differently and both are allowed via public-key authentication. So that leads to the conclusion that this must be an issue with my local Arch Linux laptop... but what? [torxed@archie ~]$ cat .ssh/known_hosts [torxed@archie ~]$ So that's not the problem... [torxed@archie ~]$ sudo iptables -L Chain INPUT (policy ACCEPT) target prot opt source destination Chain FORWARD (policy ACCEPT) target prot opt source destination Chain OUTPUT (policy ACCEPT) target prot opt source destination No conflicts with the firewall settings (for now)... [torxed@archie ~]$ ls -la .ssh/ total 20 drwx------ 2 torxed users 4096 Sep 3 2013 . drwx------ 51 torxed users 4096 May 11 11:11 .. -rw------- 1 torxed users 1679 Sep 3 2013 id_rsa -rw-r--r-- 1 torxed users 403 Sep 3 2013 id_rsa.pub -rw-r--r-- 1 torxed users 170 May 11 11:21 known_hosts Permissions appear to be fine (same on the server)... Also tried without configuring /etc/ssh/ssh_config , with the same result except for a lot of auto-configuration going on in the client, which ends up with the same error.
Originally posted on Ask Ubuntu If you have ruled out any "external" factors, the following set of steps usually helps to narrow it down. So while this doesn't directly answer your question, it may help in tracking down the error cause. Troubleshooting sshd What I find generally very useful in any such cases is to start sshd without letting it daemonize. The problem in my case was that neither syslog nor auth.log showed anything meaningful. When I started it from the terminal I got: # $(which sshd) -Ddp 10222 /etc/ssh/sshd_config line 8: address family must be specified before ListenAddress. Much better! This error message allowed me to see what's wrong and fix it. Neither of the log files contained this output. NB: at least on Ubuntu, $(which sshd) is the best method to satisfy sshd's requirement of an absolute path. Otherwise you'll get the following error: sshd re-exec requires execution with an absolute path . The -p 10222 makes sshd listen on that alternative port, overriding the configuration file - this is so that it doesn't clash with potentially running sshd instances. Make sure to choose a free port here. Finally: connect to the alternative port ( ssh -p 10222 user@server ). This method has helped me many, many times in finding issues, be it authentication issues or other types. To get really verbose output to stdout , use $(which sshd) -Ddddp 10222 (note the added dd to increase verbosity). For more debugging goodness check man sshd . The main advantage of this method is that it allows you to check the sshd configuration without having to restart the sshd on the default port. Normally this should not interfere with existing SSH connections, but I've seen it happen. So this allows one to validate the configuration file prior to - potentially - cutting off one's access to a remote server (for example I have that for some VPS and even for physical servers where I need to pay extra to get out-of-band access to the machine).
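Relatedly, if all you want is to validate the configuration file without opening a listener at all, sshd also has a dedicated test mode (again using the absolute-path trick): $(which sshd) -t It prints the offending line (such as the ListenAddress error above) and exits non-zero if the config is invalid, which makes it a quick pre-flight check before restarting the real daemon.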
{ "source": [ "https://unix.stackexchange.com/questions/128894", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/26852/" ] }
128,953
How can I display the top results in my terminal in real time so that the list is sorted by memory usage?
Use the top command in Linux/Unix: top Press shift + m after running the top command. Alternatively, you can interactively choose which column to sort on: press Shift + f to enter the interactive menu press the up or down arrow until the %MEM choice is highlighted press s to select %MEM choice press enter to save your selection press q to exit the interactive menu Or specify the sort order on the command line: # on OS X top -o MEM # other distros top -o %MEM References https://stackoverflow.com/questions/4802481/how-to-see-top-processes-by-actual-memory-usage
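If you'd rather have a non-interactive snapshot (handy in scripts), ps can sort by memory too, and watch makes it pseudo-real-time. A sketch assuming procps on Linux: ps aux --sort=-%mem | head -n 11 watch -n 1 'ps aux --sort=-%mem | head -n 11' The head -n 11 keeps the header line plus the top ten processes.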
{ "source": [ "https://unix.stackexchange.com/questions/128953", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/67370/" ] }
128,975
I have the following command: find / -name libGL.so.1 Which returns lots of lines with "Permission denied". I want to exclude such lines, so I added the following: find / -name libGL.so.1 | grep -v 'denied' But the output is the same - my grep -v 'denied' is not filtering out the lines with Permission denied . I've tried many variations, looked over grep tutorials, but I cannot figure out the problem. Any suggestions?
That's nothing to do with grep - it's because the pipe | redirects the standard output stream stdout whereas the Permission denied messages are in the standard error stream stderr . You could achieve the result you want by combining the streams using 2>&1 (redirect the stream whose file descriptor is 2 to the stream whose file descriptor is 1 ) so that stderr as well as stdout gets piped to the input of the grep command find / -name libGL.so.1 2>&1 | grep -v 'denied' but it would be more usual to simply discard stderr altogether by redirecting it to /dev/null find / -name libGL.so.1 2>/dev/null Using |& instead of 2>&1 | If you take a look at the Bash man page you'll likely notice this blurb: If |& is used, the standard error of command is connected to command2's standard input through the pipe; it is shorthand for 2>&1 | . So you can also use this construct as well if you want to join STDERR and STDOUT: find / -name libGL.so.1 |& grep -v 'denied'
{ "source": [ "https://unix.stackexchange.com/questions/128975", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/59802/" ] }
128,985
I consistently see answers quoting this link stating definitively "Don't parse ls !" This bothers me for a couple of reasons: It seems the information in that link has been accepted wholesale with little question, though I can pick out at least a few errors in casual reading. It also seems as if the problems stated in that link have sparked no desire to find a solution. From the first paragraph: ...when you ask [ls] for a list of files, there's a huge problem: Unix allows almost any character in a filename, including whitespace, newlines, commas, pipe symbols, and pretty much anything else you'd ever try to use as a delimiter except NUL. ... ls separates filenames with newlines. This is fine until you have a file with a newline in its name. And since I don't know of any implementation of ls that allows you to terminate filenames with NUL characters instead of newlines, this leaves us unable to get a list of filenames safely with ls . Bummer, right? How ever can we handle a newline terminated listed dataset for data that might contain newlines? Well, if the people answering questions on this website didn't do this kind of thing on a daily basis, I might think we were in some trouble. The truth is though, most ls implementations actually provide a very simple api for parsing their output and we've all been doing it all along without even realizing it. Not only can you end a filename with null, you can begin one with null as well or with any other arbitrary string you might desire. What's more, you can assign these arbitrary strings per file-type . Please consider: LS_COLORS='lc=\0:rc=:ec=\0\0\0:fi=:di=:' ls -l --color=always | cat -A total 4$ drwxr-xr-x 1 mikeserv mikeserv 0 Jul 10 01:05 ^@^@^@^@dir^@^@^@/$ -rw-r--r-- 1 mikeserv mikeserv 4 Jul 10 02:18 ^@file1^@^@^@$ -rw-r--r-- 1 mikeserv mikeserv 0 Jul 10 01:08 ^@file2^@^@^@$ -rw-r--r-- 1 mikeserv mikeserv 0 Jul 10 02:27 ^@new$ line$ file^@^@^@$ ^@ See this for more. Now it's the next part of this article that really gets me though: $ ls -l total 8 -rw-r----- 1 lhunath lhunath 19 Mar 27 10:47 a -rw-r----- 1 lhunath lhunath 0 Mar 27 10:47 a?newline -rw-r----- 1 lhunath lhunath 0 Mar 27 10:47 a space The problem is that from the output of ls , neither you or the computer can tell what parts of it constitute a filename. Is it each word? No. Is it each line? No. There is no correct answer to this question other than: you can't tell. Also notice how ls sometimes garbles your filename data (in our case, it turned the \n character in between the words "a" and "newline" into a ?question mark ... ... If you just want to iterate over all the files in the current directory, use a for loop and a glob: for f in *; do [[ -e $f ]] || continue ... done The author calls it garbling filenames when ls returns a list of filenames containing shell globs and then recommends using a shell glob to retrieve a file list! Consider the following: printf 'touch ./"%b"\n' "file\nname" "f i l e n a m e" | . /dev/stdin ls -1q f i l e n a m e file?name IFS=" " ; printf "'%s'\n" $(ls -1q) 'f i l e n a m e' 'file name' POSIX defines the -1 and -q ls operands so: -q - Force each instance of non-printable filename characters and <tab> s to be written as the question-mark ( '?' ) character. Implementations may provide this option by default if the output is to a terminal device. -1 - (The numeric digit one.) Force output to be one entry per line. Globbing is not without its own problems - the ? matches any character so multiple matching ? 
Though how to do this thing is not the point - it doesn't take much to do, after all, and is demonstrated below - I was interested in why not. As I consider it, the best answer to that question has been accepted. I would suggest you try to focus more often on telling people what they can do than on what they can't. You're a lot less likely, as I think, to be proven wrong at least.

But why even try? Admittedly, my primary motivation was that others kept telling me I couldn't. I know very well that ls output is as regular and predictable as you could wish, so long as you know what to look for. Misinformation bothers me more than most things do.

The truth is, though, with the notable exception of both Patrick's and Wumpus Q. Wumbley's answers (despite the latter's awesome handle), I regard most of the information in the answers here as mostly correct - a shell glob is both simpler to use and generally more effective when it comes to searching the current directory than parsing ls is. They are not, however, at least in my regard, reason enough to justify either propagating the misinformation quoted in the article above, nor are they acceptable justification to "never parse ls."

Please note that Patrick's answer's inconsistent results are mostly a result of him using zsh, then bash. zsh - by default - does not word-split $(command substituted) results in a portable manner. So when he asks where did the rest of the files go? the answer to that question is: your shell ate them. This is why you need to set the SH_WORD_SPLIT variable when using zsh and dealing with portable shell code. I regard his failure to note this in his answer as awfully misleading.

Wumpus's answer doesn't compute for me - in a list context the ? character is a shell glob. I don't know how else to say that. In order to handle a multiple-results case you need to restrict the glob's greediness. The following will just create a test base of awful file names and display it for you:

    { printf %b $(printf \\%04o `seq 0 127`) |
      sed "/[^[-b]*/s///g
           s/\(.\)\(.\)/touch '?\v\2' '\1\t\2' '\1\n\2'\n/g" |
      . /dev/stdin
      echo '`ls` ?QUOTED `-m` COMMA,SEP'
      ls -qm
      echo ; echo 'NOW LITERAL - COMMA,SEP'
      ls -m | cat
      ( set -- * ; printf "\nFILE COUNT: %s\n" $# )
    }

OUTPUT

    `ls` ?QUOTED `-m` COMMA,SEP
    ??\, ??^, ??`, ??b, [?\, [?\, ]?^, ]?^, _?`, _?`, a?b, a?b

    NOW LITERAL - COMMA,SEP
    ? \, ? ^, ? `, ? b, [ \, [ \, ] ^, ] ^, _ `, _ `, a b, a b

    FILE COUNT: 12

Now I'll safe away every character that isn't a /slash, -dash, :colon, or alphanumeric character in a shell glob, then sort -u the list for unique results. This is safe because ls has already safed away any non-printable characters for us. Watch:

    for f in $(
        ls -1q |
        sed 's|[^-:/[:alnum:]]|[!-\\:[:alnum:]]|g' |
        sort -u |
        { echo 'PRE-GLOB:' >&2
          tee /dev/fd/2
          printf '\nPOST-GLOB:\n' >&2
        }
    ) ; do
        printf "FILE #$((i=i+1)): '%s'\n" "$f"
    done

OUTPUT:

    PRE-GLOB:
    [!-\:[:alnum:]][!-\:[:alnum:]][!-\:[:alnum:]]
    [!-\:[:alnum:]][!-\:[:alnum:]]b
    a[!-\:[:alnum:]]b

    POST-GLOB:
    FILE #1: '? \'
    FILE #2: '? ^'
    FILE #3: '? `'
    FILE #4: '[ \'
    FILE #5: '[ \'
    FILE #6: '] ^'
    FILE #7: '] ^'
    FILE #8: '_ `'
    FILE #9: '_ `'
    FILE #10: '? b'
    FILE #11: 'a b'
    FILE #12: 'a b'

Below I approach the problem again, but using a different methodology. Remember that - besides \0 null - the / ASCII character is the only byte forbidden in a pathname.
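A quick hedged check of that claim (the error text below is from GNU touch and may differ on other systems): the filesystem rejects a slash in a name outright, while a newline is accepted without complaint.

    $ touch 'a/b'     # fails, assuming no directory 'a' exists:
                      # '/' is the path separator, never a name byte
    touch: cannot touch 'a/b': No such file or directory
    $ touch 'a
    b'                # succeeds: newline is a legal name byte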
I put globs aside here and instead combine the POSIX-specified -d option for ls with the also POSIX-specified -exec $cmd {} + construct for find. Because find will only ever naturally emit one / in sequence, the following easily procures a recursive and reliably delimited file list, including all dentry information for every entry. Just imagine what you might do with something like this:

    # note: to do this fully portably, substitute an actual
    # newline for the 'n' in the first sed invocation
    cd ..
    find ././ -exec ls -1ldin {} + |
    sed -e '\| *\./\./|{s||\n.///|;i///' -e \} |
    sed 'N;s|\(\n\)///|///\1|;$s|$|///|;P;D'

OUTPUT

    152398 drwxr-xr-x 1 1000 1000 72 Jun 24 14:49 .///testls///
    152399 -rw-r--r-- 1 1000 1000  0 Jun 24 14:49 .///testls/? \///
    152402 -rw-r--r-- 1 1000 1000  0 Jun 24 14:49 .///testls/? ^///
    152405 -rw-r--r-- 1 1000 1000  0 Jun 24 14:49 .///testls/? `///
    ...

ls -i can be very useful - especially when result uniqueness is in question:

    ls -1iq |
    sed '/ .*/s///;s/^/-inum /;$!s/$/ -o /' |
    tr -d '\n' |
    xargs find

These are just the most portable means I can think of. With GNU ls you could do:

    ls --quoting-style=WORD

And last, here's a much simpler method of parsing ls that I happen to use quite often when in need of inode numbers:

    ls -1iq | grep -o '^ *[0-9]*'

That just returns inode numbers - -i being another handy POSIX-specified option.
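To make the GNU --quoting-style line above concrete, here is one hedged possibility; the eval round trip is an illustration rather than part of the original, and --quoting-style=shell-always is GNU coreutils only, not POSIX:

    # Re-read shell-quoted ls output as real shell words; the single
    # quotes keep embedded newlines, tabs, and glob characters literal.
    eval "set -- $(ls --quoting-style=shell-always -1)"
    for f in "$@"; do
        printf 'FILE: %s\n' "$f"
    done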
I am not at all convinced of this, but let's suppose for the sake of argument that you could, if you're prepared to put in enough effort, parse the output of ls reliably, even in the face of an "adversary" - someone who knows the code you wrote and is deliberately choosing filenames designed to break it.

Even if you could do that, it would still be a bad idea.

Bourne shell[1] is a bad language. It should not be used for anything complicated, unless extreme portability is more important than any other factor (e.g. autoconf). I claim that if you're faced with a problem where parsing the output of ls seems like the path of least resistance for a shell script, that's a strong indication that whatever you are doing is too complicated to be a shell script, and you should rewrite the entire thing in Perl, Python, Julia, or any of the other good scripting languages that are readily available. As a demonstration, here's your last program in Python:

    import os, sys

    for subdir, dirs, files in os.walk("."):
        for f in dirs + files:
            ino = os.lstat(os.path.join(subdir, f)).st_ino
            sys.stdout.write("%d %s %s\n" % (ino, subdir, f))

This has no issues whatsoever with unusual characters in filenames - the output is ambiguous in the same way the output of ls is ambiguous, but that wouldn't matter in a "real" program (as opposed to a demo like this), which would use the result of os.path.join(subdir, f) directly.

Equally important, and in stark contrast to the thing you wrote, it will still make sense six months from now, and it will be easy to modify when you need it to do something slightly different. By way of illustration, suppose you discover a need to exclude dotfiles and editor backups, and to process everything in alphabetical order by basename:

    import os, sys

    filelist = []
    for subdir, dirs, files in os.walk("."):
        for f in dirs + files:
            if f[0] == '.' or f[-1] == '~':
                continue
            lstat = os.lstat(os.path.join(subdir, f))
            filelist.append((f, subdir, lstat.st_ino))

    filelist.sort(key=lambda x: x[0])
    for f, subdir, ino in filelist:
        sys.stdout.write("%d %s %s\n" % (ino, subdir, f))

[1] Yes, extended versions of the Bourne shell are readily available nowadays: bash and zsh are both considerably better than the original. The GNU extensions to the core "shell utilities" (find, grep, etc.) also help a lot. But even with all the extensions, the shell environment is not improved enough to compete with scripting languages that are actually good, so my advice remains "don't use shell for anything complicated" regardless of which shell you're talking about. "What would a good interactive shell that was also a good scripting language look like?" is a live research question, because there is an inherent tension between the conveniences required for an interactive CLI (such as being allowed to type cc -c -g -O2 -o foo.o foo.c instead of subprocess.run(["cc", "-c", "-g", "-O2", "-o", "foo.o", "foo.c"])) and the strictures required to avoid subtle errors in complex scripts (such as not interpreting unquoted words in random locations as string literals). If I were to attempt to design such a thing, I'd probably start by putting IPython, PowerShell, and Lua in a blender, but I have no idea what the result would look like.
{ "source": [ "https://unix.stackexchange.com/questions/128985", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/52934/" ] }
129,072
According to this page, $@ and $* do pretty much the same thing:

    The $@ holds list of all arguments passed to the script.
    The $* holds list of all arguments passed to the script.

After searching all the top hits in Google, I'm not able to find any explanation of why there would be two seemingly duplicate syntaxes. They appear to work the same in my scripts:

    $ cat foo.sh
    #!/bin/bash
    echo The parameters passed in are $@
    echo The parameters passed in are $*

    $ ./foo.sh herp derp
    The parameters passed in are herp derp
    The parameters passed in are herp derp

Is one preferred over the other? Why are there two builtin variables that do the exact same thing?

Additional sources: bash.cyberciti.biz
They aren't the same: when quoted, "$*" is a single string, whereas "$@" expands like an actual array, one word per parameter. To see the difference, execute the following script like so:

    > ./test.sh one two "three four"

The script:

    #!/bin/bash

    echo "Using \"\$*\":"
    for a in "$*"; do echo $a; done

    echo -e "\nUsing \$*:"
    for a in $*; do echo $a; done

    echo -e "\nUsing \"\$@\":"
    for a in "$@"; do echo $a; done

    echo -e "\nUsing \$@:"
    for a in $@; do echo $a; done

The explanation and the results for the four cases are below.

In the first case, the parameters are regarded as one long quoted string:

    Using "$*":
    one two three four

Case 2 (unquoted) - the string is broken into words by word splitting before the for loop sees them:

    Using $*:
    one
    two
    three
    four

Case 3 - each element of "$@" is treated as a quoted string, so the arguments survive intact:

    Using "$@":
    one
    two
    three four

Last case - each element is treated as an unquoted string, so the last one is again split by what amounts to for three four:

    Using $@:
    one
    two
    three
    four
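One difference the script above doesn't exercise (an addition here, not part of the original answer): when quoted, "$*" joins the parameters using the first character of IFS, while "$@" ignores IFS entirely. A small bash sketch:

    set -- one two "three four"
    IFS=,
    echo "$*"            # prints: one,two,three four  (a single word)
    printf '%s\n' "$@"   # prints one, two, three four on three lines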
{ "source": [ "https://unix.stackexchange.com/questions/129072", "https://unix.stackexchange.com", "https://unix.stackexchange.com/users/39263/" ] }