source_id | question | response | metadata |
---|---|---|---|
90,778 | I can't figure out how to properly bring up the wi-fi card on my laptop. When I turn it on and issue $ sudo iwconfig wlan0 txpower auto
$ sudo iwlist wlan0 scan
wlan0 Interface doesn't support scanning : Network is down It reports that the network is down. Trying to bring it up fails too: $ sudo ifup wlan0
wlan0 no private ioctls.
Failed to bring up wlan0. Apparently I'm missing some basic low-level iw... command. When I issue dhclient on the interface: $ sudo dhclient -v wlan0
Internet Systems Consortium DHCP Client 4.2.2
Copyright 2004-2011 Internet Systems Consortium.
All rights reserved.
For info, please visit https://www.isc.org/software/dhcp/
^C$ and interrupt it, it brings the device up somehow and then scanning etc. works. I'd like to avoid this obviously superfluous step. | sudo ip link set wlan0 up or sudo ifconfig wlan0 up . Answer from Apr 13'17: To elaborate on the answer by Martin: ifup and ifdown commands are part of ifupdown package , which now is considered a legacy frontend for network configuration , compared to newer ones, such as network manager . Upon ifup ifupdown reads configuration settings from /etc/network/interfaces ; it runs pre-up , post-up and post-down scripts from /etc/network , which include starting /etc/wpasupplicant/ifupdown.sh that processes additional wpa-* configuration options for wpa wifi, in /etc/network/interfaces (see zcat /usr/share/doc/wpasupplicant/README.Debian.gz for documentation). For WEP wireless-tools package plays similar role to wpa-supplicant . iwconfig is from wireless-tools , too. ifconfig at the same time is a lower level tool , which is used by ifupdown and allows for more flexibility. For instance, there are 6 modes of wifi adapter functioning and IIRC ifupdown covers only managed mode (+ roaming mode, which formally isn't mode?). With iwconfig and ifconfig you can enable e.g. monitor mode of your wireless card, while with ifupdown you won't be able to do that directly. ip command is a newer tool that works on top of netlink sockets , a new way to configure the kernel network stack from userspace (tools like ifconfig are built on top of ioctl system calls). | {
"source": [
"https://unix.stackexchange.com/questions/90778",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/22339/"
]
} |
90,784 | I successfully configured the PulseAudio server and client to send audio over the network.
It uses direct connection: http://www.freedesktop.org/wiki/Software/PulseAudio/Documentation/User/Network/#index1h2 I'd like to have a possibility to switch between client and server sound card i.e. temporarily disable network stream and go back to internal sound device. Using module-tunnel-sink I could simply move sink-input to desired device but is not an option since it doesn't work well with Flash: they lead me to believe that Flash is somehow sending the sound to PulseAudio in such a way that it creates a lot of network traffic (think lots of tiny packets, not bandwidth); this overwhelms the network "tunnel" PulseAudio With direct connection I have to restart the application every time I want to switch the output. Any idea how can I solve this? | sudo ip link set wlan0 up or sudo ifconfig wlan0 up . Answer from Apr 13'17: To elaborate on the answer by Martin: ifup and ifdown commands are part of ifupdown package , which now is considered a legacy frontend for network configuration , compared to newer ones, such as network manager . Upon ifup ifupdown reads configuration settings from /etc/network/interfaces ; it runs pre-up , post-up and post-down scripts from /etc/network , which include starting /etc/wpasupplicant/ifupdown.sh that processes additional wpa-* configuration options for wpa wifi, in /etc/network/interfaces (see zcat /usr/share/doc/wpasupplicant/README.Debian.gz for documentation). For WEP wireless-tools package plays similar role to wpa-supplicant . iwconfig is from wireless-tools , too. ifconfig at the same time is a lower level tool , which is used by ifupdown and allows for more flexibility. For instance, there are 6 modes of wifi adapter functioning and IIRC ifupdown covers only managed mode (+ roaming mode, which formally isn't mode?). With iwconfig and ifconfig you can enable e.g. monitor mode of your wireless card, while with ifupdown you won't be able to do that directly. ip command is a newer tool that works on top of netlink sockets , a new way to configure the kernel network stack from userspace (tools like ifconfig are built on top of ioctl system calls). | {
"source": [
"https://unix.stackexchange.com/questions/90784",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/47146/"
]
} |
90,793 | How can I create an ISO image from a folder or single files via terminal commands?
Currently I am doing this via Brasero's GUI, but I want to do it with a shell script. | Seems to be pretty straightforward to do with genisoimage , in the package with the same name on Debian: genisoimage -o output_image.iso directory_name There are many options to cover different cases, so you should check the man page to see what fits your particular use case. See also How-To: Create ISO Images from Command-Line | {
"source": [
"https://unix.stackexchange.com/questions/90793",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/40541/"
]
} |
90,842 | ##What I understand On *nix servers, we configure sending logs using facility.severity , where facility is the name of the (let's call it) "component" of the system, such as kernel, authentication, and so on; and severity is the "level" of each of the logs logged by a facility, such as info (informational), crit (critical) logs. So, if I want to send kernel critical logs, I'll use kern.crit . The combination of facility and severity is known as the priority, for example... priority = kern.crit facility = kern severity = crit ##Question There are "facilities" called local0 to local7 . What in the world are these local# facilities? I'm asking specifically about local6 , since it's usually the most common one I find in searches. My question is actually because I'm configuring Snort (SourceFire Intrusion Sensor) to send logs, so I wanted to know which facility to use. My question is not Snort specific though, because local# facilities are everywhere; on Cisco and IBM's WebSphere Application Server for instance. ##Research RFC3164 , which is where the syslog protocol is defined, only says: local6 - local use 6 Which doesn't really describe it, as opposed to: auth - security/authorization messages In Ubuntu, man syslog shows: LOG_LOCAL0 through LOG_LOCAL7
reserved for local use Also, vague. | General info The facilities local0 to local7 are "custom" unused facilities that syslog provides for the user. If a developer create an application and wants to make it log to syslog, or if you want to redirect the output of anything to syslog (for example, Apache logs), you can choose to send it to any of the local# facilities. Then, you can use /etc/syslog.conf (or /etc/rsyslog.conf ) to save the logs being sent to that local# to a file, or to send it to a remote server. Answer to my question I asked this question because I wanted to send logs to an external server, so I wanted to know which one to choose, not "write logs to a local# facility". I had to go back to the Snort documentation to find out what they are writing to the local# facilities. | {
"source": [
"https://unix.stackexchange.com/questions/90842",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/38128/"
]
} |
90,853 | I want to communicate between several computers on my network (static Ethernet), through SSH. In order to do that I need to run ssh-add every time I log in on a specific machine. What can I do so it's set up once and it doesn't ask me for the passphrase every time I log in or reboot my machine? I know that there is a way where you add some lines to the bash_profile file, but I still need to type the password every time I reboot/log in to a specific machine. if [ -z "$SSH_AUTH_SOCK" ] ; then
eval `ssh-agent -s`
ssh-add
fi | This is a typical example of a trade-off between security and convenience. Luckily, there are a number of options. The most appropriate solution depends on the usage scenario and desired level of security. ssh-key with passphrase, no ssh-agent Now the passphrase has to be entered every time the key is used for authentication. While this is the best option from a security standpoint, it offers the worst usability. This may also lead to a weak passphrase being chosen in order to lessen the burden of entering it repeatedly. ssh-key with passphrase, with ssh-agent Adding the following to ~/.bash_profile will automatically start ssh-agent and load the ssh-key(s) on login: if [ -z "$SSH_AUTH_SOCK" ] ; then
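# SSH_AUTH_SOCK is empty, so no agent is reachable from this session: start one and load the default key(s)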
eval `ssh-agent -s`
ssh-add
fi Now the passphrase must be entered upon every login. While slightly better from a usability perspective, this has the drawback that ssh-agent prompts for the passphrase regardless whether the key is to be used or not during the login session. Each new login also spawns a distinct ssh-agent instance which remains running with the added keys in memory even after logout, unless explicitly killed. To kill ssh_agent on logout, add the following to ~/.bash_logout if [ -n "$SSH_AUTH_SOCK" ] ; then
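# an agent socket exists for this session; kill the agent so the decrypted key does not outlive the login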
eval `/usr/bin/ssh-agent -k`
fi or the following to ~/.bash_profile trap 'test -n "$SSH_AUTH_SOCK" && eval `/usr/bin/ssh-agent -k`' 0 Creating multiple ssh-agent instances can be avoided by creating a persistent communication socket to the agent at a fixed location in the file system, such as in Collin Anderson's answer . This is an improvement over spawning multiple agents instances. However, unless explicitly killed, the decrypted key still remains in memory after logout. On desktops, ssh-agents included with the desktop environment, such as the Gnome Keyring SSH Agent , can be a better approach as they typically can be made to prompt for the passphrase the first time the ssh-key is used during a login session and store the decrypted private key in memory until the end of the session. ssh-key with passphrase, with ssh-ident ssh-ident is a utility that can manage ssh-agent on your behalf and load identities as necessary. It adds keys only once they are needed, regardless of how many terminals, SSH or login sessions require access to an ssh-agent . It can also add and use a different agent and different set of keys depending on the host you are connected to, or the directory ssh is invoked from. This allows for isolating keys when using agent forwarding with different hosts. It also allows using multiple accounts on sites like GitHub. To enable ssh-ident , install it and add the following alias to your ~/.bash_profile : alias ssh='/path/to/ssh-ident' ssh-key with passphrase, with keychain keychain is a small utility which manages ssh-agent on your behalf and allows the ssh-agent to remain running when the login session ends. On subsequent logins, keychain will connect to the existing ssh-agent instance. In practice, this means that the passphrase must be be entered only during the first login after a reboot. On subsequent logins, the unencrypted key from the existing ssh-agent instance is used. This can also be useful for allowing passwordless RSA/DSA authentication in cron jobs without passwordless ssh-keys. To enable keychain , install it and add something like the following to ~/.bash_profile : eval `keychain --agents ssh --eval id_rsa` From a security point of view, ssh-ident and keychain are worse than ssh-agent instances limited to the lifetime of a particular session, but they offer a high level of convenience. To improve the security of keychain , some people add the --clear option to their ~/.bash_profile keychain invocation. By doing this, passphrases must be re-entered on login as above, but cron jobs will still have access to the unencrypted keys after the user logs out. The keychain wiki page has more information and examples. ssh-key without passphrase From a security standpoint, this is the worst option since the private key is entirely unprotected in case it is exposed. This is, however, the only way to make sure that the passphrase need not be re-entered after a reboot. ssh-key with passphrase, with ssh-agent , passing passphrase to ssh-add from script While it might seem like a straightforward idea to pass the passphrase to ssh-add from a script, e.g. echo "passphrase\n" | ssh-add , this is not as straightforward as it seems as ssh-add does not read the passphrase from stdin , but opens /dev/tty directly for reading . This can be worked around with expect , a tool for automating interactive applications. Below is an example of a script which adds a ssh-key using a passphrase stored in the script: #!/usr/bin/expect -f
spawn ssh-add /home/user/.ssh/id_rsa
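# wait for ssh-add's passphrase prompt and answer it automatically (the passphrase is embedded in plaintext below)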
expect "Enter passphrase for /home/user/.ssh/id_rsa:"
send "passphrase\n";
expect "Identity added: /home/user/.ssh/id_rsa (/home/user/.ssh/id_rsa)"
interact Note that as the passphrase is stored in plaintext in the script, from a security perspective, this is hardly better than having a passwordless ssh-key. If this approach is to be used, it is important to make sure that the expect script containing the passphrase has proper permissions set to it, making it readable, writable, and runnable only by the key owner. | {
"source": [
"https://unix.stackexchange.com/questions/90853",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/46777/"
]
} |
90,883 | On the GNU Project webpage , there's a subsection called " All GNU packages " which lists the various software in the GNU project. Are there any GNU distributions which use only these packages -- i.e. a "pure" GNU operating system that runs on only GNU packages? I'm not particularly interested on whether this would be a practical operating system, just if it's theoretically possible to run GNU Hurd with purely the GNU packages . If not, what kind of software must still be implemented to achieve this goal (i.e. what's missing)? If GNU Hurd is the limiting factor, than if an exception is made for the kernel, would a pure GNU OS be possible using the Linux kernel? | The explicit goal of the GNU project is to provide a complete open source/libre/free operating system. Are there any GNU distributions which use only these packages -- i.e. a "pure" GNU operating system that runs on only GNU packages? There is a reference here to an official sounding GNU binary distro based on Hurd which "consists of GNU Mach, the Hurd, the C library and many applications". It may or may not be currently maintained, however, as I couldn't find any other online references to it. But it does sound like it fits your criteria. I'm not particularly interested on whether this would be a practical operating system, just if it's theoretically possible to run GNU Hurd with purely the GNU packages. The answer to the previous question implies an obvious answer WRT Hurd. Of course, it might help to define more precisely what would count as a reasonably complete "operating system". I'll provide two definitions: A collection of software sufficient to boot up to a shell prompt. A system which fulfills POSIX criteria. This is essentially a stricter version of #1, since the highest level mandatory entity in a POSIX system would be the shell. This is a little arbitrary, since an operating system designed to fulfill some special purpose might not need a shell at all. However, in that case it would become a more specific question about the nature of the "special purpose". In any case, the answer is yes , although GNU's implementation of some things may not be 100% perfectly POSIX compliant (and there are a handful of required utilities, such as crontab , which GNU doesn't provide). Here are the potential components: Kernel (Hurd) C library (glibc) Essential utilities (GNU core-utils, etc.) Shell (bash, which is a GNU project) I did not include a bootloader, since that is not part of the OS -- but in any case grub is also a GNU project. | {
"source": [
"https://unix.stackexchange.com/questions/90883",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/44555/"
]
} |
90,886 | I want to find some files and then move them. I can find the file with: $ find /tmp/ -ctime -1 -name x* I tried to move them to my ~/play directory with: $ find /tmp/ -ctime -1 -name x* | xargs mv ~/play/ but that didn't work. Obviously mv needs two arguments. Not sure if (or how) to reference the xargs 'current item' in the mv command? | Look at Stephane's answer for the best method, take a look at my answer for reasons not to use the more obvious solutions (and reasons why they are not the most efficient). You can use the -I option of xargs : find /tmp/ -ctime -1 -name "x*" | xargs -I '{}' mv '{}' ~/play/ Which works in a similar mechanism to find and {} . I would also quote your -name argument (because a file starting with x in the present directory would be file-globed and passed as an argument to find - which will not give the expected behavior!). However, as pointed out by manatwork, as detailed in the xargs man page: -I replace-str
Replace occurrences of replace-str in the initial-arguments with
names read from standard input. Also, unquoted blanks do not
terminate input items; instead the separator is the newline
character. Implies -x and -L 1. The important thing to note is that -L 1 means that only one line of output from find will be processed at a time. This means that's syntactically the same as: find /tmp/ -ctime -1 -name "x*" -exec mv '{}' ~/play/ (which executes a single mv operation for each file). Even using the GNU -0 xargs argument and the find -print0 argument causes exactly the same behavior of -I - this is to clone() a process for each file mv : find . -name "x*" -print0 | strace xargs -0 -I '{}' mv '{}' /tmp/other
.
.
read(0, "./foobar1/xorgslsala11\0./foobar1"..., 4096) = 870
mmap(NULL, 135168, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7fbb82fad000
open("/usr/lib/x86_64-linux-gnu/gconv/gconv-modules.cache", O_RDONLY) = 3
fstat(3, {st_mode=S_IFREG|0644, st_size=26066, ...}) = 0
mmap(NULL, 26066, PROT_READ, MAP_SHARED, 3, 0) = 0x7fbb82fa6000
close(3) = 0
clone(child_stack=0, flags=CLONE_CHILD_CLEARTID|CLONE_CHILD_SETTID|SIGCHLD, child_tidptr=0x7fbb835af9d0) = 661
wait4(-1, [{WIFEXITED(s) && WEXITSTATUS(s) == 0}], 0, NULL) = 661
--- SIGCHLD (Child exited) @ 0 (0) ---
clone(child_stack=0, flags=CLONE_CHILD_CLEARTID|CLONE_CHILD_SETTID|SIGCHLD, child_tidptr=0x7fbb835af9d0) = 662
wait4(-1, [{WIFEXITED(s) && WEXITSTATUS(s) == 0}], 0, NULL) = 662
--- SIGCHLD (Child exited) @ 0 (0) ---
.
.
. | {
"source": [
"https://unix.stackexchange.com/questions/90886",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/10043/"
]
} |
90,990 | I need to use the less command with the syntax highlighting of the vim command for python , C , bash and other languages. How do I apply syntax highlighting colors according to vim colors for less command ? | Syntax highlighting of less , works just fine on most *nix systems. apt install source-highlight
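# LESSOPEN pipes each file through the highlighter; LESS=-R makes less pass the resulting color escape codes through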
export LESSOPEN="| /usr/share/source-highlight/src-hilite-lesspipe.sh %s"
export LESS=' -R ' On Fedora/RedHat based distros use /usr/bin/src-hilite-lesspipe.sh instead. Even on Cygwin you can do it with the minor adjustment of the shell script path and installing with apt-cyg instead of apt . However, using this drastically slows down browsing of large files. I suggest to use alias in such a way to only implement the LESSOPEN export above when needed, like this: alias lessh='LESSOPEN="| /usr/bin/src-hilite-lesspipe.sh %s" less -M ' where the -M flag is convenient to also show filename and line number. Also remember to copy the script into your bin path: cp /usr/share/source-highlight/src-hilite-lesspipe.sh /usr/bin/src-hilite-lesspipe.sh UPDATE: 2019-07-24 Apparently, on more recent Cygwin installs, you have the following files in your path: source-highlight.exe
source-highlight-esc.sh
source-highlight-settings.exe So now you also need to execute the source-highlight-settings.exe that will add the configuration file: $HOME/.source-highlight/source-highlight.conf . | {
"source": [
"https://unix.stackexchange.com/questions/90990",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/21911/"
]
} |
91,027 | I've updated my HTPC from kernel 3.7.10 to 3.10.7 and it seems CONFIG_USB_SUSPEND is now gone from the kernel options and included in PM. The main problem I'm facing is that I have an external HDD and when suspending and waking up the HTPC, it isn't available to the system. The HDD wakes up (you can hear it spin up again), but when you try to access the mount point you get the following error: ZOTAC ~ # ls /media
ls: reading directory /media: Input/output error And on dmesg: [ 253.278260] EXT4-fs warning (device sdb1): __ext4_read_dirblock:908: error reading directory block (ino 2, block 0) In previous kernels, setting CONFIG_USB_SUSPEND=N would solve the problem, as the HDD would handle its hibernation by itself and the mount point was always accesible. When the HDD was on sleep and the HTPC needed something from the HDD's mount point, the HDD itself would wake up and operate without issues. Right now I've tried the following without success: Manually change /sys/bus/usb/devices/usb*/power/control to "on" instead of "auto" . Manually change /sys/bus/usb/devices/usb*/power/autosuspend to "-1" instead of "0" . But when waking up again the HTPC, the mount point is again inaccesible. As workarround I can unmount and remount the mount point and it works again without problems, but I'm sure there should be a way to avoid having the OS handle the usb autosuspend. Any idea how to disable usb autosuspend on kernel 3.7.10 or above? | For Ubuntu and Debian , usbcore is compiled in to the kernel, so creating entries in /etc/modprobe.d will NOT work. Instead, we need to change the kernel boot parameters. Edit the /etc/default/grub file and change the GRUB_CMDLINE_LINUX_DEFAULT line to add the usbcore.autosuspend=-1 option: GRUB_CMDLINE_LINUX_DEFAULT="quiet splash usbcore.autosuspend=-1" Note that quiet splash were already present options. So keep other options you have too. After save the file, update grub: sudo update-grub And reboot . Now check autosuspend value: cat /sys/module/usbcore/parameters/autosuspend And it should display -1 . Additional Info In the kernel documentation is stated that someday in the future this param will change to autosuspend_delay_ms (instead of autosuspend ), but so far, still the same name. The documentation for the value -1 can be found in the kernel source file drivers/usb/core/hub.c : 1808: * - If user has indicated to prevent autosuspend by passing
1809: * usbcore.autosuspend = -1 then keep autosuspend disabled. | {
"source": [
"https://unix.stackexchange.com/questions/91027",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/47315/"
]
} |
91,046 | Is it possible in Mutt to search for specific mail content using built-in functionality? Or, as a last resort, how can I configure grep to be used in Mutt? The documentation only mentions the search and limit functions, which only search headers. | search and limit can also actually search inside messages, depending on the search patterns you give. From the Patterns subsection of the Mutt reference: ~b EXPR messages which contain EXPR in the message body
=b STRING If IMAP is enabled, like ~b but searches for STRING on the server, rather than downloading each message and searching it locally.
~B EXPR messages which contain EXPR in the whole message
=B STRING If IMAP is enabled, like ~B but searches for STRING on the server, rather than downloading each message and searching it locally. That is, ~b only searches in the body, whereas ~B also searches in the headers. Note that this can be quite slow, since it may have to download each message one by one if they are not already cached. If you have a mutt version greater or equal to 1.5.12, you can cache the ones you are downloading for later use by setting message_cachedir to a directory where you want to store message bodies, which can significantly speed up searching them (and the same for headers with header_cache ). | {
"source": [
"https://unix.stackexchange.com/questions/91046",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/27656/"
]
} |
91,052 | As far as I can tell from the manpage of ntfsundelete getting a file back is done with e.g. ntfsundelete /dev/sdb3 -u -m important.txt which would undelete the file in-place. If I don't want in-place mode I see this option -d, --destination DIR Destination directory but when I use it -d /tmp/win it thinks it is a regex. How should -d be used? | search and limit can also actually search inside messages, depending on the search patterns you give. From the Patterns subsection of the Mutt reference: ~b EXPR messages which contain EXPR in the message body
=b STRING If IMAP is enabled, like ~b but searches for STRING on the server, rather than downloading each message and searching it locally.
~B EXPR messages which contain EXPR in the whole message
=B STRING If IMAP is enabled, like ~B but searches for STRING on the server, rather than downloading each message and searching it locally. That is, ~b only searches in the body, whereas ~B also searches in the headers. Note that this can be quite slow, since it may have to download each message one by one if they are not already cached. If you have a mutt version greater or equal to 1.5.12, you can cache the ones you are downloading for later use by setting message_cachedir to a directory where you want to store message bodies, which can significantly speed up searching them (and the same for headers with header_cache ). | {
"source": [
"https://unix.stackexchange.com/questions/91052",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/15105/"
]
} |
91,058 | When a child is forked then it inherits parent's file descriptors, if child closes the file descriptor what will happen? If child starts writing what shall happen to the file at the parent's end? Who manages these inconsistencies, kernel or user? When a process calls the close function to close a particular file through file descriptor. In the file table of the process, the reference count is decremented by one.
But since parent and child are both holding the same file, the reference count is 2 and after close it reduces to 1. Since it is not zero the process still continue to use file without any problem. See Terrence Chan UNIX system programming,(Unix kernel support for Files). | When a child is forked then it inherits parent's file descriptors, if child closes the file descriptor what will happen ? It inherits a copy of the file descriptor. So closing the descriptor in the child will close it for the child, but not the parent, and vice versa. If child starts writing what shall happen to the file at the parent's end ? Who manages these inconsistencies , kernel or user ? It's exactly (as in, exactly literally) the same as two processes writing to the same file. The kernel schedules the processes independently, so you will likely get interleaved data in the file. However, POSIX (to which *nix systems largely or completely conform), stipulates that read() and write() functions from the C API (which map to system calls) are "atomic with respect to each other [...] when they operate on regular files or symbolic links". The GNU C manually also provisionally promises this with regard to pipes (note the default PIPE_BUF , which is part of the proviso, is 64 kiB). This means that calls in other languages/tools, such as use of echo or cat , should be included in that contract, so if two indepenedent process try to write "hello" and "world" simultaneously to the same pipe, what will come out the other end is either "helloworld" or "worldhello", and never something like "hweolrllod". when a process call close function to close a particular open file through file descriptor.The file table of process decrement the reference count by one.But since parent and child both are holding the same file(there refrence count is 2 and after close it reduces to 1)since it is not zero so process still continue to use file without any problem. There are TWO processes, the parent and the child. There is no "reference count" common to both of them. They are independent. WRT what happens when one of them closes a file descriptor, see the answer to the first question. | {
"source": [
"https://unix.stackexchange.com/questions/91058",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/47132/"
]
} |
91,137 | In terminal emulators like GNOME terminal, I can hold the control key and use my mouse to select a block of text. Doing the same in Konsole has no effect -- the mouse simply selects one character after another, to the end of each line, wrapping around, as if I were using GNOME terminal and selecting text without holding the control key. How can I block select text in Konsole? | Does Ctrl+Alt work? Found it mentioned in a bug tracker , but I can't test it myself as I don't use KDE. | {
"source": [
"https://unix.stackexchange.com/questions/91137",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/28896/"
]
} |
91,197 | I want to find out the creation date of particular file, not modification date or access date. I have tried with ls -ltrh and stat filename . | stat -c '%w' file on filesystems that store creation time. Note that on Linux this requires coreutils 8.31, glibc 2.28 and kernel version 4.11 or newer . The POSIX standard only defines three distinct timestamps to be stored for each file: the time of last data access, the time of last data modification, and the time the file status last changed. Modern Linux filesystems, such as ext4, Btrfs, XFS ( v5 and later ) and JFS, do store the file creation time (aka birth time), but use different names for the field in question ( crtime in ext4/XFS, otime in Btrfs and JFS). Linux provides the statx(2) system call interface for retrieving the file birth time for filesystems that support it since kernel version 4.11. (So even when creation time support has been added to a filesystem, some deployed kernels have not immediately supported it, even after adding nominal support for that filesystem version, e.g., XFS v5 .) As Craig Sanders and Mohsen Pahlevanzadeh pointed out, stat does support the %w and %W format specifiers for displaying the file birth time (in human readable format and in seconds since Epoch respectively) prior to coreutils version 8.31. However, coreutils stat uses the statx() system call where available to retrieve the birth time only since version 8.31.
Prior to coreutils version 8.31 stat accessed the birth time via the get_stat_birthtime() provided by gnulib (in lib/stat-time.h ), which gets the birth time from the st_birthtime and st_birthtimensec fields of the stat structure returned by the stat() system call. While for instance BSD systems (and in extension OS X) provide st_birthtime via stat , Linux does not. This is why stat -c '%w' file outputs - (indicating an unknown creation time) on Linux prior to coreutils 8.31 even for filesystems which do store the creation time internally. As Stephane Chazelas points out , some filesystems, such as ntfs-3g, expose the file creation times via extended file attributes. | {
"source": [
"https://unix.stackexchange.com/questions/91197",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/38659/"
]
} |
91,282 | I know that VARIABLE=value creates an environment variable, and export VARIABLE=value makes it available to processes created by the current shell. env shows the current environment variables, but where do they live? What comprises an environment variable (or an environment , for that matter)? | An environment is not as magical as it might seem. The shell stores it in memory and passes to the execve() system call. The child process inherits it as an array pointer called environ . From the execve manpage: SYNOPSIS #include <unistd.h>
int execve(const char *filename, char *const argv[],
char *const envp[]); argv is an array of argument strings passed to the new program. By convention, the first of these strings should contain the filename
associated with the file being executed. envp is an array of strings,
conventionally of the form key=value, which are passed as environment
to the new program. The environ(7) manpage also offers some insight: SYNOPSIS extern char **environ; DESCRIPTION The variable environ points to an array of pointers to strings
called the "environment". The last pointer in this array has the
value NULL . (This variable must be declared in the user program,
but is declared in the header file <unistd.h> in case the
header files came from libc4 or libc5, and in case they came from
glibc and _GNU_SOURCE was defined.) This array of strings is made
available to the process by the exec(3) call that started the process. Both of these GNU manpages match the POSIX specification | {
"source": [
"https://unix.stackexchange.com/questions/91282",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/34709/"
]
} |
91,384 | On Ubuntu 12.04, when I sudo -s the $HOME variable is not changed, so if my regular user is regularuser , the situation goes like this: $ cd
$ pwd
/home/regularuser
$ sudo -s
# cd
# pwd
/home/regularuser I have abandoned Ubuntu a long time ago, so I cannot be sure, but I think this is the default behavior. So, my questions are: How is this done? Where is the config? How do I disable it? Edit: Thanks for the answers, which clarified things a bit, but I guess I must add a couple of questions, to get the answer I am looking for. In Debian sudo -s , changes the $HOME variable to /root . From what I get from the answers and man sudo the shell ran with sudo -s is the one given in /etc/passwd , right? However, on both Ubuntu and Debian the shell given in /etc/passwd for root is /bin/bash . In either system also, I cannot find where the difference in .profile or .bashrc files is, as far as $HOME is concerned, so that the behavior of sudo -s differs. Any help on this? | Sudo has many compile-time configuration options. You can list the settings in your version with sudo -V . One of the differences between the configuration in Debian wheezy and in Ubuntu 12.04 is that the HOME environment variable is preserved in Ubuntu but not in Debian; both distributions erase all environment variables except for a few that are explicitly marked as safe to preserve. Thus sudo -s preserves HOME on Ubuntu, while on Debian HOME is erased and sudo then sets it to the home directory of the target user. You can override this behavior in the sudoers file. Run visudo to edit the sudoers file. There are several relevant options: env_keep determines which environment variables are preserved. Use Defaults env_keep += "HOME" to retain the caller's HOME environment variable or Defaults env_keep -= "HOME" to erase it (and replace it by the home directory of the target user). env_reset determines whether environment variables are reset at all. Resetting environment variables is often necessary for rules that allow running a specific command, but does not have a direct security benefit for rules that allow running arbitrary commands anyway. always_set_home , if set, causes HOME to be overridden even if it was preserved due to env_reset being disabled or HOME being in the env_keep list. This option has no effect if HOME isn't preserved anyway. set_home is like always_set_home , but only applies to sudo -s , not when calling sudo with an explicit command. These options can be set for a given source user, a given target user or a given command; see the sudoers manual for details. You can always choose to override HOME for a given call to sudo by passing the option -H . The shell will never override the value of HOME . (It would set HOME if it was unset, but sudo always sets HOME one way or another.) If you run sudo -i , sudo simulates an initial login. This includes setting HOME to the home directory of the target user and invoking a login shell . | {
"source": [
"https://unix.stackexchange.com/questions/91384",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/33054/"
]
} |
91,390 | The idea would be to use it as... a pipe in a command.
For instance: say there's some kind of long path which has to be retyped again and again, followed by a pipe and a second program, i.e. "directory1/directory2/direcotry3/file.dat | less -I " I'd like that part to be stored in a variable, so it could be used like this: r="directory1/directory2/direcotry3 \| less -I -p "
$ cat path1/path2/$r <searchterm> Instead, I get cat: invalid option -- I
Try `cat --help' for more information. ... meaning the pipe clearly didn't work. | bash does not completely re-interpret the command line after expanding variables. To force this, put eval in front: r="directory1/directory2/direcotry3/file.dat | less -I "
eval "cat path1/path2/$r" Nevertheless, there are more elegant ways to do this (aliases, functions etc.). | {
"source": [
"https://unix.stackexchange.com/questions/91390",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/47501/"
]
} |
91,527 | I know that pkill has more filtering rules than killall . My question is, what is the difference between: pkill [signal] name and killall [signal] name I've read that killall is more effective and kill all processes and subprocesses (and recursively) that match with name program. pkill doesn't do this too? | The pgrep and pkill utilities were introduced in Sun's Solaris 7 and, as g33klord noted , they take a pattern as argument which is matched against the names of running processes. While pgrep merely prints a list of matching processes, pkill will send the specified signal (or SIGTERM by default) to the processes. The common options and semantics between pgrep and pkill comes in handy when you want to be careful and first review the list matching processes with pgrep , then proceed to kill them with pkill . pgrep and pkill are provided by the the procps package, which also provides other /proc file system utilities, such as ps , top , free , uptime among others. The killall command is provided by the psmisc package, and differs from pkill in that, by default, it matches the argument name exactly (up to the first 15 characters) when determining the processes signals will be sent to. The -e , --exact option can be specified to also require exact matches for names longer than 15 characters. This makes killall somewhat safer to use compared to pkill . If the specified argument contains slash ( / ) characters, the argument is interpreted as a file name and processes running that particular file will be selected as signal recipients. killall also supports regular expression matching of process names, via the -r , --regexp option. There are other differences as well. The killall command for instance has options for matching processes by age ( -o , --older-than and -y , --younger-than ), while pkill can be told to only kill processes on a specific terminal (via the -t option). Clearly then, the two commands have specific niches. Note that the killall command on systems descendant from Unix System V (notably Sun's Solaris , IBM's AIX and HP's HP-UX ) kills all processes killable by a particular user, effectively shutting down the system if run by root. The Linux psmisc utilities have been ported to BSD (and in extension Mac OS X ), hence killall there follows the "kill processes by name" semantics. | {
"source": [
"https://unix.stackexchange.com/questions/91527",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/46091/"
]
} |
91,538 | In the man page of kill it is written as following SYNOPSIS kill [ -s signal | -p ] [ -a ] [ -- ] pid ...
kill -l [ signal ]
-p Specify that kill should only print the process id (pid) of the
named processes, and not send any signals. But as I tried many times in both RH and RHEL, command like kill -s SIGHUP |-p 123 never worked and an error is always reported bash: -p: command not found Did I make any mistakes? | kill [ -s signal | -p ] This syntax in a manual page means: You can use kill -s signal or you can use kill -p , but you can't use both -s and -p at the same time. The pipe ( | ) stands for (exclusive) or in the documentation, it's not part of the command. When you type foo | bar in your shell, it will attempt to start foo and bar , and pipe the output of foo to the bar program. (That's the shell doing that, not foo (or bar ), the | is not passed to either process.) In your case, the second part is -p 123 , so the shell tries to find an executable called -p and fails with that error message. | {
"source": [
"https://unix.stackexchange.com/questions/91538",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/43312/"
]
} |
91,547 | In a bash script, I'm assigning a local variable so that the value depends on an external, global environment variable ( $MYAPP_ENV ). if [ "$MYAPP_ENV" == "PROD" ]
then
[email protected]
else
[email protected]
fi Is there a shorter (yet clean) way to write the above assignment? (Presumably using some kind of conditional operator / inline if.) | You could also use a case/switch in bash to do this: case "$MYAPP_ENV" in
PROD) SERVER_LOGIN="[email protected]" ;;
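# any other value of MYAPP_ENV falls through to this default branch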
*) SERVER_LOGIN="[email protected]" ;;
esac Or this method: [ "$MYAPP_ENV" = PROD ] &&
[email protected] ||
[email protected] | {
"source": [
"https://unix.stackexchange.com/questions/91547",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/1506/"
]
} |
91,561 | Good day! I use 'ps' to see command that starts process. The issue is that command is too long and 'ps' does not show it entirely. Example: I use command 'ps -p 2755 | less' and have following output PID TTY STAT TIME COMMAND
2755 ? Sl 305:05 /usr/java/jdk1.6.0_37/bin/java -Xms64m -Xmx512m -Dflume.monitoring.type=GANGLIA -Dflume.monitoring.hosts=prod.hostname.ru:8649 -cp /etc/flume-ng/conf/acrs-event:/usr/lib/flume-ng/lib/*:/etc/hadoop/conf:/usr/lib/hadoop/lib/activation-1.1.jar:/usr/lib/hadoop/lib/asm-3.2.jar:/usr/lib/hadoop/lib/avro-1.7.4.jar:/usr/lib/hadoop/lib/commons-beanutils-1.7.0.jar:/usr/lib/hadoop/lib/commons-beanutils-core-1.8.0.jar:/usr/lib/hadoop/lib/commons-cli-1.2.jar:/usr/lib/hadoop/lib/commons-codec-1.4.jar:/usr/lib/hadoop/lib/commons-collections-3.2.1.jar:/usr/lib/hadoop/lib/commons-compress-1.4.1.jar:/usr/lib/hadoop/lib/commons-configuration-1.6.jar:/usr/lib/hadoop/lib/commons-digester-1.8.jar:/usr/lib/hadoop/lib/commons-el-1.0.jar:/usr/lib/hadoop/lib/commons-httpclient-3.1.jar:/usr/lib/hadoop/lib/commons-io-2.1.jar:/usr/lib/hadoop/lib/commons-lang-2.5.jar:/usr/lib/hadoop/lib/commons-logging-1.1.1.jar:/usr/lib/hadoop/lib/commons-math-2.1.jar:/usr/lib/hadoop/lib/commons-net-3.1.jar:/usr/lib/hadoop/lib/guava-11.0.2.jar:/usr/lib/hadoop/lib/jackson-core-asl-1.8.8.jar:/usr/lib/hadoop/lib/jackson-jaxrs-1.8.8.jar:/usr/lib/hadoop/lib/jackson-mapper-asl-1.8.8.jar:/usr/lib/hadoop/lib/jackson-xc-1.8.8.jar:/usr/lib/hadoop/lib/jasper-compiler-5.5.23.jar:/usr/lib/hadoop/lib/jasper-runtime-5.5.23.jar:/usr/lib/hadoop/lib/jaxb-api-2.2.2.jar:/usr/lib/hadoop/lib/jaxb-impl-2.2.3-1.jar:/usr/lib/hadoop/lib/jersey-core-1.8.jar:/usr/lib/hadoop/lib/jersey-json-1.8.jar:/usr/lib/hadoop/lib/jersey-server-1.8.jar:/usr/lib/hadoop/lib/jets3t-0.6.1.jar:/usr/lib/hadoop/lib/jettison-1.1.jar:/usr/lib/hadoop/lib/jetty-6.1.26.cloudera.2.jar:/usr/lib/hadoop/lib/jetty-util-6.1.26.cloudera.2.jar:/usr/lib/hadoop/lib/jline-0.9.94.jar:/usr/lib/hadoop/lib/jsch-0.1.42.jar:/usr/lib/hadoop/lib/jsp-api-2.1.jar:/usr/lib/hadoop/lib/jsr305-1.3.9.jar:/usr/lib/hadoop/lib/junit-4.8.2.jar:/usr/lib/hadoop/lib/kfs-0.3.jar:/usr/lib/hadoop/lib/log4j-1.2.17.jar:/usr/lib/hadoop/lib/mockito-all-1.8.5.jar:/usr/lib/hadoop/lib/native:/usr/lib/hadoop/lib/paranamer-2.3.jar:/usr/lib/hadoop/lib/protobuf-java-2.4.0a.jar:/usr/lib/hadoop/lib/servlet-api-2.5.jar:/usr/lib/hadoop/lib/snappy-java-1.0.4.1.jar:/usr/lib/hadoop/lib/stax-api-1.0.1.jar:/usr/lib/hadoop/lib/xmlenc-0.52.jar:/usr/lib/hadoop/lib/xz-1.0.jar:/usr/lib/hadoop/lib/zookeeper-3.4.5-cdh4.3.0.jar:/usr/lib/hadoop/.//bin:/usr/lib/hadoop/.//cloudera:/usr/lib/hadoop/.//etc:/usr/lib/hadoop/.//hadoop-annotations-2.0.0-cdh4.3.0.jar:/usr/lib/hadoop/.//hadoop-annotations.jar:/usr/lib/hadoop/.//hadoop-auth-2.0.0-cdh4.3.0.jar:/usr/lib/hadoop/.//hadoop-auth.jar:/usr/lib/hadoop/.//hadoop-common-2.0.0-cdh4.3.0.jar:/usr/lib/hadoop/.//hadoop-common-2.0.0-cdh4.3.0-tests.jar:/usr/lib/hadoop/.//hadoop-common.jar:/usr/lib/hadoop/.//lib:/usr/lib/hadoop/.//libexec:/usr/lib/hadoop/.//sbin:/usr/lib/hadoop-hdfs/./:/usr/lib/hadoop-hdfs/lib/asm-3.2.jar:/usr/lib/hadoop-hdfs/lib/commons-cli-1.2.jar:/usr/lib/hadoop-hdfs/lib/commons-codec-1.4.jar:/usr/lib/hadoop-hdfs/lib/commons-daemon-1.0.3.jar:/usr/lib/hadoop-hdfs/lib/commons-el-1.0.jar:/usr/lib/hadoop-hdfs/lib/commons-io-2.1.jar:/usr/lib/hadoop-hdfs/lib/commons-lang-2.5.jar:/usr/lib/hadoop-hdfs/lib/commons-logging-1.1.1.jar:/usr/lib/hadoop-hdfs/lib/guava-11.0.2.jar:/usr/lib/hadoop-hdfs/lib/jackson-core-asl-1.8.8.jar:/usr/lib/hadoop-hdfs/lib/jackson-mapper-asl-1.8.8.jar:/usr/lib/hadoop-hdfs/lib/jasper-runtime-5.5.23.jar:/usr/lib/hadoop-hdfs/lib/jersey-core-1.8.jar:/usr/lib/hadoop-hdfs/lib/jersey-server-1.8.jar:/usr/lib/hadoop-hdfs/lib/jetty-6.1.26.cloudera.2.jar:/usr/lib/hadoop-hdfs
/lib/jetty-util-6.1.26.cloudera.2.jar:/usr/lib/hadoop-hdfs/lib/jline-0.9.94.jar:/usr/lib/hadoop-hdfs/lib/jsp-api-2.1.jar:/usr/lib/hadoop-hdfs/lib/jsr305-1.3.9.jar:/usr/lib/hadoop-hdfs/lib/log4j-1.2.17.jar:/usr/lib/hadoop-hdfs/lib/protobuf-java-2.4.0a.jar:/usr/lib/hadoop-hdfs/lib/servlet-api-2.5.jar:/usr/lib/hadoop-hdfs/lib/xmlenc-0.52.jar:/usr/lib/hadoop-hdfs/lib/zookeeper-3.4.5-cdh4.3.0.jar:/usr/lib/hadoop-hdfs/.//bin:/usr/lib/hadoop-hdfs/.//cloudera:/usr/lib/hadoop-hdfs/.//hadoop-hdfs-2.0.0-cdh4.3.0.jar:/usr/lib/hadoop-hdfs/.//hadoop-hdfs-2.0. So, the command line is too long and the command stops mid-phrase. How can I see it whole? | On Linux, with the ps from procps(-ng) : ps -fwwp 2755 In Linux versions prior to 4.2, it's still limited though (by the kernel ( /proc/2755/cmdline ) to 4k) and you can't get more except by asking the process to tell it to you or use a debugger. $ sh -c 'sleep 1000' $(seq 4000) &
[1] 31149
$ gdb -p $! /bin/sh
[...]
Attaching to program: /bin/dash, process 31149
[...]
(gdb) bt
#0 0x00007f40d11f40aa in wait4 () at ../sysdeps/unix/syscall-template.S:81
[...]
#7 0x00007f40d115c995 in __libc_start_main (main=0x4022c0, argc=4003, ubp_av=0x7fff5b9f5a88, init=<optimized out>, fini=<optimized out>, rtld_fini=<optimized out>, stack_end=0x7fff5b9f5a78)
at libc-start.c:260
#8 0x00000000004024a5 in ?? ()
#9 0x00007fff5b9f5a78 in ?? ()
#10 0x0000000000000000 in ?? ()
(gdb) frame 7
#7 0x00007f40d115c995 in __libc_start_main (main=0x4022c0, argc=4003, ubp_av=0x7fff5b9f5a88, init=<optimized out>, fini=<optimized out>, rtld_fini=<optimized out>, stack_end=0x7fff5b9f5a78)
at libc-start.c:260
(gdb) x/4003s *ubp_av
0x7fff5b9ff83e: "sh"
0x7fff5b9ff841: "-c"
0x7fff5b9ff844: "sleep 1000"
0x7fff5b9ff84f: "1"
0x7fff5b9ff851: "2"
[...]
0x7fff5ba04212: "3999"
0x7fff5ba04217: "4000" To print the 4th arg with up to 5000 characters: (gdb) set print elements 5000
(gdb) p ubp_av[3] If you want something non-intrusive, you could try and get the information from /proc/2755/mem (note that if the kernel.yama.ptrace_scope is not set to 0, you'll need superuser permissions for that). This below works for me (prints all the arguments and environment variables), but there's not much guarantee I would think (the error and unexpected input handling is left as an exercise to the reader): $ perl -e '$p=shift;open MAPS, "/proc/$p/maps";
($m)=grep /\[stack\]/, <MAPS>;
($a,$b)=map hex, $m =~ /[\da-f]+/g;
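# $a and $b are the start and end addresses of the process stack mapping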
open MEM, "/proc/$p/mem" or die "open mem: $!";
seek MEM,$a,0; read MEM, $c,$b-$a;
print((split /\0{2,}/,$c)[-1])' "$!" | tr \\0 \\n | head
sh
-c
sleep 1000
1
2
3
4
5
6
7 (replace "$!" with the process id). The above uses the fact that Linux puts the strings pointed to by argv[] , envp[] and the executed filename at the bottom of the stack of the process. The above looks in that stack for the bottom-most string in between two sets of two or more consecutive NUL bytes. It doesn't work if any of the arguments or env strings is empty, because then you'll have a sequence of 2 NUL bytes in the middle of those argv or envp. Also, we don't know where the argv strings stop and where the envp ones start. A work around for that would be to refine that heuristic by looking backwards for the actual content of argv[] (the pointers). This below works on i386 and amd64 architecture for ELF executables at least: perl -le '$p=shift;open MAPS, "/proc/$p/maps";
($m)=grep /\[stack\]/, <MAPS>;
($a,$b)=map hex, $m =~ /[\da-f]+/g;
open MEM, "/proc/$p/mem" or die "open mem: $!";
seek MEM,$a,0; read MEM, $c,$b-$a;
$c =~ /.*\0\0\K[^\0].*\0[^\0]*$/s;
@a=unpack"L!*",substr$c,0,$-[0];
for ($i = $#a; $i >=0 && $a[$i] != $a+$-[0];$i--) {}
for ($i--; $i >= 0 && ($a[$i]>$a || $a[$i]==0); $i--) {}
$argc=$a[$i++];
print for unpack"(Z*)$argc",substr$c,$a[$i]-$a;' "$!" Basically, it does the same as above, but once it has found the first string of argv[] (or at least one of the argv[] or envp[] strings if there are empties), it knows its address, so it looks backward in the top rest of the stack for a pointer with that same value. Then keeps looking backwards until it finds a number that can't be a pointer to those, and that is argc . Then the next integer is argv[0] . And knowing argv[0] and argc , it can display the list of arguments. That doesn't work if the process has written to its argv[] possibly overriding some NUL delimiters or if argc is 0 ( argc is generally at least 1 to include argv[0] ) but should work in the general case at least for ELF executables. In 4.2 and newer, /proc/<pid>/cmdline is no longer truncated, but ps itself has a maximum display width of 128K. | {
"source": [
"https://unix.stackexchange.com/questions/91561",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/46180/"
]
} |
91,570 | For example I have a link http://www.abc.com/123/def/ghi/jkl.mno .
I want to download it using wget or curl and get the name of output file as def_ghi_jkl.mno , where the part def_ghi is taken from the link. I will put this wget command in a script to download multiple files so it can't be giving the output file name explicitly. | curl has the -o , --output option which takes a single argument indicating the filename output should be written to instead of stdout . If you are using {} or [] to surround elements in the URL (usually used to fetch multiple documents), you can use # followed by a number in the filename specifier. Each such variable will be replaced with the corresponding string for the URL being fetched. To fetch multiple files, add a comma-separated list of tokens inside the {} . If parts of the URLs to be fetched are sequential numbers, you can specify a range with [] . Examples: curl http://www.abc.com/123/{def}/{ghi}/{jkl}.mno -o '#1_#2_#3.mno' Note the quotes around the option argument (not needed unless the the filename starts with one of the expanded variables).
This should result in the output file def_ghi_jkl.mno . curl http://www.abc.com/123/{def}/{ghi}/{jkl,pqr,stu}.mno -o '#1_#2_#3.mno' This should result in the output files def_ghi_jkl.mno , def_ghi_pqr.mno and def_ghi_stu.mno . curl http://www.abc.com/123/{def}/{ghi}/[1-3].mno -o '#1_#2_#3.mno' This should result in the output files def_ghi_1.mno , def_ghi_2.mno , def_ghi_3.mno . | {
"source": [
"https://unix.stackexchange.com/questions/91570",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/47567/"
]
} |
91,596 | I'm trying to produce this behaviour: grep 192.168.1 *.txt By passing a string into grep via Xargs but it is going on the end instead of as the first parameter. echo 192.168.1 | xargs grep *.txt I need to tell xargs (or something similar) to put the incoming string between 'grep' and '*' instead of on the end. How do I do this? | $ echo 192.168.1. | xargs -I{} grep {} *.txt Example Sample files: $ cat {1..3}.txt
192.168.1
192.168.1
192.168.1 Example run: # example uses {} but you can use whatever, such as -I{} or -Ifoo
$ echo 192.168.1. | xargs -I{} grep {} *.txt
1.txt:192.168.1.
2.txt:192.168.1.
3.txt:192.168.1. | {
"source": [
"https://unix.stackexchange.com/questions/91596",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/28908/"
]
} |
91,620 | I am attempting to install Arch linux to a new (and very crappy) HP Pavillion 15 Notebook. This is a UEFI-based machine. After several swings at it, I have managed to get pretty far. Legacy mode is disabled in the system setup, and I have EFI-booted to the Arch DVD I burned, and progressed through both the Arch Beginner's Guide and the more advanced Installation Guide to the point where I am installing grub. While chroot ed, I execute: grub-install --target=x86_64-efi --efi-directory=/boot --bootloader-id=arch_grub --recheck --debug This emits a ton of output, including: EFI variables are not supported on this system The first time I got to this point, I continued with the installation, not knowing if it was an actual problem. Turns out it was, as when I rebooted the machine no bootable medium could be found and the machine refused to boot. I was able at that point to go in to the UEFI setup menu and select an EFI file to boot, and the Arch Linux would boot up. But I am now going back and reinstalling again, trying to fix the problem above. How can I get GRUB to install correctly? | The problem was simply that the efivarfs kernel module was not loaded. This can be confirmed by: sh-4.2# efivar-tester
UEFI variables are not supported on this machine. If you are chroot ed in to your new install, exit out, and then enable efivarfs : exit
modprobe efivarfs ( efivarfs used to be efivars , so if this returns an error try modprobe efivars ) ...and then chroot back in. In my case, this means: chroot /mnt but you should chroot the same way you did before. Once back in, test again: efivar-tester This will no longer report an error, and you can install grub the same way you did before. grub-install --target=x86_64-efi --efi-directory=/boot --bootloader-id=arch_grub --recheck --debug | {
"source": [
"https://unix.stackexchange.com/questions/91620",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/19581/"
]
} |
91,638 | I am trying to execute the following: exec &>filename After this I am not able to see anything including what I typed, alright. I frantically try , exec 1>&1 and exec 2>&2 , but nothing happens. Now , without killing the shell , how do I get back the output redirected to the stdout and error redirected to stderr respectively?
Are the file descriptors the only way to refer standard [in|out]put and stderr? | After you run exec &>filename , the standard output and standard error of the shell go to filename . Standard input is file descriptor 0 by definition, and standard output is fd 1 and standard error is fd 2. A file descriptor isn't either redirected or non-redirected: it always go somewhere (assuming that the process has this descriptor open). To redirect a file descriptor means to change where it goes. When you ran exec &>filename , stdout and stderr were formerly connected to the terminal, and became connected to filename . There is always a way to refer to the current terminal: /dev/tty . When a process opens this file, it always means the process's controlling terminal , whichever it is. So if you want to get back that shell's original stdout and stderr, you can do it because the file they were connected to is still around. exec &>/dev/tty | {
"source": [
"https://unix.stackexchange.com/questions/91638",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/20291/"
]
} |
91,684 | I have been using this command successfully, which changes a variable in a config file and then executes a Python script within a loop: for((i=114;i<=255;i+=1)); do echo $i > numbers.txt; python DoMyScript.py; done As each DoMyScript.py instance takes about 30 seconds to run before terminating, I'd like to relegate them to the background while the next one can be spawned. I have tried what I am familiar with, by adding in an ampersand as below: for((i=114;i<=255;i+=1)); do echo $i > numbers.txt; python DoMyScript.py &; done However, this results in the below error: -bash: syntax error near unexpected token `;' | Drop the ; after & . This is a syntactic requirement for((i=114;i<=255;i+=1)); do echo $i > numbers.txt;python DoMyScript.py & done | {
"source": [
"https://unix.stackexchange.com/questions/91684",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/26076/"
]
} |
91,701 | I run a VPS which I would like to secure using UFW, allowing connections only to port 80.
However, in order to be able to administer it remotely, I need to keep port 22 open and make it reachable from home. I know that UFW can be configured to allow connections to a port only from specific IP address: ufw allow proto tcp from 123.123.123.123 to any port 22 But my IP address is dynamic, so this is not yet the solution. The question is: I have dynamic DNS resolution with DynDNS, so is it possible to create a Rule using the domain instead of the IP? I already tried this: ufw allow proto tcp from mydomain.dyndns.org to any port 22 but I got ERROR: Bad source address | I don't believe this is possible with ufw . ufw is just a frontend to iptables which also lacks this feature, so one approach would be to create a crontab entry which would periodically run and check if the IP address has changed. If it has then it will update it. You might be tempted to do this: $ iptables -A INPUT -p tcp --src mydomain.dyndns.org --dport 22 -j ACCEPT But this will resolve the hostname to an IP and use that for the rule, so if the IP later changes this rule will become invalid. Alternative idea You could create a script like so, called, iptables_update.bash . #!/bin/bash
#allow a dyndns name
HOSTNAME=HOST_NAME_HERE
LOGFILE=LOGFILE_NAME_HERE
Current_IP=$(host $HOSTNAME | cut -f4 -d' ')
if [ ! -f "$LOGFILE" ] ; then
iptables -I INPUT -i eth1 -s $Current_IP -j ACCEPT
echo $Current_IP > $LOGFILE
else
Old_IP=$(cat $LOGFILE)
if [ "$Current_IP" = "$Old_IP" ] ; then
echo IP address has not changed
else
iptables -D INPUT -i eth1 -s $Old_IP -j ACCEPT
iptables -I INPUT -i eth1 -s $Current_IP -j ACCEPT
/etc/init.d/iptables save
echo $Current_IP > $LOGFILE
echo iptables have been updated
fi
fi source: Using IPTables with Dynamic IP hostnames like dyndns.org With this script saved you could create a crontab entry like so in the file /etc/crontab : */5 * * * * root /etc/iptables_update.bash > /dev/null 2>&1 This entry would then run the script every 5 minutes, checking to see if the IP address assigned to the hostname has changed. If so then it will create a new rule allowing it, while deleting the old rule for the old IP address. | {
"source": [
"https://unix.stackexchange.com/questions/91701",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/39146/"
]
} |
91,725 | I want to kill a bunch of processes using this command: sudo ps ax | grep node | awk '{print $1}' | xargs kill But it gives me operation not permitted even with sudo. Then I tried with kill -9 individually for each process and it worked. Now my question is how do I pass -9 flag to kill via xargs? Neither xargs kill -9 nor xargs -9 kill worked for me. | | {
"source": [
"https://unix.stackexchange.com/questions/91725",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/19718/"
]
} |
91,774 | For example on php-fpm: #listen = 127.0.0.1:9000
listen = /var/run/php-fpm/php-fpm.sock Are there any major performance differences between using unix socket-based listeners and TCP ports? (Not just for PHP but in general. Is it different for each service?) | UNIX domain sockets should offer better performance than TCP sockets over the loopback interface (less copying of data, fewer context switches). Beware though that sockets are only reachable from programs that are running on the same server (there's no network support, obviously) and that the programs need to have the necessary permissions to access the socket file.
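For the php-fpm case in the question, a minimal sketch of both sides might look like the following; the socket path is the one from the question, while the www-data user is an assumption, so adjust both to your distribution: ; php-fpm pool configuration
listen = /var/run/php-fpm/php-fpm.sock
listen.owner = www-data
listen.group = www-data
listen.mode = 0660
# matching nginx directive, inside the PHP location block
fastcgi_pass unix:/var/run/php-fpm/php-fpm.sock;
# instead of: fastcgi_pass 127.0.0.1:9000;
The listen.owner , listen.group and listen.mode directives are what give the web server permission to open the socket file mentioned above. | {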
"source": [
"https://unix.stackexchange.com/questions/91774",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/47368/"
]
} |
91,799 | Is there a command I can use to ask the dhcpd server which addresses have been assigned? | No, you can only get this information server side from the DHCP server. This information is contained in the DHCP server's .lease file: /var/lib/dhcpd/dhcpd.leases , if you're using ISC's DHCP server. Example $ more /var/lib/dhcpd/dhcpd.leases
# All times in this file are in UTC (GMT), not your local timezone. This is
# not a bug, so please don't ask about it. There is no portable way to
# store leases in the local timezone, so please don't request this as a
# feature. If this is inconvenient or confusing to you, we sincerely
# apologize. Seriously, though - don't ask.
# The format of this file is documented in the dhcpd.leases(5) manual page.
# This lease file was written by isc-dhcp-V3.0.5-RedHat
lease 192.168.1.100 {
starts 4 2011/09/22 20:27:28;
ends 1 2011/09/26 20:27:28;
tstp 1 2011/09/26 20:27:28;
binding state free;
hardware ethernet 00:1b:77:93:a1:69;
uid "\001\000\033w\223\241i";
}
...
... | {
"source": [
"https://unix.stackexchange.com/questions/91799",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/28488/"
]
} |
91,854 | I know what a kernel panic is, but I've also seen the term "kernel oops". I'd always thought they were the same, but maybe not. So: What is a kernel oops, and how is it different from a kernel panic? | An " oops " is a Linux kernel problem bad enough that it may affect system reliability. Some "oops"es are bad enough that the kernel decides to stop running immediately, lest there be data loss or other damage. These are called kernel panics . The latter term is primordial, going back to the very earliest versions of Linux's Unix forebears, which also print a "panic" message on the console when they happen. The original AT&T Unix kernel function that handles such conditions is called panic() . You can trace it back through the public source code releases of AT&T Unix to its very first releases: The OpenSolaris version of panic() was released by Sun in 2005 . It is fairly elaborate, and its header comments explain a lot about what happens in a panic situation. The Unix V4 implementation of panic() was released in 1973. It basically just prints the core state of the kernel to the console and stops the processor. That function is substantially unchanged in Unix V3 according to Amit Singh, who famously dissected an older version of Mac OS X and explained it. That first link takes you to a lovely article explaining macOS's approach to the implementation of panic() , which starts off with a relevant historical discussion. The " unix-jun72 " project to resurrect Unix V1 from scanned source code printouts shows a very early PDP-11 assembly version of this function, written sometime before June 1972, before Unix was fully rewritten in C. By this point, its implementation is whittled down to a 6-instruction routine that does little more than restart the PDP-11. | {
"source": [
"https://unix.stackexchange.com/questions/91854",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/29146/"
]
} |
91,937 | I just switched to a Macbook Air. I installed zsh using homebrew, but when I use some of the code that I (originally had) in my .zshrc , I get an error saying that .dircolors was not found . Below is the code in question: zstyle ':completion:*' auto-description 'specify: %d'
zstyle ':completion:*' completer _expand _complete _correct _approximate
zstyle ':completion:*' format 'Completing %d'
zstyle ':completion:*' group-name ''
zstyle ':completion:*' menu select=2
eval "$(dircolors -b)"
zstyle ':completion:*:default' list-colors ${(s.:.)LS_COLORS}
zstyle ':completion:*' list-colors ''
zstyle ':completion:*' list-prompt %SAt %p: Hit TAB for more, or the character to insert%s
zstyle ':completion:*' matcher-list '' 'm:{a-z}={A-Z}' 'm:{a-zA-Z}={A-Za-z}' 'r:|[._-]=* r:|=* l:|=*'
zstyle ':completion:*' menu select=long
zstyle ':completion:*' select-prompt %SScrolling active: current selection at %p%s
zstyle ':completion:*' use-compctl false
zstyle ':completion:*' verbose true
zstyle ':completion:*:*:kill:*:processes' list-colors '=(#b) #([0-9]#)*=0=01;31'
zstyle ':completion:*:kill:*' command 'ps -u $USER -o pid,%cpu,tty,cputime,cmd' Is dircolors not shipped with Mac OS X? How should I install it? Update: If I run dircolors directly on the shell I get: bash: dircolors; command not found | The command dircolors is specific to GNU coreutils, so you'll find it on non-embedded Linux and on Cygwin but not on other unix systems such as OSX. The generated settings in your .zshrc aren't portable to OSX. Since you're using the default colors, you can pass an empty string to the list-colors to get colors in file completions. For colors with the actual ls command , set the CLICOLOR environment variable on OSX, and also set LSCOLORS (see the manual for the format) if you want to change the colors. if whence dircolors >/dev/null; then
eval "$(dircolors -b)"
zstyle ':completion:*:default' list-colors ${(s.:.)LS_COLORS}
alias ls='ls --color'
else
export CLICOLOR=1
zstyle ':completion:*:default' list-colors ''
fi If you wanted to set non-default colors ( dircolors with a file argument), my recommendation would be to hard-code the output of dircolors -b ~/.dircolors in your .zshrc and use these settings for both zsh and GNU ls. LS_COLORS=…
zstyle ':completion:*:default' list-colors ${(s.:.)LS_COLORS}
if whence dircolors >/dev/null; then
export LS_COLORS
alias ls='ls --color'
else
export CLICOLOR=1
LSCOLORS=…
fi | {
"source": [
"https://unix.stackexchange.com/questions/91937",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/4531/"
]
} |
92,005 | I am trying to setup OpenVPN but I am getting this error: #./build-ca
grep: /etc/openvpn/easy-rsa/2.0/openssl.cnf: No such file or directory
pkitool: KEY_CONFIG (set by the ./vars script) is pointing to the wrong
version of openssl.cnf: /etc/openvpn/easy-rsa/2.0/openssl.cnf
The correct version should have a comment that says: easy-rsa version 2.x I have OpenSSL* installed. Do I need to set a location? | ln -s openssl-1.0.0.cnf openssl.cnf
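That is, create the symlink inside the easy-rsa directory that the error message points at (the 2.0 path below is taken from the error above, and openssl-1.0.0.cnf is one of the templates easy-rsa 2.x ships; pick the openssl-*.cnf that matches your installed OpenSSL version): cd /etc/openvpn/easy-rsa/2.0
ln -s openssl-1.0.0.cnf openssl.cnf
source ./vars
./build-ca
The symlink simply tells the ./vars script which bundled OpenSSL template to use, which is what the KEY_CONFIG warning was complaining about. | {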
"source": [
"https://unix.stackexchange.com/questions/92005",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/47368/"
]
} |
92,071 | I was changing file permissions and I noticed that some of the permissions
modes ended in @ as in -rw-r--r--@ , or a + as in drwxr-x---+ . I've looked
at the man pages for chmod and chown, and searched around different help
forums, but I can't find anything about what these symbols mean. | + means that the file has additional ACLs set. You can set them with setfacl and query them with getfacl : martin@martin ~ % touch file
martin@martin ~ % ll file
-rw-rw-r-- 1 martin martin 0 Sep 23 21:59 file
martin@martin ~ % setfacl -m u:root:rw file
martin@martin ~ % ll file
-rw-rw-r--+ 1 martin martin 0 Sep 23 21:59 file
martin@martin ~ % getfacl file
# file: file
# owner: martin
# group: martin
user::rw-
user:root:rw-
group::rw-
mask::rw-
other::r-- I haven't seen @ yet personally, but according to this thread it signifies extended attributes, at least on MacOS. Try xattr -l on such a file. | {
"source": [
"https://unix.stackexchange.com/questions/92071",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/47798/"
]
} |
92,123 | I have this command to backup a remote machine. The problem is that I need root rights to read and copy all files. I have no root user enabled for security reasons and use sudo the Ubuntu way. Would I need some cool piping or something to do this? rsync -chavzP --stats [email protected]:/ /media/backupdisk/myserverbackup/ | I would recommend that you just use the root account in the first place. If you set it up like this: Configure your sshd_config on the target machine to PermitRootLogin without-password . Use ssh-keygen on the machine that pulls the backup to create an SSH private key (only if you don't already have an SSH key). Do not set a passphrase. Google a tutorial if you need details for this, there should be plenty. Append the contents of /root/.ssh/id_rsa.pub of the backup machine to the /root/.ssh/authorized_keys of your target machine. Now your backup machine has root access to your target machine, without having to use password authentication. then the resulting setup should be pretty safe. sudo , especially combined with NOPASSWD as recommended in the comments, has no security benefits over just using the root account. For example this suggestion: add the following to your /etc/sudoers file: rsyncuser ALL= NOPASSWD:/usr/bin/rsync essentially gives rsyncuser root permissions anyway. You ask: @MartinvonWittich Easy to gain a full root shell because rsync executed with sudo ? Walk [m]e [through] that please. Well, simple. With the recommended configuration, rsyncuser may now run rsync as root without even being asked for a password. rsync is a very powerful tool to manipulate files, so now rsyncuser has a very powerful tool to manipulate files with root permissions. Finding a way to exploit this took me just a few minutes (tested on Ubuntu 13.04, requires dash , bash didn't work): martin@martin ~ % sudo rsync --perms --chmod u+s /bin/dash /bin/rootdash
martin@martin ~ % rootdash
# whoami
root
# touch /etc/evil
# tail -n1 /etc/shadow
dnsmasq:*:15942:0:99999:7::: As you can see, I have created myself a root shell; whoami identifies my account as root, I can create files in /etc , and I can read from /etc/shadow . My exploit was to set the setuid bit on the dash binary; it causes Linux to always run that binary with the permissions of the owner, in this case root. Having a real root is not [recommended] for good reasons. – redanimalwar 15 hours ago No, clumsily working around the root account in situations where it is absolutely appropriate to use it is not for good reasons. This is just another form of cargo cult programming - you don't really understand the concept behind sudo vs root, you just blindly apply the belief "root is bad, sudo is good" because you've read that somewhere. On the one hand, there are situations where sudo is definitely the right tool for the job. For example, when you're interactively working on a graphical Linux desktop, let's say Ubuntu, then having to use sudo is fine in those rare cases where you sometimes need root access. Ubuntu intentionally has a disabled root account and forces you to use sudo by default to prevent users from just always using the root account to log in. When the user just wants to use e.g. the web browser, then logging in as root would be a dangerous thing, and therefore not having a root account by default prevents stupid people from doing this. On the other hand, there are situations like yours, where an automated script requires root permissions to something, for example to make a backup. Now using sudo to work around the root account is not only pointless, it's also dangerous: at first glance rsyncuser looks like an ordinary unprivileged account. But as I've already explained, it would be very easy for an attacker to gain full root access if he had already gained rsyncuser access. So essentially, you now have an additional root account that doesn't look like a root account at all, which is not a good thing. | {
"source": [
"https://unix.stackexchange.com/questions/92123",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/47813/"
]
} |
92,177 | I am wondering what exactly the “Namespaces support” feature in the Linux kernel means. I am using kernel 3.11.1 (the newest stable kernel at this time). If I decide to disable it, will I notice any change on my system? And in case somebody decides to make use of namespaces, is it enough to just compile NAMESPACES=Y in the kernel, or does he need userspace tools as well? | In a nutshell, namespaces provide a way to build a virtual Linux system inside a larger Linux system. This is different from running a virtual machine that runs as an unprivileged process: the virtual machine appears as a single process in the host, whereas processes running inside a namespace are still running on the host system. A virtual system running inside a larger system is called a container . The idea of a container is that processes running inside the container believe that they are the only processes in the system. In particular, the root user inside the container does not have root privileges outside the container (note that this is only true in recent enough versions of the kernel). Namespaces virtualize one feature at a time. Some examples of types of namespaces are: User namespaces — this allows processes to behave as if they were running as different users inside and outside the namespace. In particular, processes running as UID 0 inside the namespace have superuser privileges only with respect to processes running in the same namespace. Since Linux kernel 3.8, unprivileged users can create user namespaces. This allows an ordinary user to make use of features that are reserved to root (such as changing routing tables or setting capabilities). PID namespaces — processes inside a PID namespace cannot kill or trace processes outside that namespace. Mount namespaces — this allows processes to have their own view of the filesystem. This view can be a partial view, allowing some pieces of the filesystem to be hidden and pieces to be recomposed so that directory trees appear in different places. Mount namespaces generalize the traditional Unix feature chroot , which allows processes to be restricted to a particular subtree. Network namespaces — allow separation of networking resources (network devices) and thus enhance isolation of processes. Namespaces rely on the kernel to provide isolation between namespaces. This is quite complicated to get right, so there may still be security bugs lying around. The risk of security bugs would be the primary reason not to enable the feature. Another reason not to enable it would be when you're making a small kernel for an embedded device. In a general-purpose kernel that you'd install on a typical server or workstation, namespaces should be enabled, like any other mature kernel feature. There are still few applications that make use of namespaces. Here are a few: LXC is well-established. It relies on cgroups to provide containers. virt-sandbox is a more recent sandboxing project. Recent versions of Chromium also use namespaces for sandboxing where available. The uWSGI framework for clustered applications uses namespaces for improved sandboxing. See the LWN article series by Michael Kerrisk for more information. | {
"source": [
"https://unix.stackexchange.com/questions/92177",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/43007/"
]
} |
92,185 | As we know, apt-get has Super Cow Powers and aptitude does not: $ apt-get --help | grep -i cow
This APT has Super Cow Powers.
$ aptitude --help | grep -i cow
This aptitude does not have Super Cow Powers. and of course, APT has an Easter egg to go with it: $ apt-get moo
(__)
(oo)
/------\/
/ | ||
* /\---/\
~~ ~~
...."Have you mooed today?"... I'm curious, is there are story behind this Easter egg? What's its history? I know it's been in apt for a long time—from a quick grep of apt sources in old Debian releases, it gained it sometime between Debian 2.2 (potato; apt 0.3.19) and Debian 3.0 (woody; apt 0.5.4). edit: According to a message from Jacob Kuntz on the Debian-Devel mailing list, it was in apt 0.5.0 in Feb. 2001. A message from Matt Zimmerman on the Debian bug tracker makes it sound like 0.5.0 is when it was added. | Apt started its life around 1997 and entered Debian officially around 1999. During its early days, Jason Gunthorpe was its main maintainer/developer. Well, apparently Jason liked cows. I don't know if he still does. :-) Anyway, I think the apt-get moo thing was added by him as a joke. The corresponding aptitude easter eggs (see below) were added later by Daniel Burrows as a homage, I think. If there is more to the story, Jason is probably the person to ask. He has (likely in response to this question) written a post on Google+ . A small bit of it: Once a long time ago a developer was known for announcing his presence on IRC with a simple, to the point 'Moo'. As with cows in pasture others would often Moo back in greeting. This led to a certain range of cow based jokes. Also: $ aptitude moo
There are no Easter Eggs in this program.
$ aptitude -v moo
There really are no Easter Eggs in this program.
$ aptitude -vv moo
Didn't I already tell you that there are no Easter Eggs in this program?
$ aptitude -vvv moo
Stop it!
$ aptitude -vvvv moo
Okay, okay, if I give you an Easter Egg, will you go away?
$ aptitude -vvvvv moo
All right, you win.
/----\
-------/ \
/ \
/ |
-----------------/ --------\
----------------------------------------------
$ aptitude -vvvvvv moo
What is it? It's an elephant being eaten by a snake, of course. | {
"source": [
"https://unix.stackexchange.com/questions/92185",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/977/"
]
} |
92,187 | I know that a custom IFS value can be set for the scope of a single command/built-in. Is there a way to set a custom IFS value for a single statement?? Apparently not, since based on the below the global IFS value is affected when this is attempted #check environment IFS value, it is space-tab-newline
printf "%s" "$IFS" | od -bc
0000000 040 011 012
\t \n
0000003
#invoke built-in with custom IFS
IFS=$'\n' read -r -d '' -a arr <<< "$str"
#environment IFS value remains unchanged as seen below
printf "%s" "$IFS" | od -bc
0000000 040 011 012
\t \n
0000003
#now attempt to set IFS for a single statement
IFS=$'\n' a=($str)
#BUT environment IFS value is overwritten as seen below
printf "%s" "$IFS" | od -bc
0000000 012
\n
0000001 | In some shells (including bash ): IFS=: command eval 'p=($PATH)' (with bash , you can omit the command if not in sh/POSIX emulation). But beware that when using unquoted variables, you also generally need to set -f , and there's no local scope for that in most shells. With zsh, you can do: (){ local IFS=:; p=($=PATH); } $=PATH is to force word splitting which is not done by default in zsh (globbing upon variable expansion is not done either so you don't need set -f unless in sh emulation). However, in zsh , you'd rather use $path which is an array tied to $PATH , or to split with arbitrary delimiters: p=(${(s[:])PATH}) or p=("${(s[:]@)PATH}") to preserve empty elements. (){...} (or function {...} ) are called anonymous functions and are typically used to set a local scope. with other shells that support local scope in functions, you could do something similar with: e() { eval "$@"; }
e 'local IFS=:; p=($PATH)' To implement a local scope for variables and options in POSIX shells, you can also use the functions provided at https://github.com/stephane-chazelas/misc-scripts/blob/master/locvar.sh . Then you can use it as: . /path/to/locvar.sh
var=3,2,2
call eval 'locvar IFS; locopt -f; IFS=,; set -- $var; a=$1 b=$2 c=$3' (by the way, it's invalid to split $PATH that way above except in zsh as in other shells, IFS is field delimiter, not field separator). IFS=$'\n' a=($str) Is just two assignments, one after the other just like a=1 b=2 . A note of explanation on var=value cmd : In: var=value cmd arg The shell executes /path/to/cmd in a new process and passes cmd and arg in argv[] and var=value in envp[] . That's not really a variable assignment, but more passing environment variables to the executed command. In the Bourne or Korn shell, with set -k , you can even write it cmd var=value arg . Now, that doesn't apply to builtins or functions which are not executed . In the Bourne shell, in var=value some-builtin , var ends up being set afterwards, just like with var=value alone. That means for instance that the behaviour of var=value echo foo (which is not useful) varies depending on whether echo is builtin or not. POSIX and/or ksh changed that in that that Bourne behaviour only happens for a category of builtins called special builtins . eval is a special builtin, read is not. For non special builtin, var=value builtin sets var only for the execution of the builtin which makes it behave similarly to when an external command is being run. The command command can be used to remove the special attribute of those special builtins . What POSIX overlooked though is that for the eval and . builtins, that would mean that shells would have to implement a variable stack (even though it doesn't specify the local or typeset scope limiting commands), because you could do: a=0; a=1 command eval 'a=2 command eval echo \$a; echo $a'; echo $a Or even: a=1 command eval myfunction with myfunction being a function using or setting $a and potentially calling command eval . That was really an overlook because ksh (which the spec is mostly based on) didn't implement it (and AT&T ksh and zsh still don't), but nowadays, except those two, most shells implement it. Behaviour varies among shells though in things like: a=0; a=1 command eval a=2; echo "$a" though. Using local on shells that support it is a more reliable way to implement local scope. | {
"source": [
"https://unix.stackexchange.com/questions/92187",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/21233/"
]
} |
92,199 | Say I am logged into a remote system, how can I know what it's running? On most modern Linuxes (Linuces?), you have the lsb_release command: $ lsb_release -ic
Distributor ID: LinuxMint
Codename: debian Which as far as I can tell just gives the same info as /etc/lsb-release . What if that file is not present? I seem to recall that the lsb_release command is relatively new so what if I have to get the OS of an older system? In any case, lsb stands for Linux Standard Base so I am assuming it won't work on non-Linux Unices. As far as I know, there is no way of getting this information from uname so how can I get this on systems that do not use lsb_release ? | lsb_release -a is likely going to be your best option for finding this information out, and being able to do so in a consistent way. History of LSB The lsb in that command stands for the project Linux Standards Base which is an umbrella project sponsored by the Linux Foundation to provide generic methods for doing basic kinds of things on various Linux distros. The project is voluntary and vendors can participate within the project as just a user and also as facilitators of the various specifications around different modules that help to drive standardization within the different Linux distributions. excerpt from the charter The LSB workgroup has, as its core goal, to address these two
concerns. We publish a standard that describes the minimum set of APIs
a distribution must support, in consultation with the major
distribution vendors. We also provide tests and tools which measure
support for the standard, and enable application developers to target
the common set. Finally, through our testing work, we seek to prevent
unnecessary divergence between the distributions. Useful links related to LSB LSB Charter LSB Workgroup LSB Roadmap LSB Mailing List (current activity is here!) List of certified LSB products LSB Wikipedia page Criticisms There are a number of problems with LSB that make it problematic for distros such as Debian. The forced usage of RPM being one. See the Wikipedia article for more on the matter . Novell If you search you'll possibly come across a fairly dated looking page titled: Detecting Underlying Linux Distro from Novell. This is one of the few places I"ve seen an actual list that shows several of the major distros and how you can detect what underlying one you're using. excerpt Novell SUSE /etc/SUSE-release
Red Hat /etc/redhat-release, /etc/redhat_version
Fedora /etc/fedora-release
Slackware /etc/slackware-release, /etc/slackware-version
Debian /etc/debian_release, /etc/debian_version,
Mandrake /etc/mandrake-release
Yellow dog /etc/yellowdog-release
Sun JDS /etc/sun-release
Solaris/Sparc /etc/release
Gentoo /etc/gentoo-release
UnitedLinux /etc/UnitedLinux-release
ubuntu /etc/lsb-release This same page also includes a handy script which attempts to codify for the above using just vanilla uname commands, and the presence of one of the above files. NOTE: This list is dated but you could easily drop the dated distros such as Mandrake from the list and replace them with alternatives. This type of a script might be one approach if you're attempting to support a large swath of Solaris & Linux variants. Linux Mafia More searching will turn up the following page maintained on Linuxmafia.com, titled: /etc/release equivalents for sundry Linux (and other Unix) distributions . This is probably the most exhaustive list to date that I've seen. You could codify this list with a case/switch statement and include it as part of your software distribution. In fact there is a script at the bottom of that page that does exactly that. So you could simply download and use the script as 3rd party to your software distribution. script #!/bin/sh
# Detects which OS and if it is Linux then it will detect which Linux
# Distribution.
OS=`uname -s`
REV=`uname -r`
MACH=`uname -m`
GetVersionFromFile()
{
VERSION=`cat $1 | tr "\n" ' ' | sed s/.*VERSION.*=\ // `
}
if [ "${OS}" = "SunOS" ] ; then
OS=Solaris
ARCH=`uname -p`
OSSTR="${OS} ${REV}(${ARCH} `uname -v`)"
elif [ "${OS}" = "AIX" ] ; then
OSSTR="${OS} `oslevel` (`oslevel -r`)"
elif [ "${OS}" = "Linux" ] ; then
KERNEL=`uname -r`
if [ -f /etc/redhat-release ] ; then
DIST='RedHat'
PSUEDONAME=`cat /etc/redhat-release | sed s/.*\(// | sed s/\)//`
REV=`cat /etc/redhat-release | sed s/.*release\ // | sed s/\ .*//`
elif [ -f /etc/SuSE-release ] ; then
DIST=`cat /etc/SuSE-release | tr "\n" ' '| sed s/VERSION.*//`
REV=`cat /etc/SuSE-release | tr "\n" ' ' | sed s/.*=\ //`
elif [ -f /etc/mandrake-release ] ; then
DIST='Mandrake'
PSUEDONAME=`cat /etc/mandrake-release | sed s/.*\(// | sed s/\)//`
REV=`cat /etc/mandrake-release | sed s/.*release\ // | sed s/\ .*//`
elif [ -f /etc/debian_version ] ; then
DIST="Debian `cat /etc/debian_version`"
REV=""
fi
if [ -f /etc/UnitedLinux-release ] ; then
DIST="${DIST}[`cat /etc/UnitedLinux-release | tr "\n" ' ' | sed s/VERSION.*//`]"
fi
OSSTR="${OS} ${DIST} ${REV}(${PSUEDONAME} ${KERNEL} ${MACH})"
fi
echo ${OSSTR} NOTE: This script should look familiar, it's an up to date version of the Novell one! Legroom script Another method I've seen employed is to roll your own script, similar to the above Novell method but making use of LSB instead. This article titled: Generic Method to Determine Linux (or UNIX) Distribution Name , shows one such method. # Determine OS platform
UNAME=$(uname | tr "[:upper:]" "[:lower:]")
# If Linux, try to determine specific distribution
if [ "$UNAME" == "linux" ]; then
# If available, use LSB to identify distribution
if [ -f /etc/lsb-release -o -d /etc/lsb-release.d ]; then
export DISTRO=$(lsb_release -i | cut -d: -f2 | sed s/'^\t'//)
# Otherwise, use release info file
else
export DISTRO=$(ls -d /etc/[A-Za-z]*[_-][rv]e[lr]* | grep -v "lsb" | cut -d'/' -f3 | cut -d'-' -f1 | cut -d'_' -f1)
fi
fi
# For everything else (or if above failed), just use generic identifier
[ "$DISTRO" == "" ] && export DISTRO=$UNAME
unset UNAME This chunk of code could be included in a system's /etc/bashrc or some such file, which would then set the environment variable $DISTRO . gcc Believe it or not, another method is to make use of gcc . If you query the command gcc --version you'll get the distro that gcc was built for, which is invariably the same as the system it's running on. Fedora 14 $ gcc --version
gcc (GCC) 4.5.1 20100924 (Red Hat 4.5.1-4)
Copyright (C) 2010 Free Software Foundation, Inc. CentOS 5.x $ gcc --version
gcc (GCC) 4.1.2 20080704 (Red Hat 4.1.2-54)
Copyright (C) 2006 Free Software Foundation, Inc. CentOS 6.x $ gcc --version
gcc (GCC) 4.4.7 20120313 (Red Hat 4.4.7-3)
Copyright (C) 2010 Free Software Foundation, Inc. Ubuntu 12.04 $ gcc --version
gcc (Ubuntu/Linaro 4.6.3-1ubuntu5) 4.6.3
Copyright (C) 2011 Free Software Foundation, Inc. TL;DR; So which one should I use? I would tend to go with lsb_release -a for any Linux distributions that I would frequent (RedHat, Debian, Ubuntu, etc.). For situations where you're supporting systems that don't provide lsb_release I'd roll my own as part of the distribution of software that I'm providing, similar to one of the above scripts. UPDATE #1: Follow-up with SuSE In speaking with @Nils in the comments below it was determined that for whatever reason, SLES11 appeared to drop LSB from being installed by default. It was only an optional installation, which seemed counter for a package that provides this type of key feature. So I took the opportunity to contact someone from the OpenSuSE project to get a sense of why. excerpt of email Hi Rob,
I hope you don't mind me contacting you directly but I found your info here:
https://en.opensuse.org/User:Rjschwei. I participate on one of the StackExchange
sites, Unix & Linux and a question recently came up regarding the best option
for determining the underlying OS.
http://unix.stackexchange.com/questions/92199/how-can-i-reliably-get-the-operating-systems-name/92218?noredirect=1#comment140840_92218
In my answer I suggested using lsb_release, but one of the other users mentioned
that this command wasn't installed as part of SLES11 which kind of surprised me.
Anyway we were looking for some way to confirm whether this was intentionally
dropped from SLES or it was accidental.
Would you know how we could go about confirming this one way or another?
Thanks for reading this, appreciate any help and/or guidance on this.
-Sam Mingolelli
http://unix.stackexchange.com/users/7453/slm Here's Rob's response Hi,
On 10/01/2013 09:31 AM, Sam Mingo wrote:
lsb_release was not dropped in SLES 11. SLES 11 is LSB certified. However, it
is not installed by default, which is consistent with pretty much every other
distribution. The lsb_release command is part of the lsb-release package.
At present almost every distribution has an entry in /etc such as
/etc/SuSE-release for SLES and openSUSE. Since this is difficult for ISVs and
others there is a standardization effort going on driven by the convergence to
systemd. The standard location for distribution information in the future will
be /etc/os-release, although Ubuntu will probably do something different.
HTH,
Robert
-- Robert Schweikert MAY THE SOURCE BE WITH YOU
SUSE-IBM Software Integration Center LINUX
Tech Lead
Public Cloud Architect
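Following up on the /etc/os-release file mentioned in that reply: on systemd-era distributions it is a plain key=value file designed to be sourced by a shell, so a rough sketch for scripts (assuming the file exists on the target system) is: if [ -r /etc/os-release ]; then
    . /etc/os-release    # defines NAME, ID, VERSION_ID, PRETTY_NAME, ...
    echo "$NAME $VERSION_ID ($ID)"
else
    uname -s    # fall back to the generic kernel name
fi
NAME , ID , VERSION_ID and PRETTY_NAME are the commonly populated keys; rolling-release distributions may omit VERSION_ID , so scripts should treat any of them as optional. | {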
"source": [
"https://unix.stackexchange.com/questions/92199",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/22222/"
]
} |
92,346 | I'm struggling to wrap my mind around why the find interprets file modification times the way it does. Specifically, I don't understand why the -mtime +1 doesn't show files less than 48 hours old. As an example test I created three test files with different modified dates: [root@foobox findtest]# ls -l
total 0
-rw-r--r-- 1 root root 0 Sep 25 08:44 foo1
-rw-r--r-- 1 root root 0 Sep 24 08:14 foo2
-rw-r--r-- 1 root root 0 Sep 23 08:14 foo3 I then ran find with the -mtime +1 switch and got the following output: [root@foobox findtest]# find -mtime +1
./foo3 I then ran find with the -mmin +1440 and got the following output: [root@foobox findtest]# find -mmin +1440
./foo3
./foo2 As per the man page for find, I understand that this is expected behavior: -mtime n
File’s data was last modified n*24 hours ago. See the comments
for -atime to understand how rounding affects the interpretation
of file modification times.
-atime n
File was last accessed n*24 hours ago. When find figures out
how many 24-hour periods ago the file was last accessed, any
fractional part is ignored, so to match -atime +1, a file has to
have been accessed at least two days ago. This still doesn't make sense to me though. So if a file is 1 day, 23 hours, 59 minutes, and 59 seconds old, find -mtime +1 ignores all that and just treats it like it's 1 day, 0 hours, 0 minutes, and 0 seconds old? In which case, it's not technically older that 1 day and ignored? Does... not... compute. | Well, the simple answer is, I guess, that your find implementation is following the POSIX/SuS standard, which says it must behave this way. Quoting from SUSv4/IEEE Std 1003.1, 2013 Edition, "find" : -mtime n The primary shall evaluate as true if the file modification time subtracted from the initialization time, divided by 86400 (with any remainder discarded), is n. (Elsewhere in that document it explains that n can actually be +n , and the meaning of that as "greater than"). As to why the standard says it shall behave that way—well, I'd guess long in the past a programmer was lazy or not thinking about it, and just wrote the C code (current_time - file_time) / 86400 . C integer arithmetic discards the remainder. Scripts started depending on that behavior, and thus it was standardized. The spec'd behavior would also be portable to a hypothetical system that only stored a modification date (not time). I don't know if such a system has existed. | {
"source": [
"https://unix.stackexchange.com/questions/92346",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/1822/"
]
} |
92,441 | How can I reliably address different machines on my network? I've always used the .local suffix to talk to computers on my local network before. With a new router, though, .local rarely (though sometimes) works. I've found that .home and .lan both usually work, but not always. .-------. .--------. .-----.
| modem |---| router |))))))(wifi))))))| foo |
.-------. .--------. v .-----.
|| | v
/_^_^_\ | \))))))).-----.
/ cloud \ | | bar |
\-_-_-/ .-----. .-----.
| baz |
.-----. So, from a terminal on foo , I can try: ssh bar.local
ssh bar.home
ssh bar.lan
ssh baz.local
ssh baz.home
ssh baz.lan and sometimes some of those suffixes work and some don't, but I don't know how to predict which or when. foo , bar , and baz are all modern Linux or Android systems and the Linux boxes all have (or can have) avahi-daemon, or other reasonably-available packages, installed (I don't want to set up static IP addresses: I'd like to keep using DHCP (from the router) for each machine, and even if I was okay with static addresses I'd want to be able to enter hostnames in the unrooted Android machines, where I can't edit the hosts file to map a chosen hostname to an IP address.) | There are no RFCs that specify .lan and .home . Thus, it is up to the router's vendor what pseudo TLDs (top-level-domain names) are by default configured. For example my router vendor (AVM) seems to use .fritz.box by default. .local is used by mDNS (multicast DNS) , a protocol engineered by Apple. Using example.local only works on systems (and for destinations) that have a mDNS daemon running (e.g. MacOSX, current Linux distributions like Ubuntu/Fedora). You can keep using dhcp - but perhaps you have to configure your router a little bit. Most routers let you configure such things like the domain name for the network. Note that using pseudo TLDs is kind of dangerous - .lan seems to be popular - and better than .local (because it does not clash with mDNSs .local ) - but there is no guarantee that ICANN will not introduce it as new TLD at some point. 2019 update : Case in point, .box isn't a pseudo TLD, anymore. ICANN delegated .box in 2016. Thus, it makes sense to get a real domain name - and use sub-domains of it for private stuff, e.g. when your domain is example.org you could use: lan.example.org
internal.example.org
... | {
"source": [
"https://unix.stackexchange.com/questions/92441",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/-1/"
]
} |
92,447 | How do I get the ASCII value of the alphabet? For example, 97 for a ? | Define these two functions (usually available in other languages): chr() {
[ "$1" -lt 256 ] || return 1
printf "\\$(printf '%03o' "$1")"
}
ord() {
LC_CTYPE=C printf '%d' "'$1"
} Usage: chr 65
A
ord A
65
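To run ord over every character of a string rather than a single letter, a small bash-only loop (it relies on bash's ${var:offset:length} substring expansion) works: word=hello
for (( i=0; i<${#word}; i++ )); do
  printf '%s ' "$(ord "${word:i:1}")"   # print each character code separated by spaces
done
echo
which prints 104 101 108 108 111 for hello. | {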
"source": [
"https://unix.stackexchange.com/questions/92447",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/47974/"
]
} |
92,493 | I am able to see the list of all the processes and the memory via ps aux and going through the VSZ and RSS Is there a way to sort down the output of this command by the descending order on RSS value? | Use the following command: ps aux --sort -rss Check here for more Linux process memory usage | {
"source": [
"https://unix.stackexchange.com/questions/92493",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/48005/"
]
} |
92,560 | I just SSH'd into root, and then SSH'd again into root on the same machine. So I have two windows open both SSH'd into root on my remote machine. From the shell, how can I see a list of these two sessions? | who or w ; who -a for additional information. These commands just show all login sessions on a terminal device. An SSH session will be on a pseudo-terminal slave ( pts ) as shown in the TTY column, but not all pts connections are SSH sessions. For instance, programs that create a pseudo-terminal device such as xterm or screen will show as pts . See Difference between pts and tty for a better description of the different values found in the TTY column. Furthermore, this approach won't show anybody who's logged in to an SFTP session, since SFTP sessions aren't shell login sessions. I don't know of any way to explicitly show all SSH sessions. You can infer this information by reading login information from utmp / wtmp via a tool like last , w , or who like I've just described, or by using networking tools like @sebelk described in their answer to find open tcp connections on port 22 (or wherever your SSH daemon(s) is/are listening). A third approach you could take is to parse the log output from the SSH daemon. Depending on your OS distribution, SSH distribution, configuration, and so on, your log output may be in a number of different places. On an RHEL 6 box, I found the logs in /var/log/sshd.log . On an RHEL 7 box, and also on an Arch Linux box, I needed to use journalctl -u sshd to view the logs. Some systems might output SSH logs to syslog. Your logs may be in these places or elsewhere. Here's a sample of what you might see: [myhost ~]% grep hendrenj /var/log/sshd.log | grep session
May 1 15:57:11 myhost sshd[34427]: pam_unix(sshd:session): session opened for user hendrenj by (uid=0)
May 1 16:16:13 myhost sshd[34427]: pam_unix(sshd:session): session closed for user hendrenj
May 5 14:27:09 myhost sshd[43553]: pam_unix(sshd:session): session opened for user hendrenj by (uid=0)
May 5 18:23:41 myhost sshd[43553]: pam_unix(sshd:session): session closed for user hendrenj The logs show when sessions open and close, who the session belongs to, where the user is connecting from, and more. However, you're going to have to do a lot of parsing if you want to get this from a simple, human-readable log of events to a list of currently active sessions, and it still probably won't be an accurate list when you're done parsing, since the logs don't actually contain enough information to determine which sessions are still active - you're essentially just guessing. The only advantage you gain by using these logs is that the information comes directly from SSHD instead of via a secondhand source like the other methods. I recommend just using w . Most of the time, this will get you the information you want.
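If you do want the network-level view mentioned above, a quick check (assuming sshd listens on the default port 22 and iproute2's ss is installed) is: ss -tnp state established '( sport = :22 )'
Each output line is one established TCP connection to the SSH daemon; run it as root to see the owning sshd process. Unlike who / w , this also counts SFTP-only connections, though it still cannot tell you which ones are interactive shells. | {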
"source": [
"https://unix.stackexchange.com/questions/92560",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/11258/"
]
} |
92,563 | I'm aware of libraries in languages such as Ruby and Javascript to make colorizing your terminal scripts easier by using color names like "red". But is there something like this for shell scripts in Bash, or Ksh, or whatever? | You can define colours in your bash scripts like so: red=$'\e[1;31m'
grn=$'\e[1;32m'
yel=$'\e[1;33m'
blu=$'\e[1;34m'
mag=$'\e[1;35m'
cyn=$'\e[1;36m'
end=$'\e[0m' And then use them to print in your required colours: printf "%s\n" "Text in ${red}red${end}, white and ${blu}blue${end}." | {
"source": [
"https://unix.stackexchange.com/questions/92563",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/11258/"
]
} |
92,643 | Is there a way to replace the value of a symbolic link? For example, I want to change a symbolic link from this: first -> /home/username/foo/very/long/directories/that/I/do/not/want/to/type/again to this: second -> /home/username/bar/very/long/directories/that/I/do/not/want/to/type/again I want to change only foo to bar . Of course I can create a link again, but if it is possible to replace the value of the link, it becomes easier. | You can use the -f , --force option of ln to have it remove the existing symlink before creating the new one. If the destination is a directory, you need to add the -n , --no-dereference option to tell ln to treat the symlink as a normal file. ln -sfn target existing_link However, this operation is not atomic, as ln will unlink() the old symlink before calling symlink() , so technically it doesn't count as changing the value of the link. If you care about this distinction, then the answer is no, you can't change the value of an existing symlink. That said, you can do something like the following to create a new symlink, changing part of the old link value: ln -sfn "$(readlink existing_link | sed s/foo/bar/)" "existing_link"
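If the brief window where the link does not exist matters to you, the usual workaround is to build the new symlink under a temporary name and then rename it over the old one, because rename() replaces the destination atomically. A sketch using the same readlink / sed trick and GNU coreutils mv (the -T flag is GNU-specific): ln -s "$(readlink first | sed s/foo/bar/)" first.tmp   # build the new link under a temporary name
mv -T first.tmp first   # atomically rename it over the old link
The -T option stops mv from descending into the directory the old link points to, and at no point does the name first stop resolving. | {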
"source": [
"https://unix.stackexchange.com/questions/92643",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/44001/"
]
} |
92,715 | I am using PuTTY on Windows 7 to SSH to my school computer lab. Can I transfer files from my Windows machine to my user on the school machines using SSH? | Use the PSCP tool from the PuTTY download page: http://www.chiark.greenend.org.uk/~sgtatham/putty/download.html PSCP is the PuTTY version of scp which is a cp (copy) over ssh command. PSCP needs to be installed on your Windows computer (just downloaded, really, there is no install process. In the Packaged Files section, pscp.exe is already included). Nothing needs to be installed on the school's servers. PSCP and scp both use ssh to connect. To answer the usage question from the comments: To upload from your computer to a remote server: c:\pscp c:\some\path\to\a\file.txt user@remote:/home/user/some/path This will upload the file file.txt to the specified directory on the server.
If the final part of the destination path is NOT a directory, it will be the new file name. You could also do this to upload the file with a different name: c:\pscp c:\some\path\to\a\file.txt user@remote:/home/user/some/path/newname.txt To download a file from a remote server to your computer: c:\pscp user@remote:/home/user/some/file.txt c:\some\path\to\a\ or c:\pscp user@remote:/home/user/some/file.txt c:\some\path\to\a\newfile.txt or c:\pscp user@remote:/home/user/some/file.txt . With a lone dot at the end there. This will download the specified file to the current directory. Note that the remote (Unix) side uses forward slashes, while the local Windows paths keep backslashes. Since the comment is too far down, I should also point out here that WinSCP exists providing a GUI for all this, if that's of interest: http://winscp.net/eng/download.php | {
"source": [
"https://unix.stackexchange.com/questions/92715",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/48115/"
]
} |
92,799 | I am trying to connect to my WEP network just using the command-line (Linux). I run: sudo iwconfig wlan0 mode Managed essid 'my_network' key 'xx:xx:... hex key, 26 digits' Then I try to obtain an IP with sudo dhclient -v wlan0 or sudo dhclient wlan0 without success (tried to ping google.com). I know that the keyword is right, and I also tried with the ASCII key using 's:key', and again, the same result. I get the message below when running dhclient: Listening on LPF/wlan0/44:...
Sending on LPF/wlan0/44:...
Sending on Socket/fallback
DHCPDISCOVER on wlan0 to 255.255.255.255 port 67 interval 3 I have no problem connecting with WICD or the standard Ubuntu tool. | Option 1 Just edit /etc/network/interfaces and write: auto wlan0
iface wlan0 inet dhcp
wpa-ssid {ssid}
wpa-psk {password} After that, save and close the file and run: sudo dhclient wlan0 Replace {ssid} and {password} with your respective WiFi SSID and password. Option 2 Provided you substitute your own wireless network card, Wi-Fi network name, and Wi-Fi password, this should also work. I am using:
- Wireless network card is wlan0 - Wireless network is "Wifi2Home" - Wireless network key is ASCII code ABCDE12345 First, get your WiFi card up and running: sudo ifconfig wlan0 up Now scan for a list of WiFi networks in range: sudo iwlist wlan0 scan This will show you a list of wireless networks, pick yours from the list: sudo iwconfig wlan0 essid Wifi2Home key s:ABCDE12345 To obtain the IP address, now request it with the Dynamic Host Client: sudo dhclient wlan0 You should then be connected to the WiFi network. The first option is better, because it will be able to run as a cron job to start up the wifi whenever you need it going. If you need to turn off your WiFi for whatever reason, just type: sudo ifconfig wlan0 down FYI I have also seen people using alternative commands. I use Debian, Solaris and OSX, so I'm not 100% sure if they are the same on Ubuntu. But here they are: sudo ifup wlan0 is the same as sudo ifconfig wlan0 up sudo ifdown wlan0 is the same as sudo ifconfig wlan down | {
"source": [
"https://unix.stackexchange.com/questions/92799",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/48166/"
]
} |
92,871 | I want to rotate all the images in a directory that match a pattern. So far I have: for file in `ls /tmp/p/DSC*.JPG`; do
convert $file -rotate 90 file+'_rotated'.JPG
done but that gives no output? | There are quite a few issues with your code. First of all, you are parsing ls which is a Bad Idea . You also need to refer to the variable as $file as you point out and you should also quote it so it won't break on spaces. You are declaring num but it is never used. A safer way would be: find /tmp/p/ -name "DSC*.JPG" | while IFS= read -r file; do
convert "$file" -rotate 90 "$file"_rotated.JPG
done This will still have problems if your files contain newlines but at least will not break if your path contains spaces. If the files are all in the same directory, it can be further simplified using globbing. You can also use parameter expansion to create foo_rotated.JPG instead of foo.JPG_rotated.JPG : for file in /tmp/p/DSC*.JPG; do
convert "$file" -rotate 90 "${file%.JPG}"_rotated.JPG
done | {
"source": [
"https://unix.stackexchange.com/questions/92871",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/10043/"
]
} |
92,895 | I'm writing shell scripts for my server, which is a shared hosting running FreeBSD. I also want to be able to test them locally, on my PC running Linux. Hence, I'm trying to write them in a portable way, but with sed I see no way to do that. Part of my website uses generated static HTML files, and this sed line inserts correct DOCTYPE after each regeneration: sed -i '1s/^/<!DOCTYPE html> \n/' ${file_name.html} It works with GNU sed on Linux, but FreeBSD sed expects the first argument after -i option to be extension for backup copy. This is how it would look like: sed -i '' '1s/^/<!DOCTYPE html> \n/' ${file_name.html} However, GNU sed in turn expects the expression to follow immediately after -i .
(It also requires fixes with newline handling, but that's already answered in here ) Of course I can include this change in my server copy of the script, but that would mess i.e. my use of VCS for versioning. Is there a way to achieve this with sed in a fully portable way? | GNU sed accepts an optional extension after -i . The extension must be in the same argument with no intervening space. This syntax also works on FreeBSD sed. sed -i.bak -e '…' SOMEFILE Note that on FreeBSD, -i also changes the behavior when there are multiple input files: they are processed independently (so e.g. $ matches the last line of each file). Also this won't work on BusyBox. If you don't want to use backup files, you could check which version of sed is available. # Assume that sed is either FreeBSD/macOS or GNU
case $(sed --help 2>&1) in
*GNU*) set sed -i;;
*) set sed -i '';;
esac
"$@" -e '…' "$file" Or alternatively, to avoid clobbering the positional parameters, define a function. case $(sed --help 2>&1) in
*GNU*) sed_i () { sed -i "$@"; };;
*) sed_i () { sed -i '' "$@"; };;
esac
sed_i -e '…' "$file" If you don't want to bother, use Perl. perl -i -pe '…' "$file" If you want to write a portable script, don't use -i — it isn't in POSIX. Do manually what sed does under the hood — it's only one more line of code. sed -e '…' "$file" >"$file.new"
mv -- "$file.new" "$file" | {
"source": [
"https://unix.stackexchange.com/questions/92895",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/26514/"
]
} |
92,963 | How can I move a file on an SFTP server to a different directory? I connect to this server using sftp and then try to move a file using mv myfile.csv /my/dir/myfile.csv but this generates an error. How do I do this? | There is no mv command in the interactive mode of sftp. Use rename instead. To learn which commands are available, check the man page man sftp or type help within sftp .
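For the example in the question, that would be, at the sftp> prompt: rename myfile.csv /my/dir/myfile.csv
rename takes the existing remote path and the new remote path; both refer to the server side, and the target directory must already exist on the server. | {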
"source": [
"https://unix.stackexchange.com/questions/92963",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/27887/"
]
} |
92,978 | I see this in a shell script. variable=${@:2} What is it doing? | It's showing the contents of the special variable $@ , in Bash. It contains all the command line arguments, and this command is taking all the arguments from the second one on and storing them in a variable, variable . Example Here's an example script. #!/bin/bash
echo ${@:2}
variable=${@:3}
echo $variable Example run: ./ex.bash 1 2 3 4 5
2 3 4 5
3 4 5 References Positional Parameters - Advanced Bash Scripting Guide | {
"source": [
"https://unix.stackexchange.com/questions/92978",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/41717/"
]
} |
92,983 | When setting the view to "Grid" in Pantheon Files (Elementary OS), there is this option to click a "+" or "-" button on folders and files. What is that? After clicking the "+", this may be switched back: | | {
"source": [
"https://unix.stackexchange.com/questions/92983",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/-1/"
]
} |
93,029 | I can read the numbers and operation in with: echo "First number please"
read num1
echo "Second number please"
read num2
echo "Operation?"
read op but then all my attempts to add the numbers fail: case "$op" in
"+")
echo num1+num2;;
"-")
echo `num1-num2`;;
esac Run: First number please
1
Second number please
2
Operation?
+ Output: num1+num2 ...or... echo $num1+$num2;;
# results in: 1+2 ...or... echo `$num1`+`$num2`;;
# results in: ...line 9: 1: command not found Seems like I'm getting strings still perhaps when I try to add ("2+2" instead of "4"). | Arithmetic in POSIX shells is done with $ and double parentheses (( )) : echo "$(($num1+$num2))" You can assign from that (also note that the $ operators on the variable names inside (( )) are optional): num1="$((num1+num2))" There is also expr : expr $num1 + $num2 In scripting $(()) is preferable since it avoids a fork/execute for the expr command.
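Putting that back into the script from the question, a minimal corrected version of the case statement could look like this (a sketch; it reuses the num1, num2 and op variables read earlier):
case "$op" in
"+")
echo "$((num1 + num2))";;
"-")
echo "$((num1 - num2))";;
esac
With input 1, 2 and + this prints 3 instead of the literal string num1+num2. | {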
"source": [
"https://unix.stackexchange.com/questions/93029",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/10043/"
]
} |
93,139 | I'm trying to zip a folder in unix.
Can that be done using the gzip command? | No. Unlike zip , gzip functions as a compression algorithm only . Because of various reasons some of which hearken back to the era of tape drives, Unix uses a program named tar to archive data, which can then be compressed with a compression program like gzip , bzip2 , 7zip , etc. In order to "zip" a directory, the correct command would be tar -zcvf archive.tar.gz directory/ This will tell tar to compress it using the z (gzip) algorithm c (create) an archive from the files in directory ( tar is recursive by default) v (verbosely) list (on /dev/stderr so it doesn't affect piped commands) all the files it adds to the archive. and store the output as a f (file) named archive.tar.gz The tar command offers gzip support (via the -z flag) purely for your convenience. The gzip command/lib is completely separate. The command above is effectively the same as tar -cv directory | gzip > archive.tar.gz To decompress and unpack the archive into the current directory you would use tar -zxvf archive.tar.gz That command is effectively the same as gunzip < archive.tar.gz | tar -xv tar has many, many, MANY other options and uses as well; I heartily recommend reading through its manpage sometime. | {
"source": [
"https://unix.stackexchange.com/questions/93139",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/48280/"
]
} |
93,144 | I use Vim mainly for quick edits rather than long work sessions. In that sense, I find the keyboard sequence for quitting especially laborious: Esc , Shift + ; , w , q , Enter . How to quit Vim (possibly saving the document) with the least keystrokes? Especially from Insert mode. | Shift z z in command mode saves the file and exits. | {
"source": [
"https://unix.stackexchange.com/questions/93144",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/7047/"
]
} |
93,154 | Please advise what the password is for Debian Linux (the Debian Linux in the Clonezilla software - a disaster recovery tool). Example: I use Clonezilla (a backup & restore tool) in order to clone Linux systems. I boot Clonezilla on machine1 and escape to CMD ( from Clonezilla ) in order to get Debian Linux
Machine1 - Debian Linux network: 192.168.20.100/24 Now I want to copy to my debian Linux some files from other machine - machine2 ( Linux red-hat 6.X) Machine2 network: 192.168.20.10/24 So I do the following on my machine2 in order to copy script.pl to my debian Linux ( machine1 ) scp –rp /tmp/script.pl 192.168.200.100:/tmp
[email protected]'s password: please advice what the password for debian Linux ? | Shift z z in command mode saves the file and exits. | {
"source": [
"https://unix.stackexchange.com/questions/93154",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/47261/"
]
} |
93,173 | Every time someone sets a different size for a virtual console, less recognizes the window resolution (I'm assuming that ...); according to that, it changes how many lines of text it should visualize. How is that parameter computed? | If you're looking for a way to check from a script, you can do either of these: Run tput cols and tput lines , as manatwork suggests, or check the values of $LINES and $COLUMNS . But if you want the details, here we go: For virtual terminals (xterm, et al) there is an ioctl() system call that will tell you what size the window is. If it can, less uses this call. Furthermore, when you change the size of the window, whatever's running in that window receives a SIGWINCH signal that lets less know that it should check for a new window size. For instance, I started a less running (as process ID 16663), connected to it with strace , and resized the window. This is what I saw: $ strace -p 16663
Process 16663 attached - interrupt to quit
read(3, 0xbfb1f10f, 1) = ? ERESTARTSYS (To be restarted)
--- SIGWINCH (Window changed) @ 0 (0) ---
rt_sigaction(SIGWINCH, {0x805cf10, [WINCH], SA_RESTART}, {0x805cf10, [WINCH], SA_RESTART}, 8) = 0
rt_sigprocmask(SIG_SETMASK, [], NULL, 8) = 0
ioctl(1, SNDCTL_TMR_TIMEBASE or TCGETS, {B38400 opost isig -icanon -echo ...}) = 0
ioctl(1, SNDCTL_TMR_TIMEBASE or TCGETS, {B38400 opost isig -icanon -echo ...}) = 0
ioctl(1, TIOCGWINSZ, {ws_row=40, ws_col=80, ws_xpixel=0, ws_ypixel=0}) = 0
ioctl(2, TIOCGWINSZ, {ws_row=40, ws_col=80, ws_xpixel=0, ws_ypixel=0}) = 0 This is also what tput cols and tput lines do behind the scenes, if they can. For more info on this method, see man tty-ioctl and search for TIOCGWINSZ. For other terminals such as those connected to serial ports, though, there's no way to get this info directly. In that case, less starts looking for clues in the environment variables. LINES and COLUMNS will often be set to the terminal dimensions. In fact, if bash or zsh can find the terminal dimensions, it will automatically set these variables itself, to make it easy for not-so-clever programs to see the terminal size. However, most other shells, including dash and tcsh , do not set these variables. TERM is usually set to the terminal type, in which case the terminfo database may contain the expected size of the terminal. If tput lines cannot use the IOCTL (for instance, if you're connected over a serial port), it will fall back to the values recorded here. For a terminal whose size can change, this is only a guess and is likely to be wrong. For more info, see man tput for the command to control the terminal, and man terminfo for a list of things you can tell the terminal to do.
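Another quick way to see the same TIOCGWINSZ result from a shell is stty size , which prints rows and columns in one go (assuming standard input is the terminal):
$ stty size
40 80
The first number is the row count and the second is the column count. | {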
"source": [
"https://unix.stackexchange.com/questions/93173",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/47999/"
]
} |
93,323 | Festival stores voicepack data in the following example directory structure: /usr/share/festival/voices/<language>/<voicepack name> What is the simplest one-liner (preferably using ls ) to print out just the <voicepack name> 's, in all the potentially numerous <language> subdirectories? | I'm on Fedora, and these voicepacks are in a slightly different location: $ ls /usr/share/festival/lib/voices/*/ -1 | grep -vE "/usr|^$"
kal_diphone
ked_diphone
nitech_us_awb_arctic_hts
nitech_us_bdl_arctic_hts
nitech_us_clb_arctic_hts
nitech_us_jmk_arctic_hts
nitech_us_rms_arctic_hts
nitech_us_slt_arctic_hts You can just modify this like so: $ ls /usr/share/festival/voices/*/ -1 | grep -vE "/usr|^$" Using find Using ls in this manner is typically frowned upon because the output of ls is difficult to parse. Better to use the find command, like so: $ find /usr/share/festival/lib/voices -maxdepth 2 -mindepth 2 \
-type d -exec basename {} \;
nitech_us_awb_arctic_hts
nitech_us_bdl_arctic_hts
nitech_us_slt_arctic_hts
nitech_us_jmk_arctic_hts
nitech_us_clb_arctic_hts
nitech_us_rms_arctic_hts
ked_diphone
kal_diphone Details of find & basename This command works by producing a list of full paths to files that are exactly 2 levels deep with respect to this directory: /usr/share/festival/lib/voices This list looks like this: $ find /usr/share/festival/lib/voices -maxdepth 2 -mindepth 2
/usr/share/festival/lib/voices/us/nitech_us_awb_arctic_hts
/usr/share/festival/lib/voices/us/nitech_us_bdl_arctic_hts
/usr/share/festival/lib/voices/us/nitech_us_slt_arctic_hts
/usr/share/festival/lib/voices/us/nitech_us_jmk_arctic_hts
/usr/share/festival/lib/voices/us/nitech_us_clb_arctic_hts
/usr/share/festival/lib/voices/us/nitech_us_rms_arctic_hts
/usr/share/festival/lib/voices/english/ked_diphone
/usr/share/festival/lib/voices/english/kal_diphon But we want the last part of these directories, the leaf node. So we can make use of basename to parse it out: $ basename /usr/share/festival/lib/voices/us/nitech_us_awb_arctic_hts
nitech_us_awb_arctic_hts Putting it all together, we can make the find command pass each 2 level deep directory to the basename command. The notation basename {} is what is doing these basename conversions. Find calls it via its -exec switch. | {
"source": [
"https://unix.stackexchange.com/questions/93323",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/18377/"
]
} |
93,376 | I have tested this with both Ubuntu 12.04 and Debian 7. When I do echo $TERM I get xterm But if I use the dropdown menu "help" > "about" then it says gnome terminal 3.4.1.1 . Does this mean I am using just gnome-terminal? Or just xterm? Or is gnome-terminal an extension of xterm? I'm confused. | What is $TERM for? The $TERM variable is for use by applications to take advantage of capabilities of that terminal. For example, if a program wants to display colored text, it must first find out if the terminal you're using supports colored text, and then if it does, how to do colored text. The way this works is that the system keeps a library of known terminals and their capabilities. On most systems this is in /usr/share/terminfo (there's also termcap, but it's legacy not used much any more). So let's say you have a program that wants to display red text. It basically makes a call to the terminfo library that says " give me the sequence of bytes I have to send for red text for the xterm terminal ". Then it just takes those bytes and prints them out. You can try this yourself by doing tput setf 4; echo hi . This will get the setf terminfo capability and pass it a parameter of 4 , which is the color you want. Why gnome terminal lies about itself: Now let's say you have some shiny new terminal emulator that was just released, and the system's terminfo library doesn't have a definition for it yet. When your application goes to look up how to do something, it will fail because the terminal isn't known. The way your terminal gets around this is by lying about who it is. So your gnome terminal is saying " I'm xterm ". Xterm is a very basic terminal that has been around since the dawn of X11, and thus most terminal emulators support what it supports. So by gnome terminal saying it's an xterm, it's more likely to have a definition in the terminfo library. The downside to lying about your terminal type is that the terminal might actually support a lot more than xterm does (for example, many new terminals support 256 colors, while older terminals only supported 16). So you have a tradeoff, get more features, or have more compatibility. Most terminals will opt for more compatibility, and thus choose to advertise themselves as xterm . If you want to override this, many terminals will offer some way of configuring the behavior. But you can also just do export TERM=gnome-terminal . | {
"source": [
"https://unix.stackexchange.com/questions/93376",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/5451/"
]
} |
93,412 | I read some articles/tutorials on 'ifconfig' command, most of them included a common statement - "ifconfig is deprecated by ip command" and suggested to learn ip command. But none of them explained how 'ip' command is more powerful than 'ifconfig'. What is the difference between both of them? | ifconfig is from net-tools , which hasn't been able to fully keep up with the Linux network stack for a long time. It also still uses ioctl for network configuration, which is an ugly and less powerful way of interacting with the kernel. A lot of changes in Linux networking code, and a lot of new features aren't accessible using net-tools : multipath routing, policy routing (see the RPDB). route allows you to do stupid things like adding multiple routes to the same destination, with the same metric. Additionally: ifconfig doesn't report the proper hardware address for some devices. You can't configure ipip , sit , gre , l2tp , etc. in-kernel static tunnels. You can't create tun or tap devices. The way of adding multiple addresses to a given interface also has poor semantics. You also can't configure the Linux traffic control system using net-tools either. See also ifconfig sucks . EDIT : Removed an assertion about net-tools development having ceased, as I no longer remember where I got it. net-tools has been worked on since iproute2 was released, though it's mostly bug fixing and minor enhancements and features, like internationalization.
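For everyday use, a rough correspondence between the two tool sets looks like this (not exhaustive; eth0 and 192.0.2.1/24 are just placeholder examples):
ifconfig -> ip addr show (addresses) and ip link show (interfaces)
ifconfig eth0 up -> ip link set eth0 up
ifconfig eth0 192.0.2.1 netmask 255.255.255.0 -> ip addr add 192.0.2.1/24 dev eth0
route -n -> ip route show
arp -n -> ip neigh show
The ip equivalents expose the newer kernel features mentioned above, while the ifconfig forms are limited to what the old ioctl interface can do. | {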
"source": [
"https://unix.stackexchange.com/questions/93412",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/48188/"
]
} |
93,476 | I'm on Arch linux, and when I open a new terminal tab, it always goes to $HOME . How can I make it so that when I open a new tab, it opens the shell in the directory I was in previously? | There is a bug related to this issue. All you need to do is add the following line to your .bashrc or .zshrc : . /etc/profile.d/vte.sh At least on Arch, the script checks if you are running either bash or zsh and exits if you are not. | {
"source": [
"https://unix.stackexchange.com/questions/93476",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/44062/"
]
} |
93,531 | According to my knowledge, /dev/pts files are created for ssh or telnet sessions. | Nothing is stored in /dev/pts . This filesystem lives purely in memory. Entries in /dev/pts are pseudo-terminals (pty for short). Unix kernels have a generic notion of terminals . A terminal provides a way for applications to display output and to receive input through a terminal device . A process may have a controlling terminal — for a text mode application, this is how it interacts with the user. Terminals can be either hardware terminals (“tty”, short for “teletype”) or pseudo-terminals (“pty”). Hardware terminals are connected over some interface such as a serial port ( ttyS0 , …) or USB ( ttyUSB0 , …) or over a PC screen and keyboard ( tty1 , …). Pseudo-terminals are provided by a terminal emulator, which is an application. Some types of pseudo-terminals are: GUI applications such as xterm, gnome-terminal, konsole, … transform keyboard and mouse events into text input and display output graphically in some font. Multiplexer applications such as screen and tmux relay input and output from and to another terminal, to decouple text mode applications from the actual terminal. Remote shell applications such as sshd, telnetd, rlogind, … relay input and output between a remote terminal on the client and a pty on the server. If a program opens a terminal for writing, the output from that program appears on the terminal. It is common to have several programs outputting to a terminal at the same time, though this can be confusing at times as there is no way to tell which part of the output came from which program. Background processes that try to write to their controlling terminal may be automatically suspended by a SIGTTOU signal . If a program opens a terminal for reading, the input from the user is passed to that program. If multiple programs are reading from the same terminal, each character is routed independently to one of the programs; this is not recommended. Normally there is only a single program actively reading from the terminal at a given time; programs that try to read from their controlling terminal while they are not in the foreground are automatically suspended by a SIGTTIN signal . To experiment, run tty in a terminal to see what the terminal device is. Let's say it's /dev/pts/42 . In a shell in another terminal, run echo hello >/dev/pts/42 : the string hello will be displayed on the other terminal. Now run cat /dev/pts/42 and type in the other terminal. To kill that cat command (which will make the other terminal hard to use), press Ctrl + C . Writing to another terminal is occasionally useful to display a notification; for example the write command does that. Reading from another terminal is not normally done. | {
"source": [
"https://unix.stackexchange.com/questions/93531",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/47732/"
]
} |
93,532 | How to export a variable which has dot in it. I get 'invalid variable name' when I tried : export my.home=/tmp/someDir
-ksh: my.home=/tmp/someDir: invalid variable name Even escaping the metacharacter dot (.) didn't help either $ export my\.home=/tmp/someDir
export: my.home=/tmp/someDir: is not an identifier | At least for bash the man page defines the export syntax as: export [-fn] [name[=word]] ... It also defines a "name" as: name A word consisting only of alphanumeric characters and underscores,
and beginning with an alphabetic character or an underscore.
Also referred to as an identifier. Hence you really cannot define a variable like my.home as it is not a valid identifier. I am very sure your ksh has a very similar definition of an identifier and therefore does not allow this kind of variable, too. (Have a look at its man page.) I am also very sure there is some kind of general standard (POSIX?) specifying what is allowed as an identifier (and therefore a variable name). If you really need this kind of variable for some reason you can use something like env "my.home=/tmp/someDir" bash to define it anyway. But then again, you will not be able to access it using normal shell syntax. In this case you probably need another language like perl: perl -e 'print $ENV{"my.home"}' For example env "my.home=/tmp/someDir" perl -le 'print $ENV{"my.home"}' should print your path. | {
"source": [
"https://unix.stackexchange.com/questions/93532",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/47902/"
]
} |
93,566 | I was practicing ftp but faced an issue: ls command isn't working on ftp> . Why? I checked on 2 remote servers but ls didn't work on either and gave different output when ls was executed.
Please see below for the 2 remote boxes. The below shows my remote server where I installed vsftpd today. ravbholua@ravbholua-Aspire-5315:~$ ftp rs
Connected to ravi.com.
220 (vsFTPd 3.0.2)
Name (rs:ravbholua):
331 Please specify the password.
Password:
230 Login successful.
Remote system type is UNIX.
Using binary mode to transfer files.
ftp> pwd
257 "/home/ravbholua"
ftp> ls
500 Illegal PORT command.
ftp: bind: Address already in use
ftp> The below is for a different remote machine where I have to send some files. But as ls on ftp> isn't working, how will I transfer files from my local box to that box because I can't be confirmed without ls whether the files have been transferred or not. ravbholua@ravbholua-Aspire-5315:~$ ftp 125.21.153.140
Connected to 125.21.153.140.
220---------- Welcome to Pure-FTPd [TLS] ----------
220-You are user number 1 of 10 allowed.
220-Local time is now 04:34. Server port: 21.
220-This server supports FXP transfers
220 You will be disconnected after 2 minutes of inactivity.
Name (125.21.153.140:ravbholua): peacenews
331 User peacenews OK. Password required
Password:
230 OK. Current restricted directory is /
Remote system type is UNIX.
Using binary mode to transfer files.
ftp> ls
200-FXP transfer: from 123.63.112.168 to 10.215.10.80
200 PORT command successful Please note that for the above machine, once I ran ls on ftp>, the prompt didn't come back. On both the remote machines, I got different output when executed ls on ftp> | FTP is an ancient protocol. It relies on two TCP connections: a control connection over which commands are exchanged, and data connections for the content of files and also for the output of commands such as ls . What's happening here is that the control connection is established, but the data connections aren't going through. By default (active mode), data connections are established from the sender to the receiver. For the output of ls , the data is sent by the server, so the server attempts to open a connection to the client. This worked well when FTP was invented, but nowadays, clients are often behind a firewall or NAT which may or may not support active FTP. Switch to passive mode, where the client always initiates the data connection. Check the manual of your ftp command to see how to switch to passive mode by default. For a one-time thing, typing the command passive usually does the trick. You may wish to switch to a nicer FTP client such as ncftp or lftp . | {
"source": [
"https://unix.stackexchange.com/questions/93566",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/46723/"
]
} |
93,587 | I want to use lftp -c to do an entire session in one go (as I'll be launching this from a script later on) and I managed with -e but that ofc leaves me with the interactive session which I don't want. Manual states -c commands
Execute the given commands and exit. Commands can be separated with a semicolon, `&&'
or `||'. Remember to quote the commands argument properly in the shell. This option
must be used alone without other arguments. But I don't understand how I should quote and string my commands/interactions together correctly. lftp -e "put -O remote/dir/ /local/file.txt" -u user,pass ftpsite.com works excellent. But I want to exit after executing the command; lftp -c "open -u user,pass ftpsite.com" || put -O "remote/dir/ /local/file.txt" just shouts at me, or in fact any combination of quotes I tried ( || or && regardless) | $ lftp -c "open -u user,pass ftpsite.com; put -O remote/dir/ /local/file.txt" should do it. If this doesn't work try adding to your /etc/lftp.conf the following lines: set ftp:ssl-protect-data true
set ftp:ssl-force true
set ftp:ssl-auth TLS
set ssl:verify-certificate no | {
"source": [
"https://unix.stackexchange.com/questions/93587",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/34112/"
]
} |
93,773 | UNIX philosophy says: do one thing and do it well. Make programs that handle text, because that is a universal interface. The sort command, at least GNU sort, has an -o option to output to a file instead of stdout . Why is, say, sort foobar -o whatever useful when I could just sort foobar > whatever ? | It is not just GNU sort that has it. BSD sort has it too. And as to why? (I thought it was a good question too...) From the man page:
"The argument given is the name of an output file to be used
instead of the standard output. This file can be the same as one
of the input files." You can't go to the same file with redirection, the output redirection wipes the file. To further clarify, if I wanted to sort a file and put the sorted results in the same place I might think to try sort < foo > foo . Except the output redirection truncates the file foo in preparation to receive the output. And then there is nothing to sort. Without "-o" the way to do it would be sort < foo > bar ; mv bar foo . I assume the -o option does something similar without making you have to worry about it. | {
"source": [
"https://unix.stackexchange.com/questions/93773",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/29146/"
]
} |
93,783 | I am running an application with command $ grails run-app which prints log in terminal like below. What I want is search a particular text (say user authorities ) in this log so that I can verify further. One way using Logging Apis to write in text file but I want to search it in a terminal at the moment. I found similar question at how to make search a text on the terminal directly which suggests screen command, but I have no idea how screen works in this case.
I tried $ screen grails run-app but couldn't move ahead. I can see screen lists with prayag@prayag:~/zlab/nioc2egdelonk$ screen -list
There is a screen on:
8076.pts-2.prayag (10/06/2013 12:13:25 PM) (Attached)
1 Socket in /var/run/screen/S-prayag. | Ctrl+a (default screen command prefix), [ (enter copy mode) followed by ?SEARCH_TEXT seems to work. Press n to go to the next occurrence. From there, you can copy words, lines, regions, etc to dump into files or paste later on (with Ctrl+a , ] ). | {
"source": [
"https://unix.stackexchange.com/questions/93783",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/17781/"
]
} |
93,808 | Why do the commands dig and nslookup sometimes print different results? ~$ dig facebook.com
; <<>> DiG 9.9.2-P1 <<>> facebook.com
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 6625
;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 1
;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 512
;; QUESTION SECTION:
;facebook.com. IN A
;; ANSWER SECTION:
facebook.com. 205 IN A 173.252.110.27
;; Query time: 291 msec
;; SERVER: 8.8.8.8#53(8.8.8.8)
;; WHEN: Sun Oct 6 17:55:52 2013
;; MSG SIZE rcvd: 57
~$ nslookup facebook.com
Server: 8.8.8.8
Address: 8.8.8.8#53
Non-authoritative answer:
Name: facebook.com
Address: 10.10.34.34 | dig uses the OS resolver libraries. nslookup uses its own internal ones. That is why Internet Systems Consortium (ISC) has been trying to get people to stop using nslookup for some time now. It causes confusion. | {
"source": [
"https://unix.stackexchange.com/questions/93808",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/28488/"
]
} |
93,960 | Can anyone explain why Linux is designed as a single directory tree? Whereas in Windows we can have multiple drives like C:\ , and D:\ , there is a single root in Unix. Any specific reason there? | Since the Unix file system predates Windows by many years, one may re-phrase the question to "why does Windows use a separate designator for each device?". A hierarchical filesystem has the advantage that any file or directory can be found as a child of the root directory. If you need to move data to a new device or a network device, the location in the file system can stay the same and the application will not see the difference. Suppose you have a system where the OS is static and there is an application that has high I/O requirements. You can mount /usr read-only and put /opt (if the app lives there) onto SSD drives. The filesystem hierarchy doesn't change. Under Windows this is much more difficult, particularly with applications that insist on living under C:\Program Files\ | {
"source": [
"https://unix.stackexchange.com/questions/93960",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/47732/"
]
} |
94,018 | Can anyone clarify gateway assignment for me? What is the difference between adding a gateway as 0.0.0.0 and assigning a specific IP address as a gateway? | 0.0.0.0 has the specific meaning "unspecified". This roughly translates to "there is none" in the context of a gateway. Of course, this assumes that the network is locally connected, as there is no intermediate hop. The machine will send the packet out that interface as though to a machine connected to that segment, which in Ethernet means the MAC address of the destination host will be used instead of the MAC address of the next hop gateway. As a destination, 0.0.0.0/0 is special: if there are no network bits, there can't be anything in the network number either. So, it's naturally unspecified. For prefix matching it masks off all bits, so all addresses are within 0.0.0.0/0 ; for this reason it's used to mean "default gateway" in routing tables. It is also the least-specific possible route, so selections that prioritize specificity will choose anything else available and match 0.0.0.0/0 as a last resort. However, sticking to your question, yes, it does have a special meaning. It means that the network is locally connected on that interface and no more hops are needed to get to it. | {
"source": [
"https://unix.stackexchange.com/questions/94018",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/47732/"
]
} |
94,041 | Is there a way of creating a filesystem object akin to this: mknod files p
cat file1 file2 ... fileN > files but such that it can be seeked in, as if it were a regular file? | On Linux-based operating systems, that can be done with network block devices or device mapper devices. The file you obtain is a block device. With nbd : ln -s /path/to/first-file file.0
...
ln -s /path/to/last-file file.19
nbd-server -C /dev/null -m 127.0.0.1:12345 file
sudo nbd-client localhost 12345 /dev/nbd0 (the concatenation is /dev/nbd0 ). With device mapper (file sizes have to be multiple of 512): sudo losetup /dev/loop0 file1
sudo losetup /dev/loop1 file2
s0=$(sudo blockdev --getsize /dev/loop0)
s1=$(sudo blockdev --getsize /dev/loop1)
printf '%s\n' "0 $s0 linear /dev/loop0 0" "$s0 $s1 linear /dev/loop1 0" |
sudo dmsetup create mybundle (the concatenation is /dev/mapper/mybundle ). | {
"source": [
"https://unix.stackexchange.com/questions/94041",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/42056/"
]
} |
94,295 | I am trying out shellcheck . I have something like that basename "${OPENSSL}" and I get the following suggestion Use parameter expansion instead, such as ${var##*/}. From the practical point of view I see no difference $ export OPENSSL=/opt/local/bin/openssl
$ basename ${OPENSSL}
openssl
$ echo ${OPENSSL##*/}
openssl Since basename is in the POSIX specs , I don't see a reason why it should be best practice. Any hint? | It's not about efficiency -- it's about correctness. basename uses newlines to delimit the filenames it prints out. In the usual case when you only pass one filename, it adds a trailing newline to its output. Since filenames may contain newlines themselves, this makes it difficult to correctly handle these filenames. It's further complicated by the fact that people usually use basename like this: "$(basename "$file")" . This makes things even more difficult, because $(command) strips all trailing newlines from command . Consider the unlikely case that $file ends with a newline. Then basename will add an extra newline, but "$(basename "$file")" will strip both newlines, leaving you with an incorrect filename. Another problem with basename is that if $file begins with a - (dash a.k.a. minus), it will be interpreted as an option. This one is easy to fix: $(basename -- "$file") The robust way of using basename is this: # A file with three trailing newlines.
file=$'/tmp/evil\n\n\n'
# Add an 'x' so we can tell where $file's newlines end and basename's begin.
file_x="$(basename -- "$file"; printf x)"
# Strip off two trailing characters: the 'x' added by us and the newline added by basename.
base="${file_x%??}" An alternative is to use ${file##*/} , which is easier but has bugs of its own. In particular, it's wrong in the cases where $file is / or foo/ . | {
"source": [
"https://unix.stackexchange.com/questions/94295",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/10616/"
]
} |
94,299 | I understand ls uses dircolors to display colored output. dircolors has a default database of colors associated with file extensions, which can be printed with the command dircolors --print-database From man dir_colors I read, the system-wide database should be located in /etc/DIR_COLORS . But this file does not exist on my system (Debian). How can I modify system-wide color settings for dircolors ? Where does the command dircolors --print-database take the settings from, when no file exists. I am aware that a user can have a user-specific file ~/.dircolors with his settings, but this is not suitable for me, since I need to change the settings for everybody. A second question is whether it is possible to use 8-bit colors for dircolors. My terminal is xterm-256color . | ls takes its color settings from the environment variable LS_COLORS . dircolors is merely a convenient way to generate this environment variable. To have this environment variable take effect system-wide, put it in your shell's startup file. For bash , you'd put this in /etc/profile : # `dircolors` prints out `LS_COLORS='...'; export LS_COLORS`, so eval'ing
# $(dircolors) effectively sets the LS_COLORS environment variable.
eval "$(dircolors /etc/DIR_COLORS)" For zsh , you'd either put it in /etc/zshrc or arrange for zsh to read /etc/profile on startup. Your distribution might have zsh do that already. I just bring this up to point out that setting dircolors for truly everybody depends on the shell they use. As for where dircolors gets its settings from, when you don't specify a file it just uses some builtin defaults. You can use xterm 's 256 color escape codes in your dircolors file, but be aware that they'll only work for xterm compatible terminals. They won't work on the Linux text console, for example. The format for 256 color escape codes is 38;5;colorN for foreground colors and 48;5;colorN for background colors. So for example: .mp3 38;5;160 # Set fg color to color 160
.flac 48;5;240 # Set bg color to color 240
.ogg 38;5;160;48;5;240 # Set fg color 160 *and* bg color 240.
.wav 01;04;05;38;5;160;48;5;240 # Pure madness: make bold (01), underlined (04), blink (05), fg color 160, and bg color 240! | {
"source": [
"https://unix.stackexchange.com/questions/94299",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/36112/"
]
} |
94,322 | I'm working on an embedded Linux project where I will be developing a program that will run automatically on bootup and interact with the user via a character display and some sort of button array. If we go with a simple GPIO button array, I can easily write program that will look for keypresses on those GPIO lines. However, one of our thoughts was to use a USB number pad device instead for user input. My understanding is that those devices will present themselves to the OS as a USB keyboard. If go down this path, is there a way for my program to look for input on this USB keyboard from within Linux, keeping in mind that there is no virtual terminal or VGA display. When a USB keyboard is plugged in, is there an entity in '/dev' that appears that I can open a file descriptor for? | Devices most likely get a file in /dev/input/ named eventN where N is the various devices like mouse, keyboard, jack, power-buttons etc. ls -l /dev/input/by-{path,id}/ should give you a hint. Also look at: cat /proc/bus/input/devices Where Sysfs value is path under /sys . You can test by e.g. cat /dev/input/event2 # if 2 is kbd. To implement use ioctl and check devices + monitor. EDIT 2: OK. I'm expanding on this answer based on the assumption /dev/input/eventN is used. One way could be: At startup loop all event files found in /dev/input/ . Use ioctl() to request event bits: ioctl(fd, EVIOCGBIT(0, sizeof(evbit)), &evbit); then check if EV_KEY -bit is set. IFF set then check for keys: ioctl(fd, EVIOCGBIT(EV_KEY, sizeof(keybit)), &keybit); E.g. if number-keys are interesting, then check if bits for KEY_0 - KEY9 and KEY_KP0 to KEY_KP9 . IFF keys found then start monitoring event file in thread. Back to 1. This way you should get to monitor all devices that meet the wanted criteria. You can't only check for EV_KEY as e.g. power-button will have this bit set, but it obviously won't have KEY_A etc. set. Have seen false positives for exotic keys, but for normal keys this should suffice. There is no direct harm in monitoring e.g. event file for power button or a jack, but you those won't emit events in question (aka. bad code). More in detail below. EDIT 1: In regards to "Explain that last statement …" . Going over in stackoverflow land here … but: A quick and dirty sample in C. You'll have to implement various code to check that you actually get correct device, translate event type, code and value. Typically key-down, key-up, key-repeat, key-code, etc. Haven't time, (and is too much here), to add the rest. Check out linux/input.h , programs like dumpkeys , kernel code etc. for mapping codes. E.g. dumpkeys -l Anyhow: Run as e.g.: # ./testprog /dev/input/event2 Code: #include <stdio.h>
#include <string.h> /* strerror() */
#include <errno.h> /* errno */
#include <fcntl.h> /* open() */
#include <unistd.h> /* close() */
#include <sys/ioctl.h> /* ioctl() */
#include <linux/input.h> /* EVIOCGVERSION ++ */
#define EV_BUF_SIZE 16
int main(int argc, char *argv[])
{
int fd, sz;
unsigned i;
/* A few examples of information to gather */
unsigned version;
unsigned short id[4]; /* or use struct input_id */
char name[256] = "N/A";
struct input_event ev[EV_BUF_SIZE]; /* Read up to N events ata time */
if (argc < 2) {
fprintf(stderr,
"Usage: %s /dev/input/eventN\n"
"Where X = input device number\n",
argv[0]
);
return EINVAL;
}
if ((fd = open(argv[1], O_RDONLY)) < 0) {
fprintf(stderr,
"ERR %d:\n"
"Unable to open `%s'\n"
"%s\n",
errno, argv[1], strerror(errno)
);
}
/* Error check here as well. */
ioctl(fd, EVIOCGVERSION, &version);
ioctl(fd, EVIOCGID, id);
ioctl(fd, EVIOCGNAME(sizeof(name)), name);
fprintf(stderr,
"Name : %s\n"
"Version : %d.%d.%d\n"
"ID : Bus=%04x Vendor=%04x Product=%04x Version=%04x\n"
"----------\n"
,
name,
version >> 16,
(version >> 8) & 0xff,
version & 0xff,
id[ID_BUS],
id[ID_VENDOR],
id[ID_PRODUCT],
id[ID_VERSION]
);
/* Loop. Read event file and parse result. */
for (;;) {
sz = read(fd, ev, sizeof(struct input_event) * EV_BUF_SIZE);
if (sz < (int) sizeof(struct input_event)) {
fprintf(stderr,
"ERR %d:\n"
"Reading of `%s' failed\n"
"%s\n",
errno, argv[1], strerror(errno)
);
goto fine;
}
/* Implement code to translate type, code and value */
for (i = 0; i < sz / sizeof(struct input_event); ++i) {
fprintf(stderr,
"%ld.%06ld: "
"type=%02x "
"code=%02x "
"value=%02x\n",
ev[i].time.tv_sec,
ev[i].time.tv_usec,
ev[i].type,
ev[i].code,
ev[i].value
);
}
}
fine:
close(fd);
return errno;
} EDIT 2 (continued): Note that if you look at /proc/bus/input/devices you have a letter at start of each line. Here B means bit-map. That is for example: B: PROP=0
B: EV=120013
B: KEY=20000 200 20 0 0 0 0 500f 2100002 3803078 f900d401 feffffdf ffefffff ffffffff fffffffe
B: MSC=10
B: LED=7 Each of those bits correspond to a property of the device. Which by bit-map means, 1 indicate a property is present, as defined in linux/input.h . : B: PROP=0 => 0000 0000
B: EV=120013 => 0001 0010 0000 0000 0001 0011 (Event types sup. in this device.)
| | | ||
| | | |+-- EV_SYN (0x00)
| | | +--- EV_KEY (0x01)
| | +------- EV_MSC (0x04)
| +----------------------- EV_LED (0x11)
+--------------------------- EV_REP (0x14)
B: KEY=20... => OK, I'm not writing out this one as it is a bit huge.
B: MSC=10 => 0001 0000
|
+------- MSC_SCAN
B: LED=7 => 0000 0111 , indicates what LED's are present
|||
||+-- LED_NUML
|+--- LED_CAPSL
+---- LED_SCROLL Have a look at /drivers/input/input.{h,c} in the kernel source tree. A lot of good code there. (E.g. the devices properties are produced by this function .) Each of these property maps can be attained by ioctl . For example, if you want to check what LED properties are available say: ioctl(fd, EVIOCGBIT(EV_LED, sizeof(ledbit)), &ledbit); Look at definition of struct input_dev in input.h for how ledbit are defined. To check status for LED's say: ioctl(fd, EVIOCGLED(sizeof(ledbit)), &ledbit); If bit 1 in ledbit are 1 then num-lock are lit. If bit 2 is 1 then caps lock is lit etc. input.h has the various defines. Notes when it comes to event monitoring: Pseudo-code for monitoring could be something in the direction of: WHILE TRUE
READ input_event
IF event->type == EV_SYN THEN
IF event->code == SYN_DROPPED THEN
Discard all events including next EV_SYN
ELSE
This marks EOF current event.
FI
ELSE IF event->type == EV_KEY THEN
SWITCH ev->value
CASE 0: Key Release (act accordingly)
CASE 1: Key Press (act accordingly)
CASE 2: Key Autorepeat (act accordingly)
END SWITCH
FI
END WHILE Some related documents: Documentation/input/input.txt , esp. note section 5. Documentation/input/event-codes.txt , description of various events etc. Take note to what is mentioned under e.g. EV_SYN about SYN_DROPPED Documentation/input ... read up on the rest if you want. | {
"source": [
"https://unix.stackexchange.com/questions/94322",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/41988/"
]
} |
94,326 | On my Elementary OS system, dpkg reports a number of kernel packages that aren't installed. (I did an apt-get purge on them previously.) I'd like to have them forgotten about entirely, but I can't figure out how to get them that way. For example: elementary:~$ dpkg -l linux-*-3.2.0-51*
Desired=Unknown/Install/Remove/Purge/Hold
| Status=Not/Inst/Conf-files/Unpacked/halF-conf/Half-inst/trig-aWait/Trig-pend
|/ Err?=(none)/Reinst-required (Status,Err: uppercase=bad)
||/ Name Version Description
+++-====================================-====================================-========================================================================================
un linux-headers-3.2.0-51 <none> (no description available)
un linux-headers-3.2.0-51-generic <none> (no description available)
un linux-image-3.2.0-51-generic <none> (no description available) apt-get purge doesn't work: elementary:~$ sudo apt-get purge linux-headers-3.2.0-51
Reading package lists... Done
Building dependency tree
Reading state information... Done
Package linux-headers-3.2.0-51 is not installed, so not removed
0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded. dpkg doesn't work: elementary:~$ sudo dpkg --purge linux-headers-3.2.0-51
dpkg: warning: there's no installed package matching linux-headers-3.2.0-51
elementary:~$ sudo dpkg --forget-old-unavail
dpkg: warning: obsolete '--forget-old-unavail' option, unavailable packages are automatically cleaned up. apt-cache shows: elementary:~$ apt-cache policy linux-headers-3.2.0-51
linux-headers-3.2.0-51:
Installed: (none)
Candidate: 3.2.0-51.77
Version table:
3.2.0-51.77 0
500 http://us.archive.ubuntu.com/ubuntu/ precise-updates/main amd64 Packages
500 http://security.ubuntu.com/ubuntu/ precise-security/main amd64 Packages aptitude isn't installed. Obviously, there's something I'm not understanding. Why does dpkg -l show purged packages? How do I go about making dpkg forget about them? | Devices most likely get a file in /dev/input/ named eventN where N is the various devices like mouse, keyboard, jack, power-buttons etc. ls -l /dev/input/by-{path,id}/ should give you a hint. Also look at: cat /proc/bus/input/devices Where Sysfs value is path under /sys . You can test by e.g. cat /dev/input/event2 # if 2 is kbd. To implement use ioctl and check devices + monitor. EDIT 2: OK. I'm expanding on this answer based on the assumption /dev/input/eventN is used. One way could be: At startup loop all event files found in /dev/input/ . Use ioctl() to request event bits: ioctl(fd, EVIOCGBIT(0, sizeof(evbit)), &evbit); then check if EV_KEY -bit is set. IFF set then check for keys: ioctl(fd, EVIOCGBIT(EV_KEY, sizeof(keybit)), &keybit); E.g. if number-keys are interesting, then check if bits for KEY_0 - KEY9 and KEY_KP0 to KEY_KP9 . IFF keys found then start monitoring event file in thread. Back to 1. This way you should get to monitor all devices that meet the wanted criteria. You can't only check for EV_KEY as e.g. power-button will have this bit set, but it obviously won't have KEY_A etc. set. Have seen false positives for exotic keys, but for normal keys this should suffice. There is no direct harm in monitoring e.g. event file for power button or a jack, but you those won't emit events in question (aka. bad code). More in detail below. EDIT 1: In regards to "Explain that last statement …" . Going over in stackoverflow land here … but: A quick and dirty sample in C. You'll have to implement various code to check that you actually get correct device, translate event type, code and value. Typically key-down, key-up, key-repeat, key-code, etc. Haven't time, (and is too much here), to add the rest. Check out linux/input.h , programs like dumpkeys , kernel code etc. for mapping codes. E.g. dumpkeys -l Anyhow: Run as e.g.: # ./testprog /dev/input/event2 Code: #include <stdio.h>
#include <string.h> /* strerror() */
#include <errno.h> /* errno */
#include <fcntl.h> /* open() */
#include <unistd.h> /* close() */
#include <sys/ioctl.h> /* ioctl() */
#include <linux/input.h> /* EVIOCGVERSION ++ */
#define EV_BUF_SIZE 16
int main(int argc, char *argv[])
{
int fd, sz;
unsigned i;
/* A few examples of information to gather */
unsigned version;
unsigned short id[4]; /* or use struct input_id */
char name[256] = "N/A";
struct input_event ev[EV_BUF_SIZE]; /* Read up to N events ata time */
if (argc < 2) {
fprintf(stderr,
"Usage: %s /dev/input/eventN\n"
"Where X = input device number\n",
argv[0]
);
return EINVAL;
}
if ((fd = open(argv[1], O_RDONLY)) < 0) {
fprintf(stderr,
"ERR %d:\n"
"Unable to open `%s'\n"
"%s\n",
errno, argv[1], strerror(errno)
);
}
/* Error check here as well. */
ioctl(fd, EVIOCGVERSION, &version);
ioctl(fd, EVIOCGID, id);
ioctl(fd, EVIOCGNAME(sizeof(name)), name);
fprintf(stderr,
"Name : %s\n"
"Version : %d.%d.%d\n"
"ID : Bus=%04x Vendor=%04x Product=%04x Version=%04x\n"
"----------\n"
,
name,
version >> 16,
(version >> 8) & 0xff,
version & 0xff,
id[ID_BUS],
id[ID_VENDOR],
id[ID_PRODUCT],
id[ID_VERSION]
);
/* Loop. Read event file and parse result. */
for (;;) {
sz = read(fd, ev, sizeof(struct input_event) * EV_BUF_SIZE);
if (sz < (int) sizeof(struct input_event)) {
fprintf(stderr,
"ERR %d:\n"
"Reading of `%s' failed\n"
"%s\n",
errno, argv[1], strerror(errno)
);
goto fine;
}
/* Implement code to translate type, code and value */
for (i = 0; i < sz / sizeof(struct input_event); ++i) {
fprintf(stderr,
"%ld.%06ld: "
"type=%02x "
"code=%02x "
"value=%02x\n",
ev[i].time.tv_sec,
ev[i].time.tv_usec,
ev[i].type,
ev[i].code,
ev[i].value
);
}
}
fine:
close(fd);
return errno;
} EDIT 2 (continued): Note that if you look at /proc/bus/input/devices you have a letter at start of each line. Here B means bit-map. That is for example: B: PROP=0
B: EV=120013
B: KEY=20000 200 20 0 0 0 0 500f 2100002 3803078 f900d401 feffffdf ffefffff ffffffff fffffffe
B: MSC=10
B: LED=7 Each of those bits correspond to a property of the device. Which by bit-map means, 1 indicate a property is present, as defined in linux/input.h . : B: PROP=0 => 0000 0000
B: EV=120013 => 0001 0010 0000 0000 0001 0011 (Event types sup. in this device.)
| | | ||
| | | |+-- EV_SYN (0x00)
| | | +--- EV_KEY (0x01)
| | +------- EV_MSC (0x04)
| +----------------------- EV_LED (0x11)
+--------------------------- EV_REP (0x14)
B: KEY=20... => OK, I'm not writing out this one as it is a bit huge.
B: MSC=10 => 0001 0000
|
+------- MSC_SCAN
B: LED=7 => 0000 0111 , indicates what LED's are present
|||
||+-- LED_NUML
|+--- LED_CAPSL
+---- LED_SCROLL Have a look at /drivers/input/input.{h,c} in the kernel source tree. A lot of good code there. (E.g. the devices properties are produced by this function .) Each of these property maps can be attained by ioctl . For example, if you want to check what LED properties are available say: ioctl(fd, EVIOCGBIT(EV_LED, sizeof(ledbit)), &ledbit); Look at definition of struct input_dev in input.h for how ledbit are defined. To check status for LED's say: ioctl(fd, EVIOCGLED(sizeof(ledbit)), &ledbit); If bit 1 in ledbit are 1 then num-lock are lit. If bit 2 is 1 then caps lock is lit etc. input.h has the various defines. Notes when it comes to event monitoring: Pseudo-code for monitoring could be something in the direction of: WHILE TRUE
READ input_event
IF event->type == EV_SYN THEN
IF event->code == SYN_DROPPED THEN
Discard all events including next EV_SYN
ELSE
This marks EOF current event.
FI
ELSE IF event->type == EV_KEY THEN
SWITCH ev->value
CASE 0: Key Release (act accordingly)
CASE 1: Key Press (act accordingly)
CASE 2: Key Autorepeat (act accordingly)
END SWITCH
FI
END WHILE Some related documents: Documentation/input/input.txt , esp. note section 5. Documentation/input/event-codes.txt , description of various events etc. Take note to what is mentioned under e.g. EV_SYN about SYN_DROPPED Documentation/input ... read up on the rest if you want. | {
"source": [
"https://unix.stackexchange.com/questions/94326",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/39976/"
]
} |
94,331 | How can I delete a word backward at the command line? I'm truly used to some editors deleting the last 'word' using Ctrl + Backspace , and I'd like that functionality at the command line too. I am using Bash at the moment and although I could jump backward a word and then delete forward a word, I'd rather have this as a quick-key, or even as Ctrl + Backspace . How can I accomplish this? | Ctrl + W is the standard "kill word" (aka werase ). Ctrl + U kills the whole line ( kill ). You can change them with stty . -bash-4.2$ stty -a
speed 38400 baud; 24 rows; 80 columns;
lflags: icanon isig iexten echo echoe -echok echoke -echonl echoctl
-echoprt -altwerase -noflsh -tostop -flusho pendin -nokerninfo
-extproc -xcase
iflags: -istrip icrnl -inlcr -igncr -iuclc ixon -ixoff ixany imaxbel
-ignbrk brkint -inpck -ignpar -parmrk
oflags: opost onlcr -ocrnl -onocr -onlret -olcuc oxtabs -onoeot
cflags: cread cs8 -parenb -parodd hupcl -clocal -cstopb -crtscts -mdmbuf
cchars: discard = ^O; dsusp = ^Y; eof = ^D; eol = <undef>;
eol2 = <undef>; erase = ^?; intr = ^C; kill = ^U; lnext = ^V;
min = 1; quit = ^\; reprint = ^R; start = ^Q; status = <undef>;
stop = ^S; susp = ^Z; time = 0; werase = ^W;
-bash-4.2$ stty werase ^p
-bash-4.2$ stty kill ^a
-bash-4.2$ Note that one does not have to put the actual control character on the line; stty understands putting ^ and then the character you would hit with control. After doing this, if I hit Ctrl + P it will erase a word from the line. And if I hit Ctrl + A , it will erase the whole line.
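If the goal is specifically Ctrl + Backspace in bash, another option is a readline binding instead of a tty setting. In many terminal emulators Backspace sends ^? and Ctrl + Backspace sends ^H , so the following line (in ~/.bashrc , for example) makes Ctrl + Backspace delete the previous word; this depends on what your terminal actually sends, so treat it as a sketch:
bind '"\C-h": backward-kill-word'
You can check what your terminal sends by pressing Ctrl + V followed by Ctrl + Backspace at the prompt. | {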
"source": [
"https://unix.stackexchange.com/questions/94331",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/16877/"
]
} |
94,357 | What command(s) can one use to find out the current working directory (CWD) of a running process? These would be commands you could use externally from the process. | There are 3 methods that I'm aware of: pwdx $ pwdx <PID> lsof $ lsof -p <PID> | grep cwd /proc $ readlink -e /proc/<PID>/cwd Examples Say we have this process. $ pgrep nautilus
12136 Then if we use pwdx : $ pwdx 12136
12136: /home/saml Or you can use lsof : $ lsof -p 12136 | grep cwd
nautilus 12136 saml cwd DIR 253,2 32768 10354689 /home/saml Or you can poke directly into the /proc : $ readlink -e /proc/12136/cwd/
/home/saml | {
"source": [
"https://unix.stackexchange.com/questions/94357",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/7453/"
]
} |
94,421 | I'm now using rsync with -e 'ssh -p 10022' option to specify the port. I have already ssh setting in ~/.ssh/config . Host myvps
HostName example.com
User ironsand
Port 10022 Can I use this config from rsync easily?
Or Can I create ~/.rsync and set a default port for specify server? | Specify "myvps" as the hostname. rsync /var/bar myvps:/home/foo ... | {
"source": [
"https://unix.stackexchange.com/questions/94421",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/44001/"
]
} |
94,479 | I am setting up a new, dedicated, centos 6.4 system with redis. I have installed redis many times, but have never hit this issue (and have never been on centos 6.4 before). cd redis-2.6.16
sudo make install error: MAKE jemalloc
cd jemalloc && ./configure --with-lg-quantum=3 --with-jemalloc-prefix=je_ --enable-cc-silence CFLAGS="-std=gnu99 -Wall -pipe -g3 -O3 -funroll-loops " LDFLAGS=""
/bin/sh: ./configure: Permission denied
make[2]: *** [jemalloc] Error 126
make[2]: Leaving directory `/tmp/redis32/redis-3.2.6/deps'
make[1]: [persist-settings] Error 2 (ignored)
sh: ./mkreleasehdr.sh: Permission denied
and later:
zmalloc.h:50:31: error: jemalloc/jemalloc.h: No such file or directory
zmalloc.h:55:2: error: #error "Newer version of jemalloc required" When I try to build jemalloc directly (from the /src area of the redis tarball), other errors include: cd src && make jemalloc
sh: ./mkreleasehdr.sh: Permission denied
make[1]: Entering directory `/tmp/rediswork/redis-2.6.16/src'
make[1]: *** No rule to make target `jemalloc'. Stop.
make[1]: Leaving directory `/tmp/rediswork/redis-2.6.16/src'
make: *** [jemalloc] Error 2 I also tried redis 2.6.7 and have the same issue. I have dug all over and can find no path forward. | I ran into the same issue on centos 6.4 and had to run the following commands: cd deps
make hiredis jemalloc linenoise lua geohash-int
cd ..
make install I am not sure why the deps were not built, I thought they were in the past. However, this got me up and running with the version of redis that I needed. | {
"source": [
"https://unix.stackexchange.com/questions/94479",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/49155/"
]
} |
94,490 | bash won't source .bashrc from an interactive terminal unless I manually run bash from a terminal: $ bash or manually source it: $ source ./.bashrc or running: $ st -e bash Here's some useful output I hope: $ echo $TERM
st-256color
$ echo $SHELL
/bin/sh
$ readlink /bin/sh
bash
$ shopt login_shell
login_shell off I'm on CRUX Linux 3.0 and I use dwm and st . I've tried using .bash_profile and .profile with no success. Any ideas? | In .bash_profile make sure you have the following: # .bash_profile
# If .bash_profile exists, bash doesn't read .profile
if [[ -f ~/.profile ]]; then
. ~/.profile
fi
# If the shell is interactive and .bashrc exists, get the aliases and functions
if [[ $- == *i* && -f ~/.bashrc ]]; then
. ~/.bashrc
fi | {
"source": [
"https://unix.stackexchange.com/questions/94490",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/15861/"
]
} |
94,498 | There are two directories shown by 'ls'. Normally directories anywhere are blue on black background. But the first one is blue on green and impossible to read. Why is this? How to make it blue on black, or at least something light on something dark? This is on Ubuntu 12.04, using bash in Gnome Terminal. In Konsole, the blue is slightly darker, and possible to read, though could be way better. | Apart from coloring files based on their type (turquoise for audio files, bright red for Archives and compressed files, and purple for images and videos), ls also colors files and directories based on their attributes: Black text with green background indicates that a directory is writable by others apart from the owning user and group, and has the sticky bit set ( o+w, +t ). Blue text with green background indicates that a directory is writable by others apart from the owning user and group, and does not have the sticky bit set ( o+w, -t ). Stephano Palazzo over at Ask Ubuntu has made this very instructive picture over the different attribute colors: As terdon pointed out, the color settings can be modified via dircolors . A list of the different coloring settings can be accessed with dircolors --print-database . Each line of output, such as BLK 40;33;01 , is of the form: [TARGET] [TEXT_STYLE];[FOREGROUND_COLOR];[BACKGROUND_COLOR] TARGET indicates the target for the coloring rule TEXT_STYLE indicates the text style: 00 = none 01 = bold 04 = underscore 05 = blink 07 = reverse, 08 = concealed FOREGROUND_COLOR indicates the foreground color: 30 = black 31 = red 32 = green 33 = yellow 34 = blue, 35 = magenta 36 = cyan 37 = white BACKGROUND_COLOR indicates the background colors: 40 = black 41 = red 42 = green 43 = yellow 44 = blue, 45 = magenta 46 = cyan 47 = white Fields may be omitted starting from the right, so for instance .tar 01;31 means bold and red. XTerm and most other modern terminal emulators support 256 colors. A XTerm 256-color foreground color code is of the form: 38;5;[FOREGROUND_COLOR] A XTerm 256-color background color code is of the form: 48;5;[BACKGROUND_COLOR] where both FOREGROUND_COLOR and BACKGROUND_COLOR is a number the range 0-255. A full list of color codes for the 16 and 256 color modes are shown in the below screenshot: | {
"source": [
"https://unix.stackexchange.com/questions/94498",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/2874/"
]
} |
94,518 | I need a series of commands or a single command that sleeps until the next occurrence of a specific time like "4:00" occurs. How would I do that? The at command or a cronjob is not a option because I must not leave the script I'm currently in. The specific case I am talking about is a script running in screen. It is very important that I do not stop the execution of the script by the script itself because there are store many important variables which are needed at some point of the script.
The script is not always supposed to be executed on a regular schedule; it just needs to be executed at a specific time. It would be very beneficial if the script did not have to create any files or other artifacts such as cronjobs or additional screens. This is simply a question of design. I just had an awesome idea: difference=$(($(date -d "4:00" +%s) - $(date +%s)))
if [ $difference -lt 0 ]
then
sleep $((86400 + difference))
else
sleep $difference
fi Do you have any better ideas? More information will be added if requested! | terdon's suggestion would work but I guess mine is more efficient. difference=$(($(date -d "4:00" +%s) - $(date +%s)))
if [ $difference -lt 0 ]
then
sleep $((86400 + difference))
else
sleep $difference
fi This is calculating the difference between the given time and the current time in seconds. If the number is negative we have to add the seconds for a whole day (86400 to be exact) to get the seconds we have to sleep and if the number is postive we can just use it. | {
"source": [
"https://unix.stackexchange.com/questions/94518",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/45867/"
]
} |
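For reuse, the accepted snippet can be wrapped in a function; a sketch only, assuming GNU date for the -d parsing, and the name sleep_until is invented for the example: sleep_until() {                         # usage: sleep_until "4:00"
    local difference=$(( $(date -d "$1" +%s) - $(date +%s) ))
    [ "$difference" -lt 0 ] && difference=$(( 86400 + difference ))
    sleep "$difference"
}
sleep_until "4:00"                      # blocks until the next 4:00, then the script continues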
94,527 | I accidentally created over 1000 screens. How do I kill them all with one command? (Or a few) | You can use : pkill screen Or killall screen In OSX the process is called SCREEN in all caps. So, use: pkill SCREEN Or killall SCREEN | {
"source": [
"https://unix.stackexchange.com/questions/94527",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/45867/"
]
} |
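If only some of the sessions should be killed rather than all of them, screen can also address sessions by name; a rough sketch, where 12345.mysession stands in for whatever screen -ls reports: screen -ls                              # list the sessions and their names first
screen -S 12345.mysession -X quit       # kill one specific session
screen -wipe                            # remove any dead sessions still listed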
94,603 | I am running vsftpd as an FTP server on my Linux machine (Raspbian), and I log in to the machine as a root user. I would still like to be locked to using only /var/www; how can I configure vsftpd to accomplish this? | Method 1: Changing the user's home directory Make sure the following line exists: chroot_local_user=YES Set the user's HOME directory to /var/www/ ; if you want to change it for an existing user you can use: usermod --home /var/www/ username then set the required permissions on /var/www/ Method 2: Use user_sub_token If you don't want to change the user's home directory, you can use: chroot_local_user=YES
local_root=/ftphome/$USER
user_sub_token=$USER About user_sub_token : Automatically generate a home directory for each virtual user, based on a template.
For example, if the home directory of the real user specified via guest_username is
/ftphome/$USER, and user_sub_token is set to $USER, then when virtual user test
logs in, he will end up (usually chroot()'ed) in the directory /ftphome/test.
This option also takes affect if local_root contains user_sub_token. Create directory and set up permissions: mkdir -p /ftphome/{test,user1,user2}
chmod 770 -R /ftphome
chown -R ftp. /ftphome
usermod -G ftp test Then restart vsftpd and test your setup. Sample success output: [root@mail tmp]# ftp localhost
Connected to mail.linuxian.local.
220 (vsFTPd 2.0.5)
530 Please login with USER and PASS.
530 Please login with USER and PASS.
KERBEROS_V4 rejected as an authentication type
Name (localhost:root): test
331 Please specify the password.
Password:
230 Login successful.
Remote system type is UNIX.
Using binary mode to transfer files.
ftp> mput vhosts
mput vhosts?
227 Entering Passive Mode (127,0,0,1,146,41)
150 Ok to send data.
226 File receive OK.
24 bytes sent in 3.3e-05 seconds (7.1e+02 Kbytes/s)
ftp> ls -rlt
227 Entering Passive Mode (127,0,0,1,97,90)
150 Here comes the directory listing.
-rw-r--r-- 1 787 787 24 Oct 11 19:57 vhosts
226 Directory send OK.
ftp> 221 Goodbye. | {
"source": [
"https://unix.stackexchange.com/questions/94603",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/49010/"
]
} |
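Pulling the Method 2 settings into one place, the relevant /etc/vsftpd.conf fragment might look roughly like this; a sketch based on the answer above, not a complete configuration, and write_enable is an assumption (the mput test implies uploads are wanted): # /etc/vsftpd.conf (excerpt)
chroot_local_user=YES
local_root=/ftphome/$USER
user_sub_token=$USER
write_enable=YES Afterwards restart the daemon, e.g. service vsftpd restart (or systemctl restart vsftpd on systemd machines).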
94,604 | So far I couldn't find anything really, but is it true that curl doesn't really time out at all? user@host:~# curl http://localhost/testdir/image.jpg I'm asking because I'm redirecting any request for images in testdir to a separate Apache module which generates those pictures on the fly. It can take up to 15 minutes before the picture is actually ready and delivered to the requesting client. Will curl always wait (or is it depending on configuration) or is there any sort of timeout? | Yes. Timeout parameters curl has two options: --connect-timeout and --max-time . Quoting from the manpage: --connect-timeout <seconds>
Maximum time in seconds that you allow the connection to the
server to take. This only limits the connection phase, once
curl has connected this option is of no more use. Since 7.32.0,
this option accepts decimal values, but the actual timeout will
decrease in accuracy as the specified timeout increases in deci‐
mal precision. See also the -m, --max-time option.
If this option is used several times, the last one will be used. and: -m, --max-time <seconds>
Maximum time in seconds that you allow the whole operation to
take. This is useful for preventing your batch jobs from hang‐
ing for hours due to slow networks or links going down. Since
7.32.0, this option accepts decimal values, but the actual time‐
out will decrease in accuracy as the specified timeout increases
in decimal precision. See also the --connect-timeout option.
If this option is used several times, the last one will be used. Defaults Here (on Debian) it stops trying to connect after 2 minutes, regardless of the time specified with --connect-timeout and although the default connect timeout value seems to be 5 minutes according to the DEFAULT_CONNECT_TIMEOUT macro in lib/connect.h . A default value for --max-time doesn't seem to exist, making curl wait forever for a response if the initial connect succeeds. What to use? You are probably interested in the latter option, --max-time . For your case set it to 900 (15 minutes). Specifying option --connect-timeout to something like 60 (one minute) might also be a good idea. Otherwise curl will try to connect again and again, apparently using some backoff algorithm. | {
"source": [
"https://unix.stackexchange.com/questions/94604",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/41400/"
]
} |
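Put together for the 15-minute image case from the question, a sketch (the URL is the question's placeholder): curl --connect-timeout 60 --max-time 900 -o image.jpg http://localhost/testdir/image.jpg
# exit status 28 means the transfer hit one of the timeouts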
94,626 | As a privileged user I am trying to set the sudo password to one other than the one used at login. I have done some research but have not found an answer. Does sudo support that kind of configuration? If you ever lose your password, you lose everything: someone could log in and promote himself to root with the same password. sudo has an option to ask for the root password instead of the invoking user's password ( rootpw ), but sharing the root password is definitely not an option; that is why we set up sudo . I did configure 2FA in the past, and it worked great, but it also defeats the automation purpose. For example, if you want to execute a privileged command across a dozen servers with an expect script, adding 2FA doesn't allow you to do that. The closest solution I have found is to only allow SSH private keys and set a passphrase on a key which differs from the sudo (login) password. Still, it is not convenient, because in an emergency you can't log in from a PC that doesn't have that key. | If you want to ask for the root password, as opposed to the user's password, there are options that you can put in /etc/sudoers .
#%PAM-1.0
@include common-auth
@include common-account
@include common-session-noninteractive In other words, by default, it authenticates like everything else on the system. You can change that @include common-auth line, and have PAM (and thus sudo) use an alternate password source. The non-commented-out lines in common-auth look something like (by default, this will be different if you're using e.g., LDAP): auth [success=1 default=ignore] pam_unix.so nullok_secure
auth requisite pam_deny.so
auth required pam_permit.so You could use e.g., pam_userdb.so instead of pam_unix.so , and store your alternate passwords in a Berkeley DB database. example I created the directory /var/local/sudopass , owner/group root:shadow , mode 2750 . Inside it, I went ahead and created a password database file using db5.1_load (which is the version of Berkeley DB in use on Debian Wheezy): # umask 0027
# db5.1_load -h /var/local/sudopass -t hash -T passwd.db
anthony
WMaEFvCFEFplI ^D That hash was generated with mkpasswd -m des , using the password "password". Very highly secure! (Unfortunately, pam_userdb seems to not support anything better than the ancient crypt(3) hashing). Now, edit /etc/pam.d/sudo and remove the @include common-auth line, and instead put this in place: auth [success=1 default=ignore] pam_userdb.so crypt=crypt db=/var/local/sudopass/passwd
auth requisite pam_deny.so
auth required pam_permit.so Note that pam_userdb adds a .db extension to the passed database, so you must leave the .db off. According to dannysauer in a comment , you may need to make the same edit to /etc/pam.d/sudo-i as well. Now, to sudo, I must use password instead of my real login password: anthony@sudotest:~$ sudo -K
anthony@sudotest:~$ sudo echo -e '\nit worked'
[sudo] password for anthony: p a s s w o r d RETURN it worked | {
"source": [
"https://unix.stackexchange.com/questions/94626",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/33957/"
]
} |
94,679 | I've seen the install command used in a lot of Makefiles, and its existence and usage are kind of confusing. From the manpages, it seems like a knockoff of cp with less features, but I assume it wouldn't be used unless it had some advantage over cp . What's the deal? | install not only copies files but also changes its ownership and permissions and optionally removes debugging symbols from executables. It combines cp with chown , chmod and strip . It's a convenient higher-level tool to that accomplishes a common sequence of elementary tasks. An advantage of install over cp for installing executables is that if the target already exists, it removes the target file and creates a new one. This gets rid of any current properties such as access control lists and capabilities, which can be seen both as an upside and as a downside. When updating executables, if there are running instances of this executable, they keep running unaffected. In contrast, cp updates the file in place if there is one. On most Unix variants, this fails with the error EBUSY¹ if the target is a running executable; on some it can cause the target to crash because it loads code sections dynamically and modifying the file causes nonsensical code to be loaded. install is a BSD command (added in 4.2BSD , i.e. in the early 1980s). It has not been adopted by POSIX. ¹ “Text file busy”. In this context, “text file” means “binary executable file”, for obscure historical reasons . | {
"source": [
"https://unix.stackexchange.com/questions/94679",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/49045/"
]
} |
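As an illustration of that "cp plus chown/chmod/strip" equivalence, a hedged sketch; the path, owner and mode are made-up example values: # one install invocation ...
install -o root -g root -m 0755 -s myprog /usr/local/bin/myprog
# ... roughly replaces this sequence
cp myprog /usr/local/bin/myprog
chown root:root /usr/local/bin/myprog
chmod 0755 /usr/local/bin/myprog
strip /usr/local/bin/myprog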
94,714 | Problem When copying files with cp -H or cp -L , I get the same results: $ ls -l fileA
fileA -> fileB
$ cp fileA somewhere/ -H
$ ls -l somewhere/
fileA # fileA is a copy of fileB, only renamed, with same properties! This answer here describes both options as similar UNLESS used in combination with -R . Not for me. Soft- as hardlinked files become renamed copies of the files they point to at the source. Question : What is the proper use of cp -H and cp -L ? Is this the expected behavior? My attempt to solve : man cp tells me quite the same for both options, but info cp 's wording makes it even more confusing for me. Maybe one can help me break this down a bit: -H If a command line argument specifies a symbolic link, then copy the file it points to rather than the symbolic link itself. However, copy (preserving its nature) any symbolic link that is encountered via recursive traversal. This sounds like a contradiction to me: I guess that » a symbolic link's nature « is that it points somewhere… -L, --dereference Follow symbolic links when copying from them. With this option, cp cannot create a symbolic link. For example, a symlink (to regular file) in the source tree will be copied to a regular file in the destination tree. I do know that a symlink isn't a regular file, but… I admit I'm overchallenged with this explanation here. | With symlinks, tools have two things they can do: Treat the symlink as a symlink ("preserving its nature"), or Treat the symlink as the type of file that it points to. Saying that -H "preserves its nature" is not a contradiction. Consider the alternative. If you use -L , any symlinks cp finds will be opened, and their contents copied to the target file name. So the source was a symlink, but its copy is not a symlink. So it "lost its nature as a symlink". Consider $ mkdir subdir
$ echo "some contents" > subdir/file
$ ln -s file subdir/link
# definition of "list", the abbreviated ls -l output used below
$ list() { ls -l "$@" | \
awk '$0 !~ /^total/ { printf "%s %s\t%s %s %s\n", $1, $5, $9, $10, $11 }' ; }
$ list subdir
-rw-rw-r-- 14 file
lrwxrwxrwx 4 link -> file
$ cp -rH subdir subdir-with-H
$ list subdir-with-H
-rw-rw-r-- 14 file
lrwxrwxrwx 4 link -> file
$ cp -rL subdir subdir-with-L
$ list subdir-with-L
-rw-rw-r-- 14 file
-rw-rw-r-- 14 link | {
"source": [
"https://unix.stackexchange.com/questions/94714",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/36242/"
]
} |
94,719 | I would like to PXE boot from a TFTP server that is not on my local network. The server is running in a cloud VM. Is it possible to specify the remote server's IP without utilizing DHCP? If not, what would the simplest pass-through method be to proxy requests via another working PC on the local subnet? | With symlinks, tools have two things they can do: Treat the symlink as a symlink ("preserving its nature"), or Treat the symlink as the type of file that it points to. Saying that -H "preserves its nature" is not a contradiction. Consider the alternative. If you use -L , any symlinks cp finds will be opened, and their contents copied to the target file name. So the source was a symlink, but its copy is not a symlink. So it "lost its nature as a symlink". Consider $ mkdir subdir
$ echo "some contents" > subdir/file
$ ln -s file subdir/link
# definition of "list", the abbreviated ls -l output used below
$ list() { ls -l "$@" | \
awk '$0 !~ /^total/ { printf "%s %s\t%s %s %s\n", $1, $5, $9, $10, $11 }' ; }
$ list subdir
-rw-rw-r-- 14 file
lrwxrwxrwx 4 link -> file
$ cp -rH subdir subdir-with-H
$ list subdir-with-H
-rw-rw-r-- 14 file
lrwxrwxrwx 4 link -> file
$ cp -rL subdir subdir-with-L
$ list subdir-with-L
-rw-rw-r-- 14 file
-rw-rw-r-- 14 link | {
"source": [
"https://unix.stackexchange.com/questions/94719",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/26076/"
]
} |
94,775 | What can I type at my shell (which happens to be bash ) that will list all the commands that are recognized? Also, does this differ by shell? Or do all shells just have a "directory" of commands they recognize? Secondly, different question, but how can I override any of those? In other words how can I write my own view command to replace the one existing on my Ubuntu system, which appears to just load vim . | You can use compgen compgen -c # will list all the commands you could run. FYI: compgen -a # will list all the aliases you could run.
compgen -b # will list all the built-ins you could run.
compgen -k # will list all the keywords you could run.
compgen -A function # will list all the functions you could run.
compgen -A function -abck # will list all the above in one go. | {
"source": [
"https://unix.stackexchange.com/questions/94775",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/11258/"
]
} |
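For the second half of that question (overriding an existing command such as view), a shell function defined in ~/.bashrc shadows whatever lives in $PATH; a sketch, and the vim -R body is only an example of what the replacement might do: compgen -c | sort -u | grep '^view'     # check which commands named view* are known
view() { vim -R -- "$@"; }              # functions take precedence over $PATH lookups
type view                               # confirms which "view" will actually run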
95,842 | I'm writing a device driver that prints error messages into the kernel ring buffer ( dmesg output).
I want to see the output of dmesg as it changes. How can I do this? | Relatively recent dmesg versions provide a follow option ( -w , --follow ) which works analogously to tail -f . Thus, just use the following command: $ dmesg -wH
"source": [
"https://unix.stackexchange.com/questions/95842",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/13050/"
]
} |
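On systems whose dmesg predates -w/--follow, a crude but common fallback is polling or tailing the kernel log; a sketch, and the log file path varies by distribution: watch -n 1 'dmesg | tail -n 25'         # re-run dmesg every second, show the last 25 lines
tail -f /var/log/kern.log               # Debian/Ubuntu path; often /var/log/messages elsewhere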
95,884 | Is this possible? I read somewhere that the following command would do it: sed -e [command] [file] but it appeared to do the same thing as just sed [command] [file] (it did not save the changes). Is there any way to do this using sed? | I think you are looking for -i : -i[SUFFIX], --in-place[=SUFFIX]
edit files in place (makes backup if SUFFIX supplied) For example: $ cat foo.txt
hello world
$ sed -i 's/o/X/g' foo.txt
$ cat foo.txt
hellX wXrld If you provide a suffix, it will create a backup file: $ ls
foo.txt
$ sed -i.bak 's/o/X/g' foo.txt
$ ls
foo.txt foo.txt.bak The input file is modified and a backup containing the original file data is created. Also note that this is for GNU sed , there are slight differences in format between different sed implementations. | {
"source": [
"https://unix.stackexchange.com/questions/95884",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/49088/"
]
} |
95,897 | I am wondering why by default my directory /home/<user>/ has permissions set to 755 . This allows other users to enter into directories and read files in my home. Is there any legitimate reason for this ? Can I set the permissions to 700 for my home and all sub directories , for example: chmod -R o-xw /home/<user>/
chmod -R g-xw /home/<user>/ without breaking anything ? Also, is it possible to set the permissions on my home, so that all new files created will have 600 and directories 700 ? | If your home directory is private, then no one else can access any of your files. In order to access a file, a process needs to have execute permission to all the directories on the path down the tree from the root directory. For example, to allow other users to read /home/martin/public/readme , the directories / , /home , /home/martin and /home/martin/public all need to have the permissions d??x??x??x (it can be drwxr-xr-x , or drwx--x--x or some other combination), and additionally the file readme must be publicly readable ( -r??r??r?? ). It is common to have home directories with mode drwxr-xr-x (755) or at least drwx--x--x (711). Mode 711 (only execute permission) on a directory allows others to access a file in that directory if they know its name, but not to list the content of the directory. Under that home directory, create public and private subdirectories as desired. If you never, ever want other people to read any of your files, you can make your home directory drwx------ (700). If you do that, you don't need to protect your files individually. This won't break anything other than the ability of other people to read your file. One common thing that may break, because it's an instance of other people reading your files, is if you have a directory such as ~/public_html or ~/www which contains your web page. Depending on the web server's configuration, this directory may need to be world-readable. You can change the default permissions for the files you create by setting the umask value in your .profile . The umask is the complement of the maximal permissions of a file. Common values include 022 (writable only by the owner, readable and executable by everyone), 077 (access only by the owner), and 002 (like 022, but also group-writable). These are maximal permissions: applications can set more restrictive permissions, for example most files end up non-executable because the application that created them didn't set the execute permission bits when creating the file. | {
"source": [
"https://unix.stackexchange.com/questions/95897",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/43007/"
]
} |
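The concrete commands implied by the answer, as a sketch; umask 077 matches the "access only by the owner" value described above: chmod 700 ~                             # make the home directory itself private
echo 'umask 077' >> ~/.profile          # new files default to 600, new directories to 700 Note that the umask only affects files created from now on; it does not change the permissions of existing files.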
96,106 | To perform a scan for bluetooth LE devices hcitool apparently requires root privileges. For normal users the output is following: $ hcitool lescan
Set scan parameters failed: Operation not permitted Why does hcitool need root privileges for a LE scan? Is it possible to somehow perform a LE scan as non-root? | The Bluetooth protocol stack for Linux checks two capabilities. Capabilities are a not yet common system to manage some privileges. They could be handled by a PAM module or via extended file attributes. (see https://elixir.bootlin.com/linux/v5.8.10/source/net/bluetooth/hci_sock.c#L1307 ) $> sudo apt-get install libcap2-bin installs linux capabilities manipulation tools. $> sudo setcap 'cap_net_raw,cap_net_admin+eip' `which hcitool` sets the missing capabilities on the executable quite like the setuid bit. $> getcap !$
getcap `which hcitool`
/usr/bin/hcitool = cap_net_admin,cap_net_raw+eip so we are good to go: $>hcitool -i hci0 lescan
Set scan parameters failed: Input/output error Yeay, your BT adapter does not support BLE $>hcitool -i hci1 lescan
LE Scan... This one does, go on and press a button on your device. | {
"source": [
"https://unix.stackexchange.com/questions/96106",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/49250/"
]
} |
96,226 | How can I delete the first line of a file and keep the changes? I tried this but it erases the whole content of the file. $ sed 1d file.txt > file.txt | The reason file.txt is empty after that command is the order in which the shell does things. The first thing that happens with that line is the redirection. The file "file.txt" is opened and truncated to 0 bytes. After that the sed command runs, but at that point the file is already empty. There are a few options; most involve writing to a temporary file. sed '1d' file.txt > tmpfile; mv tmpfile file.txt # POSIX
sed -i '1d' file.txt # GNU sed only, creates a temporary file
perl -ip -e '$_ = undef if $. == 1' file.txt # also creates a temporary file | {
"source": [
"https://unix.stackexchange.com/questions/96226",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/49295/"
]
} |
96,241 | I would like to know how to search for particular text on the terminal. If I cat log files, I would like to find certain words like job or summary so that I don't have to read through the entire log file. I know there has been a similar post about this. The answer from that post is Ctrl + A + [ <text> which doesn't seem to work for me. When I press that I get the message No bracket in top line (press Return) or, if I press those keys together, I get the message ESC . Is there a way to do this with PuTTY? Alternatively, is there a generic way to search for text in the output of commands? | The reason file.txt is empty after that command is the order in which the shell does things. The first thing that happens with that line is the redirection. The file "file.txt" is opened and truncated to 0 bytes. After that the sed command runs, but at that point the file is already empty. There are a few options; most involve writing to a temporary file. sed '1d' file.txt > tmpfile; mv tmpfile file.txt # POSIX
sed -i '1d' file.txt # GNU sed only, creates a temporary file
perl -ip -e '$_ = undef if $. == 1' file.txt # also creates a temporary file | {
"source": [
"https://unix.stackexchange.com/questions/96241",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/47753/"
]
} |
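For the generic part of that question (finding words such as job or summary in command output or a log without reading all of it), piping through grep or paging with less is the usual approach; a sketch, with the log file name being a placeholder: grep -nE 'job|summary' /var/log/app.log   # print only matching lines, with line numbers
some_command | grep -i summary            # filter the output of any command
less /var/log/app.log                     # inside less, type /job and Enter; n jumps to the next match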
96,305 | When I run echo $SHELL the output says /bin/tcsh which means that I am running a tcsh shell.
But for example when I issue the following command alias emacs 'emacs -nw' I get the following error: bash: alias: emacs: not found
bash: alias: emacs -nw: not found and when I issue alias emacs="emacs -nw" it runs fine! This is confusing since I am running tcsh but the commands are interpreted by bash . What could be the reason? | $SHELL is not necessarily your current shell, it is the default login shell . To check the shell you are using, try ps $$ This should work on most recent Unix/Linux with a ps that supports the BSD syntax. Otherwise, this is the portable (POSIX) way ps -p $$ That should return something like this if you are running tcsh : 8773 pts/10 00:00:00 tcsh If you want to have tcsh be your default shell, use chsh to set it. | {
"source": [
"https://unix.stackexchange.com/questions/96305",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/28650/"
]
} |
96,380 | I need to make a backup of a file, and I would like to have a timestamp as part of the name to make it easier to differentiate. How would you inject the current date into a copy command? [root@mongo-test3 ~]# cp foo.txt {,.backup.`date`}
cp: target `2013}' is not a directory
[root@mongo-test3 ~]# cp foo.txt {,.backup. $((date)) }
cp: target `}' is not a directory
[root@mongo-test3 ~]# cp foo.txt foo.backup.`date`
cp: target `2013' is not a directory | This isn't working because the command date returns a string with spaces in it. $ date
Wed Oct 16 19:20:51 EDT 2013 If you truly want filenames like that you'll need to wrap that string in quotes. $ touch "foo.backup.$(date)"
$ ll foo*
-rw-rw-r-- 1 saml saml 0 Oct 16 19:22 foo.backup.Wed Oct 16 19:22:29 EDT 2013 You're probably thinking of a different string to be appended would be my guess though. I usually use something like this: $ touch "foo.backup.$(date +%F_%R)"
$ ll foo*
-rw-rw-r-- 1 saml saml 0 Oct 16 19:25 foo.backup.2013-10-16_19:25 See the man page for date for more formatting codes around the output for the date & time. Additional formats If you want to take full control if you consult the man page you can do things like this: $ date +"%Y%m%d"
20131016
$ date +"%Y-%m-%d"
2013-10-16
$ date +"%Y%m%d_%H%M%S"
20131016_193655 NOTE: You can use date -I or date --iso-8601 which will produce identical output to date +"%Y-%m-%d . This switch also has the ability to take an argument to indicate various time formats : $ date -I=?
date: invalid argument ‘=?’ for ‘--iso-8601’
Valid arguments are:
- ‘hours’
- ‘minutes’
- ‘date’
- ‘seconds’
- ‘ns’
Try 'date --help' for more information. Examples: $ date -Ihours
2019-10-25T01+0000
$ date -Iminutes
2019-10-25T01:21+0000
$ date -Iseconds
2019-10-25T01:21:33+0000 | {
"source": [
"https://unix.stackexchange.com/questions/96380",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/39263/"
]
} |
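Applied back to the cp attempts from the question, a sketch: cp foo.txt "foo.backup.$(date +%F_%H%M%S)"    # e.g. foo.backup.2013-10-16_192529
cp foo.txt{,.backup.$(date +%Y%m%d)}          # brace-expansion variant; safe because %Y%m%d contains no spaces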
96,385 | What I am after is almost exactly the same as can be found here, but I want the
format "line number, separator, filename, newline" in the results, thus displaying
the line number at the beginning of the line, not after the filename, and without
displaying the line containing the match. The reason why this format is preferable is that (a) the filename might be long and cryptic and contain the separator which the tool uses to separate the filename from the line number, making it incredibly difficult to use awk to achieve this, since the pattern inside the file might also contain the same separator. Also, line numbers at the beginning of the line will be aligned better than if they appear after the filename. And the other reason for this desired format is that (b) the lines matching the pattern may be too long and mess up the one line per row property on the output displayed on standard out (and viewing the output on standard out is better than having to save to a file and use a tool like
vi to view one line per row in the output file). How can I recursively search directories for a pattern and just print out file names and line numbers Now that I've set out the requirement, consider this: Ack is not installed on the Linux host I'm using, so I cannot use it. If I do the following, the shell executes find . and substitutes 'find .`
with a list of absolute paths starting at the current working directory and
proceeding downwards recursively: grep -n PATTERN $(find .) then the -n prints the line number, but not where I want it. Also, for some reason I do not understand, if a directory name includes the PATTERN, then grep matches it in addition to the regular files that contain the pattern. This is not what I want,
so I use: grep -n PATTERN $(find . -type f) I also wanted to change this command so that the output of find is passed on to
grep dynamically. Rather than having to build the entire list of absolute paths
first and then pass the bulk of them to grep, have find pass each line to grep
as it builds the list, so I tried: find . -exec grep -n PATTERN '{}' \; which seems like the right syntax according to the man page but when I issue
this command the Bash shell executes about 100 times slower, so this is not
the way to go. In view of what I described, how can I execute something similar to this command
and obtain the desired format. I have already listed the problems associated
with the related post. | This isn't working because the command date returns a string with spaces in it. $ date
Wed Oct 16 19:20:51 EDT 2013 If you truly want filenames like that you'll need to wrap that string in quotes. $ touch "foo.backup.$(date)"
$ ll foo*
-rw-rw-r-- 1 saml saml 0 Oct 16 19:22 foo.backup.Wed Oct 16 19:22:29 EDT 2013 You're probably thinking of a different string to be appended would be my guess though. I usually use something like this: $ touch "foo.backup.$(date +%F_%R)"
$ ll foo*
-rw-rw-r-- 1 saml saml 0 Oct 16 19:25 foo.backup.2013-10-16_19:25 See the man page for date for more formatting codes around the output for the date & time. Additional formats If you want to take full control if you consult the man page you can do things like this: $ date +"%Y%m%d"
20131016
$ date +"%Y-%m-%d"
2013-10-16
$ date +"%Y%m%d_%H%M%S"
20131016_193655 NOTE: You can use date -I or date --iso-8601 which will produce identical output to date +"%Y-%m-%d . This switch also has the ability to take an argument to indicate various time formats : $ date -I=?
date: invalid argument ‘=?’ for ‘--iso-8601’
Valid arguments are:
- ‘hours’
- ‘minutes’
- ‘date’
- ‘seconds’
- ‘ns’
Try 'date --help' for more information. Examples: $ date -Ihours
2019-10-25T01+0000
$ date -Iminutes
2019-10-25T01:21+0000
$ date -Iseconds
2019-10-25T01:21:33+0000 | {
"source": [
"https://unix.stackexchange.com/questions/96385",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/48393/"
]
} |
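For the question above, two small changes usually address the speed and formatting complaints; a sketch, and note the awk rearrangement assumes the file names themselves contain no ':' (the question points out they might, in which case a more careful parser is needed): find . -type f -exec grep -nH PATTERN {} +     # '+' batches many files per grep call, far faster than \;
grep -rn PATTERN . | awk -F: '{ printf "%6d  %s\n", $2, $1 }'   # line number first, then filename, no matched line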