source_id | question | response | metadata |
---|---|---|---|
86,980 | I have a problem installing OpenOCD: checking for ftd2xx.h... yes
checking for library containing FT_GetLibraryVersion... no Said library is installed and tested. In hopes of seeing where configure is looking for the library, I passed the verbose command switch. No change at all! Is there a way to check where and under what name configure is looking for said library? | ./configure usually creates a config.log file. It should contain the commands executed to check for the library (a short grep sketch for inspecting it follows this entry). | {
"source": [
"https://unix.stackexchange.com/questions/86980",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/20506/"
]
} |
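As a follow-up to the answer above: to see exactly which compile/link command configure ran for the failed check, search config.log around the symbol name. This is a minimal sketch; the context-line counts are arbitrary and the library name is simply the one from this question: grep -n -B 2 -A 15 'FT_GetLibraryVersion' config.log
grep -n 'ftd2xx' config.log The first command shows the test program and the linker invocation configure recorded (including the -l flags and library search paths it actually used); the second lists every place configure mentions the library by name.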
87,011 | I wanted to have a go at creating my very own Linux Distribution. Could you suggest some nice and easy-to-follow tutorials (preferably text based and not videos).
I have heard something about Arch Linux but I don't know how to go from there. What do I need? | Take a look at Linux From Scratch (LFS); they have a tutorial which teaches you how to build your own Linux system. Once you understand that, you can select a package manager and a set of packages, hence creating your own distro. To make the answer a bit more complete: Arch Linux is a Linux distribution which uses almost 100% vanilla packages. This means almost no patching is done by the distro maintainers. Also, it does not have a default set of packages as the *buntu distros do. These characteristics make Arch a very customizable distro. It is your "own distro" in the sense that it is your own setup, but not as in LFS, where it is your own kernel, modules, packages... | {
"source": [
"https://unix.stackexchange.com/questions/87011",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/38241/"
]
} |
87,130 | Some situations call for manually installing a local package using dpkg -i <packagename> . Sometimes it may be further useful to leverage the functionality of apt with that package so the question is: How do you quickly create a local repository for random packages using a Debian based linux distribution - like Xubuntu 13.04/Ubuntu? | This should be distinguished from the situation where you're trying to replicate a full package tree from an official repository and fine tuning sources priority . Random packages mean virtual packages, packages which are compiled locally or copied in a piecemeal fashion for testing purposes. Here's a simple setup based on now obsolete documentation . First, make a directory to host the packages: mkdir <packagedir> Then move your .deb package files there. Execute this command from the directory above the one we just created (make sure permissions allow this!): dpkg-scanpackages packagedir | gzip > packagedir/Packages.gz Now create a file with extension .list in /etc/apt/sources.list.d/ with the contents: deb [trusted=yes] file:///path_to_dir_above_packagedir packagedir/ and update the apt database: apt-get update At this point the packages in our local repository can be installed like any other package using apt-get install <packagename> . When new packages are added to the local repository, the prescribed dpkg-scanpackages command must be issued again to update the Packages.gz file and apt must be updated before the new packages are made available. Hopefully this can be useful for testing purposes. | {
"source": [
"https://unix.stackexchange.com/questions/87130",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/-1/"
]
} |
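Following up on the answer above, here is the whole local-repository recipe collected into one runnable sketch. The paths (~/localrepo and its packagedir subdirectory) and the .list file name are illustrative assumptions, not anything apt requires: mkdir -p ~/localrepo/packagedir
cp ./*.deb ~/localrepo/packagedir/          # drop your locally built or copied packages here
cd ~/localrepo
dpkg-scanpackages packagedir | gzip > packagedir/Packages.gz
echo "deb [trusted=yes] file://$HOME/localrepo packagedir/" | sudo tee /etc/apt/sources.list.d/local-packages.list
sudo apt-get update
sudo apt-get install <packagename> As in the answer, re-run the dpkg-scanpackages and apt-get update steps whenever a new .deb is added to the directory.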
87,182 | I was wondering, are there any differences between the Debian Standard and GNOME versions? Doesn't Debian use GNOME by default? | Debian Live Standard is Debian without a graphical user interface. Debian Live GNOME is Debian Standard with GNOME. | {
"source": [
"https://unix.stackexchange.com/questions/87182",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/39147/"
]
} |
87,183 | I need to create a filesystem with just one partition from nothing ( /dev/zero ).
I tried this sequence of commands: dd if=/dev/zero of=mountedImage.img bs=512 count=131072
fdisk mountedImage.img
n
p
2048
131072 Basically, I need to create a 64MB image file filled with zeroes. Then I use fdisk to add a new partition for the new filesystem (which should finally be FAT32), starting at sector 2048 and using all remaining sectors. losetup /dev/loop1 mountedImage.img
mkfs -t vfat /dev/loop1 But here I'm hitting problems. If I set up a loop device and format it using mkfs -t vfat , the partition table is overwritten and the filesystem (FAT32) is placed directly on the disk. I don't need the whole disk formatted with FAT32, I just need my primary partition to be so. Does anybody know how I can format only one partition of a raw disk image, not the whole image? | If on Linux, when loading the loop module, make sure you pass a max_part option to the module so that the loop devices are partitionable. Check the current value: cat /sys/module/loop/parameters/max_part If it's 0: modprobe -r loop # unload the module
modprobe loop max_part=31 To make this setting persistent, add the following line to /etc/modprobe.conf or to a file in /etc/modprobe.d if that directory exists on your system: options loop max_part=31 If modprobe -r loop fails because “Module loop is builtin”, you'll need to add loop.max_part=31 to your kernel command line and reboot. If your bootloader is Grub2, add it to the value of GRUB_CMDLINE_LINUX in /etc/default/grub . Now, you can create a partitionable loop device: truncate -s64M file # no need to fill it with zeros, just make it sparse
fdisk file # create partitions
losetup /dev/loop0 file
mkfs.vfat /dev/loop0p1 # for the first partition.
mount /dev/loop0p1 /mnt/ (note that you need a relatively recent version of Linux). | {
"source": [
"https://unix.stackexchange.com/questions/87183",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/30528/"
]
} |
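A hedged addition to the answer above: on reasonably recent util-linux, losetup can scan the image's partition table itself, which avoids the loop.max_part module option entirely. This is an alternative approach, not part of the original answer, and the loop device name is simply whatever losetup returns: truncate -s 64M file
fdisk file                                        # create the partition(s) as before
loopdev=$(sudo losetup --find --show -P file)     # -P (--partscan) asks the kernel to create /dev/loopXpY nodes
sudo mkfs.vfat "${loopdev}p1"
sudo mount "${loopdev}p1" /mnt/
sudo umount /mnt/ && sudo losetup -d "$loopdev"   # detach when done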
87,200 | I have a symlink with these permissions: lrwxrwxrwx 1 myuser myuser 38 Aug 18 00:36 npm -> ../lib/node_modules/npm/bin/npm-cli.js* The symlink is located in a .tar.gz archive. Now when I unpack the tar.gz archive using maven the symlink is no longer valid. I'm therefore trying to reconstruct the symlink. First I create the symlink using ln but how do I set the same permissions as the original symlink? | You can make a new symlink and move it to the location of the old link. ln -s <new_location> npm2
mv -f npm2 npm That will preserve the link ownership. Alternatively, you can use chown to set the link's ownership manually. chown -h myuser:myuser npm On most systems, symlink permissions don't matter. When using the symlink, the permissions of the components of the symlink's target will be checked. However, on some systems they do matter. MacOS requires read permission on the link for readlink , and NetBSD's symperm mount option forces link permissions checks on read and traversal. On those systems (and their relatives, including FreeBSD and OpenBSD) there is an equivalent -h option to chmod . chmod -h 777 npm | {
"source": [
"https://unix.stackexchange.com/questions/87200",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/11740/"
]
} |
87,300 | What do these terms mean exactly? partition volume drive On Windows, one may say drive C: or partition C: . On Linux I'm not sure what should be used for partitions because they don't have a name. | The term drive refers to a physical storage device such as a hard disk, solid-state disk, removable USB flash drive etc. In Unix-like operating systems, devices are represented by special file system objects called device nodes which are visible under the /dev directory. Storage devices are labeled under /dev according to the type of device followed by a letter signifying the order in which they were detected by the system. In Linux prior to kernel version 2.6.20 the prefix hd signified an IDE device, so for instance the device files /dev/hda , /dev/hdb and /dev/hdc corresponded to the first, second and third IDE device respectively. The prefix sd was originally used for SCSI devices, but is now used for all PATA and SATA devices, including devices on an IDE bus. If there are more than 26 such devices in the system, devices from the 27th onwards are labeled /dev/sdAa , /dev/sdAb and so on. A physical storage device can be divided into multiple logical storage units known as partitions . Each partition will show up under /dev as a separate device node. A number after the device letter signifies the number of the partition. For example, the device node files /dev/sda1 and /dev/sda2 refer to the first and second partition of the first PATA device. Note that on PCs using MBR partitioning , due to the limit of four primary partitions and the way extended partitions are handled, the partition numbering can slightly differ from the actual partition count. Other Unix-like systems may refer to disks and partitions in other ways. For example, FreeBSD uses /dev/adaX (where X is one or more digits) to refer to PATA disks and /dev/adaXpY (where X and Y are both one or more digits) to refer to partitions on PATA disks. The term volume in Linux is related to the Logical Volume Manager ( LVM ), which can be used to manage mass storage devices. A physical volume is a storage device or partition. A logical volume created by the LVM is a logical storage device which can span multiple physical volumes. | {
"source": [
"https://unix.stackexchange.com/questions/87300",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/-1/"
]
} |
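To make the drive / partition / volume distinction above concrete, here is a short hedged LVM sketch; the device name /dev/sdb1 and the volume names vg_data and lv_media are illustrative assumptions: lsblk                                    # shows drives (sda, sdb, ...) and their partitions (sda1, sda2, ...)
sudo pvcreate /dev/sdb1                  # register a partition as an LVM physical volume
sudo vgcreate vg_data /dev/sdb1          # group one or more physical volumes into a volume group
sudo lvcreate -L 20G -n lv_media vg_data # carve out a logical volume, which may span several physical volumes
sudo mkfs.ext4 /dev/vg_data/lv_media     # the logical volume is then used like any other block device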
87,405 | I have written a script that runs fine when executed locally: ./sysMole -time Aug 18 18 The arguments "-time" , "Aug" , "18" , and "18" are successfully passed on to the script. Now, this script is designed to be executed on a remote machine but, from a local directory on the local machine. Example: ssh root@remoteServer "bash -s" < /var/www/html/ops1/sysMole That also works fine. But the problem arises when I try to include those aforementioned arguments (-time Aug 18 18) , for example: ssh root@remoteServer "bash -s" < /var/www/html/ops1/sysMole -time Aug 18 18 After running that script I get the following error: bash: cannot set terminal process group (-1): Invalid argument
bash: no job control in this shell Please tell me what I'm doing wrong, this is greatly frustrating. | You were pretty close with your example. It works just fine when you use it with arguments such as these. Sample script: $ more ex.bash
#!/bin/bash
echo $1 $2 Example that works: $ ssh serverA "bash -s" < ./ex.bash "hi" "bye"
hi bye But it fails for these types of arguments: $ ssh serverA "bash -s" < ./ex.bash "--time" "bye"
bash: --: invalid option
... What's going on? The problem you're encountering is that the argument, -time , or --time in my example, is being interpreted as a switch to bash -s . You can stop bash from taking any of the remaining command line arguments for itself by terminating its option processing with the -- argument. Like this: $ ssh root@remoteServer "bash -s" -- < /var/www/html/ops1/sysMole -time Aug 18 18 Examples #1: $ ssh serverA "bash -s" -- < ./ex.bash "-time" "bye"
-time bye #2: $ ssh serverA "bash -s" -- < ./ex.bash "--time" "bye"
--time bye #3: $ ssh serverA "bash -s" -- < ./ex.bash --time "bye"
--time bye #4: $ ssh < ./ex.bash serverA "bash -s -- --time bye"
--time bye NOTE: Just to make it clear that wherever the redirection appears on the command line makes no difference, because ssh calls a remote shell with the concatenation of its arguments anyway, quoting doesn't make much difference, except when you need quoting on the remote shell like in example #4: $ ssh < ./ex.bash serverA "bash -s -- '<--time bye>' '<end>'"
<--time bye> <end> | {
"source": [
"https://unix.stackexchange.com/questions/87405",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/44508/"
]
} |
87,468 | Is there an easy way to programmatically extract IP address, without tedious parsing of ifconfig ? I would not mind simple command output processing using sed to do it but not processing multiline files from /etc some place. What I am trying to do is modify my .bashrc to display the IP address of the host in the greeting message. I am using Ubuntu 12.04 but decided to post here instead of the Ubuntu forum because I consider this to be not distro-specific. | NOTES NIC device handles The examples below assume that the network interface is a wireless card named wlan0 . Adjust this bit in the examples for your particular situation. For example if it's a wired NIC card, then it's likely eth0 . IPv4 - (Internet Protocol version 4) Also these examples are returning the IPv4 address. the "dotted quad" that most people identify as their "IP Address". For example: inet addr:192.168.1.20 Bcast:192.168.1.255 Mask:255.255.255.0 IPv6 - (Internet Protocol version 6) If your system is configured to support IPv6 you'll see both the "dotted quad" as well as the IPv6 IP addresses in the ifconfig output. For example: inet6 addr: fe80::226:c7ff:fe85:a720/64 Scope:Link The commands below explicitly ignore this, but could be adapted quite easily to grab this information instead. Solutions (using ifconfig ) There are many ways to do this. You could for example use this awk script to parse out the IP address of your wireless LAN NIC (wlan0): $ ifconfig wlan0 | grep "inet " | awk -F'[: ]+' '{ print $4 }'
192.168.1.20 You can do this and make it more compact: $ ifconfig wlan0 | awk '/t addr:/{gsub(/.*:/,"",$2);print$2}'
192.168.1.20 You could also do it using perl : $ ifconfig wlan0 | perl -nle'/t addr:(\S+)/&&print$1'
192.168.1.20 The Perl example is about as compact as it can get. There are countless other ways to do this, these are just a couple of examples to get you started. EDIT #1 Some additional notes and comments. @StephaneChazelas demonstrated that there is an even more compact version using grep : $ ifconfig wlan0|grep -Po 't addr:\K[\d.]+'
192.168.1.20 This solution makes use of grep 's ability in newer versions to use PCRE (Perl regular expressions), along with its -o switch to return just what matches the regular expression. Solutions (using ip ) As was also mentioned in the comments, ifconfig can be troublesome to use on systems that have multiple IP addresses assigned to a network device, given it only returns the first one. So it's better to use the command ip in these situations. For example: $ ip addr show wlan0 | grep -Po 'inet \K[\d.]+'
192.168.1.20 You can also tell ip to only show you IPv4 information for a given network interface. In this case we're looking only at the interface named wlan0 : $ ip -f inet addr show wlan0 | grep -Po 'inet \K[\d.]+'
192.168.1.20 References Golfing the Extraction of IP Addresses from ifconfig ip man page ifconfig man page PCRE man page grep man page | {
"source": [
"https://unix.stackexchange.com/questions/87468",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/23944/"
]
} |
87,543 | Currently I'm running FreeBSD 9.1 and the default gateway is already configured in rc.conf . rc.conf : defaultrouter = "10.0.0.1" But now I want to change the default gateway without rebooting the system; is this possible? | route del default
route add default 1.2.3.4 Where 1.2.3.4 is the new gateway. You can even concatenate them onto the same line with a ; Edit: This is FreeBSD, not Linux. The command is different. Please do not edit this Answer if you haven't read the Question carefully enough to determine the operating system being used. | {
"source": [
"https://unix.stackexchange.com/questions/87543",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/45602/"
]
} |
87,551 | When ltrace is used for tracing the system calls, I could see that fork() uses sys_clone() rather than sys_fork(). But I couldn't find the Linux source where it is defined. My program is: #include <stdio.h>
#include <unistd.h>
int main(void)
{
    int pid, i = 0, j = 0;
    pid = fork();
    if (pid == 0)
        printf("\nI am child\n");
    else
        printf("\nI am parent\n");
    return 0;
} And ltrace output is: SYS_brk(NULL) = 0x019d0000
SYS_access("/etc/ld.so.nohwcap", 00) = -2
SYS_mmap(0, 8192, 3, 34, 0xffffffff) = 0x7fe3cf84f000
SYS_access("/etc/ld.so.preload", 04) = -2
SYS_open("/etc/ld.so.cache", 0, 01) = 3
SYS_fstat(3, 0x7fff47007890) = 0
SYS_mmap(0, 103967, 1, 2, 3) = 0x7fe3cf835000
SYS_close(3) = 0
SYS_access("/etc/ld.so.nohwcap", 00) = -2
SYS_open("/lib/x86_64-linux-gnu/libc.so.6", 0, 00) = 3
SYS_read(3, "\177ELF\002\001\001", 832) = 832
SYS_fstat(3, 0x7fff470078e0) = 0
SYS_mmap(0, 0x389858, 5, 2050, 3) = 0x7fe3cf2a8000
SYS_mprotect(0x7fe3cf428000, 2097152, 0) = 0
SYS_mmap(0x7fe3cf628000, 20480, 3, 2066, 3) = 0x7fe3cf628000
SYS_mmap(0x7fe3cf62d000, 18520, 3, 50, 0xffffffff) = 0x7fe3cf62d000
SYS_close(3) = 0
SYS_mmap(0, 4096, 3, 34, 0xffffffff) = 0x7fe3cf834000
SYS_mmap(0, 4096, 3, 34, 0xffffffff) = 0x7fe3cf833000
SYS_mmap(0, 4096, 3, 34, 0xffffffff) = 0x7fe3cf832000
SYS_arch_prctl(4098, 0x7fe3cf833700, 0x7fe3cf832000, 34, 0xffffffff) = 0
SYS_mprotect(0x7fe3cf628000, 16384, 1) = 0
SYS_mprotect(0x7fe3cf851000, 4096, 1) = 0
SYS_munmap(0x7fe3cf835000, 103967) = 0
__libc_start_main(0x40054c, 1, 0x7fff47008298, 0x4005a0, 0x400590 <unfinished ...>
fork( <unfinished ...>
SYS_clone(0x1200011, 0, 0, 0x7fe3cf8339d0, 0) = 5967
<... fork resumed> ) = 5967
puts("\nI am parent" <unfinished ...>
SYS_fstat(1, 0x7fff47008060) = 0
SYS_mmap(0, 4096, 3, 34, 0xffffffff
) = 0x7fe3cf84e000
I am child
SYS_write(1, "\n", 1
) = 1
SYS_write(1, "I am parent\n", 12) = -512
--- SIGCHLD (Child exited) ---
SYS_write(1, "I am parent\n", 12I am parent
) = 12
<... puts resumed> ) = 13
SYS_exit_group(13 <no return ...>
+++ exited (status 13) +++ | The fork() and vfork() wrappers in glibc are implemented via the clone() system call. To better understand the relationship between fork() and clone() , we must consider the relationship between processes and threads in Linux. Traditionally, fork() would duplicate all the resources owned by the parent process and assign the copy to the child process. This approach incurs considerable overhead, which all might be for nothing if the child immediately calls exec() . In Linux, fork() utilizes copy-on-write pages to delay or altogether avoid copying the data that can be shared between the parent and child processes. Thus, the only overhead that is incurred during a normal fork() is the copying of the parent's page tables and the assignment of a unique process descriptor struct, task_struct , for the child. Linux also takes an exceptional approach to threads. In Linux, threads are merely ordinary processes which happen to share some resources with other processes. This is a radically different approach to threads compared to other operating systems such as Windows or Solaris, where processes and threads are entirely different kinds of beasts. In Linux, each thread has an ordinary task_struct of its own that just happens to be setup in such a way that it shares certain resources, such as an address space, with the parent process. The flags parameter of the clone() system call includes a set of flags which indicate which resources, if any, the parent and child processes should share. Processes and threads are both created via clone() , the only difference is the set of flags that is passed to clone() . A normal fork() could be implemented as: clone(SIGCHLD, 0); This creates a task which does not share any resources with its parent, and is set to send the SIGCHLD termination signal to the parent when it exits. In contrast, a task which shares the address space, filesystem resources, file descriptors and signal handlers with the parent, in other words a thread , could be created with: clone(CLONE_VM | CLONE_FS | CLONE_FILES | CLONE_SIGHAND, 0); vfork() in turn is implemented via a separate CLONE_VFORK flag, which will cause the parent process to sleep until the child process wakes it via a signal. The child will be the sole thread of execution in the parent's namespace, until it calls exec() or exits. The child is not allowed to write to the memory. The corresponding clone() call could be as follows: clone(CLONE_VFORK | CLONE_VM | SIGCHLD, 0) The implementation of sys_clone() is architecture specific, but the bulk of the work happens in kernel_clone() defined in kernel/fork.c . This function calls the static copy_process() , which creates a new process as a copy of the parent, but does not start it yet. copy_process() copies the registers, assigns a PID to the new task, and either duplicates or shares appropriate parts of the process environment as specified by the clone flags . When copy_process() returns, kernel_clone() will wake the newly created process and schedule it to run. References kernel/fork.c in Linux v5.19-rc5, 2022-07-03 . See line 2606 for kernel_clone() , and line 2727 onward for the definitions of the syscalls fork() , vfork() , clone() , and clone3() , which all more or less just wrap kernel_clone() . | {
"source": [
"https://unix.stackexchange.com/questions/87551",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/3539/"
]
} |
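The answer above describes the flag sets conceptually; the following is a small, hedged C sketch (not how glibc itself is written) that uses the glibc clone() wrapper to create a thread-like child sharing the parent's address space. The stack size and names are illustrative choices: #define _GNU_SOURCE
#include <sched.h>
#include <signal.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <sys/wait.h>

static int shared = 0;                 /* the child's write is visible here only because of CLONE_VM */

static int child_fn(void *arg)
{
    (void)arg;
    shared = 42;                       /* modifies the parent's memory, since the address space is shared */
    return 0;
}

int main(void)
{
    const size_t stack_size = 1024 * 1024;
    char *stack = malloc(stack_size);
    if (stack == NULL) { perror("malloc"); return 1; }
    /* Thread-like child: shares address space, filesystem info, file descriptors and signal handlers.
     * stack + stack_size is passed because the stack grows downward on most architectures. */
    pid_t pid = clone(child_fn, stack + stack_size,
                      CLONE_VM | CLONE_FS | CLONE_FILES | CLONE_SIGHAND | SIGCHLD, NULL);
    if (pid == -1) { perror("clone"); return 1; }
    waitpid(pid, NULL, 0);
    printf("shared = %d\n", shared);   /* prints 42 because memory is shared */
    free(stack);
    return 0;
} Compiled with a plain gcc invocation, this should print shared = 42; replacing the flag set with just SIGCHLD gives the child its own copy-on-write address space, like fork(), and the parent would then print 0.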
87,560 | This may be a silly question, but I ask it still. If I have declared a shebang #!/bin/bash in the beginning of my_shell_script.sh , do I always have to invoke this script using bash [my@comp]$bash my_shell_script.sh or can I use e.g. [my@comp]$sh my_shell_script.sh and have my script determine the running shell using the shebang? Is it the same with the ksh shell? I'm using AIX. | The shebang #! is a human-readable instance of a magic number consisting of the byte string 0x23 0x21 , which is used by the exec() family of functions to determine whether the file to be executed is a script or a binary. When the shebang is present, exec() will run the executable specified after the shebang instead. Note that this means that if you invoke a script by specifying the interpreter on the command line, as is done in both cases given in the question, exec() will execute the interpreter specified on the command line; it won't even look at the script. So, as others have noted, if you want exec() to invoke the interpreter specified on the shebang line, the script must have the executable bit set and be invoked as ./my_shell_script.sh . The behaviour is easy to demonstrate with the following script: #!/bin/ksh
readlink /proc/$$/exe Explanation: #!/bin/ksh defines ksh to be the interpreter. $$ holds the PID of the current process. /proc/pid/exe is a symlink to the executable of the process (at least on Linux; on AIX, /proc/$$/object/a.out is a link to the executable). readlink will output the value of the symbolic link. Example: Note : I'm demonstrating this on Ubuntu, where the default shell /bin/sh is a symlink to dash i.e. /bin/dash and /bin/ksh is a symlink to /etc/alternatives/ksh , which in turn is a symlink to /bin/pdksh . $ chmod +x getshell.sh
$ ./getshell.sh
/bin/pdksh
$ bash getshell.sh
/bin/bash
$ sh getshell.sh
/bin/dash | {
"source": [
"https://unix.stackexchange.com/questions/87560",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/27887/"
]
} |
87,605 | A command like mv foo* ~/bar/ produces this message in stderr if there are no files matching foo* . mv: cannot stat `foo*': No such file or directory However, in the script I'm working on that case would be completely fine, and I'd like to omit that message from our logs. Is there any nice way to tell mv to be quiet even if nothing was moved? | Are you looking for this? $ mv file dir/
mv: cannot stat ‘file’: No such file or directory
$ mv file dir/ 2>/dev/null
# <---- Silent -----> | {
"source": [
"https://unix.stackexchange.com/questions/87605",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/1506/"
]
} |
87,625 | Is Kernel space used when Kernel is executing on the behalf of the user program i.e. System Call? Or is it the address space for all the Kernel threads (for example scheduler)? If it is the first one, than does it mean that normal user program cannot have more than 3GB of memory (if the division is 3GB + 1GB)? Also, in that case how can kernel use High Memory, because to what virtual memory address will the pages from high memory be mapped to, as 1GB of kernel space will be logically mapped? | Is Kernel space used when Kernel is executing on the behalf of the user program i.e. System Call? Or is it the address space for all the Kernel threads (for example scheduler)? Yes and yes. Before we go any further, we should state this about memory. Memory get's divided into two distinct areas: The user space , which is a set of locations where normal user processes run (i.e everything other than the kernel). The role of the kernel is to manage applications running in this space from messing with each other, and the machine. The kernel space , which is the location where the code of the kernel is stored, and executes under. Processes running under the user space have access only to a limited part of memory, whereas the kernel has access to all of the memory. Processes running in user space also don't have access to the kernel space. User space processes can only access a small part of the kernel via an interface exposed by the kernel - the system calls . If a process performs a system call, a software interrupt is sent to the kernel, which then dispatches the appropriate interrupt handler and continues its work after the handler has finished. Kernel space code has the property to run in "kernel mode", which (in your typical desktop -x86- computer) is what you call code that executes under ring 0 . Typically in x86 architecture, there are 4 rings of protection . Ring 0 (kernel mode), Ring 1 (may be used by virtual machine hypervisors or drivers), Ring 2 (may be used by drivers, I am not so sure about that though). Ring 3 is what typical applications run under. It is the least privileged ring, and applications running on it have access to a subset of the processor's instructions. Ring 0 (kernel space) is the most privileged ring, and has access to all of the machine's instructions. For example to this, a "plain" application (like a browser) can not use x86 assembly instructions lgdt to load the global descriptor table or hlt to halt a processor. If it is the first one, than does it mean that normal user program cannot have more than 3GB of memory (if the division is 3GB + 1GB)? Also, in that case how can kernel use High Memory, because to what virtual memory address will the pages from high memory be mapped to, as 1GB of kernel space will be logically mapped? For an answer to this, please refer to the excellent answer by wag here | {
"source": [
"https://unix.stackexchange.com/questions/87625",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/45594/"
]
} |
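A tiny, hedged illustration of the boundary described above: an ordinary ring 3 program only reaches kernel space by trapping into the kernel through a system call. Here the same write is issued through the libc wrapper and through the raw syscall(2) interface: #include <string.h>
#include <sys/syscall.h>
#include <unistd.h>

int main(void)
{
    const char *msg = "hello from user space\n";
    /* The libc wrapper eventually executes the syscall instruction,
     * switching the CPU into kernel mode for the duration of the call. */
    write(STDOUT_FILENO, msg, strlen(msg));
    /* The same request made directly with the system call number. */
    syscall(SYS_write, STDOUT_FILENO, msg, strlen(msg));
    return 0;
}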
87,630 | I've run into a perplexing error that I'd like to understand better. The problem seems to require the presence of a "wrapper" shell function (as described below), so my immediate interest is to find out how to modify such a shell function to get rid of the error. (I give a more specific statement of my question at the end of the post.) The simplest code that I've come up with to reproduce this error is given in the following script. (This script is certainly artificial and silly, but the real-life situation in which the error first surfaced is a bit too complicated for a demonstration like this one.) # create an input file
cat <<EOF > demo.txt
a
b
c
EOF
# create a "wrapper shell function" for /usr/bin/join
my_join_fn () {
/usr/bin/join "$@"
}
cat <(my_join_fn <(cat demo.txt) <(cat demo.txt))
cat <(my_join_fn <(cat demo.txt) <(cat demo.txt)) | head -1
# create a "wrapper shell function" for /usr/local/bin/gjoin, a port of
# GNU's join function for OS X
my_gjoin_fn () {
/usr/local/bin/gjoin "$@"
}
cat <(my_gjoin_fn <(cat demo.txt) <(cat demo.txt))
cat <(my_gjoin_fn <(cat demo.txt) <(cat demo.txt)) | head -1
# show the version of zsh
$SHELL --version If one sources this script (under zsh ), it terminates successfully, and produces the following (correct) output: % source demo.sh
a
b
c
a
a
b
c
a
zsh 5.0.2 (x86_64-apple-darwin11.4.2) But if one then re-executes directly from the command line either one of the two lines in the script that end with | head -1 , one gets a bad file descriptor error: % cat <(my_join_fn <(cat demo.txt) <(cat demo.txt)) | head -1
join: /dev/fd/11: Bad file descriptor
% cat <(my_gjoin_fn <(cat demo.txt) <(cat demo.txt)) | head -1
/usr/local/bin/gjoin: /dev/fd/11: Bad file descriptor These are the only two lines in the script that produce an error when run directly on the command line. As indicated in the output of $SHELL --version , the results shown above were obtained under OS X, but I get similar results when I perform an analogous test under Linux: % cat <(my_join_fn <(cat demo.txt) <(cat demo.txt)) | head -1
/usr/bin/join: /proc/self/fd/11: No such file or directory
% $SHELL --version
zsh 4.3.10 (x86_64-unknown-linux-gnu) I have not been able to reproduce this error under bash (OS X or Linux). This leads me to suspect that the error is due to a bug in zsh . But, if so, it is an extremely arcane bug, and thus not likely to be fixed any time soon. Therefore, I'd like to find a workaround. My question is: How should I modify the definition of the wrapper shell function my_gjoin_fn so as to avoid this error? (The real-life counterpart for my_gjoin_fn is almost identical to the one given above, differing only in the inclusion of a flag in the invocation of gjoin : my_gjoin_fn () {
/usr/local/bin/gjoin -t$'\t' "$@"
} I use this wrapper shell function all the time , therefore I'd really like to "salvage" it.) EDIT: The error persists even if I replace the | head -1 at the end of the command with | head -10 , | cat , | tee /dev/null , | : , etc. E.g.: % cat <(my_join_fn <(cat demo.txt) <(cat demo.txt)) | cat
/usr/bin/join: /proc/self/fd/11: No such file or directory Also, adding ls -l /proc/self/fd , as suggested by msw, produces the following: % cat <(ls -l /proc/self/fd; my_join_fn <(cat demo.txt) <(cat demo.txt)) | cat
total 0
lrwx------ 1 jones jones 64 Aug 21 12:29 0 -> /dev/pts/18
l-wx------ 1 jones jones 64 Aug 21 12:29 1 -> pipe:[312539706]
lrwx------ 1 jones jones 64 Aug 21 12:29 2 -> /dev/pts/18
lr-x------ 1 jones jones 64 Aug 21 12:29 3 -> /proc/23849/fd
/usr/bin/join: /proc/self/fd/11: No such file or directory ...which doesn't tell me much, but may be more informative to others. FWIW, the output produced by the ls -l /proc/self/fd subcommand looks the same whether I run this under zsh or under bash . Also, FWIW, the output of ls -l /proc/self/fd when run by itself is % ls -l /proc/self/fd
total 0
lrwx------ 1 jones jones 64 Aug 21 12:32 0 -> /dev/pts/18
lrwx------ 1 jones jones 64 Aug 21 12:32 1 -> /dev/pts/18
lrwx------ 1 jones jones 64 Aug 21 12:32 2 -> /dev/pts/18
lr-x------ 1 jones jones 64 Aug 21 12:32 3 -> /proc/5246/fd | | {
"source": [
"https://unix.stackexchange.com/questions/87630",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/10618/"
]
} |
87,633 | Is there a way to tell whether a host is a physical one or a VM and which virtual container it is running out of (e.g. VirtualBox or VMWare)? I was wondering if that info may be in /etc some place. | | {
"source": [
"https://unix.stackexchange.com/questions/87633",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/23944/"
]
} |
87,687 | I'm using Thunar. What I want is to remove Networks since I don't use it, and add Lello below videos . I know that if you right-click on any folder like videos you can remove the shortcut, but I don't know how to add shortcuts. | Under the Thunar file manager, you can add to the shortcut pane by dragging items there, as shown in the Thunar documentation . | {
"source": [
"https://unix.stackexchange.com/questions/87687",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/15433/"
]
} |
87,697 | When I try unzip filename.zip it works. However, I need to unzip a series of zip files. Why are: find . -name "*.zip" -print0 | xargs -0 unzip or ls *.zip | xargs unzip not working? In both cases I get a "caution: filename not matched: " message. | You can issue the command: $ unzip '*.zip' Look here for reference . | {
"source": [
"https://unix.stackexchange.com/questions/87697",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/45623/"
]
} |
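A note expanding on the answer above: unzip expects a single archive followed by optional member names ( unzip archive.zip [member ...] ), so when xargs passes several .zip files in one invocation, every file after the first is treated as a member pattern to extract from the first archive, hence the "caution: filename not matched" messages. Quoting the glob, as in the answer, lets unzip expand *.zip itself; running unzip once per archive also works: find . -name '*.zip' -exec unzip {} \;
find . -name '*.zip' -print0 | xargs -0 -n1 unzip The second form keeps the original pipeline but tells xargs to pass only one file name per unzip invocation.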
87,745 | What does the C value for LC_ALL do in Unix-like systems? I know that it forces the same locale for all aspects but what does C do? | It forces applications to use the default language for output: $ LC_ALL=es_ES man
¿Qué página de manual desea?
$ LC_ALL=C man
What manual page do you want? and forces sorting to be byte-wise: $ LC_ALL=en_US sort <<< $'a\nb\nA\nB'
a
A
b
B
$ LC_ALL=C sort <<< $'a\nb\nA\nB'
A
B
a
b | {
"source": [
"https://unix.stackexchange.com/questions/87745",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/1806/"
]
} |
87,772 | I have the scenario where lines to be added on begining and end of the huge files. I have tried as shown below. for the first line: sed -i '1i\'"$FirstLine" $Filename for the last line: sed -i '$ a\'"$Lastline" $Filename But the issue with this command is that it is appending the first line of the file and traversing entire file. For the last line it's again traversing the entire file and appending a last line. Since its very huge file (14GB) this is taking very long time. How can I add a line to the beginning and another to the end of a file while only reading the file once? | sed -i uses tempfiles as an implementation detail, which is what you are experiencing; however, prepending data to the beginning of a data stream without overwriting the existing contents requires rewriting the file, there's no way to get around that, even when avoiding sed -i . If rewriting the file is not an option, you might consider manipulating it when it is read, for example: { echo some prepended text ; cat file ; } | command Also, sed is for editing streams -- a file is not a stream. Use a program that is meant for this purpose, like ed or ex. The -i option to sed is not only not portable, it will also break any symlinks to your file, since it essentially deletes it and recreates it, which is pointless. You can do this in a single command with ed like so: ed -s file << 'EOF'
0a
prepend these lines
to the beginning
.
$a
append these lines
to the end
.
w
EOF Note that depending on your implementation of ed, it may use a paging file, requiring you to have at least that much space available. | {
"source": [
"https://unix.stackexchange.com/questions/87772",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/44820/"
]
} |
87,776 | With the following shell script, why am I getting the error syntax error near unexpected token `else' ? Shell Script echo "please enter username"
read user_name
echo "please enter password"
read -s pass
echo ${ORACLE_SID}
SID=${ORACLE_SID}
if ["${ORACLE_SID}" != 'Test'] then
sqlplus -s -l $USER_NAME/$PASS@$SID <<EOF
copy from scott/tiger@orcl insert EMP using select * from EMP
exit
EOF
else
echo "Cannot copy"
fi | You have to terminate the condition of if like this: if [ "${ORACLE_SID}" != 'Test' ]; then
^ semicolon or like this: if [ "${ORACLE_SID}" != 'Test' ]
then
^ newline Note: you also have to put spaces after [ and before ] . The reason for the ; or linebreak is that the condition part of the if statement is just a command. Any command of any length to be precise. The shell executes that command, examines the exit status of the command, and then decides whether to execute the then part or the else part. Because the command can be of any length there needs to be a marker to mark the end of the condition part. That is the ; or the newline, followed by then . The reason for the spaces after [ is because [ is a command. Usually a builtin of the shell. The shell executes the command [ with the rest as parameters, including the ] as mandatory last parameter. If you do not put a space after [ the shell will try to execute [whatever as command and fail. The reason for space before the ] is similar. Because otherwise it will not be recognized as a parameter of its own. | {
"source": [
"https://unix.stackexchange.com/questions/87776",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/27228/"
]
} |
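For completeness, here is the question's script with the answer's fix applied as a single runnable sketch. It also aligns the variable names: the original reads user_name and pass but then expands $USER_NAME and $PASS, which would be empty, so the lower-case names are assumed to be the intended ones: #!/bin/bash
echo "please enter username"
read user_name
echo "please enter password"
read -s pass
echo "${ORACLE_SID}"
SID=${ORACLE_SID}
if [ "${ORACLE_SID}" != 'Test' ]; then
    sqlplus -s -l "${user_name}/${pass}@${SID}" <<EOF
copy from scott/tiger@orcl insert EMP using select * from EMP
exit
EOF
else
    echo "Cannot copy"
fi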
87,831 | I'm using a Raspberry Pi in combination with Chromium (kiosk mode) to show up some stats. The Pi doesn't have a connected keyboard so I searched for a solution to send keystrokes from the terminal to the Chromium (tab) process . Normal input does work but how do I send something like F5 (a special key: browser refresh) via this solution? # pidof chromium
20809 20790 20788 20786 20783
# echo 'some text' > /proc/20809/fd/0 | GUI programs don't read from their standard input, they get their input from the X server . There are tools to inject a keystroke to a window. xdotool is fairly common and convenient. You'll need to find the window ID that you want to send the keystroke to. You can do that with xdotool. xdotool search --class Chrome returns the list of window IDs of all the Chrome windows. If this returns more than one, you need to pick the one you want. You can use xdotool search --name to match on the title instead of the class. You can also parse the output of wmctrl and extract the desired window ID. Once you've found the right window ID, you can call xdotool to inject a keystroke. Unfortunately, many applications reject synthetic events, i.e. keystrokes and mouse events sent by another application. This is the case with current versions of Chrome. It's possible to inject a keystroke from another application by a different mechanism, but that requires the window to be focused. You can do all of that with xdotool, but it'll cause the focus to quickly flicker to the Chrome window and back. The following snippet sends F5 to the first Chrome window (in a somewhat arbitrary order). xdotool search --class Chrome windowactivate --sync %1 key F5 windowactivate $(xdotool getactivewindow) Or with older versions of xdotool: xdotool windowactivate $(xdotool search --class Chrome) &&
xdotool key F5 &&
xdotool windowactivate $(xdotool getactivewindow) Remember that this sends F5 to that window and it's up to the program to decide what to do with it. In Chrome, this reloads the current tab. | {
"source": [
"https://unix.stackexchange.com/questions/87831",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/21286/"
]
} |
88,038 | I am trying to print the matched line and the 4th line from the matched line (line containing the expression I am searching for). I have been using the following code: sed -n 's/^[ \t]*//; /img class=\"devil_icon/,4p' input.txt But this only prints the matched line. This prints only the 4th line. awk 'c&&!--c;/img class=\"devil_icon/{c=4}' input.txt I need to print both the matched line and the 4th line only. | In awk, you'd do it as follows awk '/pattern/{nr[NR]; nr[NR+4]}; NR in nr' file > new_file` or awk '/pattern/{print; nr[NR+4]; next}; NR in nr' file > new_file` Explanation The first solution finds all lines that match pattern . When it finds a match it stores the record number ( NR ) in the array nr . It also stores the 4th record from NR in the same array. This is done by the nr[NR+4] . Every record ( NR ) is then checked to see if it's present in the nr array, if so the record is printed. The second solution works essentially the same way, except when it encounters th e pattern it prints that line, and then stores the 4th record ahead of it in the array nr , then goes to the next record. Then when awk encounters this 4th record the NR in nr block will get executed and print this +4 record there after. Example Here's an example data file, sample.txt . $ cat sample.txt
1
2
3
4 blah
5
6
7
8
9
10 blah
11
12
13
14
15
16 Using the 1st solution: $ awk '/blah/{nr[NR]; nr[NR+4]}; NR in nr' sample.txt
4 blah
8
10 blah
14 Using the 2nd solution: $ awk '/blah/{print; nr[NR+4]; next}; NR in nr' sample.txt
4 blah
8
10 blah
14 | {
"source": [
"https://unix.stackexchange.com/questions/88038",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/43762/"
]
} |
88,065 | I need to find the largest files in a folder. How do I scan a folder recursively and sort the contents by size? I have tried using ls -R -S , but this lists the directories as well. I also tried using find . | You can also do this with just du . Just to be on the safe side I'm using this version of du : $ du --version
du (GNU coreutils) 8.5 The approach: $ du -ah <some DIR> | grep -v "/$" | sort -rh Breakdown of approach The command du -ah DIR will produce a list of all the files and directories in a given directory DIR . The -h will produce human readable sizes which I prefer. If you don't want them then drop that switch. I'm using the head -6 just to limit the amount of output! $ du -ah ~/Downloads/ | head -6
4.4M /home/saml/Downloads/kodak_W820_wireless_frame/W820_W1020_WirelessFrames_exUG_GLB_en.pdf
624K /home/saml/Downloads/kodak_W820_wireless_frame/easyshare_w820.pdf
4.9M /home/saml/Downloads/kodak_W820_wireless_frame/W820_W1020WirelessFrameExUG_GLB_en.pdf
9.8M /home/saml/Downloads/kodak_W820_wireless_frame
8.0K /home/saml/Downloads/bugs.xls
604K /home/saml/Downloads/netgear_gs724t/GS7xxT_HIG_5Jan10.pdf Easy enough to sort it smallest to biggest: $ du -ah ~/Downloads/ | sort -h | head -6
0 /home/saml/Downloads/apps_archive/monitoring/nagios/nagios-check_sip-1.3/usr/lib64/nagios/plugins/check_ldaps
0 /home/saml/Downloads/data/elasticsearch/nodes/0/indices/logstash-2013.04.06/0/index/write.lock
0 /home/saml/Downloads/data/elasticsearch/nodes/0/indices/logstash-2013.04.06/0/translog/translog-1365292480753
0 /home/saml/Downloads/data/elasticsearch/nodes/0/indices/logstash-2013.04.06/1/index/write.lock
0 /home/saml/Downloads/data/elasticsearch/nodes/0/indices/logstash-2013.04.06/1/translog/translog-1365292480946
0 /home/saml/Downloads/data/elasticsearch/nodes/0/indices/logstash-2013.04.06/2/index/write.lock Reverse it, biggest to smallest: $ du -ah ~/Downloads/ | sort -rh | head -6
10G /home/saml/Downloads/
3.8G /home/saml/Downloads/audible/audio_books
3.8G /home/saml/Downloads/audible
2.3G /home/saml/Downloads/apps_archive
1.5G /home/saml/Downloads/digital_blasphemy/db1440ppng.zip
1.5G /home/saml/Downloads/digital_blasphemy Don't show me the directory, just the files: $ du -ah ~/Downloads/ | grep -v "/$" | sort -rh | head -6
3.8G /home/saml/Downloads/audible/audio_books
3.8G /home/saml/Downloads/audible
2.3G /home/saml/Downloads/apps_archive
1.5G /home/saml/Downloads/digital_blasphemy/db1440ppng.zip
1.5G /home/saml/Downloads/digital_blasphemy
835M /home/saml/Downloads/apps_archive/cad_cam_cae/salome/Salome-V6_5_0-LGPL-x86_64.run If you want to exclude all directories from the output, you can use a trick with the presence of a dot character. This assumes that your directory names do not contain dots, and that the files you are looking for do. Then you can filter out the directories with grep -v '\s/[^.]*$' : $ du -ah ~/Downloads/ | grep -v '\s/[^.]*$' | sort -rh | head -2
1.5G /home/saml/Downloads/digital_blasphemy/db1440ppng.zip
835M /home/saml/Downloads/apps_archive/cad_cam_cae/salome/Salome-V6_5_0-LGPL-x86_64.run If you just want the list of smallest to biggest, but the top 6 offending files you can reverse the sort switch, drop ( -r ), and use tail -6 instead of the head -6 . $ du -ah ~/Downloads/ | grep -v "/$" | sort -h | tail -6
835M /home/saml/Downloads/apps_archive/cad_cam_cae/salome/Salome-V6_5_0-LGPL-x86_64.run
1.5G /home/saml/Downloads/digital_blasphemy
1.5G /home/saml/Downloads/digital_blasphemy/db1440ppng.zip
2.3G /home/saml/Downloads/apps_archive
3.8G /home/saml/Downloads/audible
3.8G /home/saml/Downloads/audible/audio_books | {
"source": [
"https://unix.stackexchange.com/questions/88065",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/45817/"
]
} |
88,106 | I'm using Linux Mint. My login shell ( cat /etc/passwd | grep myUserName ) is bash. After I start my graphical desktop environment and run a terminal emulator from it, I can see that .bash_profile is not sourced (environment vars that are export ed in it are unset). But if I log in from a text console ( ctrl + alt + F1 ) or manually run bash -l from the terminal emulator, .bash_profile works fine. Am I wrong when I think that .bash_profile should be sourced when X starts and all export 'ed vars should be available in the terminal, running from X? P.S. Placing everything in .bashrc and sourcing it from .bash_profile is not good idea ( https://stackoverflow.com/questions/902946/ ): environment stuff should be sourced only once. | The file ~/.bash_profile is read by bash when it is a login shell. That's what you get when you log in in text mode. When you log in under X, the startup scripts are executed by /bin/sh . On Ubuntu and Mint, /bin/sh is dash , not bash. Dash and bash both have the same core features, but dash sticks to these core features in order to be fast and small whereas bash adds a lot of features at the cost of requiring more resources. It is common to use dash for scripts that don't need the extra features and bash for interactive use (though zsh has a lot of nicer features ). Most combinations of display manager (the program where you type your user name and password) and desktop environment read ~/.profile from the login scripts in /etc/X11/Xsession , /usr/bin/lightdm-session , /etc/gdm/Xsession or whichever is applicable. So put your environment variable definitions in ~/.profile . Make sure to use only syntax that dash supports. So what should you put where? A good .bash_profile loads .profile , and loads .bashrc if the shell is interactive. . ~/.profile
if [[ $- == *i* ]]; then . ~/.bashrc; fi In .profile , put environment variable definitions, and other session settings such as ulimit . In .bashrc , put bash interactive settings such as aliases, functions, completion, key bindings (that aren't in .inputrc ), … See also Difference between Login Shell and Non-Login Shell? and Alternative to .bashrc . | {
"source": [
"https://unix.stackexchange.com/questions/88106",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/39150/"
]
} |
88,174 | So I had to do an exercise in a book as homework. First you had to create a user like: useradd -c "Steven Baxter" -s "/bin/sh" sbaxter Then you had to add some files to the /home/sbaxter directory: touch /home/sbaxter/ some.txt new.txt files.txt Then you had to remove the sbaxter user and create a new user named mjane . To my surprise, when I ran find /home/ -user mjane , the new user mjane now owned all of sbaxter's old files. What happened? | The devil is in the details, in the useradd man page (you can see that by issuing man 8 useradd ): -u, --uid UID
The numerical value of the user's ID. This value must be unique,
unless the -o option is used. The value must be non-negative. The
default is to use the smallest ID value greater than or equal to
UID_MIN and greater than every other user. So it will default to using the smallest uid unused, that is larger than other users, in the password file. Seeing as deleting sbaxter removed him from the passwd file, his uid is "free" and gets assigned to mjane (as the uid useradd picks is the same for both users at the time the useradd command was used). Files on disk only store uid, and NOT the user name translation (as this translation is defined in the password file). You can confirm that by issuing ls -ln to see what uid ownership files have. I would actually recommend you disable rather than delete accounts. Locking accounts on most Linux distributions can be achieved with usermod -L -e today <username> , which locks the password and sets the account to expire today (you can see the expiry date of an account with chage -l ). | {
"source": [
"https://unix.stackexchange.com/questions/88174",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/45871/"
]
} |
88,185 | I am using Ubuntu and trying to delete all 100 lines from vi editor but I got interview question of doing this in one command. | In normal mode, do 100dd dd deletes the current line. Prefacing that command with 100 causes it to repeat 100 times. If there are fewer than 100 lines in the file starting from the current line, depending on the vi implementation, it will either fail to delete any or delete as many as there are. In the case of vim , that depends on whether the cp aka compatible option is on or not. | {
"source": [
"https://unix.stackexchange.com/questions/88185",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/45873/"
]
} |
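A small hedged addition to the answer above, since the interviewer may equally have had an ex command in mind: from normal mode, :1,100d deletes lines 1 through 100 in a single command, and :%d deletes every line in the buffer regardless of how many there are.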
88,201 | The question says it all. I currently use Arch Linux and the zsh, but I'd like a solution that (at minimum) works both on VTs and in xterms and also (hopefully, preferably) will continue to work if I switch distros or shells. I've heard wildly disparate answers to this question in different distros' docs. Ubuntu says "use .pam_environment". I think in Arch what they recommend depends on your shell. Currently I put everything in .profile and if a shell doesn't source that for some reason (e.g. bash if .bash_profile exists), I override that by manually sourcing it. But it seems like there must be a better way. | There is unfortunately no fully portable location to set environment variables. The two files that come closest are ~/.profile , which is the traditional location and works out of the box on many setups, and ~/.pam_environment , a modern, commonplace but limited alternative. What to put in ~/.pam_environment The file ~/.pam_environment is read by all login methods that use PAM and that have this file enabled. This covers most Linux systems nowadays. The major advantage of ~/.pam_environment is that (when enabled) it is read before the user's shell starts, so it works regardless of the session type, login shell and other complexities. It even works for non-interactive logins such as su -c somecommand and ssh somecommand . The major limitation of ~/.pam_environment is that you can only put simple assignments there, not complex shell syntax. The syntax of this file is as follows. Files are parsed line by line. Each line must have the form VAR=VALUE where VAR consists of letters, digits and underscores. The alternative form VAR DEFAULT=value allows expansions of environment variables using ${VAR} syntax and the special variables @{HOME} and @{SHELL} . # starts a comment, it cannot appear in a value. If VALUE is surrounded by " , then VAR is set to the string between the quotes. \$ or \@ insert a literal $ or @ and long lines can be split by escaping the newline with a \ . If there is a syntax error such as no = or unquoted whitespace, the variable is removed from the environment. So on the upside, ~/.pam_environment works in a large array of circumstances. On the downside, you cannot use the output of a command (e.g. test if a directory or program is present), and some characters ( #" , newline) are impossible or troublesome to put in the value. What to put in ~/.profile This file should have portable (POSIX) sh syntax. Only use ksh or bash extensions (arrays, [[ … ]] , etc.) if you know that your system has these shells as /bin/sh . This file may be read by scripts in automated applications, so it should not call programs that produce any output or call exec . If you want to do that on text-mode logins, do it only for interactive shells. Example: case $- in *i*)
# Display a message if I have new mail
if mail -e; then echo 'You have new mail'; fi
# If zsh is available, and this looks like a text-mode login, run zsh
case "`ps $PPID` " in
*" login "*)
if type zsh >/dev/null 2>/dev/null; then exec zsh; fi;;
esac
esac This is an example of using /bin/sh as your login shell and switching to your favorite shell. See also how can I use bash as my login shell when my sysadmin refuses to let me change it When is ~/.profile not read on non-graphical login? Different login shells read different files. If your login shell is bash Bash reads ~/.bash_login or ~/.bash_profile if they exist instead of ~/.profile . Also bash does not read ~/.bashrc in a login shell even if it is interactive. To never have to remember these quirks again, create a ~/.bash_profile with the following two lines: . ~/.profile
case $- in *i*) . ~/.bashrc;; esac See also Which setup files should be used for setting up environment variables with bash? If your login shell is zsh Zsh reads ~/.zprofile and ~/.zlogin , but not ~/.profile . Zsh has a different syntax from sh, but can read ~/.profile in sh emulation mode. You can use this for your ~/.zprofile : emulate sh -c '. ~/.profile' See also Zsh not hitting ~/.profile If your login shell is some other shell There's not much you can do there, short of using /bin/sh as your login shell and your favorite shell (such as fish) as an interactive shell only. That's what I do with zsh. See above for an example of invoking another shell from ~/.profile . Remote commands When invoking a remote command without going through an interactive shell, not all shells read a startup file. Ksh reads the file specified by the ENV variable, if you manage to pass it. Bash reads ~/.bashrc if it is not interactive (!) and its parent process is called rshd or sshd . So you can start your ~/.bashrc with if [[ $- != *i* ]]; then
. ~/.profile
return
fi Zsh always reads ~/.zshenv when it starts. Use with caution, since this is read by every single instance of zsh, even when it is a subshell where you've set other variables. If zsh is your login shell and you want to use it to set variables only for remote commands, use a guard: set some variable in ~/.profile , such as MY_ENVIRONMENT_HAS_BEEN_SET=yes , and check this guard before reading ~/.profile . if [[ -z $MY_ENVIRONMENT_HAS_BEEN_SET ]]; then emulate sh -c '~/.profile'; fi The case of graphical logins Many distributions, display managers and desktop environments arrange to run ~/.profile , either by explicitly sourcing it from the startup scripts or by running a login shell. Unfortunately, there is no general method to handle distro/DM/DE combinations where ~/.profile is not read. If you use a traditional session started by ~/.xsession , this is the place where you should set your environment variables; do it by sourcing ~/.profile (i.e. . ~/.profile ). Note that in some setups, the desktop environment startup scripts will source ~/.profile again. | {
"source": [
"https://unix.stackexchange.com/questions/88201",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/29146/"
]
} |
88,216 | I want to print list of numbers from 1 to 100 and I use a for loop like the following: number=100
for num in {1..$number}
do
echo $num
done When I execute the command it only prints {1..100} and not the list of number from 1 to 100. | Yes, that's because brace-expansion occurs before parameter expansion. Either use another shell like zsh or ksh93 or use an alternative syntax: Standard (POSIX) sh syntax i=1
while [ "$i" -le "$number" ]; do
echo "$i"
i=$(($i + 1))
done Ksh-style for ((...)) for ((i=1;i<=number;i++)); do
echo "$i"
done use eval (not recommended) eval '
for i in {1..'"$number"'}; do
echo "$i"
done
' use the GNU seq command on systems where it's available unset -v IFS # restore IFS to default
for i in $(seq "$number"); do
echo "$i"
done (that one being less efficient as it forks and runs a new command and the shell has to read its output from a pipe). Avoid loops in shells. Using a loop in a shell script is often an indication that you're not doing it right. Most probably, your code can be written some other way. | {
"source": [
"https://unix.stackexchange.com/questions/88216",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/28650/"
]
} |
88,228 | I want to background a command chain like cp a b && mv b c && rm a . I have tried doing cp a b && mv b c && rm a & but this only backgrounds the last process. How do I background a command chain? | cp a b && mv b c && rm a & is correct. & has lower precedence than && . In fact & has lower precedence than anything other than ; and newline: & is in the same syntactic category as ; , the difference being that ; runs the command list in the foreground while & runs it in the background. You can test this for yourself: $ dash -c 'sleep 2 && echo waited & echo backgrounded'
backgrounded
$ waited Same with pdksh, ksh93, bash, csh, tcsh. The exception is zsh, which is weirdly incompatible. This is documented in the manual : If a sublist is terminated by a & , &| , or &! , the shell executes the last pipeline in it in the background, and does not wait for it to finish (note the difference from other shells which execute the whole sublist in the background). Unfortunately, zsh behaves in this way even in sh or ksh compatibility mode. To make sure that the whole command is executed in the background, put braces or parentheses around it. Parentheses create a subshell whereas braces don't, but this is irrelevant (except as a micro-optimization in some shells) since a backgrounded command is in a subshell anyway. { cp a b && mv b c && rm a; } & | {
"source": [
"https://unix.stackexchange.com/questions/88228",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/45678/"
]
} |
88,247 | I have seen that sometimes :q works but sometimes we have to use :q! . This is the case for many commands. I was wondering what is the general use of ! in vim and when to use it. I tried to google this, but it seems the search is omitting the exclamation mark. | When you make no changes to the actual content of the file, you can simply quit with :q . However if you make edits, vim will not allow a simple quit because you may not want to abandon those changes (especially if you've been in vim for a long time editing and use :q by accident). The :q! in this case forces the quit operation (overriding the warning). You can issue a forced quit to all opened windows (such as those opened with Ctrl w n ) with :qa! . You can write changes out and quit with :wq (or :x ), and this sometimes will fail (the file has been opened as readonly, e.g. with -R on the command line, or because vim was invoked with the view command), in which case you can force the write operation with :wq! . As an aside, you can also use ZQ to do the same operation as :q! and ZZ to do the same as :wq , which can be easier on the hands for typing :) Vim also has a built-in help which you can access via :help ; exiting has its own quick topic page: :help Q_wq . | {
"source": [
"https://unix.stackexchange.com/questions/88247",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/45817/"
]
} |
88,257 | I'm trying to create a script that will evaluate the output of a command line, and then print if it's larger than 200. The program /exc/list will count the number of "stories" I have in a directory as an expression. For example: /exc/list q show.today1.rundown will return 161 if there are 161 stories in the today1 rundown. I have to figure this for 23 different directories. If the number of stories is greater than 200, I need it to print it to a temp file ( /tmp/StoryCount.$date ). What's the best method to handle this comparison? | When you make no changes to the actual content of the file, you can simply quit with :q . However if you make edits, vim will not allow a simple quit because you may not want to abandon those changes (especially if you've been in vim for a long time editing and use :q by accident). The :q! in this case is a force the quit operation (override the warning). You can issue a forced quit to all opened windows (such as those opened with Ctrl w n ) with :qa! . You can write changes out and quit with :wq (or :x ), and this sometimes will fail (the file has been opened as readonly ( -R on the command line, or vim was invoked with the view command), in which case you can force the write operation with :wq! . As an aside, you can also use ZQ to do the same operation as :q! and ZZ to do the same as :wq , which can be easier on the hands for typing :) Vim also has a built-in help which you can access via :help ; exiting has it's own quick topic page: :help Q_wq . | {
"source": [
"https://unix.stackexchange.com/questions/88257",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/45910/"
]
} |
88,283 | I was googling about how I could find the number of CPUs in a machine and I found some posts but I am confused as some mentioned that you get the logical cores vs physical cores etc. So what is the difference between logical and physical cores and is there a way I could get the physical cores only? Or does it make sense to include logical cores in our count? | Physical cores are just that, physical cores within the CPU. Logical cores are the abilities of a single core to do 2 or more things simultaneously. This grew out of the early Pentium 4 CPUs ability to do what was termed Hyper Threading (HTT) . It was a bit of a game that was being played where sub components of the core weren't being used for certain types of instructions while, another long running instruction might have been being executed. So the CPU could in effect work on 2 things simultaneously. Newer cores are more full-fledged CPUs so they're working on multiple things simultaneously, but they aren't true CPUs as the physical cores are. You can read more about the limitations of the hyperthreading functionality vs. the physical capabilities of the core here on tomshardware in this article titled: Intel Core i5 And Core i7: Intel’s Mainstream Magnum Opus . You can see the breakdown of your box using the lscpu command: $ lscpu
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
CPU(s): 4
Thread(s) per core: 2
Core(s) per socket: 2
CPU socket(s): 1
NUMA node(s): 1
Vendor ID: GenuineIntel
CPU family: 6
Model: 37
Stepping: 5
CPU MHz: 2667.000
Virtualization: VT-x
L1d cache: 32K
L1i cache: 32K
L2 cache: 256K
L3 cache: 3072K
NUMA node0 CPU(s): 0-3 In the above my Intel i5 laptop has 4 "CPUs" in total CPU(s): 4 of which there are 2 physical cores (1 socket × 2 cores/socket = 2 cores) Core(s) per socket: 2 CPU socket(s): 1 of which each can run up to 2 threads Thread(s) per core: 2 at the same time. These threads are the core's logical capabilities. | {
"source": [
"https://unix.stackexchange.com/questions/88283",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/42132/"
]
} |
88,307 | I just noticed that it seems like the flag -e does not exist for the echo command in my shell on Linux.
Is this just a messed up setting or is it "normal"? Some code as an example: #!/bin/sh
echo -e "\e[3;12r\e[3H" Prints: -e \e[3;12r\e[3H This worked before! I guess some stty commands went terribly wrong and now it does not work anymore. Somebody suggested that my sh was actually just bash . | Because you used sh , not bash , then echo command in sh doesn't have option -e . From sh manpage: echo [-n] args...
Print the arguments on the standard output, separated by spaces.
Unless the -n option is present, a newline is output following the
arguments. And it doesn't have \e , too: If any of the following sequences of characters is encountered
during output, the sequence is not output. Instead, the specified
action is performed:
\b A backspace character is output.
\c Subsequent output is suppressed. This is normally used at
the end of the last argument to suppress the trailing new‐
line that echo would otherwise output.
\f Output a form feed.
\n Output a newline character.
\r Output a carriage return.
\t Output a (horizontal) tab character.
\v Output a vertical tab.
\0digits
Output the character whose value is given by zero to three
octal digits. If there are zero digits, a nul character
is output.
\\ Output a backslash.
All other backslash sequences elicit undefined behaviour. | {
"source": [
"https://unix.stackexchange.com/questions/88307",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/45867/"
]
} |
88,392 | Assume I have a tmux (1.7) window split as follows: ________________________
| 1 |
| |
|-----------+------------|
| 2 | 3 |
|___________|____________| Now, the vertical sizes have been customized, so it's by no means one of the default layouts. On occasion, when a program gets stuck or when you reboot a machine to which you connected via ssh , the pane "hangs". I.e. nothing other than kill-pane appears to work. However, since there is no easy way to rebuild above split configuration once pane #1 has been kill-pane d, I'd like to "restart" it. | Looking at the manual, the command respawn-pane struck me, but it turned out that this didn't work. Reading more closely, it turned out that respawn-pane -k was the answer, since it would kill the running command. This way a pane can be "restarted" and spawned anew in place. So <prefix> + : and then enter respawn-pane -k and press Enter | {
"source": [
"https://unix.stackexchange.com/questions/88392",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/5462/"
]
} |
88,452 | I need to concatenate two variables to create a filename that has an underscore.
Lets call my variables $FILENAME and $EXTENSION where filename is read from a file. FILENAME=Hello
EXTENSION=WORLD.txt Now... I have tried the following without success: NAME=${FILENAME}_$EXTENSION
NAME=${FILENAME}'_'$EXTENSION
NAME=$FILENAME\\_$EXTENSION I always get some kind of weird output. Usually the underscore first. I need it to be echo $NAME
Hello_WORLD.txt | You can use something like this: NAME=$(echo ${FILENAME}_${EXTENSION}) This works as well: NAME=${FILENAME}_${EXTENSION} | {
"source": [
"https://unix.stackexchange.com/questions/88452",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/20380/"
]
} |
88,487 | I have a general question, which might be a result of misunderstanding of how processes are handled in Linux. For my purposes I am going to define a 'script' as a snippet of bash code saved to a text file with execute permissions enabled for the current user. I have a series of scripts that call each other in tandem. For simplicity's sake I'll call them scripts A, B, and C. Script A carries out a series of statements and then pauses, it then executes script B, then it pauses, then it executes script C. In other words, the series of steps is something like this: Run Script A: Series of statements Pause Run Script B Pause Run Script C I know from experience that if I run script A until the first pause, then make edits in script B, those edits are reflected in the execution of the code when I allow it to resume. Likewise if I make edits to script C while script A is still paused, then allow it to continue after saving changes, those changes are reflected in the execution of the code. Here is the real question then, is there any way to edit Script A while it is still running? Or is editing impossible once its execution begins? | In Unix, most editors work by creating a new temporary file containing the edited contents. When the edited file is saved, the original file is deleted and the temporary file renamed to the original name. (There are, of course, various safeguards to prevent dataloss.) This is, for example, the style used by sed or perl when invoked with the -i ("in-place") flag, which is not really "in-place" at all. It should have been called "new place with old name". This works well because unix assures (at least for local filesystems) that an opened file continues to exist until it is closed, even if it is "deleted" and a new file with the same name is created. (It's not coincidental that the unix system call to "delete" a file is actually called "unlink".) So, generally speaking, if a shell interpreter has some source file open, and you "edit" the file in the manner described above, the shell won't even see the changes since it still has the original file open. [Note: as with all standards-based comments, the above is subject to multiple interpretations and there are various corner-cases, such as NFS. Pedants are welcome to fill the comments with exceptions.] It is, of course, possible to modify files directly; it's just not very convenient for editing purposes, because while you can overwrite data in a file, you cannot delete or insert without shifting all following data, which would imply quite a lot of rewriting. Furthermore, while you were doing that shifting, the contents of the file would be unpredictable and processes which had the file open would suffer. In order to get away with this (as with database systems, for example), you need a sophisticated set of modification protocols and distributed locks; stuff which is well beyond the scope of a typical file editing utility. So, if you want to edit a file while its being processed by a shell, you have two options: You can append to the file. This should always work. You can overwrite the file with new contents of exactly the same length . This may or may not work, depending on whether the shell has already read that part of the file or not. Since most file I/O involves read buffers, and since all the shells I know read an entire compound command before executing it, it is pretty unlikely that you can get away with this. It certainly wouldn't be reliable. 
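To make option 2 concrete: the usual tool for a same-length, in-place overwrite is dd with its notrunc conversion. Here is a minimal sketch (the file name, offset and replacement bytes are made up for illustration):
printf 'XXXX' | dd of=myscript.sh bs=1 seek=10 conv=notrunc 2>/dev/null
This writes exactly four bytes starting at byte offset 10 of myscript.sh and leaves the rest of the file, and its length, untouched. It is the same dd technique that the demonstration script below relies on.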
I don't know of any wording in the Posix standard which actually requires the possibility of appending to a script file while the file is being executed, so it might not work with every Posix compliant shell, much less with the current offering of almost- and sometimes-posix-compliant shells. So YMMV. But as far as I know, it does work reliably with bash. As evidence, here's a "loop-free" implementation of the infamous 99 bottles of beer program in bash, which uses dd to overwrite and append (the overwriting is presumably safe because it substitutes the currently executing line, which is always the last line of the file, with a comment of exactly the same length; I did that so that the end result can be executed without the self-modifying behaviour.) #!/bin/bash
if [[ $1 == reset ]]; then
printf "%s\n%-16s#\n" '####' 'next ${1:-99}' |
dd if=/dev/stdin of=$0 seek=$(grep -bom1 ^#### $0 | cut -f1 -d:) bs=1 2>/dev/null
exit
fi
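# step: move the bottle count down by one and adjust the s/one wording; returns 1 when the count wraps from "No more" back to 99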
step() {
s=s
one=one
case $beer in
2) beer=1; unset s;;
1) beer="No more"; one=it;;
"No more") beer=99; return 1;;
*) ((--beer));;
esac
}
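# next: work out the new count, then use dd to overwrite this script in place starting at its trailing 'next' line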
next() {
step ${beer:=$(($1+1))}
refrain |
dd if=/dev/stdin of=$0 seek=$(grep -bom1 ^next\ $0 | cut -f1 -d:) bs=1 conv=notrunc 2>/dev/null
}
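# refrain: emit the shell code for the next verse (the echo commands plus the following 'next' call) that dd writes back into the script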
refrain() {
printf "%-17s\n" "# $beer bottles"
echo echo ${beer:-No more} bottle$s of beer on the wall, ${beer:-No more} bottle$s of beer.
if step; then
echo echo Take $one down, pass it around, $beer bottle$s of beer on the wall.
echo echo
echo next abcdefghijkl
else
echo echo Go to the store, buy some more, $beer bottle$s of beer on the wall.
fi
}
####
next ${1:-99} # | {
"source": [
"https://unix.stackexchange.com/questions/88487",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/40638/"
]
} |
88,489 | I want your help in multiplying columns of one file by column of other another file where both files have the same number of columns and rows. I want the script to multiply the first column of the first file by the first column of the second file, the second column of the first file and the second column of the second file and so on. Here is my sample data and the required output below file1 2 3 4 4 . . .
5 6 7 8 . . .
. . . . . . . file2 3 4 8 10 . . .
5 10 5 9 . . .
. . . . . . . Required output file will be file1.file2 6 12 32 40 . . .
25 60 35 72 . . . | In Unix, most editors work by creating a new temporary file containing the edited contents. When the edited file is saved, the original file is deleted and the temporary file renamed to the original name. (There are, of course, various safeguards to prevent dataloss.) This is, for example, the style used by sed or perl when invoked with the -i ("in-place") flag, which is not really "in-place" at all. It should have been called "new place with old name". This works well because unix assures (at least for local filesystems) that an opened file continues to exist until it is closed, even if it is "deleted" and a new file with the same name is created. (It's not coincidental that the unix system call to "delete" a file is actually called "unlink".) So, generally speaking, if a shell interpreter has some source file open, and you "edit" the file in the manner described above, the shell won't even see the changes since it still has the original file open. [Note: as with all standards-based comments, the above is subject to multiple interpretations and there are various corner-cases, such as NFS. Pedants are welcome to fill the comments with exceptions.] It is, of course, possible to modify files directly; it's just not very convenient for editing purposes, because while you can overwrite data in a file, you cannot delete or insert without shifting all following data, which would imply quite a lot of rewriting. Furthermore, while you were doing that shifting, the contents of the file would be unpredictable and processes which had the file open would suffer. In order to get away with this (as with database systems, for example), you need a sophisticated set of modification protocols and distributed locks; stuff which is well beyond the scope of a typical file editing utility. So, if you want to edit a file while its being processed by a shell, you have two options: You can append to the file. This should always work. You can overwrite the file with new contents of exactly the same length . This may or may not work, depending on whether the shell has already read that part of the file or not. Since most file I/O involves read buffers, and since all the shells I know read an entire compound command before executing it, it is pretty unlikely that you can get away with this. It certainly wouldn't be reliable. I don't know of any wording in the Posix standard which actually requires the possibility of appending to a script file while the file is being executed, so it might not work with every Posix compliant shell, much less with the current offering of almost- and sometimes-posix-compliant shells. So YMMV. But as far as I know, it does work reliably with bash. As evidence, here's a "loop-free" implementation of the infamous 99 bottles of beer program in bash, which uses dd to overwrite and append (the overwriting is presumably safe because it substitutes the currently executing line, which is always the last line of the file, with a comment of exactly the same length; I did that so that the end result can be executed without the self-modifying behaviour.) #!/bin/bash
if [[ $1 == reset ]]; then
printf "%s\n%-16s#\n" '####' 'next ${1:-99}' |
dd if=/dev/stdin of=$0 seek=$(grep -bom1 ^#### $0 | cut -f1 -d:) bs=1 2>/dev/null
exit
fi
step() {
s=s
one=one
case $beer in
2) beer=1; unset s;;
1) beer="No more"; one=it;;
"No more") beer=99; return 1;;
*) ((--beer));;
esac
}
next() {
step ${beer:=$(($1+1))}
refrain |
dd if=/dev/stdin of=$0 seek=$(grep -bom1 ^next\ $0 | cut -f1 -d:) bs=1 conv=notrunc 2>/dev/null
}
refrain() {
printf "%-17s\n" "# $beer bottles"
echo echo ${beer:-No more} bottle$s of beer on the wall, ${beer:-No more} bottle$s of beer.
if step; then
echo echo Take $one down, pass it around, $beer bottle$s of beer on the wall.
echo echo
echo next abcdefghijkl
else
echo echo Go to the store, buy some more, $beer bottle$s of beer on the wall.
fi
}
####
next ${1:-99} # | {
"source": [
"https://unix.stackexchange.com/questions/88489",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/45611/"
]
} |
88,490 | Let's say I have a script that I want to pipe to another command or redirect to a file (piping to sh for the examples). Assume that I'm using bash. I could do it using echo : echo "touch somefile
echo foo > somefile" | sh I could also do almost the same thing using cat : cat << EOF
touch somefile
echo foo > somefile
EOF But if I replace "EOF" with "EOF | sh" it just thinks that it's a part of the heredoc. How can I make it so that cat outputs text from stdin, and then pipes it to an arbitrary location? | There are multiple ways to do this. The simplest is probably this: cat <<EOF | sh
touch somefile
echo foo > somefile
EOF Another, which is nicer syntax in my opinion: (
cat <<EOF
touch somefile
echo foo > somefile
EOF
) | sh This works as well, but without the subshell: {
cat <<EOF
touch somefile
echo foo > somefile
EOF
} | sh More variations: cat <<EOF |
touch somefile
echo foo > somefile
EOF
sh Or: { cat | sh; } << EOF
touch somefile
echo foo > somefile
EOF By the way, I expect the use of cat in your question is a placeholder for something else. If not, take it out, like this: sh <<EOF
touch somefile
echo foo > somefile
EOF Which could be simplified to this: sh -c 'touch somefile; echo foo > somefile' or: sh -c 'touch somefile
echo foo > somefile' Redirecting output instead of piping sh >out <<EOF
touch somefile
echo foo > somefile
EOF Using cat to get the equivalent of echo test > out : cat >out <<EOF
test
EOF Multiple Here Documents ( cat; echo ---; cat <&3 ) <<EOF 3<<EOF2
hi
EOF
there
EOF2 This produces the output: hi
---
there Here's what's going on: The shell sees the ( ... ) and runs the enclosed commands in a subshell. The cat and echo are simple enough. The cat <&3 says to run cat with file descriptor (fd) 0 (stdin) redirected from fd 3; in other words, cat out the input from fd 3. Before the (...) is started, the shell sees the two here document redirects and substitutes fd 0 ( <<EOF ) and fd 3 ( 3<<EOF2 ) with the read-side of pipes Once the initial command is started, the shell reads its stdin until EOF is reached and sends that to the write-side of the first pipe Next, it does the same with EOF2 and the write-side of the second pipe | {
"source": [
"https://unix.stackexchange.com/questions/88490",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/29146/"
]
} |
88,503 | To capture a particular pattern, awk and grep can be used. Why should we use one over the other? Which is faster and why? If I had a log file and I wanted to grab a certain pattern, I could do one of the following awk '/pattern/' /var/log/messages or grep 'pattern' /var/log/messages I haven't done any benchmarking, so I wouldn't know. Can someone elaborate this? It is great to know the inner workings of these two tools. | grep will most likely be faster: # time awk '/USAGE/' imapd.log.1 | wc -l
73832
real 0m2.756s
user 0m2.740s
sys 0m0.020s
# time grep 'USAGE' imapd.log.1 | wc -l
73832
real 0m0.110s
user 0m0.100s
sys 0m0.030s awk is an interpreted programming language, whereas grep is a compiled C program (which is additionally optimized towards finding patterns in files). (Note - I ran both commands twice so that caching would not potentially skew the results.) More details about interpreted languages on Wikipedia. As Stephane has rightly pointed out in comments, your mileage may vary due to the implementation of the grep and awk you use, the operating system it is on and the character set you are processing. | {
"source": [
"https://unix.stackexchange.com/questions/88503",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/45891/"
]
} |
88,613 | I'm trying to make a small command that will find the processes that use the most CPU power. Firstly, I use ps aux > file.txt and then cut -c 16-20 file.txt | sort -n | tail -5 . The result I get is this: 1.0
2.7
8.
14.5
14.5 So my question is how can I have both the %CPU usage and the other fields outputted together? | The correct answer is: ps --sort=-pcpu For top 5: ps --sort=-pcpu | head -n 6 So you can specify columns without interfering with sorting. Ex: ps -Ao user,uid,comm,pid,pcpu,tty --sort=-pcpu | head -n 6 Note of 'ckujau': --sort is supported by ps from procps , other implementations may not have this option. | {
"source": [
"https://unix.stackexchange.com/questions/88613",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/46081/"
]
} |
88,622 | I have a script that, when invoked, will cause the contents of dmesg to be written to a file, with the file's name basically being a timestamp. SELinux prevents this. Following the advice of Fedora's SELinux troubleshooting app, I tried: grep dmesg /var/log/audit/audit.log | audit2allow -M mypol semodule -i mypol.pp However, this doesn't seem to work, probably because the name of the file it's creating is different every time. So how do I tell SELinux to allow dmesg to create (and write to) any file in a certain directory? Or tell it that the script in question (and all the processes it spawns) can do that? | The correct answer is: ps --sort=-pcpu For top 5: ps --sort=-pcpu | head -n 6 So you can specify columns without interfering with sorting. Ex: ps -Ao user,uid,comm,pid,pcpu,tty --sort=-pcpu | head -n 6 Note of 'ckujau': --sort is supported by ps from procps , other implementations may not have this option. | {
"source": [
"https://unix.stackexchange.com/questions/88622",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/41033/"
]
} |
88,642 | I'm following through a tutorial and it mentions to run this command: sudo chmod 700 !$ I'm not familiar with !$ . What does it mean? | Basically, it's the last argument to the previous command. !$ is the "end" of the previous command. Consider the following
example: We start by looking for a word in a file: grep -i joe /some/long/directory/structure/user-lists/list-15 if joe is in that userlist, we want to remove him from it. We can either fire up vi with that long directory tree as the argument, or as simply as vi !$ Which
bash expands to: vi /some/long/directory/structure/user-lists/list-15 ( source ; handy guide, by the way) It's worth noting the distinction between this !$ token and the special shell variable $_ .
Indeed, both expand to the last argument of the previous command. However, !$ is expanded during history expansion , while $_ is expanded during parameter expansion .
One important consequence of this is that, when you use !$ , the expanded command is saved in your history. For example, consider the keystrokes echo Foo Enter echo !$ Jar Enter Up Enter ; and echo Foo Enter echo $_ Jar Enter Up Enter . (The only characters changed are the $! and $_ in the middle.) In the former, when you press Up , the command line reads echo Foo Jar , so the last line written to stdout is Foo Jar . In the latter, when you press Up , the command line reads echo $_ bar , but now $_ has a different value than it did previously—indeed, $_ is now Jar , so the last line written to stdout is Jar Jar . Another consequence is that _ can be used in other parameter expansions, for example, the sequence of commands printf '%s ' isomorphism
printf '%s\n' ${_%morphism}sceles prints isomorphism isosceles .
But there's no analogous " ${!$%morphism} " expansion. For more information about the phases of expansion in Bash, see the EXPANSION section of man 1 bash (this is called Shell Expansions in the online edition). The HISTORY EXPANSION section is separate. | {
"source": [
"https://unix.stackexchange.com/questions/88642",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/40509/"
]
} |
88,644 | What is the Linux command to check the server OS and its version? I am connected to the server using shell. | Kernel Version If you want kernel version information, use uname(1). For example: $ uname -a
Linux localhost 3.11.0-3-generic #8-Ubuntu SMP Fri Aug 23 16:49:15 UTC 2013 x86_64 x86_64 x86_64 GNU/Linux Distribution Information If you want distribution information, it will vary depending on your distribution and whether your system supports the Linux Standard Base . Some ways to check, and some example output, are immediately below. $ lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description: Ubuntu Saucy Salamander (development branch)
Release: 13.10
Codename: saucy
$ cat /etc/lsb-release
DISTRIB_ID=Ubuntu
DISTRIB_RELEASE=13.10
DISTRIB_CODENAME=saucy
DISTRIB_DESCRIPTION="Ubuntu Saucy Salamander (development branch)"
$ cat /etc/issue.net
Ubuntu Saucy Salamander (development branch)
$ cat /etc/debian_version
wheezy/sid | {
"source": [
"https://unix.stackexchange.com/questions/88644",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/46098/"
]
} |
88,661 | Is there a nicer way to create a timestamp in front of an echo ? Currently I do it this way: #!/bin/sh
if mount | grep -q /mnt/usb; then
echo `date +%R\ ` "usb device already mounted"
else
echo `date +%R\ ` "mounting usb device..."
mount -t msdosfs /dev/da0s1 /mnt/usb
if mount | grep -q /mnt/usb; then
echo `date +%R\ ` "usb device successfully mounted"
fi
fi The output should look something like this: 10:36 usb device already mounted | You could skip the echo , and just put the message in the date command. date allows you to insert text into the format string ( +%R in your example). For example: date +"%R usb device already mounted" You can also throw it into a shell function for convenience. For example: echo_time() {
date +"%R $*"
}
echo_time "usb device already mounted" This is a cleaner if you are going to re-use it many times. | {
"source": [
"https://unix.stackexchange.com/questions/88661",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/42580/"
]
} |
88,665 | Witness: $ ps f
PID TTY STAT TIME COMMAND
31509 pts/3 Ss 0:01 -bash
27266 pts/3 S+ 0:00 \_ mysql -uroot -p
25210 pts/10 Ss+ 0:00 /bin/bash
24444 pts/4 Ss 0:00 -bash
29111 pts/4 S+ 0:00 \_ tmux attach
4833 pts/5 Ss+ 0:00 -bash
9046 pts/6 Ss 0:00 -bash
17749 pts/6 R+ 0:00 \_ ps f
4748 pts/0 Ss 0:00 -bash
14635 pts/0 T 0:02 \_ mysql -uroot -px xxxxxxxxxxxxxxxx
16210 pts/0 S+ 0:01 \_ mysql -uroot -px xxxxxxxxxxxxxxxx How did ps know to hide the mysql passwords? Can I incorporate this into my own scripts to hide particular CLI attributes? | ps does not hide the password. Applications like mysql overwrite the argument list they were given. Please note that there is a small time frame (possibly extended by high system load) where the arguments are visible to other applications until they are overwritten. Hiding the process from other users could help. In general it is much better to pass passwords via files than on the command line. This article describes how to do this in C.
The following example hides/deletes all command line arguments: #include <string.h>
int main(int argc, char **argv)
{
// process command line arguments....
// hide command line arguments
if (argc > 1) {
char *arg_end;
arg_end = argv[argc-1] + strlen (argv[argc-1]);
*arg_end = ' ';
}
// ...
} Look also at https://stackoverflow.com/questions/724582/hide-arguments-from-ps and https://stackoverflow.com/questions/3830823/hiding-secret-from-command-line-parameter-on-unix . | {
"source": [
"https://unix.stackexchange.com/questions/88665",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/9760/"
]
} |
88,693 | I just read some stuff about swappiness on Linux. I don't understand why the default is set to 60. According to me this parameter should be set to 10 in order to reduce swap. Swap is on my hard drives so it us much slower than my memory. Why did they configure the kernel like that? | Since kernel 2.6.28, Linux uses a Split Least Recently Used (LRU) page replacement strategy. Pages with a filesystem source, such as program text or shared libraries belong to the file cache. Pages without filesystem backing are called anonymous pages, and consist of runtime data such as the stack space reserved for applications etc. Typically pages belonging to the file cache are cheaper to evict from memory (as these can simple be read back from disk when needed). Since anonymous pages have no filesystem backing, they must remain in memory as long as they are needed by a program unless there is swap space to store them to. It is a common misconception that a swap partition would somehow slow down your system. Not having a swap partition does not mean that the kernel won't evict pages from memory, it just means that the kernel has fewer choices in regards to which pages to evict. The amount of swap available will not affect how much it is used. Linux can cope with the absence of a swap space because, by default, the kernel memory accounting policy may overcommit memory . The downside is that when physical memory is exhausted, and the kernel cannot swap anonymous pages to disk, the out-of-memory-killer (OOM-killer) mechanism will start killing off memory-hogging "rogue" processes to free up memory for other processes. The vm.swappiness option is a modifier that changes the balance between swapping out file cache pages in favour of anonymous pages. The file cache is given an arbitrary priority value of 200 from which vm.swappiness modifier is deducted ( file_prio=200-vm.swappiness ). Anonymous pages, by default, start out with 60 ( anon_prio=vm.swappiness ). This means that, by default, the priority weights stand moderately in favour of anonymous pages ( anon_prio=60 , file_prio=200-60=140 ). The behaviour is defined in mm/vmscan.c in the kernel source tree. Given a vm.swappiness of 100 , the priorities would be equal ( file_prio=200-100=100 , anon_prio=100 ). This would make sense for an I/O heavy system if it is not wanted that pages from the file cache being evicted in favour of anonymous pages. Conversely setting the vm.swappiness to 0 will prevent the kernel from evicting anonymous pages in favour of pages from the file cache. This might be useful if programs do most of their caching themselves, which might be the case with some databases. In desktop systems this might improve interactivity, but the downside is that I/O performance will likely take a hit. The default value has most likely been chosen as an approximate middleground between these two extremes. As with any performance parameter, adjusting vm.swappiness should be based on benchmark data comparable to real workloads, not just a gut feeling. | {
"source": [
"https://unix.stackexchange.com/questions/88693",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/32820/"
]
} |
88,714 | I have some text in my paste buffer, e.g. I did a yw (yank word) and now I have 'foo' in my buffer. I now go to the word 'bar', and I want to replace it with my paste buffer. To replace the text manually I could do cw and then type the new word. How can I do a 'change word', but use the contents of my paste buffer instead of manually typing out the replacement word? The best option I have right now is to go to the beginning of the word I want to replace and do dw (delete word), go to the other place, and do the yw (yank word). Then go back to the replacement area and do p (paste) which is kind of clumsy, especially if they are not on the same screen. | Option 1 You could use registers to do it and make a keybinding for the process. Yank the word you want to replace with yw . The yanked word is in the 0 register which you can see by issuing :registers . Go to the word you want to replace and do cw . Do Ctrl + r followed by 0 to paste the 0 register. The map for that would look something like this (assuming Ctrl + j as our key combo): :map <C-j> cw<C-r>0<ESC> Option 2 (simpler) With your word yanked, cursor over the word you want to replace and do v i w p . Which is visual select inner word and paste. Courtesy of @tlo in the comments: you could also just do v e p . One char shorter. Downside have to position cursor at start of word and (as with mine) changes the buffer. Comment (from Michael): This is good. Extra note: the second method is indeed easier but, as is, only works for ONE substitution because after each substitution the buffer then gets changed to the field that was replaced (old text). The first method is a little harder to use BUT has the advantage that the buffer 0 stays 'as is' so you can use that method to do more than 1 replacement of the same text. | {
"source": [
"https://unix.stackexchange.com/questions/88714",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/10043/"
]
} |
88,728 | Simple inquiry: I have just realized that I have never seen a shebang on top of a .bashrc script, which leads me to think the system uses the default shell to source it upon login ( ${SHELL} ). I am pondering over reasons why that is the case, i.e. is it considered a bad habit to use something other than the default shell to run the login script. | .bashrc and .bash_profile are NOT scripts. They're configuration files which get sourced every time bash is executed in one of 2 ways: interactive login The INVOCATION section of the bash man page is what's relevant. A login shell is one whose first character of argument zero is a - , or
one started with the --login option. An interactive shell is one started without non-option arguments and
without the -c option whose standard input and error are both
connected to terminals (as determined by isatty(3)) , or one started
with the -i option. PS1 is set and $- includes i if bash is
interactive, allowing a shell script or a startup file to test this
state. The following paragraphs describe how bash executes its startup
files. If any of the files exist but cannot be read, bash reports an
error. Tildes are expanded in file names as described below under Tilde Expansion in the EXPANSION section. When bash is invoked as an interactive login shell, or as a
non-interactive shell with the --login option, it first reads and
executes commands from the file /etc/profile , if that file
exists. After reading that file, it looks for ~/.bash_profile , ~/.bash_login , and ~/.profile , in that order, and reads and executes
commands from the first one that exists and is readable. The --noprofile option may be used when the shell is started to inhibit this behavior. When a login shell exits, bash reads and executes commands from the
file ~/.bash_logout , if it exists. When an interactive shell that is not a login shell is started, bash
reads and executes commands from ~/.bashrc , if that file exists.
This may be inhibited by using the --norc option. The --rcfile file option will force bash to read and execute commands from file instead
of ~/.bashrc . You can control when they get loaded through the command line switches, --norc and --noprofile . You can also override the location of where they get loaded from using the --rcfile switch. As others have mentioned you can mimic how these files get loaded through the use of the source <file> command or the use of the . <file> command. It's best to think of this functionality as follows:
bash starts up with a bare environment
bash then opens one of these files (depending on how it was invoked, as interactive or login), and then...
...line by line executes each of the commands within the file...
when complete, gives control to you in the form of a prompt, waiting for input
Methods for invoking This topic seems to come up every once in a while, so here's a little cheatsheet of the various ways to invoke bash and what they result in. NOTE: To help I've added the messages "sourced $HOME/.bashrc" and "sourced $HOME/.bash_profile" to their respective files. basic calls bash -i $ bash -i
sourced /home/saml/.bashrc bash -l $ bash -l
sourced /home/saml/.bashrc
sourced /home/saml/.bash_profile bash -il -or- bash -li $ bash -il
sourced /home/saml/.bashrc
sourced /home/saml/.bash_profile bash -c "..cmd.." $ bash -c 'echo hi'
hi NOTE: Notice that the -c switch didn't source either file! disabling config files from being read bash --norc $ bash --norc
bash-4.1$ bash --noprofile $ bash --noprofile
sourced /home/saml/.bashrc bash --norc -i $ bash --norc -i
bash-4.1$ bash --norc -l $ bash --norc -l
sourced /home/saml/.bashrc
sourced /home/saml/.bash_profile bash --noprofile -i $ bash --noprofile -i
sourced /home/saml/.bashrc bash --noprofile -l $ bash --noprofile -l
bash-4.1$ bash --norc -i -or- bash --norc -l $ bash --norc -c 'echo hi'
hi More esoteric ways to call bash bash --rcfile $HOME/.bashrc $ bash -rcfile ~/.bashrc
sourced /home/saml/.bashrc bash --norc --rcfile $HOME/.bashrc $ bash --norc -rcfile ~/.bashrc
bash-4.1$ These failed bash -i -rcfile ~/.bashrc $ bash -i -rcfile ~/.bashrc
sourced /home/saml/.bashrc
sourced /home/saml/.bash_profile
bash: /home/saml/.bashrc: restricted: cannot specify `/' in command names bash -i -rcfile .bashrc $ bash -i -rcfile .bashrc
sourced /home/saml/.bashrc
sourced /home/saml/.bash_profile
bash: .bashrc: command not found There are probably more but you get the point, hopefully.... What else? Lastly if you're so enthralled with this topic that you'd like to read/explore more on it, I highly suggest taking a look at the Bash Beginners Guide, specifically section: 1.2. Advantages of the Bourne Again SHell . The various subsections under that one, "1.2.2.1. Invocation" through "1.2.2.3.3. Interactive shell behavior" explain the low level differences between the various ways you can invoke bash . | {
"source": [
"https://unix.stackexchange.com/questions/88728",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/23944/"
]
} |
88,732 | Is it possible to speed up the gzip process? I'm using mysqldump "$database_name" | gzip > $BACKUP_DIR/$database_name.sql.gz to backup a database into a directory, $BACKUP_DIR . the manpage says: -# --fast --best Regulate the speed of compression using the
specified digit #, where -1 or --fast indi‐
cates the fastest compression method (less
compression) and -9 or --best indicates the
slowest compression method (best compression).
The default compression level is -6 (that is,
biased towards high compression at expense of
speed). How effective would it be to use --fast ? Is this effectively lowering the CPU usage on a modern computer? My test results I didn't notice any acceleration: 7 min, 47 seconds (with default ratio -6 ) 8 min, 36 seconds (with ratio --fast ( = 9 )) So it seems it takes even longer to use the fast compression? Only higher compression really slows it down: 11 min, 57 seconds (with ratio --best ( = 1 )) After getting the idea with lzop I tested that too and it really is faster: 6 min, 14 seconds with lzop -1 -f -o $BACKUP_DIR/$database_name.sql.lzo | If you have a multi-core machine, using pigz is much faster than traditional gzip. pigz, which stands for parallel implementation of gzip, is a fully functional replacement for gzip that exploits multiple processors and multiple cores to the hilt when compressing data. pigz was written by Mark Adler, and uses the zlib and pthread libraries. Pigz can be used as a drop-in replacement for gzip. Note that only the compression can be parallelised, not the decompression. Using pigz the command line becomes mysqldump "$database_name" | pigz > $BACKUP_DIR/$database_name.sql.gz | {
"source": [
"https://unix.stackexchange.com/questions/88732",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/20661/"
]
} |
88,738 | I am creating an LDAP directory and searching by the full DN shows the proper results. $ ldapsearch -x -D "cn=ldapbind,dc=server,dc=com" -w bind I want / need to be able to search using the email address as in: $ ldapsearch -x -D [email protected] -w bind
ldap_bind: Invalid DN syntax (34)
additional info: invalid DN We have an Active Directory that allows ldapsearch to do that kind of search but I don't know what configuration changes do I need in order to have that in LDAP. I am not attaching my slapd.conf because I want someone to show me the right configuration to do this, and mine is quite minimal. | If you have a multi-core machine using pigz is much faster than traditional gzip. pigz, which stands for parallel implementation of gzip, is a fully functional replacement for gzip that exploits multiple processors and multiple cores to the hilt when compressing data. pigz was written by Mark Adler, and uses the zlib and pthread libraries. Pigz ca be used as a drop-in replacement for gzip. Note than only the compression can be parallelised, not the decompression. Using pigz the command line becomes mysqldump "$database_name" | pigz > $BACKUP_DIR/$database_name.sql.gz | {
"source": [
"https://unix.stackexchange.com/questions/88738",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/31891/"
]
} |
88,744 | On ubuntu this file exists: /var/log/syslog . However the same file does not appear on CentOS Distributions. What is the equivalent file on CentOS? | Red Hat family distributions (including CentOS and Fedora) use /var/log/messages and /var/log/secure where Debian-family distributions use /var/log/syslog and /var/log/auth.log . Note that in newer Fedora (or RHEL/CentOS 7 if someone has gone out of their way to configure it this way), you may have no traditional syslog daemon running. In that case, the same data can be shown with journalctl (which defaults to producing text output in the syslog format). | {
"source": [
"https://unix.stackexchange.com/questions/88744",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/41783/"
]
} |
88,808 | I am aware of three methods to delete all entries from a file. They are >filename touch filename 1 filename < /dev/null Of these three I abuse >filename the most as that requires the least number of keystrokes. However, I would like to know which is the most efficient of the three (if there are any more efficient methods) with respect to large log files and small files. Also, how does the three codes operate and delete the contents? 1 Edit : as discussed in this answer , this actually does not clear the file! | Actually, the second form touch filename doesn't delete anything from the file - it only creates an empty file if one did not exist, or updates the last-modified date of an existing file. And the third filename < /dev/null tries to run filename with /dev/null as input. cp /dev/null filename works. As for efficient, the most efficient would be truncate -s 0 filename (see here ). Otherwise, cp /dev/null filename or > filename are both fine. They both open and then close the file, using the truncate-on-open setting. cp also opens /dev/null , so that makes it marginally slower. On the other hand, truncate would likely be slower than > filename when run from a script since running the truncate command requires the system to open the executable, load it, and then run it. | {
"source": [
"https://unix.stackexchange.com/questions/88808",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/43762/"
]
} |
88,824 | My basic understanding of a symlink is that it's a special file, a file that contains a string path to another file. The kernel's VFS abstracts a lot of that away but is there any reason why symlinks seem to be impossible to edit? In other words: Can I edit a symlink? If not, why not? I do understand that there are various ways of replacing symlinks (two alternatives are currently in the answers section) but it would be interesting to get an explanation on why replacement seems to be the only way to deal with symlinks. Why can't you just change where they point? | Given that -f just does a silent replacement, you can do an atomic replacement with mv -T (-T makes it work even if /loc.../link is a directory) : ln -s /location/to/link linkname
# ...
ln -s /location/to/link2 newlink
mv -T newlink linkname linkname is accessible throughout the process. | {
"source": [
"https://unix.stackexchange.com/questions/88824",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/880/"
]
} |
88,850 | I am trying to understand how the logical operator precedence works in bash. For example, I would have expected, that the following command does not echo anything. true || echo aaa && echo bbb However, contrary to my expectation, bbb gets printed. Can somebody please explain, how can I make sense of compounded && and || operators in bash? | In many computer languages, operators with the same precedence are left-associative . That is, in the absence of grouping structures, leftmost operations are executed first. Bash is no exception to this rule. This is important because in Bash and other shells && and || have the same precedence. This is different from most other programming languages which usually give && a higher precedence than || . So what happens in your example is that the leftmost operation ( || ) is carried out first: true || echo aaa Since true is obviously true, the || operator short-circuits and the whole statement is considered true without the need to evaluate echo aaa as you would expect. Now it remains to do the rightmost operation: (...) && echo bbb Since the first operation evaluated to true (i.e. had a 0 exit status), it's as if you're executing true && echo bbb so the && will not short-circuit, which is why you see bbb echoed. You would get the same behavior with false && echo aaa || echo bbb Notes based on the comments You should note that the left-associativity rule is only followed when both operators have the same precedence. This is not the case when you use these operators in conjunction with keywords such as [[...]] or ((...)) or use the -o and -a operators as arguments to the test or [ commands. In such cases, AND ( && or -a ) takes precedence over OR ( || or -o ). Thanks to Stephane Chazelas' comment for clarifying this point. It seems that in C and C-like languages && has higher precedence than || which is probably why you expected your original construct to behave like true || (echo aaa && echo bbb). This is not the case with Bash, however, in which both operators have the same precedence, which is why Bash parses your expression using the left-associativity rule. Thanks to Kevin's comment for bringing this up. There might also be cases where all 3 expressions are evaluated. If the first command returns a non-zero exit status, the || won't short circuit and goes on to execute the second command. If the second command returns with a zero exit status, then the && won't short-circuit as well and the third command will be executed. Thanks to Ignacio Vazquez-Abrams' comment for bringing this up. | {
"source": [
"https://unix.stackexchange.com/questions/88850",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/43007/"
]
} |
88,875 | Initially I thought it was a coincidence, but now I see there's even a tag for it: all hidden file names start with a dot. Is this a convention? Why was it chosen? Can it be changed? Or in other words (as a related question @evilsoup suggested that implies the answer to a bunch of others): can I hide files without renaming them (using . as the first character of their name)? | According to Wikipedia , The notion that filenames preceded by a . should be hidden is the result of a software bug in the early days of Unix. When the special . and .. directory entries were added to the filesystem, it was decided that the ls command should not display them. However, the program was mistakenly written to exclude any file whose name started with a . character, rather than the exact names . or .. . ...so it started off as a bug, and then it was embraced as a feature (for the record, . is a link to the current directory and .. is a link to the directory above it, but I'm sure you know that already). Since this method of hiding files actually is good enough most of the time, I suppose nobody ever bothered to implement Windows-style file hiding. There's also the fact that implementing different behaviour would produce an even greater amount of fragmentation to the *nix world, which is the last thing anyone wants. There is another method for hiding files that doesn't involve renaming them, but it only works for GUI file managers (and it's not universal amongst those -- the major Linux ones use it, but I don't think OSX's Finder does, and the more niche Linux file managers are less likely to support this behaviour): you can create a file called .hidden , and put the filenames you want to hide inside it, one per line. ls and shell globs won't respect this, but it might be useful to you, still. | {
"source": [
"https://unix.stackexchange.com/questions/88875",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/45907/"
]
} |
88,879 | Mostly I edit Ruby files, although shell script file comments are also # Currently my comments show as dark blue on black which is really hard to read. See screenshot. How can I change their color? I'm willing to consider different schemas for all colors though I do like the black background as a base. | There are many color schemes which are usually distributed together with vim. You can select them with the :color command. You can see the available color schemes in vim's colors folder, for example in my case: $ ls /usr/share/vim/vimNN/colors/ # where vimNN is vim version, e.g. vim74
blue.vim darkblue.vim default.vim delek.vim desert.vim elflord.vim
evening.vim koehler.vim morning.vim murphy.vim pablo.vim peachpuff.vim
README.txt ron.vim shine.vim slate.vim torte.vim zellner.vim I usually use desert . So I open vim , then enter :color desert and press Enter. To have the color scheme by default every time you open vim , add :color desert into your ~/.vimrc . (Michael, OP) This was good. | {
"source": [
"https://unix.stackexchange.com/questions/88879",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/10043/"
]
} |
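If only the comment colour needs changing rather than the whole scheme, the Comment highlight group can be overridden directly. A minimal sketch for ~/.vimrc (the colour values are just examples); the override has to come after the colorscheme line, because loading a scheme resets highlight groups:
colorscheme desert
highlight Comment ctermfg=LightGreen guifg=#80c080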
88,934 | Does anyone know of a command that reports whether a system is Big Endian or Little Endian, or is the best option a technique like this using Perl or a string of commands? Perl # little
$ perl -MConfig -e 'print "$Config{byteorder}\n";'
12345678
# big
$ perl -MConfig -e 'print "$Config{byteorder}\n";'
87654321 od | awk # little
$ echo -n I | od -to2 | awk 'FNR==1{ print substr($2,6,1)}'
1
# big
$ echo -n I | od -to2 | awk 'FNR==1{ print substr($2,6,1)}'
0 References Perl Config documentation - byteorder | lscpu The lscpu command shows (among other things): Byte Order: Little Endian Systems this is known to work on CentOS 6 Ubuntu (12.04, 12.10, 13.04, 13.10, 14.04) Fedora (17,18,19) ArchLinux 2012+ Linux Mint Debian (therefore assuming Debian testing as well). Systems this is known to not work on Fedora 14 CentOS 5 (assuming RHEL5 because of this) Why the apparent differences across distros? After much digging I found out why. It looks like version util-linux version 2.19 was the first version that included the feature where lscpu shows you the output reporting your system's Endianness. As a test I compiled both version 2.18 and 2.19 on my Fedora 14 system and the output below shows the differences: util-linux 2.18 $ util-linux-ng-2.18/sys-utils/lscpu
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
CPU(s): 4
Thread(s) per core: 2
Core(s) per socket: 2
CPU socket(s): 1
NUMA node(s): 1
Vendor ID: GenuineIntel
CPU family: 6
Model: 37
Stepping: 5
CPU MHz: 1199.000
Virtualization: VT-x
L1d cache: 32K
L1i cache: 32K
L2 cache: 256K
L3 cache: 3072K
NUMA node0 CPU(s): 0-3 util-linux 2.19 $ util-linux-2.19/sys-utils/lscpu
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
CPU(s): 4
On-line CPU(s) list: 0-3
Thread(s) per core: 2
Core(s) per socket: 2
CPU socket(s): 1
NUMA node(s): 1
Vendor ID: GenuineIntel
CPU family: 6
Model: 37
Stepping: 5
CPU MHz: 2667.000
BogoMIPS: 5320.02
Virtualization: VT-x
L1d cache: 32K
L1i cache: 32K
L2 cache: 256K
L3 cache: 3072K
NUMA node0 CPU(s): 0-3 The above versions were downloaded from the kernel.org website . util-linux-ng-2.18.tar.bz2 util-linux-2.19.tar.gz | {
"source": [
"https://unix.stackexchange.com/questions/88934",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/7453/"
]
} |
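Where a recent util-linux is installed, the whole check reduces to one line; on older systems the printf/od approach shown in the question remains the portable fallback:
$ lscpu | grep -i 'byte order'
Byte Order:            Little Endian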
89,003 | Warning: Running this command in most shells will result in a broken system that will need a forced shutdown to fix I understand the recursive function :(){ :|: & };: and what it does. But I don't know where is the fork system call. I'm not sure, but I suspect in the pipe | . | As a result of the pipe in x | y , a subshell is created to contain the pipeline as part of the foreground process group. This continues to create subshells (via fork() ) indefinitely, thus creating a fork bomb. $ for (( i=0; i<3; i++ )); do
> echo "$BASHPID"
> done
16907
16907
16907
$ for (( i=0; i<3; i++ )); do
> echo "$BASHPID" | cat
> done
17195
17197
17199 The fork does not actually occur until the code is run, however, which is the final invocation of : in your code. To disassemble how the fork bomb works: :() - define a new function called : { :|: & } - a function definition that recursively pipes the calling function into another instance of the calling function in the background : - call the fork bomb function This tends to not be too memory intensive, but it will suck up PIDs and consume CPU cycles. | {
"source": [
"https://unix.stackexchange.com/questions/89003",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/46091/"
]
} |
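Because the bomb works by exhausting process slots, the usual defence is a per-user process limit; a sketch (the numbers are arbitrary examples):
$ ulimit -u 200                      # limit this shell and its children to 200 processes
# or persistently, in /etc/security/limits.conf:
# someuser   hard   nproc   200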
89,048 | I have command foo , how can I know if it's binary, a function or alias? | If you're on Bash (or another Bourne-like shell), you can use type . type command will tell you whether command is a shell built-in, alias (and if so, aliased to what), function (and if so it will list the function body) or stored in a file (and if so, the path to the file). Note that you can have nested cases, such as an alias to a function. If so, to find the actual type, you need to unalias first: unalias command; type command For more information on a "binary" file, you can do file "$(type -P command)" 2>/dev/null This will return nothing if command is an alias, function or shell built-in but returns more information if it's a script or a compiled binary. References Why not use "which"? What to use then? | {
"source": [
"https://unix.stackexchange.com/questions/89048",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/1806/"
]
} |
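A few concrete invocations of the commands mentioned above, as run in bash (the file output will of course vary per system):
$ type cd
cd is a shell builtin
$ type -t grep
file
$ file "$(type -P grep)"
/bin/grep: ELF 64-bit LSB executable ...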
89,052 | A few months ago, Samsung announced the Ativ Book 9 Plus , a pretty cool ultrabook with a screen resolution of 3200 x 1800 pixels (QHD+). The device ships with Windows 8 until Windows 8.1 is released and Samsung declared that only Windows 8.1 will be able to deal with this ultra high resolution. Now I ask myself if any Linux distribution is able to deal with such a high resolution. Especially font rendering is a point to regard. According to some early reviews of the Ativ Book 9 Plus, Windows 8 is not able to render fonts properly so that you can read text without having to put the screen just in front of your nose. That's why they say Windows 8.1 will be able to do better. But what's with Linux? Can Linux deal better with this ultra high resolution? Maybe anybody has some experience regarding other ultrabooks with comparable resolutions. | The Gnome / Wayland / X developers are working on this. As with OS X and Windows, the solution will probably involve decoupling applications' idea of a "pixel" from physical pixels. This is kind of silly, but solves the problem for software that makes assumptions about DPI and the relative size of a pixel. There's an update on this from Gnome developer Alexander Larsson here: HiDPI support in Gnome . | {
"source": [
"https://unix.stackexchange.com/questions/89052",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/46283/"
]
} |
89,211 | I recently learned that (at least on Fedora and Red Hat Enterprise Linux), executable programs that are compiled as Position Independent Executables (PIE) receive stronger address space randomization (ASLR) protection. So: How do I test whether a particular executable was compiled as a Position Independent Executable, on Linux? | You can use the perl script contained in the hardening-check package, available in Fedora and Debian (as hardening-includes ). Read this Debian wiki page for details on what compile flags are checked. It's Debian specific, but the theory applies to Red Hat as well. Example: $ hardening-check $(which sshd)
/usr/sbin/sshd:
Position Independent Executable: yes
Stack protected: yes
Fortify Source functions: yes (some protected functions found)
Read-only relocations: yes
Immediate binding: yes | {
"source": [
"https://unix.stackexchange.com/questions/89211",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/9812/"
]
} |
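Without the hardening-check script, the same information can be read straight from the ELF header: a position-independent executable is linked as a shared object, so readelf reports type DYN rather than EXEC. A sketch (the second path stands for a hypothetical non-PIE binary):
$ readelf -h /usr/sbin/sshd | grep 'Type:'
  Type:                              DYN (Shared object file)
$ readelf -h /path/to/non-pie-binary | grep 'Type:'
  Type:                              EXEC (Executable file)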
89,213 | I have the following free spaces on 2 disks: SSD - 240G (sda) non-SSD - 240G (sdb) I understand that I should use SSD to install packages and non-SSD just for storing data.
What's the best partitioning schema (including swap) in my case? When I tried automatic partitioning, it installs only on one disk and dedicates 8G for swap.
Windows has been installed on non-SSD drive. | On a hybrid solid-state and spinning disk system (like the one I'm typing this), you have two to three aims: Speed up your system: as much commonly used data as possible stays on the SSD. Keep volatile data off the SSD to reduce wear. Optional: have some level of redundancy by using an md(4) (‘software RAID’) setup across the SSD and HDD(s). If you're just meeting the first two goals, it's a simple task of coming up with a scheme somewhat like this (depending on which of these filesystems you use): Solid state: / (root filesystem), /usr , /usr/local , /opt Spinning disk: /var , /home , /tmp , swap Since you have two disks, though, you can read the Multi HDD/SSD article on the Debian wiki. It'll walk you through setting up md(4) devices with your SSD as a ‘mostly-read’ device (fast reads, fewer writes), your HDD as a ‘mostly-write’ device (no-wear writes, fewer reads). The filesystems that would normally go on the SSD alone can now go on this md device. The kernel will read mostly from the SSD (with occasional, brief forays into the HDD to increase read throughput even more). It'll write to the HDD, but handle SSD writes with care to avoid wearing out the device. You get the best of both worlds (almost), and you don't have to worry about SSD wear rendering your data useless. My laptop is running on a similar layout where / , /usr and /usr/local are on a RAID-1 device across a 64 GB SSD and a 64 GB partition on the 1TB HDD, and the rest of the filesystems are on the rest of the HDD. The rest of the HDD is one of two members of a RAID-1 setup, with one disk usually missing. When I'm at home, I plug in the second disk and let the md device synchronise. It's an added level of redundancy and an extra 1–7 day backup¹). You should also have a look at the basic SSD optimisation guide for Debian (and friends). Oh, and it's not guaranteed you'll be able to do this all via the installer. You may have to boot a rescue disk prior to installation, prepare (at least) the md(4) devices (I do the LVM PVs, VGs and LVs too because it's easier on the CLI), then boot the installer and just point out the volumes to it. ¹ RAID ≠ backup policy. I also have proper backups. | {
"source": [
"https://unix.stackexchange.com/questions/89213",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/38353/"
]
} |
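As a small companion to the layout above, filesystems that end up on the SSD are commonly mounted with noatime; a sketch of the relevant /etc/fstab lines (device names are examples only):
/dev/sda2   /       ext4   defaults,noatime   0  1    # on the SSD
/dev/sdb1   /home   ext4   defaults           0  2    # on the HDD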
89,236 | I want to search a word from my current cursor position in vim to upward in file. How to achieve this? Also how to do same for downward in file. | To search in reverse from your cursor for a word, just use ? . So to find the word "fred" you would issue ?fred . For forward searching you use / , using "fred" as an example again you would issue /fred . If you want to continue searching for the same term, in the same direction you can use the n command. (Or you can issue ? or / without arguments). | {
"source": [
"https://unix.stackexchange.com/questions/89236",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/33340/"
]
} |
89,264 | When uploading to an ftp site, the original file create date seems to be lost, and I get the upload date instead. However, the Exif data in the file is correct. Is there a tool to batch change the created date from the Exif date? | The EXIF handling tool exiv2 has a builtin option for this: exiv2 -T rename image.jpg sets the time of last file modification, mtime , to the date stored in the EXIF metadata. You asked for using the create time - but that is not used in Unix-like systems, and there are good reasons for that . I'm pretty sure the time you call create time is actually mtime , no problem there. From man exiv2 : NAME
exiv2 - Image metadata manipulation tool
SYNOPSIS
exiv2 [options] [action] file ...
DESCRIPTION
exiv2 is a program to read and write Exif, IPTC and XMP image metadata and image com‐
ments. The following image formats are supported:
[ ... ]
mv | rename
Rename files and/or set file timestamps according to the Exif create time‐
stamp. Uses the value of tag Exif.Photo.DateTimeOriginal or, if not
present, Exif.Image.DateTime to determine the timestamp. The filename for‐
mat can be set with -r fmt, timestamp options are -t and -T.
[ ... ]
-T Only set the file timestamp according to the Exif create timestamp, do not
rename the file (overrides -k). This option is only used with the 'rename'
action. Note: On Windows you may have to set the TZ environment variable for
this option to work correctly. See option -t to do the opposite. | {
"source": [
"https://unix.stackexchange.com/questions/89264",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/46398/"
]
} |
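To run this over a whole directory tree, the same command can be driven by find; a sketch (assumes exiv2 is installed and the images actually carry EXIF timestamps):
find /path/to/photos -type f -iname '*.jpg' -exec exiv2 -T rename {} +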
89,296 | I've just installed kernel-3.11.0-1.fc20 for my Fedora 19 installation. During the rebooting progress, I saw the Linux logo with a Windows flag in it, what does it mean? The Fedora 19 is installed in an ASUS TX300CA notebook, secure boot is off, CSM (BIOS Compatibility Support Module) mode is on. | A couple of years ago, Linus Torvalds was discussing Linux version
numbers and said , "I think I will call it 3.11 Linux for Workgroups." It turns out he wasn't joking. With a release candidate of Linux 3.11
now available, Torvalds has actually named the new version of the
kernel " Linux for Workgroups ." He even gave it a Windows-themed boot
icon featuring Linux's mascot penguin, Tux, holding a flag emblazoned
with an old Windows logo. The name "Linux for Workgroups" follows such
whimsical past Linux version names as "Pink Farting Weasel," "Killer
Bat of Doom," "Erotic Pickled Herring," and "Jeff Thinks I Should
Change This, But To What?" From the news: 20 years after Windows 3.11, Linus unveils “Linux for Workgroups” | {
"source": [
"https://unix.stackexchange.com/questions/89296",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/7332/"
]
} |
89,316 | I have a process I would like to kill: computer@ubuntu:~$ ps aux | grep socat
root 2092 0.0 0.0 5564 1528 pts/1 T 14:37 0:00 sudo socat TCP:xxx.17.29.152:54321 PTY,link=/dev/ttyGPS0,raw,echo=0,mode=666
computer@ubuntu:~$ kill 2092
-bash: kill: (2092) - Operation not permitted <--------------- How to kill ?? | Try the kill command with the -9 signal if sudo kill 'pid' does not work: sudo kill -9 2092 | {
"source": [
"https://unix.stackexchange.com/questions/89316",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/41742/"
]
} |
89,339 | Is there any tool/command in Linux that I can use to run a command in more than one tab simultaneously? I want to run the same command: ./myprog argument1 argument2 simultaneously in more than one shell to check if the mutexes are working fine in a threaded program. I want to be able to increase the number of instances of this program so as to put my code under stress later on. I am kind of looking for something like what wall does. I can think of using tty's, but that just seems like a lot of pain if I have to scale this to many more shells. | As mavillan already suggested, just use terminator . It allows to display many terminals in a tiled way. When enabling the broadcasting feature by clicking on the grid icon (top-left) and choosing "Broadcast All", you can enter the very same command simultaneously on each terminal. Here is an example with the date command broadcasted to a grid of 32 terminals. | {
"source": [
"https://unix.stackexchange.com/questions/89339",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/21728/"
]
} |
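If the goal is only to stress the program rather than to watch each instance interactively, a plain shell loop gives the same effect without any terminal multiplexer (N and the arguments are placeholders):
N=20
for i in $(seq 1 "$N"); do
    ./myprog argument1 argument2 &
done
wait    # block until all background instances have finished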
89,386 | I have a CRONTAB entry as below. Can someone tell me what the below statement is exactly doing? 1 0 * * * /vol01/sites/provisioning/MNMS/45627/45627.sh1 >> /vol01/sites/provisioning/MNMS/45627/output/cron.log 2>&1 | > redirects output to a file, overwriting the file. >> redirects output to a file appending the redirected output at the end. Standard output is represented in bash with number 1 and standard error is represented with number 2 . They are separate, so the user can redirect them to different files. 2>&1 redirects the standard error to the standard output so they appear together and can be jointly redirected to a file. (Writing just 2>1 would redirect the standard error to a file called "1", not to standard output.) In your case, you have a job whose output (both standard and error) is appended at the end of a log file ( cron.log ) for later use. For additional info, check the bash manual (section "Redirection"), this question , and this question . | {
"source": [
"https://unix.stackexchange.com/questions/89386",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/46447/"
]
} |
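A short demonstration of why the order of the redirections matters, runnable in any Bourne-like shell (file names are arbitrary):
ls /nonexistent >> cron.log 2>&1    # both stdout and stderr end up in cron.log
ls /nonexistent 2>&1 >> cron.log    # the error still reaches the terminal, because
                                    # 2>&1 is applied before stdout is redirected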
89,483 | I'm running a Node.js server off of a Raspbian (Debian) machine, and I'd like to start and stop the server remotely. This for me means using PuTTY to access the shell, except when I close out of the PuTTY terminal or it times out, my server goes down with it, because I just execute my server in the foreground. Is there a way to keep it going, but still have a way to kill the process afterwards? | Your question was a little lacking in details, so I'm assuming that you mean that you typed the command to start your server on the console of your Pi, and it executed in the foreground. If this is the case, you have five options, ordered by complexity to implement: Use @f-tussel's answer . Since you're new to GNU/Linux, the & symbol tells the shell that it should execute the process in the background and return you to the prompt immediately, instead of what it normally does (which is wait for the command to finish before returning you to the prompt). This is technically called forking the command to the background. Do what you did before, but do it in a screen process. Basically this entails installing screen ( sudo apt-get install screen on your Debian system), and then sometime before you type the command to start your server, you execute screen . This opens a new shell that you can then reconnect to later, even if your PuTTY connection dies. So it will act as if you've never disconnected. If you're unfamiliar with screen , you may want to do some reading on Wikipedia and in the manpages . You can also accomplish this same thing with tmux . Use the forever node.js module. See https://stackoverflow.com/questions/4797050/how-to-run-process-as-background-and-never-die for where I got this. Put your server in a screen process in the background. This means that you'll create a new screen session in the background but never attach to it. And, instead of running a shell, the screen process will be running your server. Here's what you'll type: screen -d -m exec_your_server --put-args-here If you like, you can make this run at boot. Basically you need to put the screen command in the file /etc/rc.local or /etc/rc.d/rc.local , I forget which. If you run into trouble doing this, ask a new question. Again, you can do this with tmux too. Write a service script. Since you're on Debian and are new, you're presumably using the default init that Debian provides, which is System V Init. I've never looked at service files for System V Init, only systemd and a little Upstart, so I can't help you here. Ask a new question if you want to pursue this. This is the least "hacky" way, IMHO, and this is what you should consider doing if you're running your server long-term, as you can then manage it like other services on the system through commands like sudo service your_server stop , etc. Doing it this way will start your server at boot automatically, and you don't need screen because it also automatically happens in the background. It also automatically executes as root, which is dangerous - you should put logic in your server to drop the privileges that you have by becoming an unprivileged user that you have created specifically for the server. (This is in case the server gets compromised - imagine if someone could run things as root, through your server! Eugh. This question does an OK job of talking about this.) | {
"source": [
"https://unix.stackexchange.com/questions/89483",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/46539/"
]
} |
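For options 2 and 4 above, the whole thing can be started detached in one line; the session name and script path are examples:
screen -dmS nodeserver node /home/pi/server.js    # start detached in a named session
screen -r nodeserver                              # reattach later from any SSH login
screen -S nodeserver -X quit                      # stop the session (and the server)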
89,571 | I have a text status bar on a tiling window manager and I am using tcl to feed information to it. At the moment I need a command line that output the volume level 0% to 100%. I am using Arch Linux. | A one-liner to parse amixer 's output for volume in a status bar: awk -F"[][]" '/dB/ { print $2 }' <(amixer sget Master) Edit: As of November 2020, the updated amixer for Arch Linux is 1.2.4 which has no 'dB' in the output. So, the command should replaced by: awk -F"[][]" '/Left:/ { print $2 }' <(amixer sget Master) | {
"source": [
"https://unix.stackexchange.com/questions/89571",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/45840/"
]
} |
89,640 | I want to extract some lines with awk . Is it possible to do the following task: ls -l | awk 'BEGIN FOR(i=122;i<=129;i++) FNR==i' How can I display the details from line numbers 122 to 129? | You have not understood how awk works. The "program" specified is always executed once for each line (or "record" in awk parlance) of input, there's no need for FOR or any similar construct. Just use: verbose method ls -l | awk 'NR>=122 && NR<=129 { print }' more compact method ls -l | awk 'NR==122,NR==129' This one gives a range for NR , the record number, which is typically the current line awk is processing. | {
"source": [
"https://unix.stackexchange.com/questions/89640",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/46614/"
]
} |
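For comparison, the same range can be printed with sed, which can also stop reading the input once the range has passed:
ls -l | sed -n '122,129p'
ls -l | sed -n '122,129p;129q'    # quit after line 129 instead of reading the rest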
89,644 | My aim is to allow read access to folder /var/www/mysite/ only for users in group www-data using a default ACL . This works for a regular ACL, but not for a default ACL. Why? This is how I did it: I am logged on as user www-data who is in group www-data . I am in directory /var/www . I created a directory mysite and gave it the permission 0. Then I added ACL permissions so that anyone in group www-data has read-access to directory mysite/ . $ mkdir mysite
$ chmod 0 mysite
$ setfacl -m g:www-data:r-x mysite
$ ls -la
d---------+ 2 root root 4096 Sep 6 11:16 mysite
$ getfacl mysite/
# file: mysite/
# owner: root
# group: root
user::---
group::---
group:www-data:r-x
mask::r-x
other::--- At this point user www-data has access to the folder. However, if I instead add a default ACL, access is denied! $ setfacl -m d:g:www-data:r-x mysite # <---- NOTE the default acl rule.
$ ls -la
d---------+ 2 root root 4096 Sep 6 11:16 mysite
$ getfacl mysite/
# file: mysite/
# owner: root
# group: root
user::---
group::---
other::---
default:user::---
default:group::---
default:group:www-data:r-x
default:mask::r-x
default:other::--- | A default ACL does not grant any access by itself. Default ACL entries on a directory are only a template: they are copied into the access ACL of files and subdirectories created inside that directory later on. They never affect access to the directory itself, which is governed solely by its own (access) ACL and permission bits. In your second example only the default ACL was set, so the directory's access ACL still has no entry for group www-data - note that getfacl shows group:www-data:r-x only under the default: entries - and therefore access is denied. To give the group access to the directory and have newly created content inherit it, set both the access ACL and the default ACL: setfacl -m g:www-data:r-x -m d:g:www-data:r-x mysite | {
"source": [
"https://unix.stackexchange.com/questions/89644",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/45856/"
]
} |
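A minimal sketch of giving the group access to the directory itself while also letting newly created content inherit it (run with sufficient privileges; paths as in the question):
setfacl -m g:www-data:r-x -m d:g:www-data:r-x mysite   # access ACL + default ACL
getfacl mysite            # now shows group:www-data:r-x both as an access and as a default entry
touch mysite/new-file     # as a user allowed to write there
getfacl mysite/new-file   # the new file carries a group:www-data entry inherited from the default ACL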
89,678 | It's been months since I've updated my Gentoo system. And, as you can imagine, this means there's a lot of packages (and USE changes) I need to go over. My system is "amd64" (multilib), but I have a lot of manually keyworded packages from "~amd64". Anyway, in this update, I keep seeing "ABI_X86" USE flags. What is this? This is new. There's nothing in "eselect news list" about it. I found this topic: http://forums.gentoo.org/viewtopic-t-953900-start-0.html . That seemed to show how to use it, but, are there any "real" docs for this? What does it do? What am I supposed to set "ABI_X86" to? I have a multilib system. I assume I want "64", but then what are "32" and "x32"? I'm confused at what I need to do here. Emerge is yelling a lot about slot conflicts, and they seem to be related to "ABI_X86" (I forget the errors exactly, but I remember one was zlib). So, is there any "official" docs about what ABI_X86 is and how to use it? From the thread I linked, I found this page: http://kicherer.org/joomla/index.php/en/blog/liste/29-transition-of-emul-packages-to-true-multilib , but I want to know what I'm doing before I go keyword a bunch of stuff and edit my make.conf . P.S. I have most of the "app-emulation/emul-linux-x86" packages (the ones I seemed to need at the time) in my "package.keywords" file. | I must disclose that I have little experience using multilib-build.eclass -style multilib in Gentoo. ABI_X86 is a USE_EXPAND variable; setting ABI_X86="32 64" or USE="abi_x86_32 abi_x86_64" are equivalent. The default setting of ABI_X86, as of this writing (2013-09-09), for the default/linux/amd64/13.0 profile seems to be just ABI_X86=64 . This variable controls explicit multilib support in ebuilds which use multilib-build.eclass which is a more Gentoo-like way of doing multilib than the original method. The original method that 32-bit libraries would be installed in Gentoo is via the binary snapshots named app-emulation/emul-linux-* . Each of these emulation binary packages contains a whole set of 32-bit libraries compiled by some Gentoo dev for you. Because each one installs a bundle of libraries which must be coordinated together, tracking dependencies of 32-bit-only ebuilds is harder. E.g., if you need a 32-bit media-libs/alsa-lib on a 32-bit system, you just install media-libs/alsa-lib , but on a 64-bit multilib system, you have to discover that app-emulation/emul-linux-soundlibs installs, among other libraries, the 32-bit version of media-libs/alsa-lib . Also, the Gentoo dev building one such binary package must do the work of figuring out the multilib and buildsystem quirks of each of the included libraries in the snapshot package, making maintenance harder. And, most importantly, providing binary packages as the only option official option for using multilib in Gentoo goes against the the spirit of Gentoo. You should have the right to compile everything yourself! The multilib-build.eclass moves away from this behavior by helping individual ebuilds install both 32-bit and 64-bit versions. This should allow, for example, wine to only need to specify dependencies directly against the packages it needs instead of needing to pull in app-emulation/emul-linux-* packages. 
As ssuominen mentions in the forum thread you referenced : =app-emulation/emul-linux-x86-xlibs-20130224-r1 which is empty package that doesn't have files because the files come now directly from the x11-libs/ (Note that -r1 has since been renamed to -r2 ) Eventually, app-emulation/emul-linux-x86-xlibs itself should be able to be dropped as 32-bit-only packages appropriately depend directly on the correct packages in x11-libs that, with multilib-build.eclass ’s help, provide the needed 32-bit libs. This is where ABI_X86 comes into play. Any multilib-build.eclass -enabled package gains at least the new USE-flags abi_x86_32 and abi_x86_64 and probably abi_x86_x32 . Using EAPI=2 -style USE dependencies , packages can depend on 32-bit versions of other packages. If x11-libs/libX11 is emerged while ABI_X86="32 64" , then it shall be installed with USE-flags abi_x86_32 and abi_x86_64 USE-flags set. If a particular graphical package needs a 32-bit version of libX11 , it can specify x11-libs/libX11[abi_x86_32] in its dependencies. This way, if you try to emerge this graphical package and libX11 has not installed 32-bit libs, portage will refuse. multilib-build.eclass is also universal and is compatible with 32-bit systems: installing this same graphical package on a 32-bit system will always work because it is impossible to install libX11 without its abi_x86_32 useflag being set. This solves the problem of needing to depend on app-emulation/emul-linux-x86-xlibs when on a multilib system and directly on x11-libs/libX11 on a 32-bit-only system. We are paving the way to a having cleaner and sensible inter-package dependencies on multilib systems. =app-emulation/emul-linux-x86-xlibs-20130224-r2 exists as an intermediary which enables any old packages which used to depend on app-emulation/emul-linux-x86-xlibs which don’t know how to depend directly on, for example, x11-libs/libX11[abi_x86_32] , to still work. =app-emulation/emul-linux-x86-xlibs-20130224-r2 makes sure sure that the same 32-bit libraries exist in /usr/lib32 as if =app-emulation/emul-linux-x86-xlibs-20130224 had been installed, but does it the Gentoo way by building these 32-bit libraries through its dependencies rather than providing a binary package. It behaves much like packages in the virtual category this way: it doesn’t install anything, just "forwards" dependencies for existing ebuilds. We have seen how multilib-build.eclass paves the way for cleaner dependencies on multilib systems. Any package which has ABI_X86 options (same thing as saying it has abi_x86_* useflags) has installed a 32-bit version of itself if you have specified USE=abi_x86_32 / ABI_X86=32 . How does this work (at a high conceptual level)? You can read the ebuild itself. Basically, the idea is the same as python or ruby ebuilds which have the option to install themselves for multiple versions of python and ruby simultaneously. When an ebuild inherits multilib-build.eclass , it loops over ABI_X86 and does each step of the unpacking, compilation, and installation process for each entry in ABI_X86. Since portage goes through all of the ebuild phases like src_unpack() , src_compile() , and src_install() (and others) in order and only once, the multilib-build.eclass (currently, with the help of the multibuild.eclass ) uses creates a directory for each different value of ABI_X86. It will unpack a copy of the sources to each of these directories. From there, each of these directories starts to diverge as each targets a particular ABI. 
The directory for ABI_X86=32 will have ./configure --libdir=/usr/lib32 run with FLAGS targeting 32-bit (e.g., CFLAGS=-m32 comes from the multilib profile’s CFLAGS_x86 envvar (note: portage profiles mostly refer to ABI_X86=32 as ABI=x86 and ABI_X86=64 as ABI=amd64)). During the src_install() phase, all of the different compiled ABIs are installed over eachother so that when any files have both 32-bit and 64-bit versions, the native ABI wins (e.g., an ebuild installing both libraries and an executable in PATH would install only a 64-bit executable into PATH but include both 32-bit and 64-bit libraries). To sum: when you set ABI_X86="32 64" in make.conf , any package which supports multilib-build.eclass will take roughly twice the amount of work (I’m not saying time ;-)) to compile as it is being built once for each ABI and results in 32-bit libraries in /usr/lib32 . I do not know if there is official documentation for ABI_X86 yet or its detailed status. Ebuilds using multilib-build.eclass seem to be mostly unstable for now. You can follow the instructions at the blog you linked to to start experiencing and testing ABI_X86 if you understand the distinction between app-emulation/emul-linux-x86-xlibs-20130224 and the new-style multilib app-emulation/emul-linux-x86-xlibs-20130224-r2 . But, if you are OK with the old-style binary package, I think that app-emulation/emul-linux-x86-xlibs-20130224 should remain functional. You would only need to move to -r2 if you use any package which directly depends on another package’s abi_x86_32 useflag (for example, app-emulation/emul-linux-x86-xlibs-20130224 and x1-libs/libX11[abi_x86_32] cannot coexist because they probably both install the same library to /usr/lib32 , namely /usr/lib32/libX11.so.6 ). A quick look at wine-1.7.0.ebuild suggests to me that it doesn’t need -r2 . | {
"source": [
"https://unix.stackexchange.com/questions/89678",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/13384/"
]
} |
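For completeness, a sketch of what enabling this looks like on a multilib Gentoo system (package names are examples; the exact flags depend on the ebuilds in your tree):
# /etc/portage/make.conf
ABI_X86="32 64"
# or per package, in /etc/portage/package.use:
x11-libs/libX11 abi_x86_32
# then rebuild the affected packages
emerge -avuDN @world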
89,712 | Is this the right way to do float to integer conversion in bash? Is there any other method? flotToint() {
printf "%.0f\n" "$@"
} | bash In bash , that's probably as good as it gets. That uses a shell builtin. If you need the result in a variable, you could use command substitution, or the bash specific (though now also supported by zsh ): printf -v int %.0f "$float" You could do: float=1.23
int=${float%.*} But that would remove the fractional part instead of giving you the nearest integer and that wouldn't work for values of $float like 1.2e9 or .12 for instance. Also note the possible limitations due to the internal representation of floats: $ printf '%.0f\n' 1e50
100000000000000007629769841091887003294964970946560 You do get an integer, but chances are that you won't be able to use that integer anywhere. Also, as noted by @BinaryZebra, in several printf implementations (bash, ksh93, yash, not zsh, dash or older version of GNU printf ), it is affected by the locale (the decimal separator which can be . or , ). So, if your floats are always expressed with the period as the decimal separator and you want it to be treated as such by printf regardless of the locale of the user invoking your script, you'd need to fix the locale to C: LC_ALL=C printf '%.0f' "$float" With yash , you can also do: printf '%.0f' "$(($float))" (see below). POSIX printf "%.0f\n" 1.1 is not POSIX as %f is not required to be supported by POSIX. POSIXly, you can do: f2i() {
LC_ALL=C awk -- '
BEGIN {
for (i = 1; i < ARGC; i++)
printf "%.0f\n", ARGV[i]
}' "$@"
} Note that though literal numbers in the awk language syntax are always using the . as decimal radix character (as , is used to separate arguments so couldn't be used in numbers), when taking numbers on input like here from ARGV , some implementations honour the locale's decimal radix character. In the case of GNU awk , that's only when $POSIXLY_CORRECT is in the environment. zsh In zsh (which supports floating point arithmetic (decimal separator is always the period)), you have the rint() math function to give you the nearest integer as a float (like in C ) and int() to give you an integer from a float (like in awk ). So you can do: $ zmodload zsh/mathfunc
$ i=$(( int(rint(1.234e2)) ))
$ echo $i
123 Or: $ integer i=$(( rint(5.678e2) ))
$ echo $i
568 However note that while double s can represent very large numbers, integers are much more limited. $ printf '%.0f\n' 1e123
999999999999999977709969731404129670057984297594921577392083322662491290889839886077866558841507631684757522070951350501376
$ echo $((int(1e123)))
-9223372036854775808 ksh93 ksh93 was the first Bourne-like shell to support floating point arithmetic. ksh93 optimises command substitution by not using a pipe or forking when the commands are only builtin commands. So i=$(printf '%.0f' "$f") doesn't fork. Or even better: i=${ printf '%.0f' "$f"; } which doesn't fork either but also doesn't go all the trouble of creating a fake subshell environment. You can also do: i=$(( rint(f) )) But beware of: $ echo "$(( rint(1e18) ))"
1000000000000000000
$ echo "$(( rint(1e19) ))"
1e+19 You could also do: integer i=$(( rint(f) )) But like for zsh : $ integer i=1e18
$ echo "$i"
1000000000000000000
$ integer i=1e19
$ echo "$i"
-9223372036854775808 Beware that ksh93 floating point arithmetic honour the decimal separator setting in the locale (even though , is otherwise a math operator ( $((1,2)) would be 6/5 in a French/German... locale, and the same as $((1, 2)) , that is 2 in an English locale). yash yash also supports floating point arithmetic but doesn't have math functions like ksh93 / zsh 's rint() . You can convert a number to integer though by using the binary or operator for instance (also works in zsh but not in ksh93 ). Note however that it truncates the decimal part, it doesn't give you the nearest integer: $ echo "$(( 0.237e2 | 0 ))"
23
$ echo "$(( 1e19 | 0 ))"
-9223372036854775808 yash honours the locale's decimal separator on output, but not for the floating point literal constants in its arithmetic expressions, which can cause surprises: $ LC_ALL=fr_FR.UTF-8 ./yash -c 'a=$((1e-2)); echo $(($a + 1))'
./yash: arithmetic: `,' is not a valid number or operator It's good in a way in that you can use floating point constants in your scripts that use the period and not have to worry that it will stop working in other locales, but still be able to deal with the numbers as expressed by the user as long as you remember to do: var=$((10.3)) # and not var=10.3
... "$((a + 0.1))" # and not "$(($a + 0.1))".
printf '%.0f\n' "$((10.3))" # and not printf '%.0f\n' 10.3 | {
"source": [
"https://unix.stackexchange.com/questions/89712",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/28434/"
]
} |
89,714 | I have command line access to a Linux machine which may or may not be virtualized. I want to determine what kind of virtualization technology it runs on, if any (VMWare, VirtualBox, KVM, OpenVZ, Xen, ). This isn't a hostile environment: I'm not trying to work against a VM that is trying to disguise itself, I'm diagnosing a flaky server that I know little about. More precisely, I'm helping someone diagnose the issue, I'm not sitting at the helm. So I have to convey instructions like “copy-paste this command” and not “poke around /proc somewhere”. Ideally, it would be something like lshw : an easily-installable (if not preinstalled) command that does the poking around and prints out relevant information. What's the easiest way of determining what virtualization technology this system may be a guest of? I'd appreciate if proposals mentioned which technologies (including bare hardware) can be conclusively detected and which can be conclusively eliminated. I'm mostly interested in Linux, but if it also works for other unices that's nice. | dmidecode -s system-product-name I have tested on Vmware Workstation, VirtualBox, QEMU with KVM, standalone QEMU with Ubuntu as the guest OS. Others have added additional platforms that they're familiar with as well. Virtualization technologies VMware Workstation root@router:~# dmidecode -s system-product-name
VMware Virtual Platform VirtualBox root@router:~# dmidecode -s system-product-name
VirtualBox Qemu with KVM root@router:~# dmidecode -s system-product-name
KVM Qemu (emulated) root@router:~# dmidecode -s system-product-name
Bochs Microsoft VirtualPC root@router:~# dmidecode | egrep -i 'manufacturer|product'
Manufacturer: Microsoft Corporation
Product Name: Virtual Machine Virtuozzo root@router:~# dmidecode
/dev/mem: Permission denied Xen root@router:~# dmidecode | grep -i domU
Product Name: HVM domU On bare metal, this returns an identification of the computer or motherboard model. /dev/disk/by-id If you don't have the rights to run dmidecode then you can use: Virtualization Technology: QEMU ls -1 /dev/disk/by-id/ Output [root@host-7-129 ~]# ls -1 /dev/disk/by-id/
ata-QEMU_DVD-ROM_QM00003
ata-QEMU_HARDDISK_QM00001
ata-QEMU_HARDDISK_QM00001-part1
ata-QEMU_HARDDISK_QM00002
ata-QEMU_HARDDISK_QM00002-part1
scsi-SATA_QEMU_HARDDISK_QM00001
scsi-SATA_QEMU_HARDDISK_QM00001-part1
scsi-SATA_QEMU_HARDDISK_QM00002
scsi-SATA_QEMU_HARDDISK_QM00002-part1 References How to detect virtualization at dmo.ca | {
"source": [
"https://unix.stackexchange.com/questions/89714",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/885/"
]
} |
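Two other quick checks that do not need root, on systems where they are available (the outputs shown are examples):
$ systemd-detect-virt                  # on systemd-based distributions
kvm
$ cat /sys/class/dmi/id/product_name   # the same DMI string dmidecode reads
VirtualBox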
89,812 | Gilles wrote : character 27 = 033 = 0x1b = ^[ = \e Demizey wrote : ^[ is just a representation of ESCAPE and \e is interpreted as an actual ESCAPE character Then I also found this line from a TechRepublic article Make sure you write the key sequence as \e[24~ rather than ^[[24~. This is because the ^[ sequence is equivalent to the [Esc] key, which is represented by \e in the shell. So, for instance, if the key sequence was ^[[OP the resulting bind code to use would be \e[OP. But I have been using mappings that use ^[ instead of \e. So are they interchangeable? When do I need use one instead of the other? | If you take a look at the ANSI ASCII standard , the lower part of the character set (the first 32) are reserved "control characters" (sometimes referred to as "escape sequences"). These are things like the NUL character, Life Feed, Carriage Return, Tab, Bell, etc. The vast majority can be emulated by pressing the Ctrl key in combination with another key. The 27th (decimal) or \033 octal sequence, or 0x1b hex sequence is the Escape sequence. They are all representations of the same control sequence. Different shells, languages and tools refer to this sequence in different ways. Its Ctrl sequence is Ctrl - [ , hence sometimes being represented as ^[ , ^ being a short hand for Ctrl . You can enter control character sequences as a raw sequences on your command line by proceeding them with Ctrl - v . Ctrl - v to most shells and programs stops the interpretation of the following key sequence and instead inserts in its raw form. If you do this with either the Escape key or Ctrl - v it will display on most shells as ^[ . However although this sequence will get interpreted, it will not cut and paste easily, and may get reduced to a non control character sequence when encountered by certain protocols or programs. To get around this to make it easier to use, certain utilities represent the "raw" sequence either with \033 (by octal reference), hex reference \x1b or by special character reference \e . This is much the same in the way that \t is interpreted as a Tab - which by the way can also be input via Ctrl - i , or \n as newline or the Enter key, which can also be input via Ctrl - m . So when Gilles says: 27 = 033 = 0x1b = ^[ = \e He is saying decimal ASCII 27, octal 33, hex 1b, Ctrl - [ and \e are
all equal he means they all refer to the same thing (semantically). When Demizey says ^[ is just a representation of ESCAPE and \e is interpreted as an actual ESCAPE character He means semantically, but if you press Ctrl - v Ctrl - [ this is exactly the same as \e , the raw inserted sequence will most likely be treated the same way, but this is not always guaranteed, and so it recommended to use the programmatically more portable \e or 0x1b or \033 depending on the language/shell/utility being used. | {
"source": [
"https://unix.stackexchange.com/questions/89812",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/16814/"
]
} |
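A small demonstration that the different spellings denote the same byte, runnable in bash (printf interprets the backslash escapes):
$ printf '\033[1mbold\033[0m\n'   # octal escape
$ printf '\x1b[1mbold\x1b[0m\n'   # hex escape
$ printf '\e[1mbold\e[0m\n'       # \e works in bash's printf, though it is not required by POSIX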
89,842 | I'm trying to install Debian on my Dell Inspiron (8GB RAM + 500GB HD). Although I could install it without any problems with the regular install, and even with a few modifications, I'm trying to maximize my computer's security, and therefore I would like to have the / filesystem encrypted. Not only that, I would also like the passphrase for its encryption to be asked before the login screen loads. The computer will have only one user, and root permissions run through sudo . So, the question is: how do I install Debian with a passphrase for an encrypted / asked before the login screen? This is my initial idea to assure security, but I'm open to new ideas and other approaches I can use for that purpose. | The Debian installer supports this out of the box. In the partitioning step choose "Guided - use entire disk and set up encrypted LVM" (or, with manual partitioning, create a dm-crypt/LUKS encrypted volume and put an LVM volume group holding / and swap inside it). Only a small unencrypted /boot partition stays outside the encrypted container, because the boot loader has to be able to load the kernel and the initramfs. The installer asks you to choose a passphrase, and from then on cryptsetup runs from the initramfs very early in the boot sequence, so you are prompted for that passphrase on the console before the root filesystem is even mounted - long before any login screen appears. | {
"source": [
"https://unix.stackexchange.com/questions/89842",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/46716/"
]
} |
89,844 | Suppose I want to run Ubuntu in VirtualBox on a Windows host.
How does it affect the original Windows installation in terms of speed?
For example, is RAM that is given to Ubuntu unavailable to Windows at the same time? | Yes: the memory you assign to the virtual machine is reserved by VirtualBox for as long as the guest is running and cannot be used by Windows during that time; it is released again when you shut the VM down. The guest's virtual CPUs also compete with the host for the physical cores, so heavy load inside Ubuntu slows Windows down correspondingly. With hardware virtualization (VT-x/AMD-V) enabled, CPU-bound work in the guest runs close to native speed, while disk and graphics are slower because they go through emulated or paravirtualized devices. In practice, leave the host enough resources - for example, give the guest 2-4 GB of an 8 GB machine and only some of the cores. | {
"source": [
"https://unix.stackexchange.com/questions/89844",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/38781/"
]
} |
89,897 | Let's say I work for a large services organisation outside the US/UK. We use UNIX and Linux servers extensively. Reading through this article it mentions that it would be easy to insert a backdoor into a C compiler, then any code compiled with that compiler would also contain a backdoor. Now given recent leaks regarding the NSA/GCHQ's mandate to put backdoors/weaknesses in all encryption methods, hardware and software, the compiler is now a critical point of failure. Potentially all standard UNIX/Linix distributions could be compromised. We cannot afford to have our systems, data and our customers data compromised by rogue governments. Given this information, I would like to build a trusted compiler from scratch, then I have a secure base to build on so I can build the Operating System and applications from source code using that compiler. Question What is the correct (and secure way) to go about compiling a compiler from source code (a seemingly chicken-egg scenario) then compiling a trusted Unix/Linux distribution from scratch? You can assume I or others have the ability to read and understand source code for security flaws, so source code will be vetted first before compiling. What I am really after is a working guide to produce this compiler from scratch securely and can be used to compile the kernel, other parts of the OS and applications. The security stack must start at the base level if we are to have any confidence in the operating system or applications running on that stack. Yes I understand there may be hardware backdoors which may insert some microcode into the compiler as it's being built. Not much we can do about that for the moment except maybe use chips not designed in the US. Let's get this layer sorted for a start and assume I could build it on an old computer potentially before any backdoors were inserted. As Bruce Schneier says: "To the engineers, I say this: we built the internet, and some of us have helped to subvert it. Now, those of us who love liberty have to fix it." Extra links: http://nytimes.com/2013/09/06/us/nsa-foils-much-internet-encryption.html?pagewanted=all&_r=0 http://theguardian.com/commentisfree/2013/sep/05/government-betrayed-internet-nsa-spying | AFAIK the only way to be completely sure of security would be to write a compiler in assembly language (or modifying the disk directly yourself ). Only then can you ensure that your compiler isn't inserting a backdoor - this works because you're actually eliminating the compiler completely. From there, you may use your from-scratch compiler to bootstrap e.g. the GNU toolchain. Then you could use your custom toolchain to compile a Linux From Scratch system. Note that to make things easier on yourself, you could have a second intermediary compiler, written in C (or whatever other language). So you would write compiler A in assembly, then rewrite that compiler in C/C++/Python/Brainfuck/whatever to get compiler B, which you would compile using compiler A. Then you would use compiler B to compile gcc and friends. | {
"source": [
"https://unix.stackexchange.com/questions/89897",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/46751/"
]
} |
89,905 | Is it possible? And in reverse alphabetical order? Essentially, this: How can I move files by type recursively from a directory and its sub-directories to another directory? Except that each file is not moved to the destination directory unless a separate process has fetched the sole file in that destination directory and moved it elsewhere (thus the target folder is empty and 'ready' for the next file to be moved there). | | {
"source": [
"https://unix.stackexchange.com/questions/89905",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/46755/"
]
} |
89,923 | I've been trying to understand the booting process, but there's just one thing that is going over my head.. As soon as the Linux kernel has been booted and the root file system (/) mounted, programs can be run and further kernel modules can be integrated to provide additional functions. To mount the root file system, certain conditions must be met. The kernel needs the corresponding drivers to access the device on which the root file system is located (especially SCSI drivers). The kernel must also contain the code needed to read the file system (ext2, reiserfs, romfs, etc.). It is also conceivable that the root file system is already encrypted. In this case, a password is needed to mount the file system. The initial ramdisk (also called initdisk or initrd) solves precisely the problems described above. The Linux kernel provides an option of having a small file system loaded to a RAM disk and running programs there before the actual root file system is mounted. The loading of initrd is handled by the boot loader (GRUB, LILO, etc.). Boot loaders only need BIOS routines to load data from the boot medium. If the boot loader is able to load the kernel, it can also load the initial ramdisk. Special drivers are not required. If /boot is not a different partition, but is present in the / partition, then shouldn't the boot loader require the SCSI drivers, to access the 'initrd' image and the kernel image? If you can access the images directly, then why exactly do we need the SCSI drivers?? | Nighpher, I'll try to answer your question, but for a more comprehensive description of boot process, try this article at IBM . Ok, I assume, that you are using GRUB or GRUB2 as your bootloader for explanation. First off, when BIOS accesses your disk to load the bootloader, it makes use of its built-in routines for disk access, which are stored in the famous 13h interrupt. Bootloader (and kernel at setup phase) make use of those routines when they access disk. Note that BIOS runs in real mode (16-bit mode) of the processor, thus it cannot address more than 2^20 bytes of RAM (2^20, not 2^16, because each address in real mode is comprised of segment_address*16 + offset, where both segment address and offset are 16-bit, see "x86 memory segmentation" at Wikipedia ). Thus, these routines can't access more than 1 MiB of RAM, which is a strict limitation and a major inconvenience. BIOS loads bootloader code right from the MBR – the first 512 bytes of your disk – and executes it. If you're using GRUB, that code is GRUB stage 1. That code loads GRUB stage 1.5, which is located either in the first 32 KiB of disk space, called DOS compatibility region, or from a fixed address of the file system. It doesn't need to understand the file system structure to do this, because even if stage 1.5 is in the file system, it is "raw" code and can be directly loaded to RAM and executed: See "Details of GRUB on the PC" at pixelbeat.org , which is the source for the below image. Load of stage 1.5 from disk to RAM makes use of BIOS disk access routines. Stage 1.5 contains the filesystem utilities, so that it can read the stage 2 from the filesystem (well, it still uses BIOS 13h to read from disk to RAM, but now it can decipher filesystem info about inodes, etc., and get raw code out of the disk). 
Older BIOSes might not be able to access the whole HD due to limitations in their disk addressing mode – they might use Cylinder-Head-Sector system, unable to address more than first 8 GiB of disk space: http://en.wikipedia.org/wiki/Cylinder-head-sector . Stage 2 loads the kernel into RAM (again, using BIOS disk utilities). If it's 2.6+ kernel, it also has initramfs compiled within, so no need to load it. If it's an older kernel, bootloader also loads standalone initrd image into memory, so that kernel could mount it and get drivers for mounting real file system from disk. The problem is that the kernel (and ramdisk) weigh more than 1 MiB; thus, to load them into RAM you have to load the kernel into the first 1 MiB, then jump to protected mode (32-bit), move the loaded kernel to high memory (free the first 1 MiB for real mode), then return to real (16-bit) mode again, get ramdisk from disk to first 1 MiB (if it's a separate initrd and older kernel), possibly switch to protected (32-bit) mode again, put it to where it belongs, possibly get back to real mode (or not: https://stackoverflow.com/questions/4821911/does-grub-switch-to-protected-mode ) and execute the kernel code. Warning: I'm not entirely sure about thoroughness and accuracy of this part of description. Now, when you finally run the kernel, you already have it and ramdisk loaded into RAM by bootloader , so the kernel can use disk utilities from ramdisk to mount your real root file system and pivot root to it. ramfs drivers are present in the kernel, so it can understand the contents of initramfs, of course. | {
"source": [
"https://unix.stackexchange.com/questions/89923",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/46759/"
]
} |
89,925 | I issue the following command to find the .svn directories: find . -name ".svn" That gives me the following results: ./toto/.svn
./toto/titi/.svn
./toto/tata/.svn How could I process all these lines with rm -fr in order to delete the directories and their content? | find can run a command for each match it finds via the -exec option. This is the recommended mechanism because it handles paths containing spaces, newlines and other special characters correctly. You will have to delete the contents of the directory before you can remove the directory itself, so use -r with the rm command to achieve this. For your example you can issue: find . -name ".svn" -exec rm -r "{}" \; You can also tell find to just find directories named .svn by adding a -type d check: find . -name ".svn" -type d -exec rm -r "{}" \; Warning: use rm -r with caution, as it deletes the folder and all its contents. If you want to delete just empty directories as well as directories that contain only empty directories, find can do that itself with -delete and -empty : find . -name ".svn" -type d -empty -delete (A variant using -prune is sketched after this entry.) | {
"source": [
"https://unix.stackexchange.com/questions/89925",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/46758/"
]
} |
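A follow-up to the find/-exec answer above (question 89,925): when the matches are directories that are themselves being removed, adding -prune stops find from trying to descend into them after rm has deleted them, and the {} + form batches the arguments into fewer rm invocations. A minimal sketch with GNU or POSIX find:
find . -name .svn -type d -prune -exec rm -r {} +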
90,006 | I have a need to reduce the size of the locale-archive file on some of my RHEL6 systems. Here is that file on my system: [root@-dev-007 locale]# ls -l
total 96800
-rw-r--r--. 1 root root 99158704 Sep 9 15:22 locale-archive
-rw-r--r--. 1 root root 0 Jun 20 2012 locale-archive.tmpl So I did this ... [root@-dev-007 locale]# localedef --list | grep zh_CN
zh_CN
zh_CN.gb18030
zh_CN.gb2312
zh_CN.gbk
zh_CN.utf8 ... so I figured I could get rid of zh_CN like so ... [root@-dev-007 locale]# localedef --delete-from-archive zh_CN ... and I can see zh_CN does not get listed anymore like so ... [root@-dev-007 locale]# localedef --list | grep zh_CN
zh_CN.gb18030
zh_CN.gb2312
zh_CN.gbk
zh_CN.utf8 ... but the size of the locale-archive does not get smaller ... [root@-dev-007 locale]# ls -l
total 96800
-rw-r--r--. 1 root root 99158704 Sep 9 17:16 locale-archive
-rw-r--r--. 1 root root 0 Jun 20 2012 locale-archive.tmpl ... is there something else I need to do? | You can first remove all unneeded locales by doing: $ localedef --list-archive | grep -v -i ^en | xargs localedef --delete-from-archive where ^en can be replaced by the locale you wish to keep. Then rebuild the archive: $ build-locale-archive If this gives you an error similar to $ build-locale-archive
/usr/sbin/build-locale-archive: cannot read archive header then try this: $ mv /usr/lib/locale/locale-archive /usr/lib/locale/locale-archive.tmpl
$ build-locale-archive If that still fails, check your version. According to this page, newer versions don't ship the files needed to rebuild the archive (in order to save space). You'll need to run yum reinstall glibc-common In later releases of Red Hat Enterprise Linux, you may use dnf , a similar application. | {
"source": [
"https://unix.stackexchange.com/questions/90006",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/31667/"
]
} |
90,036 | I was doing a very simple search: grep -R Milledgeville ~/Documents And after some time this error appeared: grep: memory exhausted How can I avoid this? I have 10GB of RAM on my system and few applications running, so I am really surprised a simple grep runs out of memory. ~/Documents is about 100GB and contains all kinds of files. grep -RI might not have this problem, but I want to search in binary files too. | Two potential problems: grep -R (except for the modified GNU grep found on OS X 10.8 and above) follows symlinks, so even if there's only 100GB of files in ~/Documents , there might still be a symlink to / for instance and you'll end up scanning the whole file system including files like /dev/zero . Use grep -r with newer GNU grep , or use the standard syntax: find ~/Documents -type f -exec grep Milledgeville /dev/null {} + (note, however, that the exit status won't reflect whether the pattern was matched or not). grep finds the lines that match the pattern. For that, it has to load one line at a time into memory. GNU grep , as opposed to many other grep implementations, doesn't have a limit on the size of the lines it reads and supports search in binary files. So, if you've got a file with a very big line (that is, with two newline characters very far apart), bigger than the available memory, it will fail. That would typically happen with a sparse file. You can reproduce it with: truncate -s200G some-file
grep foo some-file That one is difficult to work around. You could do it as (still with GNU grep ): find ~/Documents -type f -exec sh -c 'for i do
tr -s "\0" "\n" < "$i" | grep --label="$i" -He "$0"
done' Milledgeville {} + That converts sequences of NUL characters into one newline character prior to feeding the input to grep . That would cover for cases where the problem is due to sparse files. You could optimise it by doing it only for large files: find ~/Documents -type f \( -size -100M -exec \
grep -He Milledgeville {} + -o -exec sh -c 'for i do
tr -s "\0" "\n" < "$i" | grep --label="$i" -He "$0"
done' Milledgeville {} + \) If the files are not sparse and you have a version of GNU grep prior to 2.6 , you can use the --mmap option. The lines will be mmapped in memory as opposed to copied there, which means the system can always reclaim the memory by paging out the pages to the file. That option was removed in GNU grep 2.6 | {
"source": [
"https://unix.stackexchange.com/questions/90036",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/2305/"
]
} |
90,100 | In Unicode, some character combinations have more than one representation. For example, the character ä can be represented as "ä", that is the codepoint U+00E4 (two bytes c3 a4 in UTF-8 encoding), or as "ä", that is the two codepoints U+0061 U+0308 (three bytes 61 cc 88 in UTF-8). According to the Unicode standard, the two representations are equivalent but in different "normalization forms", see UAX #15: Unicode Normalization Forms . The unix toolbox has all kinds of text transformation tools, sed , tr , iconv , Perl come to mind. How can I do quick and easy NF conversion on the command-line? | You can use the uconv utility from ICU . Normalization is achieved through transliteration ( -x ). $ uconv -x any-nfd <<<ä | hd
00000000 61 cc 88 0a |a...|
00000004
$ uconv -x any-nfc <<<ä | hd
00000000 c3 a4 0a |...|
00000003 On Debian, Ubuntu and other derivatives, uconv is in the libicu-dev package. On Fedora, Red Hat and other derivatives, and in BSD ports, it's in the icu package. | {
"source": [
"https://unix.stackexchange.com/questions/90100",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/38992/"
]
} |
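If uconv is not installed, the Perl mentioned in the question above (90,100) can do the same normalization with its core Unicode::Normalize module. A small sketch, assuming UTF-8 input and output:
perl -CSD -MUnicode::Normalize -pe '$_ = NFC($_)' < input.txt > output.txt
perl -CSD -MUnicode::Normalize -pe '$_ = NFD($_)' < input.txt > output.txt
NFKC and NFKD are available the same way if compatibility normalization is needed.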
90,106 | CentOS 5.9 I came across an issue the other day where a directory had a lot of files. To count it, I ran ls -l /foo/foo2/ | wc -l Turns out that there were over 1 million files in a single directory (long story -- the root cause is getting fixed). My question is: is there a faster way to do the count? What would be the most efficient way to get the count? | Short answer: \ls -afq | wc -l (This includes . and .. , so subtract 2.) When you list the files in a directory, three common things might happen: Enumerating the file names in the directory. This is inescapable: there is no way to count the files in a directory without enumerating them. Sorting the file names. Shell wildcards and the ls command do that. Calling stat to retrieve metadata about each directory entry, such as whether it is a directory. #3 is the most expensive by far, because it requires loading an inode for each file. In comparison all the file names needed for #1 are compactly stored in a few blocks. #2 wastes some CPU time but it is often not a deal breaker. If there are no newlines in file names, a simple ls -A | wc -l tells you how many files there are in the directory. Beware that if you have an alias for ls , this may trigger a call to stat (e.g. ls --color or ls -F need to know the file type, which requires a call to stat ), so from the command line, call command ls -A | wc -l or \ls -A | wc -l to avoid an alias. If there are newlines in the file name, whether newlines are listed or not depends on the Unix variant. GNU coreutils and BusyBox default to displaying ? for a newline, so they're safe. Call ls -f to list the entries without sorting them (#2). This automatically turns on -a (at least on modern systems). The -f option is in POSIX but with optional status; most implementations support it, but not BusyBox. The option -q replaces non-printable characters including newlines by ? ; it's POSIX but isn't supported by BusyBox, so omit it if you need BusyBox support at the expense of overcounting files whose name contains a newline character. If the directory has no subdirectories, then most versions of find will not call stat on its entries (leaf directory optimization: a directory that has a link count of 2 cannot have subdirectories, so find doesn't need to look up the metadata of the entries unless a condition such as -type requires it). So find . | wc -l is a portable, fast way to count files in a directory provided that the directory has no subdirectories and that no file name contains a newline. If the directory has no subdirectories but file names may contain newlines, try one of these (the second one should be faster if it's supported, but may not be noticeably so). find -print0 | tr -dc \\0 | wc -c
find -printf a | wc -c On the other hand, don't use find if the directory has subdirectories: even find . -maxdepth 1 calls stat on every entry (at least with GNU find and BusyBox find). You avoid sorting (#2) but you pay the price of an inode lookup (#3) which kills performance. In the shell without external tools, you can count the files in the current directory with set -- *; echo $# . This misses dot files (files whose name begins with . ) and reports 1 instead of 0 in an empty directory. This is the fastest way to count files in small directories because it doesn't require starting an external program, but (except in zsh) wastes time for larger directories due to the sorting step (#2). In bash, this is a reliable way to count the files in the current directory: shopt -s dotglob nullglob
a=(*)
echo ${#a[@]} In ksh93, this is a reliable way to count the files in the current directory: FIGNORE='@(.|..)'
a=(~(N)*)
echo ${#a[@]} In zsh, this is a reliable way to count the files in the current directory: a=(*(DNoN))
echo $#a If you have the mark_dirs option set, make sure to turn it off: a=(*(DNoN^M)) . In any POSIX shell, this is a reliable way to count the files in the current directory: total=0
set -- *
if [ $# -ne 1 ] || [ -e "$1" ] || [ -L "$1" ]; then total=$((total+$#)); fi
set -- .[!.]*
if [ $# -ne 1 ] || [ -e "$1" ] || [ -L "$1" ]; then total=$((total+$#)); fi
set -- ..?*
if [ $# -ne 1 ] || [ -e "$1" ] || [ -L "$1" ]; then total=$((total+$#)); fi
echo "$total" All of these methods sort the file names, except for the zsh one. | {
"source": [
"https://unix.stackexchange.com/questions/90106",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/1822/"
]
} |
90,144 | I was experimenting today with some append operations and, in my curiosity, ran this (where file1.txt was non-empty and file2.txt was empty): $ cat file1.txt >> file2.txt >> file1.txt When I saw it taking a while, I hit Ctrl + C to end it. By then, file1.txt was hundreds of MB in size. Switching the file names doesn't produce the same effect; only when the files are in this order does the infinite redirection happen. What exactly is going on that causes this? | You cannot give cat multiple standard outputs that way; the last redirection takes precedence, so: cat file1.txt >> file2.txt >> file1.txt is equivalent to: >> file2.txt ; cat file1.txt >> file1.txt which quickly fills the file system, since the source file, being the destination as well, grows indefinitely (provided file1.txt is large enough not to be read in one go). Most modern cat implementations should detect the recursion and abort: Solaris cat: cat: input/output files 'file1.txt' identical GNU cat: cat: file1.txt: input file is output file They can be fooled anyway with something like: cat < file1.txt | cat | cat >> file2.txt >> file1.txt A nice not so useless use of cats ... | {
"source": [
"https://unix.stackexchange.com/questions/90144",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/34709/"
]
} |
90,227 | With all the paranoia that came with NSA revelations, I'm wondering why the Debian package installation mechanism does not support HTTPS for its transport, let alone use it by default. I know Debian packages have some sort of signature validation using GPG, but still, I don't think using HTTPS transport instead of HTTP would be too hard, considering how crucial this is security-wise. Edit: I mostly want to protect myself from MitM attacks (including just traffic sniffing), not Debian mirror administrators. HTTP repositories put the whole system setup on the table for anyone snooping traffic to Debian mirrors. | Update 2017: Apt 1.5 supports https out of the box. It is no longer necessary to install the package apt-transport-https separately. There are multiple attacks and vulnerabilities against apt with http repositories: A Look In the Mirror: Attacks on Package Managers https://justi.cz/security/2019/01/22/apt-rce.html To use https repos in sources.list you need to install the package apt-transport-https . deb https://some.server.com/debian stable main | {
"source": [
"https://unix.stackexchange.com/questions/90227",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/24334/"
]
} |
90,325 | How can we automatically set the system default timezone in Linux using the Internet? As I see it, NTP servers can update only time, but not timezone. Is there any server that can change the timezone? | I wrote a program a while ago that does this: tzupdate . You can see what it would set your timezone to (without actually setting it) by running tzupdate -p : $ tzupdate -p
Europe/Malta You can set it for real by running tzupdate as root. $ sudo tzupdate
Europe/Malta
$ date
Thu 12 Sep 05:52:22 CEST 2013 This works by: Geolocating your current IP Getting the time zone for that location Updating the symlink at /etc/localtime to point to the zoneinfo file for that timezone | {
"source": [
"https://unix.stackexchange.com/questions/90325",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/44026/"
]
} |
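For reference, the last step that tzupdate performs in the answer above (question 90,325) can also be done by hand once the zone name is known; the zone below just reuses the Europe/Malta example:
sudo ln -sf /usr/share/zoneinfo/Europe/Malta /etc/localtime
sudo timedatectl set-timezone Europe/Malta
The first form works on most distributions; the second is the systemd front end and updates /etc/localtime for you. Neither of these does the geolocation step, of course; they only apply a zone you already know.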
90,345 | I need to find specific length numbers in a big document. I tried to use regex for this. For example, if I need to search for numbers with exactly 2 digits, I use \d\d (i.e. \d twice followed by a space). This works well. But for finding 10-digit numbers it's not really feasible to type in \d 10 times. Tried \d{2} , says ' E486: Pattern not found: \d{2} ' Is there any quicker/easier way to achieve this? | There are different regular expression dialects; some (e.g. Perl's) do not require backslashes in the quantification modifier ( \d{2} ), some (e.g. sed) require two ( \d\{2\} ), and in Vim, only the opening curly needs it ( \d\{2} ). That's the sad state of incompatible regular expression dialects. Also note that for matching exact numbers, you have to anchor the match so that \d\{2} won't match the two digits ( 12 ) in 123 . This can be done with negative look-behind and look-ahead : \d\@<!\d\{2}\d\@! | {
"source": [
"https://unix.stackexchange.com/questions/90345",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/17265/"
]
} |
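A small addition to the Vim answer above (question 90,345): with the \v ("very magic") modifier the braces need no backslash at all, and < > act as word boundaries, so a search for exactly ten digits can be written as:
/\v<\d{10}>
Note that word boundaries behave slightly differently from the look-around version when digits are glued to letters, so pick whichever anchoring matches your data.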
90,383 | My company has disabled SSH public key authentication, therefore I have to enter my password manually each time (I am not supposed to change /etc/ssh/sshd_config ). However, gssapi-keyex and gssapi-with-mic authentications are enabled (please see the ssh debug output below). How could I use automatic login in this case? Can I exploit gssapi-keyex and/or gssapi-with-mic authentications? > ssh -v -o PreferredAuthentications=publickey hostxx.domainxx
OpenSSH_5.3p1, OpenSSL 1.0.0-fips 29 Mar 2010
debug1: Reading configuration data /etc/ssh/ssh_config
debug1: Applying options for *
debug1: Connecting to hostxx.domainxx [11.22.33.44] port 22.
debug1: Connection established.
debug1: identity file /home/me/.ssh/identity type -1
debug1: identity file /home/me/.ssh/id_rsa type -1
debug1: identity file /home/me/.ssh/id_dsa type 2
debug1: Remote protocol version 2.0, remote software version OpenSSH_5.3
debug1: match: OpenSSH_5.3 pat OpenSSH*
debug1: Enabling compatibility mode for protocol 2.0
debug1: Local version string SSH-2.0-OpenSSH_5.3
debug1: SSH2_MSG_KEXINIT sent
debug1: SSH2_MSG_KEXINIT received
debug1: kex: server->client aes128-ctr hmac-md5 none
debug1: kex: client->server aes128-ctr hmac-md5 none
debug1: SSH2_MSG_KEX_DH_GEX_REQUEST(1024<1024<8192) sent
debug1: expecting SSH2_MSG_KEX_DH_GEX_GROUP
debug1: SSH2_MSG_KEX_DH_GEX_INIT sent
debug1: expecting SSH2_MSG_KEX_DH_GEX_REPLY
debug1: Host 'hostxx.domainxx' is known and matches the RSA host key.
debug1: Found key in /home/me/.ssh/known_hosts:2
debug1: ssh_rsa_verify: signature correct
debug1: SSH2_MSG_NEWKEYS sent
debug1: expecting SSH2_MSG_NEWKEYS
debug1: SSH2_MSG_NEWKEYS received
debug1: SSH2_MSG_SERVICE_REQUEST sent
debug1: SSH2_MSG_SERVICE_ACCEPT received
debug1: Authentications that can continue: gssapi-keyex,gssapi-with-mic,password
debug1: No more authentication methods to try.
Permission denied (gssapi-keyex,gssapi-with-mic,password). | Maybe. Can you obtain a ticket for your principal on your client system either as part of the standard login process or manually ( kinit , MIT Kerberos for Windows)? Does the server have a Kerberos principal, or can you give it one? It should be of the form host/[email protected] . Is GSSAPI authentication enabled on your client? Does your client know which realm the server belongs to, either by a DNS TXT resource record or a local mapping? If you said "yes" to all of the above, then congratulations, you can use GSSAPIAuthentication . You may also need to enable credential delegation, depending on your setup. Testing steps: (Assuming: domain = example.com ; realm = EXAMPLE.COM) kinit [email protected] Ideally this is handled by your standard login process by including either pam_krb5 or pam_sss (with auth_provider = krb5 ) in the appropriate pam stack . kvno host/[email protected] This is a debugging step. ssh does this automatically if you have a valid cache and you are talking to an sshd which supports gssapi-with-mic or gssapi-keyex . dig _kerberos.example.com txt should return "EXAMPLE.COM" Alternatively, the mapping could be stored in the [domain_realm] section of /etc/krb5.conf as .example.com = EXAMPLE.COM , but the DNS method scales much better. ssh -o GSSAPIAuthentication=yes [email protected] To log in with a username other than that of your principal, the server will have to know how to map it; the details of that are beyond the scope of this answer. | {
"source": [
"https://unix.stackexchange.com/questions/90383",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/13999/"
]
} |
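To avoid typing the -o options from the GSSAPI answer above (question 90,383) on every connection, the same settings can be made persistent in ~/.ssh/config; the host pattern below is just an example:
cat >> ~/.ssh/config <<'EOF'
Host *.domainxx
    GSSAPIAuthentication yes
    GSSAPIDelegateCredentials yes
EOF
GSSAPIDelegateCredentials forwards your ticket to the remote host and is only needed if you want to hop onwards from there with the same credentials.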
90,450 | I've generated a self-signed certificate for my build server and I'd like to globally trust the certificate on my machine, as I created the key myself and I'm sick of seeing warnings. I'm on Ubuntu 12.04. How can I take the certificate and globally trust it so that browsers (Google Chrome), CLI utilities (wget, curl), and programming languages (Python, Java, etc.) trust the connection to https://example.com without asking questions? | The simple answer to this is that pretty much each application will handle it differently. Also, OpenSSL and GnuTLS (the most widely used libraries for handling signed certificates) treat certificates differently, which complicates the issue further. Operating systems also use different mechanisms to manage the "root CAs" trusted for most websites. That aside, take Debian as an example. Install the ca-certificates package: apt-get install ca-certificates You then copy the public half of your untrusted CA certificate (the one you use to sign your CSR) into the CA certificate directory (as root): cp cacert.crt /usr/share/ca-certificates NOTE: the certificate needs to have a .crt extension for it to be picked up. To get it to rebuild the directory with your certificate included, run as root: dpkg-reconfigure ca-certificates and select the ask option, scroll to your certificate, mark it for inclusion and select ok. Most browsers use their own CA database, and so tools like certutil have to be used to modify their contents (on Debian that is provided by the libnss3-tools package). For example, with Chrome you run something along the lines of: certutil -d sql:$HOME/.pki/nssdb -A -t "C,," -n "My Homemade CA" -i /path/to/CA/cert.file Firefox will allow you to browse to the certificate on disk, recognize it as a certificate file and then allow you to import it into the Root CA list. Most other commands such as curl take command-line switches you can use to point at your CA, curl --cacert /path/to/CA/cert.file https://... or drop the SSL validation altogether curl --insecure https://... The rest will need individual investigation if the ca-certificates trick above does not sort it out for that particular application. | {
"source": [
"https://unix.stackexchange.com/questions/90450",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/5614/"
]
} |
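On current Debian and Ubuntu releases, the system-wide part of the answer above (question 90,450) is usually done by dropping the certificate under /usr/local/share/ca-certificates/ instead, which keeps local additions separate from the packaged bundle:
sudo cp cacert.crt /usr/local/share/ca-certificates/
sudo update-ca-certificates
curl, wget and anything else that reads /etc/ssl/certs then trust it; browsers (and Java's own keystore) still need their stores updated separately, as described above.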
90,523 | Is there a term to refer to the subset of packages that is automatically installed by the Debian distribution? I thought that it had something to do with package priorities , but it doesn't seem to be the case, because there are packages of all the priority levels among the packages installed by default. Also, some of the packages in that initial subset have the automatically-installed flag, e.g. wireless-tools . So they will be automatically removed if the packages of the initial subset that depend on them are manually removed. I wonder: does the installation tool keep only a list of packages to be considered manually installed, and install their dependencies automatically? Answer to the first two questions: After installing the core Debian utilities, the Debian installer seems to invoke tasksel to carry out installation "tasks". Among the typical tasks are the "standard" task and the "laptop" task. From the tasksel page: "standard" task The standard task is a special task used by Debian Installer. It actually relies on the packages' priority. What does the "standard system" task include? tasksel --task-packages standard which is an aptitude search string that equates to aptitude search ~pstandard ~prequired ~pimportant -F%p So tasksel installs standard , required and important packages. "laptop" task The laptop task is a special task used by Debian Installer, to pull in the
packages useful on a laptop: wireless-tools acpi-support cpufrequtils acpi wpasupplicant powertop acpid apmd pcmciautils pm-utils anacron avahi-autoipd bluetooth Desktop See https://wiki.debian.org/DebianDesktop/Tasks | The base system is described in Debian policy as all packages with required or important priority. You can search for the packages that the required and important priorities are attached to with the aptitude utility. aptitude search ~prequired -F"%p"
aptitude search ~pimportant -F"%p" debootstrap installs these packages during the setup process. tasksel will then install whatever other roles you choose on top, normally standard is the default selection that is used. On top of what is listed in the base system you will get A Kernel (thankfully) Input/Locale/Dictionary packages. Hardware packages. (ACPI, USB, PCI, Virtual guest additions on vm's) Then some dependent libraries to support the above. This amounts to about 60 packages on my VirtualBox VM (without the VBox guest additions which pull in a lot of dependencies). Run the Expert Install (select "Advanced options > Expert") if you get a chance. It gives you a better idea of the step by step install process and when apt is being run outside of the base install. | {
"source": [
"https://unix.stackexchange.com/questions/90523",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/23424/"
]
} |
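On the question's last point above (90,523): apt does keep a record of which packages were requested explicitly and which were only pulled in as dependencies, and apt-mark exposes it:
apt-mark showmanual
apt-mark showauto
The first lists packages considered manually installed; the second lists the automatically installed ones that become removal candidates once nothing depends on them.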
90,554 | I am using 32-bit Red Hat Linux in my VM. I want to boot it to command-line mode, not to GUI mode. I know that from there I can switch to GUI mode using startx command. How do I switch back to command-line mode? | Update: The answer below is now obsolete For a lot of distros now, the default is systemd rather than sysvinit. The answer below was written with sysvinit in mind. The more-up-to-date answer (and the one you should use if you have systemd as your init system) is golem's answer . sysvinit answer (obsolete on most current distros): You want to make runlevel 3 your default runlevel. From a terminal, switch to root and do the following: [user@host]$ su
Password:
[root@host]# cp /etc/inittab /etc/inittab.bak #Make a backup copy of /etc/inittab
[root@host]# sed -i 's/id:5:initdefault:/id:3:initdefault:/' /etc/inittab #Make runlevel 3 your default runlevel Anything after (and including) the second # on each line is a comment for you, you don't need to type it into the terminal. See the Wikipedia page on runlevels for more information. Explanation of sed command The sed command is a stream editor (hence the name); you use it to manipulate streams of data, usually through regular expressions . Here, we're telling sed to replace the pattern id:5:initdefault: with the pattern id:3:initdefault: in the file /etc/inittab , which is the file that controls your runlevels. The general syntax for a sed search and replace is s/pattern/replacement_pattern/ . The -i option tells sed to apply the modifications in place. If this were not present, sed would have written the resulting file (after substitution) to the terminal (more generally to standard output). Update To switch back to text mode, simply press CTRL + ALT + F1 . This will not stop your graphical session, it will simply switch you back to the terminal you logged in at. You can switch back to the graphical session with CTRL + ALT + F7 . | {
"source": [
"https://unix.stackexchange.com/questions/90554",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/43840/"
]
} |
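For the systemd systems mentioned in the update above (question 90,554), changing the default runlevel translates into changing the default target; a short sketch:
sudo systemctl set-default multi-user.target
sudo systemctl isolate multi-user.target
sudo systemctl set-default graphical.target
multi-user.target is roughly the old runlevel 3, graphical.target roughly runlevel 5, and isolate switches immediately without a reboot.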
90,572 | I can't find a single desktop environment that supports setting both mouse acceleration AND mouse sensitivity. I don't want any mouse acceleration, but I want to increase the speed of my mouse. That means that if I move the mouse the same distance, the pointer will move the same distance every time, no matter how quickly I move the mouse. KDE will let me set mouse acceleration to 1x, but the mouse moves too slow then, and I can't figure out how to increase the speed. I am willing to accept a CLI solution, but I have only been able to get xinput to change acceleration. I don't recall having much luck with xset , either. | Just force the pointer to skip pixels, here's how: First list input devices: $ xinput list
⎡ Virtual core pointer id=2 [master pointer (3)]
⎜ ↳ Virtual core XTEST pointer id=4 [slave pointer (2)]
⎜ ↳ PixArt USB Optical Mouse id=10 [slave pointer (2)]
⎜ ↳ ETPS/2 Elantech Touchpad id=15 [slave pointer (2)]
⎣ Virtual core keyboard id=3 [master keyboard (2)]
↳ Virtual core XTEST keyboard id=5 [slave keyboard (3)]
↳ Power Button id=6 [slave keyboard (3)]
↳ Video Bus id=7 [slave keyboard (3)]
↳ Sleep Button id=8 [slave keyboard (3)]
↳ USB2.0 UVC 2M WebCam id=9 [slave keyboard (3)]
↳ Asus Laptop extra buttons id=13 [slave keyboard (3)]
↳ AT Translated Set 2 keyboard id=14 [slave keyboard (3)]
↳ USB Keyboard id=11 [slave keyboard (3)]
↳ USB Keyboard id=12 [slave keyboard (3)] In the example we see the mouse is PixArt USB Optical Mouse . Next list its properties: $ xinput list-props "PixArt USB Optical Mouse"
Device 'PixArt USB Optical Mouse':
Device Enabled (140): 1
Coordinate Transformation Matrix (142): 1.000000, 0.000000, 0.000000, 0.000000, 1.000000, 0.000000, 0.000000, 0.000000, 1.000000
Device Accel Profile (265): 0
Device Accel Constant Deceleration (266): 1.000000
Device Accel Adaptive Deceleration (267): 1.000000
Device Accel Velocity Scaling (268): 10.000000
Device Product ID (260): 2362, 9488
Device Node (261): "/dev/input/event5"
Evdev Axis Inversion (269): 0, 0
Evdev Axes Swap (271): 0
Axis Labels (272): "Rel X" (150), "Rel Y" (151), "Rel Vert Wheel" (264)
Button Labels (273): "Button Left" (143), "Button Middle" (144), "Button Right" (145), "Button Wheel Up" (146), "Button Wheel Down" (147), "Button Horiz Wheel Left" (148), "Button Horiz Wheel Right" (149)
Evdev Middle Button Emulation (274): 0
Evdev Middle Button Timeout (275): 50
Evdev Third Button Emulation (276): 0
Evdev Third Button Emulation Timeout (277): 1000
Evdev Third Button Emulation Button (278): 3
Evdev Third Button Emulation Threshold (279): 20
Evdev Wheel Emulation (280): 0
Evdev Wheel Emulation Axes (281): 0, 0, 4, 5
Evdev Wheel Emulation Inertia (282): 10
Evdev Wheel Emulation Timeout (283): 200
Evdev Wheel Emulation Button (284): 4
Evdev Drag Lock Buttons (285): 0 By changing "Coordinate Transformation Matrix" property we can increase the pointer speed. Documentation says it is used to calculate a pointer movement . Quoting: By default, the CTM for every input device in X is the identity
matrix. As an example, let's say you touch a touchscreen at point (400,
197) on the screen: ⎡ 1 0 0 ⎤ ⎡ 400 ⎤ ⎡ 400 ⎤
⎜ 0 1 0 ⎥ · ⎜ 197 ⎥ = ⎜ 197 ⎥
⎣ 0 0 1 ⎦ ⎣ 1 ⎦ ⎣ 1 ⎦ The X and Y coordinates of the device event are input in the second
matrix of the calculation. The result of the calculation is where the
X and Y coordinates of the event are mapped to the screen. As shown,
the identity matrix maps the device coordinates to the screen
coordinates without any changes. So, we want to increase the X and Y values, leaving the rest unchanged. An example from my PC: $ xinput set-prop "PixArt USB Optical Mouse" "Coordinate Transformation Matrix" 2.4 0 0 0 2.4 0 0 0 1 Play a bit with this until you're satisfied with the speed. Thanks go to Simon Thum from the Xorg mailing list for giving a hint about the matrix. UPD : note that some Windows games running in Wine may start exhibiting odd pointer behavior (e.g. the crosshair in Counter-Strike 1.6 was reported to drift downwards until it points at the floor, no matter how you move the mouse); in this case just reset the X and Y entries of the CTM back to 1 before running the game. | {
"source": [
"https://unix.stackexchange.com/questions/90572",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/28896/"
]
} |
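One practical note on the xinput answer above (question 90,572): the property only lasts for the current X session, so to make it permanent the same command is typically re-applied from a startup file such as ~/.xprofile or the desktop environment's autostart, using the device name reported by xinput list:
xinput set-prop "PixArt USB Optical Mouse" "Coordinate Transformation Matrix" 2.4 0 0 0 2.4 0 0 0 1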
90,590 | I need to set read and write permissions for the root user on the directory subfolderN and all its parent folders up to /root . I can do it by hand: $ sudo chmod +rx /root/subfolder1/subfolder2/subfolderN
$ sudo chmod +rx /root/subfolder1/subfolder2
$ sudo chmod +rx /root/subfolder1
$ sudo chmod +rx /root But if N is big I am tired. How to do automatically by one command? | This can be done easily in the shell, starting in the subdir and moving upwards: f=/root/subfolder1/subfolder2/subfolderN
while [[ $f != / ]]; do chmod +rx "$f"; f=$(dirname "$f"); done; This starts with whatever file/directory you set f to, and works on every parent directory, until it encounters "/" (or whatever you set the string in the condition of the loop to). It does not chmod "/". Make sure both f and the directory in the condition of the loop are absolute paths. | {
"source": [
"https://unix.stackexchange.com/questions/90590",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/47116/"
]
} |
90,759 | I'm trying to create a Makefile for a small Perl utility I wrote, and I'm struggling to work out where to install my man page when make is run as a non-root user. I'm currently parsing the output of manpath to find the first path in the $HOME directory… and it almost works fine. Paths I've found are ~/man and ~/share/man . The only problem is that if those directories don't exist in the first place, manpath doesn't output any of them. Questions Is there a portable way to find out where I should install the man pages in the user's $HOME directory? If not, which one of them should be preferred? | You can put the man pages in this directory: $HOME/.local/share/man Accessing directly You can then access them directly using man : man $HOME/.local/share/man/manX/manpage.1.gz $MANPATH You can check what the $MANPATH is with the command manpath , or echo out the environment variable $MANPATH . Examples $ manpath
manpath: warning: $MANPATH set, ignoring /etc/man_db.conf
/home/saml/apps/perl5/perlbrew/perls/perl-5.14.0/man:/home/saml/.rvm/rubies/ruby-1.9.2-p180/share/man:/home/saml/.rvm/man:/usr/local/share/man:/usr/share/man:/usr/brlcad/share/man:/usr/man:/usr/brlcad/share/man:/usr/brlcad/share/man
$ echo $MANPATH
/home/saml/apps/perl5/perlbrew/perls/perl-5.14.0/man:/home/saml/.rvm/rubies/ruby-1.9.2-p180/share/man:/home/saml/.rvm/man:/usr/local/share/man:/usr/share/man:/usr/brlcad/share/man:/usr/man:/usr/brlcad/share/man:/usr/brlcad/share/man You can add things to the MANPATH temporarily: MANPATH=$HOME/.local/share/man:$MANPATH If you want to make this permanent then add a file in your /etc/profile.d/ directory called myman.bash with the above MANPATH= line in it. This will get picked up system wide for everyone. If you want it to be just for you, then add it to your $HOME/.bash_profile or $HOME/.bashrc . References file-hierarchy — File system hierarchy overview Which distributions have $HOME/.local/bin in $PATH? | {
"source": [
"https://unix.stackexchange.com/questions/90759",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/42594/"
]
} |
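To connect the answer above (question 90,759) back to the Makefile in the question, the install step itself can simply create the per-user directory and copy the page there; the man page name below is only an example:
install -d "$HOME/.local/share/man/man1"
install -m 644 mytool.1 "$HOME/.local/share/man/man1/"
man mytool should then find it, provided $MANPATH includes $HOME/.local/share/man as described above.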
90,772 | The first two chars were repeated while I use Tab to do completion. In the screenshot below, cd is repeated. I have tried rxvt-unicdoe, xterm, terminator. All these terminal emulators have this issue. Zsh version 5.0.2, config file on-my-zsh | If the characters on your command line are sometimes displayed at an offset, this is often because zsh has computed the wrong width for the prompt. The symptoms are that the display looks fine as long as you're adding characters or moving character by character but becomes garbled (with some characters appearing further right than they should) when you use other commands that move the cursor ( Home , completion, etc.) or when the command overlaps a second line. Zsh needs to know the width of the prompt in order to know where the characters of the command are placed. It assumes that each character occupies one position unless told otherwise. One possibility is that your prompt contains escape sequences which are not properly delimited. Escape sequences that change the color or other formatting aspects of the text, or that change the window title or other effects, have zero width. They need to be included within a percent-braces construct %{…%} . More generally, an escape sequence like %42{…%} tells zsh to assume that what is inside the braces is 42 characters wide. So check your prompt settings ( PS1 , PROMPT , or the variables that they reference) and make sure that all escape sequences (such as \e[…m to change text attributes — note that it may be present via some variable like $fg[red] ) are inside %{…%} . Since you're using oh-my-zsh, check both your own settings and the definitions that you're using from oh-my-zsh. The same issue arises in bash. There zero-width sequences in a prompt need to be enclosed in \[…\] . Another possibility is that your prompt contains non-ASCII characters and that zsh (or any other application) and your terminal have a different idea of how wide they are. This can happen if there is a mismatch between the encoding of your terminal and the encoding that is declared in the shell, and the two encodings result in different widths for certain byte sequences. Typically you might run into this issue when using a non-Unicode terminal but declaring a Unicode locale or vice versa. Applications rely on environment variables to know the locale; the relevant setting is LC_CTYPE , which is determined from the environment variables LANGUAGE , LC_ALL , LC_CTYPE and LANG (the first of these that is set applies). The command locale | grep LC_CTYPE tells you your current setting. Usually the best way to avoid locale issues is to let the terminal emulator set LC_CTYPE , since it knows what encoding it expects; but if that's not working for you, make sure to set LC_CTYPE . The same symptoms can occur when the previous command displayed some output that didn't end in a newline, so that the prompt is displayed in the middle of the line but the shell doesn't realize that. In this case that would only happen after running such a command, not persistently. If a line isn't displayed properly, the command redisplay or clear-screen (bound to Ctrl + L by default) will fix it. | {
"source": [
"https://unix.stackexchange.com/questions/90772",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/12350/"
]
} |
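As a concrete illustration of the first cause in the answer above (question 90,772): zsh's own %F{...}%f colour escapes are already zero-width, while raw terminal escapes must be wrapped in %{...%}; for example (colours chosen arbitrarily):
PROMPT='%F{green}%n@%m%f %~ %# '
PROMPT=$'%{\e[32m%}%n@%m%{\e[0m%} %~ %# '
Both produce a coloured prompt whose width zsh computes correctly, which avoids the misalignment described in the question.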
90,775 | I wanted to ask is there any reason not to use rsync for everything and abandon cp ? I wasn't aware of rsync and now I don't know why cp is ever needed. | Strictly speaking yes, you can always use rsync . From man rsync (emphasis mine): Rsync is a fast and extraordinarily versatile file copying
tool. It can copy locally, to/from another host over any remote
shell, or to/from a remote rsync daemon. It offers a large number
of options that control every aspect of its behavior and permit
very flexible specification of the set of files to be copied. It is famous for its delta-transfer algorithm, which
reduces the amount of data sent over the network by sending only
the differences between the source files and the existing files in
the destination. Rsync is widely used for backups and mirroring
and as an improved copy command for everyday use. Now, sometimes it is just not worth typing those few extra characters just to use a tank to kill a fly. Also, rsync is often not installed by default so cp is nice to have. | {
"source": [
"https://unix.stackexchange.com/questions/90775",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/42132/"
]
} |
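A small practical note on using rsync as an everyday cp replacement, following the answer above (question 90,775): -a preserves permissions, timestamps and symlinks, and the trailing slash on the source matters:
rsync -ah --progress source/ destination/
rsync -ah --progress source destination/
The first copies the contents of source into destination; the second copies the directory source itself into destination.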