source_id | question | response | metadata
---|---|---|---|
73 | How can I set the processor affinity of a process on Linux? | I have used taskset for this. If you have taskset installed, something like: taskset -cp 0,2 45678 would set the process with id 45678 to have an affinity for CPUs 0 and 2 (the first and third CPUs, since taskset numbers CPUs from 0). | {
"source": [
"https://unix.stackexchange.com/questions/73",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/99/"
]
} |
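As a small, hedged addition to the answer above (assuming the util-linux taskset is installed; the PID is just the example from the answer), taskset can also read an existing affinity back before you change it:
# query the current CPU list of the process (no change is made)
taskset -cp 45678
# pin the process to the first and third CPUs (numbering starts at 0)
taskset -cp 0,2 45678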
80 | I've worked on *nix environments for the last four years as an application developer (mostly in C). Please suggest some books/blogs etc. for improving my *nix internals knowledge. | Here are some suggestions on how to understand the "spirit" of Unix, in addition to the fine recommendations that have been made in the previous posts: "The Unix Programming Environment" by Kernighan and Pike: an old book, but it shows the essence of the Unix environment. It will also help you become an effective shell user. "Unix for the Impatient" is a useful resource to learn to navigate the Unix environment. One of my favorites. If you want to become a power user, there is nothing better than O'Reilly's "Unix Power Tools", which consists of the collective tips and tricks from Unix professionals. Another book that I have not seen mentioned, and that is a fun, light, and educational read, is "Operating Systems: Design and Implementation", the book by Andy Tanenbaum that included the source code for a complete Unix operating system in 12k lines of code. | {
"source": [
"https://unix.stackexchange.com/questions/80",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/57/"
]
} |
105 | I have heard/read a lot about the chroot jail under Linux but have never yet used it (I use Fedora day-to-day), so what is a chroot "jail"? When and why might I use it/not use it and is there anything else I should know? How would I go about creating one? | A chroot jail is a way to isolate a process and its children from the rest of the system. It should only be used for processes that don't run as root, as root users can break out of the jail very easily. The idea is that you create a directory tree where you copy or link in all the system files needed for a process to run. You then use the chroot() system call to change the root directory to be at the base of this new tree and start the process running in that chroot'd environment. Since it can't actually reference paths outside the modified root, it can't perform operations (read/write etc.) maliciously on those locations. On Linux, using bind mounts is a great way to populate the chroot tree. Using that, you can pull in folders like /lib and /usr/lib while not pulling in /usr , for example. Just bind the directory trees you want to directories you create in the jail directory. | {
"source": [
"https://unix.stackexchange.com/questions/105",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/-1/"
]
} |
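A rough sketch of the bind-mount recipe described in the answer above. The jail path, the mounted directories and the choice of /bin/bash are illustrative assumptions, not a hardened setup, and the commands must be run as root:
# create the skeleton of the jail (path is an arbitrary example)
mkdir -p /srv/jail/{bin,lib,lib64,usr/lib}
# bind-mount only the directory trees the process actually needs
mount --bind /bin /srv/jail/bin
mount --bind /lib /srv/jail/lib
mount --bind /lib64 /srv/jail/lib64   # skip on systems without /lib64
mount --bind /usr/lib /srv/jail/usr/lib
# change the root directory and start a process inside it
chroot /srv/jail /bin/bash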
119 | When I look at a man page in my 'console' (not an xterm ) I see some coloration, but I don't get this in my xterm 's (e.g. konsole ) is there any way I can enable this? hopefully a fairly simple solution? | You need to use the termcap(5) feature. The man page on some Unices says this tool is obsolete and to use terminfo , but it's still available on others (and terminfo is more complicated). More importantly, less uses termcap . Setting colors for less I do the following so that less and man (which uses less ) will have color: $ cat ~/.LESS_TERMCAP
export LESS_TERMCAP_mb=$(tput bold; tput setaf 2) # green
export LESS_TERMCAP_md=$(tput bold; tput setaf 6) # cyan
export LESS_TERMCAP_me=$(tput sgr0)
export LESS_TERMCAP_so=$(tput bold; tput setaf 3; tput setab 4) # yellow on blue
export LESS_TERMCAP_se=$(tput rmso; tput sgr0)
export LESS_TERMCAP_us=$(tput smul; tput bold; tput setaf 7) # white
export LESS_TERMCAP_ue=$(tput rmul; tput sgr0)
export LESS_TERMCAP_mr=$(tput rev)
export LESS_TERMCAP_mh=$(tput dim)
export LESS_TERMCAP_ZN=$(tput ssubm)
export LESS_TERMCAP_ZV=$(tput rsubm)
export LESS_TERMCAP_ZO=$(tput ssupm)
export LESS_TERMCAP_ZW=$(tput rsupm)
export GROFF_NO_SGR=1         # For Konsole and Gnome-terminal
And then in my ~/.bashrc , I do this:
# Get color support for 'less'
export LESS="--RAW-CONTROL-CHARS"
# Use colors for less, man, etc.
[[ -f ~/.LESS_TERMCAP ]] && . ~/.LESS_TERMCAP
NOTE: See "Documentation on LESS_TERMCAP_* variables?" for how this works. The final result | {
"source": [
"https://unix.stackexchange.com/questions/119",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/29/"
]
} |
148 | I spend most of my time working in Unix environments and using terminal emulators. I try to use color on the command line, because color makes the output more useful and intuitive. What options exist to add color to my terminal environment? What tricks do you use? What pitfalls have you encountered? Unfortunately, support for color varies depending on terminal type, OS, TERM setting, utility, buggy implementations, etc. Here are some tips from my setup, after a lot of experimentation: I tend to set TERM=xterm-color , which is supported on most hosts (but not all). I work on a number of different hosts, different OS versions, etc. I use everything from Mac OS X, Ubuntu Linux, RHEL/CentOS/Scientific Linux and FreeBSD. I'm trying to keep things simple and generic, if possible. I do a bunch of work using GNU screen , which adds another layer of fun. Many OSes set things like dircolors by default, and I don't want to modify this on a hundred different hosts. So I try to stick with the defaults. Instead, I tweak my terminal's color configuration. Use color for some Unix commands ( ls , grep , less , vim ) and the Bash prompt . These commands seem to use the standard " ANSI escape sequences ". For example: alias less='less --RAW-CONTROL-CHARS'
export LS_OPTS='--color=auto'
alias ls='ls ${LS_OPTS}' I'll post my .bashrc and answer my own question Jeopardy Style. | Here are a couple of things you can do: Editors + Code A lot of editors have syntax highlighting support. vim and emacs have it on by default. You can also enable it under nano . You can also syntax highlight code on the terminal by using Pygments as a command-line tool. grep grep --color=auto highlights all matches. You can also use export GREP_OPTIONS='--color=auto' to make it persistent without an alias. If you use --color=always , it'll use colour even when piping , which confuses things. ls ls --color=always Colors specified by: export LS_COLORS='rs=0:di=01;34:ln=01;36:mh=00:pi=40;33' (hint: dircolors can be helpful) PS1 You can set your PS1 (shell prompt) to use colours. For example: PS1='\e[33;1m\u@\h: \e[31m\W\e[0m\$ ' Will produce a PS1 like: [yellow]lucas@ubuntu: [red]~[normal]$ You can get really creative with this. As an idea: PS1='\e[s\e[0;0H\e[1;33m\h \t\n\e[1;32mThis is my computer\e[u[\u@\h: \w]\$ ' Puts a bar at the top of your terminal with some random info. (For best results, also use alias clear="echo -e '\e[2J\n\n'" .) Getting Rid of Escape Sequences If something is stuck outputting colour when you don't want it to, I use this sed line to strip the escape sequences: sed "s/\[^[[0-9;]*[a-zA-Z]//gi" If you want a more authentic experience, you can also get rid of lines starting with \e[8m , which instructs the terminal to hide the text. (Not widely supported.) sed "s/^\[^[8m.*$//gi" Also note that those ^[s should be actual, literal ^[s. You can type them by pressing ^V^[ in bash, that is Ctrl + V , Ctrl + [ . | {
"source": [
"https://unix.stackexchange.com/questions/148",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/4/"
]
} |
158 | I sometimes compile apps from source and I've either been using: ./configure
make
sudo make install But recently, I came across ./autogen.sh which generates the configure and make scripts for me and executes them. What other methods to streamline C/C++/C#(mono) compilation exist? Make seems a bit old. Are there new tools out there? Given the choice, which one should I use? | Autoconf and Automake set out to solve an evolutionary problem of Unix. As Unix evolved in different directions, developers who wanted portable code tended to write code like this: #if RUNNING_ON_BSD
Set things up in the BSD way
#endif
#if RUNNING_ON_SYSTEMV
Set things up in the SystemV way
#endif As Unix was forked into different implementations (BSD, SystemV, many vendor forks, and later Linux and other Unix-like systems) it became important for developers that wanted to write portable code to write code that depended not on a particular brand of operating system, but on features exposed by the operating system. This is important because a Unix version would introduce a new feature, for example the "send" system call, and later other operating systems would adopt it. Instead of having a spaghetti of code that checked for brands and versions, developers started probing by features, so code became: #if HAVE_SEND
Use Send here
#else
Use something else
#endif Most README files to compile source code back in the 90's pointed developers to edit a config.h file and enable or comment out the features available on the system, or would ship standard config.h files for each operating system configuration that had been tested. This process was both cumbersome and error-prone, and this is how Autoconf came to be. You should think of Autoconf as a language made up of shell commands with special macros that was able to replace the human editing process of the config.h with a tool that probed the operating system for the functionality. You would typically write your probing code in the file configure.ac and then run the autoconf command, which would compile this file to the executable configure command that you have seen used. So when you run ./configure && make you were probing for the features available on your system and then building the executable with the configuration that was detected. When open source projects started using source code control systems, it made sense to check in the configure.ac file, but not the result of the compilation (configure). The autogen.sh is merely a small script that invokes the autoconf compiler with the right command arguments for you. -- Automake also grew out of existing practices in the community. The GNU project standardized a regular set of targets for Makefiles: make all would build the project; make clean would remove all compiled files from the project; make install would install the software; things like make dist and make distcheck would prepare the source for distribution and verify that the result was a complete source code package; and so on... Building compliant makefiles became burdensome because there was a lot of boilerplate that was repeated over and over. So Automake was a new compiler that integrated with autoconf and processed "source" Makefiles (named Makefile.am) into the Makefile.in templates that the generated configure script turns into the final Makefiles. The automake/autoconf toolchain actually uses a number of other helper tools, and they are augmented by other components for other specific tasks. As the complexity of running these commands in order grew, the need for a ready-to-run script was born, and this is where autogen.sh came from. As far as I know, GNOME was the project that introduced the use of this helper script, autogen.sh. | {
"source": [
"https://unix.stackexchange.com/questions/158",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/84/"
]
} |
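To make the configure.ac / Makefile.am flow described above concrete, here is a minimal, assumed project layout; the file contents and project name are made up, and real projects need more than this:
# configure.ac: the probing code that autoconf compiles into ./configure
cat > configure.ac <<'EOF'
AC_INIT([hello], [1.0])
AM_INIT_AUTOMAKE([foreign])
AC_PROG_CC
AC_CONFIG_FILES([Makefile])
AC_OUTPUT
EOF
# Makefile.am: the "source" Makefile that automake expands into Makefile.in
cat > Makefile.am <<'EOF'
bin_PROGRAMS = hello
hello_SOURCES = hello.c
EOF
# a trivial source file so the build has something to compile
printf 'int main(void){return 0;}\n' > hello.c
# regenerate configure and Makefile.in (roughly what an autogen.sh does)
autoreconf --install
./configure && make
On most systems, autoreconf --install runs autoconf, automake and the related helper tools in the right order, which is why many modern projects rely on it instead of a hand-written autogen.sh.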
165 | What purpose is each suitable for? | I'll just name a few pro and con points for each. This is by no means an exhaustive list, just an indication. If there are some big omissions that need to be in this list, leave a comment and I'll add them, so we get a nice, big list in one place. ext4 Pro: supported by all distros, commercial and not, and based on ext3, so it's widely tested, stable and proven; all kinds of nice features (like extents, subsecond timestamps) which ext3 does not have; ability to shrink the filesystem. Con: rumor has it that it is slower than ext3; the fsync dataloss soap. XFS Pro: support for massive filesystems (up to 8 exabytes (yes, 'exa') on 64-bit systems); online defrag; supported on upcoming RHEL6 as the 'large filesystem' option; proven track record: XFS has been around for ages. Con: Wikipedia mentions slow metadata operations, but I wouldn't know about that; potential dataloss on power cut, so a UPS is recommended and it's not really suitable for home systems; unable to shrink the filesystem - see https://xfs.org/index.php/Shrinking_Support JFS Pro: said to be fast (I have little experience with JFS); originated in AIX: proven technology. Con: used and supported by virtually no-one, except IBM (correct me if I'm wrong; I have never seen or heard about JFS used in production, though it obviously must be, somewhere). ReiserFS Pro: fast with small files; very space efficient; stable and mature. Con: not a very active project anymore, its next generation Reiser 4 has succeeded it; no online defragmenter. Reiser 4 Pro: very fast with small files; atomic transactions; very space efficient; metadata namespaces; plugin architecture (crypto, compression, dedup and metadata plugins possible). Con: Reiser4 has a very uncertain future and has not been merged yet; its main supporting distro (SuSE) dropped it years ago; Hans Reiser's 'legal issues' are not really helping. I recommend this page for further reading. | {
"source": [
"https://unix.stackexchange.com/questions/165",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/184/"
]
} |
173 | This is an issue that really limits my enjoyment of Linux. If the application isn't on a repository or if it doesn't have an installer script, then I really struggle where and how to install an application from source. Comparatively to Windows, it's easy. You're (pretty much) required to use an installer application that does all of the work in a Wizard. With Linux... not so much. So, do you have any tips or instructions on this or are there any websites that explicitly explain how, why and where to install Linux programs from source? | Normally, the project will have a website with instructions for how to build and install it. Google for that first. For the most part you will do either: Download a tarball (tar.gz or tar.bz2 file), which is a release of a specific version of the source code Extract the tarball with a command like tar zxvf myapp.tar.gz for a gzipped tarball or tar jxvf myapp.tar.bz2 for a bzipped tarball cd into the directory created above run ./configure && make && sudo make install Or: Use git or svn or whatever to pull the latest source code from their official source repository cd into the directory created above run ./autogen.sh && make && sudo make install Both configure and autogen.sh will accept a --prefix argument to specify where the software is installed. I recommend checking out Where should I put software I compile myself? for advice on the best place to install custom-built software. | {
"source": [
"https://unix.stackexchange.com/questions/173",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/193/"
]
} |
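A hedged end-to-end example of the tarball route described above, installing under the user's home directory so that sudo is not needed (the project name and URL are placeholders):
# download and unpack a release tarball (URL is a placeholder)
wget https://example.org/myapp-1.0.tar.gz
tar zxvf myapp-1.0.tar.gz
cd myapp-1.0
# configure for a per-user prefix, then build and install
./configure --prefix="$HOME/.local"
make
make install
# make sure the install location is on your PATH
export PATH="$HOME/.local/bin:$PATH"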
186 | Could I get ZFS to work properly in Linux? Are there any caveats / limitations? | ZFS is not in the official Linux kernel, and never will be unless Oracle relicenses the code under something compatible with the GPL. This incompatibility is disputed . The main arguments in favor of ZFS being allowed on Linux systems revolve around the so-called "arm's length" rule. That rule applies in this case only if ZFS is provided as a separate module from the kernel, the two communicate only through published APIs, and both code bases can function independently of each other. The claim then is that neither code base's license taints the other because neither is a derived work of the other; they are independent, but cooperate. Nevertheless, even under this interpretation, it means the ZFS modules must still be shipped separately from the Linux kernel, which is how we see it being provided today by Ubuntu . Quite separately from the CDDL vs GPL argument, NetApp claims they own patents on some technology used in ZFS. NetApp settled their lawsuit with Sun after the Oracle buyout, but that settlement doesn't protect any other Linux distributor. (Red Hat, Ubuntu, SuSE...) As I see it, these are your alternatives: Use btrfs instead, as it has similar features to ZFS but doesn't have the GPL license conflict and has been in the mainline kernel for testing since 2.6.29 (released in January 2009). The main problem with btrfs is that it's had a long history of problems with its RAID 5/6 functionality . These problems are being worked out, but each time one of these problems surfaces, it resets the "stability clock." Another concern is that Red Hat have indicated that the next release of Red Hat Enterprise Linux will not include btrfs. One of the reasons Red Hat is taking that position on btrfs is that they have a plan to offer similar functionality using a different technology stack they are calling Stratis. Therefore, another option you have is to wait for Stratis to appear, with 1.0 scheduled for the first half of 2018, presumably to coincide with Red Hat Enterprise Linux 8. Use a different OS for your file server (FreeBSD, say) and use NFS to connect it to your Linux boxes Use ZFS on FUSE , a userspace implementation, which works neatly around the kernel licensing issue at the expense of a significant amount of performance Integrate ZFS on Linux after installing the OS. The license conflict makes distributing the combined system outside your organization legally questionable. I am not a lawyer, but my sense is that, patent issues aside, distributing ZFS on Linux is about as worrisome as distributing non-GPL binary drivers (such as those for certain video cards) with the system. If one of these bothers you, the other should, too. Switch to Ubuntu, which has been shipping ZFS kernel modules with the OS since 16.04. Canonical believes that it is legally safe to distribute the ZFS kernel module with the OS itself. You would have to decide whether you trust Canonical's opinion; consider also that they may not be willing to indemnify you if a legal issue comes up. Beware that it is not currently possible to boot from ZFS with Ubuntu without a whole lot of manual hackery . Incidentally, btrfs is also backed by Oracle, but was started years before the Sun acquisition. I don't believe the two will ever merge, or one be deprecated in favor of the other due to the license conflict and patent issue. ZFS is too popular to go away, but there will continue to be demand for a ZFS alternative. | {
"source": [
"https://unix.stackexchange.com/questions/186",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/207/"
]
} |
207 | If I set up cron jobs incorrectly they appear to silently fail. Where should I look for an error log to understand what went wrong? | You can always explicitly send the job output to a log file: 0 8 * * * /usr/local/bin/myjob > /var/log/myjob.log 2>&1 Keep in mind that this will supersede the mail behaviour that has been mentioned before, because crond itself won't receive any output from the job. If you want to keep that behaviour you should look into tee(1). | {
"source": [
"https://unix.stackexchange.com/questions/207",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/122/"
]
} |
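If you want to keep crond's mail behaviour and still get a log file, the tee(1) approach hinted at in the answer might look like this (the log path is an example):
# tee writes the output to the log and also passes it through,
# so crond still receives it and sends the usual mail
0 8 * * * /usr/local/bin/myjob 2>&1 | tee -a /var/log/myjob.log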
232 | Which harmless pranks do you know that would be great to play on your colleagues? | I do not know if this qualifies as a prank, but you can watch Star Wars in a shell! telnet towel.blinkenlights.nl | {
"source": [
"https://unix.stackexchange.com/questions/232",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/78/"
]
} |
256 | I've seen some people make a separate partition for /boot . What is the benefit of doing this? What problems might I encounter in the future by doing this? Also, except for /home and /boot , which partitions can be separated? Is it recommended? | This is a holdover from "ye olde tymes" when machines had trouble addressing large hard drives. The idea behind the /boot partition was to make the partition always accessible to any machine that the drive was plugged into. If the machine could get to the start of the drive (lower cylinder numbers) then it could bootstrap the system; from there the linux kernel would be able to bypass the BIOS boot restriction and work around the problem. As modern machines have lifted that restriction, there is no longer a fixed need for /boot to be separate, unless you require additional processing of the other partitions, such as encryption or file systems that are not natively recognized by the bootloader. Technically, you can get away with a single partition and be just fine, provided that you are not using really really old hardware (pre-1998 or so). If you do decide to use a separate partition, just be sure to give it adequate room, say 200mb of space. That will be more than enough for several kernel upgrades (which consume several megs each time). If /boot starts to fill up, remove older kernels that you don't use and adjust your bootloader to recognize this fact. | {
"source": [
"https://unix.stackexchange.com/questions/256",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/184/"
]
} |
272 | Given two Linux boxes on a LAN, what's the simplest way to transfer files between them? | I use scp . scp source desthost:/path/to/dest/. to copy from the local machine to the remote machine, or scp srchost:/path/to/file/file . to copy from a remote machine to the local machine. If the username is not the same on the remote machine, scp user@srchost:/path/to/file/file . | {
"source": [
"https://unix.stackexchange.com/questions/272",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/289/"
]
} |
305 | How can I get tiling windows in GNOME? | It depends a bit what you mean by tiling: Permanently tiling or just temporary to get an overview and select a window? If you use compiz ("Desktop effects") the latter is possible by pressing Super+W (Super is normally the "Windows"-Key). For "permanent" tiling: Tile windows with compiz: Install the Compiz Settings manager (e.g. package compizconfig-settings-manager for Ubuntu) and the additional plugins (again for Ubuntu: compiz-fusion-plugins-extra ). Then activate the Grid plugin -- then you can use Ctrl+Alt and a number on your keypad to move and resize the window so that it fits an imaginary grid. This allows very comfortable tiling. Tile windows without compiz: If you do not use compiz, there is a gnome applet that allows tiling: http://www.giuspen.com/x-tile/ | {
"source": [
"https://unix.stackexchange.com/questions/305",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/301/"
]
} |
333 | I have a couple of machines at home (plus a number of Linux boxes running in VMs) and I am planning to use one of them as a centralized file server. Since I am more of a Linux user than a sysadmin, I'd like to know what is the equivalent of, let's say, "Active Directory"? My objective is to have my files on any of the machines that I log on to in my network. | You either build your own Active Directory equivalent from Kerberos and OpenLDAP (Active Directory basically is Kerberos and LDAP, anyway) and use a tool like Puppet (or OpenLDAP itself) for something resembling policies, or you use FreeIPA as an integrated solution. There's also a wide range of commercially supported LDAP servers for Linux, like Red Hat Directory Server. RHDS (like 389 Server, which is the free version of RHDS) has a nice Java GUI for management of the directory. It does neither Kerberos nor policies, though. Personally, I really like the FreeIPA project and I think it has a lot of potential. A commercially supported version of FreeIPA is included in standard RHEL6 subscriptions, I believe. That said, what you are asking about is more like a fileserver solution than an authentication solution (which is what AD is). If you want your files on all machines you log into, you have to set up an NFS server and export an NFS share from your fileserver to your network. NFSv3 has IP-range based ACLs; NFSv4 would be able to do proper authentication with Kerberos and combines nicely with the authentication options I described above. If you have Windows boxes on your network, you will want to set up a Samba server, which can share out your files to Linux and Windows boxes alike. Samba3 can also function as an NT4-style domain controller, whereas Samba4 is able to mimic a Windows 2003-style domain controller. | {
"source": [
"https://unix.stackexchange.com/questions/333",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/48/"
]
} |
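To make the NFS suggestion above concrete, a minimal setup might look like the sketch below; the export path, network range, hostname and mount point are all assumptions to adapt to your LAN:
# /etc/exports on the file server
/srv/home  192.168.1.0/24(rw,sync,no_subtree_check)
# re-export without restarting the NFS server
exportfs -ra
# on each client
mount -t nfs fileserver:/srv/home /mnt/home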
364 | The setuid permission bit tells Linux to run a program with the effective user id of the owner instead of the executor: > cat setuid-test.c
#include <stdio.h>
#include <unistd.h>
int main(int argc, char** argv) {
printf("%d", geteuid());
return 0;
}
> gcc -o setuid-test setuid-test.c
> ./setuid-test
1000
> sudo chown nobody ./setuid-test; sudo chmod +s ./setuid-test
> ./setuid-test
65534 However, this only applies to executables; shell scripts ignore the setuid bit: > cat setuid-test2
#!/bin/bash
id -u
> ./setuid-test2
1000
> sudo chown nobody ./setuid-test2; sudo chmod +s ./setuid-test2
> ./setuid-test2
1000 Wikipedia says : Due to the increased likelihood of security flaws, many operating systems ignore the setuid attribute when applied to executable shell scripts. Assuming I'm willing to accept those risks, is there any way to tell Linux to treat the setuid bit the same on shell scripts as it does on executables? If not, is there a common workaround for this problem? My current solution is to add a sudoers entry to allow ALL to run a given script as the user I want it run as, with NOPASSWD to avoid the password prompt. The main downsides to that is the need for a sudoers entry every time I want to do this, and the need for the caller to sudo some-script instead of just some-script | Linux ignores the setuid¹ bit on all interpreted executables (i.e. executables starting with a #! line). The comp.unix.questions FAQ explains the security problems with setuid shell scripts. These problems are of two kinds: shebang-related and shell-related; I go into more details below. If you don't care about security and want to allow setuid scripts, under Linux, you'll need to patch the kernel. As of 3.x kernels, I think you need to add a call to install_exec_creds in the load_script function, before the call to open_exec , but I haven't tested. Setuid shebang There is a race condition inherent to the way shebang ( #! ) is typically implemented: The kernel opens the executable, and finds that it starts with #! . The kernel closes the executable and opens the interpreter instead. The kernel inserts the path to the script to the argument list (as argv[1] ), and executes the interpreter. If setuid scripts are allowed with this implementation, an attacker can invoke an arbitrary script by creating a symbolic link to an existing setuid script, executing it, and arranging to change the link after the kernel has performed step 1 and before the interpreter gets around to opening its first argument. For this reason, most unices ignore the setuid bit when they detect a shebang. One way to secure this implementation would be for the kernel to lock the script file until the interpreter has opened it (note that this must prevent not only unlinking or overwriting the file, but also renaming any directory in the path). But unix systems tend to shy away from mandatory locks, and symbolic links would make a correct lock feature especially difficult and invasive. I don't think anyone does it this way. A few unix systems (mainly OpenBSD, NetBSD and Mac OS X, all of which require a kernel setting to be enabled) implement secure setuid shebang using an additional feature: the path /dev/fd/ N refers to the file already opened on file descriptor N (so opening /dev/fd/ N is roughly equivalent to dup( N ) ). Many unix systems (including Linux) have /dev/fd but not setuid scripts. The kernel opens the executable, and finds that it starts with #! . Let's say the file descriptor for the executable is 3. The kernel opens the interpreter. The kernel inserts /dev/fd/3 the argument list (as argv[1] ), and executes the interpreter. Sven Mascheck's shebang page has a lot of information on shebang across unices, including setuid support . Setuid interpreters Let's assume you've managed to make your program run as root, either because your OS supports setuid shebang or because you've used a native binary wrapper (such as sudo ). Have you opened a security hole? Maybe . The issue here is not about interpreted vs compiled programs. The issue is whether your runtime system behaves safely if executed with privileges. 
Any dynamically linked native binary executable is in a way interpreted by the dynamic loader (e.g. /lib/ld.so ), which loads the dynamic libraries required by the program. On many unices, you can configure the search path for dynamic libraries through the environment ( LD_LIBRARY_PATH is a common name for the environment variable), and even load additional libraries into all executed binaries ( LD_PRELOAD ). The invoker of the program can execute arbitrary code in that program's context by placing a specially-crafted libc.so in $LD_LIBRARY_PATH (amongst other tactics). All sane systems ignore the LD_* variables in setuid executables. In shells such as sh, csh and derivatives, environment variables automatically become shell parameters. Through parameters such as PATH , IFS , and many more, the invoker of the script has many opportunities to execute arbitrary code in the shell scripts's context. Some shells set these variables to sane defaults if they detect that the script has been invoked with privileges, but I don't know that there is any particular implementation that I would trust. Most runtime environments (whether native, bytecode or interpreted) have similar features. Few take special precautions in setuid executables, though the ones that run native code often don't do anything fancier than dynamic linking (which does take precautions). Perl is a notable exception. It explicitly supports setuid scripts in a secure way. In fact, your script can run setuid even if your OS ignored the setuid bit on scripts. This is because perl ships with a setuid root helper that performs the necessary checks and reinvokes the interpreter on the desired scripts with the desired privileges. This is explained in the perlsec manual. It used to be that setuid perl scripts needed #!/usr/bin/suidperl -wT instead of #!/usr/bin/perl -wT , but on most modern systems, #!/usr/bin/perl -wT is sufficient. Note that using a native binary wrapper does nothing in itself to prevent these problems . In fact, it can make the situation worse , because it might prevent your runtime environment from detecting that it is invoked with privileges and bypassing its runtime configurability. A native binary wrapper can make a shell script safe if the wrapper sanitizes the environment . The script must take care not to make too many assumptions (e.g. about the current directory) but this goes. You can use sudo for this provided that it's set up to sanitize the environment. Blacklisting variables is error-prone, so always whitelist. With sudo, make sure that the env_reset option is turned on, that setenv is off, and that env_file and env_keep only contain innocuous variables. TL,DR: Setuid shebang is insecure but usually ignored. If you run a program with privileges (either through sudo or setuid), write native code or perl, or start the program with a wrapper that sanitizes the environment (such as sudo with the env_reset option). ¹ This discussion applies equally if you substitute “setgid” for “setuid”; they are both ignored by the Linux kernel on scripts | {
"source": [
"https://unix.stackexchange.com/questions/364",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/73/"
]
} |
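Since the question mentions the sudoers workaround, here is roughly what such an entry can look like, combined with the env_reset advice from the answer. The user, group and script path are made-up examples; always edit sudoers with visudo:
# /etc/sudoers.d/backup-script  (edit with: visudo -f /etc/sudoers.d/backup-script)
Defaults env_reset
# allow members of group "staff" to run one specific script as user "backup"
%staff ALL=(backup) NOPASSWD: /usr/local/bin/backup.sh
Callers would then run sudo -u backup /usr/local/bin/backup.sh rather than invoking the script directly.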
366 | Say I have the following file: $ cat test
test line 1
test line 2
line without the search word
another line without it
test line 3 with two test words
test line 4 By default, grep returns each line that contains the search term: $ grep test test
test line 1
test line 2
test line 3 with two test words
test line 4 Passing the --color parameter to grep will make it highlight the portion of the line that matches the search expression, but it still only returns lines that contain the expression. Is there a way to get grep to output every line in the source file, but highlight the matches? My current terrible hack to accomplish this (at least on files that don't have 10000+ consecutive lines with no matches) is: $ grep -B 9999 -A 9999 test test If grep can't accomplish this, is there another command-line tool that offers the same functionality? I've fiddled with ack , but it doesn't seem to have an option for it either. | grep --color -E "test|$" yourfile What we're doing here is matching against the $ pattern and the test pattern; obviously $ doesn't have anything to colourize, so only the test pattern gets color. The -E just turns on extended regex matching. You can create a function out of it easily like this: highlight () { grep --color -E "$1|$" "${@:2}" ; } | {
"source": [
"https://unix.stackexchange.com/questions/366",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/73/"
]
} |
375 | I don't remember why they did this, but at one point X.org decided that disabling Ctrl+Alt+Backspace to kill it was a good idea. I know there's a way to re-enable it but I don't remember how. Can someone refresh my memory? | Modify /etc/X11/xorg.conf or a .conf file in /etc/X11/xorg.conf.d/ with the following. (Note: it is OK if this is all you have in your xorg.conf; Xorg will still auto-detect the rest, assuming auto-detection already works for you without it.) Section "ServerFlags"
Option "DontZap" "false"
EndSection
Section "InputClass"
Identifier "Keyboard Defaults"
MatchIsKeyboard "yes"
Option "XkbOptions" "terminate:ctrl_alt_bksp"
EndSection | {
"source": [
"https://unix.stackexchange.com/questions/375",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/29/"
]
} |
393 | PCI Express slots on the motherboard can be wider than the number of lanes connected.
For example, a motherboard can have an x8 slot with only one lane connected. On the other hand, you can insert a card that uses only, say, 4 lanes into an x16 slot on the motherboard, and they will negotiate to use only those 4 lanes. How can I check, from the running system, how many lanes are used by the inserted PCIe cards? | OK, it seems I missed it on the first try in the lspci man page. Note: run the command as root/sudo, otherwise a lot of detail is omitted, including the Lnk output shown below. lspci -vv displays a lot of information, including link width: 01:00.0 VGA compatible controller: nVidia Corporation G92 [GeForce 8800 GT] (rev a2) (prog-if 00 [VGA controller])
[...]
LnkCap: Port #0, Speed 2.5GT/s, Width x16, ASPM L0s L1, Latency L0 <512ns, L1 <1us
ClockPM- Surprise- LLActRep- BwNot-
LnkCtl: ASPM Disabled; RCB 128 bytes Disabled- Retrain- CommClk+
ExtSynch- ClockPM- AutWidDis- BWInt- AutBWInt-
LnkSta: Speed 2.5GT/s, Width x8, TrErr- Train- SlotClk+ DLActive- BWMgmt- ABWMgmt- | {
"source": [
"https://unix.stackexchange.com/questions/393",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/97/"
]
} |
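To check all devices at once instead of scrolling through the full output, a simple filter over lspci -vv is usually enough; LnkCap shows the maximum width the device supports and LnkSta the width that was actually negotiated:
# print each device header plus its link capability and status lines
sudo lspci -vv | grep -E '^[0-9a-f]{2}:|LnkCap:|LnkSta:'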
467 | I would like to open up a file using less, and have it automatically scroll the file similar to tail -f . I know that I can do less file , and then hit Shift-F to forward forever; like tail -f . I need less because it provides the --raw-control-chars flag, which is necessary because my input is colorful. | use the command "F" while inside of less. less mylogfile.txt
F or, to do so automatically, use the +cmd option: less +F mylogfile.txt | {
"source": [
"https://unix.stackexchange.com/questions/467",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/4/"
]
} |
475 | I'm aware that shared objects under Linux use "so numbers", namely that different versions of a shared object are given different extensions, for example: example.so.1 example.so.2 I understand the idea is to have two distinct files such that two versions of a library can exist on a system (as opposed to "DLL Hell" on Windows). I'd like to know how this works in practice? Often, I see that example.so is in fact a symbolic link to example.so.2 where .2 is the latest version. How then does an application depending on an older version of example.so identify it correctly? Are there any rules as to what numbers one must use? Or is this simply convention? Is it the case that, unlike in Windows where software binaries are transferred between systems, if a system has a newer version of a shared object it is linked to the older version automatically when compiling from source? I suspect this is related to ldconfig but I'm not sure how. | Binaries themselves know which version of a shared library they depend on, and request it specifically. You can use ldd to show the dependencies; mine for ls are: $ ldd /bin/ls
linux-gate.so.1 => (0xb784e000)
librt.so.1 => /lib/librt.so.1 (0xb782c000)
libacl.so.1 => /lib/libacl.so.1 (0xb7824000)
libc.so.6 => /lib/libc.so.6 (0xb76dc000)
libpthread.so.0 => /lib/libpthread.so.0 (0xb76c3000)
/lib/ld-linux.so.2 (0xb784f000)
libattr.so.1 => /lib/libattr.so.1 (0xb76bd000) As you can see, it points to e.g. libpthread.so.0 , not just libpthread.so . The symbolic link exists for the sake of the linker. When you want to link against libpthread.so directly, you give gcc the flag -lpthread , and it adds on the lib prefix and .so suffix automatically. You can't tell it to add on the .so.0 suffix, so the symbolic link points to the newest version of the lib to facilitate that. | {
"source": [
"https://unix.stackexchange.com/questions/475",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/-1/"
]
} |
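The version a binary asks for is recorded as the library's SONAME. If you want to see both sides of this, readelf can show the NEEDED entries of a binary and the SONAME of a library (the library path follows the example output above and may differ on your system):
# libraries the binary requests, by SONAME
readelf -d /bin/ls | grep NEEDED
# the name this library announces itself under
readelf -d /lib/libpthread.so.0 | grep SONAME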
479 | I sometimes have long running processes that I want to kick off before going home, so I create a SSH session to the server to start the process, but then I want to close my laptop and go home and later, after dinner, I want to check on the process that I started before leaving work. How can I do that with SSH? My understanding is that if you break your SSH connection you will also break your login session on the server, therefore killing the long running process. | Use nohup to make your process ignore the hangup signal: $ nohup long-running-process &
$ exit | {
"source": [
"https://unix.stackexchange.com/questions/479",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/244/"
]
} |
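When you come back later, nohup writes the job's output to nohup.out by default (unless you redirected it yourself), so checking on it might look like this; the process name is the placeholder from the answer:
# is it still running?
pgrep -fl long-running-process
# look at what it has written so far
tail -f nohup.out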
500 | Some of my Linux & FreeBSD systems have dozens of users. Staff will use these "ssh gateway" nodes to SSH into other internal servers. We're concerned that some of these people use an unencrypted private SSH key (a key without a passphrase). This is bad, because if a cracker ever gained access to their account on this machine, they could steal the private key and would then have access to any machine which uses this same key. For security reasons, we require all users to encrypt their private SSH keys with a passphrase. How can I tell if a private key is not encrypted (i.e. does not have a passphrase)? Is there a different method to do this on an ASCII-armored key vs. a non-ASCII-armored key? Update: To clarify, assume I have superuser access on the machine and I can read everybody's private keys. | Well, OpenSSH private keys with empty passphrases are actually not encrypted. Encrypted private keys are declared as such in the private key file. For instance: -----BEGIN RSA PRIVATE KEY-----
Proc-Type: 4,ENCRYPTED
DEK-Info: DES-EDE3-CBC,7BD2F97F977F71FC
BT8CqbQa7nUrtrmMfK2okQLtspAsZJu0ql5LFMnLdTvTj5Sgow7rlGmee5wVuqCI
/clilpIuXtVDH4picQlMcR+pV5Qjkx7BztMscx4RCmcvuWhGeANYgPnav97Tn/zp
...
-----END RSA PRIVATE KEY----- So something like # grep -L ENCRYPTED /home/*/.ssh/id_[rd]sa should do the trick. | {
"source": [
"https://unix.stackexchange.com/questions/500",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/4/"
]
} |
502 | How about sharing your favorite lessons learned moments? | I was curious if chmod 000 / would work. Well, flawlessly. A few minutes later I was searching for a rescue CD. | {
"source": [
"https://unix.stackexchange.com/questions/502",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/203/"
]
} |
541 | Is there a better way to search my history file for a command than grep? I do have some idea what the command starts as, but I don't know how far back in the history it is. update: was formerly zsh specific but due to overlapping answers feel free to answer for any shell (or mode (vi/emacs)) here, just note if it is specific. | Ctrl + R is usually the best way, as descriptor said . You can also use !string , which runs the most recent command starting with string , or !?string? , which runs the most recent command that contains string . (I think that's the only stuff relevant to this question, but I covered much more of the history commands in this answer ) | {
"source": [
"https://unix.stackexchange.com/questions/541",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/29/"
]
} |
547 | I use bindkey -v (for bash-ers set -o vi I think that works in zsh too) or vi(m) mode. but it bugs me that I don't have any visual cue to tell me whether I'm in insert mode or command mode. Does anyone know how I can make my prompt display the mode? | I found this via SU . Here's the basic example, though I'm still customizing it for myself: function zle-line-init zle-keymap-select {
RPS1="${${KEYMAP/vicmd/-- NORMAL --}/(main|viins)/-- INSERT --}"
RPS2=$RPS1
zle reset-prompt
}
zle -N zle-line-init
zle -N zle-keymap-select I'd explain it except I don't really understand it yet | {
"source": [
"https://unix.stackexchange.com/questions/547",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/29/"
]
} |
549 | Browsing through questions I found about tmux (I normally used GNU Screen). My question is what are pros and cons of each of them. Especially I couldn't find much about tmux. | From their website : How is tmux different from GNU screen? What else does it offer? tmux offers several advantages over screen: a clearly-defined client-server model: windows are independent entities which
may be attached simultaneously to multiple sessions and viewed from multiple
clients (terminals), as well as moved freely between sessions within the same
tmux server; a consistent, well-documented command interface, with the same syntax
whether used interactively, as a key binding, or from the shell; easily scriptable from the shell; multiple paste buffers; choice of vi or emacs key layouts; an option to limit the window size; a more usable status line syntax, with the ability to display the first line
of output of a specific command; a cleaner, modern, easily extended, BSD-licensed codebase. There are still a few features screen includes that tmux omits: builtin serial and telnet support; this is bloat and is unlikely to be added
to tmux; wider platform support, for example IRIX and HP-UX, and for odd terminals. | {
"source": [
"https://unix.stackexchange.com/questions/549",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/305/"
]
} |
554 | I would like to monitor one process's memory / cpu usage in real time. Similar to top but targeted at only one process, preferably with a history graph of some sort. | On Linux, top actually supports focusing on a single process, although it naturally doesn't have a history graph: top -p PID This is also available on Mac OS X with a different syntax: top -pid PID | {
"source": [
"https://unix.stackexchange.com/questions/554",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/163/"
]
} |
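If you also want a rough history rather than just the live view, one assumed workaround is to sample ps in a loop and keep the output for later plotting (replace PID and the interval as needed):
# append a timestamped CPU/memory sample every 5 seconds
while sleep 5; do
    echo "$(date '+%T') $(ps -o %cpu=,%mem=,rss= -p PID)"
done >> process-usage.log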
566 | When using ssh -X is the executable copied and run locally or is it run on the host machine. Since it is called X11 forwarding it makes me think that the window is drawn on my machine but running on the host. | The executable is run on the remote machine and displayed (drawn) on the local machine. What ssh -X remote does is start up a proxy X11 server on the remote machine. If you do echo $DISPLAY on the remote machine, you should see something like localhost:21.0 . That is telling the program running on the remote machine to send drawing commands to the X11 server with id 21. This then forwards those commands to the real X11 server running on the local machine, which draws on your screen. This forwarding happens over an encrypted ssh connection, so they can't be (easily) listened to. Unlike Windows, Mac OS, etc, X11 was designed from the beginning to be able to run programs across a network, without needing things like remote desktop. For a while, X11 thin clients were popular. It is basically a stripped down computer that only runs a X11 server. All of the programs run on some application server somewhere. | {
"source": [
"https://unix.stackexchange.com/questions/566",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/171/"
]
} |
582 | I frequently find myself SSHing into various OS X machines, and it'd be useful if I could tell what version of OS X I was on when I'm doing that. uname -a doesn't quite work, since Darwin kernel versions don't always change with the rest of the system. | sw_vers My suggestion is to use sw_vers . Example output as of 10.6.4: > sw_vers
ProductName: Mac OS X
ProductVersion: 10.6.4
BuildVersion: 10F569 The answer that suggested system_profiler | grep 'System Version' is what I have tried to use in the past, but it has 2 problems. It is slow, since it generates a full system_profiler dump of the machine, gathering all hardware and software inventory information. The output of system_profiler has changed over time, e.g. the output of grep for 'Serial Number' on 10.6.4 is "Serial Number (system): ZNNNNNZNZZZ", whereas on 10.4.11 it was "Serial Number: ZNNNNZNZZZZ" - what matters is the parse-ability of the output, and the added " (system)" piece can be problematic unless you are expecting the change. | {
"source": [
"https://unix.stackexchange.com/questions/582",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/40/"
]
} |
604 | When I do more filename and less filename , it would seem that the resulting terminals are quite similar. I can navigate and search through my files identically ( j , Space , /pattern , etc.). I find it hard to believe that less is more and vice versa. Are there any differences between the two? | The difference is mostly historical at this point, I believe some systems even have more and less hardlinked to the same binary. Originally, more pretty much only allowed you to move forward in a file, but was pretty decent for buffering output. less was written as an improved more that allowed you to scroll around the displayed text The first line of my man less pretty much sums it up: Less is a program similar to more, but which allows backward
movement in the file as well as forward movement. | {
"source": [
"https://unix.stackexchange.com/questions/604",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/446/"
]
} |
615 | I was googling this a bit ago and noticed a couple of ways, but I'm guessing that google doesn't know all. So how do you kick users off your Linux box? also how do you go about seeing they are logged in in the first place? and related... does your method work if the user is logged into an X11 DE (not a requirement I'm just curious)? | There's probably an easier way, but I do this: See who's logged into your machine -- use who or w : > who
mmrozek tty1 Aug 17 10:03
mmrozek pts/3 Aug 17 10:09 (:pts/2:S.0) Look up the process ID of the shell their TTY is connected to: > ps t
PID TTY STAT TIME COMMAND
30737 pts/3 Ss 0:00 zsh Laugh at their impending disconnection (this step is optional, but encouraged) > echo "HAHAHAHAHAHAHAHA" | write mmrozek pts/3 Kill the corresponding process: > kill -9 30737 I just discovered you can combine steps 1 and 2 by giving who the -u flag; the PID is the number off to the right: > who -u
mmrozek tty1 Aug 17 10:03 09:01 9250
mmrozek pts/18 Aug 17 10:09 01:46 19467 (:pts/2:S.0) | {
"source": [
"https://unix.stackexchange.com/questions/615",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/29/"
]
} |
629 | I installed Ubuntu on a computer that is now used by somebody else. I renamed the account with her name, but it only changes the fullname, not the user name, which is still displayed in the top right (in the fast-user-switch-applet ). Is there a command to rename an Unix user account? I've thought of creating a new user account with the new name, and then copying everything in the "old" home to the home of the new account. Would it be enough? But then I think the files would have the old account's permissions' owner? So should I do chown -R newuser ~ ? Is there a simpler/recommended way to do this? | Try usermod --move-home --login <new-login-name> --home <new-home-dir> <old-login-name> The --move-home option moves the old home directory's contents to the new one given by the --home option which is created if it doesn't already exist. If you want the primary user group to match the new-login-name , add --gid <new-login-name> to the command above, but the group must be pre-existing. See the man page for more info: man usermod | {
"source": [
"https://unix.stackexchange.com/questions/629",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/475/"
]
} |
634 | For whatever reasons, I've always used RPM based distributions (Fedora, Centos and currently openSUSE). I have often heard it stated that deb is better than rpm, but when asked why, have never been able to get a coherent answer (usually get some zealous ranting and copious amounts of spittle instead). I understand there may be some historical reasons, but for modern distributions using the two different packaging methods, can anybody give the technical (or other) merits of one vs. the other? | Main difference for a package maintainer (I think that would be 'developer' in Debian lingo) is the way package meta-data and accompanying scripts come together. In the RPM world, all your packages (the RPMs you maintain) are located in something like ~/rpmbuild . Underneath, there is the SPEC directory for your spec-files, a SOURCES directory for source tarballs, RPMS and SRPMS directories to put newly created RPMs and SRPMs into, and some other things that are not relevant now. Everything that has to do with how the RPM will be created is in the spec-file: what patches will be applied, possible pre- and post-scripts, meta-data, changelog, everything. All source tarballs and all patches of all your packages are in SOURCES. Now, personally, I like the fact that everything goes into the spec-file, and that the spec-file is a separate entity from the source tarball, but I'm not overly enthusiastic about having all sources in SOURCES. IMHO, SOURCES gets cluttered pretty quick and you tend to lose track of what is in there. However, opinions differ. For RPMs it is important to use the exact same tarball as the one the upstream project releases, up to the timestamp. Generally, there are no exceptions to this rule. Debian packages also require the same tarball as upstream, though Debian policy requires some tarballs to be repackaged (thanks, Umang). Debian packages take a different approach. (Forgive any mistakes here: I am a lot less experienced with deb's that I am with RPM's.) Debian packages' development files are contained in a directory per package. What I (think to) like about this approach is the fact that everything is contained in a single directory. In the Debian world, it is a bit more accepted to carry patches in a package that are not (yet) upstream. In the RPM world (at least among the Red Hat derivatives) this is frowned upon. See "FedoraProject: Staying close to upstream projects" . Also, Debian has a vast amount of scripts that are able to automate a huge portion of creating a package. For example, creating a - simple - package of a setuptool'ed Python program, is as simple as creating a couple of meta-data files and running debuild . That said, the spec-file for such package in RPM format would be pretty short and in the RPM world, too, there's a lot of stuff that is automated these days. | {
"source": [
"https://unix.stackexchange.com/questions/634",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/487/"
]
} |
636 | I know with mkdir I can do mkdir A B C D E F to create each directory. How do I create directories A-Z or 1-100 with out typing in each letter or number? | The {} syntax is Bash syntax not tied to the for construct. mkdir {A..Z} is sufficient all by itself. http://www.gnu.org/software/bash/manual/bashref.html#Brace-Expansion | {
"source": [
"https://unix.stackexchange.com/questions/636",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/171/"
]
} |
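The same brace expansion covers the numeric half of the question; a zero-padded variant (bash 4 and later) is shown as an extra, assumed nicety:
mkdir {1..100}        # directories 1 through 100
mkdir dir_{01..10}    # zero-padded names: dir_01 ... dir_10 (bash 4+)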
658 | My /etc/fstab contains this: # / was on /dev/sda1 during installation
UUID=77d8da74-a690-481a-86d5-9beab5a8e842 / ext4 errors=remount-ro 0 1 There are several other disks on this system, and not all disks are being mounted to the correct location (For example, /dev/sda1 and /dev/sdb1 are sometimes reversed). How can I see the UUIDs for all disks on my system? Can I see the UUID for the third disk on this system? | In /dev/disk/by-uuid there are symlinks mapping each drive's UUID to its entry in /dev (e.g. /dev/sda1 ) You can view these with the command ls -lha /dev/disk/by-uuid | {
"source": [
"https://unix.stackexchange.com/questions/658",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/4/"
]
} |
682 | I would like to know how the OS works in a nutshell : The basic components it's built upon How those components work together What makes unix UNIX What makes it so different from other OSs like Windows | A UNIX system consists of several parts, or layers as I'd like to call them. To start a system, a program called the boot loader lives at the first sector of a hard disk partition. It is started by the system, and in turn it locates the Operating System kernel, and load it. Layering The Kernel. This is the central program which is started by the boot loader. It does the basic hardware interaction for the system (disk, memory, video, sound) and offers a virtual environment in which it can start programs. The kernel also ships all drivers which deal with all the little differences between hardware devices. To the outside world (the higher layers), each class of devices appear to behave exactly in the same consistent way - which in turn, the programs can build upon. Background subsystems. There are just regular programs, which just stay out of your way. They handle things like remote login, provide a cental message bus, and do actions based on hardware/network events. For example, bluetooth discovery, wifi management, etc.. Any network services (file server, print server, web server) also live at this level. In UNIX systems, these are all just normal programs. The command line tools. These are all little programs which can be started to do things like text editing, downloading files, or administrating the system. At this point, a UNIX system is fully usable for system adminstrators. In Windows, this layer doesn't really exist anymore. The graphical user interface. These are also just programs, the only difference is they draw windows at the screen instead of writing text. This makes the system easier to use for regular users. Any service or event will go from the bottom all up to the top. Libraries - the common platform Programs do a lot of common things like displaying a window, drawing stuff at the screen or downloading a file. These things are the same for multiple programs, hence that code are put in separate "library" files ( .so files - meaning shared object). The library can be shared across all programs. For every imaginable thing, there is a library. There is one for reading/writing PNG files. There is one for JPEG files, for reading XML, for encryption, for video playback, and so on. On Linux, the common libraries for application developers are Qt and Gtk. These libraries use lower-level libraries internally for their specific needs, while exposing their functionality in a nice consistent and concise way for application developers to create applications even faster. Libraries provide the application platform, on which programmers can build end user applications for an Operating System. The more high quality libraries a system provides, the fewer code a programmer has to write to make a beautiful program. Some libraries can be used across different operating systems (for instance, Qt is), some are really specifically tied into one operating system. This will restrict your program to be able to run at that platform only. Inter process communication A third corner piece of an operating system, is the way programs can communicate with each other. These are Inter Process Communication (IPC) machanisms. These exist in several flavors, e.g. a piece of shared memory, or a small channel is set up between two programs to exchange data. 
There is also a central message bus on which each program can post a message, and receive a response. This is used for global communication, where it's unknown which program can respond. From libraries to Operating Systems With libraries, IPC and the kernel in place, programmers can build all kinds of applications for system services, user administration, configuration, administration, office work, entertainment, etc. This forms the complete suite which novice users recognize as the "operating system". In UNIX/Linux systems, all services are just programs. All system admin tools are just programs. They all do their job, and they can be chained together. I've summarized a lot of major programs at http://codingdomain.com/linux/sysadmin/ Distinguishing features compared with Windows UNIX is mainly a system of programs, files and restricted permissions. A lot of complexity is avoided, which makes it a powerful system that looks like it is doing an easy job. In detail, these are principles which can be found across UNIX/Linux systems: There are uniform ways to access information. ("Everything is just a file"). You can open a file, network socket, IPC channel, kernel parameters and block device as a file. Hence the appearance of the virtual filesystems in /dev, /sys and /proc. The only API you ever need is open , read and close . The underlying system is transparent. Every program operates under the same rules. Unlike Windows, there is no artificial difference between a "console program", "gui program" or "background service". They are all just programs that happen to do different things. They can also all be observed, analyzed and debugged in the same way. Settings are readable, editable, and can be annotated with comments. They typically have an INI-style format, but may use a custom format for the needs of that application. Because they are just files, they can be copied to other systems, archived or backed up with standard tools. No large "do it all at once" applications. The mantra is "do one thing, do it well". Command line tools can be chained and together be powerful. Separate services (e.g. SMTP, IMAP and POP, and login) are separate subprograms, avoiding complex intertwined code and security issues. Complex desktop environments delegate hard work to individual programs. fork() . New programs are started by an existing program cloning itself. The clone sets up everything (e.g. file handles), and optionally replaces itself with the new program code. This makes it really easy to apply the same security settings and restrictions to new programs, share memory or set up an IPC mechanism. The cost of starting a process is also very low. The file system is one tree, in which other disk partitions and network shares can be mounted. There is, again, a universal way of accessing data. Common system locations (e.g. /usr ) can easily be mounted as a network share. The system is built for low user privileges. After login, every user (except root) is confined to their own resources, running applications and files only. Network services reduce their privileges as soon as possible. There is a single clear way to get more privileges, or to ask someone to execute a privileged job on your behalf. Every other call is limited by the restrictions and limitations of the program. Every program stores settings in a hidden file/folder of the user home directory. No program ever attempts to write a global setting file. 
A preference for openly described communication mechanisms over secret or specific 1-to-1 mechanisms. Other vendors and software developers are encouraged to follow the same specification, so things can easily be connected, swapped out and yet stay loosely coupled. | {
"source": [
"https://unix.stackexchange.com/questions/682",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/570/"
]
} |
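As a small, concrete illustration of the "everything is just a file" principle and the chaining of small single-purpose tools described above, the following shell session is only a sketch and assumes a typical Linux system where /proc and /dev/urandom are available:
$ cat /proc/sys/kernel/hostname          # a kernel parameter, read like an ordinary file
$ head -c 8 /dev/urandom | od -An -tx1   # a device node, read like a file and piped into another small tool
$ grep -c processor /proc/cpuinfo        # chain two single-purpose programs to count CPUs
Each command only needs the open/read/close interface on things that merely look like files, which is exactly the uniformity the answer describes.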
685 | So recently a Debian 5.0.5 installer offered to create separate /usr , /home , /var and /tmp partitions (on one physical disk). What is the practical reason for this? I understand that /home can be advantageous to put on a separate partition, because user files can be encrypted separately, but why for anything else? | Minimizing loss: If /usr is on a separate partition, a damaged /usr does not mean that you cannot recover /etc . Security: / cannot always be ro ( /root may need to be rw etc.) but /usr can. This can be used to keep as much of the system read-only as possible. Using different filesystems: I may want to use a different filesystem for /tmp (not reliable but fast for many files) and /home (has to be reliable). Similarly, /var contains data while /usr does not, so the stability of /usr can be sacrificed, but not as much as that of /tmp . Duration of fsck: Smaller partitions mean that checking one is faster. Others have mentioned protection against filling up a partition, although quotas are another way to handle that. | {
"source": [
"https://unix.stackexchange.com/questions/685",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/210/"
]
} |
695 | Discussing with Mac owners, I got several versions of where Mac OS X comes from. It is known to have some root in BSD, but how much, and where? Some say that Mac OS X has a FreeBSD kernel, with all the utilities above it that make it an OS being Mac specific. (Not speaking about user apps here, only all of the init , ls , cd , and others. binutils? ) Others say Mac OS X is a Darwin kernel, that is pure Mac, and that the OS utilities come from BSD. Where's the truth? | The history of MacOS is a little bit more convoluted. I was very interested in this in the late 90's as Mach had been pitched around the world as a faster way of building a Unix system. The origin of the kernel is a bit more complicated. It all starts with AT&T distributing their operating system to some universities for free. This Unix was improved extensively at Berkeley, became the foundation for the BSD variations of Unix, and incorporated several new innovations like the "Fast File System" (UFS), symlinks and the sockets API. AT&T went their own way and built System V at the same time. Meanwhile, research continued and some folks adopted the work from BSD as a foundation. At CMU, the BSD kernel was used as the foundation for prototyping a few new ideas: threads, an API to control the virtual memory system (through pluggable "pagers" - user level mmap), a kernel-level remote procedure call system and most importantly the idea of moving some kernel level operations to user space. This became the Mach kernel. I am not 100% sure if mmap came from Mach, and later was adopted by BSD, or if Mach merely pioneered the idea and BSD added their own mmap based on the ideas of Mach. Although the Mach kernel was described as a micro-kernel, up to version 2.5 it was merely a system that provided the thread, mmap and message passing features but remained a monolithic kernel; all the services were running in kernel mode. At this time Rick Rashid (now at Microsoft) and Avie Tevanian (now at Apple) had come up with a novel idea that could accelerate Unix. The idea was to use the mmap system call to pass data to be copied from user space to the "servers" implementing the file system. This idea was essentially a variation of trying to avoid making copies of the same data, but it was pitched as a benefit of micro kernels, even if the feature could be isolated from a micro kernel. The benchmarks of this VM-backed faster Unix system are what drove people at NeXT and at the FSF to pick Mach as the foundation for their kernels. NeXT went with the Mach 2.5 kernel (which was based on either BSD 4.2 or 4.3) and GNU would not actually start on the work for years. This is what the NeXTSTEP operating systems were using. Meanwhile at CMU, work continued on Mach and they finally realized the vision of having multiple servers running on top of a micro kernel with version 3.0. I am not aware of anyone in the wild being able to run Mach 3.0, as all of the interesting user-level servers used AT&T code and were considered encumbered, so it remained a research product. Around this time the Jolitz team had done a port of 4.3+ BSD to the 386 architecture and published their porting efforts in Dr. Dobb's. 386BSD was not actively maintained and a group emerged to maintain and move 386BSD forward, the NetBSD team. Internal fights within the NetBSD group caused the first split and FreeBSD was formed out of this.
NetBSD at the time wanted to focus on having a cross-platform BSD, and FreeBSD wanted to focus on having a Unix that did great on x86 platforms. A little bit later, NetBSD split again due to some other disputes and this led to the creation of OpenBSD. A fork of BSD 4.3 for x86 platforms went commercial with a company called BSDi, and various members of the original Berkeley team worked there and kept good relations with the BSD team at the University. AT&T was not amused and started the AT&T vs BSDi lawsuit, which was later expanded to sue the University as well. The lawsuit was about BSDi using proprietary code from AT&T that had not been rewritten by Berkeley. This set back BSD compared to the up and coming Linux operating system. Although things were not looking good for the defendants, at some point someone realized that SystemV had incorporated large chunks of BSD code under the BSD license and AT&T had not fulfilled their obligations in the license. A settlement was reached in which AT&T would not have to pull their product from the market, and the University agreed to rip out any code that could still be based on AT&T code. The university then released two versions of BSD: 4.4 encumbered and 4.4 lite. The encumbered version would boot and run, but contained AT&T code. The lite version did not contain any code from AT&T but did not work. The various BSD efforts re-did their work on top of the new 4.4 lite release and had a booting system within months. Meanwhile, the Mach 3.0 micro kernel remained not very useful without any of the user-land servers. A student from a Scandinavian university (I believe, I might have this wrong) was the first one to create a full Mach 3.0 system with a complete OS based on the 4.4 lite release; I believe this was called "Lites". The system worked, but was slow. By 1992-1996, BSD already had an mmap() system call, as did most other Unix systems. The "micro kernel advantage" never really came to fruition. NeXT still had a monolithic kernel. The FSF was still trying to get Mach to build, and not wanting to touch the BSD code or contribute to any of the open source BSD efforts, they kept charging away at a poorly specified kernel vision and were drowning in RPC protocols for their own kernel. The micro kernel looked great on paper, but turned out to be over engineered and just made everything slower. At this point we also had the Linus vs Andy debate over micro-kernels vs monolithic kernels, and the world started to realize that it was just impossible to add all of those extra cycles to a micro kernel and still come out ahead of a well designed monolithic kernel. Apple had not yet acquired NeXTSTEP, but was also starting to look into Mach as a potential kernel for their future operating systems. They hired the Open Software Foundation to port Linux to the Mach kernel, and this was done out of their Grenoble offices; I believe this was called "MkLinux". When Apple bought NeXT, what they had on their hands was a relatively old Unix foundation (a 4.2- or 4.3-based Unix), and by then not even free software ran well out of the box on those systems. They hired Jordan Hubbard away from FreeBSD to upgrade their Unix stack. His team was responsible for upgrading the user land, and it is not a surprise that the MacOS userland was upgraded to the latest versions available on BSD. Apple did switch their Mach from 2.5 to 3.0 at some point, but decided not to go with the micro-kernel approach and instead kept everything in-process.
I have never been able to confirm if Apple used Lites, hired the Scandinavian hacker, or if they adopted the 4.4 lite as their OS. I suspect they did, but I had already moved on to Linux and had stopped tracking the BSD/Mach world. There was a rumor in the late 90's that Avie at Apple tried to hire Linus (who was already famous at this point) to work on his baby, but Linus chose to continue working on Linux. History aside, this page describes the userland and the Mach/Unix kernel: http://developer.apple.com/mac/library/documentation/Darwin/Conceptual/KernelProgramming/Architecture/Architecture.html#//apple_ref/doc/uid/TP30000905-CH1g-CACDAEDC I found this graphic of the history of OSX: | {
"source": [
"https://unix.stackexchange.com/questions/695",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/559/"
]
} |
719 | Is there some way to know how a binary was built, under Linux? (and/or other Unix) Compiler, version, time, flags etc... I looked at readelf and couldn't find much, but there might be other ways of analyzing the binary code/sections etc... Anything you know how to extract? | There isn't a universal way, but you can make an educated guess by looking for things only done by one compiler. GCC is the easiest; it writes a .comment section that contains the GCC version string (the same string you get if you run gcc --version ). I don't know if there's a way to display it with readelf , but with objdump it's: objdump -s --section .comment /path/binary I just realized I ignored the rest of your question. Flags aren't generally saved anywhere; they would be in a comment section most likely, but I've never seen that done. There's a spot in the COFF header for a timestamp, but there's no equivalent in ELF, so I don't think the compile time is available either. | {
"source": [
"https://unix.stackexchange.com/questions/719",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/76/"
]
} |
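A small addition to the .comment trick above: readelf can also dump that section, and strings is a cruder fallback when you just want to spot a compiler banner. The /path/binary below is a placeholder for whatever executable you are inspecting:
$ readelf -p .comment /path/binary              # print the strings stored in the .comment section
$ strings /path/binary | grep -iE 'gcc|clang'   # scan all embedded strings for a compiler version banner
If the binary was stripped of its .comment section, both of these may come up empty.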
723 | I have used Ubuntu/Fedora/Red Hat/Suse but haven't used OS X at all. If I have to start using OS X regularly, what are the things I should look out for? Tools I use are GNU tool chain, C++/Boost, etc. | I made the same move years ago. Here are the things I've run into: Your average desktop Linux has a richer userland than that of OS X. You'll probably miss different tools than I did, so no sense getting specific about recommendations for replacements. Instead, just install Fink , MacPorts , or Homebrew first thing. These systems provide a package management system typical of Linux or the BSDs. Each has its own flavor and package set, so the right choice will be based on your tastes and needs. You may find that no one package system will have every program you need. Some programs have yet to be ported to OS X, so they won't appear in any package system. Nevertheless, these systems do greatly extend what ships with OS X, and will ease your transition from Linux. OS X command line compilers now build 64-bit executables by default. In Leopard and earlier, the compilers built 32-bit executables by default instead. This can cause problems in several ways: maybe you have old 32-bit libraries you can't rebuild but have to link to, maybe you're still running your system in 32-bit mode, etc. One way to force a 32-bit build is to override gcc defaults in build systems with gcc-4.0 , that being the old 32-bits-by-default Leopard compiler. ( gcc is a symlink to the 64-bits-by-default gcc-4.2 on Snow Leopard.) With autoconf based build systems, this works: $ ./configure CC=gcc-4.0 CXX=g++-4.0 (You only need the CXX bit if the program contains C++ components.) Another way is to pass -m32 to the compiler and linker: $ ./configure CFLAGS=-m32 CXXFLAGS=-m32 LDFLAGS=-m32 It's more typing, but it lets you get 32-bit builds out of the newer GCC. Dynamic linkage is vastly different. If you're the sort to write your ld commands by hand, it's time to break that habit. You should instead be linking programs and libraries through the compiler, or using an intermediary like libtool . These take care of the niggly platform-specific link scheme differences, so you can save the brain power for learning programs you can't abstract away with portable mechanisms. For instance, you'll need to update your muscle memory so you type otool -L someprogram instead of ldd someprogram to figure out what libraries someprogram is linked to. Another difference in dynamic linkage that will twist your brain at first is that on OS X, the install location for a library is recorded in the library itself , and the linker copies that into the executable at link time. This means that if you link to a library that got installed in /usr/local/lib but you want to ship it to your users in the same directory as the executable, you need to say something like this as part of your install process: $ cp /usr/local/lib/libfoo.dylib .
$ install_name_tool -id @loader_path/libfoo.dylib libfoo.dylib
$ make LDFLAGS=-L. relink Now, much of the above is likely to vary for your build system, so just take it as an example, rather than a recipe. This makes a private copy of a library we link to, changes its shared library identifier from an absolute path to a relative one meaning "in the same directory as the executable", then forces a rebuild of the executable against this modified copy of the library. install_name_tool is the core command here. If instead you wanted to install the library in a ../lib directory relative to the executable, the -id argument would need to be @loader_path/../lib/libfoo.dylib instead. Joe Di Pol wrote a good article on this , with a lot more detail. Dynamic linkage + third party packages can cause headaches early on. You are likely to run into dynamic linkage issues early on, as soon as you start trying to use libraries from third-party packages that don't install the libraries into standard locations. MacPorts does this, for example, installing libraries into /opt/local/lib , rather than /usr/lib or /usr/local/lib . When you run into this, a good fix for the problem is to add lines like the following to your .bash_profile : # Tell the dynamic linker (dyld) where to find MacPorts package libs
export DYLD_LIBRARY_PATH=/opt/local/lib:$DYLD_LIBRARY_PATH
# Add MacPorts header file install dirs to your gcc and g++ include paths
export C_INCLUDE_PATH=/opt/local/include:$C_INCLUDE_PATH
export CPLUS_INCLUDE_PATH=/opt/local/include:$CPLUS_INCLUDE_PATH OS X handles the CPU compatibility issue differently than Linux. On a 64-bit Linux where you have to also support 32-bit for whatever reason, you end up with two copies of things like libraries that need to be in both formats, with the 64-bit versions off in a lib64 directory parallel to the traditional lib directory. OS X solves this problem differently, with the Universal binary concept, which lets you put multiple binaries into a single file. You can currently have executables that support up to 4 CPU types: 32- and 64-bit PowerPC, plus 32- and 64-bit Intel. It's easy to build Universal binaries with Xcode, but a bit of a pain with command line tools. This gets you a Universal Intel-only build with Autoconf-based build systems: $ ./configure --disable-dependency-tracking CFLAGS='-arch i386 -arch x86_64' \
LDFLAGS='-arch i386 -arch x86_64' Add -arch ppc -arch ppc64 to the CFLAGS and LDFLAGS if you need PowerPC support. If you don't disable dependency tracking, you end up building only for one platform, since the presence of newly-built .o files for the first platform convinces make(1) that it doesn't need to build for the second platform, too. Everything has to be built twice in the above example; four times for a fully-Universal binary, if you still need PowerPC support. (More info in Apple Technical Note TN2137 .) The developer tools aren't installed on OS X by default. Before Lion, the most reliable place to get the right dev tools for your system was on the OS discs. They're an optional install. The nice thing about installing the dev tools from the OS discs is that you know the tools will work with the OS. Apple being Apple, you have to have a recent version of the OS to run the latest compilers, and they haven't always made downloads of old tools available, so the OS discs are often the easiest way to find the right tools for a given dev or test box. With Lion, they're trying to do away with install media, so unless you buy the expensive USB key version, you'll have to download Xcode from the App Store . I recommend you keep at least a few versions of any Xcode DMGs you download. When Lion's successor comes out a year or three hence, you might find yourself without a way to install a contemporaneous version of Xcode on a Lion test VM. Plan ahead in case the availability problems and lack of OS media make old versions of Xcode otherwise unobtainable. | {
"source": [
"https://unix.stackexchange.com/questions/723",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/452/"
]
} |
767 | I know that both apt-get and aptitude are command line package management interfaces on Debian derived Linux, with different options, but I'm still somewhat confused. Under the hood, aren't they using the same APT system? Why does Debian maintain these parallel tools? (Bonus question: what on earth is wajig ?) | The most obvious difference is that aptitude provides a terminal menu interface (much like Synaptic in a terminal), whereas apt-get does not. Considering only the command-line interfaces of each, they are quite similar, and for the most part, it really doesn't matter which one you use. Recent versions of both will track which packages were manually installed, and which were installed as dependencies (and therefore eligible for automatic removal). In fact, I believe that even more recently, the two tools were updated to actually share the same database of manually vs automatically installed packages, so cases where you install something with apt-get and then aptitude wants to uninstall it are mostly a thing of the past. There are a few minor differences: aptitude will automatically remove eligible packages, whereas apt-get requires a separate command to do so The commands for upgrade vs. dist-upgrade have been renamed in aptitude to the probably more accurate names safe-upgrade and full-upgrade , respectively. aptitude actually performs the functions of not just apt-get, but also some of its companion tools, such as apt-cache and apt-mark. aptitude has a slightly different query syntax for searching (compared to apt-cache) aptitude has the why and why-not commands to tell you which manually installed packages are preventing an action that you might want to take. If the actions (installing, removing, updating packages) that you want to take cause conflicts, aptitude can suggest several potential resolutions. apt-get will just say "I'm sorry Dave, I can't allow you to do that." There are other small differences, but those are the most important ones that I can think of. In short, aptitude more properly belongs in the category with Synaptic and other higher-level package manager frontends. It just happens to also have a command-line interface that resembles apt-get. Bonus Round: What is wajig? Remember how I mentioned those "companion" tools like apt-cache and apt-mark ? Well, there's a bunch of them, and if you use them a lot, you might not remember which ones provide which commands. wajig is one solution to that problem. It is essentially a dispatcher, a wrapper around all of those tools. It also applies sudo when necessary. When you say wajig install foo , wajig says "Ok, install is provided by apt-get and requires admin privileges," and it runs sudo apt-get install foo . When you say wajig search foo , wajig says "Ok, search is provided by apt-cache and does not require admin privileges," and it runs apt-cache search foo . If you use wajig instead of apt-get, apt-mark, apt-cache and others, then you'll never have this problem: $ apt-get search foo
E: Invalid operation search If you want to know what wajig is doing behind the scenes, which tools it is using to implement a particular command, it has --simulate and --teaching modes. Two wajig commands that I often use are wajig listfiles foo and wajig whichpkg /usr/bin/foo . | {
"source": [
"https://unix.stackexchange.com/questions/767",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/614/"
]
} |
797 | I am trying to understand how a function, say mkdir , works by looking at the kernel source. This is an attempt to understand the kernel internals and navigate between various functions. I know mkdir is defined in sys/stat.h . I found the prototype: /* Create a new directory named PATH, with permission bits MODE. */
extern int mkdir (__const char *__path, __mode_t __mode)
__THROW __nonnull ((1)); Now I need to see in which C file this function is implemented. From the source directory, I tried ack "int mkdir" which displayed security/inode.c
103:static int mkdir(struct inode *dir, struct dentry *dentry, int mode)
tools/perf/util/util.c
4:int mkdir_p(char *path, mode_t mode)
tools/perf/util/util.h
259:int mkdir_p(char *path, mode_t mode); But none of them matches the definition in sys/stat.h . Questions Which file has the mkdir implementation? With a function definition like the above, how can I find out which file has the implementation? Is there any pattern which the kernel follows in defining and implementing methods? NOTE: I am using kernel 2.6.36-rc1 . | System calls aren't handled like regular function calls. It takes special code to make the transition from user space to kernel space, basically a bit of inline assembly code injected into your program at the call site. The kernel side code that "catches" the system call is also low-level stuff you probably don't need to understand deeply, at least at first. In include/linux/syscalls.h under your kernel source directory, you find this: asmlinkage long sys_mkdir(const char __user *pathname, int mode); Then in /usr/include/asm*/unistd.h , you find this: #define __NR_mkdir 83
__SYSCALL(__NR_mkdir, sys_mkdir) This code is saying mkdir(2) is system call #83. That is to say, system calls are called by number, not by address as with a normal function call within your own program or to a function in a library linked to your program. The inline assembly glue code I mentioned above uses this to make the transition from user to kernel space, taking your parameters along with it. Another bit of evidence that things are a little weird here is that there isn't always a strict parameter list for system calls: open(2) , for instance, can take either 2 or 3 parameters. That means open(2) is overloaded , a feature of C++, not C, yet the syscall interface is C-compatible. (This is not the same thing as C's varargs feature , which allows a single function to take a variable number of arguments.) To answer your first question, there is no single file where mkdir() exists. Linux supports many different file systems and each one has its own implementation of the "mkdir" operation. The abstraction layer that lets the kernel hide all that behind a single system call is called the VFS . So, you probably want to start digging in fs/namei.c , with vfs_mkdir() . The actual implementations of the low-level file system modifying code are elsewhere. For instance, the ext4 implementation is called ext4_mkdir() , defined in fs/ext4/namei.c . As for your second question, yes there are patterns to all this, but not a single rule. What you actually need is a fairly broad understanding of how the kernel works in order to figure out where you should look for any particular system call. Not all system calls involve the VFS, so their kernel-side call chains don't all start in fs/namei.c . mmap(2) , for instance, starts in mm/mmap.c , because it's part of the memory management ("mm") subsystem of the kernel. I recommend you get a copy of " Understanding the Linux Kernel " by Bovet and Cesati. | {
"source": [
"https://unix.stackexchange.com/questions/797",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/515/"
]
} |
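To watch the user-space-to-kernel transition described above actually happen, you can trace the mkdir program with strace. This is just a sketch: depending on your C library the call may be issued as mkdir or as the newer mkdirat, so both are traced here, and /tmp/demo-dir is an arbitrary example path:
$ strace -e trace=mkdir,mkdirat mkdir /tmp/demo-dir
The trace prints a line such as mkdir("/tmp/demo-dir", 0777) = 0 (or the mkdirat equivalent), which is the moment the C library hands your request over to sys_mkdir in the kernel via its system call number.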
813 | I have a Linux instance that I set up some time ago. When I fire it up and log in as root there are some environment variables that I set up but I can't remember or find where they came from. I've checked ~/.bash_profile , /etc/.bash_rc , and all the startup
scripts. I've run find and grep to no avail. I feel like I must be forgetting to look in some place obvious. Is there a trick for figuring this out? | If zsh is your login shell: zsh -xl With bash : PS4='+$BASH_SOURCE> ' BASH_XTRACEFD=7 bash -xl 7>&2 That will simulate a login shell and show everything that is done (except in areas where stderr is redirected with zsh ) along with the name of the file currently being interpreted. So all you need to do is look for the name of your environment variable in that output. (You can use the script command to help you store the whole shell session output, or for the bash approach, use 7> file.log instead of 7>&2 to store the xtrace output to file.log instead of on the terminal.) If your variable is not in there, then the shell probably inherited it at startup, so it was set earlier: in the PAM configuration, in ~/.ssh/environment , in files read at X11 session startup ( ~/.xinitrc , ~/.xsession ), in the service definition that started your login manager, or even earlier in some boot script. Then a find /etc -type f -exec grep -Fw THE_VAR {} + may help. | {
"source": [
"https://unix.stackexchange.com/questions/813",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/684/"
]
} |
892 | For Windows, I think Process Explorer shows you all the threads under a process. Is there a similar command line utility for Linux that can show me details about all the threads a particular process is spawning? I think I should have made myself more clear. I do not want to see the process hierarchy, but a list of all the threads spawned by a particular process. See this screenshot. How can this be achieved in Linux? Thanks! | The classical tool top shows processes by default but can be told to show threads with the H key press or -H command line option. There is also htop , which is similar to top but has scrolling and colors; it shows all threads by default (but this can be turned off). ps also has a few options to show threads, especially H and -L . There are also GUI tools that can show information about threads, for example qps (a simple GUI wrapper around ps ) or conky (a system monitor with lots of configuration options). For each process, a lot of information is available in /proc/12345 where 12345 is the process ID. Information on each thread is available in /proc/12345/task/67890 where 67890 is the kernel thread ID. This is where ps , top and other tools get their information. | {
"source": [
"https://unix.stackexchange.com/questions/892",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/912/"
]
} |
905 | What benefit could I see by compiling a Linux kernel myself? Is there some efficiency you could create by customizing it to your hardware? | In my mind, the only benefit you really get from compiling your own linux kernel is: You learn how to compile your own linux kernel. It's not something you need to do for more speed / memory / xxx whatever. It is a valuable thing to do if that's the stage you feel you are at in your development. If you want to have a deeper understanding of what this whole "open source" thing is about, about how and what the different parts of the kernel are, then you should give it a go. If you are just looking to speed up your boot time by 3 seconds, then... what's the point... go buy an ssd. If you are curious, if you want to learn, then compiling your own kernel is a great idea and you will likely get a lot out of it. With that said, there are some specific reasons when it would be appropriate to compile your own kernel (as several people have pointed out in the other answers). Generally these arise out of a specific need you have for a specific outcome, for example: I need to get the system to boot/run on hardware with limited resources I need to test out a patch and provide feedback to the developers I need to disable something that is causing a conflict I need to develop the linux kernel I need to enable support for my unsupported hardware I need to improve performance of x because I am hitting the current limits of the system (and I know what I'm doing) The issue lies in thinking that there's some intrinsic benefit to compiling your own kernel when everything is already working the way it should be, and I don't think that there is. Though you can spend countless hours disabling things you don't need and tweaking the things that are tweakable, the fact is the linux kernel is already pretty well tuned (by your distribution) for most user situations. | {
"source": [
"https://unix.stackexchange.com/questions/905",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/328/"
]
} |
922 | I wanted to be clever and compare a remote file to a local file without first manually downloading it. I can get the contents of the remote file by ssh user@remote-host "cat path/file.name" However, piping that to diff ssh user@remote-host "cat path/file.name" | diff path/file.name gives me this: diff: missing operand after `path/file.name'
diff: Try `diff --help' for more information. I have ssh keys set up, so it's not prompting me for a password. What's a workaround for this? | Use - to represent the standard input: ssh user@remote-host "cat path/file.name" | diff path/file.name - | {
"source": [
"https://unix.stackexchange.com/questions/922",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/394/"
]
} |
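If your shell is bash or zsh, process substitution gives an equivalent one-liner that keeps the local file as the first operand and avoids the explicit - argument; this is just a variant of the approach above, with the same placeholder host and path:
$ diff path/file.name <(ssh user@remote-host "cat path/file.name")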
940 | I have a UNIX timestamp and I'd like to get a formatted date (like the output of date ) corresponding to that timestamp. My attempts so far: $ date +%s
1282367908
$ date -d 1282367908
date: invalid date `1282367908'
$ date -d +1282367908
date: invalid date `+1282367908'
$ date +%s -d +1282367908
date: invalid date `+1282367908' I'd like to be able to get output like: $ TZ=UTC somecommand 1282368345
Sat Aug 21 05:25:45 UTC 2010 | On Mac OS X and BSD: $ date -r 1282368345
Sat Aug 21 07:25:45 CEST 2010
$ date -r 1282368345 +%Y-%m-%d
2010-08-21 with GNU core tools (you have to dig through the info file for that): $ date -d @1282368345
Sat Aug 21 07:25:45 CEST 2010
$ date -d @1282368345 --rfc-3339=date
2010-08-21 With either, add the -u (standard) option, or pass a TZ=UTC0 environment variable to have the UTC date ( TZ=UTC0 defines a timezone called UTC with offset 0 from UTC while the behaviour for TZ=UTC (with no offset) is unspecified (though on most systems would refer to a system-defined timezone also called UTC with offset 0 from UTC)). | {
"source": [
"https://unix.stackexchange.com/questions/940",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/595/"
]
} |
969 | I have been using tcsh for a long time now. But whenever I am searching for something, I often find that the methods specified are bash specific. Even the syntax for the shell scripts is different for the two. From what I have experienced searching and learning on the internet, bash seems to be the more common shell used. Even the number of questions on this site tagged bash is far higher (five times more currently) than the number of questions tagged tcsh . So, I am wondering whether I should switch to bash. What do you think? Why should I stick to tcsh OR why should I move over to bash ? | After learning bash I find that tcsh is a bit of a step backwards. For instance, things I could easily do in bash I find difficult to do in tcsh. See my question on tcsh . The Internet support and documentation are also much better for bash and very limited for tcsh. The O'Reilly books on bash are great, but I have found nothing similar for tcsh. | {
"source": [
"https://unix.stackexchange.com/questions/969",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/912/"
]
} |
983 | As a Linux user, I've always just used bash because it was the default on every distro I used. People using other Unix systems such as BSD seem to use other shells far more frequently. In the interests of learning a bit more, I've decided to try out zsh. As a bash user: What features will I miss? And what ones should I look out for? | For a more extensive answer, read https://apple.stackexchange.com/questions/361870/what-are-the-practical-differences-between-bash-and-zsh/361957#361957 There's already been quite a bit of activity on the topic on other Stack Exchange sites. My experience of switching from bash to zsh, as far as I can remember (it was years ago²), is that I didn't miss a single thing. I gained a lot; here are what I think are the simple zsh-specific features that I use most: The zsh feature I most miss when I occasionally use bash is autocd: in zsh, executing a directory means changing to it, provided you turn on the autocd option.⁴ Another very useful feature is the fancy globbing. The hieroglyphic characters are a bit hard to remember but extremely convenient (as in, it's often faster to look them up than to write the equivalent find command). A few of the simpler examples: foo*~*.bak = all matches for foo* except those matching *.bak foo*(.) = only regular files matching foo* foo*(/) = only directories matching foo* foo*(-@) = only dangling symbolic links matching foo* foo*(om[1,10]) = the 10 most recent files matching foo* foo*(Lm+1) = only files of size > 1MB dir/**/foo* = foo* in the directory dir and all its subdirectories, recursively⁴ For fancy renames, the zmv builtin can be handy. For example, to copy every file to file .bak : zmv -C '(*)(#q.)' '$1.bak' Both bash and zsh have a decent completion system that needs to be turned on explicitly ( . /etc/bash_completion or autoload -U compinit; compinit ). Zsh's is much more configurable and generally fancier. If you run zsh without a .zshrc , it starts an interactive menu that lets you choose configuration options. (Some distributions may disable this; in that case, run autoload zsh-newuser-install; zsh-newuser-install .) I recommend enabling the recommended history options, turning on (“new-style”) completion, and turning on the “common shell options” except beep . Later, configure more options as you discover them. ² At the time programmable completion was zsh's killer feature, but bash acquired it soon after. ⁴ Features that bash acquired only in version 4 (so were not widely available at the time this answer was posted, and are not available on the system-provided bash on macOS) are in smaller type. | {
"source": [
"https://unix.stackexchange.com/questions/983",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/131/"
]
} |
986 | How would you compare these editors? What are the pros and cons of each? [ note ] This is not meant to be answered by those who "hate one and love another" or those who haven't used both. | I use both, although if I had to choose one, I know which one I would pick. Still, I'll try to make an objective comparison on a few issues. Available everywhere? If you're a professional system administrator who works with Unix systems, or a power user on embedded devices (routers, smartphones with Busybox, …), you need to know vi (not Vim), because it's available on all Unix systems and most Unix-like systems, whether desktop, server or embedded. For an ordinary user, this argument is irrelevant: Emacs is easily available for every desktop/server OS, and since it supports remote editing, it's enough to have it on your desktop machine anyway. Bloated? Emacs once stood humorously for “Eight Megabytes And Constantly Swapping”. Right now, on my machine, Google Chrome needs about as much RAM per tab as Emacs does for 100 open files, and I won't even mention Firefox. In the 21st century, Emacs bloat is just a myth. Feature bloat isn't a problem either. If you don't use it, you don't have to know it's there. Emacs features keep out of the way when you don't use them and the documentation is very well organized. Startup time : Vi(m) proponents complain about Emacs's startup time. Yes, Emacs is slow to start up, but this is not a big deal: you start Emacs once per session, then connect to the running process with emacsclient . So Emacs's slow startup is mostly a myth. There's one exception, which is when you log in to a remote machine and want to edit a file there. Starting a remote Emacs is (usually) slower than starting a remote Vim. In some situations you can keep an Emacs running inside Screen. You can also edit remote files from within Emacs, but it does break the flow if you're in an ssh session in a terminal. (Since XEmacs 21 or GNU Emacs 23, you can open an Emacs window from a running X instance inside a terminal.) Turning the tables, I have observed Vim taking noticeably longer to load than Emacs ( vim -u /dev/null vs. emacs -q ). Admittedly this was on a weird platform (Cygwin). Initial learning curve: This varies from person to person. Michael Mrozek's graph made me chuckle. Seriously, I agree that Vim's learning curve starts steep, steeper than any other editor, although this can be lessened by using gvim. Since I've dispelled a couple of Emacs myths, let me dispel a vi myth: a modal editor is not hard or painful to use. It takes a little habit, but after a while it feels very natural. If I was to redesign vi(m), I'd definitely keep the modes. Asymptotic learning curve: Both Vim and Emacs have a lot of features, and you will keep discovering new ones after years of use. Productivity : This is an extremely hard topic. Proponents of vi(m) argue that you can do pretty much everything without leaving the home row, and that makes you more efficient when you need it most. Proponents of Emacs retort that Emacs has a lot of commands that are not frequently used, so don't warrant a key binding, but are damn convenient when you need them ( obligatory xkcd reference ). My personal opinion is that Emacs ultimately wins unless you have a typing disability (and even then you can configure Emacs to require only key sequences and not combinations like Ctrl +letter). Home row keys are nice, but they often aren't that much of a win because you have to switch modes. 
I don't think there's anything Vim can do significantly more efficiently than Emacs, whereas the converse is true. Customizability : Both editors are programmable, and there is an extensive body of available packages for both. However, Vim is an editor with a macro language; Emacs is an editor written in Lisp with some ad-hoc primitives. Emacs wins spectacularly when you try to do something that the authors just didn't think of. This doesn't happen every day, but it does accumulate over the years. More than an editor : Vim is an editor. Emacs is not just an editor: it's also an IDE, a file manager, a terminal emulator, a web browser, a mail client, a news client, ... Whether that's a good thing or a bad thing is a matter for debate. But you can use Emacs as a mere editor (see “feature bloat” above). As an IDE : Both Vim and Emacs have support for a lot of programming languages and other text formats. Beyond the basics such as syntactic coloring and automatic indentation, both have advanced IDE features such as code and documentation cross-reference lookups, assisted insertions and refactoring, integrated version control, and the ability to initiate a compilation and jump to the first error. One domain where Emacs is plain better than Vim is interaction with asynchronous subprocesses. That's when you start a long compilation and want to do something else inside the same editor instance while the compiler is churning. Or when you want to interact with a Read-eval-print loop — Emacs really shines at this, Vim only has clumsy hacks to offer. Nevertheless, a new fork of vim, Neovim has proved to have fixed this and implemented other various bug-fixes not implemented in stock vim. | {
"source": [
"https://unix.stackexchange.com/questions/986",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/1070/"
]
} |
996 | I renamed a few files in one batch script. Is there a way to undo the changes without having to rename them back? Does Linux provide some native way of undo ing? | Linux (like other unices) doesn't natively provide an undo feature. The philosophy is that if it's gone, it's gone. If it was important, it should have been backed up. There is a fuse filesystem that automatically keeps copies of old versions: copyfs . Of course, that can use a lot of resources. Unfortunately, it's unmaintained. Gitfs might be an alternative, I've never tried it. The best way to protect against such accidents is to use a version control system (cvs, bazaar, darcs, git, mercurial, subversion, ...). It takes a little time to learn, but it pays off awesomely in the medium and long term. | {
"source": [
"https://unix.stackexchange.com/questions/996",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/912/"
]
} |
1,045 | I have 256 colors working just fine in konsole. I thought I'd give tmux a try because, unlike screen, it seems to support vi mode. However I find that the colors of my prompt do not show up correctly, and this is most likely because I have a 256 color mode prompt. What do I need to do to get tmux to recognize all 256 colors? | The Tmux FAQ explicitly advises against setting TERM to anything other than screen or screen-256color or tmux or tmux-256color in your shell init file, so don't do it! Here's what I use: ~$ which tmux
tmux: aliased to TERM=xterm-256color tmux and in my .tmux.conf: set -g default-terminal "screen-256color" Aliasing tmux to " tmux -2 " should also do the trick. And don't forget to restart your tmux server (see @mast3r answer ): tmux kill-server && tmux | {
"source": [
"https://unix.stackexchange.com/questions/1045",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/29/"
]
} |
1,052 | For personal linux on my personal notebooks, I've usually set my environment to autologin as root even under X or lower runlevels. I've found my workflow is very pleasant and fast, without any cumbersome need to type su or sudo or being asked by keyring or auth or something. So far I've never had any problem with it, so why are most people freaking out about it? Is the concern overrated? Of course this assumes the user knows what they are doing and doesn't really care about system reliability and security issues. | For the same reasons why each daemon should have minimal rights. Apache can run as root. It is designed to perform one task, and surely nothing bad can happen? But assume apache is not bug-free. Bugs are discovered from time to time. Sometimes it can even be arbitrary code execution or similar. Now with apache running as root, it can access anything — for example it can load a rootkit into the kernel and hide itself. On the other hand, writing a user-level rootkit is very hard. It has to override different programs (like ps ) inside /home , which can raise suspicion due to the extra disk space used. It might not know the exact configuration and forget to include e.g. gnome-system-monitor , thereby exposing itself. It has to cover bash , tcsh and any shell you happen to use (to start itself). It would have to work with different configurations instead of 'simply' overriding a bunch of callbacks. Consider that not so long ago there was arbitrary code execution discovered in... Adobe Reader. Another reason is user mistakes. It is better to be warned before erasing the whole disk with one command. A third reason is different shells. The root shell should be installed on / in case a system rescue needs to be performed. Users' shells can be installed on /usr (for example, a user can use zsh). A fourth reason is that some programs don't work as root. They specifically know they are not supposed to, so you would need to patch the system. A fifth reason is that /root should not be on a separate partition while /home can (and should). Having /home separate helps for various reasons. Also: why not run as a normal user? More often than not you don't need root rights, and it is a very small cost for security. | {
"source": [
"https://unix.stackexchange.com/questions/1052",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/317/"
]
} |
1,066 | $ ps -e -T | grep myp | grep -v grep
797 797 ? 00:00:00 myp
797 798 ? 00:00:00 myp
797 799 ? 00:00:00 myp
797 800 ? 00:00:00 myp This shows the process myp with PID = 797 and four threads with different SPIDs. How can I kill a particular thread of the process without killing the whole process? I understand that it might not be possible at all in some cases when there are fatal dependencies on that particular thread. But, is it possible in any case? If yes, how? I tried kill 799 and the process itself was terminated. Now I am not sure whether this was because there were dependencies that made myp fail without the thread 800, or because kill is simply not able to kill individual threads. | Threads are an integral part of the process and cannot be killed outside it. There is the pthread_kill function but it only applies in the context of the thread itself. From the docs at the link: Note that pthread_kill() only causes
the signal to be handled in the
context of the given thread; the
signal action (termination or
stopping) affects the process as a
whole. | {
"source": [
"https://unix.stackexchange.com/questions/1066",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/912/"
]
} |
1,067 | What are the directories one should back up, in order to have a backup of all user-generated files? From a vanilla debian install, I can do enough apt to get the packages that I want. So if I don't want to backup the entire system, where all in the filesystem do user-generated configuration and data files reside? | As this question has many different answers, the following list should combine the suggestions into one comprehensive list: Under most circumstances you want to backup these: /home/ for user data and configuration. /etc/ for system wide configuration files. /var/ contains a mix of directories you usually want to backup and those you don't want to backup. See below for a more detailed explanation. Some more directories to consider are: /usr/local/ hand-installed packages (i.e. not installed through apt) are installed here. If you have packages installed here, you may want to backup the whole directory, so you don't have to reinstall them. If the packages themselves aren't important to you, it should be enough to backup /usr/local/etc/ and /usr/local/src/ . /opt/ if you didn't store anything here, you don't need to back it up. If you stored something here, you are in the best position to decide, if you want to back it up. /srv/ much like /opt/ , but is by convention more likely to contain data you actually want to backup. /root/ stores configuration for the root user. If that is important to you, you should back it up. /var/ /var/ contains many files you want to backup under most circumstances, but also some you don't want to backup. You probably want to backup these: /var/lib/ this directory contains variable state data for installed applications. Depending on the application you want to backup that state or you don't. If you want to be on the safe side, you can just back up everything. Otherwise you can look at each sub-directory and decide for yourself if the data contained is important enough to you to back it up. /var/mail/ you normally want to backup local mails. /var/www/ if your web root is located here and this is the only place where your web content is stored, you want to back it up. /var/games/ you may want to backup these, if system wide game data is important enough for you (not many games use this storage though). /var/backups/ usually contains files that are automatically generated from other data that you usually want on a backup, but that would take an unnecessary amount of space in the backup or is otherwise cumbersome to backup. For example dpkg dumps a list of installed packages here, so you can later know which packages to install after restoring the backup. You probably want to backup this. /var/spool/cron/crontabs/ might contain many commands or a complex schedule, even with dependencies on other systems, that has taken considerable effort to put together. You probably don't want to backup these: /var/cache/ contrary to the name, some contents of this directory are important, so check each subdirectory individually, as a rule of thumb, everything you put here yourself is important. You also might want to backup /var/cache/debconf/ . /var/lock/ locks usually (always) don't need to be backed up. /var/run/ contains data that is only important for your running system, i.e. when you shutdown you system, it will not be needed any more. /var/spool/ (other than /var/spool/cron/crontabs , see above) normally important data shouldn't be stored here, but you might want to check. 
You have to decide yourself on these: /var/local/ you normally know if you stored something here and whether you want it on a backup or not. /var/opt/ see /var/local/ or better check if something important is stored here. /var/log/ depends on whether your logs are important to you and if you have enough space to store them (they might take a lot of backup space over time). | {
"source": [
"https://unix.stackexchange.com/questions/1067",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/394/"
]
} |
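As a minimal sketch of actually backing up the directories recommended above, rsync works well; run it as root so ownership and permissions are preserved, and treat both the directory list and /path/to/backup/ as placeholders to adapt to your own system:
$ rsync -aAXv /etc /home /root /srv /usr/local /var/lib /var/mail /var/www /path/to/backup/
The -a flag preserves the usual metadata, while -A and -X additionally keep ACLs and extended attributes.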
1,079 | Diff is a great tool to display the changes between two files. But how to display the similarities of two text files (while ignoring the differences)? I.e. sample input: a:
Foo Bar
X
Hello
World
42
b:
Foo Baz
Hello
World
23 Pseudo output (something like this): @@ 2,3
=Hello World Just sorting both files and using comm is not enough, because in that case the line information is lost. | How about using diff, even though you don't want a diff? Try this: diff --unchanged-group-format='@@ %dn,%df
%<' --old-group-format='' --new-group-format='' \
--changed-group-format='' a.txt b.txt Here is what I get with your sample data: $ cat a.txt
Foo Bar
X
Hello
World
42
$ cat b.txt
Foo Baz
Hello
World
23
$ diff --unchanged-group-format='@@ %dn,%df
%<' --old-group-format='' --new-group-format='' \
--changed-group-format='' a.txt b.txt
@@ 2,3
Hello
World | {
"source": [
"https://unix.stackexchange.com/questions/1079",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/1131/"
]
} |
1,087 | I was wondering how to run a command as another user from a script. I have the script's owner set as root. I also have the following command being run within the script to run the command as the hudson user: su -c command hudson Is this the correct syntax? | Yes. Here's the --help : $ su --help
Usage: su [options] [LOGIN]
Options:
-c, --command COMMAND pass COMMAND to the invoked shell
-h, --help display this help message and exit
-, -l, --login make the shell a login shell
-m, -p,
--preserve-environment do not reset environment variables, and
keep the same shell
-s, --shell SHELL use SHELL instead of the default in passwd And some testing (I used sudo as I don't know the password for the nobody account) $ sudo su -c whoami nobody
[sudo] password for oli:
nobody When your command takes arguments you need to quote it. If you don't, strange things will occur. Here I am —as root— trying to create a directory in /home/oli (as oli) without quoting the full command: # su -c mkdir /home/oli/java oli
No passwd entry for user '/home/oli/java' It's only read mkdir as the value for the -c flag and it's trying to use /home/oli/java as the username. If we quote it, it just works: # su -c "mkdir /home/oli/java" oli
# stat /home/oli/java
File: ‘/home/oli/java’
Size: 4096 Blocks: 8 IO Block: 4096 directory
Device: 811h/2065d Inode: 5817025 Links: 2
Access: (0775/drwxrwxr-x) Uid: ( 1000/ oli) Gid: ( 1000/ oli)
Access: 2016-02-16 10:49:15.467375905 +0000
Modify: 2016-02-16 10:49:15.467375905 +0000
Change: 2016-02-16 10:49:15.467375905 +0000
Birth: - | {
"source": [
"https://unix.stackexchange.com/questions/1087",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/430/"
]
} |
1,125 | I have a directory with a large number of files. I don't see a ls switch to provide the count. Is there some command line magic to get a count of files? | Using a broad definition of "file" ls | wc -l (note that it doesn't count hidden files and assumes that file names don't contain newline characters). To include hidden files (except . and .. ) and avoid problems with newline characters, the canonical way is: find . ! -name . -prune -print | grep -c / Or recursively: find .//. ! -name . -print | grep -c // | {
"source": [
"https://unix.stackexchange.com/questions/1125",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/325/"
]
} |
1,134 | At the moment I'm using a Python script to generate iptables rules. Each set of changes gets committed to a git repository before deployment so there's a trace of who changed what and why. What tools/processes do other people use to manage changes to their firewall rules? Is there a guide on best practice for firewall change control that anyone likes? UPDATE: I guess what I'm asking is for tools/processes around the area. For instance I find testing large firewall scripts quite difficult. Anyone use/written a test script or know of a unit testing type approach that's possible with iptables ? | Using a broad definition of "file" ls | wc -l (note that it doesn't count hidden files and assumes that file names don't contain newline characters). To include hidden files (except . and .. ) and avoid problems with newline characters, the canonical way is: find . ! -name . -prune -print | grep -c / Or recursively: find .//. ! -name . -print | grep -c // | {
"source": [
"https://unix.stackexchange.com/questions/1134",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/315/"
]
} |
1,136 | I have a directory full of images: image0001.png
image0002.png
image0003.png
... And I would like a one-liner to rename them to (say). 0001.png
0002.png
0003.png
... How do I do this? | If you are using Bash or other POSIX-compatible shell: for f in *.png; do
mv -- "$f" "${f#image}"
done | {
"source": [
"https://unix.stackexchange.com/questions/1136",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/753/"
]
} |
1,168 | I want to glob every hidden file and directory, but not the current ( . ) and parent directory ( .. ). I am using bash. Observe current behaviour: $ ls -a
. .. ...a ...aa ..a ..aa .a .aa .aaa a
$ echo *
a
$ echo .*
. .. ...a ...aa ..a ..aa .a .aa .aaa I would like .* to behave like this $ echo .*
...a ...aa ..a ..aa .a .aa .aaa There is the shell option dotglob $ shopt -s dotglob that works in a way; now I can use * to glob everything (hidden or not) but not . and .. $ echo *
...a ...aa ..a ..aa .a .aa .aaa a but now I can't differentiate between hidden or not. Also, .* still globs . and .. $ echo .*
. .. ...a ...aa ..a ..aa .a .aa .aaa Is there a way to make .* not expand to . and .. ? | You can use the GLOBIGNORE variable to hide the . and .. directories. This does automatically also set the dotglob option, so * now matches both hidden and non-hidden files. You can again manually unset dotglob , though, this then gives the behavior you want. See this example: $ ls -a
. .. a .a ..a
$ GLOBIGNORE=".:.."
$ shopt -u dotglob
$ echo * # all (only non-hidden)
a
$ echo .* # all (only hidden)
.a ..a | {
"source": [
"https://unix.stackexchange.com/questions/1168",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/1170/"
]
} |
1,262 | The wheel group on *nix computers typically refers to the group with some sort of root-like access. I've heard that on some *nixes it's the group of users with the right to run su , but on Linux that seems to be anyone (although you need the root password, naturally). On Linux distributions I've used it seems to be the group that by default has the right to use sudo ; there's an entry in sudoers for them: %wheel ALL=(ALL) ALL But that's all tangential; my actual question is: Why is this group called wheel ? I've heard miscellaneous explanations for it before, but don't know if any of them are correct. Does anyone know the actual history of the term? | The Jargon File has an answer which seems to agree with JanC . wheel: n.
[from slang ‘big wheel’ for a powerful person] A person who has an
active wheel bit...The traditional name of security group zero in BSD (to which the major system-internal users like root belong) is ‘wheel’... A wheel bit is also helpfully defined: A privilege bit that allows the possessor to perform some restricted operation on a timesharing system, such as read or write any file on the system regardless of protections, change or look at any address in the running monitor, crash or reload the system, and kill or create jobs and user accounts. The term was invented on the TENEX operating system, and carried over to TOPS-20, XEROX-IFS, and others. The state of being in a privileged logon is sometimes called wheel mode. This term entered the Unix culture from TWENEX in the mid-1980s and has been gaining popularity there (esp. at university sites). | {
"source": [
"https://unix.stackexchange.com/questions/1262",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/73/"
]
} |
1,273 | I'm using screen on debian lenny, and I would like to use the -R option. From man screen : -R attempts to resume the youngest (in terms of creation time)
detached screen session it finds. If successful, all other com‐
mand-line options are ignored. If no detached session exists,
starts a new session using the specified options, just as if -R
had not been specified. However, when I run screen -R it does not actually attach to the youngest detached session. Instead, it complains that there are "several suitable screens" and that I need to choose one of them. Am I missing something? How do I make this work as advertised? | Try using screen -RR . Example: $ screen -ls
There are screens on:
5958.pts-3.sys01 (08/26/2010 11:40:43 PM) (Detached)
5850.pts-1.sys01 (08/26/2010 11:40:35 PM) (Detached)
2 Sockets in /var/run/screen/S-sdn. Note that screen 5958 is the youngest. Using screen -RR connects to screen 5958. The -RR option is explained somewhat further in the documentation for -d -RR . -d -RR Reattach a session and if necessary detach or create it. Use
the first session if more than one session is available. Another trick I often use is to use -S to give the screen a tag/label. Then you can reattach using that tag without having to remember what was happening in each screen if the list gets unwieldy. Example (Launch screens for vim and curl): $ screen -dm -S curl
$ screen -dm -S vim
$ screen -list
There are screens on:
11292.vim (08/27/2010 12:02:53 AM) (Detached)
11273.curl (08/27/2010 12:01:42 AM) (Detached) Note: The -dm option was just used to start a detached screen And then, at a later date, you can easily reconnect using the tag curl . # screen -R curl | {
"source": [
"https://unix.stackexchange.com/questions/1273",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/281/"
]
} |
1,280 | When using tar I always include -f in the parameters but I have no idea why. I looked up the man and it said; -f, --file [HOSTNAME:]F
use archive file or device F (default
"-", meaning stdin/stdout) But to be honest I have no idea what that means. Can anyone shed any light on it? | The -f option tells tar that the next argument is the file name of the archive, or standard output if it is - . | {
"source": [
"https://unix.stackexchange.com/questions/1280",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/625/"
]
} |
1,288 | I consistently have more than one terminal open. Anywhere from two to ten, doing various bits and bobs. Now let's say I restart and open up another set of terminals. Some remember certain things, some forget. I want a history that: Remembers everything from every terminal Is instantly accessible from every terminal (eg if I ls in one, switch to another already-running terminal and then press up, ls shows up) Doesn't forget command if there are spaces at the front of the command. Anything I can do to make bash work more like that? | Add the following to your ~/.bashrc : # Avoid duplicates
HISTCONTROL=ignoredups:erasedups
# When the shell exits, append to the history file instead of overwriting it
shopt -s histappend
# After each command, append to the history file and reread it
PROMPT_COMMAND="${PROMPT_COMMAND:+$PROMPT_COMMAND$'\n'}history -a; history -c; history -r" | {
"source": [
"https://unix.stackexchange.com/questions/1288",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/880/"
]
} |
1,295 | For example How can I yank and paste Line 4 only to Line 12 without having to move the cursor to Line 4? | If the cursor is already on line 12, then a simple :4y
P does it for me. | {
"source": [
"https://unix.stackexchange.com/questions/1295",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/912/"
]
} |
1,314 | I want to set a folder up such that anything created within it (directories, files) inherits default permissions and a group. Let's call the group "media". Also, the folders/files created within the directory should have g+rw automatically. | I found it: Applying default permissions From the article: Set the setgid bit, so that files/folders under <directory> will be created with the same group as <directory> chmod g+s <directory> Set the default ACLs for the group and other setfacl -d -m g::rwx /<directory>
setfacl -d -m o::rx /<directory> Next we can verify: getfacl /<directory> Output: # file: ../<directory>/
# owner: <user>
# group: media
# flags: -s-
user::rwx
group::rwx
other::r-x
default:user::rwx
default:group::rwx
default:other::r-x | {
"source": [
"https://unix.stackexchange.com/questions/1314",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/1253/"
]
} |
1,338 | What is the Fedora equivalent of the Debian build-essential package? | The closest equivalent would probably be to install the below packages: sudo dnf install make automake gcc gcc-c++ kernel-devel However, if you don't care about exact equivalence and are ok with pulling in a lot of packages you can install all the development tools and libraries with the below command. sudo dnf groupinstall "Development Tools" "Development Libraries" On Fedora version older than 32 you will need the following: sudo dnf groupinstall @development-tools @development-libraries | {
"source": [
"https://unix.stackexchange.com/questions/1338",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/171/"
]
} |
1,355 | $ which echo
echo: shell built-in command.
$ which ls
/bin/ls
$ which cat
/bin/cat Why is echo not an independent utility like ls , ps , cat etc? Why is it shell specific? Any good reasons? | There are two classes of builtins: Some commands have to be built into the shell program itself because they cannot work if they are external. cd is one such since if it were external, it could only change its own directory; it couldn't affect the current working directory of the shell. (See also: Why is cd not a program? ) The other class of commands are built into the shell purely for efficiency. The dash man page has a section on builtins which mentions printf , echo , and test as examples of commands in this class. Unix systems have always included separate executables for commands in that second class. These separate executables are still available on every Unixy system I've used, even though they're also built into every shell you're likely to use. ( POSIX actually requires that these executables be present.) I believe echo got built into the shell in AT&T Unix System V Release 3.1. I base that on comparisons of two different editions of manuals for AT&Ts 3B1 series Unix systems . Someone has kindly scanned 1986 editions of these manuals and put them online ; these correspond to the original release of SVR3. You can see that echo isn't in the list on page 523 of UNIX System V User's Manual, Volume II , where you'd expect it if the command were built into the shell. In my local paper copy of the SVR3.1 manuals from 1987, echo is listed in this section of the manual. I'm pretty sure this isn't a Berkeley CSRG innovation that AT&T brought back home. 4.3BSD came out the same year as SVR3, 1986, but if you look at 4.3BSD's sh.1 manpage , you see that echo is not in the "Special Commands" section's list of built-in commands. If CSRG did this, that leaves us wanting a documented source to prove it. At this point, you may wonder if echo was built into the shell earlier than SVR3.1 and that this fact simply wasn't documented until then. The newest pre-SVR3 AT&T Unix source code available to me is in the PDP-11 System III tarball , wherein you will find the Bourne shell source code. You won't find echo in the builtin command table, which is in /usr/src/cmd/sh/msg.c . Based on the timestamps in that file, that proves that echo certainly wasn't in the shell in 1980. Trivia The same directory also contains a file called builtin.c which doesn't contain anything on-point for this question, but we do find this interesting comment: /*
builtin commands are those that Bourne did not intend
to be part of his shell.
Redirection of i/o, or rather the lack of it, is still a
problem..
*/ | {
"source": [
"https://unix.stackexchange.com/questions/1355",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/912/"
]
} |
1,373 | I was surprised that I didn't find this question already on the site. So, today $ came up after I logged in as a new user. This was unexpected because my main user's prompt starts with username@computername:~$ . So, how do I switch from this other shell to bash? | Assuming the unknown shell supports running an absolute command, you could try: /bin/bash To change the default shell, I would use chsh(1) . Sample usage: chsh -s /bin/bash $USER | {
"source": [
"https://unix.stackexchange.com/questions/1373",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/58/"
]
} |
1,386 | Is it possible to view two files side-by-side in Vim? If so, how can I set up my editor to do this, and is there a way to diff between the two files within Vim? I am aware of the :next and :prev commands, but this is not what I'm after. It would really be nice to view the two files in tandem. | Open the side by side view: Ctrl+w v Change between them: Ctrl+w h or l You can then open another file for comparison in one side by entering a command such as: :e file2.txt Checkout the vimdiff command, part of the vim package, if you want a diff-like view, e.g.: vimdiff file1.txt file2.txt | {
"source": [
"https://unix.stackexchange.com/questions/1386",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/446/"
]
} |
1,416 | When you attempt to modify a file without having write permissions on it, you get an error: > touch /tmp/foo && sudo chown root /tmp/foo
> echo test > /tmp/foo
zsh: permission denied: /tmp/foo Sudoing doesn't help, because it runs the command as root, but the shell handles redirecting stdout and opens the file as you anyway: > sudo echo test > /tmp/foo
zsh: permission denied: /tmp/foo Is there an easy way to redirect stdout to a file you don't have permission to write to, besides opening a shell as root and manipulating the file that way? > sudo su
# echo test > /tmp/foo | Yes, using tee . So echo test > /tmp/foo becomes echo test | sudo tee /tmp/foo You can also append ( >> ) echo test | sudo tee -a /tmp/foo | {
"source": [
"https://unix.stackexchange.com/questions/1416",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/73/"
]
} |
1,437 | KDE SC 4.5.0 has some problems with some video cards, including mine. Upon release, Arch recommended several workarounds. One of them was to export "LIBGL_ALWAYS_INDIRECT=1" before starting KDE. I decided that it was the easiest, best method. But I don't know what it does or how it impacts my system. Is it slower than the default? Should I remember to keep an eye on the problem and disable it later once it's fixed? | Indirect rendering means that the GLX protocol will be used to transmit OpenGL commands and X.org will do the real drawing. Direct rendering means that the application can access the hardware directly (via Mesa) without talking to X.org first. Direct rendering is faster because it does not require a context switch into the X.org process. Clarification: In both cases the rendering is done by the GPU (or, technically, may be done by the GPU). However, in indirect rendering the process looks like this: the program calls a command (or commands); the command(s) are sent to X.org via the GLX protocol; X.org calls the hardware (i.e. the GPU) to draw. In direct rendering: the program calls a command (or commands); the command(s) are sent straight to the GPU. Please note that because OpenGL was designed so that it can operate over a network, indirect rendering is faster than a naive implementation of that architecture would be, i.e. it allows a bunch of commands to be sent in one go. However, there is still some overhead in terms of CPU time spent on context switches and protocol handling. | {
"source": [
"https://unix.stackexchange.com/questions/1437",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/29/"
]
} |
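To check which mode is actually in effect, you can query GLX from inside the session; glxinfo comes from the mesa-utils / mesa-demos package (the package name varies by distribution), and the exact wording of its output can differ between driver versions:

$ glxinfo | grep "direct rendering"
direct rendering: Yes

$ LIBGL_ALWAYS_INDIRECT=1 glxinfo | grep "direct rendering"
direct rendering: No

This gives a quick way to confirm whether the workaround is active before and after you remove it.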
1,469 | When I type cd ~foo , I'd like bash to take me to some directory foo as a shortcut for typing the full directory path of foo . and I'd like to be able to cp ~foo/bar.txt ~/bar.txt to copy a file from the /foo/ directory to the home directory... So basically, I want something that works exactly like ~/ does, but where I specify what the directory should be. [I'm sure I should jfgi, but I don't know what to fg] | The way I used to do this is to create a directory that contains symlinks to the directories you want shortcuts do, and add that directory to your CDPATH. CDPATH controls where cd will search when you switch directories, so if that directory of symlinks is in your CDPATH you can cd to any of the symlinked directories instantly: mkdir ~/symlinks
ln -s /usr/bin ~/symlinks/b
export CDPATH=~/symlinks
cd b # Switches to /usr/bin The downside of course is it won't work if there's a directory in your current directory named "b" -- that takes precedence over the CDPATH I normally dislike answers that say "first you need to switch shells", but this exact feature exists in ZSH , if you're willing to use that instead; it's called named directories . You export a variable foo , and when you refer to ~foo it resolves to the value of $foo . This is especially convenient because it works in commands besides cd : echo hi > /tmp/test
export t=/tmp
cat ~t/test # Outputs "hi" | {
"source": [
"https://unix.stackexchange.com/questions/1469",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/12/"
]
} |
1,484 | If I use find command like this: find /mydir/mysubdir -executable -type f all executable files are listed (excluding directories), and including executable script file (like script.sh, etc). What I want to do is list only binary executable files. | You might try the file utility. According to the manpage: The magic tests are used to check for files with data in particular fixed formats. The canonical example of this is a binary executable (compiled program) a.out file, whose format is defined in , and possibly in the standard include directory. You might have to play around with the regular expression but something like: $ find -type f -executable -exec file -i '{}' \; | grep 'x-executable; charset=binary' file has lots of options, so you might want to take a closer look at the man page. I used the first option I found that seemed to output easily-to-grep output. | {
"source": [
"https://unix.stackexchange.com/questions/1484",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/317/"
]
} |
1,487 | I recently switched over to Xfce and really enjoy it. However, when I went in to my settings to change the appearance, the theme that I liked best was "Xfce-dusk" with one exception: The default colors for application tabs are so dark that it's hard to tell where they are. I hunted around for a little while looking anywhere I could to see where the styles for the tabs are set in the theme configuration file and couldn't find anything that looked like it would do what I need. The file I've found to define the theme was: /usr/share/themes/Xfce-dusl/gtk-2.0/gtkrc But looking through that file, I can't find anything that seems to apply. Any suggestions? Normally I'm a command-line only sort of guy and don't really know much about WMs or theming, and don't even know the terms of what I'm trying to modify, so my Google searches are coming up painfully empty. Here is a screenshot of the type of tabs I want to modify . | You might try the file utility. According to the manpage: The magic tests are used to check for files with data in particular fixed formats. The canonical example of this is a binary executable (compiled program) a.out file, whose format is defined in , and possibly in the standard include directory. You might have to play around with the regular expression but something like: $ find -type f -executable -exec file -i '{}' \; | grep 'x-executable; charset=binary' file has lots of options, so you might want to take a closer look at the man page. I used the first option I found that seemed to output easily-to-grep output. | {
"source": [
"https://unix.stackexchange.com/questions/1487",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/1380/"
]
} |
1,489 | I had this argument recently saying Mac OS X was not UNIX, but Unix-like. I know there is a Single Unix Specification and those spec compliant could use the UNIX trade mark. Is Mac OS X a UNIX operating system or is it a Unix-like? | All but one release of Mac OS X (now macOS) has been certified as Unix by The Open Group , starting with 10.5: 13.0 (Ventura) on Intel Macs and on Apple Silicon Macs 12.0 (Monterey) on Intel Macs and on Apple Silicon Macs 11.0 (Big Sur) on Intel Macs and on Apple Silicon Macs 10.15 (Catalina) 10.14 (Mojave) 10.13 (High Sierra) 10.12 (Sierra) 10.11 (El Capitan) 10.10 (Yosemite) 10.9 (Mavericks) 10.8 (Mountain Lion) 10.6 (Snow Leopard) 10.5 (Leopard) At any given time, Apple's page on The Open Group site only lists the current version of macOS and sometimes the previous version, but all of the links above were at one point found via that page. macOS's status as a certified Unix is called out in Apple's Unix technology brief , which also has other good technical bits in it that will help you compare it to other UNIX® and Unix-like systems. Andrew Josey, VP Standards & Certification of the Open Group confirms that 10.7 Lion was never registered as a UNIX 03 product . | {
"source": [
"https://unix.stackexchange.com/questions/1489",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/229/"
]
} |
1,496 | In my ~/.bashrc file reside two definitions: commandA , which is an alias to a longer path commandB , which is an alias to a Bash script I want to process the same file with these two commands, so I wrote the following Bash script: #!/bin/bash
for file in "$@"
do
commandA $file
commandB $file
done Even after logging out of my session and logging back in, Bash prompts me with command not found errors for both commands when I run this script. What am I doing wrong? | If you look into the bash manpage you find: Aliases are not expanded when the
shell is not interactive, unless the
expand_aliases shell option is set
using shopt (see the description of
shopt under SHELL BUILTIN COMMANDS
below). So put a shopt -s expand_aliases in your script. Make sure to source your aliases file after setting this in your script. shopt -s expand_aliases
source ~/.bash_aliases | {
"source": [
"https://unix.stackexchange.com/questions/1496",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/446/"
]
} |
1,519 | How do you remove a file whose filename begins with a dash (hyphen or minus) - ? I'm ssh'd into a remote OSX server and I have this file in my directory: tohru:~ $ ls -l
total 8
-rw-r--r-- 1 me staff 1352 Aug 18 14:33 --help
... How in the world can I delete --help from a CLI? This issue is something that I come across in different forms on occasion, these files are easy to create, but hard to get rid of. I have tried using backslash rm \-\-help I have tried quotes rm "--help" How do I prevent the minus (dash or hyphen) character to be interpreted as an option? | Use "--" to make rm stop parsing command line options, like this: rm -- --help | {
"source": [
"https://unix.stackexchange.com/questions/1519",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/361/"
]
} |
1,524 | I was wondering if anyone knows how to change the timestamps of folders recursively based on the latest timestamp found of the files in that folder. So for example: jon@UbuntuPanther:/media/media/MP3s/Foo Fighters/(1997-05-20) The Colour and The Shape$ ls -alF
total 55220
drwxr-xr-x 2 jon jon 4096 2010-08-30 12:34 ./
drwxr-xr-x 11 jon jon 4096 2010-08-30 12:34 ../
-rw-r--r-- 1 jon jon 1694044 2010-04-18 00:51 Foo Fighters - Doll.mp3
-rw-r--r-- 1 jon jon 3151170 2010-04-18 00:51 Foo Fighters - Enough Space.mp3
-rw-r--r-- 1 jon jon 5004289 2010-04-18 00:52 Foo Fighters - Everlong.mp3
-rw-r--r-- 1 jon jon 5803125 2010-04-18 00:51 Foo Fighters - February Stars.mp3
-rw-r--r-- 1 jon jon 4994903 2010-04-18 00:51 Foo Fighters - Hey, Johnny Park!.mp3
-rw-r--r-- 1 jon jon 4649556 2010-04-18 00:52 Foo Fighters - Monkey Wrench.mp3
-rw-r--r-- 1 jon jon 5216923 2010-04-18 00:51 Foo Fighters - My Hero.mp3
-rw-r--r-- 1 jon jon 4294291 2010-04-18 00:52 Foo Fighters - My Poor Brain.mp3
-rw-r--r-- 1 jon jon 6778011 2010-04-18 00:52 Foo Fighters - New Way Home.mp3
-rw-r--r-- 1 jon jon 2956287 2010-04-18 00:51 Foo Fighters - See You.mp3
-rw-r--r-- 1 jon jon 2730072 2010-04-18 00:51 Foo Fighters - Up in Arms.mp3
-rw-r--r-- 1 jon jon 6086821 2010-04-18 00:51 Foo Fighters - Walking After You.mp3
-rw-r--r-- 1 jon jon 3033660 2010-04-18 00:52 Foo Fighters - Wind Up.mp3 The folder "(1997-05-20) The Colour and The Shape" would have its timestamp set to 2010-04-18 00:52. | You can use touch -r to use another file's timestamp instead of the current time (or touch --reference=FILE ) Here are two solutions. In each solution, the first command changes the modification time of the directory to that of the newest file immediately under it, and the second command looks at the whole directory tree recursively. Change to the directory ( cd '.../(1997-05-20) The Colour and The Shape' ) before running any of the commands. In zsh (remove the D to ignore dot files): touch -r *(Dom[1]) .
touch -r **/*(Dom[1]) . On Linux (or more generally with GNU find): touch -r "$(find -mindepth 1 -maxdepth 1 -printf '%T+=%p\n' |
sort |tail -n 1 | cut -d= -f2-)" .
touch -r "$(find -mindepth 1 -printf '%T+=%p\n' |
sort |tail -n 1 | cut -d= -f2-)" . However note that those ones assume no newline characters in file names. | {
"source": [
"https://unix.stackexchange.com/questions/1524",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/1410/"
]
} |
1,537 | There is a shell command that allows you to measure how fast the data goes through it, so you can measure the speed of output of commands in a pipe. So instead of: $ somecommand | anothercommand you can do something like: $ somecommand | ??? | anothercommand And throughput stats (bytes/sec) are printed to stderr, I think. But I can't for the life of me remember what that command was. | cpipe is probably better for these purposes, but another related program is pv (Pipe Viewer): If you give it the --rate flag it will show the transfer rate | {
"source": [
"https://unix.stackexchange.com/questions/1537",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/210/"
]
} |
1,555 | What would be a good way to move a file type from a directory and all of its sub-directories? Like "move all *.ogg in /thisdir recursively to /somedir". I tried a couple of things; my best effort was (still not that great): find /thisdir -type f -name '*.ogg' -exec mv /somedir {} \; It returned on each line before each file name, mv: cannot overwrite non-directory `/thisdir/*.ogg' with directory `/somedir' | you can use find with xargs for this find /thisdir -type f -name "*.ogg" -print0 | xargs -0 -Imysongs mv -i mysongs /somedir The -I in the above command tells
xargs what replacement string you want
to use (otherwise it adds the
arguments to the end of the command). OR In your command just try to move '{}' after mv command. find /thisdir -type f -name '*.ogg' -exec mv -i {} /somedir \; | {
"source": [
"https://unix.stackexchange.com/questions/1555",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/171/"
]
} |
1,569 | What does root:wheel mean in the following? chown root:wheel myfile | root is a user (the super user) and wheel is a group (of super users I guess). chown root:wheel myfile means making myfile belong to the user root and the group wheel (read man chown for more information). | {
"source": [
"https://unix.stackexchange.com/questions/1569",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/1465/"
]
} |
1,571 | How do I get the file extension from bash? Here's what I tried: filename=`basename $filepath`
fileext=${filename##*.} By doing that I can get extension of bz2 from the path /dir/subdir/file.bz2 , but I have a problem with the path /dir/subdir/file-1.0.tar.bz2 . I would prefer a solution using only bash without external programs if it is possible. To make my question clear, I was creating a bash script to extract any given archive just by a single command of extract path_to_file . How to extract the file is determined by the script by seeing its compression or archiving type, that could be .tar.gz, .gz, .bz2 etc. I think this should involve string manipulation, for example if I get the extension .gz then I should check whether it has the string .tar before .gz — if so, the extension should be .tar.gz . | If the file name is file-1.0.tar.bz2 , the extension is bz2 . The method you're using to extract the extension ( fileext=${filename##*.} ) is perfectly valid¹. How do you decide that you want the extension to be tar.bz2 and not bz2 or 0.tar.bz2 ? You need to answer this question first. Then you can figure out what shell command matches your specification. One possible specification is that extensions must begin with a letter. This heuristic fails for a few common extensions like 7z , which might be best treated as a special case. Here's a bash/ksh/zsh implementation: basename=$filename; fileext=
while [[ $basename = ?*.* &&
( ${basename##*.} = [A-Za-z]* || ${basename##*.} = 7z ) ]]
do
fileext=${basename##*.}.$fileext
basename=${basename%.*}
done
fileext=${fileext%.} For POSIX portability, you need to use a case statement for pattern matching. while case $basename in
?*.*) case ${basename##*.} in [A-Za-z]*|7z) true;; *) false;; esac;;
*) false;;
esac
do … Another possible specification is that some extensions denote encodings and indicate that further stripping is needed. Here's a bash/ksh/zsh implementation (requiring shopt -s extglob under bash and setopt ksh_glob under zsh): basename=$filename
fileext=
while [[ $basename = ?*.@(bz2|gz|lzma) ]]; do
fileext=${basename##*.}.$fileext
basename=${basename%.*}
done
if [[ $basename = ?*.* ]]; then
fileext=${basename##*.}.$fileext
basename=${basename%.*}
fi
fileext=${fileext%.} Note that this considers 0 to be an extension in file-1.0.gz . ¹ ${VARIABLE##SUFFIX} and related constructs are in POSIX , so they work in any non-antique Bourne-style shell such as ash, bash, ksh or zsh. | {
"source": [
"https://unix.stackexchange.com/questions/1571",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/317/"
]
} |
1,588 | How do I break a large, +4GB file into smaller files of about 500MB each. And how do I re-assemble them again to get the original file? | You can use split and cat . For example something like $ split --bytes 500M --numeric-suffixes --suffix-length=3 foo foo. (where the input filename is foo and the last argument is the output prefix). This will create files like foo.000 foo.001 ... The same command with short options: $ split -b 100k -d -a 3 foo foo You can also specify "--line-bytes" if you wish it to split on line boundaries instead of just exact number of bytes. For re-assembling the generated pieces again you can use e.g.: $ cat foo.* > foo_2 (assuming that the shell sorts the results of shell globbing - and the number of parts does not exceed the system dependent limit of arguments) You can compare the result via: $ cmp foo foo_2
$ echo $? (which should output 0) Alternatively, you can use a combination of find/sort/xargs to re-assemble the pieces: $ find -maxdepth 1 -type f -name 'foo.*' | sort | xargs cat > foo_3 | {
"source": [
"https://unix.stackexchange.com/questions/1588",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/1327/"
]
} |
1,591 | Is there a standard for $PATH and the order of things that are supposed to be in there? out of the box Arch Linux doesn't have /usr/local/bin in the $PATH . I want to add it but I'm not sure if there's a predefined pecking order for system paths. Also where is the right place to do this? for right now I modified /etc/profile but I'm not sure that's the right place in Arch for user modifications. Anyone know if there's a better place? | You can use split and cat . For example something like $ split --bytes 500M --numeric-suffixes --suffix-length=3 foo foo. (where the input filename is foo and the last argument is the output prefix). This will create files like foo.000 foo.001 ... The same command with short options: $ split -b 100k -d -a 3 foo foo You can also specify "--line-bytes" if you wish it to split on line boundaries instead of just exact number of bytes. For re-assembling the generated pieces again you can use e.g.: $ cat foo.* > foo_2 (assuming that the shell sorts the results of shell globbing - and the number of parts does not exceed the system dependent limit of arguments) You can compare the result via: $ cmp foo foo_2
$ echo $? (which should output 0) Alternatively, you can use a combination of find/sort/xargs to re-assemble the pieces: $ find -maxdepth 1 -type f -name 'foo.*' | sort | xargs cat > foo_3 | {
"source": [
"https://unix.stackexchange.com/questions/1591",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/29/"
]
} |
1,645 | Sometimes, I need to check only the directories not files. Is there any option with the command ls? Or is there any utility for doing that? EDIT : I'm using Mac OS X, and ls -d gives me . even though I have directories. | I know there is already a selected answer, but you can get the requested behavior with just ls : ls -ld -- */ (Note that the '--' marks the end of parameters, preventing folder names beginning with a hyphen from being interpreted as further command options.) This will list all the non-hidden (unless you configure your shell's globs to expand them) directories in the current working directory where it is run (note that it also includes symbolic links to directories). To get all the subdirectories of some other folder, just try: ls -ld /path/to/directory/*/ Note that the -l is optional. | {
"source": [
"https://unix.stackexchange.com/questions/1645",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/1090/"
]
} |
1,670 | There is often a need in the open source or active developer community to publish large video segments online. (Meet-up videos, campouts, tech talks...) Being that I am a developer and not a videographer I have no desire to fork out the extra scratch on a premium Vimeo account. How then do I take a 12.5 GB (1:20:00) MPEG tech talk video and slice it into 00:10:00 segments for easy uploading to video sharing sites? | $ ffmpeg -i source-file.foo -ss 0 -t 600 first-10-min.m4v
$ ffmpeg -i source-file.foo -ss 600 -t 600 second-10-min.m4v
$ ffmpeg -i source-file.foo -ss 1200 -t 600 third-10-min.m4v
... Wrapping this up into a script to do it in a loop wouldn't be hard. Beware that if you try to calculate the number of iterations based on the duration output from an ffprobe call that this is estimated from the average bit rate at the start of the clip and the clip's file size unless you give the -count_frames argument, which slows its operation considerably. Another thing to be aware of is that the position of the -ss option on the command line matters . Where I have it now is slow but accurate. The linked article describes fast-but-inaccurate and slower-but-still-accurate alternative formulations. You pay for the latter with a certain complexity. All that aside, I don't think you really want to be cutting at exactly 10 minutes for each clip. That will put cuts right in the middle of sentences, even words. I think you should be using a video editor or player to find natural cut points just shy of 10 minutes apart. Assuming your file is in a format that YouTube can accept directly, you don't have to reencode to get segments. Just pass the natural cut point offsets to ffmpeg , telling it to pass the encoded A/V through untouched by using the "copy" codec: $ ffmpeg -i source.m4v -ss 0 -t 593.3 -c copy part1.m4v
$ ffmpeg -i source.m4v -ss 593.3 -t 551.64 -c copy part2.m4v
$ ffmpeg -i source.m4v -ss 1144.94 -t 581.25 -c copy part3.m4v
... The -c copy argument tells it to copy all input streams (audio, video, and potentially others, such as subtitles) into the output as-is. For simple A/V programs, it is equivalent to the more verbose flags -c:v copy -c:a copy or the old-style flags -vcodec copy -acodec copy . You would use the more verbose style when you want to copy only one of the streams, but re-encode the other. For example, many years ago there was a common practice with QuickTime files to compress the video with H.264 video but leave the audio as uncompressed PCM ; if you ran across such a file today, you could modernize it with -c:v copy -c:a aac to reprocess just the audio stream, leaving the video untouched. The start point for every command above after the first is the previous command's start point plus the previous command's duration. | {
"source": [
"https://unix.stackexchange.com/questions/1670",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/633/"
]
} |
1,687 | On a standard filesystem, we have: /usr/games
/usr/lib/games
/usr/local/games
/usr/share/games
/var/games
/var/lib/games Is this a joke, or is there some history behind this? What is it for? Why do we have separate and specialized directories for something like games? | It's just a bit of historical cruft. A long time ago, games were an optional part of the system, and might be installed by different people, so they lived in /usr/games rather than /usr/bin . Data such as high scores came to live in /var/games . As time went by, people variously put variable game data in /var/lib/games/NAME or /var/games/NAME and static game data in /usr/lib/NAME or /usr/games/lib/NAME or /usr/games/NAME or /usr/lib/games/NAME (and the same with share instead of lib for architecture-independent data). Nowadays, there isn't any compelling reason to keep games separate, it's just a matter of tradition. | {
"source": [
"https://unix.stackexchange.com/questions/1687",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/317/"
]
} |
1,709 | I am using Putty -> Suse box -> vim 7.2 combo for editing and want to remap Ctrl + arrows combo to a particular task. But for some reason, Vim ignores the shortcut and goes into insert mode and inserts character "D" (for left) of "C" (for right). Which part of my keyboard/terminal configuration is to blame and how to fix it? | Figure out exactly what escape sequence your terminal sends for Ctrl +arrow by typing Ctrl + V , Ctrl +arrow in insert mode: this will insert the leading ESC character (shown as ^[ in vim) literally, followed by the rest of the escape sequence. Then tell vim about these escape sequences with something like map <ESC>[5D <C-Left>
map <ESC>[5C <C-Right>
map! <ESC>[5D <C-Left>
map! <ESC>[5C <C-Right> I seem to recall that Putty has a default setting for Application Cursor Keys mode that's inconvenient (I forget why), you might want to toggle this setting first. Note that although escape sequences vary between terminals, conflicts (i.e. an escape sequence that corresponds to different keys in different terminals) are rare, so there's no particular need to try to apply the mappings only on a particular terminal type. | {
"source": [
"https://unix.stackexchange.com/questions/1709",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/210/"
]
} |
1,729 | Just wondering if installing Wine might open up a fairly solid Linux desktop to the world of Windows viruses. Any confirmed reports about that? Would you then install a Windows antivirus product under Wine? | Yes and no. Viruses/trojans are just programs, and will work on Wine... Also, your normal Linux file system is exposed to Wine with the credentials of the user that launches Wine. BUT, usually viruses are based on lots of hacks, and they expect a "standard", common Windows installation. I doubt that any virus is coded with the expectation that it will be executed on Wine, and if one exists, it will probably not be too successful. Why? Because Wine users are a small portion of normal users, they have "weird" and unusual installations (think of all the flavours of Linux+Wine), they are usually advanced users, and they have a strong community that is aware of security. So: yes, you are exposed to Windows viruses, but not totally exposed, and most probably your Linux installation will not be contaminated. Just be as careful as you would be on Windows. On the other hand, you can use several techniques to increase security: use chrooted Wine (search for chroot), virtualized environments, etc... | {
"source": [
"https://unix.stackexchange.com/questions/1729",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/1549/"
]
} |
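A low-effort way to apply the isolation advice in that answer is to keep Wine in a throwaway prefix and, if you want stronger separation, run it as a dedicated unprivileged user; the user name wineuser , the prefix path and the program names below are only examples:

$ WINEPREFIX=$HOME/wine-sandbox wine setup.exe
$ sudo useradd -m wineuser
$ sudo -u wineuser env WINEPREFIX=/home/wineuser/.wine wine program.exe

A separate prefix keeps the simulated Windows installation disposable, while running as a separate user limits what a misbehaving program can read or write on the Linux side to that account's permissions.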
1,736 | I am able to run anything using sudo ; my password is accepted. But whenever I try to do su from a shell, it fails with: su: incorrect password What can the problem be? | su means substitute user , and if it is called without any arguments you will become the superuser. Therefore you have to enter the root password. This is somewhat inconvenient if many people need to run system administration commands or similar tasks with extended user rights: you just don't want everybody to end up with unlimited rights by sharing the same root password. The solution to this kind of problem is sudo (" substitute user do ").
This will allow you to specify the commands someone can invoke and define the permissions. For sudo you don't have to enter the root password, but the password of the user who tries to invoke a sudo command. Some distributions have the root user disabled for security reasons. This could be one explanation why you aren't able to use su . In that case, to obtain a shell with root privileges use sudo -s ;-) | {
"source": [
"https://unix.stackexchange.com/questions/1736",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/912/"
]
} |
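As a concrete illustration of "specify the commands someone can invoke": a single sudoers rule can restrict a user to one command. The user name and the command below are made up for the example; edit the file with visudo rather than directly:

alice ALL=(root) /usr/sbin/service apache2 restart

With that rule in place, alice can run only that one command as root (after entering her own password), instead of being handed a full root shell.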
1,800 | Is there a way to have bash know exactly what to display when you double tab? For example I have a python script scpy which requires a couple arguments. For example like apt-get , if you double tab gives you autoclean build-dep clean dselect-upgrade purge source upgrade
autoremove check dist-upgrade install remove update Is there a way to do that for your own scripts/programs? Do I need to wrap my python script in a bash script? | The easiest way of doing this is to include a shell script in /etc/bash_completion.d/ . The basic structure of this file is a simple function that performs the completion and then an invocation of complete , which is a bash builtin. Rather than go into detail on how to use complete , I suggest you read An Introduction to Bash Completion . Part 1 covers the basics and Part 2 gets into how you would go about writing a completion script. A denser description of bash completion can be found in the "Programmable Completion" section of man bash (you can type "/Programmable Completion" and then press 'n' a few times to get there quickly. Or, if you are feeling lucky, "g 2140 RETURN"). | {
"source": [
"https://unix.stackexchange.com/questions/1800",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/1571/"
]
} |
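For reference, a minimal completion script of the kind that answer describes could look like the sketch below; the command name scpy comes from the question, and the completion words are invented for the example:

# /etc/bash_completion.d/scpy (sketch)
_scpy()
{
    local cur=${COMP_WORDS[COMP_CWORD]}
    # offer these sub-commands as completions for the word being typed
    COMPREPLY=( $(compgen -W "install remove update clean" -- "$cur") )
}
complete -F _scpy scpy

After sourcing the file (or starting a new shell), typing scpy followed by a double tab lists the words passed to compgen -W .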
1,818 | How do I remove a file from a git repository's index without removing the file from the working tree? If I had a file ./notes.txt that was being tracked by git, I could run git rm notes.txt . But that would remove the file. I'd rather have git just stop tracking the file. | You could just use git rm --cached notes.txt . This will keep the file but remove it from the index. | {
"source": [
"https://unix.stackexchange.com/questions/1818",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/1327/"
]
} |
1,841 | Does anyone have any tricks and tips for finding information in man pages? | Pay attention to the section number: Suppose you want help on printf . there are at least two of them: in shell and in C. The bash version of printf is in section 1, the C version is in section 3 or 3C. If you don't know which one you want, type man -a printf , and all manual pages will be displayed. If what you are looking for is the format of printf with all % codes and it doesn't appear on printf man page, you can jump to related man pages listed under SEE ALSO paragraph. You may find something like formats(5) , which suggests you to type man 5 formats . If you are annoyed that man printf gives you printf(1) and all you want is printf(3), you have to change the order of scanned directories in the MANPATH environment variable and put the ones for C language before the ones for shell commands. This may happen also when Fortran or TCL/Tk man pages are listed before C ones. If you don't know where to start, type man intro , or man -s <section> intro . This gives you a summary of commands of requested section. Sections are well defined: 1 is for shell commands, 2 is for system calls, 3 is for programming interfaces (sometimes 3C for C, 3F for Fortran...) 5 is for file formats and other rules such as printf or regex formats. Last but not least: information delivered in man pages is not redundant, so read carefully from beginning to end for increasing your chances to find what you need. | {
"source": [
"https://unix.stackexchange.com/questions/1841",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/1465/"
]
} |
1,875 | I want to put my home directory (~) under source control (git, in this case), as I have many setting files (.gitconfig, .gitignore, .emacs, etc.) in there I would like to carry across machines, and having them in Git would make it nice for retrieving them. My main machine is my MacBook, and the way that OS X is set up, there are many folders I want to ignore (Documents, Downloads, .ssh). There are also folders which are already using Git (.emacs.d). My thought was to just add all these directories to my .gitignore file, but that seems kind of tiresome, and could potentially lead to some unforeseen consequences. My next thought was to periodically copy the files I want to store into some folder in home, then commit that folder. The problem with that will be that I have to remember to move them before committing. Is there a clean way to do this? | I have $HOME under git. The first line of my .gitignore file is /* The rest are patterns to not ignore using the ! modifier. This first line means the default is to ignore all files in my home directory. Those files that I want to version control go into .gitignore like this: !/.gitignore
!/.profile
[...] A trickier pattern I have is: !/.ssh
/.ssh/*
!/.ssh/config That is, I only want to version .ssh/config - I don't want my keys and other files in .ssh to go into git. The above is how I achieve that. Edit: Added slashes to start of all paths. This makes the ignore patterns match from the top of the repository ($HOME) instead of anywhere. For example, if !lib/ was a pattern (don't ignore everything in the lib directory) and you add a file .gitignore , previously the pattern ( !.gitignore ) was matching that. With the leading slash ( !/.gitignore ), it will only match .gitignore in my home directory and not in any subdirectories. I haven't seen a case where this makes a practical difference with my ignore list, but it appears to me to be more technically accurate. | {
"source": [
"https://unix.stackexchange.com/questions/1875",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/1395/"
]
} |
1,879 | I'm using this guide to set-up a shared internet connection between two PC's. At step 8 it says I should run the commands: iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
/etc/rc.d/iptables save
/etc/rc.d/iptables start Doing this seems to have no effect on iptable's rules , if I run iptables -nvL my output is: Chain INPUT (policy ACCEPT 2223 packets, 2330K bytes)
pkts bytes target prot opt in out source destination
Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
pkts bytes target prot opt in out source destination
Chain OUTPUT (policy ACCEPT 2272 packets, 277K bytes)
pkts bytes target prot opt in out source destination Is that correct or am I doing something wrong? | The command iptables -nvL is displaying the contents of the filter table. The rule you are adding is in the nat table. Add -t nat to look at the nat table: iptables -t nat -nvL | {
"source": [
"https://unix.stackexchange.com/questions/1879",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/1327/"
]
} |
1,910 | I'm working on a python script that passes file locations to an scp subprocess. That's all fine, but I'm in a situation where I may end up concatenating a path with a filename such that there's a double ' / in the path. I know that bash doesn't care if you have multiple file separators, but I'm wondering how exactly that is rectified. Is it bash that strips extra / s or does it really not matter ever? I ask because it will save me several lines of code to check for extra / s while concatenating. I know it's not a big deal, but I'm curious as well. I have a bash script that has the line cd //usr (instead of cd /usr ), which seems to imply there might be a significance to using multiple / s in a path | Multiple slashes are allowed and are equivalent to a single slash. From the Single Unix specification (version 4) , base definitions §3.271 pathname : “Multiple successive slashes are considered to be the same as one slash.” There is one exception: If a pathname begins with two successive slash characters, the first component following the leading slash characters may be interpreted in an implementation-defined manner. (ref: base definitions §4.13 pathname resolution ). Linux itself doesn't do this, though some applications might, and other unix-ish system do (e.g. Cygwin). A trailing / at the end of a pathname forces the pathname to refer to a directory. In ( POSIX 1003.1-2001 (Single Unix v4) base definitions §4.11 pathname resolution , a trailing / is equivalent to a trailing /. . POSIX 1003.1-2008 (Single Unix v4) base definitions §4.13 removes the requirement to make it equivalent to /. , in order to cope with non-existing directories (e.g. mkdir foo/ is required to work, whereas mkdir foo/. wouldn't — see the rationale for the change). For programs that act on a directory entry, if foo is a symbolic link to a directory, then passing foo/ is a way to make the program act on the directory instead of the symbolic link. ¹ Note that this applies for pathname resolution only, i.e. when accessing files. Filename manipulations may work differently. For example basename and dirname ignore trailing slashes. | {
"source": [
"https://unix.stackexchange.com/questions/1910",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/1571/"
]
} |
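A quick way to see this behaviour from the shell, and to normalise the paths a script builds, is shown below; realpath is the GNU coreutils tool, so its availability elsewhere is not guaranteed:

$ ls -ld /usr//bin /usr/bin     # both name the same directory
$ realpath ///etc//passwd
/etc/passwd

For the original use case (building paths in a script), an extra / in a concatenated path is therefore harmless as far as path resolution is concerned.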
1,924 | I need one process to run before logging in to the system. How do I run it like a service? How do I make services in Linux, in Ubuntu and in Fedora? The service is a customized Tomcat. | To run a service without or before logging in to the system (i.e. "on boot"), you will need to create a startup script and add it to the boot sequence. There are three parts to a service script: start, stop and restart. The basic structure of a service script is: #!/bin/bash
#
RETVAL=0;
start() {
echo "Starting <Service>"
}
stop() {
echo "Stopping <Service>"
}
restart() {
stop
start
}
case "$1" in
start)
start
;;
stop)
stop
;;
restart)
restart
;;
*)
echo $"Usage: $0 {start|stop|restart}"
exit 1
esac
exit $RETVAL Once you have tweaked the script to your liking, just place it in /etc/init.d/ and add it to the system service startup process (on Fedora; I am not an Ubuntu user, >D): chkconfig --add <ServiceName> The service will then be added to the system boot-up process and you will not have to start it manually again. Cheers! | {
"source": [
"https://unix.stackexchange.com/questions/1924",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/65/"
]
} |
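The question also asks about Ubuntu, which that answer leaves open. On Debian/Ubuntu systems that still use SysV-style init scripts, the equivalent of the chkconfig step is update-rc.d , run after placing the script in /etc/init.d/ and making it executable (ServiceName is whatever you named the script):

$ sudo chmod +x /etc/init.d/ServiceName
$ sudo update-rc.d ServiceName defaults

Newer releases of both distributions have since moved to systemd, where the same goal is reached with a small unit file and systemctl enable .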