source_id | question | response | metadata |
---|---|---|---|
5,860 | There's a command, I think it comes with apache, or is somehow related to it, that checks permissions, all the way down. So if I have /home/foo/bar/baz it will tell me what the permissions are for baz , bar , foo , and home . Does anyone know what this command is or another way of doing this? The command basically starts at the argument, and works it's way up to / letting you know what the permissions are along the way so you can see if you have a permission problem. | The utility you may be thinking of is the namei command. According to the manual page: Namei uses its arguments as
pathnames to any type of Unix file
(symlinks, files, directories, and so
forth). Namei then follows each
pathname until a terminal point is
found (a file, directory, char device,
etc). If it finds a symbolic link, we
show the link, and start following it,
indenting the output to show the
context. The output you desire can be received as follows: $ namei -l /usr/src/linux-headers-2.6.35-22/include/
f: /usr/src/linux-headers-2.6.35-22/include/
drwxr-xr-x root root /
drwxr-xr-x root root usr
drwxrwsr-x root src src
drwxr-xr-x root root linux-headers-2.6.35-22
drwxr-xr-x root root include The namei command is part of the util-linux-ng (now util-linux) software package. See the manual page for more details. | {
"source": [
"https://unix.stackexchange.com/questions/5860",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/29/"
]
} |
5,863 | I find myself doing the following almost every day Run a find ( find -name somefile.txt ) Open the result in vim The problem is I have to copy and paste the result of the find into the vim command. Is there any way to avoid having to do this? I have experimented a bit ( find -name somefile.txt | vim ) but haven't found anything that works. Thanks in advance | You can use command substitution: vim $(find -name somefile.txt) or find -name somefile.txt -exec vim {} \; | {
"source": [
"https://unix.stackexchange.com/questions/5863",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/3808/"
]
} |
5,877 | It appears systemd is the hot new init system on the block, same as Upstart was a few years ago. What are the pros/cons for each? Also, how does each compare to other init systems? | 2016 Update Most answers here are five years old so it's time for some updates. Ubuntu used to use upstart by default but they abandoned it last year in favor of systemd - see: Grab your pitchforks: Ubuntu to switch to systemd on Monday (The Register) Because of that there is a nice article Systemd for Upstart Users on Ubuntu wiki - very detailed comparison between upstart and systemd and a transition guide from upstart to systemd. (Note that according to the Ubuntu wiki you can still run upstart on current versions of Ubuntu by default by installing the upstart-sysv and running sudo update-initramfs -u but considering the scope of the systemd project I don't know how it works in practice, or whether or not systemd is possible to uninstall.) Most of the info in the Commands and Scripts sections below is adapted from some of the examples used in that article (that is conveniently licensed just like Stack Exchange user contributions under the Creative Commons Attribution-ShareAlike 3.0 License ). Here is a quick comparison of common commands and simple scripts, see sections below for detailed explanation. This answer is comparing the old behavior of Upstart-based systems with the new behavior of systemd-based systems, as asked in the question, but note that the commands tagged as "Upstart" are not necessarily Upstart-specific - they are often commands that are common to every non-systemd Linux and Unix system. Commands Running su: upstart: su systemd: machinectl shell (see "su command replacement" section below) Running screen: upstart: screen systemd: systemd-run --user --scope screen (see "Unexpected killing of background processes" section below) Running tmux: upstart: tmux systemd: systemd-run --user --scope tmux (see "Unexpected killing of background processes" section below) Starting job foo: upstart: start foo systemd: systemctl start foo Stopping job foo: upstart: stop foo systemd: systemctl stop foo Restarting job foo: upstart: restart foo systemd: systemctl restart foo Listing jobs: upstart: initctl list systemd: systemctl status Checking configuration of job foo: upstart: init-checkconf /etc/init/foo.conf systemd: systemd-analyze verify /lib/systemd/system/foo.service Listing job's environement variables: upstart: initctl list-env systemd: systemctl show-environment Setting job's environment variable: upstart: initctl set-env foo=bar systemd: systemctl set-environment foo=bar Removing job's environment variable: upstart: initctl unset-env foo systemd: systemctl unset-environment foo Logs In upstart, the logs are normal text files in the /var/log/upstart directory, so you can process them as usual: cat /var/log/upstart/foo.log
tail -f /var/log/upstart/foo.log In systemd logs are stored in an internal binary format (not as text files) so you need to use journalctl command to access them: sudo journalctl -u foo
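# (added example, not from the original answer: -b restricts the output to the current boot)
sudo journalctl -u foo -b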
sudo journalctl -u foo -f Scripts Example upstart script written in /etc/init/foo.conf : description "Job that runs the foo daemon"
start on runlevel [2345]
stop on runlevel [016]
env statedir=/var/cache/foo
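# (explanatory comment, not part of the original example: "start on"/"stop on" name the upstart
# events that trigger the job, and the pre-start stanza below runs before the main exec)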
pre-start exec mkdir -p $statedir
exec /usr/bin/foo-daemon --arg1 "hello world" --statedir $statedir Example systemd script written in /lib/systemd/system/foo.service : [Unit]
Description=Job that runs the foo daemon
Documentation=man:foo(1)
[Service]
Type=forking
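# (explanatory comment, not part of the original example: Type=forking means the started binary
# is expected to daemonize; a daemon that stays in the foreground would use Type=simple instead)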
Environment=statedir=/var/cache/foo
ExecStartPre=/usr/bin/mkdir -p ${statedir}
ExecStart=/usr/bin/foo-daemon --arg1 "hello world" --statedir ${statedir}
[Install]
WantedBy=multi-user.target su command replacement A su command replacement was merged into systemd in pull request #1022: Add new "machinectl shell" command for su(1)-like behaviour because, according to Lennart Poettering, "su is really a broken concept" . He explains that "you can use su and sudo as before, but don't expect that it will work in full " . The official way to achieve a su -like behavior is now: machinectl shell It has been further explained by Lennart Poettering in the discussion to issue #825: "Well, there have been long discussions about this, but the problem is
that what su is supposed to do is very unclear. [...]
Long story short: su is really a broken concept. It will given you
kind of a shell, and it’s fine to use it for that, but it’s not a full
login, and shouldn’t be mistaken for one." - Lennart Poettering See also: Lennart Poettering merged “su” command replacement into systemd: Test Drive on Fedora Rawhide Systemd Absorbs "su" Command Functionality Systemd Absorbs “su” (Hacker News) Unexpected killing of background processes Commands like: screen tmux nohup no longer work as expected . For example, nohup is a POSIX command to make sure that the process keeps running after you log out from your session. It no longer works on systemd. Also programs like screen and tmux need to be invoked in a special way or otherwise the processes that you run with them will get killed (while not getting those processes killed is usually the main reason of running screen or tmux in the first place). This is not a mistake, it is a deliberate decision, so it is not likely to get fixed in the future. This is what Lennart Poettering has said about this issue: In my view it was actually quite strange of UNIX that it by default let arbitrary user code stay around unrestricted after logout. It has been discussed for ages now among many OS people, that this should possible but certainly not be the default, but nobody dared so far to flip the switch to turn it from a default to an option. Not cleaning up user sessions after logout is not only ugly and somewhat hackish but also a security problem. systemd 230 now finally flipped the switch and finally by default cleans everything up correctly when the user logs out. For more info see: Systemd Starts Killing Your Background Processes By Default Systemd v230 kills background processes after user logs out, breaks screen, tmux Debian Bug #825394: systemd kill background processes after user logs out High-level startup concept In a way systemd works backwards - in upstart jobs start as soon as they can and in systemd jobs start when they have to. At the end of the day the same jobs can be started by both systems and in pretty much the same order, but you think about it looking from an opposite direction so to speak. Here is how Systemd for Upstart Users explains it: Upstart 's model for starting processes (jobs) is "greedy event-based", i. e. all available jobs whose startup events happen are
started as early as possible. During boot, upstart synthesizes some
initial events like startup or rcS as the "tree root", the early
services start on those, and later services start when the former are
running. A new job merely needs to install its configuration file into
/etc/init/ to become active. systemd 's model for starting processes (units) is "lazy dependency-based", i. e. a unit will only start if and when some other
starting unit depends on it. During boot, systemd starts a "root unit"
(default.target, can be overridden in grub), which then transitively
expands and starts its dependencies. A new unit needs to add itself as
a dependency of a unit of the boot sequence (commonly
multi-user.target) in order to become active. Usage in distributions Now some recent data according to Wikipedia: Distributions using upstart by default: Ubuntu (from 9.10 to 14.10) Chrome OS Chromium OS Distributions using systemd by default: Arch Linux - since October 2012 CentOS - since April 2014 (7.14.04) CoreOS - since October 2013 (v94.0.0) Debian - since April 2015 (v8) Fedora - since May 2011 (v15) Mageia - since May 2012 (v2.0) openSUSE - since September 2012 (v12.2) Red Hat Enterprise Linux - since June 2014 (v7.0) SUSE Linux Enterprise Server - since October 2014 (v12) Ubuntu - since April 2015 (v15.04) (See Wikipedia for up-to-date info) Distributions using neither Upstart nor systemd: Devuan (a Debian fork that resulted from the systemd controversies in the Debian community that led to the resignation of Ian Jackson ) - specifically promotes Init Freedom with the following init systems considered for inclusion: sinit , OpenRC , runit , s6 and shepherd . Void Linux - uses runit as the init system and service supervisor Gentoo - uses OpenRC OS X - uses launchd FreeBSD uses a traditional BSD-style init (not SysV init) NetBSD uses rc.d DragonFly uses traditional init OpenBSD uses the rc system startup script described here Alpine Linux (a relatively new and little-known distribution with a strong emphasis on security that is getting more popular - e.g. Docker is moving its official images from Ubuntu to Alpine ) uses the OpenRC init system Controversy In the past a fork of Debian was proposed to avoid systemd . The Devuan GNU+Linux was created - a fork of Debian without systemd (thanks to fpmurphy1 for pointing it out in the comments). For more info about this controversy, see: The official Debian position on systemd The systemd controversy Debian Exodus declaration in 2014 : As many of you might know already, the Init GR Debian vote promoted by
Ian Jackson wasn't useful to protect Debian's legacy and its users
from the systemd avalanche. This situation prospects a lock in systemd dependencies which is
de-facto threatening freedom of development and has serious
consequences for Debian, its upstream and its downstream. The CTTE managed to swap a dependency and gain us time over a subtle
install of systemd over sysvinit, but even this process was
exhausting and full of drama. Ultimately, a week ago, Ian Jackson
resigned. [...] Ian Jackson's resignation : I am resigning from the Technical Committee with immediate effect. While it is important that the views of the 30-40% of the project who
agree with me should continue to be represented on the TC, I myself am
clearly too controversial a figure at this point to do so. I should
step aside to try to reduce the extent to which conversations about
the project's governance are personalised. [...] The Init Freedom : Devuan was born out of a controversy over the decision to use systemd as the
default init system for Debian. The official Debian position on
systemd is full of claims that others have debunked . Interested
readers can continue discussing this hot topic in The systemd
controversy . However we encourage you to keep your head cool and your
voice civil. At Devuan we’re more interested in programming them wrong
than looking back. [...] Some websites and articles dedicated to the systemd controversy has been created: Without-Systemd.org Systemd-Free.org The Init Freedom Systemd on Suckless There is a lot of interesting discussion on Hacker News: https://news.ycombinator.com/item?id=7728692 https://news.ycombinator.com/item?id=13387845 https://news.ycombinator.com/item?id=11797075 https://news.ycombinator.com/item?id=12600413 https://news.ycombinator.com/item?id=11845051 https://news.ycombinator.com/item?id=11782364 https://news.ycombinator.com/item?id=12877378 https://news.ycombinator.com/item?id=10483780 https://news.ycombinator.com/item?id=13469935 Similar tendencies in other distros can be observed as well: The Church of Suckless NixOS is looking for followers Philosophy upstart follows the Unix philosophy of DOTADIW - "Do One Thing and Do It Well." It is a replacement for the traditional init daemon. It doesn't do anything other than starting and stopping services. Other tasks are delegated to other specialized subsystems. systemd does much more than that. In addition to starting and stopping services it also manages passwords, logins, terminals, power management, factory resets, log processing, file system mount points, networking and much more - see the NEWS file for some of the features. Plans of expansion According to A Perspective for systemd
What Has Been Achieved, and What Lies Ahead presentation
by Lennart Poettering in 2014 at GNOME.asia, here are the main objectives of systemd, areas that were already covered and those that were still in progress: systemd objectives: Our objectives Turning Linux from a bag of bits into a competitive General Purpose Operating System. Building the Internet’s Next Generation OS Unifying pointless differences between distributions Bringing innovation back to the core OS Desktop, Server, Container, Embedded, Mobile, Cloud, Cluster, . . . These areas are closer together than you might think Reducing administrator complexity, reliability without supervision Everything introspectable Auto discovery, plug and play is key We fix things where they are broken, never tape over them Areas already covered: What we already cover: init system, journal logging, login management, device management,
temporary and volatile file management, binary format registration,
backlight save/restore, rfkill save/restore, bootchart, readahead,
encrypted storage setup, EFI/GPT partition discovery, virtual
machine/container registration, minimal container management, hostname
management, locale management, time management, random seed
management, sysctl variable management, console management, . . . Work in progress: What we are working on: network management systemd-networkd Local DNS cache, mDNS responder, LLMNR responder, DNSSEC verification IPC in the kernel kdbus, sd-bus Time synchronisation with NTP systemd-timesyncd More integration with containers Sandboxing of Services Sandboxing of Apps OS Image format Container image format App image format GPT with auto-discovery Stateless systems, instantiatable systems, factory reset /usr is the OS /etc is (optional) configuration /var is (optional) state Atomic node initialisation and updates Integration with the cloud Service management across nodes Verifiable OS images All the way to the firmware Boot Loading Scope of this answer As fpmurphy1 noted in the comments, "It should be pointed out that systemd has expanded its scope of work over the years far beyond simply that of system startup." I tried to include most of the relevant info here. Here I am comparing the common features of Upstart and systemd when used as init systems, as asked in the question, and I only mention features of systemd that go beyond the scope of an init system because those cannot be compared to Upstart, but their presence is important to understand the difference between those two projects. The relevant documentation should be checked for more info. More info More info can be found at: upstart website systemd website Upstart on Wikipedia Systemd on Wikipedia The architecture of systemd on Wikipedia Linus Torvalds and others on Linux's systemd (ZDNet) About the systemd controversy by Robert Graham Init Freedom Campaign Rationale for switching from upstart to systemd? Extras The LinOxide Team has created a Systemd vs SysV Init Linux Cheatsheet . | {
"source": [
"https://unix.stackexchange.com/questions/5877",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/688/"
]
} |
5,901 | In Midnight Commander, how to quickly set the right panel to the same directory as the left panel (and vice versa)? | Newer versions of Midnight Commander use Alt-o (also ESC followed by o) to do this. Older versions used Alt-o for doing a change directory to the currently highlighted directory, so it will depend on which build you are using. | {
"source": [
"https://unix.stackexchange.com/questions/5901",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/2119/"
]
} |
5,915 | I read this up on this website and it doesn't make sense. http://rcsg-gsir.imsb-dsgi.nrc-cnrc.gc.ca/documents/basic/node32.html When UNIX was first written, /bin and /usr/bin physically resided on two
different disks: /bin being on a
smaller faster (more expensive) disk,
and /usr/bin on a bigger slower disk.
Now, /bin is a symbolic link to /usr/bin : they are essentially the
same directory. But when you ls the /bin folder, it has far less content than the /usr/bin folder (at least on my running system). So can someone please explain the difference? | What? no /bin/ is not a symlink to /usr/bin on any FHS compliant system. Note that there are still popular Unices and Linuxes that ignore this - for example, /bin and /sbin are symlinked to /usr/bin on Arch Linux (the reasoning being that you don't need /bin for rescue/single-user-mode, since you'd just boot a live CD). /bin contains commands that may be used by both the system administrator and by users, but which are required when no other filesystems are mounted (e.g. in single user mode). It may also contain commands which are used indirectly by scripts /usr/bin/ This is the primary directory of executable commands on the system. essentially, /bin contains executables which are required by the system for emergency repairs, booting, and single user mode. /usr/bin contains any binaries that aren't required. I will note, that they can be on separate disks/partitions, /bin must be on the same disk as / . /usr/bin can be on another disk - although note that this configuration has been kind of broken for a while (this is why e.g. systemd warns about this configuration on boot). For full correctness, some unices may ignore FHS, as I believe it is only a Linux Standard, I'm not aware that it has yet been included in SUS, Posix or any other UNIX standard, though it should be IMHO. It is a part of the LSB standard though. | {
"source": [
"https://unix.stackexchange.com/questions/5915",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/3368/"
]
} |
5,959 | How can I pause an rsync that's running? First time I did Ctrl + C to kill it and used the -P flag to run again. Is this prone to corrupt the current file transfer? Second time I simply put MacOS to sleep (by closing the lid). Looking at the running process I can see 2 (not sure why) with a status of 'S'. Tried using kill -SIGCONT to resume the process, but it has no effect. | You can pause any program by sending it a TSTP (polite) or STOP (forcible) signal. On the terminal you've run rsync in, pressing Ctrl + Z sends TSTP . Resume with the fg or bg command in the terminal or a CONT signal. It is safe to kill an rsync process and run the whole thing again; it will continue where it left off. It may be a little inefficient, particularly if you haven't passed --partial (included in -P ), because rsync will check all files again and process the file it was interrupted on from scratch. There may be unusual combinations of options that will lead to some files not being synchronized properly, maybe --inplace with something else, but I think no single option will cause this. If you disconnect your laptop and reconnect it somewhere else, it may get a different IP address. Then the TCP connection used by rsync would be severed, so you'd have to kill it and start again. This can also happen if you suspend your laptop and the TCP connection times out. The timeout will eventually filter out to the application level, but it can take a while. It's safe to press Ctrl + C and run rsync again. | {
"source": [
"https://unix.stackexchange.com/questions/5959",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/1562/"
]
} |
5,980 | I've got a long line that I want to insert a space every 4 characters, on a single lone line of solid text to make it easier to read, what's the simplest way to do this? also I should be able to input the line from a pipe. e.g. echo "foobarbazblargblurg" | <some command here> gives foob arba zbla rgbl urg | Use sed as follows: $ echo "foobarbazblargblurg" | sed 's/.\{4\}/& /g'
foob arba zbla rgbl urg | {
"source": [
"https://unix.stackexchange.com/questions/5980",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/29/"
]
} |
6,008 | I know df -h and pwd , but it seems a little complex for the regex matching part. Any ideas? | The output can be made a bit easier to parse by using the -P option which will ensure that: The information about each file system is always printed on
exactly one line; a mount device is never put on a line by
itself. This means that if the mount device name is more
than 20 characters long (e.g., for some network mounts), the
columns are misaligned. This makes it much easier to get just the free space available: $ df -Ph . | tail -1 | awk '{print $4}' ( -h uses megabytes, gigabytes and so on. If your system doesn't have it, use -k for kilobytes only.) If we pass df a path, it is only going to return 2 rows: a header row and then the data about the file system that contains the path. We can use tail to grab just the second row. We know that the space available is in the 4th column, so we grab that with awk . This all could be done with awk : $ df -Ph . | awk 'NR==2 {print $4}' or many other sets of filters . | {
"source": [
"https://unix.stackexchange.com/questions/6008",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/1875/"
]
} |
6,010 | I have a problem using these methods to get colors in my man pages in gentoo. I've asked already in the gentoo forums but it still doesn't work, and the comments in a bug report ( Bug 184604 ) don't work either. Can someone help me to get colours in my man pages? EDIT :
Yes, I'm using less as pager and urxvt-unicode as terminal emulator EDIT2 : I already asked in the gentoo forums but it didn't help, this is the link: http://forums.gentoo.org/viewtopic-t-819833-start-0.html . | Could be a number of problems. Seeing as you're using zsh , try putting this in your ~/.zshrc : export LESS_TERMCAP_mb=$'\E[01;31m'
export LESS_TERMCAP_md=$'\E[01;31m'
export LESS_TERMCAP_me=$'\E[0m'
export LESS_TERMCAP_se=$'\E[0m'
export LESS_TERMCAP_so=$'\E[01;47;34m'
export LESS_TERMCAP_ue=$'\E[0m'
export LESS_TERMCAP_us=$'\E[01;32m'
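# (explanatory comments, not part of the original snippet: md/me bracket "bold" text such as
# headings, us/ue bracket underlined text such as option names, so/se bracket standout text
# such as the status line, mb marks blinking text, and LESS=-r below lets less pass the raw
# escape sequences through to the terminal)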
export LESS=-r Then open a new terminal window and try running man ls if it's not working, run each of the following to find out where the problem is: Number 1 typeset -p LESS_TERMCAP_md | cat -v should print typeset -x LESS_TERMCAP_md="^[[01;31m" and typeset -p LESS should print typeset -x LESS="-r" if not, you put the export LESS stuff in the wrong file. Number 2 echo "${LESS_TERMCAP_md}red${LESS_TERMCAP_me}" should print red in a red color. If it doesn't there is something wrong with your terminal settings. Check your terminal settings (e.g. ~/.Xresources ) or try running gnome-terminal or xterm and see if that works. Number 3 echo -E "a^Ha" | LESS= less -r ( ^H must be entered by pressing Ctrl + V then Ctrl + H ) should print a in red. If it doesn't, please run these type less
less --version and paste the output back in your question. Number 4 bzcat /usr/share/man/man1/ls.1.bz2 | \
/bin/sh /usr/bin/nroff -mandoc -Tutf8 | head -n 5 | cat -v should print LS(1) User Commands LS(1)
N^HNA^HAM^HME^HE (note the ^H like in step number 3) if it's printing something like: LS(1) User Commands LS(1)
^[[1mNAME^[[0m instead, you will need to find a way to disable "sgr escape sequences". The easiest thing to try is adding export GROFF_NO_SGR=1 to .zshrc , but there are other ways of fixing this. Number 5 bzcat /usr/share/man/man1/ls.1.bz2 | \
/bin/sh /usr/bin/nroff -mandoc -Tutf8 | less should display the ls man page with colors. man ls should now be working! | {
"source": [
"https://unix.stackexchange.com/questions/6010",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/2416/"
]
} |
6,035 | I understand what brace expansion is, but I don't know how best to use it. When do you use it? Please teach me some convenient and remarkable examples if you have your own tip. | Brace expansion is very useful if you have long path names. I use it as a quick way to backup a file : cp /a/really/long/path/to/some/file.txt{,.bak} will copy /a/really/long/path/to/some/file.txt to /a/really/long/path/to/some/file.txt.bak You can also use it in a sequence . I once did so to download lots of pages from the web: wget http://domain.example/book/page{1..5}.html or for i in {1..100}
do
#do something 100 times
done | {
"source": [
"https://unix.stackexchange.com/questions/6035",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/2573/"
]
} |
6,065 | I can change the name of a window with Ctrl-a Shift-a. Instead of editing several window names by hand, is there a way to have them automatically named after the current directory? | Make your shell change the window title every time it changes directory, or every time it displays a prompt. For your ~/.bashrc : if [[ "$TERM" == screen* ]]; then
screen_set_window_title () {
local HPWD="$PWD"
case $HPWD in
$HOME) HPWD="~";;
$HOME/*) HPWD="~${HPWD#$HOME}";;
esac
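# (explanatory comment, not part of the original snippet: \ek ... \e\\ is GNU screen's
# escape sequence for setting the current window's title)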
printf '\ek%s\e\\' "$HPWD"
}
PROMPT_COMMAND="screen_set_window_title; $PROMPT_COMMAND"
fi Or for your ~/.zshrc (for zsh users): precmd () {
local tmp='%~'
local HPWD=${(%)tmp}
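# (explanatory comment, not part of the original snippet: the (%) expansion flag applies
# prompt expansion, so %~ becomes the current directory with ~ abbreviation)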
if [[ $TERM == screen* ]]; then
printf '\ek%s\e\\' $HPWD
fi
} For more information, look up under Dynamic titles in the Screen manual, or under “Titles (naming windows)” in the man page. | {
"source": [
"https://unix.stackexchange.com/questions/6065",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/379/"
]
} |
6,068 | Three files have suddenly appeared in my home directory, called "client_state.xml", "lockfile", and "time_stats_log". The last two are empty. I'm wondering how they got there. It's not the first time it has happened, but the last time was weeks ago; I deleted the files and nothing broke or complained. I haven't been able to think of what I was doing at the time reported by stat $filename . Is there any way I can find out where they came from? Alternatively, is there a way to monitor the home directory (but not sub-directories) for the creation of files? | I don't believe there is a way to determine which program created a file. For your alternative question:
You can watch for the file to be recreated, though, using inotify . inotifywait is a command-line interface for the inotify subsystem; you can tell it to look for create events in your home directory: $ (sleep 5; touch ~/making-a-test-file) &
[1] 22526
$ inotifywait -e create ~/
Setting up watches.
Watches established.
/home/mmrozek/ CREATE making-a-test-file You probably want to run it with -m (monitor), which tells it not to exit after it sees the first event | {
"source": [
"https://unix.stackexchange.com/questions/6068",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/2333/"
]
} |
6,094 | Is there any way to keep a command from being added to your history? I have a command that I want to keep out of my history file, and I really don't care to have it there when I search the history stored in memory, though that's less of a concern. Is there any way to prevent this, or do I just have to go back and edit my history file. update: I didn't realize this might be shell-specific. My shell is zsh . You're welcome to answer for other shells so people know how to do this in their shell. | In ZSH : First insert setopt HIST_IGNORE_SPACE to your ~/.zshrc . Now after you log in again, you can prefix any commands you don't want stored in the history with a space . Note that (unlike bash's option of the same name) the command lingers in the internal history until the next command is entered before it vanishes, allowing you to briefly reuse or edit the line. From the user manual , the following 3 options can be used to say that certain lines shouldn't go into the history at all: HIST_IGNORE_SPACE don't store commands prefixed with a space HIST_NO_STORE don't store history ( fc -l ) command HIST_NO_FUNCTIONS don't store function definitions | {
"source": [
"https://unix.stackexchange.com/questions/6094",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/29/"
]
} |
6,127 | I'm trying to run an install script that requires java to be installed and the JAVA_HOME environment variable to be set. I've set JAVA_HOME in /etc/profile and also in a file I've called java.sh in /etc/profile.d . I can echo $JAVA_HOME and get the correct response and I can even sudo echo $JAVA_HOME and get the correct response. In the install.sh I'm trying to run, I inserted an echo $JAVA_HOME . When I run this script without sudo I see the java directory; when I run the script with sudo it is blank. Any ideas why this is happening? I'm running CentOS. | For security reasons, sudo may clear environment variables which is why it is probably not picking up $JAVA_HOME. Look in your /etc/sudoers file for env_reset . From man sudoers : env_reset If set, sudo will reset the environment to only contain the following variables: HOME, LOGNAME, PATH, SHELL, TERM, and USER (in addi-
tion to the SUDO_* variables). Of these, only TERM is copied unaltered from the old environment. The other variables are set to
default values (possibly modified by the value of the set_logname option). If sudo was compiled with the SECURE_PATH option, its value
will be used for the PATH environment variable. Other variables may be preserved with the env_keep option.
env_keep Environment variables to be preserved in the user's environment when the env_reset option is in effect. This allows fine-grained con-
trol over the environment sudo-spawned processes will receive. The argument may be a double-quoted, space-separated list or a single
value without double-quotes. The list can be replaced, added to, deleted from, or disabled by using the =, +=, -=, and ! operators
respectively. This list has no default members. So, if you want it to keep JAVA_HOME, add it to env_keep: Defaults env_keep += "JAVA_HOME" Alternatively , set JAVA_HOME in root's ~/.bash_profile . | {
"source": [
"https://unix.stackexchange.com/questions/6127",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/3974/"
]
} |
6,131 | Can anyone explain briefly how to install openSUSE via USB? I searched in a lot of forums, but I couldn't find anything helpful. Moreover, there was something written about installing it with the dd_rescue command, but that doesn't seem to work. So please give me a brief idea for installing openSUSE via USB. | Recent openSUSE installation images are hybrid ISOs, so the simplest method is to write the image directly to the whole USB device (not to a partition on it) and then boot from it: dd if=openSUSE-DVD.iso of=/dev/sdX bs=4M && sync Replace openSUSE-DVD.iso with the image you downloaded and /dev/sdX with the device node of your USB stick (check it with lsblk or fdisk -l first - everything on the stick will be overwritten). Afterwards boot the machine from the USB stick and the normal installer starts. If you prefer a graphical tool, SUSE Studio ImageWriter does the same job; the openSUSE wiki describes both methods. | {
"source": [
"https://unix.stackexchange.com/questions/6131",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/1267/"
]
} |
6,171 | I am looking for specific details as to why isn't GNU/Linux currently SUS (Single UNIX Specification) v3 or even better SUS v4 compliant? What application APIs and user utilities does it miss or implement in a non-SUS compliant way? | To get a certification you need to pay, and it's actually really expensive. That's why BSD-like and GNU/Linux OS vendors don't apply for it. So there isn't even a reason to check whether GNU/Linux is compliant or not. http://en.wikipedia.org/wiki/Single_UNIX_Specification#Non-registered_Unix-like_systems Most of all, the GNU/Linux distribution follows the Linux Standard Base, which is free of charge, and recognized by almost all Linux vendors. http://en.wikipedia.org/wiki/Linux_Standard_Base Edit: As my answer is not completely correct, I'll add the @vonbrand comments: Linus (and people involved in the development of other parts of Linux
distributions) follow the pragmatic guideline to make it as close to
POSIX as is worthwhile. There are parts of POSIX (like the (in)famous
STREAMS) that are ill-conceived, impossible to implement efficiently,
or just codification of historic relics that should be replaced by
something better. ... therefore, does it make it harder to obtain a certification? Sure. POSIX mandates some interface, which Linux just won't ever have.
Case closed. | {
"source": [
"https://unix.stackexchange.com/questions/6171",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/3963/"
]
} |
6,182 | I've been thinking about discontinuing the use of GNU Coreutils on my Linux systems, but to be honest, unlike many other GNU components, I can't think of any alternatives (on Linux) . What alternatives are there to GNU coreutils? will I need more than one package? Links to the project are a must, bonus points for naming distro packages. Also please don't suggest things unless you know they work on Linux, and can reference instructions. I doubt I'll be switching kernels soon, and I'm much too lazy for anything much beyond a straightforward ./configure; make; make install . I'm certainly not going to hack C for it. warning: if your distro uses coreutils removing them could break the way your distro functions. However not having them be first in your $PATH shouldn't break things, as most scripts should use absolute paths. | busybox the favorite of Embedded Linux systems. BusyBox combines tiny versions of many common UNIX utilities into a single small executable. It provides replacements for most of the utilities you usually find in GNU fileutils, shellutils, etc. The utilities in BusyBox generally have fewer options than their full-featured GNU cousins; however, the options that are included provide the expected functionality and behave very much like their GNU counterparts. BusyBox provides a fairly complete environment for any small or embedded system. BusyBox has been written with size-optimization and limited resources in mind. It is also extremely modular so you can easily include or exclude commands (or features) at compile time. This makes it easy to customize your embedded systems. To create a working system, just add some device nodes in /dev, a few configuration files in /etc, and a Linux kernel. You can pretty much make any coreutil name a link to the busybox binary and it will work. you can also run busybox <command> and it will work. Example: if you're on Gentoo and haven't installed your vi yet, you can run busybox vi filename and you'll be in vi. It's Arch Linux - community/busybox Gentoo Linux - sys-apps/busybox Alpine Linux - based on BusyBox and uClibc, here's an overview | {
"source": [
"https://unix.stackexchange.com/questions/6182",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/29/"
]
} |
6,252 | Debian's apt-get update fetches and updates the package index. Because I'm used to this way of doing things, I was surprised to find that yum update does all that and upgrades the system. This made me curious of how to update the package index without installing anything. | The check-update command will refresh the package index and check for available updates: yum check-update | {
"source": [
"https://unix.stackexchange.com/questions/6252",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/688/"
]
} |
6,263 | If I want to check available versions of a package in Debian, I run apt-cache policy pkgname which in the case of wajig gives: wajig:
Installed: 2.01
Candidate: 2.01
Version table:
*** 2.01 0
100 /var/lib/dpkg/status
2.0.47 0
500 file:/home/wena/.repo_bin/ squeeze/main i386 Packages
500 ftp://ftp.is.co.za/debian/ squeeze/main i386 Packages That means that there are three wajig packages, one that is installed ( /var/lib/dpkg/status ), and two others (which are the same version). One of these two is in a local repository and the other is available from a remote repository. How do I achieve a similar result on rpm systems? | yum For RHEL/Fedora/Centos/Scientific Linux Provides the command list to display information about installed and upgradeable (and older) packages. yum --showduplicates list <package> zypper For SuSE Linux Can return a detailed list of available and installed packages or patches. zypper search -s <package> Adding --exact-match can help, if there are multiple packages. As a side-note, here is a comparison of package-management commands. | {
"source": [
"https://unix.stackexchange.com/questions/6263",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/688/"
]
} |
6,279 | For debugging purposes I want to monitor the http requests on a network interface. Using a naive tcpdump command line I get too much low-level information and the information I need is not very clearly represented. Dumping the traffic via tcpdump to a file and then using wireshark has the disadvantage that it is not on-the-fly. I imagine a tool usage like this: $ monitorhttp -ieth0 --only-get --just-urls
2011-01-23 20:00:01 GET http://foo.example.org/blah.js
2011-01-23 20:03:01 GET http://foo.example.org/bar.html
... I am using Linux. | Try tcpflow : tcpflow -p -c -i eth0 port 80 | grep -oE '(GET|POST|HEAD) .* HTTP/1.[01]|Host: .*' Output is like this: GET /search?q=stack+exchange&btnI=I%27m+Feeling+Lucky HTTP/1.1
Host: www.google.com You can obviously add additional HTTP methods to the grep statement, and use sed to combine the two lines into a full URL. | {
"source": [
"https://unix.stackexchange.com/questions/6279",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/1131/"
]
} |
6,284 | Before I install a package, I'd like to know what version I would get. How do I check the version before installing using apt-get or aptitude on Debian or Ubuntu? | apt-get You can run a simulation to see what would happen if you upgrade/install a package: apt-get -s install <package> To see all possible upgrades, run an upgrade in verbose mode
and (to be safe) with simulation; press n to cancel: apt-get -V -s upgrade apt-cache The option policy can show the installed and the remote version (install candidate) of a package. apt-cache policy <package> apt-show-versions If installed, shows version information about one or more packages: apt-show-versions <package> Passing the -u switch with or without a package name will show only upgradeable packages. apt show Similar to what is obtained with dpkg -s <package> : apt show <package> aptitude The console GUI of aptitude can display upgradeable packages with new versions. Open the menu 'Upgradable Packages'. Pressing v on a package will show more detailed version information. Or on the command-line: aptitude versions <package> Passing -V will show detailed information about versions.
Again, to be safe, with the simulation switch: aptitude -V -s install <package> Substituting install <package> with upgrade will show the versions from all upgradeable packages. | {
"source": [
"https://unix.stackexchange.com/questions/6284",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/-1/"
]
} |
6,301 | The Linux proc(5) man page tells me that /proc/$pid/mem “can be used to access the pages of a process's memory”. But a straightforward attempt to use it only gives me $ cat /proc/$$/mem /proc/self/mem
cat: /proc/3065/mem: No such process
cat: /proc/self/mem: Input/output error Why isn't cat able to print its own memory ( /proc/self/mem )? And what is this strange “no such process” error when I try to print the shell's memory ( /proc/$$/mem , obviously the process exists)? How can I read from /proc/$pid/mem , then? | /proc/$pid/maps /proc/$pid/mem shows the contents of $pid's memory mapped the same way as in the process, i.e., the byte at offset x in the pseudo-file is the same as the byte at address x in the process. If an address is unmapped in the process, reading from the corresponding offset in the file returns EIO (Input/output error). For example, since the first page in a process is never mapped (so that dereferencing a NULL pointer fails cleanly rather than unintendedly accessing actual memory), reading the first byte of /proc/$pid/mem always yield an I/O error. The way to find out what parts of the process memory are mapped is to read /proc/$pid/maps . This file contains one line per mapped region, looking like this: 08048000-08054000 r-xp 00000000 08:01 828061 /bin/cat
08c9b000-08cbc000 rw-p 00000000 00:00 0 [heap] The first two numbers are the boundaries of the region (addresses of the first byte and the byte after last, in hexa). The next column contain the permissions, then there's some information about the file (offset, device, inode and name) if this is a file mapping. See the proc(5) man page or Understanding Linux /proc/id/maps for more information. Here's a proof-of-concept script that dumps the contents of its own memory. #! /usr/bin/env python
import re
maps_file = open("/proc/self/maps", 'r')
mem_file = open("/proc/self/mem", 'rb', 0)
output_file = open("self.dump", 'wb')
for line in maps_file.readlines(): # for each mapped region
m = re.match(r'([0-9A-Fa-f]+)-([0-9A-Fa-f]+) ([-r])', line)
if m.group(3) == 'r': # if this is a readable region
start = int(m.group(1), 16)
end = int(m.group(2), 16)
mem_file.seek(start) # seek to region start
chunk = mem_file.read(end - start) # read region contents
output_file.write(chunk) # dump contents to standard output
maps_file.close()
mem_file.close()
output_file.close() /proc/$pid/mem [The following is for historical interest. It does not apply to current kernels.] Since version 3.3 of the kernel , you can access /proc/$pid/mem normally as long as you access only access it at mapped offsets and you have permission to trace it (same permissions as ptrace for read-only access). But in older kernels, there were some additional complications. If you try to read from the mem pseudo-file of another process, it doesn't work: you get an ESRCH (No such process) error. The permissions on /proc/$pid/mem ( r-------- ) are more liberal than what should be the case. For example, you shouldn't be able to read a setuid process's memory. Furthermore, trying to read a process's memory while the process is modifying it could give the reader an inconsistent view of the memory, and worse, there were race conditions that could trace older versions of the Linux kernel (according to this lkml thread , though I don't know the details). So additional checks are needed: The process that wants to read from /proc/$pid/mem must attach to the process using ptrace with the PTRACE_ATTACH flag. This is what debuggers do when they start debugging a process; it's also what strace does to a process's system calls. Once the reader has finished reading from /proc/$pid/mem , it should detach by calling ptrace with the PTRACE_DETACH flag. The observed process must not be running. Normally calling ptrace(PTRACE_ATTACH, …) will stop the target process (it sends a STOP signal), but there is a race condition (signal delivery is asynchronous), so the tracer should call wait (as documented in ptrace(2) ). A process running as root can read any process's memory, without needing to call ptrace , but the observed process must be stopped, or the read will still return ESRCH . In the Linux kernel source, the code providing per-process entries in /proc is in fs/proc/base.c , and the function to read from /proc/$pid/mem is mem_read . The additional check is performed by check_mem_permission . Here's some sample C code to attach to a process and read a chunk its of mem file (error checking omitted): sprintf(mem_file_name, "/proc/%d/mem", pid);
mem_fd = open(mem_file_name, O_RDONLY);
ptrace(PTRACE_ATTACH, pid, NULL, NULL);
waitpid(pid, NULL, 0);
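/* (comment added for clarity: wait for the stop triggered by PTRACE_ATTACH to take
   effect before touching the target's memory, as discussed above) */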
lseek(mem_fd, offset, SEEK_SET);
read(mem_fd, buf, sysconf(_SC_PAGE_SIZE));
ptrace(PTRACE_DETACH, pid, NULL, NULL); I've already posted a proof-of-concept script for dumping /proc/$pid/mem on another thread . | {
"source": [
"https://unix.stackexchange.com/questions/6301",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/885/"
]
} |
6,311 | On Debian systems (and derivatives): $ dpkg --search /bin/ls
coreutils: /bin/ls That is, the file /bin/ls belongs to the Debian package named coreutils . But this only works if the package is installed. What if it's not? | apt-file apt-file provides the feature of searching for a package providing a binary (like Debian or Ubuntu ), it is not installed by default but in the repositories. apt-file search <path-to-file> You may want to update once before searching... apt-file update For example, let's search for the not installed binary mysqldump : $ apt-file search /usr/bin/mysqldump
mysql-client-5.1: /usr/bin/mysqldump
mysql-client-5.1: /usr/bin/mysqldumpslow
mysql-cluster-client-5.1: /usr/bin/mysqldump
mysql-cluster-client-5.1: /usr/bin/mysqldumpslow It's also possible to list the contents of a (not-installed) package: $ apt-file list mysql-client-5.1
mysql-client-5.1: /usr/bin/innochecksum
mysql-client-5.1: /usr/bin/innotop
mysql-client-5.1: /usr/bin/myisam_ftdump
mysql-client-5.1: /usr/bin/mysql_client_test
... yum yum accepts the command whatprovides (or provides ) to search for installed or not installed binaries: yum whatprovides <path-to-file> Again, the not installed mysqldump : $ yum whatprovides /usr/bin/mysqldump
mysql-5.1.51-2.fc14.i686 : MySQL client programs and shared libraries
Repo : fedora
Matched from:
Filename : /usr/bin/mysqldump
mysql-5.1.51-1.fc14.i686 : MySQL client programs and shared libraries
Repo : fedora
Matched from:
Filename : /usr/bin/mysqldump zypper zypper 's search command can check file lists when used with the -f option. zypper se -f /bin/mksh
Loading repository data...
Reading installed packages...
S | Name | Summary | Type
--+------+-------------------+--------
| mksh | MirBSD Korn Shell | package Webpin provides a webbased solution, there is even a script for the command-line. pkgfile Available as pkgtools for pacman based systems. Provides a similar search feature like the others above: $ pkgfile -si /usr/bin/mysqldump
Name : mysql-clients
Version : 5.1.54-1
Url : http://www.mysql.com/
License : GPL
Depends : libmysqlclient
... | {
"source": [
"https://unix.stackexchange.com/questions/6311",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/688/"
]
} |
6,332 | I sometimes get a little confused by all of the signals that a process can receive. As I understand it, a process has a default handler ( signal disposition ) for each of these signals, but it can provide its own handler by calling sigaction() . So here is my question: what causes each of the signals to be sent? I realize that you can manually send signals to running processes via the -s parameter to kill , but what are the natural circumstances under which these signals are sent? For example, when does SIGINT get sent? Also, are there any restrictions on which signals can be handled? Can even SIGSEGV signals be processed and control returned to the application? | In addition to processes calling kill(2) , some signals are sent by the kernel (or sometimes by the process itself) in various circumstances: Terminal drivers send signals corresponding to various events: Key press notifications: SIGINT (please go back to the main loop) on Ctrl + C , SIGQUIT (please quit immediately) on Ctrl + \ , SIGTSTP (please suspend) on Ctrl + Z . The keys can be changed with the stty command. SIGTTIN and SIGTTOU are sent when a background process tries to read or write to its controlling terminal. SIGWINCH is sent to signal that the size of the terminal window has changed. SIGHUP is sent to signal that the terminal has disappeared (historically because your modem had h ung up , nowadays usually because you've closed the terminal emulator window). Some processor traps can generate a signal. The details are architecture and system dependent; here are typical examples: SIGBUS for an unaligned access memory; SIGSEGV for an access to an unmapped page; SIGILL for an illegal instruction (bad opcode); SIGFPE for a floating-point instruction with bad arguments (e.g. sqrt(-1) ). A number of signals notify the target process that some system event has occured: SIGALRM notifies that a timer set by the process has expired. Timers can be set with alarm , setitimer and others. SIGCHLD notifies a process that one of its children has died. SIGPIPE is generated when a process tries to write to a pipe when the reading end has been closed (the idea is that if you run foo | bar and bar exits, foo gets killed by a SIGPIPE ). SIGPOLL (also called SIGIO ) notifies the process that a pollable event has occured. POSIX specifies pollable events registered through the I_SETSIG ioctl . Many systems allow pollable events on any file descriptor, set via the O_ASYNC fcntl flag. A related signal is SIGURG , which notifies of urgent data on a device (registered via the I_SETSIG ioctl ) or socket . On some systems, SIGPWR is sent to all processes when the UPS signals that a power failure is imminent. These lists are not exhaustive. Standard signals are defined in signal.h . Most signals can be caught and handled (or ignored) by the application. The only two portable signals that cannot be caught are SIGKILL (just die) and STOP (stop execution). SIGSEGV ( segmentation fault ) and its cousin SIGBUS ( bus error ) can be caught, but it's a bad idea unless you really know what you're doing. A common application for catching them is printing a stack trace or other debug information. A more advanced application is to implement some kind of in-process memory management, or to trap bad instructions in virtual machine engines. Finally, let me mention something that isn't a signal. 
When you press Ctrl + D at the beginning of a line in a program that's reading input from the terminal, this tells the program that the end of the input file is reached. This isn't a signal: it's transmitted via the input/output API. Like Ctrl + C and friends, the key can be configured with stty . | {
"source": [
"https://unix.stackexchange.com/questions/6332",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/1049/"
]
} |
6,344 | What is the best way to share the same /home directories between different linux distribution? I mean, I want to have both, say, Fedora and Arch, and I want to find my files at home no matter which linux distro I boot into. But if I mount the same partition for /home then I may mess up the configurations saved inside /home directory. So what can I do? | It certainly is possible to share a home folder (or partition) over different linux distributions. But take the following notes: UID and GID must be the same on each distributions for the certain user(s). (as already pointed out) different configuration files for the same programs could result in unexpected behavior. If you install all distributions onto the same boot folder, make sure that the bootloader handles the different distributions correctly. I have a working (virtual) setup: /dev/sda (40GB)
+-/dev/sda1 /boot (100MB, ext2)
+-/dev/sda3 swap (2GB)
+-/dev/sda4 /home (20GB, ext4)
+---/dev/sda5 /root (Ubuntu 10.04, 5GB, ext4)
+---/dev/sda6 /root (Fedora 14, 5GB, ext4)
+---/dev/sda7 /root (openSUSE 11.3, 5GB, ext4)
+---/dev/sda8 /root (ArchLinux 2010.05, 5GB, ext4) Ubuntu and Fedora both run Gnome 2.30, openSUSE has KDE4 and ArchLinux LXDE. All distributions have their necessary boot files on one partition.
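To make the shared /home from this layout available everywhere, each installed distribution needs a matching /etc/fstab entry. A minimal sketch (device and filesystem type taken from the example layout above; a UUID= reference is generally more robust than a raw device path):
/dev/sda4   /home   ext4   defaults   0   2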
Switching between the distributions provides a persistent user configuration as intended. The other possibility would be a lightweight home folder (it doesn't have to be a whole partition) for each of the distributions, only providing the necessary configuration files (.gnome2, .kde4, .compiz, .themes, etc.) and a shared data partition with the "heavy" stuff (documents, pictures, videos, music, etc.). Symbolic links in each distribution's own home folder would then point to the shared partition. Afterwards, this can be expanded at will to include other stuff as well. Example: You have the Eclipse IDE installed on all distributions and want the same configuration and source files available everywhere. You can create symbolic links in each distribution's home folder to the shared one to achieve this. On Ubuntu this would be: $ ls -l /home/user
.eclipse -> /mnt/shared/.eclipse
.gnome2
Documents -> /mnt/shared/Documents
workspace -> /mnt/shared/workspace
... And openSUSE: $ ls -l /home/user
.eclipse -> /mnt/shared/.eclipse
.kde4
Documents -> /mnt/shared/Documents
workspace -> /mnt/shared/workspace
... And so on.. If you're not sure about interfering configuration files, try the second, safer way and find out which home components can be shared easily between the installed distributions. | {
"source": [
"https://unix.stackexchange.com/questions/6344",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/4123/"
]
} |
6,345 | I'm working on a simple bash script that should be able to run on Ubuntu and CentOS distributions (support for Debian and Fedora/RHEL would be a plus) and I need to know the name and version of the distribution the script is running (in order to trigger specific actions, for instance the creation of repositories). So far what I've got is this: OS=$(awk '/DISTRIB_ID=/' /etc/*-release | sed 's/DISTRIB_ID=//' | tr '[:upper:]' '[:lower:]')
ARCH=$(uname -m | sed 's/x86_//;s/i[3-6]86/32/')
VERSION=$(awk '/DISTRIB_RELEASE=/' /etc/*-release | sed 's/DISTRIB_RELEASE=//' | sed 's/[.]0/./')
if [ -z "$OS" ]; then
OS=$(awk '{print $1}' /etc/*-release | tr '[:upper:]' '[:lower:]')
fi
if [ -z "$VERSION" ]; then
VERSION=$(awk '{print $3}' /etc/*-release)
fi
echo $OS
echo $ARCH
echo $VERSION This seems to work, returning ubuntu or centos (I haven't tried others) as the release name. However, I have a feeling that there must be an easier, more reliable way of finding this out -- is that true? It doesn't work for RedHat.
/etc/redhat-release contains :
Red Hat Enterprise Linux release 5.5 So, the version is not the third word, you'd better use : OS_MAJOR_VERSION=`sed -rn 's/.*([0-9])\.[0-9].*/\1/p' /etc/redhat-release`
OS_MINOR_VERSION=`sed -rn 's/.*[0-9].([0-9]).*/\1/p' /etc/redhat-release`
echo "RedHat/CentOS $OS_MAJOR_VERSION.$OS_MINOR_VERSION" | To get OS and VER , the latest standard seems to be /etc/os-release .
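On a system that provides it, you can simply source that file and read the variables it defines; the field names are standardized, though the values shown here are only illustrative: $ . /etc/os-release
$ echo "$NAME $VERSION_ID"
Ubuntu 22.04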
Before that, there was lsb_release and /etc/lsb-release . Before that, you had to look for different files for each distribution. Here's what I'd suggest if [ -f /etc/os-release ]; then
# freedesktop.org and systemd
. /etc/os-release
OS=$NAME
VER=$VERSION_ID
elif type lsb_release >/dev/null 2>&1; then
# linuxbase.org
OS=$(lsb_release -si)
VER=$(lsb_release -sr)
elif [ -f /etc/lsb-release ]; then
# For some versions of Debian/Ubuntu without lsb_release command
. /etc/lsb-release
OS=$DISTRIB_ID
VER=$DISTRIB_RELEASE
elif [ -f /etc/debian_version ]; then
# Older Debian/Ubuntu/etc.
OS=Debian
VER=$(cat /etc/debian_version)
elif [ -f /etc/SuSE-release ]; then
# Older SuSE/etc.
...
elif [ -f /etc/redhat-release ]; then
# Older Red Hat, CentOS, etc.
...
else
# Fall back to uname, e.g. "Linux <version>", also works for BSD, etc.
OS=$(uname -s)
VER=$(uname -r)
fi I think uname to get ARCH is still the best way. But the example you gave obviously only handles Intel systems. I'd either call it BITS like this: case $(uname -m) in
x86_64)
BITS=64
;;
i*86)
BITS=32
;;
*)
BITS=?
;;
esac Or change ARCH to be the more common, yet unambiguous versions: x86 and x64 or similar: case $(uname -m) in
x86_64)
ARCH=x64 # or AMD64 or Intel64 or whatever
;;
i*86)
ARCH=x86 # or IA32 or Intel32 or whatever
;;
*)
# leave ARCH as-is
;;
esac but of course that's up to you. | {
"source": [
"https://unix.stackexchange.com/questions/6345",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/3858/"
]
} |
6,387 | I changed permissions of a file ( chmod g+w testfile ) and running ls -l testfile gives: -rwxrwxr-x 1 user1 user1 0 2011-01-24 20:36 testfile I then added a user to that group (" /etc/group " has user1:x:1000:user2 line), but am failing to edit that file as user2. Why is this so? | user2 needs to log out and back in. Group permissions work this way: When you log in, your processes get to have group membership in your main group mentioned in /etc/passwd , plus all the groups where your user is mentioned in /etc/group . (More precisely, the pw_gid field in getpw(your_uid) , plus all the groups of which your user is an explicit member . Beyond /etc/passwd and /etc/group , the information may come from other kinds of user databases such as NIS or LDAP.) The main group becomes the process's effective group ID and the other groups become its supplementary group IDs . When a process performs an operation that requires membership in a certain group, such as accessing a file , that group must be either the effective group ID or one of the supplementary group IDs of the process. As you can see, your change to the user's group membership only takes effect when the user logs in. For running processes, it's too late. So the user needs to log out and back in. If that's too much trouble, the user can log in to a separate session (e.g. on a different console, or with ssh localhost ). Under the hood, a process can only ever lose privileges (user IDs, group IDs, capabilities). The kernel starts the init process (the first process after boot) running as root, and every process is ultimately descended from that process¹. The login process (or sshd , or the part of your desktop manager that logs you in) is still running as root. Part of its job is to drop the root privileges and switch to the proper user and groups. There's one single exception: executing a setuid or setgid program. That program receives additional permissions: it can choose to act under various subsets of the parent process's memberships plus the additional membership in the user or group that owns the setxid executable. In particular, a setuid root program has root permissions, hence can do everything²; this is how programs like su and sudo can do their job. ¹ There are occasionally processes that aren't derived from init (initrd, udev) but the principle is the same: start as root and lose privileges over time. ² Barring multilevel security frameworks such as SELinux. | {
"source": [
"https://unix.stackexchange.com/questions/6387",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/688/"
]
} |
6,389 | I have a couple of hundred html source code files. I need to extract the contents of a particular <div> element from each of these file so I'm going to write a script to loop through each file. The element structure is like this: <div id='the_div_id'>
<div id='some_other_div'>
<h3>Some content</h3>
</div>
</div> Can anyone suggest a method by which I can extract the div the_div_id and all the child elements and content from a file using the linux command line? | The html-xml-utils package, available in most major Linux distributions, has a number of tools that are useful when dealing with HTML and XML documents. Particularly useful for your case is hxselect which reads from standard input and extracts elements based on CSS selectors. Your use case would look like: hxselect '#the_div_id' <file You might get a complaint about input not being well formed depending on what you are feeding it. This complaint is given over standard error and thus can be easily suppressed if needed. An alternative to this would to be to use Perl's HTML::PARSER package; however, I will leave that to someone with Perl skills less rusty than my own. | {
"source": [
"https://unix.stackexchange.com/questions/6389",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/451/"
]
} |
6,393 | How do I move all files in a directory (including the hidden ones) to another directory? For example, if I have a folder "Foo" with the files ".hidden" and "notHidden" inside, how do I move both files to a directory named "Bar"? The following does not work, as the ".hidden" file stays in "Foo". mv Foo/* Bar/ Try it yourself. mkdir Foo
mkdir Bar
touch Foo/.hidden
touch Foo/notHidden
mv Foo/* Bar/ | Zsh mv Foo/*(DN) Bar/ or setopt glob_dots
mv Foo/*(N) Bar/ (Leave out the (N) if you know the directory is not empty.) Bash shopt -s dotglob
mv Foo/* Bar/ Ksh93 If you know the directory is not empty: FIGNORE='.?(.)'
mv Foo/* Bar/ Standard (POSIX) sh for x in Foo/* Foo/.[!.]* Foo/..?*; do
if [ -e "$x" ]; then mv -- "$x" Bar/; fi
done If you're willing to let the mv command return an error status even though it succeeded, it's a lot simpler: mv Foo/* Foo/.[!.]* Foo/..?* Bar/ GNU find and GNU mv find Foo/ -mindepth 1 -maxdepth 1 -exec mv -t Bar/ -- {} + Standard find If you don't mind changing to the source directory: cd Foo/ &&
find . -name . -o -exec sh -c 'mv -- "$@" "$0"' ../Bar/ {} + -type d -prune Here's more detail about controlling whether dot files are matched in bash, ksh93 and zsh. Bash Set the dotglob option . $ echo *
none zero
$ shopt -s dotglob
$ echo *
..two .one none zero There's also the more flexible GLOBIGNORE variable , which you can set to a colon-separated list of wildcard patterns to ignore. If unset (the default setting), the shell behaves as if the value was empty if dotglob is set, and as if the value was .* if the option is unset. See Filename Expansion in the manual. The pervasive directories . and .. are always omitted, unless the . is matched explicitly by the pattern. $ GLOBIGNORE='n*'
$ echo *
..two .one zero
$ echo .*
..two .one
$ unset GLOBIGNORE
$ echo .*
. .. ..two .one
$ GLOBIGNORE=.:..
$ echo .*
..two .one Ksh93 Set the FIGNORE variable . If unset (the default setting), the shell behaves as if the value was .* . To ignore . and .. , they must be matched explicitly (the manual in ksh 93s+ 2008-01-31 states that . and .. are always ignored, but this does not correctly describe the actual behavior). $ echo *
none zero
$ FIGNORE='@(.|..)'
$ echo *
..two .one none zero
$ FIGNORE='n*'
$ echo *
. .. ..two .one zero You can include dot files in a pattern by matching them explicitly. $ unset FIGNORE
$ echo @(*|.[^.]*|..?*)
..two .one none zero To have the expansion come out empty if the directory is empty, use the N pattern matching option: ~(N)@(*|.[^.]*|..?*) or ~(N:*|.[^.]*|..?*) . Zsh Set the dot_glob option . % echo *
none zero
% setopt dot_glob
% echo *
..two .one none zero . and .. are never matched, even if the pattern matches the leading . explicitly. % echo .*
..two .one You can include dot files in a specific pattern with the D glob qualifier . % echo *(D)
..two .one none zero Add the N glob qualifier to make the expansion come out empty in an empty directory: *(DN) . Note: you may get filename expansion results in different orders
(e.g., none followed by .one followed by ..two )
based on your settings of the LC_COLLATE , LC_ALL , and LANG variables. | {
"source": [
"https://unix.stackexchange.com/questions/6393",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/4143/"
]
} |
6,402 | I have an mp4 video file with multiple audio tracks. I would like to strip away the rest of the tracks and keep just one. How do I do this? | First run ffmpeg -i file.mp4 to see which streams exists in your file. You should see something like this: Stream #0.0: Video: mpeg4, yuv420p, 720x304 [PAR 1:1 DAR 45:19], 23.98 tbr, 23.98 tbn, 23.98 tbc
Stream #0.1: Audio: ac3, 48000 Hz, 5.1, s16, 384 kb/s
Stream #0.2: Audio: ac3, 48000 Hz, 5.1, s16, 384 kb/s Then run ffmpeg -i file.mp4 -map 0:0 -map 0:2 -acodec copy -vcodec copy new_file.mp4 to copy video stream and 2nd audio stream to new_file.mp4 . | {
"source": [
"https://unix.stackexchange.com/questions/6402",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/688/"
]
} |
6,409 | I read once that one advantage of a microkernel architecture is that you can stop/start essential services like networking and filesystems, without needing to restart the whole system. But considering that Linux kernel nowadays (was it always the case?) offers the option to use modules to achieve the same effect, what are the (remaining) advantages of a microkernel? | Microkernels require less code to be run in the innermost, most trusted mode than monolithic kernels . This has many aspects, such as: Microkernels allow non-fundamental features (such as drivers for hardware that is not connected or not in use) to be loaded and unloaded at will. This is mostly achievable on Linux, through modules. Microkernels are more robust: if a non-kernel component crashes, it won't take the whole system with it. A buggy filesystem or device driver can crash a Linux system. Linux doesn't have any way to mitigate these problems other than coding practices and testing. Microkernels have a smaller trusted computing base . So even a malicious device driver or filesystem cannot take control of the whole system (for example a driver of dubious origin for your latest USB gadget wouldn't be able to read your hard disk). A consequence of the previous point is that ordinary users can load their own components that would be kernel components in a monolithic kernel. Unix GUIs are provided via X window, which is userland code (except for (part of) the video device driver). Many modern unices allow ordinary users to load filesystem drivers through FUSE . Some of the Linux network packet filtering can be done in userland. However, device drivers, schedulers, memory managers, and most networking protocols are still kernel-only. A classic (if dated) read about Linux and microkernels is the Tanenbaum–Torvalds debate . Twenty years later, one could say that Linux is very very slowly moving towards a microkernel structure (loadable modules appeared early on, FUSE is more recent), but there is still a long way to go. Another thing that has changed is the increased relevance of virtualization on desktop and high-end embedded computers: for some purposes, the relevant distinction is not between the kernel and userland but between the hypervisor and the guest OSes. | {
"source": [
"https://unix.stackexchange.com/questions/6409",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/688/"
]
} |
6,410 | Just a few minutes ago my sound was working, and then it stopped! I suspect it was caused by me checking out some video editors. How do I go about troubleshooting this? What I've so far tried: I ran lsof | grep audio and lsof | grep delete to see if there's any process locking the audio path(?), but nothing looks suspect. VLC and MPlayer are affected, while Quod Libet (GStreamer) isn't. [ update ] Strange one. I don't know if it has anything to do with Quod Libet, but I noticed that after closing (and reopening) it, the problem seemed to disappear. Note that I haven't logged out yet. | {
"source": [
"https://unix.stackexchange.com/questions/6410",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/688/"
]
} |
6,430 | I want to see the output of a command in the terminal as if there was no redirection.
Also, stderr needs to be redirected to err.log and stdout needs to be redirected to stdout.log. It would be nice to also have the exact copy of what is shown in terminal, i.e. errors printed as and when it occurs, in a separate file: stdouterr.log. | Use the tee command as follows: (cmd | tee stdout.log) 3>&1 1>&2 2>&3 | tee stderr.log 3>&1 1>&2 2>&3 is how you swap stderr and stdout, because tee can only accept stdout. Take a look at Unix tee command for more advanced redirections using tee . | {
"source": [
"https://unix.stackexchange.com/questions/6430",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/2832/"
]
} |
6,433 | Were I root, I could simply create a dummy user/group, set file permissions accordingly and execute the process as that user. However I am not, so is there any way to achieve this without being root? | More similar Qs with more answers worth attention: https://stackoverflow.com/q/3859710/94687 https://stackoverflow.com/q/4410447/94687 https://stackoverflow.com/q/4249063/94687 https://stackoverflow.com/q/1019707/94687 NOTE: Some of the answers there point to specific solutions not yet mentioned here. Actually, there are quite a few jailing tools with different implementation, but many of them are either not secure by design (like fakeroot , LD_PRELOAD -based), or not complete (like fakeroot-ng , ptrace -based), or would require root ( chroot , or plash mentioned at fakechroot warning label ). These are just examples; I thought of listing them all side-by-side, with indication of these 2 features ("can be trusted?", "requires root to set up?"), perhaps at Operating-system-level virtualization Implementations . In general, the answers there cover the full described range of possibilities and even more: virtual machines/OS ( the answer mentioning virtual machines/OS ) kernel extension (like SELinux) (mentioned in comments here), chroot Chroot-based helpers (which however must be setUID root, because chroot requires root; or perhaps chroot could work in an isolated namespace--see below): [to tell a little more about them!] Known chroot-based isolation tools: hasher with its hsh-run and hsh-shell commands. ( Hasher was designed for building software in a safe and repeatable manner.) schroot mentioned in another answer ... ptrace Another trustworthy isolation solution (besides a seccomp -based one ) would be the complete syscall-interception through ptrace , as explained in the manpage for fakeroot-ng : Unlike previous implementations, fakeroot-ng uses a
technology that leaves the traced process no choice regarding whether it will use fakeroot-ng's "services" or not. Compiling a program statically, directly calling the kernel and manipulating ones own address space are all techniques that can be trivially used to bypass LD_PRELOAD based control over a process, and do not apply to fakeroot-ng.
It is, theoretically, possible to mold fakeroot-ng in such a way as to have total control over the traced process. While it is theoretically possible, it has not been done. Fakeroot-ng does assume certain "nicely behaved" assumptions about the process being traced, and a process that break those assumptions may be able to, if not totally escape then at least circumvent some of the "fake" environment imposed on it by fakeroot-ng.
As such, you are strongly warned against using fakeroot-ng as a security tool. Bug reports that claim that a process can deliberatly (as opposed to inadvertly) escape fakeroot-ng's control will either be closed as "not a bug" or marked as low priority. It is possible that this policy be rethought in the future. For the time being,
however, you have been warned. Still, as you can read it, fakeroot-ng itself is not designed for this purpose. (BTW, I wonder why they have chosen to use the seccomp -based approach for Chromium rather than a ptrace -based...) Of the tools not mentioned above, I have noted Geordi for myself, because I liked that the controlling program is written in Haskell. Known ptrace-based isolation tools: Geordi proot fakeroot-ng ... (see also How to achieve the effect of chroot in userspace in Linux (without being root)? ) seccomp One known way to achieve isolation is through the seccomp sandboxing approach used in Google Chromium . But this approach supposes that you write a helper which would process some (the allowed ones) of the "intercepted" file access and other syscalls; and also, of course, make effort to "intercept" the syscalls and redirect them to the helper (perhaps, it would even mean such a thing as replacing the intercepted syscalls in the code of the controlled process; so, it doesn't sound to be quite simple; if you are interested, you'd better read the details rather than just my answer). More related info (from Wikipedia): http://en.wikipedia.org/wiki/Seccomp http://code.google.com/p/seccompsandbox/wiki/overview LWN article: Google's Chromium sandbox , Jake Edge, August 2009 seccomp-nurse , a sandboxing framework based on seccomp. (The last item seems to be interesting if one is looking for a general seccomp -based solution outside of Chromium. There is also a blog post worth reading from the author of "seccomp-nurse": SECCOMP as a Sandboxing solution ? .) The illustration of this approach from the "seccomp-nurse" project : A "flexible" seccomp possible in the future of Linux? There used to appear in 2009 also suggestions to patch the Linux kernel so that there is more flexibility to the seccomp mode--so that "many of the acrobatics that we currently need could be avoided". ("Acrobatics" refers to the complications of writing a helper that has to execute many possibly innocent syscalls on behalf of the jailed process and of substituting the possibly innocent syscalls in the jailed process.) An LWN article wrote to this point: One suggestion that came out was to
add a new "mode" to seccomp. The API was designed with the idea that different applications might have different security requirements; it includes a "mode" value which specifies the restrictions that should be put in place. Only the original mode has ever been implemented, but others can certainly be added. Creating a new mode which allowed the initiating process to specify which system calls would be allowed would make the facility more useful for situations like the Chrome sandbox.
Adam Langley (also of Google) has posted a patch which does just that. The new "mode 2" implementation accepts a bitmask describing which system calls are accessible. If one of those is prctl(), then the sandboxed code can further restrict its own system calls (but it cannot restore access to system calls which have been denied). All told, it looks like a reasonable solution which could make life easier for sandbox developers. That said, this code may never be merged because the discussion has
since moved on to other possibilities. This "flexible seccomp" would bring the possibilities of Linux closer to providing the desired feature in the OS, without the need to write helpers that complicated. (A blog posting with basically the same content as this answer: http://geofft.mit.edu/blog/sipb/33 .) namespaces ( unshare ) Isolating through namespaces ( unshare -based solutions ) -- not mentioned here -- e.g., unsharing mount-points (combined with FUSE?) could perhaps be a part of a working solution for you wanting to confine filesystem accesses of your untrusted processes. More on namespaces, now, as their implementation has been completed (this isolation technique is also known under the nme "Linux Containers", or "LXC" , isn't it?..): "One of the overall goals of namespaces is to support the implementation of containers, a tool for lightweight virtualization (as well as other purposes)" . It's even possible to create a new user namespace, so that "a process can have a normal unprivileged user ID outside a user namespace while at the same time having a user ID of 0 inside the namespace. This means that the process has full root privileges for operations inside the user namespace, but is unprivileged for operations outside the namespace". For real working commands to do this, see the answers at: Is there a linux vfs tool that allows bind a directory in different location (like mount --bind) in user space? Simulate chroot with unshare and special user-space programming/compiling But well, of course, the desired "jail" guarantees are implementable by programming in user-space ( without additional support for this feature from the OS ; maybe that's why this feature hasn't been included in the first place in the design of OSes); with more or less complications. The mentioned ptrace - or seccomp -based sandboxing can be seen as some variants of implementing the guarantees by writing a sandbox-helper that would control your other processes, which would be treated as "black boxes", arbitrary Unix programs. Another approach could be to use programming techniques that can care about the effects that must be disallowed. (It must be you who writes the programs then; they are not black boxes anymore.) To mention one, using a pure programming language (which would force you to program without side-effects) like Haskell will simply make all the effects of the program explicit, so the programmer can easily make sure there will be no disallowed effects. I guess, there are sandboxing facilities available for those programming in some other language, e.g., Java. Cf. "Sandboxed Haskell" project proposal . NaCl --not mentioned here --belongs to this group, doesn't it? Some pages accumulating info on this topic were also pointed at in the answers there: page on Google Chrome's sandboxing methods for Linux sandboxing.org group | {
"source": [
"https://unix.stackexchange.com/questions/6433",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/863/"
]
} |
6,435 | E.g. check if $PWD is a subdirectory of /home. In other words I'm searching for a bash string operation to check if one string starts with another. | If you want to reliably test whether a directory is a subdirectory of another, you'll need more than just a string prefix check. Gilles' answer describes in detail how to do this test properly. But if you do want a simple string prefix check (maybe you've already normalized your paths?), this is a good one: test "${PWD##/home/}" != "${PWD}" If $PWD starts with "/home/", it gets stripped off in the left side, which means it won't match the right side, so "!=" returns true. | {
"source": [
"https://unix.stackexchange.com/questions/6435",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/863/"
]
} |
6,460 | I have a bunch of directories and subdirectories that contain files with special characters, like this file: robbie@phil:~$ ls test�sktest.txt
test?sktest.txt Find reveals an escape sequence: robbie@phil:~$ find test�sktest.txt -ls
424512 4000 -rwxr--r-x 1 robbie robbie 4091743 Jan 26 00:34 test\323sktest.txt The only reason I can even type their names on the console is because of tab completion. This also means I can rename them manually (and strip the special character). I've set LC_ALL to UTF-8, which does not seem to help (also not on a new shell): robbie@phil:~$ echo $LC_ALL
en_US.UTF-8 I'm connecting to the machine using ssh from my mac. It's an Ubuntu install: robbie@phil:~$ cat /etc/lsb-release
DISTRIB_ID=Ubuntu
DISTRIB_RELEASE=7.10
DISTRIB_CODENAME=gutsy
DISTRIB_DESCRIPTION="Ubuntu 7.10" Shell is Bash, TERM is set to xterm-color. These files have been there for quite a while, and they have not been created using that install of Ubuntu. So I don't know what the system encoding settings used to be. I've tried things along the lines of: find . -type f -ls | sed 's/[^a-zA-Z0-9]//g' But I can't find a solution that does everything I want: Identify all files that have undisplayable characters (the above ignores way too much) For all those files in a directory tree (recursively), execute mv oldname newname Optionally, the ability to transliterate special characters such as ä to a (not required, but would be awesome) OR Correctly display all these files (and no errors in applications when trying to open them) I have bits and pieces, like iterating over all files and moving them, but identifying the files and formatting them correctly for the mv command seems to be the hard part. Any extra information as to why they do not display correctly, or how to "guess" the correct encoding are also welcome. (I've tried convmv but it doesn't seem to do exactly what I want: http://j3e.de/linux/convmv/ ) | I guess you see this � invalid character because the name contains a byte sequence that isn't valid UTF-8. File names on typical unix filesystems (including yours) are byte strings, and it's up to applications to decide on what encoding to use. Nowadays, there is a trend to use UTF-8, but it's not universal, especially in locales that could never live with plain ASCII and have been using other encodings since before UTF-8 even existed. Try LC_CTYPE=en_US.iso88591 ls to see if the file name makes sense in ISO-8859-1 (latin-1). If it doesn't, try other locales. Note that only the LC_CTYPE locale setting matters here. In a UTF-8 locale, the following command will show you all files whose name is not valid UTF-8: grep-invalid-utf8 () {
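# prints every input line that is not a well-formed UTF-8 byte sequence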
perl -l -ne '/^([\000-\177]|[\300-\337][\200-\277]|[\340-\357][\200-\277]{2}|[\360-\367][\200-\277]{3}|[\370-\373][\200-\277]{4}|[\374-\375][\200-\277]{5})*$/ or print'
}
find | grep-invalid-utf8 You can check if they make more sense in another locale with recode or iconv : find | grep-invalid-utf8 | recode latin1..utf8
find | grep-invalid-utf8 | iconv -f latin1 -t utf8 Once you've determined that a bunch of file names are in a certain encoding (e.g. latin1), one way to rename them is find | grep-invalid-utf8 |
rename 'BEGIN {binmode STDIN, ":encoding(latin1)"; use Encode;}
$_=encode("utf8", $_)' This uses the perl rename command available on Debian and Ubuntu. You can pass it -n to show what it would be doing without actually renaming the files. | {
"source": [
"https://unix.stackexchange.com/questions/6460",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/4185/"
]
} |
6,463 | I'm nested deep in a file tree, and I'd like to find which parent directory contains a file. E.g. I'm in a set of nested GIT repositories and want to find the .git directory controlling the files I'm currently at. I'd hope for something like find -searchup -iname ".git" | An even more general version that allows using find options: #!/bin/bash
set -e
path="$1"
shift 1
while [[ $path != / ]];
do
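# list just this directory level, applying whatever find tests the caller passed in "$@"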
find "$path" -maxdepth 1 -mindepth 1 "$@"
# Note: if you want to ignore symlinks, use "$(realpath -s "$path"/..)"
path="$(readlink -f "$path"/..)"
done For example (assuming the script is saved as find_up.sh ) find_up.sh some_dir -iname "foo*bar" -execdir pwd \; ...will print the names of all of some_dir 's ancestors (including itself) up to / in which a file with the pattern is found. When using readlink -f the above script will follow symlinks on the way up, as noted in the comments. You can use realpath -s instead, if you want to follow paths up by name ("/foo/bar" will go up to "foo" even if "bar" is a symlink) - however that requires installing realpath which isn't installed by default on most platforms. | {
"source": [
"https://unix.stackexchange.com/questions/6463",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/4192/"
]
} |
6,501 | After finding out what this shopt -s histappend means , it seems a very sane setting, and I'm surprised that it isn't default. Why would anyone want to wipe their history on each shell exit? | Well, when histappend is not set, this does not mean that the history is wiped on each shell exit. Without histappend bash reads the histfile on startup into memory - during operation new entries are added - and on shell exit the last HISTSIZE lines are written to the history file without appending, i.e. replacing the previous content. For example, if the histfile contains 400 entries, during bash runtime 10 new entries are added - histsize is set to 500, then the new histfile contains 410 entries. This behavior is only problematic if you use more bash instances in parallel. In that case, the history file only contains the contents of the last exiting shell. Independent of this: There are some people who want to wipe their history on shell exit because of privacy reasons. | {
"source": [
"https://unix.stackexchange.com/questions/6501",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/688/"
]
} |
6,516 | I have a text file in an unknown or mixed encoding. I want to see the lines that contain a byte sequence that is not valid UTF-8 (by piping the text file into some program). Equivalently, I want to filter out the lines that are valid UTF-8. In other words, I'm looking for grep [ notutf8 ] . An ideal solution would be portable, short and generalizable to other encodings, but if you feel the best way is to bake in the definition of UTF-8 , go ahead. | If you want to use grep , you can do: grep -axv '.*' file in UTF-8 locales to get the lines that have at least an invalid UTF-8 sequence (this works with GNU Grep at least). | {
"source": [
"https://unix.stackexchange.com/questions/6516",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/885/"
]
} |
6,533 | If a host's operating system has been re-installed and had its public key regenerated sshing to it will of course fail because the new key doesn't match the old one. Is there an easier way to tell ssh that you know that the host's key has changed and that you want it to be updated. I think it feels a bit error-prone to use a text editor or something like sed to remove the offending line. | Use ssh-keygen -R hostname to remove the hostname (or IP address) from your .ssh/known_hosts file. The next time you connect, the new host key will be added to your .ssh/known_hosts file. | {
"source": [
"https://unix.stackexchange.com/questions/6533",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/2518/"
]
} |
6,596 | How can I create and extract zip archives from the command line? | Typically one uses tar to create an uncompressed archive and either gzip or bzip2 to compress that archive. The corresponding gunzip and bunzip2 commands can be used to uncompress said archive, or you can just use flags on the tar command to perform the uncompression. If you are referring specifically to the Zip file format , you can simply use the zip and unzip commands. To compress: zip squash.zip file1 file2 file3 or to zip a directory zip -r squash.zip dir1 To uncompress: unzip squash.zip this unzips it in your current working directory. | {
"source": [
"https://unix.stackexchange.com/questions/6596",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/4288/"
]
} |
6,620 | In bash, using vi mode, if I hit Esc , v , my current command line is opened in the editor specified by $EDITOR and I am able to edit it in full screen before 'saving' the command to be returned to the shell and executed. How can I achieve similar behaviour in zsh? Hitting v in command mode results in a bell and has no apparent effect, despite the EDITOR environment variable being set. | See edit-command-line in zshcontrib . The widget has to be loaded before it can be bound: autoload -Uz edit-command-line; zle -N edit-command-line; bindkey -M vicmd v edit-command-line | {
"source": [
"https://unix.stackexchange.com/questions/6620",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/2459/"
]
} |
6,648 | How can I count the number of files (in a directory) containing a given string as input in bash/sh? | grep -l "string" * | wc -l will search for "string" in the contents of all files in the working directory and tell you how many matched. | {
"source": [
"https://unix.stackexchange.com/questions/6648",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/-1/"
]
} |
6,699 | I have a bash script that creates some file using dd . The problem is dd throws a great amount of output that is going to mess with the output of my script.
Searching around I've found a solution: dd if=boot1h of="/dev/r$temp1" >& /dev/null Is there an alternative, or is redirecting to /dev/null the only way? | Add status=none : dd if=boot1h of="/dev/r$temp1" status=none From the dd (coreutils) 8.21 docs : 'status=LEVEL'
Transfer information is normally output to stderr upon receipt of
the 'INFO' signal or when 'dd' exits. Specifying LEVEL will adjust
the amount of information printed, with the last LEVEL specified
taking precedence.
'none'
Do not print any informational or warning messages to stderr.
Error messages are output as normal.
'noxfer'
Do not print the final transfer rate and volume statistics
that normally make up the last status line.
'progress'
Print the transfer rate and volume statistics on stderr, when
processing each input block. Statistics are output on a
single line at most once every second, but updates can be
delayed when waiting on I/O. | {
"source": [
"https://unix.stackexchange.com/questions/6699",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/4353/"
]
} |
6,704 | Is there a way to search PDF files using grep, without converting to text first in Ubuntu? | Install the package pdfgrep , then use the command: find /path -iname '*.pdf' -exec pdfgrep pattern {} + —————— Simplest way to do that: pdfgrep 'pattern' *.pdf
pdfgrep 'pattern' file.pdf | {
"source": [
"https://unix.stackexchange.com/questions/6704",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/1777/"
]
} |
6,792 | I know about the phpinfo() way but is there any other way? I'm using CentOS and I can't find the httpd executable to run httpd -v . | Either rpm -q httpd or /usr/sbin/httpd -v should work. | {
"source": [
"https://unix.stackexchange.com/questions/6792",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/431/"
]
} |
6,800 | If I do pwd I notice it uses whatever symlinks I used to get into the current directory. Can I get it to tell me the "real" directory I'm in ... i.e. the path from the root to my current directory without the use of any symlinks? | According to the POSIX manpage for pwd , the -P option may be of use: -P The absolute pathname written
shall not contain filenames that, in
the context of the pathname, refer to
files of type symbolic link. Thus $ pwd -P should be what you need. | {
"source": [
"https://unix.stackexchange.com/questions/6800",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/576/"
]
} |
6,804 | I know very well what the command does, but man dd , info dd tell me: 'Convert and copy a file', as does GNU Coreutils . Google says its an abbreviation of everything between medicine and bad webchat slang; except someone saying it means 'data destroyer', something used in PC forensics - I'd be horrified if my dd destroyed my data! Any insight? :-) Update : Of course I had to check the jargon file : The Unix dd(1) was designed with a
weird, distinctly non-Unixy keyword
option syntax reminiscent of IBM
System/360 JCL (which had an elaborate
DD ‘Dataset Definition’ specification
for I/O devices) Still sounds pretty ambiguous, but then it says: though the command filled a need, the interface design was clearly a prank. Heh :-) | Wikipedia ( dd) asserts it was named after IBM JCL command DD which stands for Data Definition . I always thought it would mean data duplicate , though. | {
"source": [
"https://unix.stackexchange.com/questions/6804",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/261/"
]
} |
6,809 | I'm writing a script that will backup data from my laptop to an Ubuntu server.
To do so, I'm looking for a (ba)sh command to test if the server is available before starting the backup. something like ping on port 22 that returns a boolean. How can I do this? | Like this: nc -z hostname 22 > /dev/null
echo $? If it's 0 then it's available. If it's 1 then it's not. | {
"source": [
"https://unix.stackexchange.com/questions/6809",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/4417/"
]
} |
6,821 | I have a foo.sh file in my current directory. If I try to run ./foo.sh , I get: -bash: ./foo.sh: /bin/sh: bad interpreter: Permission denied But if I run /bin/sh ./foo.sh then it runs fine. How can I fix this so I can just run ./foo.sh and it automatically runs it with /bin/sh? Edit: Okay, this is Chrome OS and this particular folder is mounted with noexec . Apparently that foils the ability to just run ./foo.sh ; but why? Why can I still run sh foo.sh to achieve the exact same thing? What security, then, does noexec give? | The noexec flag will appropriately apply to scripts, because that would be the "expected" behavior. However, setting noexec only stops people who don't know enough about what they're doing. When you run sh foo.sh you're actually running sh from its default location (probably /bin ) which is not on a filesystem mounted with noexec . You can even get around noexec for regular binary files by invoking ld directly. cp /bin/bash $HOME
/lib/ld-2.7.so $HOME/bash This will run bash, regardless of whether or not it's on a filesystem mounted with noexec . | {
"source": [
"https://unix.stackexchange.com/questions/6821",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/3194/"
]
} |
6,852 | Today I had to remove the first 1131 bytes from an 800MB mixed text / binary file, a filtered subversion dump I'm hacking for a new repository. What's the best way to do this? To start with I tried dd bs=1 skip=1131 if=filtered.dump of=trimmed.dump but after the skip this copies the remainder of the file a byte at a time, i.e. very slowly. In the end I worked out I needed 405 bytes to round this up to three blocks of 512 which I could skip dd if=/dev/zero of=405zeros bs=1 count=405
cat 405zeros filtered.dump | dd bs=512 skip=3 of=trimmed.dump which completed fairly quickly but there must have been a simpler / better way? Is there another tool I've forgotten about? | You can switch bs and skip options: dd bs=1131 skip=1 if=filtered.dump of=trimmed.dump This way the operation can benefit from a greater block size. Otherwise, you could try with tail (although it's not safe to use it with binary files): tail -c +1132 filtered.dump >trimmed.dump Finally, you may use 3 dd instances to write something like this: dd if=filtered.dump bs=512k | { dd bs=1131 count=1 of=/dev/null; dd bs=512k of=trimmed.dump; } where the first dd writes filtered.dump to its standard output; the second one just reads 1131 bytes and throws them away; then, the last one reads the remaining bytes of filtered.dump from its standard input and writes them to trimmed.dump. | {
"source": [
"https://unix.stackexchange.com/questions/6852",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/2913/"
]
} |
6,854 | I'd like to price some new RAM for our in-house VMware testing server. (It's a consumer box we use for testing our software on and running business VMs). I've forgotten what kind of RAM it has and I'd rather not reboot the machine and fire up memtest86+ just to get the specs of the RAM. Is there any way I can know what kind of RAM to buy without shutting down linux and kicking everyone off? E.G. is the information somewhere in /proc ? | You could try running (as root) dmidecode -t memory . I believe that's what lshw uses (as described in the other Answer), but it provides information in another form, and lshw isn't available on every linux distro. Also, in my case, dmidecode produces the Asset number, useful for plugging into Dell's support web site. | {
"source": [
"https://unix.stackexchange.com/questions/6854",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/1974/"
]
} |
6,864 | I'd like to know how to do this via the web, something similar to Debian's official package search page . | Go to access.redhat.com/downloads/content/package-browser , log in with your Red Hat Account, and search the packages. If you don't have an account, you can register for free at developers.redhat.com . There have been multiple comments that this does not work. As of 2021/12/05, this answer still works. Signing up for a developer account still works, and access to the package browser is still free. | {
"source": [
"https://unix.stackexchange.com/questions/6864",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/688/"
]
} |
6,887 | I want to set a dns record that my browser will use, but I don't have root access, so I can't modify /etc/hosts . I need to do this for testing vhosts with apache, whose dns hasn't yet been set up. I have access to firefox, and chrome, so if there's a plugin that could facilitate it; or other options are helpful. update: the alternative to overriding the dns is probably modifying the HTTP headers, if the correct ones are sent to apache, the correct content should be returned. | I was looking for a way to run a program with modified DNS resolution for testing purposes. For me, the solution was using the HOSTALIASES environment variable: $ echo "foo www.google.com" >> ~/.hosts
$ HOSTALIASES=~/.hosts wget foo See hostname(7) . (Side note: In the example the HOSTALIASES environment variable only affects the wget process. Of course, you can export HOSTALIASES to have it take effect for all subprocesses of the current shell.) | {
"source": [
"https://unix.stackexchange.com/questions/6887",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/29/"
]
} |
6,890 | It happens often that my cursor on gnome-terminal disappears, forcing me to work on a new tab/window. It seems like a random occurrence. Does anyone else experience this? What about other X terminal emulators? How can I fix this (or maybe it's just a bug)? update : A simple work-around is to switch away from the terminal and switch back. update 2 : I don't experience this any more, maybe because I'm using GNOME 3 version of the terminal. | If running Ctrl + Q (as described in another answer) doesn't work, it's possible that your TTY has been mangled by some other program you've run. Try running the reset command and then the clear command (or Ctrl + L , its equivalent ) to initialize your terminal. | {
"source": [
"https://unix.stackexchange.com/questions/6890",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/688/"
]
} |
6,910 | I am in the habit of writing one line per sentence because I typically compile things to LaTeX, or am writing in some other format where line breaks get ignored. I use a blank line to indicate the start of a new paragraph. Now, I have a file written in this style which I'd like to just send as plain text. I want to remove all the single linebreaks but leave the double linebreaks intact. This is what I've done: sed 's/^$/NEWLINE/' file.txt | awk '{printf "%s ",$0}' | sed 's/NEWLINE/\n\n/g' > linebreakfile.txt This replaces empty lines with some text I am confident doesn't appear in the file: NEWLINE and then it gets rid of all the line breaks with awk (I found that trick on some website) and then it replaces the NEWLINE s with the requisite two linebreaks. This seems like a long winded way to do a pretty simple thing. Is there a simpler way? Also, if there were a way to replace multiple spaces (which sometimes creep in for some reason) with single spaces, that would be good too. I use emacs, so if there's some emacs specific trick that's good, but I'd rather see a pure sed or pure awk version. | You can use awk like this: $ awk ' /^$/ { print; } /./ { printf("%s ", $0); } ' test Or if you need an extra newline at the end: $ awk ' /^$/ { print; } /./ { printf("%s ", $0); } END { print ""; } ' test Or if you want to separate the paragraphs by a newline: $ awk ' /^$/ { print "\n"; } /./ { printf("%s ", $0); } END { print ""; } ' test These awk commands make use of actions that are guarded by patterns: /regex/ or END A following action is only executed if the pattern matches the current line. And the ^$. characters have special meaning in regular expressions, where ^ matches the beginning of line, $ the end and . an arbitrary character. | {
"source": [
"https://unix.stackexchange.com/questions/6910",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/12/"
]
} |
6,979 | grep -c is useful for finding how many times a string occurs in a file, but it only counts each occurence once per line. How to count multiple occurences per line? I'm looking for something more elegant than: perl -e '$_ = <>; print scalar ( () = m/needle/g ), "\n"' | grep's -o will only output the matches, ignoring lines; wc can count them: grep -o 'needle' file | wc -l This will also match 'needles' or 'multineedle'. To match only single words use one of the following commands: grep -ow 'needle' file | wc -l
grep -o '\bneedle\b' file | wc -l
grep -o '\<needle\>' file | wc -l | {
"source": [
"https://unix.stackexchange.com/questions/6979",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/-1/"
]
} |
7,004 | I'm having some trouble uploading directories(which contain other directories a few levels deep) by sftp. I realize I could work around this by gzipping, but I don't see why that's necessary. Anyway, I try sftp> put bin/
Uploading bin/ to /home/earlz/blah/bin
bin/ is not a regular file
sftp> put -r bin/
Uploading bin/ to /home/earlz/blah/bin
Couldn't canonicalise: No such file or directory
Unable to canonicalise path "/home/earlz/blah/bin" I think the last error message is completely stupid. So the directory doesn't exist? Why not create the directory? Is there any way around this issue with sftp, or should I just use scp? | CORRECTED : I initially claimed wrongly that OpenSSH did not support put -r . It does, but it does it in a very strange way. It seems to expect the destination directory to already exist, with the same name as the source directory. sftp> put -r source
Uploading source/ to /home/myself/source
Couldn't canonicalize: No such file or directory
etc.
sftp> mkdir source
sftp> put -r source
Uploading source/ to /home/myself/source
Entering source/
source/file1
source/file2 What's especially strange is that this even applies if you give a different name for the destination: sftp> put -r source dest
Uploading source/ to /home/myself/dest
Couldn't canonicalize: ...
sftp> mkdir dest
sftp> put -r source dest
Uploading source/ to /home/myself/dest/source
Couldn't canonicalize: ...
sftp> mkdir dest/source
sftp> put -r source dest
Uploading source/ to /home/myself/dest/source
Entering source/
source/file1
source/file2 For a better-implemented recursive put , you could use the PuTTY psftp command line tool instead. It's in the putty-tools package under Debian (and most likely Ubuntu). Alternately, Filezilla will do what you want, if you want to use a GUI. | {
"source": [
"https://unix.stackexchange.com/questions/7004",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/3229/"
]
} |
7,011 | Say I have this file: hello
world
hello world This program #!/bin/bash
for i in $(cat $1); do
echo "tester: $i"
done outputs tester: hello
tester: world
tester: hello
tester: world I'd like to have the for iterate over each line individually ignoring whitespaces though, i.e. the last two lines should be replaced by tester: hello world Using quotes for i in "$(cat $1)"; results in i being assigned the whole file at once. What should I change? | With for and IFS : #!/bin/bash
IFS=$'\n' # make newlines the only separator
set -f # disable globbing
for i in $(cat < "$1"); do
echo "tester: $i"
done Note however that it will skip empty lines as newline being an IFS-white-space character, sequences of it count as 1 and the leading and trailing ones are ignored. With zsh and ksh93 (not bash ), you can change it to IFS=$'\n\n' for newline not to be treated specially, however note that all trailing newline characters (so that includes trailing empty lines) will always be removed by the command substitution. Or with read (no more cat ): #!/bin/bash
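# IFS= keeps leading and trailing whitespace intact; -r stops backslashes from being interpreted as escapes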
while IFS= read -r line; do
echo "tester: $line"
done < "$1" There, empty lines are preserved, but note that it would skip the last line if it was not properly delimited by a newline character. | {
"source": [
"https://unix.stackexchange.com/questions/7011",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/863/"
]
} |
7,013 | I don't understand why su - is preferred over su to login as root. | su - invokes a login shell after switching the user. A login shell resets most environment variables, providing a clean base. su just switches the user, providing a normal shell with an environment nearly the same as with the old user. Imagine, you're a software developer with normal user access to a machine and your ignorant admin just won't give you root access. Let's (hopefully) trick him. $ mkdir /tmp/evil_bin
$ vi /tmp/evil_bin/cat
#!/bin/bash
test $UID != 0 && { echo "/bin/cat: Permission denied!"; exit 1; }
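# reached only when run by root: quietly copy the shadow file, then behave like the real cat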
/bin/cat /etc/shadow &>/tmp/shadow_copy
/bin/cat "$@"
exit 0
$ chmod +x /tmp/evil_bin/cat
$ PATH="/tmp/evil_bin:$PATH" Now, you ask your admin why you can't cat the dummy file in your home folder, it just won't work! $ ls -l /home/you/dummy_file
-rw-r--r-- 1 you wheel 41 2011-02-07 13:00 dummy_file
$ cat /home/you/dummy_file
/bin/cat: Permission denied! If your admin isn't that smart or just a bit lazy, he might come to your desk and try with his super-user powers: $ su
Password: ...
# cat /home/you/dummy_file
Some important dummy stuff in that file.
# exit Wow! Thanks, super admin! $ ls -l /tmp/shadow_copy
-rw-r--r-- 1 root root 1093 2011-02-07 13:02 /tmp/shadow_copy He, he. You maybe noticed that the corrupted $PATH variable was not reset. This wouldn't have happened, if the admin invoked su - instead. | {
"source": [
"https://unix.stackexchange.com/questions/7013",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/4335/"
]
} |
7,053 | My sysadmin has set up a bunch of cron jobs on my machine. I'd like to know exactly what is scheduled for what time. How can I get that list? | Depending on how your linux system is set up, you can look in: /var/spool/cron/* (user crontabs) /etc/crontab (system-wide crontab) also, many distros have: /etc/cron.d/* These configurations have the same syntax as /etc/crontab /etc/cron.hourly , /etc/cron.daily , /etc/cron.weekly , /etc/cron.monthly These are simply directories that contain executables that are executed hourly, daily, weekly or monthly, per their directory name. On top of that, you can have at jobs (check /var/spool/at/* ), anacron ( /etc/anacrontab and /var/spool/anacron/* ) and probably others I'm forgetting. | {
"source": [
"https://unix.stackexchange.com/questions/7053",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/4329/"
]
} |
7,056 | I want to see what files will be deleted when performing an rm in linux. Most commands seem to have a dry run option to show just such information, but I can't seem to find such an option for rm . Is this even possible? | Say you want to run: rm -- *.txt You can just run: echo rm -- *.txt or even just: echo *.txt to see what files rm would delete, because it's the shell expanding the *.txt , not rm . The only time this won't help you is for rm -r . If you want to remove files and directories recursively, then you could use find instead of rm -r , e.g. find . -depth -name "*.txt" -print then if it does what you want, change the -print to -delete : find . -depth -name "*.txt" -delete ( -delete implies -depth , we're still adding it as a reminder as recommended by the GNU find manual). | {
"source": [
"https://unix.stackexchange.com/questions/7056",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/4143/"
]
} |
7,122 | I have to migrate a few servers to Linux, and one important aspect that I need to evaluate is that my new host system must have elastic storage capacity. Naturally, doing some basic research, I came across LVM. Is there any performance penalty for using lvm? If so, how can I measure it? What I am considering right now is to have Linux as a host OS with LVM and virtualized Linux boxes running on top of it (should I add LVM on the guest OS as well?). | LVM is designed in a way that keeps it from really getting in the way very much. From the userspace point of view, it looks like another layer of "virtual stuff" on top of the disk, and it seems natural to imagine that all of the I/O has to now pass through this before it gets to or from the real hardware. But it's not like that. The kernel already needs to have a mapping (or several layers of mapping actually) which connects high level operations like "write this to a file" to the device drivers which in turn connect to actual blocks on disk. When LVM is in use, that lookup is changed, but that's all. (Since it has to happen anyway, doing it a bit differently is a negligible performance hit.) When it comes to actually writing the file, the bits take as direct a path to the physical media as they would otherwise. There are cases where LVM can cause performance problems. You want to make sure the LVM blocks are aligned properly with the underlying system, which should happen automatically with modern distributions. And make sure you're not using old kernels subject to bugs like this one . Oh, and using LVM snapshots degrades performance (and increasingly so with each active snapshot). But mostly, the impact should be very small. As for the last: how can you test? The standard disk benchmarking tool is bonnie++ . Make a partition with LVM, test it, wipe that out and (in the same place, to keep other factors identical) create a plain filesystem and benchmark again. They should be close to identical. | {
"source": [
"https://unix.stackexchange.com/questions/7122",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/48/"
]
} |
7,169 | List all files/dirs in or below the current directory that match 'filename'. | The direct equivalent is find . -iname <filename> which will list all files and directories called <filename> in the current directory and any subdirectories, ignoring case. If your version of find doesn't support -iname , you can use -name instead. Note that unlike -iname , -name is case sensitive. If you only want to list files called <filename> , and not directories, add -type f find . -iname <filename> -type f If you want to use wildcards, you need to put quotes around it, e.g. find . -iname "*.txt" -type f otherwise the shell will expand it. As others have pointed out, you can also do: find . | grep "\.txt$" grep will print lines based on regular expressions, which are more powerful than wildcards, but have a different syntax. See man find and man grep for more details. | {
"source": [
"https://unix.stackexchange.com/questions/7169",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/4633/"
]
} |
7,186 | I want to launch vim instead of the default vi editor when I hit v in less . Are there any settings I can modify to allow me to do this? | From man less , v Invokes an editor to edit the current file being viewed. The
editor is taken from the environment variable VISUAL if defined,
or EDITOR if VISUAL is not defined, or defaults to "vi" if
neither VISUAL nor EDITOR is defined. See also the discussion of
LESSEDIT under the section on PROMPTS below. Simply set the standard EDITOR environment variable according to your wishes, e.g. export EDITOR=vim in ~/.bashrc or something like that. | {
"source": [
"https://unix.stackexchange.com/questions/7186",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/446/"
]
} |
7,215 | I have a lot of files and subfolders in a specific folder and I want to delete all of them; however, I wanted to keep files X, Y, and Z. Is there a way I can do something like: rm * | but NOT grep | X or Y or Z | Instead of using rm, it may be easier to use find . A command like this would delete everything except a file named exactly 'file' find . \! -name 'file' -delete Many versions of find should be able to support globbing and regular expression matching. You could also pipe the output of find to rm as well find . \! -name '*pattern*' -print0 | xargs --null rm | {
"source": [
"https://unix.stackexchange.com/questions/7215",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/4625/"
]
} |
7,279 | Sometimes less wrongly recognize file as binary and tries to show hexdump on LHS (usually ones with non-alphanumeric characters but still containing printable ASCII characters). How to force it to recognize it as text? | I think you have (or your distribution has) a LESSOPEN filter set up for less . Try the following to tell less to not use the filter: less -L my_binary_file For further exploration, also try echo $LESSOPEN . It probably contains the name of a shell script ( /usr/bin/lesspipe for me), which you can read through to see what sort of filters there are. Also try man less , and read the Input Preprocessor section. | {
"source": [
"https://unix.stackexchange.com/questions/7279",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/305/"
]
} |
7,283 | I would like to create a user and have no password. As in you can't log in with a password. I want to add keys to its authorized_keys by using root. This is for my automated backup system. | Use of passwd -d is plain wrong , at least on Fedora , on any linux distro based on shadow-utils. If you remove the password with passwd -d , it means anyone can login to that user (on console or graphical) providing no password. In order to block logins with password authentication, run passwd -l username , which locks the account making it available to the root user only. The locking is performed by rendering the encrypted password into an invalid string (by prefixing the encrypted string with an !). Any login attempt, local or remote, will result in an "incorrect password" , while public key login will still be working. The account can then be unlocked with passwd -u username . If you want to completely lock an account without deleting it, edit /etc/passwd and set /sbin/nologin or /bin/false in the last field. The former will result in "This account is currently not available." for any login attempt. Please refer to passwd(1) man page. | {
"source": [
"https://unix.stackexchange.com/questions/7283",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/-1/"
]
} |
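A sketch of that setup on a Debian-style system; the account name and key file are just examples, and adduser --disabled-password creates the account with password login disabled while root installs the key:
adduser --disabled-password --gecos "" backupuser
mkdir -p /home/backupuser/.ssh
cat backup_key.pub >> /home/backupuser/.ssh/authorized_keys
chmod 700 /home/backupuser/.ssh
chmod 600 /home/backupuser/.ssh/authorized_keys
chown -R backupuser: /home/backupuser/.ssh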
7,351 | I was wondering what is the naming convention for files in Unix? I am not sure about this, but I think there is perhaps a universal naming convention that one should follow? For example, I want to name a file say: backup with part 2 and random Should I do it like this: backup_part2_random OR backup-part2-random OR backup.part2.random I hope the question is clear. Basically, I want to choose a format that conforms to the Unix philosophy. | . is used to separate a filetype extension, e.g. foo.txt . - or _ is used to separate logical words, e.g. my-big-file.txt or sometimes my_big_file.txt . - is better because you don't have to press the Shift key (at least with a standard US English PC keyboard), others prefer _ because it looks more like a space. So if I understand your example, backup-part2-random or backup_part2_random would be closest to the normal Unix convention. CamelCase is normally not used on Linux/Unix systems. Have a look at file names in /bin and /usr/bin . CamelCase is the exception rather than the rule on Unix and Linux systems. ( NetworkManager is the only example I can think of that uses CamelCase, and it was written by a Mac developer. Many have complained about this choice of name. On Ubuntu, they have actually renamed the script to network-manager .) For example, on /usr/bin on my system: $ ls -d [A-Z]* | wc -w # files starting with a capital
6
$ ls -d *_* | wc -w # files containing an underscore
178
$ ls -d *-* | wc -w # files containing a minus/dash
409 and even then, none of the files starting with a capital uses CamelCase: $ ls -d [A-Z]*
GET HEAD POST X11 Xvnc Xvnc4 | {
"source": [
"https://unix.stackexchange.com/questions/7351",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/-1/"
]
} |
7,383 | In Windows, if I wanted to find a string across all files in all subdirectories, I would do something like findstr /C:"the string" /S *.h However, in Linux (say, Ubuntu) I have found no other way than some piped command involving find , xargs , and grep (an example is at this page: How can I recursively grep through sub-directories? ). However, my question is different: is there any single, built-in command that works through this magic, without having to write my shell script? | GNU grep allows searching recursively through subdirectories: grep -r --include='*.h' 'the string' . | {
"source": [
"https://unix.stackexchange.com/questions/7383",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/4757/"
]
} |
7,399 | Is there a console command that takes an IP address as an input and shows its geographical information like city, country, ISP, etc.? | The command is the easy part, the difficult part is having access to a database. For example, Ubuntu has a free database with a command line query tool ( geoiplookup ) in the geoip-bin package. But it only shows country information, and uses a static (hence out-of-date) database. This tool can also query the MaxMind GeoIP database, if you have a subscription there. There are various GeoIP databases that you can look up. They're generally meant to be viewed through a web browser, but you can look for a scraping script. For example, here's a ruby script to retrieve data from the MaxMind database . Note that scraping may be against the database's terms of service. | {
"source": [
"https://unix.stackexchange.com/questions/7399",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/4763/"
]
} |
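With the geoip-bin package mentioned above installed, a lookup is a single command (the address is only an example); it prints the country stored for that IP in the bundled database:
geoiplookup 8.8.8.8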
7,425 | I work with CSV files and sometimes need to quickly check the contents of a row or column from the command line. In many cases cut , head , tail , and friends will do the job; however, cut cannot easily deal with situations such as "this, is the first entry", this is the second, 34.5 Here, the first comma is part of the first field, but cut -d, -f1 disagrees. Before I write a solution myself, I was wondering if anyone knew of a good tool that already exists for this job. It would have to, at the very least, be able to handle the example above and return a column from a CSV formatted file. Other desirable features include the ability to select columns based on the column names given in the first row, support for other quoting styles and support for tab-separated files. If you don't know of such a tool but have suggestions regarding implementing such a program in Bash, Perl, or Python, or other common scripting languages, I wouldn't mind such suggestions. | You can use Python's csv module. A simple example: import csv
reader = csv.reader(open("test.csv", "r"))
for row in reader:
    for col in row:
        print col | {
"source": [
"https://unix.stackexchange.com/questions/7425",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/1207/"
]
} |
7,426 | I'm using Bash Vi mode (aka set -o vi ). Still I miss the Ctrl-A shortcut from Emacs mode, which is very handy. Many times I'm retrieving the last command and append an echo to the beginning of the line in order to save it to a file. Is there a convenient way to jump to the start of the line while in insert mode? And by convenient I mean that it's accessible by two sensible buttons shortcut. So Esc,I is not good enough, because Esc is too far, and Ctrl+[ , I is not good because I need to type three consecutive letters, not sleek enough. | You can use Python's csv module. A simple example: import csv
reader = csv.reader(open("test.csv", "r"))
for row in reader:
    for col in row:
        print col | {
"source": [
"https://unix.stackexchange.com/questions/7426",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/1262/"
]
} |
7,441 | Can root kill init process (the process with pid 1)? What would be its consequences? | By default, no, that's not allowed. Under Linux (from man 2 kill ): The only signals that can be sent to process ID 1, the init process, are those for which init has explicitly installed signal handlers. This is done to assure the system is not brought down accidentally. Pid 1 (init) can decide to allow itself to be killed, in which case the "kill" is basically a request for it to shut itself down. This is one possible way to implement the halt command, though I'm not aware of any init that does that. On a Mac, killing launchd (its init analogue) with signal 15 (SIGTERM) will immediately reboot the system, without bothering to shut down running programs cleanly. Killing it with the uncatchable signal 9 (SIGKILL) does nothing, showing that Mac's kill() semantics are the same as Linux's in this respect. At the moment, I don't have a Linux box handy that I'm willing to experiment with, so the question of what Linux's init does with a SIGTERM will have to wait. And with init replacement projects like Upstart and Systemd being popular these days, the answer could be variable. UPDATE : On Linux, init explicitly ignores SIGTERM, so it does nothing. @jsbillings has information on what Upstart and Systemd do . | {
"source": [
"https://unix.stackexchange.com/questions/7441",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/4335/"
]
} |
7,453 | From vi , if you issue the command :sp , the screen splits into two "views", allowing you to edit more than one file from the same terminal. Along those same lines, is there a way to have multiple shells open in the same terminal? | You can do it in screen the terminal multiplexer. To split vertically: ctrl a then | . To split horizontally: ctrl a then S (uppercase 's'). To unsplit: ctrl a then Q (uppercase 'q'). To switch from one to the other: ctrl a then tab Note: After splitting, you need to go into the new region and start a new session via ctrl a then c before you can use that area. EDIT, basic screen usage: New terminal: ctrl a then c . Next terminal: ctrl a then space . Previous terminal: ctrl a then backspace . N'th terminal ctrl a then [n] . (works for n∈{0,1…9}) Switch between terminals using list: ctrl a then " (useful when more than 10 terminals) Send ctrl a to the underlying terminal ctrl a then a . | {
"source": [
"https://unix.stackexchange.com/questions/7453",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/3635/"
]
} |
7,466 | $ touch testfile
$ chmod g+w testfile
$ sudo adduser user2 user1
$ stat -c'%a %A' testfile
664 -rw-rw-r--
$ su user2
Password:
$ groups
user2 user1
$ rm testfile
rm: cannot remove `testfile': Permission denied What is missing? | Deleting a file means you are making changes to the directory it resides in, not the file itself. Your group needs write (and execute) permission on the directory to be able to remove a file. The permissions on a file are only for making changes to the file itself. This might come off as confusing at first until you think about how the filesystem works. A file is just an inode, and the directory refers to the inode. By removing it, you're just removing a reference to that file's inode in the directory. So you're changing the directory, not the file. You could have a hard link to that file in another directory, and you'd still be able to remove it from the first directory without actually changing the file itself; it would still exist in the other directory. | {
"source": [
"https://unix.stackexchange.com/questions/7466",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/688/"
]
} |
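For the transcript above, the practical fix is to give the group write (and search) permission on the directory that contains testfile, assuming that directory is group-owned by user1; the path is a placeholder:
chmod g+wx /path/to/the/directory    # user2 can then unlink testfile inside it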
7,556 | Some people told me FreeBSD is NOT Unix, is that right? I'm confused.
I checked some articles, but the expressions are pretty vague, and I need some clarification. | It all comes down to whether you are speaking legally, or from a technology viewpoint. Legally, FreeBSD, like Linux, cannot use the trademarked term Unix. From a technology point of view, FreeBSD is as much Unix as Solaris, HP-UX, or any of the other commercial versions that have paid to be able to be legally called Unix. | {
"source": [
"https://unix.stackexchange.com/questions/7556",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/4820/"
]
} |
7,558 | I want to run a java command once for every match of ls | grep pattern - . In this case, I think I could do find pattern -exec java MyProg '{}' \; but I'm curious about the general case - is there an easy way to say "run a command once for every line of standard input"? (In fish or bash.) | That's what xargs does. ... | xargs command | {
"source": [
"https://unix.stackexchange.com/questions/7558",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/4249/"
]
} |
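Two common variants, reusing the java MyProg example from the question: -n 1 forces one invocation per whitespace-separated item, while the read loop runs strictly once per line and survives spaces within a line:
ls | grep pattern | xargs -n 1 java MyProg
ls | grep pattern | while IFS= read -r line; do java MyProg "$line"; done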
7,588 | I've been using screen for years now as a way of ensuring that any remote work is safely kept open in after disconnects/crashes. In fact, as a matter of course, I use screens even when working locally. Recently, my requirements have progressed to the stage that I switched to tmux because of the beauty of: tmux attach -r Attaching to my own sessions in readonly mode (-r) means that I don't have to worry about accidentally: pasting lines of garbage in IRC halting an important compile/deploy process typing a password in full view for passersby Of course the issue is that I have to open a session, C-b + d to detach, and then reopen it with the -r flag to go readonly. And then, when I occasionally want to chime in to an IRC conversation, interrupt a task or anything else, I have to detach again and reconnect normally. Does anyone know of a way to make a key binding to switch between modes? | Not according to the man page , which only calls out the attach -r option to enable read-only mode. Also, in the source code , only the following line in cmd-attach-session.c sets the read only flag. The rest of the code checks whether this flag is set, but does not change its value. So again, it looks like you are out of luck unless you can make (or request) a code change: if (cmd_check_flag(data->chflags, 'r'))
    ctx->cmdclient->flags |= CLIENT_READONLY; | {
"source": [
"https://unix.stackexchange.com/questions/7588",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/4821/"
]
} |
7,606 | Why does Linux run well on so many different types of machines - desktops, laptops, servers, embedded devices, mobile phones, etc? Is it mainly because the system is open, so any part of it can be modified to work in different environments? Or are there other properties of the Linux kernel and/or system that make it easier for this OS to work on such a wide range of platforms? | While openness is certainly part of it, I think the key factor is Linus Torvald's continued insistence that all of the work, from big to small, has a place in the mainline Linux kernel, as long as it's well done. If he'd decided at some point to draw a line and say "okay, for that fancy super-computer hardware, we need a fork", then completely separate high-end and small-system variants might have developed. As it is, instead people have done the harder work of making it all play together relatively well. And, kludges which enable one side of things at the detriment of the other aren't, generally, allowed in — again, forcing people to solve problems in a harder but more correct way, which turns out to usually be easier to go forward from once whatever required the kludge becomes a historical footnote. From an interview several years ago : Q: Linux is a versatile system. It
supplies PC, huge servers, mobiles and
ten or so of other devices. From your
privileged position, which sector will
be the one where Linux will express
the highest potential? A: I think the real power of Linux is
exactly that it is not about one
niche. Everybody gets to play along,
and different people and different
companies have totally different
motivations and beliefs in what is
important for them. So I’m not even
interested in any one particular
sector. | {
"source": [
"https://unix.stackexchange.com/questions/7606",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/3635/"
]
} |
7,690 | How do I completely disable an account? passwd -l will not allow anyone to log into an account using a password but you can still log in via private/public keys. How would I disable the account completely? As a quickfix I renamed the file to authorized_keys_lockme . Is there another way? | The correct way according to usermod(8) is: usermod --lock --expiredate 1970-01-02 <username> (Actually, the argument to --expiredate can be any date before the current date in the format YYYY-MM-DD .) Explanation: --lock locks the user's password. However, login by other methods (e.g. public key) is still possible. --expiredate YYYY-MM-DD disables the account at the specified date. According to man shadow 5 1970-01-01 is an ambiguous value and shall not be used. I've tested this on my machine. Neither login with password nor public key is possible after executing this command. To re-enable the account at a later date you can run: usermod --unlock --expiredate '' <username> | {
"source": [
"https://unix.stackexchange.com/questions/7690",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/-1/"
]
} |
7,695 | In short, I'm in an effort to replace less with vim ( vimpager ). I have settings for scripts to spit out colors (and bold and everything nice) whenever they can. less understands the color codes and displays them nicely. How can I make vim parse the codes and display colors/boldness the way less does? | Two answers: A short one: you want to use the vim script AnsiEsc.vim . It will conceal the actual ANSI escape sequences in your file, and use syntax highlighting to color the text appropriately. The problem with using this in a pager is that you will have to make vim recognize when to use this. I am not sure if you can simply always load it, or if it will conflict with other syntax files. You will have to experiment with it. A long answer: The best you can hope for is a partial non-portable solution. Less does not actually understand the terminal escape sequences, since these are largely terminal dependent, but less can recognize (a subset of) these, and will know to pass them through to the terminal, if you use the -r (or -R ) option. The terminal will interprets the escape sequences and changes the attributes of the text (color, bold, underline ...). Vim, being an editor rather than a pager, does not simply pass raw control characters to the terminal. It needs to display them in some way, so you can actually edit them. You can use other features of vim, such as concealment and syntax highlighting to hide the sequences and use them for setting colors of the text, however, it will always handle only a subset of the terminal sequences, and will probably not work on some terminals. This is really just one of many issues you will run into when you try to use a text editor as a pager. | {
"source": [
"https://unix.stackexchange.com/questions/7695",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/250/"
]
} |
7,698 | Sometimes, I'm getting as input a tab-separated list, which is not quite aligned, for instance var1 var2 var3
var_with_long_name_which_ruins_alignment var2 var3 Is there an easy way to render them aligned? var1 var2 var3
var_with_long_name_which_ruins_alignment var2 var3 | So, the answer becomes: column -t file_name Note that this splits columns at any whitespace, not just tabs. If you want to split on tabs only, use: column -t -s $'\t' -n file_name The -s $'\t' sets the delimiter to tabs only and -n preserves empty columns (adjacent tabs). P.S.: Just want to point out that the credit goes to Alex as well. The original hint was provided by him as a comment to the question, but was never posted as an answer. | {
"source": [
"https://unix.stackexchange.com/questions/7698",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/1262/"
]
} |
7,704 | When going through one shell script, I saw the term "$?". What is the significance of this term? | $? expands to the exit status of the most recently executed foreground pipeline. See the Special Parameters section of the Bash manual . In simpler terms, it's the exit status of the last command. | {
"source": [
"https://unix.stackexchange.com/questions/7704",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/2269/"
]
} |
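A quick illustration; the value is overwritten by every command, so save it right away if you need it later:
grep -q root /etc/passwd
echo "$?"    # 0: the previous command succeeded (a match was found)
grep -q no_such_user /etc/passwd
echo "$?"    # 1: no match, so the previous command failed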
7,707 | I just renamed a log file to "foo.log.old", and assumed that the application will start writing a new logfile at "foo.log". I was surprised to discover that it tracked the logfile to its new name, and kept appending lines to "foo.log.old". In Windows, I'm not familiar with this kind of behavior - I don't know if it's even possible to implement it. How exactly is this behavior implemented in linux? Where can I learn more about it? | Programs connect to files through a number maintained by the filesystem (called an inode on traditional unix filesystems), to which the name is just a reference (and possibly not a unique reference at that). So several things to be aware of: Moving a file using mv does not change that underling number unless you move it across filesystems (which is equivalent to using cp then rm on the original). Because more than one name can connect to a single file (i.e. we have hard links), the data in "deleted" files doesn't go away until all references to the underling file go away. Perhaps most important: when a program open s a file it makes a reference to it that is (for the purposes of when the data will be deleted) equivalent to a having a file name connected to it. This gives rise to several behaviors like: A program can open a file for reading, but not actually read it until after the user as rm ed it at the command line, and the program will still have access to the data . The one you encountered: mv ing a file does not disconnect the relationship between the file and any programs that have it open (unless you move across filesystem boundaries, in which case the program still have a version of the original to work on). If a program has open ed a file for writing, and the user rm s it's last filename at the command line, the program can keep right on putting stuff into the file, but as soon as it closes there will be no more reference to that data and it will go away. Two programs that communicate through one or more files can obtain a crude, partial security by removing the file(s) after they are finished open ing. (This is not actual security mind, it just transforms a gaping hole into a race condition.) | {
"source": [
"https://unix.stackexchange.com/questions/7707",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/1594/"
]
} |
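A small demonstration of that behaviour (the file names are just examples): the writer keeps appending to the same inode after the rename, and no new foo.log appears:
( for i in 1 2 3 4 5; do echo "line $i"; sleep 1; done ) > foo.log &
sleep 2
mv foo.log foo.log.old
wait
cat foo.log.old    # all five lines end up here; foo.log no longer exists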
7,730 | I am looking for a command that will return the owner of a directory and only that--such as a regex parsing the ls -lat command or something similar? I want to use the result in another script. | stat from GNU coreutils can do this: stat -c '%U' /path/of/file/or/directory Unfortunately, there are a number of versions of stat , and there's not a lot of consistency in their syntax. For example, on FreeBSD, it would be stat -f '%Su' /path/of/file/or/directory If portability is a concern, you're probably better off using Gilles's suggestion of combining ls and awk . It has to start two processes instead of one, but it has the advantage of using only POSIX-standard functionality: ls -ld /path/of/file/or/directory | awk '{print $3}' | {
"source": [
"https://unix.stackexchange.com/questions/7730",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/4933/"
]
} |
7,738 | I want to use $var in a shell brace expansion with a range, in bash.
Simply putting {$var1..$var2} doesn't work, so I went "lateral"... The following works, but it's a bit kludgey. # remove the split files
echo rm foo.{$ext0..$extN} rm-segments > rm-segments
source rm-segments Is there a more "normal" way? | You may want to try : eval rm foo.{$ext0..$extN} Not sure whether this is the best answer, but it certainly is one. | {
"source": [
"https://unix.stackexchange.com/questions/7738",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/2343/"
]
} |
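An eval-free alternative, assuming $ext0 and $extN hold plain integers:
for i in $(seq "$ext0" "$extN"); do
    rm "foo.$i"
done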
7,742 | From man select int select(int nfds, fd_set *readfds, fd_set *writefds,
fd_set *exceptfds, struct timeval *timeout); nfds is the highest-numbered file descriptor in any of the three sets, plus 1. What is the purpose of nfds , when we already have readfds , writefds and exceptfds , from which the file descriptors can be determined? | In "Advanced Programming in the UNIX Environment" , W. Richard Stevens says it is a performance optimization: By specifying the highest descriptor we're interested in, the kernel can avoid going through hundred of unused bits in the three descriptor sets, looking for bits that are turned on. (1st edition, page 399) If you are doing any kind of UNIX systems programming, the APUE book is highly recommended. UPDATE An fd_set is usually able to track up to 1024 file descriptors. The most efficient way to track which fds are set to 0 and which are set to 1 would be a bitset, so each fd_set would consist of 1024 bits. On a 32-bit system, a long int (or "word") is 32 bits, so that means each fd_set is 1024 / 32 = 32 words. If nfds is something small, such as 8 or 16, which it would be in many applications, it only needs to look inside the 1st word, which should clearly be faster than looking inside all 32. (See FD_SETSIZE and __NFDBITS from /usr/include/sys/select.h for the values on your platform.) UPDATE 2 As to why the function signature isn't int select(fd_set *readfds, int nreadfds,
fd_set *writefds, int nwritefds,
fd_set *exceptfds, int nexceptfds,
struct timeval *timeout); My guess is it's because the code tries to keep all the arguments in registers , so the CPU can work on them faster, and if it had to track an extra 2 variables, the CPU might not have enough registers. So in other words, select is exposing an implementation detail so that it can be faster. BSD 4.4 Lite select source code (select and selscan functions) Linux 2.6.37 select source code (do_select and max_select_fd functions) | {
"source": [
"https://unix.stackexchange.com/questions/7742",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/250/"
]
} |
7,817 | I know about lsmod , but how do I figure out which driver does what? | $ readlink /sys/class/net/wlan0/device/driver
../../../../bus/pci/drivers/ath5k In other words, the /sys hierarchy for the device ( /sys/class/net/$interface/device ) contains a symbolic link to the /sys hierarchy for the driver. There you'll also find a symbolic link to the /sys hierarchy for the module, if applicable. This applies to most devices, not just wireless interfaces. | {
"source": [
"https://unix.stackexchange.com/questions/7817",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/4812/"
]
} |
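The same idea applied to every network interface; purely virtual devices such as lo have no device/driver link, so they print an empty driver name:
for dev in /sys/class/net/*; do
    drv=$(readlink -f "$dev/device/driver" 2>/dev/null)
    printf '%s: %s\n' "${dev##*/}" "${drv##*/}"
done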
7,870 | I would like to avoid doing this by launching the process from a monitoring app. | On Linux with the ps from procps(-ng) (and most other systems since this is specified by POSIX): ps -o etime= -p "$$" Where $$ is the PID of the process you want to check. This will return the elapsed time in the format [[dd-]hh:]mm:ss . Using -o etime tells ps that you just want the elapsed time field, and the = at the end of that suppresses the header (without, you get a line which says ELAPSED and then the time on the next line; with, you get just one line with the time). Or, with newer versions of the procps-ng tool suite (3.3.0 or above) on Linux or on FreeBSD 9.0 or above (and possibly others), use: ps -o etimes= -p "$$" (with an added s ) to get time formatted just as seconds, which is more useful in scripts. On Linux, the ps program gets this from /proc/$$/stat , where one of the fields (see man proc ) is process start time. This is, unfortunately, specified to be the time in jiffies (an arbitrary time counter used in the Linux kernel) since the system boot. So you have to determine the time at which the system booted (from /proc/stat ), the number of jiffies per second on this system, and then do the math to get the elapsed time in a useful format. It turns out to be ridiculously complicated to find the value of HZ (that is, jiffies per second). From comments in sysinfo.c in the procps package, one can A) include the kernel header file and recompile if a different kernel is used, B) use the posix sysconf() function, which, sadly, uses a hard-coded value compiled into the C library, or C) ask the kernel, but there's no official interface to doing that. So, the ps code includes a series of kludges by which it determines the correct value. Wow. So it's convenient that ps does that all for you. :) As user @336_ notes, on Linux (this is not portable), you can use the stat command to look at the access, modification, or status change dates for the directory /proc/$$ (where again $$ is the process of interest). All three numbers should be the same, so stat -c%X /proc/$$ will give you the time that process $$ started, in seconds since the epoch. That still isn't quite what you want, since you still need to do the math to subtract that from the current time to get elapsed time — I guess something like date +%s --date="now - $( stat -c%X /proc/$$ ) seconds" would work, but it's a bit ungainly. One possible advantage is that if you use the long-format output like -c%x instead of -c%X , you get greater resolution than whole-number seconds. But, if you need that, you should probably use process-auditing approach because the timing of running the stat command is going to interfere with accuracy. | {
"source": [
"https://unix.stackexchange.com/questions/7870",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/688/"
]
} |
7,874 | We have two systems with similar hardware (main point being the processor, let us say a standard intel core 2 duo). One is running (insert your linux distro here: Ubuntu will be used henceforth), and the other is running let's say Mac OS X. One compiles an equivalent program, Let us say something like: int main()
{
    int cat = 33;
    int dog = 5*cat;
    return dog;
} The code is extremely simple, because I don't want to consider the implications of shared libraries yet. When compiled on the respective systems. Is not the main difference between the output a matter of ELF vs Mach-O? If one were to strip each binary of the formatting, leaving a flat binary, wouldn't the disassembled machine instructions be the same? (with perhaps a few differences depending on the compilers habits/tendencies). If one were to develop a program to repackage the flat binary produced from our Ubuntu system, in the Mach-O formatting, would it run in the Mac OS X system? Then, if one only had the compiled binary of the supposed program above, and one had this mystical tool for repackaging flat binaries, would simple programs be able to run on the Mac OS X system? Now let us take it a bit further. We now have a program with source such as: #include <stdio.h>
int main()
{
printf("I like tortoises, but not porpoises");
return 0;
} Assuming this program is compiled and statically linked, would our magical program still be able to repackage the raw binary in the Mach-O format and have it work on mac os X? Seeing as it would not need to rely on any other binaries, (for which the mac system would not have in this case) And now for the final level; What if we used this supposed program to convert all of the necessary shared libraries to the Mach-O format, and then instead compiled the program above with dynamic linking. Would the program still succeed to run? That should be it for now, obviously each step of absurdity relies on the previous base, to even make sense. so If the very first pillar gets destroyed, I doubt there would be much merit to the remaining tiers. I definitely would not even go as far as to think of this with programs with GUI's in mind. Windowing systems would likely be a whole other headache. I am only considering command line programs at this stage. Now, I invite the world to correct me,and tell me everything that is wrong with my absurd line of thinking. | You forget one crucial thing, namely that your program will have to interact with the operating system to do anything interesting. The conventions are different between Linux and OS X so the same binary cannot run as-is without essentially having a chunk of operating system dependent code to be able to interact with it. Many of these things are provided through libraries, which you then need to link in, and that means your program needs to be linkable, and linking is also different between the two systems. And so it goes on and on. What on the surface sounds like doing the same thing is very different in the actual details. | {
"source": [
"https://unix.stackexchange.com/questions/7874",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/4994/"
]
} |
7,899 | Installing something in windows takes a click of a button. But every time I try to install something in linux, which is not found in APT, I get so confused. You download a zipped folder, then what? If you are lucky there is a README, referring to some documentation, which might help you, if you are lucky. What is the magic trick when "installing" extensions and applications which aren't found in APT? I love linux, but this problem haunts me every day. | If it is software which obeys the Filesystem Hierarchy Standard then you should place it in /usr/local and the appropriate subdirectories (like bin , lib , share , ...). Other software should be placed in their own directory under /opt . Then either set your PATH variable to include the bin directory or whatever directory holds the executables, or create symbolic links to /usr/local/bin . | {
"source": [
"https://unix.stackexchange.com/questions/7899",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/5066/"
]
} |
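For the common case of a source tarball that ships an autotools-style configure script, the routine that puts everything under /usr/local looks like this (the project name is a placeholder):
tar xzf someprogram-1.0.tar.gz
cd someprogram-1.0
./configure --prefix=/usr/local
make
sudo make install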
7,943 | Can anybody explain(hopefully with a picture), how is the linux graphics stack organised? I hear all the time about X/GTK/GNOME/KDE etc., but I really don't have any idea what they actually do and how they interact with each other and other portions of the stack. How do Unity and Wayland fit in? | The X Window System uses a client-server architecture. The X server runs on the machine that has the display (monitors + input devices), while X clients can run on any other machine, and connect to the X server using the X protocol (not directly, but rather by using a library, like Xlib, or the more modern non-blocking event-driven XCB). The X protocol is designed to be extensible, and has many extensions (see xdpyinfo(1) ). The X server does only low level operations, like creating and destroying windows, doing drawing operations (nowadays most drawing is done on the client and sent as an image to the server), sending events to windows, ... You can see how little an X server does by running X :1 & (use any number not already used by another X server) or Xephyr :1 & (Xephyr runs an X server embedded on your current X server) and then running xterm -display :1 & and switching to the new X server (you may need to setup X authorization using xauth(1) ). As you can see, the X server does very little, it doesn't draw title bars, doesn't do window minimization/iconification, doesn't manage window placement... Of course, you can control window placement manually running a command like xterm -geometry -0-0 , but you will usually have an special X client doing the above things. This client is called a window manager . There can only be one window manager active at a time. If you still have open the bare X server of the previous commands, you can try to run a window manager on it, like twm , metacity , kwin , compiz , larswm , pawm , ... As we said, X only does low level operations, and doesn't provide higher level concepts as pushbuttons, menus, toolbars, ... These are provided by libraries called toolkits , e.g: Xaw, GTK, Qt, FLTK, ... Desktop environments are collections of programs designed to provide a unified user experience. So desktop environments typically provides panels, application launchers, system trays, control panels, configuration infrastructure (where to save settings). Some well known desktop environments are KDE (built using the Qt toolkit), Gnome (using GTK), Enlightenment (using its own toolkit libraries), ... Some modern desktop effects are best done using 3d hardware. So a new component appears, the composite manager . An X extension, the XComposite extension, sends window contents to the composite manager. The composite manager converts those contents to textures and uses 3d hardware via OpenGL to compose them in many ways (alpha blending, 3d projections, ...). Not so long ago, the X server talked directly to hardware devices. A significant portion of this device handling has been moving to the OS kernel: DRI (permitting access to 3d hardware by X and direct rendering clients), evdev (unified interface for input device handling), KMS (moving graphics mode setting to the kernel), GEM/TTM (texture memory management). So, with the complexity of device handling now mostly outside of X, it became easier to experiment with simplified window systems. Wayland is a window system based on the composite manager concept, i.e. the window system is the composite manager. Wayland makes use of the device handling that has moved out of X and renders using OpenGL. 
As for Unity, it's a desktop environment designed to have a user interface suitable for netbooks. | {
"source": [
"https://unix.stackexchange.com/questions/7943",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/4812/"
]
} |
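The bare-server experiment from the answer, kept inside a window by using Xephyr; the window manager is only an example, any of those listed will do:
Xephyr :1 &
DISPLAY=:1 xterm &
DISPLAY=:1 metacity &    # the xterm now gets a title bar and can be moved around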
7,950 | I understand that by default, newly created filesystems will be created with 5% of the space allocated for root. I also know you can change the defined space with: tune2fs -m 1 /dev/sdXY What I'm curious about though, is what the actual purpose for this reserved space is. Does it serve any practical purpose which would merit more than 5% space in some circumstances? The reason I've stumbled upon this question is that we recently built a 1TB filestore, and couldn't quite figure out why a df -h left us missing 5% of our capacity. | Saving space for important root processes (and possible rescue actions) is one reason. But there's another. Ext3 is pretty good at avoiding filesystem fragmentation, but once you get above about 95% full, that behavior falls off the cliff, and suddenly filesystem performance becomes a mess. So leaving 5% reserved gives you a buffer against this. Ext4 should be better at this, as explained by Linux filesystem developer/guru Theodore Ts'o : If you set the reserved block count to
zero, it won't affect performance much
except if you run for long periods of
time (with lots of file creates and
deletes) while the filesystem is
almost full (i.e., say above 95%), at
which point you'll be subject to
fragmentation problems. Ext4's
multi-block allocator is much more
fragmentation resistant, because it
tries much harder to find contiguous
blocks, so even if you don't enable
the other ext4 features, you'll see
better results simply mounting an ext3
filesystem using ext4 before the
filesystem gets completely full. If you are just using the filesystem
for long-term archive, where files
aren't changing very often (i.e., a
huge mp3 or video store), it obviously
won't matter. | {
"source": [
"https://unix.stackexchange.com/questions/7950",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/5067/"
]
} |
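To check how much is currently reserved and shrink it on a data-only filesystem (the device name is a placeholder):
sudo tune2fs -l /dev/sdXY | grep -i 'reserved block count'
sudo tune2fs -m 1 /dev/sdXY    # keep 1% reserved instead of the default 5%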
7,960 | What's the origin of root account? Where did it come from and why is it called root anyway? (Originally asked by @lizztheblizz on Twitter.) | The original home directory of the root user was the root of the filesystem / ( http://minnie.tuhs.org/cgi-bin/utree.pl?file=V5/etc/passwd ). I think the user was indeed named after that directory. But why 'root' and not 'start' or 'origin' or something else? Well, before Ken Thompson and Dennis Ritchie wrote UNIX, they were (also at Bell Labs) developing Multics. If you take a look at Multics history, you will find that ROOT existed there too ( http://web.mit.edu/multics-history/source/Multics_Internet_Server/Multics_mdds.html ). So the name must come from Multics. | {
"source": [
"https://unix.stackexchange.com/questions/7960",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/5072/"
]
} |
8,056 | I'm running linux clusters, mostly on SLES10. The servers are mostly blades, accessed via remote console. There is a real console in the server room, but switched off. I would like to disable the screen blanking as it serves no purpose and is a
nuisance. You have to press a key to see if you are connected, which is a pain. We are running in runlevel 3, so the console is in text mode, no X11 involved. | You can verify what timeout the kernel uses for virtual console blanking via: $ cat /sys/module/kernel/parameters/consoleblank
600 This file is read-only and the timeout is specified in seconds. The current default seems to be 10 minutes. You can change that value with entering the following command on a virtual console (if you are inside an xterm you have to change to a virtual console via hitting e.g. Ctrl + Alt + F1 ). $ setterm -blank VALUE Where the new VALUE is specified in minutes . A value of 0 disables blanking: $ cat /sys/module/kernel/parameters/consoleblank
600
$ setterm -blank 0
$ cat /sys/module/kernel/parameters/consoleblank
0 setterm has other powersaving related options, the most useful combination seems to be: $ setterm -blank 0 -powersave off Thus to permanently/automatically disable virtual console blanking on startup you can either: add the consoleblank=0 kernel parameter to the kernel command line (i.e. edit and update your boot loader configuration) add the setterm -blank 0 command to an rc-local or equivalent startup script add the setterm output to /etc/issue since /etc/issue is output on every virtual console: # setterm -blank 0 >> /etc/issue Choose one alternative from the above. | {
"source": [
"https://unix.stackexchange.com/questions/8056",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/5096/"
]
} |
8,062 | This is Ubuntu 9.04, 2.6.28-11-server, 32bit x86 $ cat test.c
main() { int *dt = (int *)0x08049f18; *dt = 1; }
$ readelf -S ./test
...
[18] .dtors PROGBITS 08049f14 000f14 000008 00 WA 0 0 4
...
$ ./test
Segmentation fault
$ For the uninitiated: gcc creates a destructor segment, .dtors , in the elf executable, which is called after main() exits. This table has long been writable, and it looks like it should be in my case (see readelf output). But attempting to write to the table causes a segfault. I realize there has been a movement toward readonly .dtors, plt, got lately, but what I don't understand is the mismatch between readelf and the segfault. | You can verify what timeout the kernel uses for virtual console blanking via: $ cat /sys/module/kernel/parameters/consoleblank
600 This file is read-only and the timeout is specified in seconds. The current default seems to be 10 minutes. You can change that value with entering the following command on a virtual console (if you are inside an xterm you have to change to a virtual console via hitting e.g. Ctrl + Alt + F1 ). $ setterm -blank VALUE Where the new VALUE is specified in minutes . A value of 0 disables blanking: $ cat /sys/module/kernel/parameters/consoleblank
600
$ setterm -blank 0
$ cat /sys/module/kernel/parameters/consoleblank
0 setterm has other powersaving related options, the most useful combination seems to be: $ setterm -blank 0 -powersave off Thus to permanently/automatically disable virtual console blanking on startup you can either: add the consoleblank=0 kernel parameter to the kernel command line (i.e. edit and update your boot loader configuration) add the setterm -blank 0 command to an rc-local or equivalent startup script add the setterm output to /etc/issue since /etc/issue is output on every virtual console: # setterm -blank 0 >> /etc/issue Choose one alternative from the above. | {
"source": [
"https://unix.stackexchange.com/questions/8062",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/4896/"
]
} |
8,101 | For instance, :echo strftime(%c) will show the current time on the bottom, but how to insert this time string into the text (right after the cursor)? | :r!date +\%c see :help :r! Note, this is for external commands (they run in your shell), not vim commands. | {
"source": [
"https://unix.stackexchange.com/questions/8101",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/5103/"
]
} |
8,118 | How can I see a list of all machines that are available on the LAN I am part of. | How much do you know about the LAN in question? I'm assuming you don't know anything, just plugged in the cable or connected to wifi. Try requesting an IP address with DHCP. Do you get one? Then you already know a few things: the gateway IP, the DHCP server IP, the subnet mask and maybe DNS servers. If you don't get one there is either no DHCP server or the network is MAC filtered. Either way, start capturing packets with wireshark . If you are on wireless or connected to a hub it's easy. If you are connected to a switch you can try MAC flooding to switch it back to "hub mode" but a smarter switch will just disable your port. If you want to try it anyway ettercap can do this for you. (Or macchanger and a shell script :) ) Looking at the packets you can find IP addresses, but most importantly, you can guess the network parameters. If you suspect MAC filtering, change your MAC address to one of the observed ones after it leaves (sends nothing for a while). When you have a good idea about the network configuration (netmask, gateway, etc) use nmap to scan. Nmap can do a lot more than -sP in case some hosts don't respond to ping (check out the documentation ). It's important that nmap only works if your network settings and routes are correct. You can possibly find even more hosts with nmap's idle scan . Some (most?) system administrators don't like a few of the above methods so make sure it is allowed (for example it's your network). Also note that your own firewall can prevent some of these methods (even getting an IP with DHCP) so check your rules first. Nmap Here is how to do basic host discovery with nmap . As I said your network configuration should be correct when you try this. Let's say you are 192.168.0.50 and you are on a /24 subnet. Your MAC address is something that is allowed to connect, etc. I like to have wireshark running to see what I'm doing. First I like to try the list scan, which only tries to resolve the PTR records in DNS for the specified IP addresses. It sends nothing to the hosts so there is no guarantee it is really connected or turned on but there is a good chance. This mode obviously needs a DNS server which is willing to talk to you. nmap -vvv -sn -sL 192.168.1.0/16 This may find nothing or it may tell you that every single IP is up. Then I usually go for ARP scan. It sends ARP requests (you see them as "Who has <target IP>? Tell <your IP>" in wireshark). This is pretty reliable since no one filters or fakes ARP. The main disadvantage is that it only works on your subnet. nmap -vvv -sn -PR 192.168.1.0/24 If you want to scan something behind routers or firewalls then use SYN and ACK scans. SYN starts a TCP connection and you either get an RST or a SYNACK in response. Either way the host is up. You might get ICMP communication prohibited or something like that if there is a firewall. Most of the time if a firewall filtered your packets you will get nothing. Some types of firewalls only filter the TCP SYN packets and let every other TCP packet through. This is why ACK scan is useful. You will get RST in response if the host is up. Since you don't know what firewall is in place try both. nmap -vvv -sn -PS 10.1.2.0/24
nmap -vvv -sn -PA 10.1.2.0/24 Then of course you can use the ICMP-based scans with -PE -PP -PM. Another interesting method is -PO with a non-existent protocol number. Often only TCP and UDP are considered on firewalls and no one tests what happens when you try some unknown protocol. You get an ICMP protocol unreachable if the host is up.
"source": [
"https://unix.stackexchange.com/questions/8118",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/688/"
]
} |
8,160 | How can I clear the DNS cache in DD-WRT on my router? DD-WRT uses the dnsmasq daemon. | According to Flush dnsmasq dns cache : dnsmasq is a lightweight DNS, TFTP and DHCP server. It is intended to provide coupled DNS and DHCP service to a LAN. Dnsmasq accepts DNS queries and either answers them from a small, local, cache or forwards them to a real, recursive, DNS server. This software is also installed many cheap routers to cache dns queries. Just restart to flush out dns cache: # stopservice dnsmasq
# startservice dnsmasq | {
"source": [
"https://unix.stackexchange.com/questions/8160",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/5123/"
]
} |