source_id (int64, 1 – 4.64M) | question (string, 0 – 28.4k chars) | response (string, 0 – 28.8k chars) | metadata (dict) |
---|---|---|---|
71,585 | Say I have the following output from ls -l : drwxr-xr-x 2 root root 4096 Apr 7 17:21 foo How can I automatically convert this to the format used by chmod ? For example: $ echo drwxr-xr-x | chmod-format
755 I'm using OS X 10.8.3. | Some systems have commands to display the permissions of a file as a number, but unfortunately, nothing portable. zsh has a stat (aka zstat ) builtin in the stat module: zmodload zsh/stat
stat -H s some-file Then, the mode is in $s[mode] but is the mode, that is type + perms. If you want the permissions expressed in octal, you need: perms=$(([##8] s[mode] & 8#7777)) BSDs (including Apple OS/X ) have a stat command as well. stat -f %Lp some-file (without the L , the full mode is returned, in octal) GNU find (from as far back as 1990 and probably before) can print the permissions as octal: find some-file -prune -printf '%m\n' Later (2001, long after zsh stat (1997) but before BSD stat (2002)) a GNU stat command was introduced with again a different syntax: stat -c %a some-file Long before those, IRIX already had a stat command (already there in IRIX 5.3 in 1994) with another syntax: stat -qp some-file Again, when there's no standard command, the best bet for portability is to use perl : perl -e 'printf "%o\n", (stat shift)[2]&07777' some-file | {
"source": [
"https://unix.stackexchange.com/questions/71585",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/9191/"
]
} |
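To illustrate the portable perl approach from the answer above, here is a hedged sketch; the file name is hypothetical and any mode works the same way: $ touch demo.txt && chmod 755 demo.txt
$ perl -e 'printf "%o\n", (stat shift)[2] & 07777' demo.txt
755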
71,622 | I have created a really short-lived temporary directory that I wanted to share between some users for a few hours: /some/path/tmp Unfortunately I launched sudo chmod 777 -R /tmp instead of sudo chmod 777 -R tmp , so my /tmp directory is now completely public. Is it a security concern now that it is completely set to public? Should I change it back to more secure settings? What are the correct permissions for /tmp ? | The normal settings for /tmp are 1777, which ls shows as drwxrwxrwt . That is: wide open, except that only the owner of a file can remove it (that's what that extra t bit means for a directory). The problem with a /tmp with mode 777 is that another user could remove a file that you've created and substitute the content of their choice. If your /tmp is a tmpfs filesystem, a reboot will restore everything. Otherwise, run chmod 1777 /tmp . Additionally, a lot of files in /tmp need to be private. However, at least one directory critically needs to be world-readable: /tmp/.X11-unix , and possibly some other similar directories ( /tmp/.XIM-unix , etc.). The following command should mostly set things right: chmod 1777 /tmp
find /tmp \
-mindepth 1 \
-name '.*-unix' -exec chmod 1777 {} + -prune -o \
-exec chmod go-rwx {} + I.e. make all files and directories private (remove all permissions for group and other), but make the X11 sockets accessible to all. Access control on these sockets is enforced by the server, not by the file permissions. There may be other sockets that need to be publicly available. Run find /tmp -type s -user 0 to discover root-owned sockets which you may need to make world-accessible. There may be sockets owned by other system users as well (e.g. to communicate with a system bus); explore with find /tmp -type s ! -user $UID (where $UID is your user ID). | {
"source": [
"https://unix.stackexchange.com/questions/71622",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/4175/"
]
} |
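As a quick sanity check after running the fix above, GNU stat (assumed available) can confirm the sticky bit is back: stat -c '%a %A %n' /tmp
# 1777 drwxrwxrwt /tmp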
71,641 | Is there a command I can put in the .vimrc to make gvim look like vim's setup in the terminal? Same background, syntax highlighting, colors, and font. To add to my question, is there a way for gvim to follow vim's vimrc? That is, their layouts and actions are identical? | | {
"source": [
"https://unix.stackexchange.com/questions/71641",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/36241/"
]
} |
71,881 | I'm running zsh on Linux under setopt extended_glob ksh_glob glob_dots . I'm looking for something easy to type on the command line, with no portability requirements. I'm looking at a source code tree, with no “weird” file names (e.g. no \ in file names, no file name beginning with - ). Either of the following commands print the list of subdirectories of the current directory recursively: find -type d
print -l **/*/ This is actually an svn checkout: $ find -type d
./.deps
./.svn
./.svn/text-base
./.svn/prop-base
./.svn/tmp
./.svn/tmp/text-base
./.svn/tmp/prop-base
./.svn/tmp/props
./.svn/props
./lib/.svn
./lib/.svn/text-base
./lib/.svn/prop-base
./lib/.svn/tmp
./lib/.svn/tmp/text-base
./lib/.svn/tmp/prop-base
./lib/.svn/tmp/props
./src/.svn
./src/.svn/text-base
./src/.svn/prop-base
./src/.svn/tmp
./src/.svn/tmp/text-base
./src/.svn/tmp/prop-base
./src/.svn/tmp/props I want to exclude the .svn directories and their subdirectories which are present in every directory. It's easy with find : find -type d -name .svn -prune -o -print Can I do this with a short zsh glob? Ignoring dot files comes close (I need to do it explicitly because I have glob_dots set): print -l **/*(/^D) But this isn't satisfactory because it hides the .deps directory, which I do want to see. I can filter out the paths containing .svn : print -l **/*~(*/|).svn(|/*)(/) But that's barely shorter than find (so what am I using zsh for?). I can shorten it to print -l **/*~*.svn*(/) , but that also filters out directories called hello.svn . Furthermore, zsh traverses the .svn directories, which is a bit slow on NFS or Cygwin. Is there a convenient (as in easy to type) way to exclude a specific directory name (or even better: an arbitrary pattern) in a recursive glob? | Zsh's extended glob operators support matching over / (unlike ksh's, even in zsh's implementation). Zsh's **/ is a shortcut for (*/)# ( */ repeated 0 or more times). So all I need to do is replace that * by ^.svn (anything but .svn ). print -l (^.svn/)# Neat! | {
"source": [
"https://unix.stackexchange.com/questions/71881",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/885/"
]
} |
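A hedged usage sketch of the accepted glob; the second line is an assumed extension using an alternation, not something from the answer: setopt extended_glob
print -l (^.svn/)#           # subdirectories at any depth, never descending into .svn
print -l (^(.svn|.deps)/)#   # assumed variant: exclude several names at once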
71,940 | I have some dead connections in one application which stay in a hung state when the client machine is dead. ->192.168.1.214:49029 (ESTABLISHED) Is there a way to terminate these connections from the Linux command line without restarting the server? After searching I found a solution called tcpkill, but it will not work for me, as it permanently blocks that IP. | On Linux kernel >= 4.9 you can use the ss command from iproute2 with the -K option: ss -K dst 192.168.1.214 dport = 49029 The kernel has to be compiled with the CONFIG_INET_DIAG_DESTROY option enabled. | {
"source": [
"https://unix.stackexchange.com/questions/71940",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/22047/"
]
} |
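A hedged sketch of checking for that kernel option before relying on ss -K (config file locations vary by distribution): # either of these usually shows whether the option was compiled in
zgrep CONFIG_INET_DIAG_DESTROY /proc/config.gz 2>/dev/null
grep CONFIG_INET_DIAG_DESTROY "/boot/config-$(uname -r)" 2>/dev/null
# then, as root, kill the matching connection(s)
ss -K dst 192.168.1.214 dport = 49029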
71,979 | How can I get the installed Varnish cache version string from the command line? | According to the varnish documentation , it's varnishd -V | {
"source": [
"https://unix.stackexchange.com/questions/71979",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/36926/"
]
} |
71,991 | I've been trying to save space on my linux server, and I had a folder containing, in subfolders, 22GB of images. So I decided to compress them. First I used tar: tar -zcf folder.tar folder Then gzip: gzip folder And finally, for good measure, just in case, bzip2: bzip2 folder And after all that, the total of all the folder.tar.gz.bzip2 s still came to 22GB! With finer precision: a 1% space saving! Have I done something wrong here? I would expect many times more than a 1% saving! How else can I compress the files? | Compression ratio is very dependent on what you're compressing. The reason text compresses down so well is that it doesn't even begin to fully utilize the full range of numbers representable in the same binary space. So formats that do (e.g. compressed files) can store the same information in less space just by virtue of using all those binary numbers that mean nothing in textual encodings, and can effectively represent whole progressions of characters in a single byte and get a good compression ratio that way. If the files are already compressed, you're typically not going to see much advantage to compressing them again. If that actually saved you additional space it's probably an indication that the first compression algorithm kind of sucks. Judging from the nature of the question I'm going to assume a lot of these are media files and as such are already compressed (albeit with algorithms that prioritize speed of decompression), so you're probably not going to get much from them. Sort of a blood from a stone scenario: they're already as small as they could be made without losing information. If I'm super worried about space I just do a "bzip2 -9" and call it good. I've heard good things about the ratio on XZ though. I haven't used XZ myself (other than to decompress other people's stuff), but it's supposed to have a better ratio than bzip2 but take a little longer to compress/decompress. | {
"source": [
"https://unix.stackexchange.com/questions/71991",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/20901/"
]
} |
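A small, hedged demonstration of the point about incompressible data (GNU coreutils assumed): random bytes barely shrink, while redundant text shrinks dramatically: head -c 1000000 /dev/urandom > random.bin
yes 'the quick brown fox' | head -c 1000000 > text.txt
gzip -c random.bin | wc -c   # roughly 1000000 - almost no saving
gzip -c text.txt | wc -c     # a few thousand bytes - huge saving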
72,035 | I want to install the VirtualBox guest additions in a guest Arch Linux machine.
The vbox version is 4.2.10 r84104 , and the Arch install media is the latest release 2013.04.01 with systemd as the default program to boot the system. I mounted the iso file, cd'd to the mount point, and ran ./VBoxLinuxAdditions.run , but it reports Unable to determine your linux distribution . I checked that install script and found that in the function check_system_type() there's no branch dealing with Arch Linux. I tried to touch a file like /etc/gentoo-release but failed. How can I install the additions? Any help or advice will be appreciated. | All you have to do is install virtualbox-guest-utils with pacman . Don't do anything else. Don't even try to install the VirtualBox Guest Utils from VirtualBox's menu, and don't mount the iso; that method works with many distros, but not with Arch Linux. When you have done what is said in my first sentence, do what is said in the wiki entry . Arch doesn't have releases, it's a rolling release, so it's wrong to say "with the latest Arch Linux". And the age of the installation medium doesn't affect anything; it just provides the programs that are usable during installation, so it doesn't matter if you install Arch with an installation medium from 2010. You get the same versions of programs installed in your final Arch installation. | {
"source": [
"https://unix.stackexchange.com/questions/72035",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/23979/"
]
} |
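A hedged sketch of the steps on a current Arch guest; the service name follows the Arch wiki and may differ on older installs: sudo pacman -S virtualbox-guest-utils
sudo systemctl enable --now vboxservice.service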
72,039 | I read that for comparing strings inside if we need to use double square brackets. Some books say that comparison can be done with = , but it works with == too. #!/bin/bash
a="hello"
b="world"
if [[ $a == $b ]];then
echo "equal"
fi Is there a difference between = and == in the comparison? | [[ $a == $b ]] is not comparison, it's pattern matching. You need [[ $a == "$b" ]] for byte-to-byte equality comparison. = is the same as == in any shell that supports [[...]] (introduced by ksh ). [[...]] is not standard sh syntax. The [ command is standard, and the standard comparison operator there is = (though some [ implementations also recognise == ¹). Just like in any argument to any command, variable expansions must be quoted to prevent split+glob and empty removal (only the latter being performed in zsh ), so: [ "$a" = "$b" ] In standard sh , pattern matching is done with case : case $a in
($b) ...
esac For completeness, other equality-like operators you may come across in shell scripts: [ "$a" -eq "$b" ] : standard [ operator to compare decimal integer numbers. Some [ implementations allow blanks around the numbers, some allow arbitrary arithmetic expressions, but that's not portable. Portably, one can use [ "$(($a))" -eq "$(($b))" ] for that. See also [ "$((a == b))" -ne 0 ] which would be the standard equivalent (except that POSIXly, the behaviour is only specified if $a and $b contain integer constants) of: ((a == b)) , from ksh and also found in zsh and bash , returns true if the evaluation of the arithmetic expression stored in $a yields the same number as that of $b . Typically, that's used for comparing numbers. Note that there are variations between shells as to how arithmetic expressions are evaluated and what numbers are supported (for instance bash and some implementation/versions of ksh don't support floating point or treat numbers with leading zeros as octal). expr "$a" = "$b" does a number comparison if both operands are recognised as decimal integer numbers (some allowing blanks around the number), and otherwise checks if the two string operands have the same sorting order. It would also fail for values of $a or $b that are expr operators like ( , substr ... awk -- 'BEGIN{exit !(ARGV[1] == ARGV[2])}' "$a" "$b" : if $a and $b are recognised as numbers (at least decimal integer and floating point numbers like 1.2, -1.5e-4, leading trailing blanks ignored, some also recognising hexadecimal, octal or anything recognised by strtod() ), then a numeric comparison is performed. Otherwise, depending on the implementation, it's either a byte-to-byte string comparison, or like for expr a strcoll() comparison, that is whether $a and $b sort the same. See also: What is the difference between [[ $a == z* ]] and [ $a == z* ]? using single or double bracket - bash ¹ that includes GNU [ and the [ builtin of ksh , bash , yash , some though not all ash -based shells and zsh , however note that in zsh , =cmd is a special filename expansion operator (expanded in the same contexts as ~user is) that expands to the path of the corresponding command, so there, unless you turn off the equals option to disable that feature, you'd need to write it [ "$a" '==' "$b" ] or you'd get an error that the = command is not found. Same for [ "$string" '=~' "$regexp" ] | {
"source": [
"https://unix.stackexchange.com/questions/72039",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/3539/"
]
} |
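A short bash demonstration of the pattern-matching point from the answer (variable values are arbitrary): a=hello b='h*'
[[ $a == $b ]] && echo 'pattern match'          # true: unquoted $b acts as a glob
[[ $a == "$b" ]] || echo 'not literally equal'  # true: quoting compares byte-to-byte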
72,086 | I came across a sentence in vimdoc: Note: CTRL-S does not work on all terminals and might block
further input, use CTRL-Q to get going again. and using CTRL-S indeed hangs my vim. I was thinking that it was vim's fault, since there was no problem when I used C-s / C-x C-s in emacs nox. However, just now, when I was reading a man page and pressed Ctrl-s , it hung man as well ( less was the output pager). So, can someone tell me what's happening? The terminal emulators I've tested are xterm and lxterminal . tty also
has this problem. A Ctrl + q unhangs the terminal in all the cases. | This feature is called Software Flow Control (XON/XOFF flow control) When one end of the data link (in this case the terminal emulator) can't receive any more data (because the buffer is full or nearing full or the user sends C-s ) it will send an "XOFF" to tell the sending end of the data link to pause until the "XON" signal is received. What is happening under the hood is the "XOFF" is telling the TTY driver in the kernel to put the process that is sending data into a sleep state (like pausing a movie) until the TTY driver is sent an "XON" to tell the kernel to resume the process as if it were never stopped in the first place. C-s enables terminal scroll lock. Which prevents your terminal from scrolling (By sending an "XOFF" signal to pause the output of the software). C-q disables the scroll lock. Resuming terminal scrolling (By sending an "XON" signal to resume the output of the software). This feature is legacy (back when terminals were very slow and did not allow scrolling) and is enabled by default. To disable this feature you need the following in either ~/.bash_profile or ~/.bashrc : stty -ixon | {
"source": [
"https://unix.stackexchange.com/questions/72086",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/8776/"
]
} |
72,096 | I am working on a collectd plugin to monitor a program I have running in a detached screen session. This program constantly updates the terminal with its status by replacing its old status (similar to programs like top, etc). I would like to be able to grab what it is currently "showing" and parse it to get the current status of the program. I know there is a way to send text to a screen, but is there a way to grab it? Alternatively, is there another program/approach to accomplish what I am looking for? | | {
"source": [
"https://unix.stackexchange.com/questions/72096",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/36997/"
]
} |
72,119 | To navigate to the start and end of a command, I usually use Ctrl a and Ctrl e . However, when I work within a GNU screen session, those keybindings do not work, perhaps because they are being used by GNU screen. Is there another way to move to the start or the end of the command? I am on CentOS 6.2 | Ctrl-A followed by the letter 'a' will send the Ctrl-A sequence to the shell. Or you could map the screen command key to something other than Ctrl-A
"source": [
"https://unix.stackexchange.com/questions/72119",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/28162/"
]
} |
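A hedged example of the second suggestion, remapping the screen command key in ~/.screenrc so Ctrl-A reaches the shell directly: # use Ctrl-B as screen's command key; "C-b b" then sends a literal Ctrl-B
escape ^Bb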
72,223 | What is the best way (reliable, portable, etc.) to check if a given folder is on a mounted remote (nfs) filesystem within a shell script? I am looking for a command that would look like: chk-remote-mountpoint /my/path/to/folder | As Stephane says "there is no universal Unix answer to that". The best solution I have found to my question: df -P -T /my/path/to/folder | tail -n +2 | awk '{print $2}' will return the filesystem type, for example: nfs or ext3 . The -T option is not standard , so it may not work on other Unix/Linux systems... According to Gilles ' comment below: "This works on any non-embedded Linux, but not on BusyBox, *BSD, etc." | {
"source": [
"https://unix.stackexchange.com/questions/72223",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/30196/"
]
} |
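A hedged sketch of what a chk-remote-mountpoint script built on that df invocation could look like (GNU df assumed; the filesystem list is illustrative): #!/bin/sh
fstype=$(df -P -T -- "$1" | awk 'NR==2 {print $2}')
case $fstype in
    nfs*|cifs|smbfs) echo "$1: remote ($fstype)"; exit 0 ;;
    *)               echo "$1: local or unknown ($fstype)"; exit 1 ;;
esac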
72,240 | I have a config file which consists of a list of URIs I want to download. For example, http://xyz.abc.com/Dir1/Dir3/sds.exe
http://xyz.abc.com/Dir2/Dir4/jhjs.exe
http://xyz.abc.com/Dir1/itr.exe I want to read the config file and copy each URL, but at the same time create the same directory structure as on the host. For example, for the first line in the config file, I want to create the directory structure Dir1/Dir3 on my local machine (if it doesn't exist) and then copy sds.exe to .../Dir1/Dir3/ I found that I can download all the URLs in a file using 'wget -i', but how can I create the corresponding directory structure with that? | From man wget : -x, --force-directories: [...] create a hierarchy of directories, even if one would not have been created otherwise. E.g. wget -x http://fly.srk.fer.hr/robots.txt will save the downloaded file to fly.srk.fer.hr/robots.txt. | {
"source": [
"https://unix.stackexchange.com/questions/72240",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/37064/"
]
} |
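Combined with -i from the question, a hedged invocation might look like this; -nH (no host directories) additionally drops the xyz.abc.com prefix: wget -x -nH -i config_file
# produces Dir1/Dir3/sds.exe, Dir2/Dir4/jhjs.exe and Dir1/itr.exe locally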
72,320 | I need to hook onto the output of a currently running terminal (tty1) from a virtual terminal and capture it (running X server). | I came across this one tool called ttylog . It's a Perl program available on CPAN here . It has a couple of caveats, one being that I could only figure out how to attach to a terminal that was created as part of someone ssh'ing into my box. The other being that you have to run it with elevated privileges (i.e. root or sudo). But it works! For example: First, ssh into your box in TERM#1: TERM#1% ssh saml@grinchy Note this new terminal's tty: TERM#1% tty
/dev/pts/3 Now in another terminal (TERM#2) run this command: TERM#2% ttylog pts/3
DEBUG: Scanning for psuedo terminal pts/3
DEBUG: Psuedo terminal [pts/3] found.
DEBUG: Found parent sshd pid [13789] for user [saml] Now go back to TERM#1 and type stuff, it'll show up in TERM#2. All the commands I tried, (top, ls, etc.) worked without incident using ttylog . | {
"source": [
"https://unix.stackexchange.com/questions/72320",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/16969/"
]
} |
72,552 | I've read the official definition: ssh-agent is a program to hold private keys used for public key authentication (RSA, DSA, ECDSA). The idea is that ssh-agent is started in the beginning of an X-session or a login session, and all other windows or programs are started as clients to the ssh-agent program . Through use of environment variables the agent can be located and automatically used for authentication when logging in to other machines using ssh(1). "a program to hold private keys" - IMHO, ssh keys are generated by a user with the ssh-keygen command and simply stored in ~/.ssh - why do I need some daemon to hold these keys? How exactly does it hold them anyways? - aren't they just stored in .ssh ? "are started as clients to the ssh-agent program" - I don't get it. Where would one need that? I usually just use ssh as this: ssh -i ~/.ssh/private_key_name username@hostname What exactly does the above definition mean by "clients"? What clients? Don't you just run the ssh command from a terminal to connect to another machine? What other clients are there and why can't they just use the standard path to that private key file, just like the ssh command? | The SSH agent handles signing of authentication data for you. When authenticating to a server, you are required to sign some data using your private key, to prove that you are, well, you. As a security measure, most people sensibly protect their private keys with a passphrase, so any authentication attempt would require you to enter this passphrase. This can be undesirable, so the ssh-agent caches the key for you and you only need to enter the password once, when the agent wants to decrypt it (and often not even that, as the ssh-agent can be integrated with pam, which many distros do). The SSH agent never hands these keys to client programs, but merely presents a socket over which clients can send it data and over which it responds with signed data. A side benefit of this is that you can use your private key even with programs you don't fully trust. Another benefit of the SSH agent is that it can be forwarded over SSH. So when you ssh to host A, while forwarding your agent, you can then ssh from A to another host B without needing your key present (not even in encrypted form) on host A. | {
"source": [
"https://unix.stackexchange.com/questions/72552",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/37241/"
]
} |
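A typical session sketch tying the pieces of the answer together (key path assumed): eval "$(ssh-agent -s)"        # start the agent and export its socket variables
ssh-add ~/.ssh/id_rsa         # asks for the passphrase once, then caches the key
ssh-add -l                    # list the cached keys
ssh -A user@hostA             # -A forwards the agent, so hostA can hop onward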
72,587 | I have observed that some binary and configuration file names end with a d .
What is the reason for putting a d at the end of the file name? Like httpd , ospfd , pppd , syslogd , telnetd , pptpd , inetd , bootlogd and dhcpd . | They are daemons (Computing) – as in " workers behind the curtain ". http Daemon - Hypertext Transfer Protocol Daemon
ospf Daemon - Open Shortest Path First Daemon (89)
ppp Daemon - Point-to-Point Protocol Daemon
syslog Daemon - Syslog Daemon
telnet Daemon - Telnet server Daemon
pptp Daemon - Point-to-Point Tunneling Protocol Daemon
dhcp Daemon - Dynamic Host Configuration Protocol Daemon Depending on how you interpret the word, they can definitely also be
demons. As Wikipedia and Take Our Word For It explain, the word is taken from Maxwell's daemon [image: Maxwell's demon, by Htkym, CC, via Wikipedia] – "an imaginary agent which helped sort molecules of different speeds and worked tirelessly in the background." Else the usage of the word goes somewhat along these lines: daemon: spirit (polytheistic context)
demon : evil spirit (monotheistic context) Fix#1: And as pointed out by the good Mr. @Michael Kjörling , to emphasize: "Of course, just because the executable's name ends in d doesn't mean it is a daemon." sed Stream Editor
dd Data Description
chmod Change file mode bits
xxd Hex Dump
find Find etc. are examples of frequently used tools ending in d . Then again that would
not be an added suffix as in sedd . ls /usr/bin/*d /bin/*d Though; typically daemons have the letter d appended at the end. telnet vs telnetd Another writeup on the subject of *Nix Daemons. | {
"source": [
"https://unix.stackexchange.com/questions/72587",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/17187/"
]
} |
72,625 | For background I have just built a new machine with modern hardware including: AMD FX-8350 Gigabyte GA-990FXA-UD3 motherboard 16GB RAM NVidia GTX 650 Ti Kingston SSD Given that, I tried to install various versions of Linux on the SSD and was met with failure almost every time. I tried installing Arch, Debian stable, Debian sid, and Ubuntu 12.10 from a USB thumb drive but while the BIOS saw the USB drive and started to boot from it, as soon as the OS attempted to enumerate the USB devices I lost all USB functionality (including the boot device). Eventually I burned a DVD and installed Ubuntu 12.10 onto the SSD. It should be noted that my USB keyboard (and mouse) work fine while in the American Megatrends UEFI/BIOS. Even when I'm in the pre-installation menus on the Live Ubuntu DVD the keyboard works fine. As soon as Linux is booted (either Live DVD or from the SSD) I lose all USB functionality and can only navigate the OS using a PS/2 keyboard. What I see in the dmesg/syslog is a few lines about " failed to load microcode amd_ucode/microcode_amd_fam15h.bin " and I can see USB devices failing to initialize. If I do an lsusb I can see all the USB host controllers but none of the devices. Doing an lspci shows me all the hardware I'd expect. And doing an lsmod I do not see any usb modules loaded ( usb_ehci for example). I tried passing noapic to the kernel boot string and it had no effect on this problem. The motherboard supports USB 3.0 but all the devices I have plugged into normal USB 2.0 ports. I'm rather baffled at what could be killing/preventing USB (and my on-board network card) from working in Linux . There doesn't seem to be any problem with any of these devices working in BIOS and I do not have a Windows installation available to test and see if it works. I've already RMA'd the motherboard once but the second one has exactly the same behavior so I think I can safely rule out hardware failure (since the behavior is identical, I don't think the odd of me getting two identically defective boards are greater than the odds of this being a Linux problem). What else can I try to get USB (and ideally my network, but we'll stick to USB for now) working? Edit #1: Since I have no networking I can only relate interesting bits from dmesg here. Of interest in dmesg I can see I have 11 USB host controllers (OHCI, EHCI, and xHCI). It detects my USB devices and then fails immediately as follows: usb 3-1: new high-speed USB device number 2 using ehci_hcd
usb 3-1: device descriptor read/64, error -32 That repeats several times incrementing the number and trying other USB Host controllers until it falls back to OHCI controllers which also fail but have an additional message: usb 8-1: device not accepting address 4, error -32 I think my networking problems have to do with the fact that I don't have IPv6 enabled on my router and that seems to be a problem eth1: no IPv6 routers present Edit #2: lspci -vvv shows that my network adapters (both onboard and expansion) are Realtek Semiconductor (no surprise); RTL8111/8168B and RTL8169/8110 respectively. My USB controllers are Etron Technology EJ168 (xHCI) and AMD nee ATI SB7x0/SB8x0/SB9x0 (EHCI & OHCI) Now running Debian wheezy modprobe shows usb_common , usbcore , xhci_hcd , ehci_hcd , and ohci_hcd all loaded and functioning. | I found the answer from this thread ( http://ubuntuforums.org/showthread.php?t=2114055 ) over at ubuntuforums.org. It seems with newer Gigabyte mainboards (at least) there is a BIOS option called IOMMU Controller that is disabled by default and gives no clue or indication as to what it is for. Enabling this setting and rebooting "magically" restores all my USB and networking problems in a 64-bit Linux OS (doesn't matter which one). I am rather shocked and elated that it was such a long search for such a simple fix. Thanks everyone for your help and suggestions. Hopefully others will find this helpful. Update: I'd just like to add that my current BIOS settings also include enabling XHCI Handoff and EHCI Handoff in addition to IOMMU Controller. Others have mentioned this as well and enabling those two handoffs also allows my USB 3.0 ports to function as expected. | {
"source": [
"https://unix.stackexchange.com/questions/72625",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/31107/"
]
} |
72,661 | The Windows dir directory listing command has a line at the end showing the total amount of space taken up by the files listed. For example, dir *.exe shows all the .exe files in the current directory, their sizes, and the sum total of their sizes. I'd love to have similar functionality with my dir alias in bash, but I'm not sure exactly how to go about it. Currently, I have alias dir='ls -FaGl' in my .bash_profile , showing drwxr-x---+ 24 mattdmo 4096 Mar 14 16:35 ./
drwxr-x--x. 256 root 12288 Apr 8 21:29 ../
-rw------- 1 mattdmo 13795 Apr 4 17:52 .bash_history
-rw-r--r-- 1 mattdmo 18 May 10 2012 .bash_logout
-rw-r--r-- 1 mattdmo 395 Dec 9 17:33 .bash_profile
-rw-r--r-- 1 mattdmo 176 May 10 2012 .bash_profile~
-rw-r--r-- 1 mattdmo 411 Dec 9 17:33 .bashrc
-rw-r--r-- 1 mattdmo 124 May 10 2012 .bashrc~
drwx------ 2 mattdmo 4096 Mar 24 20:03 bin/
drwxrwxr-x 2 mattdmo 4096 Mar 11 16:29 download/ for example. Taking the answers from this question : dir | awk '{ total += $4 }; END { print total }' which gives me the total, but doesn't print the directory listing itself. Is there a way to alter this into a one-liner or shell script so I can pass any ls arguments I want to dir and get a full listing plus sum total? For example, I'd like to run dir -R *.jpg *.tif to get the listing and total size of those file types in all subdirectories. Ideally, it would be great if I could get the size of each subdirectory, but this isn't essential. | There's already a UNIX command for this: du Just do: du -bch As per convention you can add one or more file or directory paths at the end of the command. -h is an extension to convert the size into a human-friendly format, -b gives you the file size instead of disk usage, and -c gives a total at the end. | {
"source": [
"https://unix.stackexchange.com/questions/72661",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/19512/"
]
} |
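For the recursive per-file-type total from the question, a hedged GNU-specific sketch: find . \( -name '*.jpg' -o -name '*.tif' \) -print0 |
  du -ch --files0-from=- | tail -n 1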
72,734 | How to print the line in case the first field starts with Linux1 ? For example: echo Linux1_ver2 12542 kernel-update | awk '{if ($1 ~ Linux1 ) print $0;}' The target is to print the line when the first field starts with Linux1 . Example lines: Linux1-new 36352 Version:true
Linux1-1625543 9847
Linux1:16254 8467563 Remark: a space or TAB could be before the first field. | One way: echo "Linux1_ver2 12542 kernel-update" | awk '$1 ~ /^ *Linux1/' | {
"source": [
"https://unix.stackexchange.com/questions/72734",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/23640/"
]
} |
72,819 | I'm trying to count the number of lines of output a certain program produces. The problem is, the program takes a long time to run, and I want to display the output to the user. Is there a way to count the number of lines the last command outputted? I could do program | wc -l but that wouldn't show the output to the user. So as far as I know, I have to do program; program | wc -l - but the program takes at least a minute to run, so I don't want to have to do it more than once just to show a line count at the bottom. EDIT: Is there a way of showing the output as it happens (line by line) and then returning a count at the end? | You can use tee to split the output stream sending one copy to wc and the other copy to STDOUT like normal. program | tee >(wc -l) The >(cmd) is bash syntax which means run cmd and replace the >(cmd) bit with the path to (a named pipe connected to) that program's STDIN. | {
"source": [
"https://unix.stackexchange.com/questions/72819",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/37385/"
]
} |
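An alternative sketch with the same effect: stream each line as it arrives, then report the count on stderr at the end ( program is a placeholder): program | awk '{ print } END { print NR " lines" > "/dev/stderr" }'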
72,838 | I have a file myarchive.zip that contains many directories, files, etc. Let's say this myarchive.zip file lives in a directory called "b". Well, when I use the "unzip myarchive.zip" command, the system creates a directory by default called "myarchive" with the contents of the zip file. I do not want the system to create this "myarchive" directory - I just want the contents to be extracted to directory "b". Is this possible? What I've been doing now is simply issuing a "cp" command to copy the files from the newly created directory (in this case "myarchive" to "b") to where I want them. | My version of unzip has a -j option to not create any directory. So unzip -j /path/to/file.zip Will extract all the files into the current directory without restoring the directory structure stored in the zip file. If you want to only remove one level of directories from the archive, (extract myarchive/dir/file as dir/file , not file ), you could use bsdtar (which does supports zip files in addition to tar files) instead and its -s option. bsdtar -xf /path/to/file.zip -s'|[^/]*/||' | {
"source": [
"https://unix.stackexchange.com/questions/72838",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/37392/"
]
} |
72,864 | I'm looking for a solution to be used as
a response to "rm: remove write-protected regular file [x] ?" I was thinking of issuing a character followed by a carriage return a number of times, in bashrc. How do we do that? | Edit based on updated question: To avoid being asked about removing files, add the -f ("force") option: rm -f /path/to/file This has one side effect you should be aware of: If any of the given paths do not exist, it will not report this, and it will return successfully: $ rm -f /nonexistent/path
$ echo $?
0 Original answer: Here's one simple solution: yes "$string" | head -n $number | tr $'\n' $'\r' yes repeats any string you give it infinitely, separated by newlines. head stops it after $number times, and tr translates the newlines to carriage returns. You might not see any output because of the carriage returns, but passing it to this command (in bash ) should illustrate it: printf %q "$(yes "$string" | head -n $number | tr $'\n' $'\r')" Users without bash can pipe the result to od , hexdump or xxd to see the actual characters returned. | {
"source": [
"https://unix.stackexchange.com/questions/72864",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/34041/"
]
} |
72,934 | I would like to start a process with a nice value of -20. This requires me to use a command like sudo nice -n -20 matlab However, this starts matlab as root too. Is there a way to have matlab as non-root? My current approach is sudo nice -n -20 sudo -u myusername matlab which to me looks like a hack. Is there a direct approach to do this? | I would start it normally and use "renice" afterwards... However I was able to make a quick hack together with "su" which works: sudo nice -n -20 su -c command_to_run user_to_run_as If you don't have to give sudo a password - perhaps because you've already just given it - you may add an & to put the whole thing in the background. Since you already become root with the sudo-command, su won't ask you for a password. I was able to start a X-program from a terminal-emulator under X. If you want to run the X-program as another user than the user owning the X-session, you'll probably need to explicitly tell X to allow it (open for X-clients from that user). | {
"source": [
"https://unix.stackexchange.com/questions/72934",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/8496/"
]
} |
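The renice route mentioned at the start of the answer, sketched for a job started from the current shell: matlab &                      # start as yourself
sudo renice -n -20 -p "$!"    # then lower the nice value of that PID as root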
72,935 | Is it possible to use a bash function inside AWK somehow? Example file (string, int, int, int): Mike 247808 247809 247810 Trying to convert values from decimal to hexadecimal. The function is defined either in .bashrc or in a shell script. $ awk '{print $1 ; d2h($2)}' file
awk: calling undefined function d2h
input record number 1, file file
source line number 1 | Try to use system() function: awk '{printf("%s ",$1); system("d2h " $2)}' file In your case system will call d2h 247808 and then append output of this command to printf output: Mike 3C800 EDIT: As system uses sh instead of bash I can't find a way to access .bashrc . But you can still use functions from your current bash script: #!/bin/bash
d2h() {
# do some cool conversion here
echo "$1" # or just output the first parameter
}
export -f d2h
awk '{printf("%s ",$1); system("bash -c '\''d2h "$2"'\''")}' file EDIT 2: I don't know why, but this is not working on my Ubuntu 16.04. This is strange, because it used to work on Ubuntu 14.04. | {
"source": [
"https://unix.stackexchange.com/questions/72935",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/22762/"
]
} |
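If the bash function only does a decimal-to-hex conversion, awk's own printf may make the round-trip unnecessary; a hedged sketch: awk '{ printf "%s %X\n", $1, $2 }' file
# Mike 3C800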
72,979 | I regularly use cat to view debugging information in the console from my FPGA development board over the serial connection, but I never have had to tell linux what the baud rate is. How does cat know what the baud rate of the serial connection is? | The stty utility sets or reports on terminal I/O characteristics for the device that is its standard input. These characteristics are used when establishing a connection over that particular medium. cat doesn't know the baud rate as such, it rather prints on the screen information received from the particular connection. As an example stty -F /dev/ttyACM0 gives the current baud rate for the ttyACM0 device. | {
"source": [
"https://unix.stackexchange.com/questions/72979",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/37493/"
]
} |
73,212 | Is there an authoritative way to get the GNOME version , as long as I have a working GNOME desktop (any version) running? Several of these suggestions don't work on my system, either because the executables and/or packages simply don't exist or the menu item isn't available. | For GNOME 4 , they use gnome-shell version. If we look at the source code we can see they are reporting the gnome-shell version as the "GNOME version" in the Settings > Info panel: static char *
get_gnome_version (GDBusProxy *proxy)
{
g_autoptr(GVariant) variant = NULL;
const char *gnome_version = NULL;
if (!proxy)
return NULL;
variant = g_dbus_proxy_get_cached_property (proxy, "ShellVersion");
if (!variant)
return NULL;
gnome_version = g_variant_get_string (variant, NULL);
if (!gnome_version || *gnome_version == '\0')
return NULL;
return g_strdup (gnome_version);
} There's a debate right now whether this was the right thing to do, see info-overview: rename "GNOME Version" to "GNOME Shell Version" Note the intro to that discussion confirms what I've been saying all the time: The GNOME version and the GNOME Shell version are not the same thing... It wouldn't surprise me if they change it again in the future. Until then, to get the Gnome DE version means to get the gnome-shell version so use either gnome-shell --version or busctl --user get-property org.gnome.Shell /org/gnome/Shell org.gnome.Shell ShellVersion In GNOME 3 , version is stored in this file: /usr/share/gnome/gnome-version.xml content (on my system): <?xml version="1.0" encoding="UTF-8"?>
<gnome-version>
<platform>3</platform>
<minor>6</minor>
<micro>2</micro>
<distributor>Arch Linux</distributor>
<date>2012-11-13</date>
</gnome-version> The file is part of the upstream package called gnome-desktop (note that some distros split it into several packages so on your distro the file may end up in a package with a different name...) GNOME developers use this file to get the DE version number and display it in System Settings (aka gnome-control-center ). So getting GNOME version "the official way" means parsing the said file and extracting platform , minor and micro values. If you play with that file you can instantly see the results :) In GNOME 2 the file in question is: /usr/share/gnome-about/gnome-version.xml (though this file might be missing on some older Gnome 2 versions IIRC) Note that for GNOME v.2 & v.3 commands like gnome-session --version , gnome-shell --version , gdm --version etc might return confusing numbers. Those are GNOME desktop components , they are separate packages (with different code, history/changelog and maintainers) and as such their version may be different. They'll report the right GNOME version only if they have the same version as gnome-desktop (which is not always the case). | {
"source": [
"https://unix.stackexchange.com/questions/73212",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/3645/"
]
} |
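A hedged GNU sed one-liner for pulling the GNOME 3 version out of that XML file (no real XML parsing, so it relies on the simple layout shown above): sed -n 's:.*<\(platform\|minor\|micro\)>\(.*\)</.*:\2:p' \
    /usr/share/gnome/gnome-version.xml | paste -sd. -
# 3.6.2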
73,268 | I need to move files based on a year. I used the find command find /media/WD/backup/osool/olddata/ -mtime +470 -exec ls -lrth {} \;|sort -k6 but for this command to execute successfully I need to know the exact mtime; 470 is just a guess. What I mean is: if I can give the year, e.g. 2012, it gives me only files related to 2012. So I need advice on how to find files based on a year, e.g. 2012, and move them to another directory. OS release 5.2
FIND version
GNU find version 4.2.27
Features enabled: D_TYPE O_NOFOLLOW(enabled) LEAF_OPTIMISATION SELINUX | You want to use the -newermt option for find : find /media/WD/backup/osool/olddata/ -newermt 20120101 -not -newermt 20130101 to get all the files with modification time in 2012. If your find does not support -newermt you can also do the following to prevent using offset calculations: touch -d 20120101 /var/tmp/2012.ref
touch -d 20130101 /var/tmp/2013.ref
find /media/WD/backup/osool/olddata/ -newer /var/tmp/2012.ref -not -newer /var/tmp/2013.ref Manpage -newerXY reference
Compares the timestamp of the current file with reference. The
reference argument is normally the name of a file (and one of
its timestamps is used for the comparison) but it may also be a
string describing an absolute time. X and Y are placeholders
for other letters, and these letters select which time belonging
to how reference is used for the comparison.
...
m The modification time of the file reference
t reference is interpreted directly as a time | {
"source": [
"https://unix.stackexchange.com/questions/73268",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/19946/"
]
} |
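Extending the touch workaround to actually move the 2012 files, as the question asks (destination directory hypothetical): touch -d 20120101 /var/tmp/2012.ref
touch -d 20130101 /var/tmp/2013.ref
find /media/WD/backup/osool/olddata/ -type f \
     -newer /var/tmp/2012.ref ! -newer /var/tmp/2013.ref \
     -exec mv {} /media/WD/backup/osool/2012/ \;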
73,359 | With an increasing number of archive/compression file formats, is there a single free/open-source command line tool to rule them all? Perhaps something with a consistent / unified set of flags? (note my friendly implicit reference to tar ) I once run into a set of aliases meant to largely simplify the task of compressing/de-compressing files with bindings to tar and other utils, but I can't find this anymore. Update: How can I configure something like atool to not use unzip to extract zip files (which apparently can't handle files larger than 4GB) and to use gunzip instead? $ aunpack large_file.zip
error: Zip file too big (greater than 4294959102 bytes)
Archive: large_file.zip
warning [large_file.zip]: 1491344848 extra bytes at beginning or within zipfile
(attempting to process anyway)
error [large_file.zip]: start of central directory not found;
zipfile corrupt.
(please check that you have transferred or created the zipfile in the
appropriate BINARY mode and that you have compiled UnZip properly)
aunpack: unzip ...: non-zero return-code | I use atool . It does the job. It works with many, though not all formats: tar, gzip, bzip2, bzip, lzip, lzop, lzma, zip, rar, lha, arj, arc, p7zip etc. These compression tools are still needed, though as atool is simply a front end for them. I particularly like the als command it provides which lists the contents of any supported archive format. The main atool command uses its own flags for extracting archives (passing the appropriate flags to the specific underlying extraction tools). Oh, and it's in some distributions' repositories (Fedora in my case, though as I recall, back when I used Ubuntu it wasn't in their repos then. and I installed from a tarball.). Update on Repositories : atool is in the following distributions' repositories (current releases checked only): Fedora Debian (thanks @terdon, and, presumably, it's derivatives
like Ubuntu) Ubuntu (q.e.d., and, presumably, derivatives like
Mint) Open Suse CentOS (and, presumably, RHEL) Arch Linux I'm sure there are others... plausibly, most modern distributions. Answer for Updated Question "How can I configure something like atool to not use unzip to extract zip files ... and to use gunzip instead" : Edit the atool config file ~/.atoolrc and add the line: path_unzip /usr/bin/gunzip with the correct path to your gunzip program. See the man page for the complete list of possible variables you can put in this config file, of which there are a lot . If the command line options necessary for gunzip are different than unzip, you may have to modify the atool source (perl) itself. | {
"source": [
"https://unix.stackexchange.com/questions/73359",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/4531/"
]
} |
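The main front-end commands atool installs, for reference (each still needs the matching back-end compressor present): als archive.zip             # list contents of any supported archive
aunpack archive.tar.bz2     # extract, protecting the cwd from archive bombs
apack backup.tar.gz dir/    # create; the format is chosen from the extension
acat archive.zip notes.txt  # print a single member to stdout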
73,412 | Remote machine 10.10.10.1 has password "asdFGH12" for user named "user". I'm able to log in even if I type in password "asdFGH12dasdkjlkjasdus" or any other characters after the "asdFGH12" string. $ ssh -v 10.10.10.1
OpenSSH_5.2p1 FreeBSD-20090522, OpenSSL 0.9.8k 25 Mar 2009
debug1: Reading configuration data /etc/ssh/ssh_config
debug1: Connecting to 10.10.10.1 [10.10.10.1] port 22.
debug1: Connection established.
debug1: identity file /home/user/.ssh/identity type 0
debug1: identity file /home/user/.ssh/id_rsa type -1
debug1: identity file /home/user/.ssh/id_dsa type 2
debug1: Remote protocol version 1.99, remote software version OpenSSH_4.1
debug1: match: OpenSSH_4.1 pat OpenSSH_4*
debug1: Enabling compatibility mode for protocol 2.0
debug1: Local version string SSH-2.0-OpenSSH_5.2p1 FreeBSD-20090522
debug1: SSH2_MSG_KEXINIT sent
debug1: SSH2_MSG_KEXINIT received
debug1: kex: server->client aes128-ctr hmac-md5 none
debug1: kex: client->server aes128-ctr hmac-md5 none
debug1: SSH2_MSG_KEX_DH_GEX_REQUEST(1024<1024<8192) sent
debug1: expecting SSH2_MSG_KEX_DH_GEX_GROUP
debug1: SSH2_MSG_KEX_DH_GEX_INIT sent
debug1: expecting SSH2_MSG_KEX_DH_GEX_REPLY
debug1: Host '10.10.10.1' is known and matches the RSA host key.
debug1: Found key in /home/user/.ssh/known_hosts:58
debug1: ssh_rsa_verify: signature correct
debug1: SSH2_MSG_NEWKEYS sent
debug1: expecting SSH2_MSG_NEWKEYS
debug1: SSH2_MSG_NEWKEYS received
debug1: SSH2_MSG_SERVICE_REQUEST sent
debug1: SSH2_MSG_SERVICE_ACCEPT received
debug1: Authentications that can continue: publickey,keyboard-interactive
debug1: Next authentication method: publickey
debug1: Offering public key: /home/user/.ssh/id_dsa
debug1: Authentications that can continue: publickey,keyboard-interactive
debug1: Trying private key: /home/user/.ssh/id_rsa
debug1: Next authentication method: keyboard-interactive
Password:
debug1: Authentication succeeded (keyboard-interactive).
debug1: channel 0: new [client-session]
debug1: Entering interactive session.
Warning: untrusted X11 forwarding setup failed: xauth key data not generated
Warning: No xauth data; using fake authentication data for X11 forwarding.
debug1: Requesting X11 forwarding with authentication spoofing.
Last login: Tue Apr 23 14:30:59 2013 from 10.10.10.2
Have a lot of fun...
user@server:~> Is this a known behavior of (certain) SSH server versions? | This is not a limitation on the part of your SSH server, this is a limitation on the part of your server's password hash algorithm. When hashing passwords on Unix, the crypt() function is called. This may use one of many backends, a possibility is using DES, or another limiting algorithm (for this particular case, I will assume your server is using DES). DES is generally not used by default in modern operating systems because it results in a particularly bad limitation: password strength and validation is limited to 8 bytes. This means that if your password was set as "foobarbaz", it becomes "foobarba", usually without a warning or notice. The same limitation applies to validation, which means that "foobarbaz", "foobarba", and "foobarbazqux" all validate for this particular case. | {
"source": [
"https://unix.stackexchange.com/questions/73412",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/33060/"
]
} |
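A hedged demonstration of the 8-byte truncation using perl's crypt() (works only where the libc still provides traditional DES crypt; the salt "ab" is arbitrary): perl -e 'print crypt("asdFGH12", "ab"), "\n";
         print crypt("asdFGH12dasdkjlkjasdus", "ab"), "\n"'
# both lines print the identical hash: everything past byte 8 is ignored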
73,413 | Hello (I'm no hard-core coder), I'm trying to develop a script to do some very basic monitoring on an IBM SVC. My goal is to get some information about the nodes and my quorum status and then send this information by mail. Here's my code so far (I understood that the grep command doesn't "work" on the SVC CLI). #check nodes of the cluster with lsnode and parse status
ssh admin@SVCName superuser>svcinfo lsnode | while read id name sn wwnn status temp;do echo $name" "$status;done
#check quorum status with lsquorum and parse status
ssh admin@SVCName superuser>svcinfo lsquorum | while read quorum_index status id name controller_id controller_name active temp; do echo $controller_name" "$active;done My problem is sending an email from the CLI to designated users. I can't find any valuable information anywhere on the internet. HINT: this script will be deployed on a jumppoint server (probably a Windows server) in production; I cannot allow the installation of any execution environment such as Cygwin or Perl. Could you help me with that? | | {
"source": [
"https://unix.stackexchange.com/questions/73413",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/37703/"
]
} |
73,484 | I believe I can do something like export EDITOR=vi , but I'm not sure what exactly to enter, and where. How can I set "vi" as my default editor? | You should add it to your shell’s configuration file. For Bash, this is ~/.bashrc or ~/.bash_profile . You should also set $VISUAL , as some programs (correctly) use that instead of $EDITOR (see VISUAL vs. EDITOR ). Additionally, unless you know why, you should set it to vim instead of vi . TL;DR, add the following to your shell configuration (probably ~/.bashrc ): export VISUAL=vim
export EDITOR="$VISUAL" | {
"source": [
"https://unix.stackexchange.com/questions/73484",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/38341/"
]
} |
73,498 | In the terminal, I can type Ctrl + R to search for a matching command previously typed in BASH. E.g., if I type Ctrl + R then grep , it lists my last grep command, and I can hit enter to use it. This only gives one suggestion though. Is there any way to cycle through other previously typed matching commands? | If I understand the question correctly you should be able to cycle through
alternatives by repeatedly hitting Ctrl + R . E.g.: Ctrl + R grep Ctrl + R Ctrl + R ... That searches backwards through your history. To search forward instead, use Ctrl + S , but you may need to have set: stty -ixon (either by .bash_profile or manually) prior to that to disable the XON/XOFF feature which takes over Ctrl + S . If it happens anyway, use Ctrl + Q to re-enable screen output (More details here .) | {
"source": [
"https://unix.stackexchange.com/questions/73498",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/13099/"
]
} |
73,538 | Let's say when I do ls -li inside a directory, I get this: 12353538 -rw-r--r-- 6 me me 1650 2013-01-10 16:33 fun.txt As the output shows, the file fun.txt has 6 hard links; and the inode number is 12353538 . How do I find all the hard links for the file i.e. files with the same inode number? | The basic premise is to use: find /mount/point -mount -samefile /mount/point/your/file On systems with findmnt you can derive the mount point like this: file=/path/to/your/file
find "$(findmnt -o TARGET -cenT "$file")" -mount -samefile "$file" It's important not to search from / - unless the target file is on that filesystem - because inode numbers are reused in each mounted filesystem. | {
"source": [
"https://unix.stackexchange.com/questions/73538",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/9610/"
]
} |
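Equivalently, with the inode number from the ls -li output (search rooted at the file's own filesystem, per the caveat above): find / -xdev -inum 12353538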
73,622 | Someone suggested I direct a copy of the unmodified X display to a file and afterwards convert that file to a general purpose video file. What commands would I use to do this on a Kubuntu system? (Edit: He said something about attaching a display port to a file.) If not possible, what is my best option for an excellent quality screen recording that does not depend on fast hardware? Background: I tried using avconv with -f x11grab and some GUI programs. However, no matter what I try, the resulting video either has artifacts/ blurriness or is choppy (missing frames). This is probably due to CPU/ memory constraints. Goals: Video quality must not be noticeably different from seeing the session directly on a screen, because the purpose is to demonstrate an animated application. The final video must be in a common format that can be sent to Windows users and used on the web. I think H.264 MP4 should work. The solution should not presume much prior knowledge. I am familiar with the command line and basic Linux commands, but I am still learning Linux and do not know much about video codecs. What I already tried: Best command so far: ffmpeg -f x11grab -s xga -r 30 -i :0.0 -qscale 0.1 -vcodec huffyuv grab.avi , then convert to mp4 with ffmpeg -i grab.avi -sameq -vcodec mpeg4 grab.mp4 . The picture quality is great, but on my test sytem it lags the computer. On a faster target system it does not lag, but frames are obviously skipped, making the video not very smooth . I am still trying to figure out how to save the grab.avi file to SHM to see if that helps. Using Istanbul and RecordMyDesktop GUI recorders Simple command: avconv -f x11grab -s xga -r 25 -i :0.0 simple.mpg using avconv version 0.8.3-4:0.8.3-0ubuntu0.12.04.1 Adding -codec:copy (fails with: Requested output format 'x11grab' is not a suitable output format ) Adding -same_quant (results in great quality, but is very choppy/ missing many frames) Adding -vpre lossless_ultrafast (fails with: Unrecognized option 'vpre' , Failed to set value 'lossless_ultrafast' for option 'vpre' ) Adding various values of -qscale Adding various values of -b Adding -vcodec h264 (outputs repeatedly: Error while decoding stream #0:0 , [h264 @ 0x8300980] no frame! ) Note: h264 is listed in avconv -formats output as DE h264 raw H.264 video format | If your HDD allows, you can try to do it this way: First write uncompressed file: ffmpeg -f x11grab -s SZ -r 30 -i :0.0 -qscale 0 -vcodec huffyuv grab.avi here SZ is your display size (e.g. 1920x1080). After that you can compress it at any time you want: ffmpeg -i grab.avi grab.mkv Of course, you can change compression, select codec and so on. | {
"source": [
"https://unix.stackexchange.com/questions/73622",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/37818/"
]
} |
73,672 | I find the beep useful for some things, so I only want to turn it off for tab completion (I'm not asking how to completely turn it off, that has already been answered in a different question on Serverfault). I also don't have root access, working on RHEL5. | Readline library has bell-style variable: Controls what happens when Readline wants to ring the terminal bell.
If set to ‘none’, Readline never rings the bell. If set to ‘visible’,
Readline uses a visible bell if one is available. If set to ‘audible’
(the default), Readline attempts to ring the terminal’s bell. So you can put the following line into your ~/.inputrc file: set bell-style none Next, run bind -f ~/.inputrc once to load it.
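If you would rather not silence every readline-based program, bash can apply the same setting by itself; a minimal sketch for ~/.bashrc: bind 'set bell-style none'
This keeps other readline users (e.g. gdb, python) beeping as before, since ~/.bashrc only affects bash. | {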
"source": [
"https://unix.stackexchange.com/questions/73672",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/18123/"
]
} |
73,713 | Sometimes when I cat a binary file by mistake, my terminal gets garbled up. Nothing a quick reset can't fix, but couldn't an attacker theoretically create a file that, when displayed on a terminal, would execute some arbitrary code? Through an exploit in the terminal emulator or otherwise. | Whether such output can be exploited depends on the terminal program, and what that terminal does depending on escape codes that are being sent.
I am not aware of terminal programs having such exploitable features, and the only problem now would be if there were an unknown buffer overflow or something like that which could be exploited. With some older hardware terminals this could be a problem, as you programmed e.g. function keys with these kinds of escape sequences, by storing a command sequence for that key in the hardware. You would still need a physical key-press to activate that. But there are always (as Hauke so rightfully marked 'braindead') people willing to add such a feature if it solves a problem for them, not understanding the loophole they create. My experience with open source software is that, because of the many eyes looking at the code, this is less likely to happen than with closed source. (I remember that in the mail program on Silicon Graphics' IRIX, in the mid-nineties, you could include commands to be executed on the receiver's machine, real paths to executables, ....)
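On the practical side, if cat ing a binary does leave your terminal scrambled, either of these usually restores it: reset
printf '\033c'
The first reinitialises the terminal via the terminfo database; the second sends the RIS (full reset) escape sequence by hand. | {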
"source": [
"https://unix.stackexchange.com/questions/73713",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/37892/"
]
} |
73,750 | I can define bash functions using or omitting the function keyword. Is there any difference? #!/bin/bash
function foo() {
echo "foo"
}
bar() {
echo "bar"
}
foo
bar Both calls to functions foo and bar succeed and I can't see any difference. So I am wondering if it is just to improve readability, or there is something that I am missing... BTW in other shells like dash ( /bin/sh is symlinked to dash in debian/ubuntu) it fails when using the function keyword. | There is no practical difference in bash. The second form, without the function keyword, is the only one POSIX specifies, which is why it is the more portable of the two.
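A quick way to see the portability difference is to feed both forms to a strictly POSIX shell such as dash (a sketch; the exact error wording varies by version): $ dash -c 'foo() { echo foo; }; foo'
foo
$ dash -c 'function foo() { echo foo; }; foo'
dash: 1: Syntax error: "(" unexpected | {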
"source": [
"https://unix.stackexchange.com/questions/73750",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/17326/"
]
} |
73,755 | How do I make the cron daemon check cron entries from more than one file?
In my project I need to frequently update the cron file, so instead of manipulating the existing file I was planning to write my project cron entries in a dedicated cron file. | You usually don't have to teach the daemon anything: besides the system crontab, the common cron implementations (vixie-cron, cronie) also read every file dropped into the /etc/cron.d/ directory. Put your project's entries in their own file there and rewrite that file as often as you like. Note that files in /etc/cron.d use the system-crontab format, with an extra user field between the time fields and the command, and that per-user crontabs ( crontab -e ) cannot be split across several files, so /etc/cron.d is the usual answer.
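An illustrative drop-in file (the path under /etc/cron.d and the job are made-up examples; note the user field): # /etc/cron.d/myproject
*/5 * * * * root /usr/local/bin/myproject-task | {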
"source": [
"https://unix.stackexchange.com/questions/73755",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/31378/"
]
} |
73,818 | In CentOS and Ubuntu, how do I find out how much free disk space I have left and other disk stats like disk usage? | Type the following command: df -h df : disk free -h : makes the output human-readable | {
"source": [
"https://unix.stackexchange.com/questions/73818",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/33183/"
]
} |
73,881 | If I add export HISTCONTROL=ignorespace in .bashrc , bash won't record any commands which have whitespace before them into history. But I do not understand under what situations it will be useful. Can anyone give some examples? | One example: if your commands contain passwords or other sensitive information, you can prefix them with a space so they never land in your history file.
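An illustrative session (the command and password are made up): $  mysql -u root -phunter2     # note the leading space
$ history | tail -2            # the spaced command is absent from the list | {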
"source": [
"https://unix.stackexchange.com/questions/73881",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/34173/"
]
} |
73,950 | While I knew for quite some time the existence of Hurd , and its mission as the official GNU Operating System kernel, I was wondering how come Linux is not embraced as the official GNU kernel over the years, seeing as it is in a much better state than the Hurd? Linux has been, more or less, serving this role 20+ years so far, however one can see that the GNU Project is keeping its distance when it comes to Linux. Why is this happening? Is it because of a dream that Hurd will (at some point in the future) be in a production quality level? Is it because the GNU project doesn't see its mission reflected as much as it wants in Linux? Is it for other political reasons? | GNU will not adopt something as a project unless the developers agree to certain stipulations which bind all official GNU projects. Currently the Linux kernel probably does not fit these restrictions, and there is nothing for Linus Torvalds, kernel.org, et al. to gain from placing themselves under the GNU umbrella, and a lot to lose -- the aforementioned binding agreement, and the public perception that the kernel is now a GNU project, which would have a mostly negative impact. GNU's parent organization, the Free Software Foundation (FSF), is a political organization and Torvalds has made various public criticisms of it and the somewhat controversial, iconoclastic lifetime leader/founder of GNU and the FSF, Richard M. Stallman. Further, the Linux kernel does not require the GNU userspace any more than the GNU userspace requires the Linux kernel. This independence should be considered a good thing by the basic principles of software engineering, which favour modularity and looser coupling as opposed to the opposite (monolithic things with tight coupling). Another point against this idea is that while HURD may not be of interest to as many people as Linux, the developers and users of HURD may object to having their project effectively dustbinned in a popularity contest. And good for them; "competition" of this sort is a positive thing, whereas bowing to monopolization is not -- you end up with massive entities that stifle creativity in part because they are prone to monolithic/meglomaniacal control. The Linux Foundation already is an independent organization, it might as well stay that way. | {
"source": [
"https://unix.stackexchange.com/questions/73950",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/20260/"
]
} |
73,978 | having a bit of a difficult time trying to create a folder under another user's /home/devuser1/public_html folder. I'm trying to avoid using sudo and looking for an alternative. The permissions on the said folder read as: drwxr-s--- 2 devuser1 www-data 4096 Apr 28 19:40 public_html Alternatively, assuming I use the sudo prefix, what would the implications be? I've read that it's bad practice to use sudo to make a folder. After the new folder is created, I'm still changing the ownership of it to the user in question. Example: chown -vR devuser1:www-data /home/devuser1/public_html/$vhost | sudo -u [username] mkdir /home/[username]/public_html/[folder_name] works fine. From what I can see, the permissions and ownership are the same as if I were to log in as the same user and create the folder under public_html . You can also call su -c "mkdir /home/[username]/public_html/[folder_name]" [username] | {
"source": [
"https://unix.stackexchange.com/questions/73978",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/37796/"
]
} |
73,989 | I've noticed that man pages and other documents formatted by Unix utilities often use double backticks `` followed by double single quotes '' to wrap quoted phrases instead of the double quote character " . Single quotes are similarly replaced. Why is this? Here are a couple examples, from the man page for grep : To find all occurrences of the pattern `.Pp' at the beginning of a line:
$ grep '^\.Pp' myfile
The apostrophes ensure the entire expression is evaluated by grep instead
of by the user's shell. The caret `^' matches the null string at the
beginning of a line, and the `\' escapes the `.', which would otherwise match
any character.
The grep utility is compliant with the IEEE Std 1003.1-2008 (``POSIX.1'')
specification. | The semantics and the usual glyphs for these characters have changed
(several times) during the last 50 years. The six-bit predecessors of ASCII contained various multi-purpose characters,
including one single quote-like
character, which was used for anything that had some similarity with
a quote: opening quote, closing quote,
apostrophe, or (by overprinting) acute or grave accent. ASCII introduced one more quote-like character, so that now we
had ' , which was used as apostrophe, closing quote, and
acute accent, and ` , which was used as opening quote or
grave accent (the concrete glyphs differed in various fonts). For some bizarre reason, ISO-8859-1 declared ' to be an
apostrophe or undirected quote, declared ` to be a
grave accent, added one more accent ´ (acute accent), and abolished overprinting (so that the isolated accent
marks were now completely pointless). Later extensions (MS-Windows
codepages and Unicode) fixed this by introducing new directed quote characters
and combining accents. What you see here is essentially a relic from ASCII times, when most fonts
had paired (slanted and/or curly) glyphs for ' and ` . | {
"source": [
"https://unix.stackexchange.com/questions/73989",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/38032/"
]
} |
73,993 | I get the HTTP Error 500 (Internal Server Error) error with Chrome when trying to open a simple PHP file. I have installed Apache and PHP correctly as instructed in this article . Including the PHP specific part of it. I get this error when I try going to the URL http://localhost/~eduan/php_info.php , I made sure and this file has correct syntax and everything, here's it's contents: <?php
php_info();
?> It contains nothing else and nothing less. I also have a test install of Statamic under http://localhost/~eduan/statamic , and that works and doesn't. It loads correctly, but all the styles and stuff of it is missing, meaning (from what I understand) that it couldn't load the theme which is done with PHP if I'm not mistaken. That's all, any help is greatly appreciated! | The script itself is the most likely culprit: the built-in function is phpinfo() , not php_info() . Calling an undefined function raises a PHP fatal error, and when display_errors is off (a common default) the browser only sees a bare HTTP 500. If the page still fails after fixing the call, check Apache's error log (e.g. /var/log/apache2/error.log on Debian-style systems) for the actual PHP error message.
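The corrected file in full: <?php
phpinfo();
?> | {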
"source": [
"https://unix.stackexchange.com/questions/73993",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/26568/"
]
} |
74,142 | Say I am running a software, and then I run package manager to upgrade the software, I notice that Linux does not bring down the running process for package upgrade - it is still running fine. How does Linux do this? | The reason is that Unix does not lock an executable file while it is running; and where it does (as Linux does), the lock applies to the inode, not the file name. That means a process keeping the file open keeps accessing the same (old) data even after the file has been deleted (unlinked, actually) and replaced by a new one with the same name, which is essentially what a package update does. That is one of the main differences between Unix and Windows: the latter cannot update a file being locked, as it is missing a layer between file names and inodes, making it a major hassle to update or even install some packages, since it usually requires a full reboot.
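You can watch the unlink-and-replace behaviour with a small experiment (a sketch; demo is a throwaway file): $ echo v1 > demo; ls -i demo   # note the inode number
$ tail -f demo &               # this process now holds that inode open
$ rm demo; echo v2 > demo      # unlink, then create a new file with the same name
$ ls -i demo                   # a different inode; tail still reads the old data | {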
"source": [
"https://unix.stackexchange.com/questions/74142",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/22534/"
]
} |
74,165 | Here it says that you can rewrite an executable file and the process will run just fine - it will be re-read when a process restarts. However, when I try to replace a binary file while the process is running (with scp, from dev to test server) it says 'file busy'. And if I replace a shared library file (*.so), all the processes that link it crash. Why so? Am I missing something? How can I replace the binary files without stopping/crashing a process? | As mentioned in Why does a software package run just fine even when it is being upgraded? , the lock is placed on the inode, not on the filename. When you load and execute a binary, the file is marked as busy - which is why you get the ETXTBSY (file busy) error when you try to write to it. Now, for shared libraries it is slightly different: the libraries get memory mapped into the process' address space with mmap() . Although MAP_DENYWRITE may be specified, at least Glibc on Linux silently ignores it (according to the man page, feel free to check the sources) - check this thread . Hence you actually are allowed to write the file and, as it is memory mapped, any changes are visible almost immediately - which means that if you try hard enough you can manage to brick your machine by overwriting the library. The correct way to update therefore is: removing the file, which removes the reference to the data from the file system, so that it isn't accessible for any newly spawned applications that might want to use it, while keeping the data accessible for anyone who already has it open (or mapped); creating a new file with updated contents. Newly created processes will use the updated contents, running applications will access the old version. This is what any sane package management utility does. Note that it's not completely without any danger though - for example applications dynamically loading code (using dlsym() and friends) will experience trouble if the library's API changes silently. If you want to be on the really, really safe side, shut down the system, mount the file system from another operating system instance, update and bring up the updated system again.
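In shell terms, that unlink-and-recreate dance is what a copy-then-rename gives you (a sketch; prog stands in for your binary): $ scp new-binary host:/usr/local/bin/prog.new
$ ssh host 'mv /usr/local/bin/prog.new /usr/local/bin/prog'
mv uses rename(2) , which atomically swaps the directory entry instead of writing into the busy inode, so you never hit ETXTBSY and running processes keep the old program. | {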
"source": [
"https://unix.stackexchange.com/questions/74165",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/38191/"
]
} |
74,185 | When I search for some process that doesn't exist, e.g. $ ps aux | grep fnord
wayne 15745 0.0 0.0 13580 928 pts/6 S+ 03:58 0:00 grep fnord Obviously I don't care about grep - that makes as much sense as searching for the ps process! How can I prevent grep from showing up in the results? | Turns out there's a solution found in keychain . $ ps aux | grep "[f]nord" By putting the brackets around the letter and quotes around the string you search for the regex, which says, "Find the character 'f' followed by 'nord'." The regex [f]nord still matches the literal text fnord, but the grep process's own command line now contains the literal characters [f]nord, which that regex does not match, so grep won't show up in the results list. Neato! | {
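If your system has pgrep , it sidesteps the problem entirely (the -a switch is from newer procps; older versions spell it -l ): $ pgrep -a fnord
This prints matching PIDs and command lines without a grep process ever entering the picture.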
"source": [
"https://unix.stackexchange.com/questions/74185",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/5788/"
]
} |
74,334 | I am trying to download a file from sourceforge using wget, but as we all know we have to click on the download button and then wait for it to auto download. how do you download this type of file using wget? I am trying to download this: http://sourceforge.net/projects/bitcoin/files/Bitcoin/bitcoin-0.8.1/bitcoin-0.8.1-linux.tar.gz/download But doing a wget on that url link would not get me the file as the file is auto loaded via the browser. | I would suggest using curl to do this instead of wget . It can follow the redirection using the switches -L , -J , and -O . curl -O -J -L http://sourceforge.net/projects/bitcoin/files/Bitcoin/bitcoin-0.8.1/bitcoin-0.8.1-linux.tar.gz/download switch definitions -O/--remote-name
Write output to a local file named like the remote file we get.
(Only the file part of the remote file is used, the path is cut off.)
-L/--location
(HTTP/HTTPS) If the server reports that the requested page has moved
to a different location (indicated with a Location: header and a 3XX
response code), this option will make curl redo the request on the new
place. If used together with -i/--include or -I/--head, headers from
all requested pages will be shown. When authentication is used, curl only
sends its credentials to the initial host. If a redirect takes curl to a
different host, it won't be able to intercept the user+password.
See also --location-trusted on how to change this. You can limit the
amount of redirects to follow by using the --max-redirs option.
-J/--remote-header-name
(HTTP) This option tells the -O/--remote-name option to use the
server-specified Content-Disposition filename instead of extracting a
filename from the URL. See the curl man page for more details. | {
"source": [
"https://unix.stackexchange.com/questions/74334",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/33183/"
]
} |
74,346 | I am incorporating SED into a Windows batch file. I've got it working for deleting the first seven lines of a text file: SED "1,7d" However, I'd like to make it a smarter statement. Essentially, I have a huge text file that I want to lop off the first few lines…and keep everything from the column heading and below. The first line of text I want is line 8 and is "Year" with quotes. I've tried the following and I'm receiving an error in the command window (running the batch file): SED "1,/"Year"/!d" I am putting in the input and output files after the expression. | Two things are going wrong. First, the quoting: inside a double-quoted batch argument the inner " characters end the argument early, so they must be escaped as \" . Second, the logic: 1,/"Year"/!d keeps lines 1 through the first "Year" line and deletes everything after it, which is the opposite of what you want. What you want is to suppress default output and print only the range that starts at the first line containing "Year" (quotes included) and runs to the end of the file.
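A minimal sketch, assuming GNU sed for Windows ( input.txt and output.txt are placeholder names): sed -n "/\"Year\"/,$p" input.txt > output.txt
Here -n suppresses the default printing, the address /\"Year\"/,$ selects from the first match through the last line, and p prints just that range; the inner quotes are escaped as \" so the batch parser leaves them alone. | {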
"source": [
"https://unix.stackexchange.com/questions/74346",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/38324/"
]
} |
74,477 | I have problem with postfix on debian after upgrading from squeeze to wheezy.
Postfix was configured to sign messages using dkim-filter. Before update, everything was working flawlessly, now it fails on connection with service (tcp or unix sockets).
I thought that maybe it was because of debian switch to opendkim, so I removed dkim-filter and installed opendkim - same problem. I even tried setting unix file socket connection instead of tcp option - same problem: postfix/smtpd: warning: connect to Milter service unix:/var/run/opendkim/opendkim.sock: No such file or directory or (with tcp/ip): postfix/cleanup: warning: connect to Milter service inet:localhost:8891: Connection refused I checked twice - socket file exists and service was listening on port 8891. What can I do to fix this? | Check if opendkim is running. (I assume it is, as you saw the socket file.) Did you configure opendkim? The configuration file is /etc/opendkim.conf . You need to update the file to match your site/domain and dkim.key path. Add postfix to opendkim group If the opendkim.sock permissions are as follows $ ls -l /var/run/opendkim
-rw-rw-r-- 1 opendkim opendkim 6 May 2 14:56 opendkim.pid
srwxrwxr-x 1 opendkim opendkim 0 May 2 14:56 opendkim.sock If Not, make sure UMask is set to 0002 in /etc/opendkim.conf . Then do the following sudo adduser postfix opendkim Postfix running in chroot Modify /etc/default/opendkim , change SOCKET option to postfix chroot location SOCKET="local:/var/spool/postfix/var/run/opendkim/opendkim.sock" You will have to create directory /var/spool/postfix/var/run/opendkim and change its permission sudo mkdir -p /var/spool/postfix/var/run/opendkim
sudo chown opendkim:opendkim /var/spool/postfix/var/run/opendkim Finally, restart opendkim: sudo service opendkim restart
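It is also worth confirming that Postfix's main.cf points at the same socket; a sketch (with a chrooted smtpd, Postfix resolves the unix: path relative to its queue directory, /var/spool/postfix ): smtpd_milters = unix:/var/run/opendkim/opendkim.sock
non_smtpd_milters = $smtpd_milters | {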
"source": [
"https://unix.stackexchange.com/questions/74477",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/38396/"
]
} |
74,484 | I get the error /usr/bin/env: zsh -: No such file or directory ...when I run an executable zsh script that starts with the following shebang line: #!/usr/bin/env zsh - Also, FWIW, replacing - with -- causes /usr/bin/env to print a similar complaint about zsh -- . I have only seen this error under ubuntu, and only in the context of the shebang hack. Under darwin the same script runs fine. And under ubuntu, running % /usr/bin/env zsh - from the command line succeeds. (Actually, "under ubuntu" should be understood as short-hand for "under ubuntu 12.04 LTS and env (GNU coreutils) 8.13 ".) My question is: how can I modify the shebang above to avoid this error? Of course, I know that removing the trailing - will eliminate the error, but this is not an acceptable solution. The rest of this post explains why. The shebang line that is causing the error comes about as an attempt to comply with two entirely independent guidelines: To make scripts portable, use #!/usr/bin/env <cmd ...> rather than #!/path/to/cmd <...> . Putting - as the sole argument to zsh in the shebang line of zsh scripts thwarts certain types of attacks . So, I can restate my question more precisely as follows: Is it possible to satisfy both of these guidelines without triggering the error shown above under ubuntu? | The error message is the giveaway: the Linux kernel passes everything after the interpreter on a shebang line as a single argument, so env receives the one argument "zsh -" (embedded space and all) and looks for a program literally named that. Darwin's kernel splits the shebang arguments, which is why the same script works there; and on the command line your shell does the splitting, which is why /usr/bin/env zsh - succeeds interactively. With env from coreutils 8.13 there is no way to make it split the string, so on that system you cannot satisfy both guidelines at once; you have to pick one, e.g. #!/bin/zsh - (hard-coded path, keeps the protective - ) or #!/usr/bin/env zsh (portable, drops it).
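For what it's worth, newer GNU env (coreutils 8.30 and later) grew a -S option that splits the remainder of the line into separate arguments, which does satisfy both guidelines where it is available (a sketch): #!/usr/bin/env -S zsh - | {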
"source": [
"https://unix.stackexchange.com/questions/74484",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/10618/"
]
} |
74,520 | Can I redirect output to a log file and a background process at the same time? In other words, can I do something like this? nohup java -jar myProgram.jar 2>&1 > output.log & Or, is that not a legal command? Or, do I need to manually move it to the background, like this: java -jar myProgram.jar 2>$1 > output.log
jobs
[CTRL-Z]
bg 1 | One problem with your first command is that you redirect stderr to where stdout is (if you changed the $ to a & as suggested in the comment) and then, you redirected stdout to some log file, but that does not pull along the redirected stderr. You must do it in the other order, first send stdout to where you want it to go, and then send stderr to the address stdout is at some_cmd > some_file 2>&1 & and then you could throw the & on to send it to the background. Jobs can be accessed with the jobs command. jobs will show you the running jobs, and number them. You could then talk about the jobs using a % followed by the number like kill %1 or so. Also, without the & on the end you can suspend the command with Ctrl z , use the bg command to put it in the background and fg to bring it back to the foreground. In combination with the jobs command, this is powerful. to clarify the above part about the order you write the commands. Suppose stderr is address 1002, stdout is address 1001, and the file is 1008. The command reads left to right, so the first thing it sees in yours is 2>&1 which moves stderr to the address 1001, it then sees > file which moves stdout to 1008, but keeps stderr at 1001. It does not pull everything pointing at 1001 and move it to 1008, but simply references stdout and moves it to the file. The other way around, it moves stdout to 1008, and then moves stderr to the point that stdout is pointing to, 1008 as well. This way both can point to the single file. | {
"source": [
"https://unix.stackexchange.com/questions/74520",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/7586/"
]
} |
74,522 | On Ubuntu 12.04, if I use the command: upower -i /org/freedesktop/UPower/devices/battery_BAT0 It will output all of the statistics for the battery. However, what does it output if I am using a desktop (I.e. does not have a battery?) | On a machine with no battery that device object simply does not exist, so there is nothing for upower -i to report; depending on the version you get empty output or an error about the unknown object path. The reliable approach is to enumerate the devices that are actually present first, with upower -e , and only query a battery path if one is listed.
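Illustrative output from a battery-less desktop (device paths vary by machine): $ upower -e
/org/freedesktop/UPower/devices/line_power_AC
Only the AC line-power device is enumerated; there is no battery_BAT0 path to pass to -i . | {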
"source": [
"https://unix.stackexchange.com/questions/74522",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/38422/"
]
} |
74,545 | I've found that ssh-keygen ("ssh" package) produces different keys from puttygen (" putty " package). If I create public and private keys with ssh-keygen some SSH servers will not accept my keys. If I create keys with puttygen only one server does accept it. Why don't Linux repositories propose some common solution (package) for it? I've found another package ssh-3.2.9.1 which creates keys that work with PuTTY. But why isn't there any handy solution in SSH? | OpenSSH is the de facto standard implementation of the SSH protocol. If PuTTY and OpenSSH differ, PuTTY is the one that's incompatible. If you generate a key with OpenSSH using ssh-keygen with the default options, it will work with virtually every server out there. A server that doesn't accept such a key would be antique, using a different implementation of SSH, or configured in a weird restrictive way. Keys of a non-default type may not be supported on some servers. In particular, ECDSA keys make session establishment very slightly faster, but they are only supported by recent versions of OpenSSH. PuTTY uses a different key file format. It comes with tools to convert between its own .ppk format and the format of OpenSSH. This ssh-3.2.9.1 you found is a commercial product which has its own different private key format. There isn't any reason to use it instead of OpenSSH. It can only be less compatible, it requires paying, and there's about zero tutorials on how to use it out there. | {
"source": [
"https://unix.stackexchange.com/questions/74545",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/32838/"
]
} |
74,713 | How frequently is the proc file system updated on Linux? Is it 20 milliseconds (time quantum)? | The information that you read from the proc filesystem is not stored on any media (not even in RAM), so there is nothing to update. The purpose of the proc file system is to allow userspace programs to obtain or set kernel data using the simple and familiar file system semantics ( open , close , read , write , lseek ), even though the data that is read or written doesn't reside on any media. This design decision was deemed better (e.g. human readable and easily scriptable) for getting and setting data whose format could not be specified in advance than implementing something such as ASN1 encoded OIDs, which also would have worked fine. The data that you see when you read from the proc filesystem is generated on-the-fly when you do a read from the begining of a file. That is, doing the read causes the data to be generated by a kernel callback function that is specific to the file you are reading. Doing an lseek to the begining of the file and reading again causes another call to the callback that generates the data again. Similarly, when you write to a writable file in the proc filesystem, a callback function is called that parses the input and sets kernel variables. The input data in it's raw form isn't stored. The above is just a slightly more verbose way of saying what Hauke Laging states so succinctly. I suggest that you accept his answer. | {
"source": [
"https://unix.stackexchange.com/questions/74713",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/38533/"
]
} |
74,722 | A few days ago I started to care a lot about my data security, I end up nmap ing myself with: nmap 127.0.0.1 Surprise, surprise, I have lots of active services listen to localhost: $ nmap 127.0.0.1
Starting Nmap 5.21 ( http://nmap.org ) at 2013-05-05 00:19 WEST
Nmap scan report for localhost (127.0.0.1)
Host is up (0.00025s latency).
Not shown: 993 closed ports
PORT STATE SERVICE
22/tcp open ssh
25/tcp open smtp
53/tcp open domain
111/tcp open rpcbind
139/tcp open netbios-ssn
445/tcp open microsoft-ds
631/tcp open ipp
Nmap done: 1 IP address (1 host up) scanned in 0.05 seconds The only one that I might use is ssh (although it is probably not well configured, I will keep this matter to another question). As far as I know ipp protocol is used by CUPS to share my printers, I don't need to share them, just access printers from a server. This is the output of netstat -lntup by the root user, removing the localhost addresses: Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN 497/sshd
tcp 0 0 0.0.0.0:17500 0.0.0.0:* LISTEN 2217/dropbox
tcp 0 0 0.0.0.0:445 0.0.0.0:* LISTEN 892/smbd
tcp 0 0 0.0.0.0:50022 0.0.0.0:* LISTEN 1021/rpc.statd
tcp 0 0 0.0.0.0:139 0.0.0.0:* LISTEN 892/smbd
tcp 0 0 0.0.0.0:111 0.0.0.0:* LISTEN 906/rpcbind
tcp6 0 0 :::22 :::* LISTEN 497/sshd
tcp6 0 0 :::42712 :::* LISTEN 1021/rpc.statd
tcp6 0 0 :::445 :::* LISTEN 892/smbd
tcp6 0 0 :::139 :::* LISTEN 892/smbd
tcp6 0 0 :::111 :::* LISTEN 906/rpcbind
udp 0 0 0.0.0.0:51566 0.0.0.0:* 615/avahi-daemon: r
udp 0 0 0.0.0.0:68 0.0.0.0:* 7362/dhclient
udp 0 0 0.0.0.0:111 0.0.0.0:* 906/rpcbind
udp 0 0 192.168.1.255:137 0.0.0.0:* 1782/nmbd
udp 0 0 192.168.1.67:137 0.0.0.0:* 1782/nmbd
udp 0 0 0.0.0.0:137 0.0.0.0:* 1782/nmbd
udp 0 0 192.168.1.255:138 0.0.0.0:* 1782/nmbd
udp 0 0 192.168.1.67:138 0.0.0.0:* 1782/nmbd
udp 0 0 0.0.0.0:138 0.0.0.0:* 1782/nmbd
udp 0 0 0.0.0.0:655 0.0.0.0:* 906/rpcbind
udp 0 0 0.0.0.0:17500 0.0.0.0:* 2217/dropbox
udp 0 0 0.0.0.0:5353 0.0.0.0:* 615/avahi-daemon: r
udp 0 0 0.0.0.0:34805 0.0.0.0:* 1021/rpc.statd
udp6 0 0 :::40192 :::* 1021/rpc.statd
udp6 0 0 :::111 :::* 906/rpcbind
udp6 0 0 :::655 :::* 906/rpcbind
udp6 0 0 :::5353 :::* 615/avahi-daemon: r
udp6 0 0 :::42629 :::* 615/avahi-daemon: r How do I configure those services so they only listen to the outside world when I'm actually using them? | Determine your exposure Taking your output from the netstat command, what looks like a lot of services is actually a very short list: $ netstat -lntup | awk '{print $6 $7}'|sed 's/LISTEN//'| cut -d"/" -f2|sort|uniq|grep -v Foreign
avahi-daemon:r
dhclient
dropbox
nmbd
rpcbind
rpc.statd
smbd
sshd Getting a lay of the land Looking at this list there are several services which I'd leave alone. dhclient the DHCP client daemon, responsible for getting your IP address; have to have this one. dropbox obviously Dropbox, have to have Start reducing it - disable Samba You can probably right off the bat disable Samba, it accounts for 2 of the above services, nmbd and smbd . It's questionable that you'd really need that running on a laptop whether on localhost or your IP facing your network. To check that they're running you can use the following command, status : $ status nmbd
nmbd start/running, process 19457
$ status smbd
smbd start/running, process 19423 Turning services off can be confusing with all the flux that's been going on with upstart, /etc/rc.d, business so it might be difficult to figure out which service is under which technology. For Samba you can use the service command: $ sudo service nmbd stop
nmbd stop/waiting
$ sudo service smbd stop
smbd stop/waiting Now they're off: $ status nmbd
nmbd stop/waiting
$ status smbd
smbd stop/waiting Keeping them off ... permanently To make them stay off I've been using this tool, sysv-rc-conf , to manage services from a console, it works better than most. It allows you to check which services you want to run and in which runlevel they should be started/stopped: $ sudo apt-get install sysv-rc-conf Disabling the rest of what's NOT needed So now Samba's off we're left with the following: avahi-daemon part of zeroconf (plug-n-play), turn it off rpcbind needed for NFS - turn it off rpc.statd needed for NFS - turn it off For the remaining 3 you can do the same things we did for Samba to turn them off as well. CUPS? To turn CUPS off, which you don't really need by the way, you can follow the same dance of turning the service off and then disabling it from starting up. To be able to print you'll need to set up each printer individually on your system. You can do so through the system-config-printer GUI. Making these services on demand? This is really the heart of your question but there isn't really a silver bullet solution to making these services "smart" so that they run when they're being used, rather than all the time. #1 - systemd vs. upstart Part of it is the current split between systemd and upstart . There's a good overview of the 2 competing technologies here . Both technologies are trying to do slightly different things, IMO; given their feature sets, systemd seems geared more towards servers whereas upstart seems geared more towards the desktop role. Over time this will work itself out, IMO, and both services will be stable and feature rich. Eventually both services will offer on demand starting & stopping across the board for all the services they manage. Features such as StopWhenUnneeded=yes already exist in systemd for example, so it's only a matter of time until these capabilities get fleshed out. #2 - service support Some services don't support being stopped/started very well if at all. Services such as sshd seem to make little sense to run as on-demand, especially if they're used heavily. Also some services such as Apache provide mechanisms within themselves to spin up more or less of their own listeners managing themselves. So it's unclear how the on-demand facilities provided by systemd or upstart are going to integrate with these types of services. starting sshd on first connection to port 22 with upstart's new socket bridge CUPS and systemd: on demand start and stop Is this really necessary? You'll hear from both sides that this is overkill or that you should take a minimalist's approach only installing what you absolutely need, but it's really a personal choice. Understanding that these services are there and what they do is really what's important. At the end of the day a computer is a tool, and by using a Unix system you're already saying that you're willing to peek behind the curtain and understand what makes your computer tick. I'd say that this type of questioning is exactly the frame of mind one should strive for when dealing with computers and Unix in general. References Recommended way to enable & disable services upstart - wikipedia systemd - wikipedia Running Daemons: http://tech.cueup.com/blog/2013/03/08/running-daemons/
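As a concrete sketch of the socket-activation idea under systemd (the unit name and port are illustrative, not a recipe for any particular daemon): # /etc/systemd/system/mydaemon.socket
[Socket]
ListenStream=2222
[Install]
WantedBy=sockets.target
With this enabled, systemd listens on port 2222 itself and starts the matching mydaemon.service only when the first connection arrives. | {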
"source": [
"https://unix.stackexchange.com/questions/74722",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/30320/"
]
} |
74,745 | I'm using this command: chown root:www-data /var/www/example.com -R but I get an error message that the directory is not listed. What is wrong? | The most common cause of chown complaining about its operand is that the path does not exist as typed, so start by checking it with ls -ld /var/www/example.com . Beyond that, note that POSIX requires options to come before the operands; only GNU chown tolerates a trailing -R , so the portable spelling puts the option first. You also need to run the command as root (e.g. via sudo ), since only the superuser may change a file's owner.
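The corrected, portable form (assuming the path exists): sudo chown -R root:www-data /var/www/example.com
ls -ld /var/www/example.com    # verify the new owner and group | {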
"source": [
"https://unix.stackexchange.com/questions/74745",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/38549/"
]
} |
74,785 | when starts session named any name like this screen -S name1 i want to open tabs windows in this screen session like when open tabs in gnome-terminal like this gnome-terminal --tab -e "some commands" so how to do this ? | 1. Tabs in screen You're looking for this to add to your .screenrc file: screen -t tab1
screen -t tab2 Here's a nice basic .screenrc to get you started with a status bar etc. NOTE: This is typically located in your home directory /home/<username>/.screenrc . screen -t validate #rtorrent
screen -t compile #irssi
screen -t bash3
screen -t bash4
screen -t bash5
altscreen on
term screen-256color
bind ',' prev
bind '.' next
#
#change the hardstatus settings to give a window list at the bottom of the
#screen, with the time and date and with the current window highlighted
hardstatus alwayslastline
#hardstatus string '%{= kG}%-Lw%{= kW}%50> %n%f* %t%{= kG}%+Lw%< %{= kG}%-=%c:%s%{-}'
hardstatus string '%{= kG}[ %{G}%H %{g}][%= %{= kw}%?%-Lw%?%{r}(%{W}%n*%f%t%?(%u)%?%{r})%{w}%?%+Lw%?%?%= %{g}][%{B} %m-%d %{W}%c %{g}]' screenshot 2. Tabs in screen (with commands run inside) The example .screenrc below will create 2 tabs and run 3 echo commands in each. screen -t tab1
select 0
stuff "echo 'tab1 cmd1'; echo 'tab1 cmd2'; echo 'tab1 cmd3'^M"
screen -t tab2
select 1
stuff "echo 'tab2 cmd1'; echo 'tab2 cmd2'; echo 'tab2 cmd3'^M"
altscreen on
term screen-256color
bind ',' prev
bind '.' next
#
#change the hardstatus settings to give a window list at the bottom of the
#screen, with the time and date and with the current window highlighted
hardstatus alwayslastline
#hardstatus string '%{= kG}%-Lw%{= kW}%50> %n%f* %t%{= kG}%+Lw%< %{= kG}%-=%c:%s%{-}'
hardstatus string '%{= kG}[ %{G}%H %{g}][%= %{= kw}%?%-Lw%?%{r}(%{W}%n*%f%t%?(%u)%?%{r})%{w}%?%+Lw%?%?%= %{g}][%{B} %m-%d %{W}%c %{g}]' This technique makes use of screen's select and stuff commands to initially select one of the tabs, and then "stuff" a string into it. screenshot 3. Creating #2 without using a .screenrc file If you're looking for the scenario where you can: create a screen session load it up with tabs have each tab running their own commands not require a .screenrc file Then this is the one for you! Be prepared though. This one can get a little tricky with the command lines. For starters let's create a screen session: $ screen -AdmS myshell -t tab0 bash The switches -AdmS do the following: (See the screen man page for more details) -A Adapt the sizes of all windows to the size of the current terminal.
By default, screen tries to restore its old window sizes when
attaching to resizable terminals -d -m Start screen in "detached" mode. This creates a new session but
doesn't attach to it. This is useful for system startup scripts. -S sessionname When creating a new session, this option can be used to specify a
meaningful name for the session. This name identifies the session for
"screen -list" and "screen -r" actions. It substitutes the default
[tty.host] suffix. Now let's start loading it up with tabs + their commands: $ screen -S myshell -X screen -t tab1 vim
$ screen -S myshell -X screen -t tab2 ping www.google.com
$ screen -S myshell -X screen -t tab3 bash These 3 commands will create 3 additional tabs and run vim, ping google, and launch a bash shell. If we list out the screen sessions we'll see the following: $ screen -ls
There is a screen on:
26642.myshell (Detached)
1 Socket in /var/run/screen/S-root. If we connect to the screen session, myshell , and list the tabs that it contains we'll see the following: $ screen -r myshell Hit this key combination: Ctrl + A followed by Shift + " Num Name Flags
0 tab0 $
1 tab1 $
2 tab2 $
3 tab3 $ Switching to tab2 : 64 bytes from ord08s08-in-f20.1e100.net (74.125.225.116): icmp_seq=443 ttl=55 time=41.4 ms
64 bytes from ord08s08-in-f20.1e100.net (74.125.225.116): icmp_seq=444 ttl=55 time=33.0 ms
64 bytes from ord08s08-in-f20.1e100.net (74.125.225.116): icmp_seq=445 ttl=55 time=30.1 ms screenshot The above commands are the basic way to accomplish what the OP was looking for. This of course can be condensed and refined using Bash aliases or even shell scripts, this is merely to demonstrate the capability and show the way! References Screen man page | {
"source": [
"https://unix.stackexchange.com/questions/74785",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/38575/"
]
} |
74,874 | I want to kill the following processes using pkill "run_tcp_sender.sh" or pkill -SIGKILL "run_tcp_sender.sh"
root 14320 1 0 2012 ? 00:00:00 bash run_tcp_sender.sh 138.96.116.22
root 14323 1 0 2012 ? 00:00:00 bash run_tcp_sender.sh 138.96.116.22
root 14325 1 0 2012 ? 00:00:00 bash run_tcp_sender.sh 138.96.116.22
root 14327 1 0 2012 ? 00:00:00 bash run_tcp_sender.sh 138.96.116.22
root 14328 1 0 2012 ? 00:00:00 bash run_tcp_sender.sh 138.96.116.22
root 14330 1 0 2012 ? 00:00:00 bash run_tcp_sender.sh 138.96.116.22 but it is useless
the processes remain there
what is wrong with my command? BTW:
I can use the following command to achieve what I want kill -9 $(ps -ef|grep "run_tcp"|grep -v "grep"|awk '{print $2}') | pkill by default sends the SIGTERM signal to processes to stop. Here's a list of the signals you can send a process. You can send them by name or number typically: $ kill -l
1) SIGHUP 2) SIGINT 3) SIGQUIT 4) SIGILL 5) SIGTRAP
6) SIGABRT 7) SIGBUS 8) SIGFPE 9) SIGKILL 10) SIGUSR1
11) SIGSEGV 12) SIGUSR2 13) SIGPIPE 14) SIGALRM 15) SIGTERM
16) SIGSTKFLT 17) SIGCHLD 18) SIGCONT 19) SIGSTOP 20) SIGTSTP
21) SIGTTIN 22) SIGTTOU 23) SIGURG 24) SIGXCPU 25) SIGXFSZ
26) SIGVTALRM 27) SIGPROF 28) SIGWINCH 29) SIGIO 30) SIGPWR
31) SIGSYS 34) SIGRTMIN 35) SIGRTMIN+1 36) SIGRTMIN+2 37) SIGRTMIN+3
38) SIGRTMIN+4 39) SIGRTMIN+5 40) SIGRTMIN+6 41) SIGRTMIN+7 42) SIGRTMIN+8
43) SIGRTMIN+9 44) SIGRTMIN+10 45) SIGRTMIN+11 46) SIGRTMIN+12 47) SIGRTMIN+13
48) SIGRTMIN+14 49) SIGRTMIN+15 50) SIGRTMAX-14 51) SIGRTMAX-13 52) SIGRTMAX-12
53) SIGRTMAX-11 54) SIGRTMAX-10 55) SIGRTMAX-9 56) SIGRTMAX-8 57) SIGRTMAX-7
58) SIGRTMAX-6 59) SIGRTMAX-5 60) SIGRTMAX-4 61) SIGRTMAX-3 62) SIGRTMAX-2
63) SIGRTMAX-1 64) SIGRTMAX So you're sending signal # 15. If the processes are not responding to this signal then you may need to use signal #9, pkill -SIGKILL . From the man page of pkill : -signal Defines the signal to send to each matched process. Either the
numeric or the symbolic signal name can be used. (pkill only.) Issues with pkill The OP mentioned that he was unsuccessful in getting pkill -SIGKILL "run_tcp" to work. We initially thought that the issue had to do with pkill potentially killing itself before it had finished killing all the "run_tcp" processes. But that was hard to accept given a foot note in the pkill man page: The running pgrep or pkill process will never report itself as a match. In addition to that, @Gilles left a comment basically saying the same thing, that pkill just does not kill itself. Then he gave us a pretty big clue as to what was actually going on. Here's an example that demonstrates what the OP and myself were missing: step 1 - make a sleepy.bash script #!/bin/bash
sleep 10000 step 2 - load up some fake sleep tasks $ for i in `seq 1 5`;do bash sleepy.bash & done
[1] 12703
[2] 12704
[3] 12705
[4] 12706
[5] 12707 step 3 - check the running tasks $ ps -eaf|egrep "sleep 10000|sleepy"
saml 12703 29636 0 21:48 pts/16 00:00:00 bash sleepy.bash
saml 12704 29636 0 21:48 pts/16 00:00:00 bash sleepy.bash
saml 12705 29636 0 21:48 pts/16 00:00:00 bash sleepy.bash
saml 12706 29636 0 21:48 pts/16 00:00:00 bash sleepy.bash
saml 12707 29636 0 21:48 pts/16 00:00:00 bash sleepy.bash
saml 12708 12704 0 21:48 pts/16 00:00:00 sleep 10000
saml 12709 12707 0 21:48 pts/16 00:00:00 sleep 10000
saml 12710 12705 0 21:48 pts/16 00:00:00 sleep 10000
saml 12711 12703 0 21:48 pts/16 00:00:00 sleep 10000
saml 12712 12706 0 21:48 pts/16 00:00:00 sleep 10000 step 4 - try using my pkill $ pkill -SIGTERM sleepy.bash step 5 - what happened? Doing the ps command from above, we see that none of the processes were killed, just like the OPs issue. What's going on? Turns out this is an issue in how we were attempting to make use of pkill . The command: pkill -SIGTERM "sleepy.bash" was looking for a process by the name of "sleepy.bash" . Well there aren't any processes by that name. There's processes that are named "bash sleepy.bash" though. So pkill was looking for processes to kill and not finding any and then exiting. So if we slightly adjust the pkill we're using to this: $ pkill -SIGTERM -f "sleepy.bash"
[1] Terminated bash sleepy.bash
[2] Terminated bash sleepy.bash
[3] Terminated bash sleepy.bash
[4]- Terminated bash sleepy.bash
[5]+ Terminated bash sleepy.bash Now we get the effect we were looking for. What's the difference? We made use of the -f switch to pkill which makes pkill use the entire command line path when matching vs. just the process name. from pkill man page -f The pattern is normally only matched against the process name.
When -f is set, the full command line is used. Alternative methods kill, ps This method is pretty verbose but does the job as well: kill -9 $(ps -ef|grep "run_tcp"|grep -v "grep"|awk '{print $2}') pgrep w/ kill & killall You can use pgrep to feed a list of PIDs to kill , or make use of killall instead; note that pkill does not read PIDs from standard input, so piping pgrep output into it has no effect. Examples # pgrep solution
$ kill -SIGKILL $(pgrep -f "run_tcp")
# killall
killall -SIGKILL -r run_tcp References pgrep man page pkill man page killall man page | {
"source": [
"https://unix.stackexchange.com/questions/74874",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/38026/"
]
} |
74,924 | I have a software RAID5 array (Linux md) on 4 disks. I would like to replace one of the disks with a new one, without putting the array in a degraded state , and if possible, online. How would that be possible? It's important because I don't want to: take the risk of stressing the other disks so one may crash during rebuild, take the risk of being in a "no-parity state" so I don't have a safety net for some time. I suppose doing so online is too much asking and I should just raw copy ( dd ) the data of the old disk to the new one offline and then replace it, but I think it is theoretically possible... Some context : Those disks have all been spinning almost continuously for more than 5.5 years. They still work perfectly for the moment and they all pass the (long) SMART self-test. However, I have reasons to think that one of those 4 disks will not last much longer (supposed predictive failure). | Using mdadm 3.3+ Since mdadm 3.3 (released 2013, Sep 3), if you have a 3.2+ kernel , you can proceed as follows: # mdadm /dev/md0 --add /dev/sdc1
# mdadm /dev/md0 --replace /dev/sdd1 --with /dev/sdc1 sdd1 is the device you want to replace, sdc1 is the preferred device to do so and must be declared as a spare on your array. The --with option is optional; if not specified, any available spare will be used. Older mdadm version Note: You still need a 3.2+ kernel . First, add a new drive as a spare (replace md0 and sdc1 with your RAID and disk device, respectively): # mdadm /dev/md0 --add /dev/sdc1 Then, initiate a copy-replace operation like this ( sdd1 being the failing device): # echo want_replacement > /sys/block/md0/md/dev-sdd1/state Result The system will copy all readable blocks from sdd1 to sdc1 . If it comes to an unreadable block, it will reconstruct it from parity. Once the operation is complete, the former spare (here: sdc1 ) will become active, and the failing drive will be marked as failed (F) so you can remove it. Note: credit goes to frostschutz and Ansgar Esztermann who found the original solution (see the duplicate question ). Older kernels Other answers suggest: Johnny 's approach : convert array to RAID6, "replace" the disk, then back to RAID5, Hauke Laging 's approach : briefly remove the disk from the RAID5 array, make it part of a RAID1 (mirror) with the new disk and add that mirror drive back to the RAID5 array (theoretical)...
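To watch the copy-replace run, the usual places to look are (a sketch): $ cat /proc/mdstat            # shows the rebuild/replace progress bar
$ mdadm --detail /dev/md0     # per-device state; the old disk is marked faulty once done | {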
"source": [
"https://unix.stackexchange.com/questions/74924",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/30196/"
]
} |
75,011 | As I understand it this is what happens when a client makes a connection request: The server will be bound to a particular port number. The port number is always bound to a listening process. Since only the server is listening for incoming connections, we don't need to bind on the client side The server will keep on listening on that port number. The client will send a connect() request. The server will accept the request using accept() . As soon as the server accepts the client request, the kernel allocates a random port number for the server for further send() and receive() , since the same port number on the server can't be used for sending as well as listening, and the previous port is still listening for new connections Given all that, how does the server find out what port the client is receiving on? I know the client will send TCP segments with a source port and destination port, so the server will use the source port of that segment as its destination port, but what function does the server call to find out about that port? Is it accept() ? | It's part of the TCP (or UDP, etc.) header, in the packet. So the server finds out because the client tells it. This is similar to how it finds out the client's IP address (which is part of the IP header). E.g., every TCP packet includes an IP header (with source IP, destination IP, and protocol [TCP], at least). Then there is a TCP header (with source and destination port, plus more). When the kernel receives a SYN packet (the start of a TCP connection) with a remote IP of 10.11.12.13 (in the IP header) and a remote port of 12345 (in the TCP header), it then knows the remote IP and port. It sends back a SYN|ACK. If it gets an ACK back, the listen call returns a new socket, set up for that connection. A TCP socket is uniquely identified by the four values (remote IP, local IP, remote port, local port). You can have multiple connections/sockets, as long as at least one of those differs. Typically, the local port and local IP will be the same for all connections to a server process (e.g. all connections to sshd will be on local-ip:22). If one remote machine makes multiple connections, each one will use a different remote port. So everything but the remote port will be the same, but that's fine: only one of the four has to differ. You can use, e.g., Wireshark to see the packet, and it'll label all the data for you. Here is the source port highlighted (notice it highlighted in the decoded packet, as well as the hex dump at the bottom):
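You can also see the resulting four-tuples on a live system (illustrative output; the addresses and ports are made up): $ netstat -tn
tcp        0      0 192.168.1.5:22        10.11.12.13:12345     ESTABLISHED
Each line pairs the local address:port with the remote address:port taken from exactly those headers. | {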
"source": [
"https://unix.stackexchange.com/questions/75011",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/34060/"
]
} |
75,046 | when specifying ls --directory a* it should list only directories starting with a* BUT it lists files AND directories starting with a Questions : where might I find some documentation on this, other than man and info where I think I thoroughly looked? does this work in BASH only? | The a* and *a* syntax is implemented by the shell, not by the ls command. When you type ls a* at your shell prompt, the shell expands a* to a list of all existing files in the current directory whose names start with a . For example, it might expand a* to the sequence a1 a2 a3 , and pass those as arguments to ls . The ls command itself never sees the * character; it only sees the three arguments a1 , a2 , and a3 . For purposes of wildcard expansion, "files" refers to all entities in the current directory. For example, a1 might be a normal file, a2 might be a directory, and a3 might be a symlink. They all have directory entries, and the shell's wildcard expansion doesn't care what kind of entity those entries refer to. Practically all shells you're likely to run across (bash, sh, ksh, zsh, csh, tcsh, ...) implement wildcards. The details may vary, but the basic syntax of * matching zero or more characters and ? matching any single character is reasonably consistent. For bash in particular, this is documented in the "Filename expansion" section of the bash manual; run info bash and search for "Filename expansion". The fact that this is done by the shell, and not by individual commands, has some interesting (and sometimes surprising) consequences. The best thing about it is that wildcard handling is consistent for (very nearly) all commands; if the shell didn't do this, inevitably some commands wouldn't bother, and others would do it in subtly different ways that the author thought were "better". (I think the Windows command shell has this problem, but I'm not familiar enough with it to comment further.) On the other hand, it's difficult to write a command to rename multiple files. If you write: mv *.log *.log.bak it will probably fail, since *.log.bak is expanded based on the files that already exist in the current directory. There are commands that do this kind of thing, but they have to use their own syntax to specify how the files are to be renamed. Some commands (such as find ) can do their own wildcard expansion; you have to quote the arguments to suppress the shell's expansion: find . -name '*.txt' -print The shell's wildcard expansion is based entirely on the syntax of the command-line argument and the set of existing files. It can't be affected by the meaning of the command. For example, if you want to move all .log files up to the parent directory, you can type: mv *.log .. If you forget the .. : mv *.log and there happen to be exactly two .log files in the current directory, it will expand to: mv one.log two.log which will rename one.log and clobber two.log . EDIT : And after 52 upvotes, an accept, and a Guru badge, maybe I should actually answer the question in the title. The -d or --directory option to ls doesn't tell it to list only directories. It tells it to list directories just as themselves, not their contents. If you give a directory name as an argument to ls , by default it will list the contents of the directory, since that's usually what you're interested in. The -d option tells it to list just the directory itself. This can be particularly useful when combined with wildcards. 
If you type: ls -l a* ls will give you a long listing of each file whose name starts with a , and of the contents of each directory whose name starts with a . If you just want a list of the files and directories, one line for each, you can use: ls -ld a* which is equivalent to: ls -l -d a* Remember again that the ls command never sees the * character. As for where this is documented, man ls will show you the documentation for the ls command on just about any Unix-like system. On most Linux-based systems, the ls command is part of the GNU coreutils package; if you have the info command, either info ls or info coreutils ls should give you more definitive and comprehensive documentation. Other systems, such as MacOS, may use different versions of the ls command, and may not have the info command; for those systems, use man ls . And ls --help will show a relatively short usage message (117 lines on my system) if you're using the GNU coreutils implementation. And yes, even experts need to consult the documentation now and then. See also this classic joke . | {
"source": [
"https://unix.stackexchange.com/questions/75046",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/36242/"
]
} |
75,196 | $ echo $(( 255 ))
255
$ echo $(( 33 ))
33
$ echo $(( ~33 ))
-34
$ echo $(( ~255 ))
-256
$ and my kernel is: $ uname -a
Linux HOSTNAME 3.2.0-40-generic-pae #64-Ubuntu SMP Mon Mar 25 21:44:41 UTC 2013 i686 i686 i386 GNU/Linux QUESTION: ~ is for negating the number AFAIK. But why does ~33 produce -34 and why does ~255 produce -256 ? | The man page of bash says: ! ~ logical and bitwise negation Signed numbers are usually stored in Two's complement representation: ...
-4 = 1100
-3 = 1101
-2 = 1110
-1 = 1111
0 = 0000
1 = 0001
2 = 0010
3 = 0011
... This means if you take a number like 2, it is bitwise interpreted as 0010. After bitwise negation this becomes 1101, which is the representation of -3. In general, two's complement makes ~x equal to -(x+1), which is exactly why ~33 gives -34 and ~255 gives -256. | {
"source": [
"https://unix.stackexchange.com/questions/75196",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/18526/"
]
} |
75,430 | Is there a .vimrc setting to automatically remove trailing whitespace when saving a file? Ideally (to be safe) I would like to only have this functionality for certain files, e.g. *.rb | This works (in the .vimrc file) for all files: autocmd BufWritePre * :%s/\s\+$//e This works (in the .vimrc file) for just ruby(.rb) files: autocmd BufWritePre *.rb :%s/\s\+$//e | {
"source": [
"https://unix.stackexchange.com/questions/75430",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/10043/"
]
} |
75,483 | I followed these DebianEeePC HowTo InstallUsingStandardInstaller instructions at the Debian Wiki, to write a Debian ISO to my USB. dd if=debian-*-netinst.iso of=/dev/sdX Using sha1sum , I can check the checksums of my downloaded ISO file. How can I check the checksum of the USB stick device, to be sure that the USB stick does not have any problems and that the ISO was copied perfectly? | You can use cmp for checking if everything was copied fine: $ cmp -n `stat -c '%s' debian-X-netinst.iso` debian-X-netinst.iso /dev/sdX This solution does not explicitly compute the checksum of your /dev/sdX - but you don't need to do that because you have already done this for the source of the comparison (i.e. debian-X-netinst.iso ). Doing just a dd if=/dev/sdX | sha1sum may yield a mis-matching checksum just because you get trailing blocks ( /dev/sdX is most likely larger than the iso-file). Via cmp -n you make sure that no trailing bytes on your /dev/sdX are compared. If you are paranoid about the quality of your USB mass storage device you call blockdev --flushbufs /dev/sdX , eject it, re-insert it and then do the comparison - else all or some blocks may just come from the kernels VM (cache) - when in reality perhaps bits on the hardware are screwed up. | {
"source": [
"https://unix.stackexchange.com/questions/75483",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/13099/"
]
} |
75,635 | Is there a shell command that returns the pixel size of an image? I'm trying to produce an animated gif starting from different gifs with different sizes using convert (e.g. convert -delay 50 1.gif 2.gif -loop 0 animated.gif ). The problem is that convert simply overlaps the images using the first image's size as the size of the animated gif, and since they have different sizes the result is a bit of a mess, with bits of the old frames showing under the new ones. | found a solution: identify , part of the imagemagick package, does exactly what I need $ identify color.jpg
> color.jpg JPEG 1980x650 1980x650+0+0 8-bit DirectClass 231KB 0.000u 0:00.000 | {
"source": [
"https://unix.stackexchange.com/questions/75635",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/36271/"
]
} |
75,681 | I mainly work on a mac and ssh/tmux attach to a Linux machine to do my work. I have ssh-agent running on the Linux machine. I have set -g update-environment "SSH_AUTH_SOCK SSH_ASKPASS WINDOWID SSH_CONNECTION XAUTHORITY" in my .tmux.conf . Yet, whenever I re-attach to this session, I have to run tmux setenv SSH_AUTH_SOCK $SSH_AUTH_SOCK in order for new tmux windows to have $SSH_AUTH_SOCK set correctly. I would prefer to not have to do this. Any ideas? Update I think I'm not explaining this well. Here's my shell function to open a shell on a remote machine: sshh () {
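# open a new tmux window named after ${host} and run ssh to that host in it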
tmux -u neww -n ${host} "ssh -Xt ${host} $*"
} When tmux runs this ssh command, $SSH_AUTH_SOCK is not set, even though it is set in my local environment. If I put this in tmux's environment with the setenv command above, everything works fine. My question is, why do I have to run the setenv command at all? Update 2 More information: When I attach to an existing session, $SSH_AUTH_SOCK is not set in the tmux environment (or global environment). % tmux showenv | grep -i auth_sock
-SSH_AUTH_SOCK If I set it manually, things work: % tmux setenv SSH_AUTH_SOCK $SSH_AUTH_SOCK If I detach and re-attach, $SSH_AUTH_SOCK goes back to not being set. | Since I received the Bounty, I'll repost my key-comment for completeness sake - and to avoid setting visitors with the same problem on the wrong track: Tmux will remove Environment Variables Tmux' man page states that update-environment will remove variables "that do not exist in the source environment [...] as if -r was given to the set-environment command". Apparently that what caused the issue. See Chris' response below . However, I still can't imagine how the variable could be absent in the "source environment" and yet be valid in the newly created tmux window... Previous Answer: How SSH forwarding works On the remote machine, take a look at the environment of your shell after establishing the SSH connection: user@remote:~$ env | grep SSH
SSH_CLIENT=68.38.123.35 45926 22
SSH_TTY=/dev/pts/0
SSH_CONNECTION=68.38.123.35 48926 10.1.35.23 22
SSH_AUTH_SOCK=/tmp/ssh-hRNwjA1342/agent.1342 The important one here is SSH_AUTH_SOCK which is currently set to some file in /tmp. If you examine this file, you'll see that it's a Unix domain socket -- and is connected to the particular instance of ssh that you connected in on. Importantly, this changes every time you connect. As soon as you log out, that particular socket file is gone. Now, if you go and reattach your tmux session, you'll see the problem. It has the environment from when tmux was originally launched -- which could have been weeks ago. That particular socket is long since dead. Solution Since we know the problem has to do with knowing where the currently live SSH authentication socket is, let's just put it in a predictable place! In your .bashrc or .zshrc file on the remote machine, add the following: # Predictable SSH authentication socket location.
SOCK="/tmp/ssh-agent-$USER-screen"
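# if this login brought a fresh agent socket, repoint the stable path at it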
if [ -n "$SSH_AUTH_SOCK" ] && [ "$SSH_AUTH_SOCK" != "$SOCK" ]
then
rm -f "$SOCK"
ln -sf "$SSH_AUTH_SOCK" "$SOCK"
export SSH_AUTH_SOCK=$SOCK
fi I don't think you even have to put an 'update-environment command' in your tmux.conf. According to the man page , SSH_AUTH_SOCK is already covered by default. Credit My response is an excerpt of this blog post by Mark 'xb95' Smith who explains the same problem for screen . | {
"source": [
"https://unix.stackexchange.com/questions/75681",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/39095/"
]
} |
75,750 | I used a system information utility to take the model number of a system, and also of the motherboard. DMI System Manufacturer LENOVO
DMI System Product 2306CTO
DMI System Version ThinkPad X230
DMI Motherboard Product 2306CTO Is there a way to get model number, in this case 2306CTO , in Linux? | using the dmidecode | grep -A3 '^System Information' command. There you'll find all information from BIOS and hardware. These are examples on three different machines (this is an excerpt of the complete output): System Information
Manufacturer: Dell Inc.
Product Name: Precision M4700
System Information
Manufacturer: MICRO-STAR INTERANTIONAL CO.,LTD
Product Name: MS-7368
System Information
Manufacturer: HP
Product Name: ProLiant ML330 G6 | {
"source": [
"https://unix.stackexchange.com/questions/75750",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/22558/"
]
} |
75,807 | When performing apt-get update , I get the following error: root@ADS3-Debian6:/home/aluno# apt-get update
Hit http://sft.if.usp.br squeeze Release.gpg
Ign http://sft.if.usp.br/debian/ squeeze/contrib Translation-en
Ign http://sft.if.usp.br/debian/ squeeze/contrib Translation-pt
Ign http://sft.if.usp.br/debian/ squeeze/contrib Translation-pt_BR (...) Get:10 http://security.debian.org squeeze/updates/non-free i386 Packages [14 B]
Fetched 612 kB in 4s (125 kB/s)
Reading package lists... Done
There is no public key available for the following key IDs: 8B48AD6246925553 | The other answers will work, or not, depending on whether or not the key '8B48AD6246925553' is present in the packages they indicate. If you need a key, you have to fetch that key, and the place to get it is a key server (very probably any key server will do): sudo apt-key adv --keyserver keyserver.ubuntu.com --recv-keys 8B48AD6246925553 | {
"source": [
"https://unix.stackexchange.com/questions/75807",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/28160/"
]
} |
75,892 | I am trying to add a public key for installing a program with CPG. But I am pretty new to this but every command I found gave me the same error: gpg --keyserver keyserver.ubuntu.com --recv-keys 94558F59
gpg: requesting key 94558F59 from hkp server keyserver.ubuntu.com
gpg: keyserver timed out
gpg: keyserver receive failed: keyserver error How is this possible it seems that I am behind some kind of blockade which makes it impossible to establish a connection to the key server. I looked into many OP questions and tried all commands I could find but nothing worked. Anyone had this problem before? | This is usually caused by your firewall blocking the port 11371 . You could unblock the port in your firewall. In case you don't have access to the firewall you could: Force it to use port 80 instead of 11371 $ sudo gpg --keyserver hkp://keyserver.ubuntu.com:80 --recv-keys 94558F59 or, alternatively, omitting the port: $ sudo gpg --keyserver hkp://keyserver.ubuntu.com --recv-keys 94558F59 Alternatively Find and open the key from the key server. Copy its contents into a text file. Go to System Tool > Preferences > Software Sources > Authentication > Add key, and select the text file created. Ubuntu 14.04 and later try : Software Center -> Edit -> Software Sources -> Authentication -> Import key file | {
"source": [
"https://unix.stackexchange.com/questions/75892",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/39219/"
]
} |
75,902 | To keep the overview I like to place multiple commands always in the same order and start them automatically together (gradle, git, database, scala-REPL, jboss...) -H (hold) seems to mean that the terminal isn't closed after termination, but how do I terminate such a process willfully? Not at all? In such a way that I can continue to use the terminal. I'm using xubuntu with xfce4-terminal and bash. Is there a better GUI-solution to start up multiple commands, with the ability to continue working in that window/tab? Update: If you don't know these commands: Jboss and gradle are continuously producing output, which you don't want to have intermixed in the same terminal. And sometimes they need to be interrupted with ^C, and restarted. I don't like to reopen an xfce4-term and navigate to the directory I need to act in. Database and scala-REPL are interactive so there is no sense in starting them in the background. My current startup-script just navigates to the desired directories, and opens all tabs in the right order to find them always at the same position, naming every tab for its purpose: xfce4-terminal -T eclipse --working-directory=/home/stefan/oximity -e "/opt/eclipse/eclipse" \
--tab -T arandr --working-directory=/home/stefan/oximity -e "arandr /home/stefan/.screenlayout/oximity.sh" \
--tab -T bash --working-directory=/home/stefan/oximity \
--tab -T gradle --working-directory=/home/stefan/oximity/med \
--tab -T git --working-directory=/home/stefan/oximity/med \
--tab -T mysql --working-directory=/opt/mini/mysql \
--tab -T jboss --working-directory=/opt/mini/jboss \
--tab -T jboss-log --working-directory=/opt/mini/jboss/standalone/log \
--tab -T scala-REPL --working-directory=/home/stefan/proj/mini/forum -e /opt/scala/bin/scala Eclipse and arandr are detached from the shell and run in their own window, so there the -e (execute) param works. I think for the scala-REPL it works since it is the last command in the list. | The -H/-hold option is to keep the terminal emulator Window open once the applications started in it (shell or other) has exited. In that state, nothing more can happen. If you want to start a command as a job of an interactive shell in the xfce4-terminal terminal emulator and keep the shell running and use it interactively after the application has exited, with bash , you can make use of the $PROMPT_COMMAND environment variable, to have xfce-terminal start an interactive shell that starts the given command just before the first prompt. xfce4-terminal \
-T eclipse \
--working-directory=/home/stefan/oximity \
-e 'env PROMPT_COMMAND="unset PROMPT_COMMAND; /opt/eclipse/eclipse" bash' \
\
--tab -T arandr \
--working-directory=/home/stefan/oximity \
-e 'env PROMPT_COMMAND="unset PROMPT_COMMAND; arandr /home/stefan/.screenlayout/oximity.sh" bash' \
\
--tab -T bash \
--working-directory=/home/stefan/oximity \
... That way, the commands are jobs of that shell which means you can suspend them with Ctrl-Z and resume them later with fg/bg as if you had entered them at the prompt of that interactive shell. That assumes though that you don't set the $PROMPT_COMMAND in your ~/.bashrc . Also note that the exit status of the command will not be available in $? . To make it even more like the command was entered at the shell prompt you can even add it to the history list. Like: xfce4-terminal -T /etc/motd -e 'env PROMPT_COMMAND="
unset PROMPT_COMMAND
history -s vi\ /etc/motd
vi /etc/motd" bash' That way, once you exit vi , you can press the Up key to recall that same vi command. An easier way to write it: PROMPT_COMMAND='unset PROMPT_COMMAND; history -s "$CMD"; eval "$CMD"' \
xfce4-terminal --disable-server \
-T /etc/motd -e 'env CMD="vi /etc/motd" bash' \
--tab -T top -e 'env CMD=top bash' The: xfce4-terminal -e 'sh -c "cmd; exec bash"' solution as given in other answers works but has some drawbacks: If you press Ctrl-C while cmd is running, that kills the outer sh since there's only one process group for both sh and cmd . You can't use Ctrl-Z to suspend cmd Like in the $PROMPT_COMMAND approach, the exit status of the command will not be available in $? . You can work around 1 above by doing: xfce4-terminal -e 'sh -c "trap : INT; cmd; exec bash"' Or: xfce4-terminal -e 'sh -ic "cmd; exec bash"' With that latter one, you'll also be able to suspend the process with Ctrl-Z , but you won't be able to use fg/bg on it. You'll be able to continue it in background though by doing a kill -s CONT on the pid of cmd . | {
"source": [
"https://unix.stackexchange.com/questions/75902",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/4485/"
]
} |
75,904 | I heard that FIFOs are named pipes. And they have exactly the same semantics. On the other hand, I think Unix domain socket is quite similar to pipe (although I've never made use of it). So I wonder if they all refer to the same implementation in Linux kernel. Any idea? | UNIX domain sockets and FIFO may share some part of their implementation but they are conceptually very different. FIFO functions at a very low level. One process writes bytes into the pipe and another one reads from it. A UNIX domain socket has similar behaviour as a TCP/IP or UDP/IP socket. A socket is bidirectional and can be used by a lot of processes simultaneously. A process can accept many connections on the same socket and attend several clients simultaneously. The kernel delivers a new file descriptor each time connect(2) or accept(2) is called on the socket. The packets will always go to the right process. On a FIFO, this would be impossible. For bidirectional communication, you need two FIFOs, and you need a pair of FIFOs for each of your clients. There is no way of writing or reading in a selective way, because they are a much more primitive way to communicate. Anonymous pipes and FIFOs are very similar. The difference is that anonymous pipes don't exist as files on the filesystem so no process can open(2) it. They are used by processes that share them by another method. If a process creates pipes and then performs, for example, a fork(2) , its child will inherit its file descriptors and, among them, the pipe. (File descriptors to named pipes/FIFOs can also be passed in the same way.) The UNIX domain sockets, anonymous pipes and FIFOs are similar in the fact they provide interprocess communication using file descriptors, where the kernel handles the system calls and abstracts the mechanism. | {
"source": [
"https://unix.stackexchange.com/questions/75904",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/39113/"
]
} |
75,981 | Red Hat docs say: To see which installed packages on your system have updates available,
use the following command: yum check-update What command must I run to view all available versions for a package installed on my system? Example: yum check-update tells me java6 update #43 is available, but what if I want update #40 ? | This command won't focus specifically on one package, but by using a glob pattern to do the matching you can still see what's available: $ yum list available java\*
java-1.4.2-gcj-compat.i386 1.4.2.0-40jpp.115 installed
java-1.6.0-openjdk.i386 1:1.6.0.0-1.36.1.11.9.el5_9 installed
Available Packages
java-1.4.2-gcj-compat-devel.i386 1.4.2.0-40jpp.115 base
java-1.4.2-gcj-compat-javadoc.i386 1.4.2.0-40jpp.115 base
java-1.4.2-gcj-compat-src.i386 1.4.2.0-40jpp.115 base
java-1.6.0-openjdk.i386 1:1.6.0.0-1.40.1.11.11.el5_9 updates
java-1.6.0-openjdk-demo.i386 1:1.6.0.0-1.40.1.11.11.el5_9 You can make it "smarter" by filtering the output using grep . If your yum is new enough, yum --showduplicates list java\* also lists every available version of each package rather than only the newest one per repository. | {
"source": [
"https://unix.stackexchange.com/questions/75981",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/23877/"
]
} |
75,996 | Is it possible to change the soft - and hard limit of a specific process? In my case, my process is mongod and a lot of web resources tell me to simply execute: ulimit -n <my new value> My current thoughts: How will the command know the limit of the process that I'll be modifying? Won't this modify the whole systems open file limit? I'm guessing that this command only changes the soft limit. So is there a way to increase the hard limit too? | To change the limits of a running process, you may use the utility command prlimit . prlimit --pid 12345 --nofile=1024:1024 What that does internally is to call setrlimit(2) . The man page of prlimit should contain some useful invocation examples. Source: https://sig-io.nl/posts/run-time-editing-of-limits-in-linux/ | {
"source": [
"https://unix.stackexchange.com/questions/75996",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/39269/"
]
} |
76,049 | Everywhere I see someone needing to get a sorted, unique list, they always pipe to sort | uniq . I've never seen any examples where someone uses sort -u instead. Why not? What's the difference, and why is it better to use uniq than the unique flag to sort? | sort | uniq existed before sort -u , and is compatible with a wider range of systems, although almost all modern systems do support -u -- it's POSIX. It's mostly a throwback to the days when sort -u didn't exist (and people don't tend to change their methods if the way that they know continues to work, just look at ifconfig vs. ip adoption). The two were likely merged because removing duplicates within a file requires sorting (at least, in the standard case), and is an extremely common use case of sort. It is also faster internally as a result of being able to do both operations at the same time (and due to the fact that it doesn't require IPC ( Inter-process communication ) between uniq and sort ). Especially if the file is big, sort -u will likely use fewer intermediate files to sort the data. On my system I consistently get results like this: $ dd if=/dev/urandom of=/dev/shm/file bs=1M count=100
100+0 records in
100+0 records out
104857600 bytes (105 MB) copied, 8.95208 s, 11.7 MB/s
$ time sort -u /dev/shm/file >/dev/null
real 0m0.500s
user 0m0.767s
sys 0m0.167s
$ time sort /dev/shm/file | uniq >/dev/null
real 0m0.772s
user 0m1.137s
sys 0m0.273s It also doesn't mask the return code of sort , which may be important (in modern shells there are ways to get this, for example, bash 's $PIPESTATUS array, but this wasn't always true). | {
"source": [
"https://unix.stackexchange.com/questions/76049",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/5032/"
]
} |
76,061 | I have a document with a lot of empty lines. How can I remove them when there are 2 or more together? I tried sed "s/\n\n//" file but it didn't work. No error. | sed '/^$/d' sed is line-oriented, so thinking in terms of "2 or more of a particular byte" works, except when that byte is a newline. Then you have to think of something that works for the entire line. (That command deletes every empty line; to instead squeeze runs of blank lines down to a single one, cat -s file or the classic sed '/^$/N;/\n$/D' will do it.) | {
"source": [
"https://unix.stackexchange.com/questions/76061",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/10043/"
]
} |
76,189 | What is the file with the ~ at the end of the filename for? $ ls # aliased to add flags
-rwxrwxr-x 1 durrantm 2741 May 16 09:28 strip_out_rspec_prep_cmds.sh~*
drwxrwxr-x 13 durrantm 4096 May 16 14:21 ../
-rwxrwxr-x 1 durrantm 2221 May 16 14:58 strip_out_rspec_prep_cmds.sh* This is not the same as .swp files which are there while editing. The two files have quite a few differences and the newer file ( no ~ at the end) has the most recent changes and those changes are not in the older (~) file. Looks like I can delete it? | Typically files ending with a ~ are backups created by editors like emacs , nano or vi . | {
"source": [
"https://unix.stackexchange.com/questions/76189",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/10043/"
]
} |
76,294 | I have /tmp on a separate partition, and mounted with noexec . I am using Debian. The installation of some packages fails, because the post-installation scripts of some packages need to run from /tmp . I was wondering if it would be possible to "hook" a simple script to apt-get , which would be run every time before apt-get , and remount /tmp to exec . And similarly, remount it to noexec after apt-get has finished. | You can use dpkg 's hook system to remount it -- put this in /etc/apt/apt.conf.d/00exectmp : DPkg::Pre-Invoke {"mount -o remount,exec /tmp";};
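// remount /tmp exec before dpkg runs; the Post-Invoke hook below restores noexec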
DPkg::Post-Invoke {"mount -o remount /tmp";}; | {
"source": [
"https://unix.stackexchange.com/questions/76294",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/36112/"
]
} |
76,389 | What's the recommended way of installing python packages on Arch? Searching for them on the AUR and installing them from there (or create a PKGBUILD file to make a package yourself) or using pip ? I started off by installing stuff from pacman and the AUR and don't know if it would be wise to mix with pip packages. | If you don't need the python packages for all users then you can install them in your home like this: pip install --user packagename Installing in your home will not conflict with the package manager. By default pip install --user will install in your "user site" directory. Usually that is something like: /home/lesmana/.local/lib/python3.6/site-packages . The following command will print, among others, your "user site" location: python -m site To customize the install location: PYTHONUSERBASE=$HOME/some/dir pip install --user packagename this will install everything under $HOME/some/dir to run: PYTHONUSERBASE=$HOME/some/dir $HOME/some/dir/bin/progname See the pip manual for more information. if you do want the python package for all users then the best place to install it is /opt . for example like this: PYTHONUSERBASE=/opt/packagedir pip install packagename (note the missing --user ) and to run, as above: PYTHONUSERBASE=/opt/packagedir /opt/packagedir/bin/progname Background explanation: /opt is commonly acknowledged by gnu/linux distributions as the directory where the local user or system administrator can install his own stuff. in other words: the package manager of distributions usually do not touch /opt . this is more or less standardized in the Filesystem Hierarchy Standard For comfort for the users you will still want to write a wrapper script and place it in /bin or /usr/bin . This still bears risk of colliding with the distribution package manager but at least it is just one wrapper script file. So the damage that might be done is minimal. You can name the wrapper script something like local-foo or custom-foo to further minimize the risk of collision with the distribution package manager. Alternatively you can modify PATH to include /opt/bin and place your wrapper script there. But this again requires you to modify a (or some) system files where PATH is defined which again may be overwritten by the distribution package manager. In short: if you want to install for all users then do so in /opt . Where you place the wrapper script for comfort is a judgement call. More Information about /opt and Filesystem Hierarchy Standard: What is the difference between /opt and /usr/local? http://www.pathname.com/fhs/2.2/fhs-3.12.html | {
"source": [
"https://unix.stackexchange.com/questions/76389",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/34167/"
]
} |
76,402 | Why is the command md5sum <<< 'ddd' (output: d6d88f2e50080b9602da53dac1102762 - )
right, and md5sum << 'ddd' not? What does <<< mean? | The <<< starts a “here string”: The string is expanded and fed to the program’s stdin. (In your case, there is not much of expansion happening.) It is equivalent to this: echo ddd | md5sum On the other hand, << starts a here document. All the following lines up to one containing the marker ddd will comprise the input of the program. (You should use a marker that is not likely to appear in your data.) You could achieve the same effect as above like this: md5sum <<END
ddd
END There is one difference between <<END and <<'END' : Without the quotes, any variables, escape sequences etc. in the here document will be expanded as usual. | {
"source": [
"https://unix.stackexchange.com/questions/76402",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/39499/"
]
} |
76,457 | I'm trying to find the speed of a network interface using a file descriptor. It's easy to do it for ethX , just calling cat /sys/class/net/eth0/speed . Unfortunately, this method doesn't work with wireless interfaces. When I call /sys/class/net/wlan0/speed I get an error: invalid argument. So, do you know any /sys/class/net/eth0/speed like analog for WLAN interfaces? | You can use the iwconfig tool to find this info out: $ iwconfig wlan0
wlan0 IEEE 802.11bg ESSID:"SECRETSSID"
Mode:Managed Frequency:2.462 GHz Access Point: 00:10:7A:93:AE:BF
Bit Rate=48 Mb/s Tx-Power=14 dBm
Retry long limit:7 RTS thr:off Fragment thr:off
Power Management:off
Link Quality=55/70 Signal level=-55 dBm
Rx invalid nwid:0 Rx invalid crypt:0 Rx invalid frag:0
Tx excessive retries:0 Invalid misc:0 Missed beacon:0 If you want the bit rate from /sys , directly try this: $ cat /sys/class/net/wlan0/wireless/link
51 Or from /proc : $ cat /proc/net/wireless
Inter-| sta-| Quality | Discarded packets | Missed | WE
face | tus | link level noise | nwid crypt frag retry misc | beacon | 22
wlan0: 0000 56. -54. -256 0 0 0 0 0 0 NOTE: The value for the link in the 2nd example is 56, for e.g. I believe the MB/s is a calculated value, so it won't be stored anywhere specifically for the wlan0 device. I think it's taking the aggregate bits transferred over the interface and dividing it by the time it took said data to be transferred. One additional way to get this information is using the tool iw . This tool is a nl80211 based CLI configuration utility for wireless devices. It should be on any recent Linux distro. $ iw dev wlan0 link
Connected to 00:10:7A:93:AE:BF (on wlan0)
SSID: SECRETSSID
freq: 2462
RX: 89045514 bytes (194863 packets)
TX: 34783321 bytes (164504 packets)
signal: -54 dBm
tx bitrate: 48.0 MBit/s This also shows the amount of sent and received packets (RX/TX). | {
"source": [
"https://unix.stackexchange.com/questions/76457",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/39538/"
]
} |
76,481 | If there's a "First World Problems" for scripting, this would be it. I have the following code in a script I'm updating: if [ $diffLines -eq 1 ]; then
dateLastChanged=$(stat --format '%y' /.bbdata | awk '{print $1" "$2}' | sed 's/\.[0-9]*//g')
mailx -r "Systems and Operations <sysadmin@[redacted].edu>" -s "Warning Stale BB Data" jadavis6@[redacted].edu <<EOI
Last Change: $dateLastChanged
This is an automated warning of stale data for the UNC-G Blackboard Snapshot process.
EOI
else
echo "$diffLines have changed"
fi The script sends email without issues, but the mailx command is nested within an if statement so I appear to be left with two choices: Put EOI on a new line and break indentation patterns or Keep with indentation but use something like an echo statement to get mailx to suck up my email. I'm open to alternatives to heredoc, but if there's a way to get around this it's my preferred syntax. | You can change the here-doc operator to <<- . You can then indent both the here-doc and the delimiter with tabs: #! /bin/bash
cat <<-EOF
indented
EOF
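# (<<- strips leading tab characters from the body lines and from the EOF delimiter itself)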
echo Done Note that you must use tabs , not spaces to indent the here-doc. This means the above example won't work copied (Stack Exchange replaces tabs with spaces). There can not be any quotes around the first EOF delimiter, else parameter expansion, command substitution, and arithmetic expansion are not in effect. | {
"source": [
"https://unix.stackexchange.com/questions/76481",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/34520/"
]
} |
76,484 | Is there any explanation/history behind the name of the command dmesg (which prints out some kernel messages)? | I think it stands for "diagnostic messages" , as per the older 1 man page (referenced here too). dmesg - system diagnostic messages
Dmesg looks in a system buffer for recent kernel diagnostic messages and reproduces them on the standard output One of the oldest references appears to be a man page revision by Kirk McKusick dating back from 1985. 1: the link doesn't always work - no idea why... I'm attaching a screenshot though you should still be able to access the page via Google's cache. | {
"source": [
"https://unix.stackexchange.com/questions/76484",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/4319/"
]
} |
76,505 | What is a portable 1 way for a (zsh) script to determine its absolute path? On Linux I use something like mypath=$(readlink -f $0) ...but this is not portable. (E.g., readlink on darwin does not recognize the -f flag, nor has any equivalent.) (Also, using readlink for this is, admittedly, a pretty obscure-looking hack.) What's a more portable way? 1 Portable across OSs in the Unix family, that is. | In zsh you can do the following: mypath=${0:a} Or, to get the directory in which the script resides: mydir=${0:a:h} See the Zsh documentation on history expansion modifiers , visible locally in man zshexpn or with info -f zsh -n Modifiers if the Info documentation is installed. | {
"source": [
"https://unix.stackexchange.com/questions/76505",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/10618/"
]
} |
76,566 | For example: "\e[1;5C"
"\e[Z"
"\e-1\C-i" I only know bits and pieces, like \e stands for escape and C- for Ctrl , but what are these numbers ( 1 ) and letters ( Z )? What are the ; , [ and - signs for? Is there only trial and error, or is there a complete list of bash key codes and an explanation of their syntax? | Those are sequences of characters sent by your terminal when you press a given key. Nothing to do with bash or readline per se, but you'll want to know what sequence of characters a given key or key combination sends if you want to configure readline to do something upon a given key press. When you press the A key, generally terminals send the a (0x61) character. If you press <Ctrl-I> or <Tab> , then generally send the ^I character also known as TAB or \t (0x9). Most of the function and navigation keys generally send a sequence of characters that starts with the ^[ (control-[), also known as ESC or \e (0x1b, 033 octal), but the exact sequence varies from terminal to terminal. The best way to find out what a key or key combination sends for your terminal, is run sed -n l and to type it followed by Enter on the keyboard. Then you'll see something like: $ sed -n l
^[[1;5A
\033[1;5A$ The first line is caused by the local terminal echo done by the terminal device (it may not be reliable as terminal device settings would affect it). The second line is output by sed . The $ is not to be included, it's only to show you where the end of the line is. Above that means that Ctrl-Up (which I've pressed) send the 6 characters ESC , [ , 1 , ; , 5 and A (0x1b 0x5b 0x31 0x3b 0x35 0x41) The terminfo database records a number of sequences for a number of common keys for a number of terminals (based on $TERM value). For instance: TERM=rxvt tput kdch1 | sed -n l Would tell you what escape sequence is send by rxvt upon pressing the Delete key. You can look up what key corresponds to a given sequence with your current terminal with infocmp (here assuming ncurses infocmp): $ infocmp -L1 | grep -F '=\E[Z'
back_tab=\E[Z,
key_btab=\E[Z, Key combinations like Ctrl-Up don't have corresponding entries in the terminfo database, so to find out what they send, either read the source or documentation for the corresponding terminal or try it out with the sed -n l method described above. | {
"source": [
"https://unix.stackexchange.com/questions/76566",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/22310/"
]
} |
76,710 | I'm trying to get programs to log in local time for my own sanity. I have updated my timezone with: dpkg-reconfigure tzdata But the result of that command is: Current default time zone: 'Australia/Adelaide'
Local time is now: Mon May 20 03:09:52 UTC 2013.
Universal Time is now: Mon May 20 03:09:52 UTC 2013. Notice the UTC in Local time Any reason why this may be? I have done a lot of Googling but my problem seems different to all of them :( Here are some more details: # cat /etc/timezone
Australia/Adelaide
# date
Mon May 20 03:41:06 UTC 2013
# export TZ='Australia/Adelaide'; date
Mon May 20 13:16:11 CST 2013 Setting export TZ='Australia/Adelaide'; in my /etc/profile makes date work by default in a bash session but does not change the system log date (after restarting the service) Edit: # ls -l /etc/localtime
lrwxrwxrwx 1 root root 20 May 10 14:48 /etc/localtime -> /usr/share/zoneinfo/
# ls /etc/localtime/
Adelaide Chile GMT Japan PST8PDT Universal
Africa Cuba GMT+0 Kwajalein Pacific W-SU
America EET GMT-0 Libya Poland WET
Antarctica EST GMT0 MET Portugal Zulu
Arctic EST5EDT Greenwich MST ROC iso3166.tab
Asia Egypt HST MST7MDT ROK localtime
Atlantic Eire Hongkong Mexico Singapore localtime.dpkg-new
Australia Etc Iceland Mideast SystemV posix
Brazil Europe Indian NZ Turkey posixrules
CET Factory Iran NZ-CHAT UCT right
CST6CDT GB Israel Navajo US zone.tab
Canada GB-Eire Jamaica PRC UTC Answer: Worked it out thanks to jamzed. for some reason I had /etc/localtime as a symlink... the IT Guy here set up the server using Turnkey 12 so maybe that was the problem. # mv /etc/localtime /etc/localtime.old
# cp /usr/share/zoneinfo/Australia/Adelaide /etc/localtime
# date
Thu May 23 09:36:17 CST 2013 | Try this way: $ sudo cp /usr/share/zoneinfo/Australia/Adelaide /etc/localtime | {
"source": [
"https://unix.stackexchange.com/questions/76710",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/39687/"
]
} |
76,717 | How can I launch a process in background and check when it ends within a bash script?
My idea is a script like this: launch backgroundprocess &
while [ Process is running ];do
echo "PROCESS IS RUNNING\r"
done;
echo "PROCESS TERMINATED" | The key is the "wait" command: #!/bin/bash
/my/process &
/another/process &
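# wait (with no arguments) blocks until every background child has exited.
# To poll instead, as the question's loop sketches, one approach is:
#   pid=$!
#   while kill -0 "$pid" 2>/dev/null; do echo "PROCESS IS RUNNING"; sleep 1; done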
wait
echo "All processes done!" | {
"source": [
"https://unix.stackexchange.com/questions/76717",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/39709/"
]
} |
76,821 | In a bash shell, when I have to remove multiple files in the same directory, I currently need to do something like this: rm /some/path/file1 /some/path/file2 Is there a shorter way to write this so I don't have to retype /some/path/ without using a variable or changing the working directory? Perhaps something similar to: rm /some/path/(file1,file2) | You're close: rm /some/path/{file1,file2} or even rm /some/path/file{1,2} Related, and supported by other shells, is a pattern like rm /some/path/file[12] The first two are expanded to two explicit file name arguments; the third is a pattern against which all files in /some/path are matched. | {
"source": [
"https://unix.stackexchange.com/questions/76821",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/-1/"
]
} |
76,870 | I want to cat a file in current folder and all files in all subfolders (and subsubfolders). Here is my directory structure $ tree
.
├── f
│ └── foo
└── yo I want to cat foo and yo . I've tried this command but it did not work: cat */* It just cats foo . | try: find . -type f -exec cat {} + The glob */* only matches paths exactly one directory deep, so it expands to f/foo and misses the top-level yo , while find -type f recurses to any depth and hands every regular file to cat . | {
"source": [
"https://unix.stackexchange.com/questions/76870",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/19718/"
]
} |
76,875 | So I have a local repo and a server with git installed and a git user. I want to send (push) the repo to the server. When I simply login to the server via ssh I have to specify my .pem file and a passphrase. The following: sudo git push [email protected]:somerepo.git throws this error: Permission denied (publickey).
fatal: The remote end hung up unexpectedly Another attempt another error: git push ssh://[email protected]:somerepo.git
ssh: Could not resolve hostname : Name or service not known
fatal: The remote end hung up unexpectedly | Two separate problems here. First, drop the sudo : it makes git (and therefore ssh) run as root, so root's keys and ssh configuration are used instead of yours — that is where Permission denied (publickey) comes from. Second, the two remote syntaxes can't be mixed: the scp-style form is user@host:path , while an ssh:// URL needs a slash before the path, as in git push ssh://git@yourserver/somerepo.git — with a colon, the part after it is taken for a port number, so the URL no longer parses as intended, hence the "Could not resolve hostname" error. To have git use your .pem key automatically, add a host entry to ~/.ssh/config (the names below are placeholders): Host myserver
HostName <your server address>
User git
IdentityFile ~/path/to/key.pem and then push with git push myserver:somerepo.git | {
"source": [
"https://unix.stackexchange.com/questions/76875",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/39750/"
]
} |
77,007 | I wrote a script to move some files form one folder to another folder
but I got the following error, I checked 2 folders and notice for 1 folder there are such files and another there is no such files, but why all of them shows "mv cannot stat No such files or directory" mv: cannot stat `/home/esolve/project/capture/tcp_50x50/dest_folder/194.199.68.165_tcp.folder/data/*': No such file or directory
mv: cannot stat `/home/esolve/project/capture/tcp_50x50/dest_folder/194.42.17.124_tcp.folder/data/*': No such file or directory
mv: cannot stat `/home/esolve/project/capture/tcp_50x50/dest_folder/195.113.161.13_tcp.folder/data/*': No such file or directory
mv: cannot stat `/home/esolve/project/capture/tcp_50x50/dest_folder/203.159.127.3_tcp.folder/data/*': No such file or directory
mv: cannot stat `/home/esolve/project/capture/tcp_50x50/dest_folder/212.199.61.205_tcp.folder/data/*': No such file or directory
mv: cannot stat `/home/esolve/project/capture/tcp_50x50/dest_folder/212.51.218.235_tcp.folder/data/*': No such file or directory
mv: cannot stat `/home/esolve/project/capture/tcp_50x50/dest_folder/213.73.40.105_tcp.folder/data/*': No such file or directory
mv: cannot stat `/home/esolve/project/capture/tcp_50x50/dest_folder/41.225.7.4_tcp.folder/data/*': No such file or directory
mv: cannot stat `/home/esolve/project/capture/tcp_50x50/dest_folder/83.230.127.122_tcp.folder/data/*': No such file or directory
[esolve@kitty tcp_50x50]$ ls /home/wgong/project/capture/tcp_50x50/dest_folder/194.199.68.165_tcp.folder/
[esolve@kitty tcp_50x50]$ ls /home/wgong/project/capture/tcp_50x50/dest_folder/203.159.127.3_tcp.folder/data/
129.88.70.226 132.187.230.1 138.96.116.22 155.185.54.250 192.38.109.144 193.136.227.163 193.175.135.61 195.113.161.13 83.230.127.122
130.104.72.200 132.227.62.122 147.83.29.232 156.17.10.52 192.42.43.22 193.137.173.218 193.205.215.74 212.199.61.205
131.130.69.164 132.252.152.194 148.81.140.193 157.181.175.249 192.43.193.71 193.144.21.131 193.226.19.30 212.51.218.235
131.188.44.102 134.151.255.180 152.66.245.162 160.78.253.31 193.1.170.136 193.145.46.243 194.199.68.165 213.73.40.105
131.254.208.10 138.48.3.203 152.81.47.4 192.114.4.3 193.136.166.56 193.166.160.98 194.42.17.124 41.225.7.4 the script is: list=`ls dest_folder`
cd dest_folder
cwd=`pwd`
for folder in $list;do
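# BUG: the single quotes around '/data/*' on the next line keep the shell from expanding the *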
mv ${cwd}/${folder}'/data/*' ${cwd}/${folder}
done I ran it in /home/esolve/project/capture/tcp_50x50/ . | mv ${cwd}/${folder}'/data/*' ${cwd}/${folder} The quotes ( ' ) there prevent the shell from doing globbing. The * is being passed literally to the mv command, which fails since it doesn't find files called * in the directories indicated. Change this to: mv "${cwd}/${folder}/data"/* "${cwd}/${folder}" (Double quotes to prevent problems if you ever have a directory name with spaces in it. * outside the quotes.) You'll still get the errors for the empty directories though. (Same sort of reason: if the shell doesn't find a match for the pattern, it passes the pattern itself as an argument to the command. In bash, shopt -s nullglob makes an unmatched pattern expand to nothing instead.) | {
"source": [
"https://unix.stackexchange.com/questions/77007",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/38026/"
]
} |
77,077 | What are the practical uses of both pushd and popd when there is an advantage of using these two commands over cd and cd - ? EDIT : I'm looking for some practical examples of uses for both of these commands or reasons for keeping stack with directories (when you have tab completion, cd - , aliases for shortening cd .. , etc.). | pushd , popd , and dirs are shell builtins which allow you manipulate the directory stack . This can be used to change directories but return to the directory from which you came. For example start up with the following directories: $ pwd
/home/saml/somedir
$ ls
dir1 dir2 dir3 pushd to dir1 $ pushd dir1
~/somedir/dir1 ~/somedir
$ dirs
~/somedir/dir1 ~/somedir dirs command confirms that we have 2 directories on the stack now. dir1 and the original dir, somedir . NOTE: Our "current" directory is ~/somedir/dir1 . pushd to ../dir3 (because we're inside dir1 now) $ pushd ../dir3
~/somedir/dir3 ~/somedir/dir1 ~/somedir
$ dirs
~/somedir/dir3 ~/somedir/dir1 ~/somedir
$ pwd
/home/saml/somedir/dir3 dirs shows we have 3 directories in the stack now. dir3 , dir1 , and somedir . Notice the direction. Every new directory is getting added to the left. When we start popping directories off, they'll come from the left as well. manually change directories to ../dir2 $ cd ../dir2
$ pwd
/home/saml/somedir/dir2
$ dirs
~/somedir/dir2 ~/somedir/dir1 ~/somedir Now start popping directories $ popd
~/somedir/dir1 ~/somedir
$ pwd
/home/saml/somedir/dir1 Notice we popped back to dir1 . Pop again... $ popd
~/somedir
$ pwd
/home/saml/somedir And we're back where we started, somedir . Might get a little confusing, but the head of the stack is the directory that you're currently in. Hence when we get back to somedir , even though dirs shows this: $ dirs
~/somedir Our stack is in fact empty. $ popd
bash: popd: directory stack empty | {
"source": [
"https://unix.stackexchange.com/questions/77077",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/20334/"
]
} |
77,127 | rm -rf /some/path/* deletes all non-hidden files in that dir (and subdirs). rm -rf /some/path/.* deletes all hidden files in that dir (but not subdirs) and also gives the following error/warning: rm: cannot remove directory: `/some/dir/.'
rm: cannot remove directory: `/some/dir/..' What is the proper way to remove all hidden and non-hidden files and folders recursively in a target directory without receiving the warning/error about . and .. ? | You could always send error messages to /dev/null rm -rf /some/path/.* 2> /dev/null You could also just rm -rf /some/path/
mkdir /some/path/ ...then you won't have to bother with hidden files in the first place. | {
"source": [
"https://unix.stackexchange.com/questions/77127",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/21844/"
]
} |
77,277 | I am writing a bash script to look for a file if it doesn't exist then create it and append this to it: Host localhost
ForwardAgent yes So "line then new line 'tab' then text" I think its a sensitive format.
I know you can do this: cat temp.txt >> data.txt But it seems weird since its two lines. Is there a way to append that in this format: echo "hello" >> greetings.txt | # possibility 1:
echo "line 1" >> greetings.txt
echo "line 2" >> greetings.txt
# possibility 2:
echo "line 1
line 2" >> greetings.txt
# possibility 3:
cat <<EOT >> greetings.txt
line 1
line 2
EOT
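# possibility 3b — closest to the asker's case: printf expands \n and \t,
# so the tab-indented second line can be written directly
# (~/.ssh/config is an assumed target path):
printf 'Host localhost\n\tForwardAgent yes\n' >> ~/.ssh/config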
# possibility 4 (more about input than output):
arr=( 'line 1' 'line 2' );
printf '%s\n' "${arr[@]}" >> greetings.txt If sudo (other user privileges) is needed to write to the file, use this: # possibility 1:
echo "line 1" | sudo tee -a greetings.txt > /dev/null
# possibility 3:
sudo tee -a greetings.txt > /dev/null <<EOT
line 1
line 2
EOT | {
"source": [
"https://unix.stackexchange.com/questions/77277",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/37692/"
]
} |
77,296 | When handling log files, some end up as gzipped files thanks to logrotate and others not. So when you try something like this: $ zcat * you end up with a command line like zcat xyz.log xyz.log.1 xyz.log.2.gz xyz.log.3.gz and then with: gzip: xyz.log: not in gzip format Is there a tool that will take the magic bytes, similar to how file works, and use zcat or cat depending on the outcome so that I can pipe the output to grep for example? NB: I know I can script it, but I am asking whether there is a tool out there already. | Try it with -f or --force : zcat -f -- * Since zcat is just a simple script that runs exec gzip -cd "$@" with long options that would translate to exec gzip --stdout --decompress "$@" and, as per the man gzip (emphasize mine): -f --force Force compression or decompression even if the file has multiple links
or the corresponding file already exists, or if the compressed data is
read from or written to a terminal. If the input data is not in a format
recognized by gzip, and if the option --stdout is also given, copy the
input data without change to the standard output: let zcat behave as cat . Also: so that I can pipe the output to grep for example You could use zgrep for that: zgrep -- PATTERN * | {
"source": [
"https://unix.stackexchange.com/questions/77296",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/5462/"
]
} |
77,356 | If I run a command which requires root privileges with sudo , I will be asked to enter the current user's password. After that for a while, if I execute the same sort of commands with sudo , I won't be asked for the password again. So my guess is that the password is cached somewhere until it expires. How long is this expiration time? Is it possible to configure it? | man 5 sudoers informs us that there is an option timestamp_timeout : timestamp_timeout Number of minutes that can elapse before sudo
will ask for a passwd again. The timeout may include a fractional
component if minute granularity is insufficient, for example 2.5. The default is 5. Set this to 0 to
always
prompt for a password. If set to a value less than 0 the user’s time stamp will never expire. This can be used
to allow users to create or delete their own time stamps via “sudo -v” and “sudo -k” respectively. So yes, it can be configured via /etc/sudoers , and by default it expires after 5 minutes. Also, please remember to use visudo to make any edits to /etc/sudoers . When saving your edits visudo will run validity checks before actually overwriting the sudoers file. This protects you from a painful recovery process if you lock yourself out of sudo access. | {
"source": [
"https://unix.stackexchange.com/questions/77356",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/10867/"
]
} |
77,377 | We have a regular job that does du summaries of a number of subdirectories, picking out worst offenders, and use the output to find if there are things that are rapidly rising to spot potential problems. We use diff against snapshots to compare them. There is a top level directory, with a number (few hundred) of subdirectories, each of which may contain 10's of thousands of files each (or more). A " du -s " in this context can be very IO aggressive, causing our server to bail its cache and then massive IO spikes which are a very unwelcome side affect. What strategy can be used to get the same data, without the unwanted side effects? | Take a look at ionice . From man ionice : This program sets or gets the io scheduling class and priority for a program. If no arguments or just -p is given, ionice will query the current io scheduling class and priority for that process. To run du with the "idle" I/O class, which is the lowest priority available, you can do something like this: ionice -c 3 du -s This should stop du from interfering with other process' I/O. You might also want to consider renicing the program to lower its CPU priority, like so: renice -n 19 "$duPid" You can also do both at initialisation time: nice -n 19 ionice -c 3 du | {
"source": [
"https://unix.stackexchange.com/questions/77377",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/3201/"
]
} |
77,395 | If I run a command like grep -rl test . | xargs vim I get a warning "Vim: Warning: Input is not from a terminal." But I am still able to edit the files. Why the warning? | Because Vim is invoked from inside the pipeline , the stdin is connected to the previous pipeline's output, not the terminal. As an interactive command, Vim needs to receive its input from the terminal. Better avoid the pipe, e.g. via $ vim $(grep -rl test .) or from inside Vim: :args `grep -rl test .` | {
"source": [
"https://unix.stackexchange.com/questions/77395",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/6973/"
]
} |
77,453 | Can someone tell me what I'm doing wrong, what this is, or how to fix it? I'm running Fedora 18 and getting the error shown [root@servername /]# find . -name ngirc
find: `./run/user/1000/gvfs': Permission denied
[root@servername /]#
[root@thinktank /]# pwd
/
[root@thinktank /]# ls -ltr ./run/user/1000
ls: cannot access ./run/user/1000/gvfs: Permission denied
total 0
d?????????? ? ? ? ? ? gvfs
lrwxrwxrwx. 1 root root 17 May 28 12:30 X11-display -> /tmp/.X11-unix/X0
drwx------. 2 kal kal 120 May 28 12:30 keyring-QjDw4b
drwx------. 2 kal kal 40 May 28 12:30 gvfs-burn
drwx------. 2 kal kal 60 May 28 12:30 krb5cc_5f0bcaf94f916d6b61696e2251a4dbb3
drwx------. 2 kal kal 60 May 28 18:25 dconf | You aren't doing anything wrong, and there's nothing to fix. /run/user/$uid/gvfs or ~$user/.gvfs is the mount point for the FUSE interface to GVFS . GVFS is a virtual filesystem implementation for Gnome, which allows Gnome applications to access resources such as FTP or Samba servers or the content of zip files like local directories. FUSE is a way to implement filesystem drivers as user code (instead of kernel code). The GVFS-FUSE gateway makes GVFS filesystem drivers accessible to all applications, not just the ones using Gnome libraries. Managing trust boundaries with FUSE filesystems is difficult, because the filesystem driver is running as an unprivileged user, as opposed to kernel code for traditional filesystems. To avoid complications, by default, FUSE filesystems are only accessible to the user running the driver process. Even root doesn't get to bypass this restriction. If you're searching for a file on local filesystems only, pass -xdev to find . If you want to traverse multiple local filesystems, enumerate them all. find / /home -xdev -name ngirc If the file has been present since yesterday, you may try locate ngirc instead ( locate searches through a file name database which is typically updated nightly). If you do want to traverse the GVFS mount points, you'll have to do so as the appropriate user. find / -name ngirc -path '/run/user/*/gvfs' -prune -o -path '/home/*/.gvfs' -prune -o -name ngirc -print
for d in /run/user/*; do su "${d##*/}" -c "find $d -name ngirc -print"; done | {
"source": [
"https://unix.stackexchange.com/questions/77453",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/40039/"
]
} |